Pat Shuff
Oracle Linux on Amazon AWS
Clicking into the EC2 console allows us to look at our existing instances as well as create new ones.
Clicking on the "Launch Instance" button allows us to start the virtual machine instance creation. We are given a choice of sources for the virtual machine. The default screen does not offer Oracle Linux as an option so we have to go to the commercial or community screens to get OEL 6.x as an option.
It is important to note that the commercial version has a surcharge on an hourly basis. If we search on Oracle Linux we get a list of different operating system versions and database and WebLogic installations. The Orbitera version in the commercial catalog adds a hefty surcharge of $0.06 per hour for our instance and gets more expensive on an hourly basis as the compute shapes get larger. This brings the cost to 7x that of the Oracle Compute Service and 5x that of the Microsoft Azure instance.
The community version allows us to use the same operating system configuration without the surcharge. The drawbacks to this option are the trustworthiness of the configuration as well as repeatability. The key advantage of the commercial version is that it has version control and will be there a year from now. The community version might or might not be there in a year, and an image that you based a virtual machine on a year ago might or might not still be available when you need to recreate it. On the flip side, you can find significantly older versions of the operating system in the community catalog that you can not find in the commercial one.
Given that I am cheap (and funding this out of my own pocket) we will go through the community version to reduce the hourly cost. The main problem with this option is that we installed Oracle Linux 6.4 when installing on Oracle Compute Cloud Service and Microsoft Azure. On Amazon AWS we have to select Oracle Linux 6.5 since the 6.4 version is not available. We could select 6.6 and 6.3 but I wanted to get as close to 6.4 as possible. Once we select the OS version, we then have to select a processor shape.
Note that the smaller memory options are not available for our selection. We have to scroll down to the m3.medium shape with 1 virtual processor and 3.75 GB of RAM as the smallest configuration.
The configuration screen allows us to launch the virtual machine into a virtual network configuration as well as different availability zones. We are going to accept the defaults on this screen.
The disk selection page allows us to configure the root disk size as well as alternate disks to attach to the services. By default the disk selection for our operating system is 40 GB and traditional spinning disk. You can select a higher speed SSD configuration but there are additional hourly charges for this option.
The tags screen is used to help you identify this virtual machine with projects, programs, or geographical affiliations. We are not going to do anything on this screen and skip to the port configuration screen.
The port screen allows us to open up security ports to communicate with the operating system. Note that this is an open interface that allows us to open any ports that we desire and provide access to ports like 80 and 443 to provide access to web services. We can create white lists or protected networks when we create access points or leave everything open to the internet.
We are going to leave port 22 as the only port open. If we did open other ports we would need to change the iptables configuration on the os instance. We can review the configuration and launch the instance on the next screen.
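That instance-side change can be sketched as follows. This is a sketch assuming the stock iptables service on the OEL 6 image; the commands are emitted as strings here so the script is safe to run anywhere, but on the instance they would be run as root without the echo wrapper.

```shell
# Build the iptables commands that would be run as root on the instance.
open_port_cmd() {
    # Insert an ACCEPT rule for the given TCP port ahead of the default REJECT.
    echo "iptables -I INPUT -p tcp --dport $1 -j ACCEPT"
}

open_port_cmd 80
open_port_cmd 443
# 'service iptables save' persists the running rules to /etc/sysconfig/iptables
echo "service iptables save"
```

The same pattern works for any other port opened in the AWS security group; the security group and iptables both have to allow the traffic.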
When we create the instance we have to select a public and private key to access the virtual machine. You had to previously create this key pair through the AWS console.
Once we select the key we get a status update of the virtual machine creation.
If we go to the EC2 instance we can follow the status of our virtual machine. In this screen shot we see that the instance is initializing.
We can now connect using putty or ssh to attach to the virtual machine. It is important to note that Amazon uses a different version of the private key. They use the pem extension which is just a different version of the ppk extension. There are tools to convert the two back and forth but we do need to select a different format when loading the private key using putty on Windows. By default the key extension that it looks for is ppk. We need to select all files to find the pem keys. If you follow the guidelines from Amazon you can convert the pem key to a ppk key and access the instance as was done previously.
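The two paths can be sketched as below; the key file name and IP address are hypothetical, and the command-line puttygen (from the putty-tools package on Linux) does the same conversion the Windows PuTTYgen GUI does through its Load and Save buttons.

```shell
KEY=mykey.pem                 # hypothetical key file downloaded from AWS

# On Linux or a Mac the .pem file works directly with ssh:
SSH_CMD="ssh -i $KEY root@203.0.113.10"
echo "$SSH_CMD"

# For putty on Windows, convert the .pem into a .ppk first:
#   puttygen mykey.pem -o mykey.ppk
```

Either way the key material is the same; only the container format differs between the pem and ppk files.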
It is important to note that you can not login as oracle but have to login as root. To enable logging in as oracle you will need to copy the public key into the .ssh directory in the /home/oracle directory. This is a little troubling having the default login as root and having to enable and edit files to disable this. A security model that allows you to login as oracle or opc and sudo to root is much preferable.
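The edit amounts to copying the authorized key from root's account into the oracle account. A sketch to run as root on the instance; the function is only defined here, not called, and the oinstall group name is an assumption that should be checked on the image with `id oracle`.

```shell
# Define (but do not run here) the steps to enable direct oracle logins.
enable_oracle_login() {
    mkdir -p /home/oracle/.ssh
    cp /root/.ssh/authorized_keys /home/oracle/.ssh/authorized_keys
    # The group name (oinstall) is an assumption; verify with 'id oracle'.
    chown -R oracle:oinstall /home/oracle/.ssh
    chmod 700 /home/oracle/.ssh
    chmod 600 /home/oracle/.ssh/authorized_keys
}
```

The permissions matter: sshd refuses keys in a .ssh directory that is group or world writable.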
In summary, the virtual machine creation is very similar to the Oracle Compute Cloud Service and Microsoft Azure Cloud Service. The Amazon instance was a little more difficult to find. Oracle installations are not the sweet spot in AWS and other Linux instances are preferred. The ssh keys are a little unusual in that the EC2 instance wants a different key format, and if Amazon generates the keys for you a conversion utility is required to get them into standard format. The cost of the commercial implementation drives the price to nearly cost prohibitive levels. The processor and memory configurations are similar to the other two cloud providers, but when I tried a 1 processor and 1 GB instance it failed due to insufficient resources. We had to fall back to a much larger memory footprint for the operating system to boot.
All three cloud vendors need to work on operating system selection. When you search for Oracle Linux you not only get various versions of the operating system but database and weblogic server configurations as well. The management consoles are vastly different between the three cloud vendors as well. It is obvious what the background and focus is of the three companies. Up next, using bitnami to automate service installations on top of a base operating system.
Oracle Linux on Microsoft Azure
From this console we can either create a Virtual Machine or Virtual Machine (classic). From the main console click on the "Virtual Machine" button on the left side of the screen. We will walk down this path rather than the classic mode for this installation. After you click on the virtual machine menu item and the "+ Add" button at the top left we can select the operating system type.
From this screen we can search for an installation type. If we type "oracle" in the search field we get over 20 entries provided by the Oracle Corporation. The first few are Linux only installations. The next few are Database and Java/WebLogic installations on Linux or Windows.
For our test, we will select the Oracle Linux 6.4 to match what we did in the previous blog. With this selection we get another screen that provides links to an informational page and more information. Notice that the deployment model only allows us to create the virtual machine in the classic mode. Had we gone down the Virtual machine (classic) menu item at the start we would end up in the same place with this selection. Clicking on the "create" button takes us to the next screen where we define the properties of the virtual machine. The basic information that we need are the virtual machine name, a user name to log in as, basic security access information (password or ssh key access), and compute size.
Rather than providing a username and password to access the virtual machine, we are going to select an ssh key. We will use the same key that we used in the last blog and upload a text copy of the public ssh key that we created with puttygen. It is important to note that we can create the virtual machine with a password, but we don't recommend it because security becomes a huge issue. The password that you enter is not checked for strength or viability. I was able to type in the traditional "Welcome1" password and the system accepted it as viable, even though it is a simple password that is easily found in a dictionary.
When we click on the Pricing Tier we can select the compute shape. When we first click on this we get three recommended choices. These choices are all single core options with a small memory footprint. It is important to note that all are IO limited at 500 IOs per second and all have the option for a load balancer to be put in front of the virtual machine. The key difference is the processor type. The A processor is a lower speed, older processor that does not have as much power. The D processor is a higher speed, newer processor. Both options are lower clock speeds than the Oracle compute shape which is a 3.0 GHz Xeon processor. The memory configuration is significantly lower with 1.75 or 3.5 GB when compared to 15 GB or 30 GB offered by the Oracle Compute Service.
If we want to explore more options we can click on the "View All" option at the top right. This allows us to look at over 60 different configurations that have different core counts, memory configurations, and disk options.
For our exercise we are going to go with the recommended A1 Standard configuration with 1 virtual processor and 1.75 GB of RAM.
The final steps that we need to look at are the network, disk, and availability zone configurations. To be honest, we could accept the defaults and not configure these options.
If we look at the optional configurations, we can configure an availability zone. This allows us to replicate services between user defined zones. For this instance we are going to use the standalone virtual machine configuration.
The next configuration option allows us to define the local network configurations, which subnet it will belong to, and the server name. We recommend not changing the subnet information because this could cause issues if you enter the wrong network or enter an invalid subnet that does not have a dhcp server.
We can select a reserved ip address rather than a dynamically allocated ip address. It is important to enter this information correctly because you could step on an existing server on the internet and not be able to get to your virtual machine. It is also important to map the static ip address to the domain name that you have reserved through naming servers on the internet. We will use the default dhcp rather than use a reserved ip address.
We could attach alternate disks to this instance. For example, if we wanted to pre-load the Oracle database binary, we could mount the disk as a secondary disk and attach it to our instance. We will not do this as part of our exercise but go with the default boot disk to show how to create a basic virtual image.
We can also configure the ports that are open to the virtual machine. It is important to note that by default ssh is open and available. We could open port 80 or port 443 if we wanted to provide web access to this machine. We would also have to change the iptables configuration on the operating system to gain access to these services.
Finally, we can add options to the Linux operating system. This would be similar to selecting the Orchestration option on the Oracle Compute Service. My recommendation is to not select this but do this with the apt-get or yum installation method using post configuration utilities and methods.
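As a sketch of that post-configuration approach, the snippet below picks the appropriate package manager command for the image; httpd and apache2 are stand-in package names, and on a real instance the emitted command would be run as root after the VM boots.

```shell
# Emit the install command appropriate to the image's package manager.
pkg_install_cmd() {
    if command -v yum >/dev/null 2>&1; then
        echo "yum -y install $1"        # Oracle Linux / RHEL style images
    else
        echo "apt-get -y install $1"    # Debian / Ubuntu style images
    fi
}

pkg_install_cmd httpd
```

Keeping the software installation in a script like this makes the configuration repeatable across cloud vendors, which the baked-in extension options are not.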
The final options that we have deal with how we pay for the service and which data center we drop the virtual machine into. We will accept the defaults of "Pay as you go" and "US East" data center for our exercise.
When we click "Create" we are put back on the main portal screen with an update window showing progress. The create takes a few minutes and will be dropped into the Virtual Machine window when finished.
You can click on the bell shaped icon on the top right to see the progress bar and click on the progress bar to look at the ongoing status of creation. Note that the status is "creating" and will change when the virtual machine is finished creating.
Once the status turns to "running" we can click on the machine name and get more detailed information like the ip address assigned and more detailed data on the final configuration.
Once everything is running we can login via putty on Windows or ssh on Linux or Mac. We get the ip address from the status page on the virtual machine. We use the username that we entered when we created the Linux instance. We use the private ssh keys to connect to the instance just like the Oracle Compute Service.
Once we accept the keys we can login and verify the os version and disk shape created.
In summary, the Oracle Compute Cloud and the Microsoft Azure Cloud are not that different. The selection of the operating system is much more of a GUI experience with Microsoft, and the Oracle shapes are much larger when it comes to memory. Microsoft has more options on the low end and high end, but the Oracle solution is designed for the Oracle Database and WebLogic servers. It took about the same amount of time to create the two virtual machines. Security is a little tighter with Oracle but can be made the same between the two. Azure gives you the option of using a username and password and allows you to open any port that you want into the virtual machine. Given that these instances are on the public internet, we recommend a tighter security configuration.
Oracle Linux on Oracle Compute Cloud
If we look to the right of the Compute Cloud Service header we see a "Service Console" tab. Clicking on this allows us to create a new virtual machine by clicking on the "Create Instance" button. Not all accounts will have the create instance button. Your account needs to have the funding behind generic compute and the ability to consume either metered or un-metered IaaS options.
Note that we have two virtual machines that have previously been created. The first listed is a database service that was created. The compute infrastructure is automatically created when we create a database as a service instance. The second listed is a Java service that was created through the Java Service console. The compute infrastructure was also created for the JaaS option. We can drill into these compute services to look at security, networking, and ip addresses assigned.
To create a virtual machine we click on the "Create Instance" button which takes us to the next screen. On this screen we enter the name of the virtual machine that we are creating, a description label, the operating system instance and type, and the shape of the instance. By shape we mean the number of processors and memory, since this is how compute shapes are priced.
To select the different types of operating systems, we can enter a "*" into the Image type and it lists a pull down of operating system types. You can scroll down to select different configurations and instances. In the screen shot below we see that we are selecting OEL 6.4 with a 5 GB root directory. The majority of the images are generic Linux instances with different disk configurations, different software packages installed, and different OS configurations.
The next step is to select the processor and memory size. Again, this is a pull down menu with a pre-configured set of options. We can select from 1, 2, 4, 8, and 16 virtual processors and either 15 GB of RAM or 30 GB of RAM per processor. These options might be a bit limiting for some applications or operations but are optimized and configured for database and java servers.
In this example we selected a 1 virtual processor, 15 GB of RAM, 5 GB of disk for the operating system, and Oracle Linux 6.4 as the operating system. We can enter tags so that we can associate this virtual machine with a target, production environment, system, or geographic location consuming the resources.
At this time we are not selecting any custom attributes and not using Orchestration to provision services, user accounts, passwords, or other services into our virtual machine. We click the "Next" button at the top of the screen to go to network configurations.
In the network configuration we can accept the defaults and have an ip address assigned to us. If we have an ip address on reserve we can consume that reserved address and even assign a name to it to resolve to linux6.mydomain.net if we wanted to map this to an internet name. In this example we just accept the defaults and try not to get too fancy on our first try. This will create an ip address for our server, open port 22 for ssh access, and allow us to network it to other cloud services inside our instance domain with local network routing.
The next step is to configure a disk to boot from. We are presented with the option of using a pre-configured existing disk or creating a new one. The list that we see here is a list of disks for the database and java servers that we previously created. We are going to click on the create new check box and have the system create the disk for us.
The storage property pull down allows us to select the type of disk that we are going to use. If we are trying to boot from this disk, we should select the default option. If we were installing a database we would select something like the iSCSI option to attach as the data or fast recovery disk options.
The final step is to upload the public key of our ssh key pair. This is probably the biggest differential between the three services. Amazon requires that you use their shared and secret key that they generate for you. Microsoft allows you to create a service without an ssh key and use a username and password to connect to the virtual machine. Oracle requires that you use the ssh public-private key that you generate with puttygen or ssh-keygen. The public key is uploaded during this creation time (or selected if you have previously uploaded the key). The private key is presented via putty or ssh connection to the server once it is created. The default users that are created in the Oracle instances are the oracle account that is part of the orainst group and the opc account that has sudo privileges.
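A minimal sketch of generating the pair with ssh-keygen (puttygen on Windows produces the same two artifacts through its GUI); the empty passphrase is for demonstration only and a real passphrase should be used for anything on the public internet.

```shell
# Generate a throwaway 2048-bit RSA key pair with no passphrase (demo only).
rm -f oracle_cloud_key oracle_cloud_key.pub
ssh-keygen -q -t rsa -b 2048 -N "" -f oracle_cloud_key

# oracle_cloud_key      -> private half, used by putty/ssh to connect
# oracle_cloud_key.pub  -> public half, uploaded at instance creation
head -c 7 oracle_cloud_key.pub    # prints "ssh-rsa"
```

The public half is what gets pasted or uploaded at creation time; the private half never leaves your desktop.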
Once we have everything entered, we click on next and review the screen. Clicking on the "Create" button will create a compute instance with the configuration and specifications that we requested. In this example we are creating a Linux 6.4 instance onto a 1 OCPU machine with 15 GB of memory and attaching a 5 GB disk drive for the operating system.
As the system is provisioning, it updates us on progress.
When everything is done we can view the details of the virtual machine and see the ip address, which key was used, and how the service is configured.
Before we can attach to the server, we need to open up the ssh port (port 22). This is done by going into the Network tab and adding a "Security Rule". This rule needs to be defined as public internet access to our default security rule since we accepted the default network rules as our network protocol in the creation of the virtual machine.
Note in this example we selected ssh as the protocol, public internet as the source, and default as the destination. With this we can now connect using putty from our Windows desktop. We need to configure putty with the ip address of our virtual machine and our private key as the ssh protocol for connecting. We could create a tunnel for other services but in this example we are just trying to get a command line login to the operating system.
Note that we can confirm that we are running Linux 6.4 and have a 5 GB hard drive for the operating system. The whole process takes a few minutes. This is relatively fast and can be scripted with curl commands. More on that later.
compute as a service
Amazon Web Services
The initial look and feel of the console starts the experience. It does show what the three companies are focused on. Let's start with Amazon (it is first in the alphabet and I had to pick something). The console lists a wide variety of services and things that you can purchase. Without doing research I would not have known that S3 stands for storage and EC2 stands for compute.
I get what a virtual server in the cloud is but how does that differ from a docker container and why should I care? Why should I care about managing Web Apps if I am just looking for raw compute? Why do I want to run code outside of a virtual machine? Which one should I choose? We are not going to go into depth on any of these subjects. If we are just looking at running a Linux instance, the simple EC2 should be adequate. We can install Docker as a package in our Linux instance to help us control how much of a processor is allocated to a service or program. We can install applications like Tomcat or WebLogic to run Web Apps. Linux gives us the foundation to do all of this with packages. Lambda is a totally different beast in that I can run code snippets to do things like voice command interpretation for an Amazon Echo or asynchronous events from devices and launch web sites or REST apis without having to install, manage, and configure an operating system. The rest of the world calls this a Node.js function and offers it as a separate service as well. I realize that I am oversimplifying this but you have to know what you are trying to accomplish before you start to create your first compute instance in the cloud.
Microsoft Azure Services
The Azure services are a little different in that they focus more on the user creation of virtual machines, SQL server, and some app services. Creation of a virtual image is relatively easy and it makes sense what you are doing. The console is relatively simple and clean with more options on the second page instead of the first page as is done with Amazon.
As you click on the Add button for Virtual Image you get an expanded set of operating system options and configurations.
Note that you can search for Oracle Linux and get a listing of various versions of the database. The virtual machine is easy to configure and create using the portal. If, however, you want to configure and create this via a command line, you need to download the PowerShell and run everything inside the application. The command line is Microsoft specific and difficult to port and migrate to other services. With Amazon and Oracle you can easily use REST API calls to provision and create services. Microsoft makes it a little more difficult to script generically, but you can easily do this in their shell and language.
Oracle Cloud Services
The Oracle cloud compute services are new to the market. In the past compute services have been sold in bundles of 500 processors but have recently been made available in single processor consumption models. The cloud console has a different look and feel because the focus of the cloud services is more on the PaaS layer and less on the compute and storage layers.
Note that the screen shot starts with the storage and compute services but scrolling down shows database, java, SOA, and more PaaS layers.
To create a virtual image, you need to click on the compute cloud service - Service Console and it will allow you to create an instance. The operating system selection is not as graphical or user friendly as the Microsoft interface but does list a variety of operating system options and configurations.
In conclusion, all three of these cloud consoles allow you to create a virtual image. In the next blog entry we will walk through the steps needed to create a Linux 6 instance on each of the three cloud platforms. We will not talk about how to create accounts. We will assume that you can find account setup and creation on your own. All three sites offer "try me" services that give evaluations of at least 30 days. The eventual recommendation will be to use services like bitnami.com that take public domain services like LAMP servers, Wiki engines, blog servers, and other public domain tools. The Bitnami site allows you to select a pre-configured instance and provision it into all three of these cloud services along with a few other cloud providers.
printing from apex using pl/sql
A typical question form would look like the following image, allowing us to ask for processor shape as well as quantity. If the total amount exceeds 50 processors, we output a message suggesting dedicated compute rather than compute on demand.
To get this message on the screen, we first had to pull in the questions that we ask using variables. In this example, we read in the UNMETERED_COMPUTE_SHAPE which is a pick list that allows you to select (1, 2, 4, 8, or 16) OCPU shapes. You can also type in a quantity number into UNMETERED_COMPUTE_QUANTITY. The product of these two values allows us to suggest dedicated or compute on demand for economic reasons.
To execute pl/sql commands, we have to change the content type. To create this area we first create a sub-region. We change the name of the sub-region to represent the question that we are trying to answer. For this example we use the title "Compute on Demand vs Dedicated Compute" as the sub-region header. We then change the type to "PL/SQL Dynamic Content". Under this we can then enter our dynamic code. The sub-region looks like
If you click on the expand button it opens a full editor allowing you to edit the code. In our example we are going to read the variables :UNMETERED_COMPUTE_SHAPE and :UNMETERED_COMPUTE_QUANTITY. Notice the colon in front of these names. This is how we treat the values as variables read from APEX. The code is very simple. It starts with a begin statement followed by an if statement. The if statement looks to see if we are allocating more than 50 processors. We then output a statement suggesting dedicated or compute on demand using the htp.p function call. This call prints what is passed to it to the screen. The code looks like the following.
Overall, this is a simple way of outputting code that requires control flow. In the previous example we used a select statement to output calculations. In this example we are outputting different sections and different recommendations based on our selections. We could also set variables that would expose or hide different sub-regions below this section. This is done by setting :OUTPUT_VARIABLE = desired_value. If we set the value inside the pl/sql code loop, we can hide or expose sections as we did in a previous blog by setting a value from a pull down menu.
The code used to output the recommendation is as follows:
BEGIN
  if (:UNMETERED_COMPUTE_SHAPE * :UNMETERED_COMPUTE_QUANTITY > 50) THEN
    htp.p('You might consider dedicated compute since you have ' ||
          :UNMETERED_COMPUTE_SHAPE * :UNMETERED_COMPUTE_QUANTITY ||
          ' OCPUs which is greater than the smallest dedicated compute of 50 OCPUs');
  else
    htp.p('Compute on Demand for a total of ' ||
          :UNMETERED_COMPUTE_SHAPE * :UNMETERED_COMPUTE_QUANTITY ||
          ' OCPUs');
  end if;
END;
converting excel to apex
One thing that I have noted is that stuff done in a spreadsheet can be automated via navigation menus in apex. I talk about this in another blog on how to create a navigation system based on parts of a service that you want, to get you to the calculation that you need. This is much better if you don't really know what you want and need to be led through a menu system to help you decide on the service that you are looking for.
To create a calculator for metered and un-metered services in a spreadsheet requires two workbooks. You can tab between the two and enter data into each spreadsheet. If something like a pricelist is entered into a unique spreadsheet, static references and dynamic calculations can be done easily. For example, we can create a workbook for archive - metered storage services and a workbook for archive - unmetered services, which will be blank since this is not a service that is offered. If we create a third workbook called pricelist, we can enter the pricing for archive services into the pricelist spreadsheet and reference it from the other sheets. For archive cloud services you need to answer four basic questions: how many months, how much you will start archiving, how much you will end up with, and how much you expect to read back during that period. We should see the following questions:
How Many Months? - cell F6
Initial Storage Capacity - cell F7
Final Storage Capacity - cell F8
Retrieval Factor - cell F9
The cost will be calculated as:
Storage Capacity: ((F8+F7+((F8-F7)/F6))*F6/2*price_of_archive_per_month)/F6 per month, or ((F8+F7+((F8-F7)/F6))*F6/2*price_of_archive_per_month) in total
Retrieval Cost: (((F8+F7+((F8-F7)/F6))/2)*(F9/100))*price_of_archive_retrieval/F6 per month, or (((F8+F7+((F8-F7)/F6))/2)*(F9/100))*price_of_archive_retrieval in total
Outbound Data Transfer: sumifs(table lookup, table lookup, ...) per month, or sumifs(table lookup, table lookup, ...)*F6 in total
In Apex, this is done a little differently, with a sequence of select statements and formatting statements to get the right answer.
select ' sub-part: ' || PRICELIST.PART_NUMBER || ' - Archive Storage Capacity ' as Description,
       to_char(PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2, '$999,990') as PRICE
  from PRICELIST PRICELIST
 where PRICELIST.PART_NUMBER = 'B82623'
UNION
select ' sub-part: ' || PRICELIST.PART_NUMBER || ' - Archive Retrieval ' as Description,
       to_char(PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2*(:RETRIEVE_ARCHIVE/100), '$999,990') as PRICE
  from PRICELIST PRICELIST
 where PRICELIST.PART_NUMBER = 'B82624'
UNION
select ' sub-part: ' || PRICELIST.PART_NUMBER || ' - Archive Deletes ' as Description,
       to_char(PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2*(:DELETE_ARCHIVE/100), '$999,990') as PRICE
  from PRICELIST PRICELIST
 where PRICELIST.PART_NUMBER = 'B82629'
UNION
select ' sub-part: ' || PRICELIST.PART_NUMBER || ' - Archive Small Files ' as Description,
       to_char(:SMALL_ARCHIVE, '$999,990') as PRICE
  from PRICELIST PRICELIST
 where PRICELIST.PART_NUMBER = 'B82630'
UNION
select ' sub-part: ' || PRICELIST.PART_NUMBER || ' - Outbound Data Transfer ' as Description,
       to_char(PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2*(:RETRIEVE_ARCHIVE/100), '$999,990') as PRICE
  from PRICELIST PRICELIST
 where PRICELIST.PART_NUMBER = '123456'
UNION
select ' Total:' as Description, to_char(sum(price), '$999,990') as Price
  from (
    select PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2 as price
      from PRICELIST where pricelist.part_number = 'B82623'
    UNION
    select PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2*(:RETRIEVE_ARCHIVE/100) as price
      from PRICELIST where pricelist.part_number = 'B82624'
    UNION
    select PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2*(:DELETE_ARCHIVE/100) as price
      from PRICELIST where pricelist.part_number = 'B82629'
    UNION
    select PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2*(:RETRIEVE_ARCHIVE/100) as price
      from PRICELIST where pricelist.part_number = '123456'
  );
The variable :INITIAL_ARCHIVE replaces F7, :FINAL_ARCHIVE replaces F8, and :RETRIEVE_ARCHIVE replaces F9. Rather than referring to the pricelist spreadsheet, we enter the pricing information into a database and do a select statement with the part_number being the key for the lookup. This allows for a much more dynamic pricebook and allows us to update and add items without risk of breaking the spreadsheet linkages. We can also use REST APIs to create and update pricing using an outside program to keep our price calculator up to date and current. Using a spreadsheet allows users to have out of date versions and there really is not any way of communicating to users who have downloaded the spreadsheet that there are updates unless we are all using the same document control system.
Note that we can do running totals by doing a sum over a select ... union statement. This allows us to compare two different services, like Amazon Glacier and Oracle Archive, easily on the same page. The only thing that we need to add is the cost of Glacier in the database and generate the select statements for each of the Glacier components. We can then use a REST API service nightly or weekly to verify the pricing of the services and keep the information up to date.
The select statements that we use are relatively simple. The difficult part is the calculation and the formatting of the output. For the bulk of the select statements we are passing in variables entered into a form and adding or multiplying values to get quantities of objects that cost money. We then look up the price from the database and print out dollar or quantity amounts of what needs to be ordered. The total calculation is probably the most complex because it uses a sum statement that takes the results of a grouping of select statements and reformats them into a dollar or quantity amount.
[Screenshots: an example of the interfaces in a traditional spreadsheet and in Application Express 5.0]
pulling X-Auth-Token from login
Most of the information that I got is from an online tutorial around creating storage containers. I basically boiled this information down and customized it a little to script everything.
First, authentication can be obfuscated by hiding the username and password in environment variables. I typically use a Mac so everything works well in a Terminal window. On Windows 7 I use Cygwin-64, which includes Unix-like commands that are good for scripting. The first step is to hide the username, identity domain, and password in environment variables.
- export OPASS=password
- export OUID=username
- export ODOMAIN=identity_domain
For our example account these become:
- export OPASS=password
- export OUID=cloud.admin
- export ODOMAIN=metcsgse00026
curl -v -X GET -H "X-Storage-User: Storage-$ODOMAIN:$OUID" -H "X-Storage-Pass: $OPASS" https://$ODOMAIN.storage.oraclecloud.com/auth/v1.0
Note the -v is for verbose and displays everything. If you drop the -v you don't get back the return headers. Passing -i might be a better option since the -v echoes the user password while the -i only replies back with the headers that you are interested in.
curl -i -X GET -H "X-Storage-User: Storage-$ODOMAIN:$OUID" -H "X-Storage-Pass: $OPASS" https://$ODOMAIN.storage.oraclecloud.com/auth/v1.0
In our example, this returned
HTTP/1.1 200 OK
date: 1458658839620
X-Auth-Token: AUTH_tkf4e26780c9e6b1d171f3dbeafa194cac
X-Storage-Token: AUTH_tkf4e26780c9e6b1d171f3dbeafa194cac
X-Storage-Url: https://storage.us2.oraclecloud.com/v1/Storage-metcsgse00026
Content-Length: 0
Server: Oracle-Storage-Cloud-Service
When you take this output and try to strip the X-Auth-Token from the header you get a strange progress report mixed in, so we add -s to the command (combining the flags as -is) to silence the progress meter.
% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
If you add grep "X-Auth-Token" followed by awk '{print $2}' you get back just the AUTH_ string, which is what we are looking for.
curl -is -X GET -H "X-Storage-User: Storage-metcsgse00026:cloud.admin" -H "X-Storage-Pass: $OPASS" https://metcsgse00026.storage.oraclecloud.com/auth/v1.0 | grep -s "X-Auth-Token" | awk '{print $2}'
AUTH_tkf4e26780c9e6b1d171f3dbeafa194cac
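Putting the pieces together, the token can be captured into an environment variable and reused in later calls. A small sketch: the canned response below is the reply shown above, so the parsing pipeline can run stand-alone without a live login; the live call is shown in the comment.

```shell
# Parse the X-Auth-Token out of a captured response. RESPONSE is the canned
# reply from the example above so the pipeline runs without an account.
RESPONSE='HTTP/1.1 200 OK
X-Auth-Token: AUTH_tkf4e26780c9e6b1d171f3dbeafa194cac
X-Storage-Url: https://storage.us2.oraclecloud.com/v1/Storage-metcsgse00026'

OTOKEN=$(printf '%s\n' "$RESPONSE" | grep "X-Auth-Token" | awk '{print $2}')
echo "$OTOKEN"   # prints AUTH_tkf4e26780c9e6b1d171f3dbeafa194cac

# Against the live service the same pipeline would be:
# OTOKEN=$(curl -is -X GET -H "X-Storage-User: Storage-$ODOMAIN:$OUID" \
#     -H "X-Storage-Pass: $OPASS" \
#     https://$ODOMAIN.storage.oraclecloud.com/auth/v1.0 \
#     | grep "X-Auth-Token" | awk '{print $2}')
```

Subsequent requests can then pass -H "X-Auth-Token: $OTOKEN" instead of the username and password.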
accessing oracle cloud storage from command line
Now that we have the cost and use out of the way, let's talk about how to consume these services. Unfortunately, consuming raw blocks, either tape or spinning disk, is difficult in the cloud. Amazon offers you an S3 interface and exposes the cloud services as an iSCSI interface through a downloadable object or via REST API services. Azure offers something similar with REST API services but offers SMB downloadable objects to access the cloud storage. Oracle offers REST API services but offers NFS downloadable objects to access the cloud storage. Let's look at three different ways of consuming the Oracle Cloud services.
The first way is to use the REST API. You can consume the services by accessing the client libraries using Postman from Chrome or RESTClient from Firefox. You can also access the service from the curl command line.
curl -v -X GET -H "X-Storage-User: Storage-metcsgse00026:cloud.admin" -H "X-Storage-Pass: $OPASS" https://metcsgse00026.storage.oraclecloud.com/auth/v1.0
In this example we are connecting to the identity domain metcsgse00026. The username that we are using is cloud.admin. We store the password in an environment variable OPASS and pull in the password when we execute the curl command. On Linux or a Mac, this is done from the pre-installed curl command. On Windows we had to install Cygwin-64 to get the curl command working. When we execute this curl command we get back an AUTH token that can be passed in to the cloud service to create and consume storage services. In our example above we received back X-Auth-Token: AUTH_tk928cf3e4d59ddaa1c0a02a66e8078008, which is valid for 30 minutes. The next step would be to create a storage container
curl -v -s -X PUT -H "X-Auth-Token: AUTH_tk928cf3e4d59ddaa1c0a02a66e8078008" https://storage.us2.oraclecloud.com/v1/Storage-metcsgse00026/myFirstContainer
This will create myFirstContainer and allow us to store data either with more REST API commands or with tools like CloudBerry or NFS. More information about how to use the REST API services can be found in an online tutorial.
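With the token and the X-Storage-Url in hand, the rest of the object lifecycle is just more curl calls. A hedged sketch reusing the values from above; the container and object names are examples, and the token shown will long since have expired:

```shell
# Reuse the storage URL and token returned by the authentication call above.
STORAGE_URL="https://storage.us2.oraclecloud.com/v1/Storage-metcsgse00026"
OTOKEN="AUTH_tk928cf3e4d59ddaa1c0a02a66e8078008"
CONTAINER="myFirstContainer"

# list the containers in the account
curl -s -X GET -H "X-Auth-Token: $OTOKEN" "$STORAGE_URL"

# upload a local file (backup.tar is an example name) into the container
curl -s -X PUT -H "X-Auth-Token: $OTOKEN" -T backup.tar \
  "$STORAGE_URL/$CONTAINER/backup.tar"

# list the objects now stored in the container
curl -s -X GET -H "X-Auth-Token: $OTOKEN" "$STORAGE_URL/$CONTAINER"
```

The pattern is uniform: the container path lists or creates containers, and the container/object path uploads, downloads, or deletes individual objects.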
The second way of accessing the storage services is through a program tool that takes file requests on Windows and translates them to REST API commands against the cloud storage. CloudBerry has an explorer that allows us to do this. [Screenshots: the CloudBerry user interface and the account configuration, set up through the File -> Edit or New Accounts menu item.] Note that the username is a combination of the identity domain (metcsgse00026) and the username (cloud.admin). We could do something similar with Postman or RESTClient extensions to browsers. Internet Explorer does not have plug-ins that allow for REST API calls.
The third, and final, way to access the storage services is through NFS. Unfortunately, Windows does not offer NFS client software on desktop machines so it is a little difficult to show this as a consumable service. Mac and Linux offer these services by mounting an NFS server as a network mount. Oracle currently does not offer SMB file shares to its cloud services but it is on the roadmap. We will not dive deep into the Oracle Storage Cloud Appliance in this blog because it gets a little complex with setting up a VM and installing the appliance software. The documentation for this service is a good place to start.
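For completeness, once the appliance VM is up and exporting a share, consuming it from a Linux client is an ordinary NFS mount. Everything here is hypothetical: the appliance host name, export path, and mount point are placeholders that depend entirely on how your appliance is configured.

```
# /etc/fstab entry on a Linux client (host name and export path are
# hypothetical placeholders for your appliance configuration)
oscsa-host:/export/myFirstContainer  /mnt/oraclecloud  nfs  rw  0 0
```

After a mount -a, files copied into /mnt/oraclecloud are pushed to the cloud container by the appliance behind the scenes.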
In summary, there are a variety of ways to consume storage services from Oracle. They are typically program interfaces and not file interfaces. The service is cost advantageous when compared to purchasing spinning disk from companies like Oracle, NetApp, or EMC. Using the storage appliance gets rid of the latency issues that you typically face as well as the difficulty of accessing data from a user perspective. Overall, this service provides higher reliability than on-premise storage, lower cost, and less administration overhead.
accessing cloud storage
To calculate the cost of cloud storage from Oracle, look at the pricing information on the cloud web page for metered and un-metered pricing.
If we do a quick calculation of the pricing for our earlier example, where we start with 1 TB and grow to 120 TB over a year, we can see the price difference between the two solutions, but also note how much reading the data back will eventually cost. This is something that Amazon hides when you purchase their services because you get charged for the upload and the download. Looking at this example we see that 120 TB of storage will cost us $43K per year with un-metered services but $36K per year with metered services, assuming a 20% read back of the data once it is uploaded. If the read back number doubles, so does that portion of the cost, and the price jumps to $50K. If we compare this cost to the $3K-$4K/TB cost of on-site storage, we are looking at $360K-$480K plus $40K-$50K in annual maintenance. It turns out it is significantly cheaper to grow storage into the cloud than to purchase a rack of disks and run them in your own data center.
The second way to consume storage cloud services is by using tape in the cloud rather than spinning disk in the cloud. Spinning disk on average costs $30/TB/month whereas tape averages $1/TB/month. Tape is not offered in an un-metered service so you do need to look at how much you read back because there is a charge of $5/TB to read the data back. This compares to $7/TB/month with Amazon plus the $5/TB upload and download charges.
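A quick back-of-the-envelope check of the yearly numbers, using the $30/TB/month disk rate and $1/TB/month tape rate quoted above (retrieval and transfer charges are left out for simplicity):

```shell
# Annual cost at steady state for 120 TB, using the per-TB monthly rates
# quoted in the text. Retrieval/transfer charges are excluded here.
TB=120
DISK_RATE=30   # $/TB/month, un-metered spinning disk
TAPE_RATE=1    # $/TB/month, archive (tape) storage

echo "disk: \$$((TB * DISK_RATE * 12)) per year"   # prints disk: $43200 per year
echo "tape: \$$((TB * TAPE_RATE * 12)) per year"   # prints tape: $1440 per year
```

The $43,200 figure lines up with the $43K un-metered number above, and the tape line shows why the $1/TB/month archive service is so attractive when read back is modest.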