Steve Karam

Steve Karam's Blog

Building Classrooms in the Cloud

Wed, 2016-11-16 16:22
Jumpbox Lab Server

Let’s face it: education without interaction is about as effective as shouting origami instructions at a lumberjack who is cutting down trees. Sure, your informative lessons will come in handy when the product of their work finally becomes paper, but it will be long forgotten and ultimately worthless by then. The only way a student is going to learn is if they can put their hands on what you’re teaching, walk the journey with you, and attach a positive interactive experience to the lesson.

A little over a year ago I was trying to solve this interaction problem. As Director of the Customer Education team at Delphix, I wanted to provide an interactive experience during classroom training that:

  • gives each student their own full lab stack
  • makes labs available worldwide
  • is easy to create, manage, and tear down
  • can be built upon over time
  • is intuitive for learners to access and perform their labs
  • is cost effective

I couldn’t find a product that did all of these things in the way I needed, so I decided to build my own. It started as a python command line package and evolved into a web application with a huge suite of features. In this post, I’d like to introduce the LabAlchemy application, its overall features, and how it is used. I’ll go over the tools used to build it in detail in later posts.

Introducing LabAlchemy – Classroom Management in the Amazon Web Services (AWS) Cloud

In order to fulfill bullets #1-4 above, it was clear the platform would need to be built in the cloud. Gone are the days of empty rooms full of servers that students VNC into, or even using your own laptop as your test bed. Those approaches worked, but they’re neither intuitive nor easy for everyone to work with. It is absolutely critical that your solution be intuitive for the learner, so they don’t fall down a rabbit hole just trying to get into the labs in the first place.

And so, I decided to build the platform on top of Amazon Web Services, a rugged and incredibly diverse set of services for building the equivalent of your own datacenters in the cloud. LabAlchemy itself is composed of:

  • The LabAlchemy central server that hosts the command line interface (written in python), configuration files (in YAML), web application (written in node.js), metadata repository (in mongoDB), and classroom routing (nginx, more on this later)
  • Amazon Machine Images (AMIs) for our various classroom components like Delphix Engines, source and target servers (Linux and Windows), and the ever important “jumpbox” (more on this later too)
  • Automated setup of Virtual Private Clouds (VPCs), subnets, security groups, EC2 instances, EBS volumes, and copious amounts of tags to keep everything tidy and well orchestrated.

In summary, what LabAlchemy gives us is this: classroom labs on demand, at the push of a button. Each lab can have a number of students, and each student gets their very own lab server and a complete Delphix stack to perform their labs. Labs are spun up in the Amazon cloud, segregated into their own VPCs for easy network management, and can be stopped, started, and terminated at will.

LabAlchemy in Action

Creating labs can be done using the Command Line Interface (CLI) or web application. Let’s look at the web app. It’s more fun.

Choosing a Classroom Type

There are a few things going on behind the scenes here, most notably logging in, which is done with SSO for Delphix employees.

Classroom Types

Once a user is logged in and given permissions by my team, they are able to spin up lab environments. The first step is for them to choose a classroom type.

These classroom types are defined in a YAML file that bundles different AMIs into pre-created classrooms; all of the details about each classroom type you see on this screen are baked into that file. As you can see here, we have classrooms for different versions of Delphix (laid out as tabs across the top) and different configurations like Oracle, MS SQL Server, Sybase, etc.
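The post doesn’t show the YAML schema itself, but to make the idea concrete, here’s a rough sketch (in Python, with entirely hypothetical field names and AMI IDs — not LabAlchemy’s actual schema) of what one classroom type might look like once parsed:

```python
# Hypothetical parsed form of one classroom-type entry from the YAML file.
# Field names and AMI IDs are illustrative only.
classroom_types = {
    "delphix-5.0-oracle": {
        "description": "Delphix 5.0 with Oracle source/target servers",
        "amis": {
            "jumpbox": "ami-11111111",
            "delphix_engine": "ami-22222222",
            "linux_source": "ami-33333333",
            "linux_target": "ami-44444444",
        },
    },
}

def servers_for_student(class_type, types=classroom_types):
    """Return the (role, ami) pairs to launch for one student in this class."""
    return sorted(types[class_type]["amis"].items())

for role, ami in servers_for_student("delphix-5.0-oracle"):
    print(role, ami)
```

The point is that everything the web app displays (and everything the launcher needs) lives in one declarative file, so adding a new classroom type is a config change, not a code change.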

Class Details

Lab Details

Once you’ve picked a classroom type, you have to enter details about the class, including the classroom name, number of students, and cost center (for internal accounting). Once the details are entered and confirmed, the job can be submitted.

Class Creation

Once the job is submitted, LabAlchemy allows you to quickly view the job log.

Job Log

Here is a sample job log, showing the “alchemy” classroom being spun up for 5 students. As you can see, LabAlchemy automates:

  • Configuration of a VPC for the classroom
  • Subnet setup
  • Internet Gateway setup
  • Servers for each student based on the AMI specifications

The log is updated in real time using a feature called WebSockets. We’ll cover that in a future post. Once the log is complete, LabAlchemy waits for the instances to come online, executes any final steps, and deems the classroom setup finished.

Viewing/Managing the Classroom

Current Classrooms

Once a classroom has been created, you can manage it from the “Current Classrooms” page. I’ve set it up to only show my classrooms (3 of 16 total right now). Each classroom is displayed with a small status box to the right showing whether it is started, stopped, creating, etc. The “alchemy” classroom details are displayed here. There are also a number of options for the administrator:

  • View – Snapshots of each student’s lab server are taken once a minute so instructors can monitor usage and progress
  • Direct Jumpbox Links – Easy access by IP address to student desktops (more in a moment)
  • Direct Delphix Links – Easy access by IP address to Delphix Engines in each student lab environment to use if the student desktop is not sufficient
  • Start/Stop/Terminate – Controls to stop, start, or terminate the servers in AWS, providing cost savings when labs are stopped, and easy self-service controls.

Accessing Labs

Now we can finally get to the fun part: accessing the labs!

Jumpbox Lab Server

Again, it’s very important to provide a clean and intuitive interface for your learners. To that end, I designed each lab around a student lab desktop, or jumpbox. This is a Linux server running Ubuntu and xfce4 with a theme that looks more or less like Windows. It includes Chrome, a terminal, PuTTY, SQL Developer, the Remmina RDP client, and a number of other tools to help students complete their work.

The best part, though, is how they’re accessed. On the backend, LabAlchemy configures an nginx reverse proxy, which forwards each connection based on a URL scheme. In this case, the scheme is http://classname.labalchemyhost.com/studentnum, where “classname” is the classroom name given at creation, “studentnum” is the student’s number (assigned after creation), and “labalchemyhost.com” is the [redacted] hostname for LabAlchemy itself. You simply go to that URL in your browser, enter the username and password provided by the instructor, and the whole lab server is available to you via HTML5 in your browser. No Flash, no plugins, just easy VNC over HTML5. For example, you can see the desktop for Student #3 in this section.
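The URL scheme is simple enough to express in a couple of lines. A small sketch (remember, the real hostname is redacted in this post, so “labalchemyhost.com” below is a stand-in):

```python
def jumpbox_url(classname, student_num, host="labalchemyhost.com"):
    """Build a student's jumpbox URL per the scheme described above.

    classname   - classroom name given at creation
    student_num - the student's assigned number
    host        - stand-in for the redacted LabAlchemy hostname
    """
    return "http://{0}.{1}/{2}".format(classname, host, student_num)

print(jumpbox_url("alchemy", 3))  # http://alchemy.labalchemyhost.com/3
```

On the nginx side, that classname/student pair is what the reverse proxy uses to route the browser to the right jumpbox.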

I didn’t invent this capability. I’m using an amazing open source software package called guacamole, which allows for VNC, RDP, or SSH over HTML5 with a wonderful set of additional features. In LabAlchemy, I orchestrated the configuration of guacamole for all jumpboxes, and provide access to it over the nginx reverse proxy using WebSockets. What you see is the result: a fully functional desktop in your web browser.

Neat! What Else?

As you can see from the previous section, the goal was to make it easy to access these labs. No special software requirements (like a VNC client), no changing IP addresses to jot down. Just a browser and a standardized URL. But I took that ease of use a bit further with some additional features:

  • Guacamole allows connection sharing. This means that more than one person can be on a desktop at a time. In a classroom setting, that allows for students to work on labs, and other students or an instructor to help them out if needed.
  • All servers assigned to a student (Delphix Engine, Linux servers, etc.) are accessed using an easy to follow IP scheme: 10.0.studentnumber.suffix. “Studentnumber” is the number the student was given prior to class. Suffix depends on the system; Delphix Engines are at .10, Linux source servers on .20, Linux Targets on .30 and .40, and so on. This makes it so students always get a consistent experience from their labs, every time.
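That IP scheme can be sketched as a tiny helper. The role names below are my own labels (the post doesn’t name them), but the suffixes are the ones given above:

```python
# Suffixes per the scheme in the post; Linux targets get .30 and .40.
# Role names are my own labels for illustration.
SUFFIXES = {
    "delphix_engine": [10],
    "linux_source": [20],
    "linux_target": [30, 40],
}

def student_ips(student_number):
    """Map each system role to its 10.0.<student>.<suffix> address(es)."""
    return {
        role: ["10.0.%d.%d" % (student_number, s) for s in suffixes]
        for role, suffixes in SUFFIXES.items()
    }

print(student_ips(3)["delphix_engine"])  # ['10.0.3.10']
```

Because the third octet is always the student number, a lab guide can say “your Delphix Engine is at 10.0.N.10” and be right for every student in every class.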
  • Lab Guides! This is for Delphix Education, remember? By deploying Lab Guides to these systems, students are able to work through a number of activities right from their jumpbox.
  • Carepackages, just like in The Hunger Games. LabAlchemy was designed to bring up pre-created labs from systems saved in Amazon as AMIs. In order to make modifications, add extra features, etc., I designed a system of private/public keys and deployable carepackages, which allow for custom instructions, files, etc. following classroom launch.
  • Mobile! Since the LabAlchemy app was created with a mobile friendly graphics library (Twitter Bootstrap 3), and the jumpboxes are displayed with HTML5, you can spin up/tear down classrooms and take labs with an iPad or other tablet. You can do it on a phone too, but man that’s tiny.

That’s it for today! We’ve been using this system internally for a little over a year to provide training and to do internal testing, development, and all manner of demos and learning workshops. At its peak we’ve had over 550 servers up at a single time. The best part: on average, running labs costs about $0.75 per student per hour. The beauty of the burstable cloud.

Join me next time for more of a deep dive on the technology behind LabAlchemy: python, node.js, guacamole, and more.

The post Building Classrooms in the Cloud appeared first on Oracle Alchemist.

Provisioning Virtual ASM (vASM)

Mon, 2015-08-03 07:30
Create vFiles

In this post, we’re going to use Delphix to create a virtual ASM diskgroup, and provision a clone of the virtual ASM diskgroup to a target system. I call it vASM, which is pronounced “vawesome.” Let’s make it happen.

She’s always hungry. She always needs to feed. She must eat. – Gollum

Most viewers assume Gollum was talking about Shelob the giant spider here, but I have it on good authority that he was actually talking about Delphix. You see, Delphix (Data tamquam servitium in trinomial nomenclature) is the world’s most voracious datavore. Simply put, Delphix eats all the data.

Now friends of mine will tell you that I absolutely love to cook, and they actively make it a point to never talk to me about cooking because they know I’ll go on like Bubba in Forrest Gump and recite the million ways to make a slab of meat. But if there’s one thing I’ve learned from all my cooking, it’s that it’s fun to feed people who love to eat. With that in mind, I went searching for new recipes that Delphix might like and thought, “what better meal for a ravenous data muncher than an entire volume management system?”

vASM

Delphix Architecture

In normal use, Delphix links to an Oracle database and ingests changes over time using RMAN “incremental as of SCN” backups, archive logs, and online redo. This creates a compressed, deduped timeline (called a Timeflow) that you can provision as one or more Virtual Databases (VDBs) from any point in time.

However, Delphix has another interesting feature known as AppData, which allows you to link to and provision copies of flat files: scripts, software binaries, code repositories, and other unstructured data. It uses rsync to build a Timeflow, and allows you to provision one or more vFiles from any point in time. But on top of that (and even cooler, in my opinion), you have the ability to create “empty vFiles,” which amounts to an empty directory on a system, except that the storage for the directory is served straight from Delphix. And it is this area that serves as an excellent home for ASM.

We’re going to create an ASM diskgroup using Delphix storage, and connect to it with Oracle’s dNFS protocol. Because the ASM storage lives completely on Delphix, it takes advantage of Delphix’s deduplication, compression, snapshots, and provisioning capabilities.

Some of you particularly meticulous (read: cynical) readers may wonder about running ASM over NFS, even with dNFS. I’d direct your attention to this excellent test by Yury Velikanov. Of course, your own testing is always recommended.

I built this with:

  • A Virtual Private Cloud (VPC) in Amazon Web Services
  • Redhat Enterprise Linux 6.5 Source and Target servers
    • Oracle 11.2.0.4 Grid Infrastructure
    • 11.2.0.4 Oracle Enterprise Edition
  • Delphix Engine 4.2.4.0
  • Alchemy

Making a vASM Diskgroup

Before we get started, let’s turn on dNFS while nothing is running. This is as simple as using the following commands on the GRID home:

[oracle@ip-172-31-0-61 lib]$ cd $ORACLE_HOME/rdbms/lib
[oracle@ip-172-31-0-61 lib]$ pwd
/u01/app/oracle/product/11.2.0/grid/rdbms/lib
[oracle@ip-172-31-0-61 lib]$ make -f ins_rdbms.mk dnfs_on
rm -f /u01/app/oracle/product/11.2.0/grid/lib/libodm11.so; cp /u01/app/oracle/product/11.2.0/grid/lib/libnfsodm11.so /u01/app/oracle/product/11.2.0/grid/lib/libodm11.so
[oracle@ip-172-31-0-61 lib]$

Now we can create the empty vFiles area in Delphix. This can be done through the Delphix command line interface, API, or through the GUI. It’s exceedingly simple to do, requiring only a server selection and a path.

Create vFiles Choose Path for vASM disks Choose Delphix Name Set Hooks Finalize

Let’s check our Linux source environment and see the result:

[oracle@ip-172-31-0-61 lib]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1            9.8G  5.0G  4.3G  54% /
tmpfs                 7.8G   94M  7.7G   2% /dev/shm
/dev/xvdd              40G   15G   23G  39% /u01
172.31.7.233:/domain0/group-35/appdata_container-32/appdata_timeflow-44/datafile
                       76G     0   76G   0% /delphix/mnt

Now we’ll create a couple ASM disk files that we can add to an ASM diskgroup:

[oracle@ip-172-31-0-61 lib]$ cd /delphix/mnt
[oracle@ip-172-31-0-61 mnt]$ truncate --size 20G disk1
[oracle@ip-172-31-0-61 mnt]$ truncate --size 20G disk2
[oracle@ip-172-31-0-61 mnt]$ ls -ltr
total 1
-rw-r--r--. 1 oracle oinstall 21474836480 Aug  2 19:26 disk1
-rw-r--r--. 1 oracle oinstall 21474836480 Aug  2 19:26 disk2

Usually the “dd if=/dev/zero of=/path/to/file” command is used for this purpose, but I used the “truncate” command instead. It quickly creates sparse files that are perfectly suitable.
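As a side note, the same sparse-file trick works from Python by setting the file length without writing any data. This sketch (not part of the lab itself) creates an equivalent 20GB disk file:

```python
import os
import tempfile

def make_sparse_disk(path, size_bytes):
    """Create a sparse file the way `truncate --size` does: set the file
    length without allocating data blocks."""
    with open(path, "wb") as f:
        f.truncate(size_bytes)

path = os.path.join(tempfile.mkdtemp(), "disk1")
make_sparse_disk(path, 20 * 1024 ** 3)
print(os.path.getsize(path))  # 21474836480
```

The file reports its full logical size (as in the `ls -ltr` output above) but consumes almost no space on disk until blocks are actually written.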

And we’re ready! Time to create our first vASM diskgroup.

SQL> create diskgroup data
  2  disk '/delphix/mnt/disk*';

Diskgroup created.

SQL> select name, total_mb, free_mb from v$asm_diskgroup;

NAME				 TOTAL_MB    FREE_MB
------------------------------ ---------- ----------
DATA				    40960      40858

SQL> select filename from v$dnfs_files;

FILENAME
--------------------------------------------------------------------------------
/delphix/mnt/disk1
/delphix/mnt/disk2

The diskgroup has been created, and we verified that it is using dNFS. But creating a diskgroup is only 1/4th the battle. Let’s create a database in it. I’ll start with the simplest of pfiles, making use of OMF to get the database up quickly.

[oracle@ip-172-31-0-61 ~]$ cat init.ora
db_name=orcl
db_create_file_dest=+DATA
sga_target=4G
diagnostic_dest='/u01/app/oracle'

And create the database:

SQL> startup nomount pfile='init.ora';
ORACLE instance started.

Total System Global Area 4275781632 bytes
Fixed Size		    2260088 bytes
Variable Size		  838861704 bytes
Database Buffers	 3422552064 bytes
Redo Buffers		   12107776 bytes
SQL> create database;

Database created.

I’ve also run catalog.sql, catproc.sql, and pupbld.sql and created an SPFILE in ASM, but I’ll skip pasting those here for at least some semblance of brevity. You’re welcome. I also created a table called “TEST” that we’ll try to query after the next part.

Cloning our vASM Diskgroup

Let’s recap what we’ve done thus far:

  • Created an empty vFiles area from Delphix on our Source server
  • Created two 20GB “virtual” disk files with the truncate command
  • Created a +DATA ASM diskgroup with the disks
  • Created a database called “orcl” on the +DATA diskgroup

In sum, Delphix has eaten well. Now it’s time for Delphix to do what it does best, which is to provision virtual objects. In this case, we will snapshot the vFiles directory containing our vASM disks, and provision a clone of them to the target server. You can follow along with the gallery images below.

Snapshot vASM area Provision vFiles Choose the Target and vASM Path Name the Target for vASM Set Hooks Finalize Target

Here’s the vASM location on the target system:

[oracle@ip-172-31-2-237 ~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1            9.8G  4.1G  5.2G  44% /
tmpfs                 7.8G   92M  7.7G   2% /dev/shm
/dev/xvdd              40G   15G   23G  39% /u01
172.31.7.233:/domain0/group-35/appdata_container-34/appdata_timeflow-46/datafile
                       76G  372M   75G   1% /delphix/mnt

Now we’re talking. Let’s bring up our vASM clone on the target system!

SQL> alter system set asm_diskstring = '/delphix/mnt/disk*';

System altered.

SQL> alter diskgroup data mount;

Diskgroup altered.

SQL> select name, total_mb, free_mb from v$asm_diskgroup;

NAME				 TOTAL_MB    FREE_MB
------------------------------ ---------- ----------
DATA				    40960      39436

But of course, we can’t stop there. Let’s crack it open and access the tasty “orcl” database locked inside. I copied over the “initorcl.ora” file from my source so it knows where to find the SPFILE in ASM. Let’s start it up and verify.

SQL> startup;
ORACLE instance started.

Total System Global Area 4275781632 bytes
Fixed Size		    2260088 bytes
Variable Size		  838861704 bytes
Database Buffers	 3422552064 bytes
Redo Buffers		   12107776 bytes
Database mounted.
Database opened.
SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DATA/orcl/datafile/system.259.886707477
+DATA/orcl/datafile/sysaux.260.886707481
+DATA/orcl/datafile/sys_undots.261.886707485

SQL> select * from test;

MESSAGE
--------------------------------------------------------------------------------
WE DID IT!

As you can see, the database came online, the datafiles are located on our virtual ASM diskgroup, and the table I created prior to the clone operation came over with the database inside of ASM. I declare this recipe a resounding success.

Conclusion

A lot happened here. Such is the case with a good recipe. But in the end, my actions were deceptively simple:

  • Create a vFiles area
  • Create disk files and an ASM diskgroup inside of Delphix vFiles
  • Create an Oracle database inside the ASM diskgroup
  • Clone the Delphix vFiles to a target server
  • Bring up vASM and the Oracle database on the target server

With this capability, it’s possible to do some pretty incredible things. We can provision multiple copies of one or more vASM diskgroups to as many systems as we please. What’s more, we can use Delphix’s data controls to rewind vASM diskgroups, refresh them from their source diskgroups, and even create vASM diskgroups from cloned vASM diskgroups. Delphix can also replicate vASM to other Delphix engines so you can provision in other datacenters or cloud platforms. And did I mention it works with RAC? vFiles can be mounted on multiple systems, a feature we use for multi-tier EBS provisioning projects.

But perhaps the best feature is that you can use Delphix’s vASM disks as a failgroup to a production ASM diskgroup. That means that your physical ASM diskgroups (using normal or high redundancy) can be mirrored via Oracle’s built in rebalancing to a vASM failgroup comprised of virtual disks from Delphix. In the event of a disk loss on your source environment, vASM will protect the diskgroup. And you can still provision a copy of the vASM diskgroup to another system and force mount for the same effect we saw earlier.
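As a sketch of what that failgroup arrangement would look like (the disk paths below are illustrative, not taken from the environment built earlier):

```sql
-- Hypothetical example: a normal redundancy diskgroup mirrored across a
-- physical failgroup and a Delphix vASM failgroup. Disk paths are illustrative.
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP physical DISK '/dev/oracleasm/disks/DATA1'
  FAILGROUP delphix  DISK '/delphix/mnt/disk1';
```

ASM’s normal rebalancing then keeps a full mirror of every extent on the Delphix-backed disks.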

There is plenty more to play with and discover here. But we’ll save that for dessert. Delphix is hungry again.

The post Provisioning Virtual ASM (vASM) appeared first on Oracle Alchemist.

Real Push Button Refresh with Raspberry Pi

Sun, 2015-04-19 17:19
Push Button

In this post, we’re going to perform a push button refresh of an Oracle Database, Application Express (APEX) installation, and Tomcat webserver.

“But Oracle Alchemist,” you’re probably thinking, “we know about that. You’ve told us about how Delphix can provision and refresh data.” And yes, you’d be right. But I wasn’t done yet.

We’re going to perform a refresh of an Oracle Database, APEX installation, and Tomcat running in Amazon Web Services, replicated from a local Delphix Engine, by pressing a physical button wired to a Raspberry Pi running a python app that communicates with the Delphix REST API in the cloud over wifi.

We’ll go over technical details and the Python code right after the video. Make sure you check it out so you can watch me excitedly press the button. I even did a refresh from Starbucks.

I Like Pi

I’ve wanted a Raspberry Pi for a long time now, and for whatever reason never got around to buying one. I finally did last week, and let me tell you, this thing is such a beautiful little device I nearly fried the logic board by weeping on it. The components, how tiny! The GPIO pins, how enticing!

Raspberry Pi B+

The Raspberry Pi really is the ultimate geek toy. I ended up going with the Raspberry Pi Model B+. It has a 40 pin GPIO header (an I/O interface we’ll use for this article), four USB 2.0 ports, a push-button micro SD slot for the hard drive, and the standard HDMI port, ethernet port, and power via micro USB. I can power it using my iPhone charging block or even with a USB battery pack. The one I bought also came with NOOBS on an 8GB SD card, which was preloaded with Raspbian, ArchLinux, OpenELEC, Pidora, RaspBMC, and RiscOS. I opted for Raspbian.

Zebra Case

As for extras, I also got a super sexy little case called a Zebra Case. It’s designed and built right here in the good ol’ US of A, is incredibly easy to assemble, has optional rubber feet, can be hung up on a wall, and allows easy access to the GPIO pins. I also picked up an Edimax EW-7811Un 150Mbps 11n Wi-Fi USB Adapter because I don’t like being tied down.

Let’s Communicate

I have Delphix 4.2 (the latest version) set up on my laptop with a bunch of data sources: Oracle, an APEX app in Tomcat, Sybase, Postgres, MS SQL Server, and a Delphix Agile Masking repository. I also have Delphix 4.2 installed in Amazon Web Services and am replicating the Oracle Database and application stack to it via Delphix replication. In Amazon, the database, Tomcat, and APEX library replicas have all been provisioned to a target Linux system as Virtual Databases (VDBs) and Virtual Files (vFiles). Delphix has a powerful GUI that can handle refresh, rewind, etc. operations but instead we’re going to do it programmatically through the Delphix REST API in Python. Why? Because alchemy, that’s why. Raspbian has Python already loaded with the GPIO library built in. So let’s take a look at the code.

Here’s the delphix.py code that connects to my Amazon Delphix Engine and performs the operations.

import urllib2
import simplejson as json
from cookielib import CookieJar

# VDBControl accepts op = [ refresh | undo ]

def VDBControl(op):

  # Delphix Details
  url = "http://ec2-52-1-228-37.compute-1.amazonaws.com"
  username = "delphix_admin"
  password = "delphix"

  # urllib2 setup
  headers = {"Content-Type" : "application/json"}
  cj = CookieJar()
  opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))

  # Get Delphix Session
  data = { "type" : "APISession", "version" : { "type" : "APIVersion", "major" : 1, "minor" : 1, "micro" : 0 } }
  opener.open(urllib2.Request(url + "/resources/json/delphix/session", data=json.dumps(data), headers=headers))

  # Login to Delphix
  data = { "type" : "LoginRequest", "username" : "delphix_admin", "password" : "delphix" }
  opener.open(urllib2.Request(url + "/resources/json/delphix/login", data=json.dumps(data), headers=headers))

  # Get Delphix Objects
  request = opener.open(urllib2.Request(url + "/resources/json/delphix/source", headers=headers))
  content = request.read()
  sourcelist = json.loads(content)

  # Loop through Delphix Objects

  for source in sourcelist["result"]:
    if source["virtual"] == True:
      # get container details including name and type
      virtualContainer = source["container"]
      virtualName = source["name"]
      request = opener.open(urllib2.Request(url + "/resources/json/delphix/database/" + virtualContainer, headers=headers))
      content = request.read()
      parentContainer = json.loads(content)["result"]["parentContainer"]
      objType = json.loads(content)["result"]["type"]
      refreshType = "OracleRefreshParameters" if objType == "OracleDatabaseContainer" else "RefreshParameters"

      if op == "refresh":
        # refresh each virtual object
        print "Refreshing " + virtualName + "..."
        data = { "type" : refreshType, "timeflowPointParameters" : { "type" : "TimeflowPointSemantic", "container" : parentContainer } }
        request = opener.open(urllib2.Request(url + "/resources/json/delphix/database/" + virtualContainer + "/refresh", data=json.dumps(data), headers=headers))
        content = request.read()
        print "Output: " + content
        print "Refresh Job Submitted."
      if op == "undo":
        print "Undoing Refresh of " + virtualName + " (if applicable)..."
        request = opener.open(urllib2.Request(url + "/resources/json/delphix/database/" + virtualContainer + "/undo", headers=headers))
        content = request.read()
        print "Output: " + content
        print "Undo Job Submitted."

In a nutshell, what this code does is:

  • Defines a VDBControl function that accepts an operation, either “refresh” or “undo”
  • Declares variables for the Delphix Engine and the Python urllib2 classes
  • Gets a Delphix session via the API and puts a cookie in the cookie jar
  • Logs into Delphix
  • Gets a list of Delphix objects with a REST call to /resources/json/delphix/source on the Delphix Engine
  • Loops through the list and finds the objects that are either Virtual Databases or Virtual Files
  • Grabs some auxiliary data needed for the refresh, like the source object we’re refreshing from
  • If a refresh is being called, does the refresh by passing the proper data to Delphix using the /resources/json/delphix/database/VDBNAME/refresh REST method
  • If an undo is being called, does the undo by passing the proper data using the undo method

Now that we’ve seen the code that interfaces with Delphix, let’s take a look at the code that interfaces with the GPIO.

import RPi.GPIO as GPIO
import time
from delphix import VDBControl

GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(24, GPIO.IN, pull_up_down=GPIO.PUD_UP)

while True:
    input_state18 = GPIO.input(18)
    if input_state18 == False:
        VDBControl("refresh")
        time.sleep(0.2)

    input_state24 = GPIO.input(24)
    if input_state24 == False:
        VDBControl("undo")
        time.sleep(0.2)

Pi and Breadboard

This code must be run as root on the Raspberry Pi to interface with the GPIO. I actually have it appended to my rc.local so it starts automatically when the Raspberry Pi comes online. The code is really quite simple, as long as everything is plugged in properly. On my breadboard I have two buttons, each slotted in across the middle gap. The gap divides the breadboard so circuits can be built from both sides while only designated components “bridge” the gap. Two jumper cables connect each button to the Raspberry Pi GPIO. My refresh button has a positive connection to GPIO pin 18, and a ground connection. The undo button has a positive connection to GPIO pin 24, and a ground connection. Then the python code simply:

  • Sets up pins 18 and 24 for monitoring
  • Runs an endless loop
  • If GPIO pin 18 is clicked, the state changes to False, triggering a call to my delphix.VDBControl function with the “refresh” operation. It then sleeps for a brief moment to make sure the button press doesn’t rapid fire a bunch of presses.
  • If GPIO pin 24 is clicked, the same thing happens except an “undo” operation is passed.

And that’s it! A real, honest-to-goodness push-button refresh, courtesy of the cutest little computer I’ve ever seen and a hobby board, wires, and buttons that made me feel nervous carrying it around in public. It wouldn’t take much to make it presentable, though. In fact, given the right top and bottom coverings, we could make quite the button. That was easy.®

So, what could you do with a Raspberry Pi and a couple buttons? Let me know in the comments!

Thank you to Simon Monk and the Raspberry Pi Cookbook for the button guide!

The post Real Push Button Refresh with Raspberry Pi appeared first on Oracle Alchemist.

Deploying Application Express with Delphix

Sat, 2015-01-24 14:31
VDBs

Seamless cloning of an application stack is an outstanding goal. Seamless cloning of an application stack including the full production database, application server, and webserver in a few minutes with next to zero disk space used or configuration required is the best goal since Alexander Graham Bell decided he wanted a better way to tell Mr. Watson to “come here.”

So in the spirit of discovery, I’ve installed Oracle REST Data Services (ORDS) 2.0 and Oracle Application Express (APEX) 4.2 to a source Oracle database environment in my home Delphix setup. I’m going to:

  1. Sync the ORDS binaries with Delphix as a file source
  2. Sync the APEX binaries with Delphix as a file source
  3. Sync the ORCL database with Delphix as a database source
  4. Provision a clone of the ORCL database to a target linux system as DBDEV
  5. Provision a clone of the ORDS and APEX binaries to the target system

Some of you may be scratching your head right now thinking “What is Delphix?” I’ve written a few words on it in the past, and Kyle Hailey has quite a bit of information about it along with other links such as Jonathan Lewis explaining Delphix at OOW14.

If you’re into the whole brevity thing, here’s a short summation: Delphix is a technology you can sync nearly any kind of source data into and provision on demand from any point in time to any target, near instantly and at the click of a button, all without incurring additional disk space. What that means for your business is incredibly efficient development, faster time to market, and improved application quality. And if you want to see this in action, you can try it for yourself with Delphix Developer Edition.

Let’s use Delphix to deploy APEX to a target system.

Step 1. A look at the source

On the source environment (linuxsource, 172.16.180.11) I have an 11.2.0.1 database called “orcl”.

ORCL Source Database

In the /u01/app/oracle/product directory are ./apex and ./ords, holding the APEX and ORDS installations respectively.

Source Products Directory

When ORDS is started, I am able to see the APEX magic by browsing to http://172.16.180.11:8080/apex and logging in to my InvestPLUS workspace. Here are the pre-packaged apps I have installed:

Source System APEX Apps

Sweet. Let’s check out what I have set up in Delphix.

Step 2. Check out the Delphix Sources

You can see that I have the ORCL database (named InvestPLUS DB Prod), Oracle REST Data Services, and APEX homes all loaded into Delphix here:

Delphix Sources

When I say they’re loaded into Delphix, I mean they’ve been synced. The ORCL database is synced over time with RMAN and archive logs and compressed about 3x on the base snapshot and 60x on the incremental changes. The /u01/app/oracle/product/apex and /u01/app/oracle/product/ords directories have also been synced with Delphix and are kept up to date over time. From these synced copies we can provision one or more Virtual Databases (VDBs) or Virtual Files (vFiles) to any target we choose.

Step 3. Deploy

Provisioning both VDBs and vFiles is very quick with Delphix and takes only a few button clicks. Just check out my awesomely dramatized video of the provisioning process. For this demo, first I provisioned a clone of the ORCL database to linuxtarget (172.16.180.12) with the name DBDEV.

Provisioning DBDEV to the target

Next I provisioned a copy of the ORDS home to the target at the same location as the source (/u01/app/oracle/product/ords) with the name ORDS Dev:

ORDS Dev on the target

And lastly I provisioned a copy of the APEX home to the target at the same location as the source (/u01/app/oracle/product/apex) with the name APEX Dev:

APEX Dev on target

In hindsight I probably could have just synced /u01/app/oracle/product and excluded the ./11.2.0 directory to get both ORDS and APEX, but hey, I like modularity. By having them separately synced, I can rewind or refresh either one on my target system.

Here’s the final provisioned set of clones on the target (you can see them under the “InvestPLUS Dev/QA” group on the left nav):

Provisioned Clones

Step 4. Check out the target system

Let’s see what all this looks like on the target system. Looking at the /u01/app/oracle/product directory on the target shows us the same directories as the source:

Target directories

I’ve also got the DBDEV database up on the target:

DBDEV on the target

To give you a glimpse of how Delphix provisioned the clone, check this out. Here’s a “df -h” on the linuxtarget environment:

Linux Target df command

What this is showing us is that the APEX Home, ORDS Home, and DBDEV clone are all being served over NFS from Delphix (172.16.180.3). This is how Delphix performs a clone operation, and why we call it virtual: data is synced and compressed from sources into Delphix, and when you provision a clone Delphix creates virtual sets of files that are presented over the wire to the target system. You can think of Delphix as a backup destination for source databases/filesystems, and as network attached storage for targets. The clever bit is that Delphix uses the same storage for both purposes, with no block copies at all unless data is changed on the target VDBs or vFiles. Cool, right? On a side note and for the curious, Delphix can use dNFS as well for your Oracle VDBs.
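If you want to reproduce that check on a target of your own, the NFS-backed mounts are easy to pick out. A quick sketch (df flags vary a bit across platforms; this assumes GNU coreutils):

```shell
# Show the filesystem type next to each mount; rows whose type
# matches "nfs" are the ones served from the Delphix engine
# (172.16.180.3 in the df output above)
df -hT | awk 'NR==1 || $2 ~ /nfs/'
```

On the linuxtarget environment this would list the APEX home, ORDS home, and DBDEV datafiles, all sourced from the engine's IP.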

Step 5. Reconfigure ORDS

On the source environment, ORDS is configured to connect to the ORCL database. On the target we’re connecting to the DBDEV database. So the one quick change we’ll need to make is the SID in the /u01/app/oracle/product/ords/config/apex/defaults.xml file.

[delphix@linuxtarget ords]$ vi config/apex/defaults.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>Saved on Wed Jan 14 08:38:04 EST 2015</comment>
<entry key="cache.caching">false</entry>
<entry key="cache.directory">/tmp/apex/cache</entry>
<entry key="cache.duration">days</entry>
<entry key="cache.expiration">7</entry>
<entry key="cache.maxEntries">500</entry>
<entry key="cache.monitorInterval">60</entry>
<entry key="cache.procedureNameList"/>
<entry key="cache.type">lru</entry>
<entry key="db.hostname">localhost</entry>
<entry key="db.password">@050784E0F3307C86A62BF4C58EE984BC49</entry>
<entry key="db.port">1521</entry>
<entry key="db.sid">DBDEV</entry>
<entry key="debug.debugger">false</entry>
<entry key="debug.printDebugToScreen">false</entry>
<entry key="error.keepErrorMessages">true</entry>
<entry key="error.maxEntries">50</entry>
<entry key="jdbc.DriverType">thin</entry>
<entry key="jdbc.InactivityTimeout">1800</entry>
<entry key="jdbc.InitialLimit">3</entry>
<entry key="jdbc.MaxConnectionReuseCount">1000</entry>
<entry key="jdbc.MaxLimit">10</entry>
<entry key="jdbc.MaxStatementsLimit">10</entry>
<entry key="jdbc.MinLimit">1</entry>
<entry key="jdbc.statementTimeout">900</entry>
<entry key="log.logging">false</entry>
<entry key="log.maxEntries">50</entry>
<entry key="misc.compress"/>
<entry key="misc.defaultPage">apex</entry>
<entry key="security.disableDefaultExclusionList">false</entry>
<entry key="security.maxEntries">2000</entry>
</properties>

Note the only line I had to change was this one: <entry key="db.sid">DBDEV</entry>
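If you refresh or re-provision clones often, this one-line edit is worth scripting. A minimal sketch: the scratch file below stands in for the real config, and on the target you would point CONF at /u01/app/oracle/product/ords/config/apex/defaults.xml instead.

```shell
# Demo against a scratch file; on the real target, CONF would be
# /u01/app/oracle/product/ords/config/apex/defaults.xml
CONF=$(mktemp)
echo '<entry key="db.sid">orcl</entry>' > "$CONF"

# Swap whatever SID is present for the clone's SID
sed -i 's|<entry key="db.sid">[^<]*</entry>|<entry key="db.sid">DBDEV</entry>|' "$CONF"

cat "$CONF"
```

That way the SID swap can be folded into a post-provision hook and you never have to open vi at all.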

After the config change, I just had to start ORDS on the target:

[delphix@linuxtarget ords]$ java -jar apex.war
Jan 21, 2015 1:18:22 PM oracle.dbtools.standalone.Standalone execute
INFO: NOTE:

Standalone mode is designed for use in development and test environments. It is not supported for use in production environments.

Jan 21, 2015 1:18:22 PM oracle.dbtools.standalone.Standalone execute
INFO: Starting standalone Web Container in: /u01/app/oracle/product/ords/config/apex
Jan 21, 2015 1:18:22 PM oracle.dbtools.standalone.Deployer deploy
INFO: Will deploy application path = /u01/app/oracle/product/ords/config/apex/apex/WEB-INF/web.xml
Jan 21, 2015 1:18:22 PM oracle.dbtools.standalone.Deployer deploy
INFO: Deployed application path = /u01/app/oracle/product/ords/config/apex/apex/WEB-INF/web.xml
Jan 21, 2015 1:18:22 PM oracle.dbtools.common.config.file.ConfigurationFolder logConfigFolder
INFO: Using configuration folder: /u01/app/oracle/product/ords/config/apex
Configuration properties for: apex
cache.caching=false
cache.directory=/tmp/apex/cache
cache.duration=days
cache.expiration=7
cache.maxEntries=500
cache.monitorInterval=60
cache.procedureNameList=
cache.type=lru
db.hostname=localhost
db.password=******
db.port=1521
db.sid=DBDEV
debug.debugger=false
debug.printDebugToScreen=false
error.keepErrorMessages=true
error.maxEntries=50
jdbc.DriverType=thin
jdbc.InactivityTimeout=1800
jdbc.InitialLimit=3
jdbc.MaxConnectionReuseCount=1000
jdbc.MaxLimit=10
jdbc.MaxStatementsLimit=10
jdbc.MinLimit=1
jdbc.statementTimeout=900
log.logging=false
log.maxEntries=50
misc.compress=
misc.defaultPage=apex
security.disableDefaultExclusionList=false
security.maxEntries=2000
db.username=APEX_PUBLIC_USER
Jan 21, 2015 1:18:58 PM oracle.dbtools.common.config.db.ConfigurationValues intValue
WARNING: *** jdbc.MaxLimit in configuration apex is using a value of 10, this setting may not be sized adequately for a production environment ***
Jan 21, 2015 1:18:58 PM oracle.dbtools.common.config.db.ConfigurationValues intValue
WARNING: *** jdbc.InitialLimit in configuration apex is using a value of 3, this setting may not be sized adequately for a production environment ***
Using JDBC driver: Oracle JDBC driver version: 11.2.0.3.0
Jan 21, 2015 1:18:59 PM oracle.dbtools.rt.web.SCListener contextInitialized
INFO: Oracle REST Data Services initialized
Oracle REST Data Services version : 2.0.10.289.08.09
Oracle REST Data Services server info: Grizzly/1.9.49

Jan 21, 2015 1:18:59 PM com.sun.grizzly.Controller logVersion
INFO: GRIZZLY0001: Starting Grizzly Framework 1.9.49 - 1/21/15 1:18 PM
Jan 21, 2015 1:18:59 PM oracle.dbtools.standalone.Standalone execute
INFO: http://localhost:8080/apex/ started.
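With ORDS up, a quick smoke test from the target confirms the listener is answering. The curl call here is my own addition, not part of the original run; a 302 redirect to the login page is the usual healthy answer:

```shell
# Ask the APEX endpoint for just the HTTP status code; "|| true"
# keeps the check from aborting if ORDS isn't up yet
code=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/apex || true)
echo "HTTP status: $code"
```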

Step 6. Victory

With ORDS started, I’m now able to access APEX on my target and log in to see my applications.

APEX Login on Target

APEX Apps on Target

Conclusion (or Step 7. Celebrate)

The cloned ORDS and APEX homes on the target and the DBDEV database are 100% full clones of their respective sources; block-for-block copies, if you will. No matter how big the source data, these clones take only a few clicks and a few minutes to create, consume barely any disk space (megabytes, not gigabytes), and can be refreshed from the source or rewound in minutes.

Delphix is capable of deploying not just database clones, but the whole app stack. Because Delphix stores incremental data changes (based on a retention period you decide), applications can be provisioned from any point in time or multiple points in time. And you can provision as many clones as you want to as many targets as you want, CPU and RAM on the targets permitting. All in all a fairly powerful capability and one I’ll be experimenting on quite a bit to see how the process and benefits can be improved. I’m thinking multi-VDB development deployments and a rewindable QA suite next!

The post Deploying Application Express with Delphix appeared first on Oracle Alchemist.

Managing your Database in a Zombie Apocalypse

Sun, 2014-10-12 08:00
Zombie Server Room

Only two things are really certain: network latency over long distances, and the fact that humanity will soon rapidly degenerate into undead brain-eaters.

When that day comes, when the dead are crowding at your door and the windows are busted out and ripped up rotted arms are clawing at the inside of your home, I know what you’ll be thinking: is my database protected?

Don’t worry, my friends. The Oracle Alchemist has you covered. We just need to zombie-proof your DR plan. Let’s get started.

Getting the Power Back

Hopefully you did the smart thing and figured out how much battery and generator power you’d need to survive multiple years of failing power systems due to zombies. I know I did.

However, if you didn’t get this critical task done you may still have some options. Statistics show that the demand for U.S. gasoline was 8.73 million barrels in 2012. That comes out to 23,917.80821917808219 barrels per day of fuel that’s out there just waiting for you to snatch it up. The problem is going to be getting it. You’ll need to load yourself down with lots of weaponry and strike out in a fuel truck a few times a week, which will definitely take away from your database administration time. It’s a smart idea to enable a lot of automation and monitoring to take care of things while you’re out.
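Taking the post’s annual figure at face value (the real statistic is closer to 8.7 million barrels per day, but zombie math is forgiving), the division checks out:

```shell
# 8.73 million barrels spread across 365 days of apocalypse
awk 'BEGIN { printf "%.11f barrels/day\n", 8730000 / 365 }'
```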

Zombie Gas Station

You’re going to need to fight other groups of surviving IT folks for fuel. This means you’re going to need friends. The way I see it, you have two choices: SysAdmins and Developers. They’re the two groups you work closest with as a DBA, so they’re the most likely to have your back when the dead walk. Start your planning now. If you want to get the developers on your side, tune some queries for them. Seriously, nothing will convince a developer to slice through the brain base of a walker like adding some key indexes when a query goes south during testing. However, if you think the SysAdmins are more likely to fight off rival gangs of resource hogs on the prowl for food and fuel, you can make them feel good by keeping all your filesystems cleaned up and RAM usage at a minimum.

The Problem with Zombies

Remember, the walking dead are tenacious. You remember before the apocalypse when a bunch of reporting users would all log into the database and run huge ad hoc queries against PROD without thinking about what they were doing? That was nothing. Zombies are the real deal. They will tear through a database faster than a multi-terabyte Cartesian product. You can deploy outside the box now to increase your chances of having a clone out there somewhere, just in case. If you want that database to survive, you’re going to need standbys. Lots of them.

I’d recommend a hub and spoke configuration. One central production database, at least 5 standby databases. As everybody knows, the chances of a zombie bringing down a database are roughly 89.375%. With 5 standby environments, you can drastically reduce the odds of being left without a standby system. On the plus side, zombies are completely brainless. What this means is that you don’t have to worry about masking or obfuscating your backup data in any way. Even on the off chance one of them kicks off a query (even zombies can figure out SQL Developer), they won’t be able to comprehend your users’ personal data, and with the complete downfall of the dollar it won’t matter if they see any credit information. So rest easy.
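For the statistically minded survivor, the standby math can be sanity-checked. This assumes the 89.375% figure is gospel and that zombies attack each environment independently:

```shell
# Probability that the primary AND all five standbys are taken down,
# with an 89.375% chance of any single environment falling
awk 'BEGIN { printf "%.1f%%\n", (0.89375 ^ 6) * 100 }'
```

Roughly 51%: still coin-flip territory, but a big improvement over the 89.375% you’d face with no standbys at all.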

When All Else Fails

At some point, the zombies are going to come for you. Sorry, but it’s a statistical fact and there’s not much we can do about that. At that moment, when all hope is lost, you’re really going to need to protect your database because once you become a zombie too there really won’t be anyone left and you won’t be focused on maintaining it anymore; you’ll be focused on acquiring copious amounts of human flesh.

Zombie Server RoomSo make your last stand count. You’re a soon-to-be-undead DBA, act like it! Remember how we tune. Eliminate the wait, punch through the bottlenecks, make efficient use of processing power. Don’t get trapped between a rack and a hard place. If you have to play a game of circle-the-Exadata in order to get away, go for it, but don’t let them corner you. And whatever you do, make sure you keep your badge with you. The last thing you need is to hit a door you can’t get through without the proper credentials. Above all else: remember to kick off a backup before they finally take you. I’d recommend having the script ready and running on the console just in case you have to hit a quick key.

Good luck. You’re going to need it.

The post Managing your Database in a Zombie Apocalypse appeared first on Oracle Alchemist.

Another Great OpenWorld

Fri, 2014-10-10 06:00
Steve at the Delphix Booth

Last week I attended Oracle OpenWorld 2014, and it was an outstanding event filled with great people, awesome sessions, and a few outstanding notable experiences.

Personally I thought the messaging behind the conference itself wasn’t as amazing and upbeat as OpenWorld 2013, but that’s almost to be expected. Last year there was a ton of buzz around the introduction of Oracle 12c, Big Data was a buzzword that people were totally excited and not too horribly burnt out on, and there was barely a cloud in the sky. This year cloud it was cloud all about cloud the Cloud cloud (Spoiler alert: it was the Cloud all along) which just didn’t have that same excitement factor.

But it’s still OpenWorld, set in the heart of San Francisco with tens of thousands of buzzing Oracle faithful. And therefore it was still a pretty awesome time.

Jonathan Lewis at the Delphix Booth

This year I went representing Delphix, and man did we represent. The enthusiasm and technical curiosity were evident as our booth filled up for three days straight with folks eager to hear the good news of data virtualization. I have to say, the DBA in me finds the promise of syncing databases to a software platform that can provision full-size, read/write clones in a couple minutes with no additional disk usage quite alluring. But there was more to the message than the technology behind the platform; there were also a plethora of use cases that captured people’s attention. Faster and more on-time business intelligence and analytics, application and database testing, regulatory compliance, and more. If that wasn’t enough, we also had Jonathan Lewis, Tim Gorman, Kyle Hailey, Ben Prusinski, and yours truly speaking at the booth, which was a great bit of fun and drew a lot of folks that wanted to learn more.

On Monday I was honored to be invited back on SiliconAngle’s conference web show theCUBE to talk about copy data, Delphix, the Cloud (that should be fun for people running Cloud to Butt), Oracle’s strategy, and more. They had not one but two booths at OpenWorld this year. The always charismatic and ever savvy Dave Vellante and I had an outstanding chat, which you can see right here!

Another fantastic part of the conference was OakTable World, which is technically not part of OpenWorld…rather, it is a “secret” conference-within-a-conference. Held at the Children’s Museum nestled in the bosom of the Moscone Center (yay visuals), this conference features a lineup of incredibly technical folks talking about incredibly technical things to the wonder and amazement of all. This year was no different, with a great assortment of no-nonsense presentations. On the 2nd day of OakTable World there was also something I liked to call the Attack of the Attacks: #CloneAttack, #RepAttack, and #MonitorAttack. This event featured Delphix, DBVisit, and SolarWinds Confio and allowed people to get the software installed on their own laptops for tinkering, learning, and testing.

Pythian put on a couple exciting events as always, with the Friends of Pythian party on Monday night and the OTN Blogger Meetup on Wednesday. Both events were a blast as always, with a huge assortment of members of the Oracle community and beyond. Honestly, it’s worth going just for the good food and to see Alex Gorbachev stand up on a booth bench and try to hush a crowd of buzzing datafiends.

All in all it was an outstanding OpenWorld and it was great catching up with some amazing and brilliant people. I can’t wait to see you all again next year!

The post Another Great OpenWorld appeared first on Oracle Alchemist.

Openworld 2014

Sat, 2014-09-27 15:30
Oracle OpenWorld

Are you going to Oracle OpenWorld 2014? I am, and I hope to see you there! As you probably know, OpenWorld is a humongous event with tens of thousands of people in attendance and some amazing presentations. Why, I even hear Oracle’s new CTO might be there. I wonder if he has anything to talk about?

This year we’ll have a ton of activity at the Delphix booth. You can check out the speaking schedule, location, and more on Delphix’s website. At the very least you’ll want to register to win a sweet custom carbon fiber bike. I’ll be speaking on Monday, Tuesday, and Wednesday at the booth so please come on by and say hello.

I’ll also be hanging out quite a bit at OakTable World with data technology luminaries Kyle Hailey, Tim Gorman, Ben Prusinski, Adam Bowen, and other Delphix folks to take part and prepare for a very special set of events: the Attacks!

CloneAttack

Delphix, DBVisit, and Solarwinds will be teaming up to present #CloneAttack, #RepAttack, and #MonitorAttack respectively, three events aimed at giving you real-world usage of some outstanding products for the Oracle world. Delphix specifically will be providing all participants with a 90-day developer trial of Delphix in all its glory so you can learn and experiment yourself with virtual data platforms. Be sure to check it out.

Lastly, I’m excited to announce that I will once again be appearing on #theCUBE this year. John Furrier and Dave Vellante are incredible hosts who really dig deep into the meat of the new tech and offerings in the IT world, and I’m honored to have been a part of their show last year and to be on again this year. That will be on Monday at 1PM PDT, so tune in if you can.

This promises to be an outstanding OpenWorld. I look forward to seeing you there!

The post Openworld 2014 appeared first on Oracle Alchemist.

Teach Your Children Well

Wed, 2014-09-03 07:00
computerroom

Today’s the first day of school in my city, and the plethora of “OMG 1st day of school!” posts on Facebook and my own kids heading back in to slog through another year of fundamentals got me thinking about education. My own kids (12/daughter and 10/son) came home and went swimming since it was about 104 degrees today…the hottest day of the summer, as it were.

Anyways, after that activity my son decided to get down to business. And by business I mean: play Minecraft. Some of my friends and family have banned Minecraft during the school week or even as a whole, because of the incredibly addictive nature of the game. I elected to make my son play it on the computer instead of the Xbox or iDevice. See, on those systems it’s slightly more mindless (yet still very creative); you just pick the things you want to make and it makes them, and the game is what the game is because it’s on a console. On the computer, you have to memorize the ingredients and patterns necessary to make objects (from making andesite to baking cakes). You can mod the game with a variety of amazing mods out there in the community. You can play on servers that have goals ranging from “kill everyone you see” to “let’s build something incredible together.” I like that. It’s like Legos, except with social implications, unlimited blocks of every kind, and electronics lessons all rolled into one.

What’s more, in the last few months my son has learned to install Java, use basic DOS, modify heap parameters, create a .BAT file, and many other cool things. Add that to his foray into the world of cryptocurrency, and he’s growing a set of very valuable skills. My daughter’s no slouch either, she spent a couple years on Ubuntu Linux and actually came to like it quite a bit!

Okay, so enough bragging on my kids. They’re goofballs anyways.

Teach Kids Cool Things

A while back I posted about how I became a DBA while in High School. When I was 18, I offered to help someone at a community college with a presentation to teach college students about programming. I remember them telling me that it was a ridiculous concept, because kids can’t learn those kinds of things anyways. It struck a nerve as you might imagine. Kids can learn highly technical skills if someone is willing to teach them. And with the proliferation of technical gadgets and abilities in the world, they want to learn. You can even start them young…REALLY young.

There are some great resources out there beyond build-a-world-with-blocks games like Minecraft. There are simple learning sites like phpforkids.com, highly interactive and informative classes like Code Academy, and even specially made storyboarding and design programs like Scratch by MIT.

For books with associated exercises, we have 3D Game Programming for Kids, Learn to Program with Minecraft Plugins (Java and Minecraft!), and even Python with Python for Kids: A Playful Introduction to Programming.

Babies don’t get a pass from learning! Code Babies has a set of excellent introductions to programming including Web Design, CSS, and HTML. A is NOT for “Aardvark,” it’s for “Anchor Tag”!

There are even online courses like Learn JAVA with Minecraft (my son is doing this one and enjoys it), and summer camps as well, to teach kids online and in person.

Even Oracle has a part to play in the education of youngsters with Oracle Academy. Oracle Academy has resources for all levels of learning, contests, and even a self-study programming environment and course called Greenfoot that teaches Java from zero experience with 2D gaming design. I’ve had the privilege of judging some of their contests in the past and was astounded by the skills these young people had in highly advanced technology.

This is so awesome. I really wish these sorts of things were around when I was a kid. If you have children and want them to learn how to tech, the resources are plentiful and amazing.

The post Teach Your Children Well appeared first on Oracle Alchemist.

The Customer Experience

Wed, 2014-07-23 08:00
The Apple Experience

I’m going to kick this post off by taking sides in a long-standing feud.

Apple is amazing.

There. Edgy, right? Okay, so maybe you don’t agree with me, but you have to admit that a whole lot of people do. Why is that?

NOT part of the Customer Experience. Image from AppleFanSite.com

Sure, there’s the snarky few that believe Apple products are successful due to an army of hipsters with thousands in disposable income, growing thick beards and wearing skinny jeans with pipes in mouth and books by Jack Kerouac in hand, sipping lattes while furiously banging away on the chiclet keyboard of their Macbook Pro with the blunt corner of an iPad Air that sports a case made of iPhones. I have to admit, it does make for an amusing thought. And 15 minutes at a Starbucks in SoHo might make you feel like that’s absolutely the case. But it’s not.

If you browse message boards or other sites that compare PCs and Apple products, you’ll frequently see people wondering why someone would buy a $2,000 Macbook when you can have an amazing Windows 8.1 laptop with better specs for a little over half the price. Or why buy an iPad when you can buy a Samsung tablet running the latest Android which provides more freedom to tinker. Or why even mess with Apple products at all when they’re not compatible with Fragfest 5000 FPS of Duty, or whatever games those darn kids are playing these days.

Part of the Customer Experience. Image provided by cnet.com

The answer is, of course, customer experience. Apple has it. When you watch a visually stunning Apple commercial, complete with crying grandpas Facetiming with their newborn great-grandson and classrooms of kids typing on Macbook Airs, you know what to expect. When you make the decision to buy said Macbook Air, you know that you will head to the Apple Store, usually in the posh mall in your town, and that it will be packed to the gills with people buzzing around looking at cases and Beats headphones and 27″ iMacs. You know that whatever you buy will come in a sleek white box, and will be placed into a thick, durable bag with two drawstring cords that you can wear like a backpack.

When you get it home and open the box, it’s like looking at a Tesla Model S. Your new laptop, situated inside a silky plastic bed and covered in durable plastic with little tabs to peel it off. The sleek black cardboard wrapped around a cable wound so perfectly that there’s not a single millimeter of space between the coils, nor a plug out of place. The laptop itself will be unibody, no gaps for fans or jiggly CD-ROM trays or harsh textures.

All of which is to say, Apple provides an amazing customer experience. Are their products expensive, sometimes ridiculously so? Of course. But people aren’t just buying into the product, they’re buying into the “Apple life.” And why not? I’d rather pay for experiences than products any day. I may be able to get another laptop with better specs than my Macbook Pro Retina, but there will always be something missing. Not the same Customer Experience. Maybe the screen resolution isn’t quite so good, maybe the battery doesn’t last as long, or maybe it’s something as simple as the power cord coming wrapped in wire bag ties with a brick the size of my head stuffed unceremoniously into a plastic bag. The experience just isn’t there, and I feel like I’ve bought something that’s not as magnificent as the money I put into it, features and specs be damned.

Customer experience isn’t just a buzz phrase, and it doesn’t just apply to how you deal with angry customers or how you talk to them while making a sale. It also doesn’t mean giving your customer everything they want. Customer experience is the journey from start to finish. It’s providing a predictable, customer-centric, and enjoyable experience for a customer that is entrusting their hard-earned cash in your product. And it applies to every business, not just retail computer sellers and coffee shops. What’s more, it applies to anyone in a service-oriented job.

Customer Experience for IT Professionals

In a previous post I mentioned how important it is to know your client. Even if your position is Sub-DBA In Charge of Dropping Indexes That Start With The Letter Z, you still have a customer (Sub-DBA In Charge Of Dropping Indexes That Start With The Letters N-Z, of course). Not just your boss, but the business that is counting on you to do your job in order to make a profit. And you may provide an exceptional level of service. Perhaps you spend countless hours whittling away at explain plans until a five page Cognos query is as pure as the driven snow and runs in the millisecond range. But it’s not just what you do, but how you do it that is important.

I want you to try something. And if you already do this, good on you. Next time you get a phone call request from someone at your work, or have a phone meeting, or someone sends you a chat asking you to do something, I want you to send a brief email back (we call this an “ack” in technical terms) that acknowledges their request, re-lists what they need in your own words (and preferably with bullets), and lists any additional requirements or caveats. Also let them know how long it will take. Make sure you don’t underestimate, it’s better to quote too much time and get it to them early. Once you’ve finished the work, write a recap email. “As we discussed,” you might say, “I have created the five hundred gazillion tables you need and renamed the table PBRDNY13 to PBRDNY13X.” Adding, of course, “Please let me know if you have any other requests.”

If the task you did involves a new connection, provide them the details (maybe even in the form of a TNSNAMES). If there are unanswered questions, spell them out. If you have an idea that could make the whole process easier next time, run it by them. Provide that level of experience on at least one task you accomplish for your customer if you do not already, and let me know if it had any impact that you can tell. Now do it consistently.

The Apple Experience

From what I’ve seen, this is what separates the “workers” from the “rockstars.” It’s not the ability to fix problems faster than a speeding bullet (though that helps, as a service that sells itself), but the ability to properly communicate the process and give people a good expectation that they can count on.

There’s a lot more to it than that, I know. And some of you may say that you lack the time to have this level of care for every request that comes your way. Perhaps you’re right, or perhaps you’re suffering from IT Stockholm Syndrome. Either way, just give it a shot. I bet it will make a difference, at least most of the time.

Conclusion

Recently, I became the Director of Customer Education and Experience at Delphix, a job that I am deeply honored to have. Delphix is absolutely a product that arouses within customers an eager want; it solves complex business problems, has an amazing delivery infrastructure in the Professional Services team, and provides top notch support thereafter. A solid recipe for Customer Experience if there ever was one. But it’s not just about the taste of the meal, it’s about presentation as well. And so it is my goal to continuously build an industrialized, scalable, repeatable, and enjoyable experience for those who decide to invest their dollar on what I believe to be an amazing product. Simply put, I want to impart on them the same enthusiasm and confidence in our product that I have.

I hope you have the chance to do the same for your product, whatever it may be.

The post The Customer Experience appeared first on Oracle Alchemist.

Frequently Misused Metrics in Oracle

Tue, 2014-04-15 14:43
Lying Businessman

Back in March of last year I wrote an article on the five frequently misused metrics in Oracle: These Aren’t the Metrics You’re Looking For.

To sum up, my five picks for the most misused metrics were:

Business Graph

  1. db file scattered read – Scattered reads aren’t always full table scans, and they’re certainly not always bad.
  2. Parse to Execute Ratio – This is not a metric that shows how often you’re hard parsing, no matter how many times you may have read otherwise.
  3. Buffer Hit Ratio – I want to love this metric, I really do. But it’s an advisory one at best, horribly misleading at worst.
  4. CPU % – You license Oracle by CPU. You should probably make sure you’re making the most of your processing power, not trying to reduce it.
  5. Cost – No, not money. Optimizer cost. Oracle’s optimizer might be cost based, but you are not. Tune for time and resources, not Oracle’s own internal numbers.

Version after version, day after day, these don’t change much.

Anyways, I wanted to report to those who aren’t aware that I created a slideshow based on that blog for RMOUG 2014 (which I sadly was not able to attend at the last moment). Have a look and let me know what you think!

Metric Abuse: Frequently Misused Metrics in Oracle

Have you ever committed metric abuse? Gone on a performance tuning snipe hunt? Spent time tuning something that, in the end, didn’t even really have an impact? I’d love to hear your horror stories.

Also while you’re at it, have a look at the Sin of Band-Aids, and what temporary tuning fixes can do to a once stable environment.

And lastly, keep watching #datachat on Twitter and keep an eye out for an update from Confio on today’s #datachat on Performance Tuning with host Kyle Hailey!

The post Frequently Misused Metrics in Oracle appeared first on Oracle Alchemist.