
Rittman Mead Consulting

Delivering Oracle Business Intelligence

Endeca Information Discovery Integration with Oracle BI Apps

Sat, 2015-01-31 10:46

One of the new features that made its way into Oracle BI Apps, and that completely passed me by at the time, was integration with Oracle Endeca Information Discovery 3.1 via a new ODI11g plug-in. As this new integration brings the Endeca “faceted search” and semi-structured data analysis capabilities to the BI Apps, this is actually a pretty big deal, so let’s take a quick look now at what this integration brings and how it works under the covers.

Oracle BI Apps integration with Oracle Endeca Information Discovery 3.1 requires OEID to be installed either on the same host as BI Apps or on its own server, and uses a custom ODI11g KM called “IKM SQL to Endeca Server 3.1.0” that needs to be downloaded separately from Oracle’s E-Delivery site. The KM download also comes with an ODI Technology XML import file that adds Endeca Server as a technology type in the ODI Topology Navigator, and it’s here that the connection to the OEID 3.1 server is set up for subsequent data loads from BI Apps 11g.


As the IKM has “SQL” specified as the loading technology, any source that can be extracted from using standard JDBC drivers can be used to load the Endeca Server, and the KM itself has options for creating the schema to go with the data upload, creating the Endeca Server record spec, specifying the collection name and so on. BI Apps ships with a number of interfaces (mappings) and ODI load plans to populate Endeca Server domains for specific BI Apps fact groups, with each interface taking all of the relevant repository fact measures and dimension attributes and loading them into one big Endeca Server denormalized schema.


When you take a look at the physical (flow) tab for the interface, you can see that the data is actually extracted via the BI Apps RPD and the BI Server, then staged in the BI Apps DW database before being loaded into the Endeca Server domain. Column metadata in the BI Server repository is then used to define the datatypes, names, sizes and other details of the Endeca Server domain schema, an improvement over just pointing Endeca Information Discovery Integrator at the BI Apps database schema.


The ODI repository also provides a pre-built Endeca Server load plan that allows you to turn on and turn off loads for particular fact groups, and sets up the data domain name and other variables needed for the Endeca Server data load.


Once you’ve loaded the BI Apps data into one or more Endeca Server data domains, there are a number of pre-built sample Endeca Studio applications you can use to get a feel for the capabilities of the Endeca Information Discovery Studio interface. For example, the Student Information Analytics Admissions and Recruiting sample application lets you filter on any attribute from the dataset on the left-hand side, and then instantly returns the filtered dataset, displayed in the form of a number of graphs, word clouds and other visualizations.


Another interesting sample application is around employee expense analysis. Using features such as word clouds, it’s relatively easy to see which expense categories are being used the most, and you can use Endeca Server’s search and text parsing capabilities to extract keywords and other useful bits of information out of free-text areas, for example the supporting text for expense claims.


Typically, you’d use Endeca Information Discovery as a “dig around the data, let’s look for something interesting” tool, with all of your data attributes listed in one place and automatic filtering and focusing on the filtered dataset using the Endeca Server in-memory search and aggregation engine. The sample applications that ship with the product are just meant as a starting point though (and unlike the regular BI Apps dashboards and reports, aren’t themselves supported as an official part of the product), but as you can see from the Manufacturing Endeca Studio application below, there’s plenty of scope to create returns, production and warranty-type applications that let you get into the detail of the attribute and textual information held in the BI Apps data warehouse.


More details on the setup process are in the online docs, and if you’re new to Endeca Information Discovery take a look at our past articles on the blog on this product.

Categories: BI & Warehousing

Rittman Mead at BIWA Summit 2015

Sun, 2015-01-25 20:16

I’m writing this in my hotel room in downtown San Francisco, with my colleague Francesco Tisiot flying in tonight and US colleagues Jordan Meyer, Daniel Adams and Andy Rocha travelling down tomorrow and Monday for next week’s BIWA Summit 2015. The Business Intelligence, Warehousing and Analytics SIG is a part of IOUG and this year also hosts the 11th Annual Oracle Spatial Summit, giving us three days of database-centric content touching most areas of the Oracle BI+DW stack. 


Apart from our own sessions (more in a moment), BIWA Summit 2015 has a great line-up of speakers from the Oracle Database and Hadoop worlds, featuring Cloudera’s Doug Cutting and Oracle’s Paul Sonderegger, along with most of the key names from the Oracle BI+DW community including Christian Screen, Tim & Dan Vlamis, Tony Heljula, Kevin McGinley and Stewart Bryson, Brendan Tierney, Eric Helmer, Kyle Hailey and Rene Kuipers. From a Rittman Mead perspective we’re delivering a number of sessions over the three days, details below:

  • Jordan Meyer, “Intro to Data Science for Oracle Professionals” – Tuesday Jan 27th, 9.00am – 9.50am
  • Mark Rittman, “Bringing Oracle Big Data SQL to OBIEE and ODI” – Wednesday Jan 28th 11.00am – 11.50am
  • Daniel Adams & Andy Rocha, “OBIEE Data Visualization: The How and the Why” – Wednesday Jan 28th, 1.00pm – 1.50pm (pt.1) and 2.00 – 2.50pm (pt.2)
  • Francesco Tisiot, “Oracle BI Cloud Service : What is It, and Where Will it Be Useful?” – Wednesday Jan 28th, 4.00pm – 4.50pm
  • Mark Rittman, “End-to-End Hadoop Development using OBIEE, ODI and Oracle Big Data” – Thursday Jan 29th, 9.30am – 10.20am

Rittman Mead are also proud to be one of the media sponsors for the BIWA Summit 2015, so look out for blogs and other activity from us, and if you’re coming to the event we look forward to seeing you there.

Categories: BI & Warehousing

Rittman Mead’s Development Cluster, EM12c and the Blue Medora VMware EM Plugin

Fri, 2015-01-23 15:56

For development and testing purposes, Rittman Mead run a VMware vSphere cluster made up of a number of bare-metal servers hosting Linux, Windows and other VMs. Our setup has grown over the years from a bunch of VMs running on Mac Mini servers to where we are now, and was added to considerably over the past twelve months as we started Hadoop development – a typical Cloudera CDH deployment we work with requires six or more nodes along with the associated LDAP server, Oracle OBIEE + ODI VMs and NAS storage for the data files. Last week we added our Exalytics server as a repurposed 1TB ESXi VM server, giving us the topology shown in the diagram below.


One of the purposes of setting up a development cluster like this was to mirror the types of datacenter environments our customers run, and we use VMware vSphere and vCenter Server to manage the cluster as a whole, using technologies such as VMware vMotion to test out alternatives to WebLogic, OBIEE and Oracle Database HA. The screenshot below shows the cluster setup in VMware vCenter.


We’re also big advocates of Oracle Enterprise Manager as a way of managing and monitoring a customer’s entire Oracle BI & data warehousing estate, using the BI Management Pack to manage OBIEE installations as a whole, building alerts off of OBIEE Usage Tracking data, and creating composite systems and services to monitor a DW, ETL and BI system end-to-end. We register the VMs on the VMware cluster as hosts and services in a separate EM12cR4 install and use it to monitor our own development work, and to show the various EM Management Packs to customers and prospective clients.


Something we’ve wanted to do for a while though is bring the actual VM management into Enterprise Manager as well, and to do this we’ve now also set up the Blue Medora VMware Plugin for Enterprise Manager, which connects to your VMware vCenter, ESXi hosts, virtual machines and other infrastructure components and brings them into EM as monitorable and manageable components. The plugin connects to vCenter and the various ESXi hosts and gives you the ability to list out the VMs, hosts, clusters and so on, monitor them for resource usage and set up EM alerts as you’d do with other EM targets, and perform vCenter actions such as stopping, starting and cloning VMs.


What’s particularly useful with such a virtualised environment though is being able to include the VM hypervisors, VM hosts and other VMware infrastructure in the composite systems we define; for example, with a CDH Hadoop cluster that authenticates via LDAP and Kerberos, is used by OBIEE and ODI, and is hosted on two VMware ESXi hosts that form part of a vSphere cluster, we can get an overall picture of the system health that doesn’t stop at the host level.


If your organization is using VMware to host your Oracle development, test or production environments and you’re interested in how Enterprise Manager can help you monitor and manage the whole estate, including the use of Blue Medora’s VMware EM Plugin, drop me a line and I’d be happy to take you through what’s involved.

Categories: BI & Warehousing

Enable Your Dashboard Designers to Concentrate on User Experience Rather Than Syntax (or How to Add a Treemap in Two Lines)

Mon, 2015-01-19 08:07

JavaScript is a powerful tool that can be used to add functionality to OBIEE dashboards.  However, for many whose wheelhouses are more naturally aligned with Stephen Few rather than John Resig, adding JavaScript to a dashboard can be intimidating. To facilitate this process, steps can be taken to centralize and simplify the invocation of this code.  In this post, I will demonstrate how to create your very own library of custom HTML tags.  These tags will empower anyone to add 3rd party visualizations from libraries like D3 without a lick of JavaScript experience.

What is a “Custom Tag”?

Most standard HTML tags provide very simple behaviors.  Complex behaviors have typically  been reserved for JavaScript.  While, for the most part, this is still the case, custom tags can be used to provide a more intuitive interface to the JavaScript.  The term “custom tag” library refers to a developer defined library of HTML tags that are not natively supported by the HTML standard, but are instead included at run-time.  For example, one might implement  a <RM-MODAL> tag to produce a button that opens a modal dialog.  Behind the scenes, JavaScript will be calling the shots, but the code in your narrative view or dashboard text section will look like plain old HTML tags.
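As a sketch of the idea — the <RM-MODAL> tag, its attributes and the helper below are hypothetical, not part of any standard or shipped library — the JavaScript behind such a tag simply reads its configuration from the tag’s attributes:

```javascript
// Hypothetical helper for an <rm-modal> custom tag: turn the tag's
// attributes into a configuration object, applying defaults.
// "el" can be any object exposing getAttribute(), such as a DOM Element.
function parseModalConfig(el) {
    return {
        title: el.getAttribute("title") || "Dialog",           // default title
        width: parseInt(el.getAttribute("width") || "400", 10) // pixels
    };
}
```

The dashboard designer then only ever writes something like <rm-modal title="Help" width="600"/>; the library resolves defaults and types behind the scenes.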

Developing a JavaScript Library

The first step when incorporating an external library onto your dashboard is to load it.  To do so, it’s often necessary to add JavaScript libraries and css files to the <head> of a document to ensure they have been loaded prior to being called.  However, in OBIEE we don’t have direct access to the <head> from the Dashboard editor.  By accessing the DOM, we can create style and script src objects on the fly and append them to the <head>.  The code below appends external scripts to the document’s <head> section.

Figure 1. dashboard.js

function loadExtFiles(srcname, srctype) {
   var src;
   if (srctype == "js") {
      src = document.createElement("script");
      src.setAttribute("type", "text/javascript");
      src.setAttribute("src", srcname);
   } else if (srctype == "css") {
      src = document.createElement("link");
      src.setAttribute("rel", "stylesheet");
      src.setAttribute("type", "text/css");
      src.setAttribute("href", srcname);
   }
   // Only append if one of the branches above created an element
   if (typeof src !== "undefined") {
      parent.document.getElementsByTagName("head")[0].appendChild(src);
   }
}

window.onload = function() {
   loadExtFiles("/rm/js/d3.v3.min.js", "js");
   loadExtFiles("/rm/css/visualizations.css", "css");
   loadExtFiles("/rm/js/visualizations.js", "js");
}

In addition to including the D3 library, we have included a CSS file and a JavaScript file, named visualizations.css and visualizations.js respectively.  The visualizations.css file contains the default formatting for the visualizations and visualizations.js is our library of functions that collect parameters and render visualizations.

The D3 gallery provides a plethora of useful and not-so-useful examples to fulfill all your visualization needs.  If you have a background in programming, these examples are simple enough to customize.  If not, this is a tall order.  Typically the process would go something like this:

  1. Determine how the data is currently being sourced.
  2. Rewrite that section of the code to accept data in a format that can be produced by OBIEE.  Often this requires a bit more effort in the refactoring as many of the examples are sourced from CSV files or JSON.  This step will typically involve writing code to create objects and add those objects to an array or some other container.  You will then have to determine how you are passing this data container to the D3 code.  Will the D3 code be rewritten as a function that takes in the array as a parameter? Will the array be scoped in a way that the D3 code can simply reference it?
  3. Identify how configurations like colors, sizing, etc. are set and determine how to customize them as per your requirements.
  4. Determine what elements need to be added to the narrative view to render the visualization.

If you are writing your own visualization from scratch, these same steps are applied in the design phase.  Either way, the JavaScript code that results from performing these steps should not be the interface exposed to a dashboard designer.  The interface should be as simple and understandable as possible to promote re-usability and avoid implementation syntax errors.  That’s where custom HTML tags come in.

Wait…  Why use tags rather than exposing the JavaScript function calls?

Using custom tags allows for a more intuitive implementation than JavaScript functions.  Simple JavaScript functions do not support named arguments, which means JavaScript depends purely on order to differentiate arguments.

<script>renderTreemap("@1", "@2", @3, null, null, "Y");</script>

In the example above, anyone viewing this call without being familiar with the function definition would have a hard time deciphering the parameters.  By using a tag library to invoke the function, the parameters are much clearer.  Parameters that are not applicable for the current invocation are simply left out.

<rm-treemap name="@1" grouping="@2" measure="@3" showValues="Y"/>

That being said, you should still familiarize yourself with the correct usage prior to using them.

Now some of you may be saying that named arguments can be done using object literals, but the whole point of this exercise is to reduce complexity for front end designers, so I wouldn’t recommend this approach within the context of OBIEE.

What do these tags look like and how do they pass the data to the JavaScript?

For this example, we will be providing a Treemap visualization.  As might be expected, the example provided by the link is sourced from a JSON object.  For our use, we will have to rewrite that code to source the data from the attributes in our custom HTML tags.  The D3 code is expecting a hierarchical object made up of leaf node objects contained within grouping objects.  The leaf node objects consist of a “name” field and a “size” field.  The grouping object consists of a “name” field and a “children” field that contains an array of leaf node objects.  By default, the size values, or measures, are not displayed and are only used to size the nodes.  Additionally, the dimensions of the treemap are hard-coded values.  Inevitably users will want to change these settings, so for each of the settings we want to expose for configuration we will provide attribute fields on the custom tag we build. Ultimately, that is the purpose of this design pattern.

  1. Name your custom tag
  2. Identify all your inputs
  3. Create a tag attribute for each input
  4. Within a JavaScript library, extract and organize the values
  5. Pass those values to D3

For this example we will configure behaviours for a tag called <rm-treemap>.  Note: It is a good practice to add a dash to your custom tags to ensure they will not match an existing HTML tag.  This tag will support the following attributes:

  • name – Name of the dimension being measured
  • measure – Used to size the node boxes
  • grouping – Used to determine the color of the node boxes
  • width – Width in pixels
  • height – Height in pixels
  • showValues – Y/N

It will be implemented within a narrative view like so:

<rm-treemap name="@1" grouping="@2" measure="@3" width="700" height="500" showValues="Y"/>

In order to make this tag useful, we need to bind behaviors to it that are controlled by the tag attributes.  To extract the attribute values from <rm-treemap>, the JavaScript code in visualizations.js will use two methods from the Web API, document.getElementsByTagName and Element.getAttribute.

In Fig 2, lines 8-11 use these methods to identify the first <rm-treemap> tag and extract the values for width, height and showValues.  It was necessary to specify a single element, in this case the first one, as getElementsByTagName returns an array of all matching elements within the HTML document.  There will most likely be multiple matches, as the OBIEE narrative field will loop through the query results and produce an <rm-treemap> tag for each row.

In Fig 2, lines 14-41, the attributes for name, measure and grouping are extracted and bound to either leaf node objects or grouping objects.  Additionally, lines 11 and 49-50 configure the displayed values and the size of the treemap.  The original code was further modified on line 62 to use the first <rm-treemap> element to display the output.

Finally, lines 99-101 ensure that this code is only executed when an <rm-treemap> tag is detected on the page.  The last step before deployment is documentation.  If you are going to go through all the trouble of building a library of custom tags, you need to set aside the time to document their usage.  Otherwise, regardless of how much you have simplified the usage, no one will be able to use them.

Figure 2. visualizations.js

01 var renderTreemap = function () {
02    // Outer Container (Tree)
03    var input = {};
04 = "TreeMap";
05    input.children = [];
07    //Collect parameters from first element
08    var treeProps = document.getElementsByTagName("rm-treemap")[0];
09    canvasWidth = treeProps.getAttribute("width") ? treeProps.getAttribute("width") : 960;
10    canvasHeight = treeProps.getAttribute("height") ? treeProps.getAttribute("height") : 500;
11    showValues = (treeProps.getAttribute("showValues") || "N").toUpperCase();
13    // Populate collection of data objects with parameters
14    var mapping = document.getElementsByTagName("rm-treemap");
15    for (var i = 0; i < mapping.length; i++) {
16        var el = mapping[i];
17        var box = {};
18        var found = false;
20 = (showValues == "Y") ? el.getAttribute("name") +
21                                        "<br> " +
22                                        el.getAttribute("measure") : el.getAttribute("name");
23        box.size = el.getAttribute("measure");
24        curGroup = el.getAttribute("grouping");
26        // Add individual items to groups
27        for (var j = 0; j < input.children.length; j++) {
28            if (input.children[j].name === curGroup) {
29                input.children[j].children.push(box);
30                found = true;
31            }
32        }
34        if (!found) {
35            var grouping = {};
36   = curGroup;
37            grouping.children = [];
38            grouping.children.push(box);
39            input.children.push(grouping);
40        }
41    }
43    var margin = {
44        top: 10,
45        right: 10,
46        bottom: 10,
47        left: 10
48    },
49    width = canvasWidth - margin.left - margin.right,
50    height = canvasHeight - - margin.bottom;
52    // Begin D3 visualization
53    var color = d3.scale.category20c();
55    var treemap = d3.layout.treemap()
56        .size([width, height])
57        .sticky(true)
58        .value(function (d) {
59            return d.size;
60        });
62    var div ="rm-treemap").append("div")
63        .style("position", "relative")
64        .style("width", (width + margin.left + margin.right) + "px")
65        .style("height", (height + + margin.bottom) + "px")
66        .style("left", margin.left + "px")
67        .style("top", + "px");
69    var node = div.datum(input).selectAll(".treeMapNode")
70        .data(treemap.nodes)
71        .enter().append("div")
72        .attr("class", "treeMapNode")
73        .call(position)
74        .style("background", function (d) {
75            return d.children ? color( : null;
76        })
77        .html(function (d) {
78            return d.children ? null :;
79        });
81    function position() {
82"left", function (d) {
83            return d.x + "px";
84        })
85        .style("top", function (d) {
86            return d.y + "px";
87        })
88        .style("width", function (d) {
89            return Math.max(0, d.dx - 1) + "px";
90        })
91        .style("height", function (d) {
92            return Math.max(0, d.dy - 1) + "px";
93        });
94    }
95    //End D3 visualization
96 }
98 // Invoke visualization code only if an rm-treemap tag exists
99 var doTreemap = document.getElementsByTagName("rm-treemap");
100 if (doTreemap.length > 0) {
101    renderTreemap();
102 }


Figure 3. visualizations.css

01  .treeMapNode {
02     border: solid 1px white;
03     border-radius: 5px;
04     font: 10px sans-serif;
05     line-height: 12px;
06     overflow: hidden;
07     position: absolute;
08     text-indent: 2px;
09  }
Putting it all together

The first step to implementing this code is to make it accessible.  To do this, you will need to deploy your code to the WebLogic server.  Many years ago, Venkatakrishnan Janakiraman detailed how to deploy code to WebLogic in his blog about skinning.  For this application the process still applies, although you don’t need to be concerned with the bits about modifying the instanceconfig.xml or skinning.

Once the code has been deployed to the server, there are literally only two lines of code required to implement this visualization.  First the libraries need to be included.  This is done by sourcing in the dashboard.js file.  This can be done within the Narrative view’s prefix field, but I have chosen to add it to a text section on the dashboard.  This allows multiple analyses to use the libraries without duplicating the load process in multiple places.


The text section should be configured as follows. (Note: The path to dashboard.js is relative to the root path specified in your deployment.)
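For example — assuming the files were deployed under the /rm root used earlier in this post — the text section (with its HTML markup option enabled) would hold a single line along these lines:

```html
<script type="text/javascript" src="/rm/js/dashboard.js"></script>
```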



From the Narrative View, add the <rm-treemap> tag to the Narrative field and populate the attributes with the appropriate data bind variables and your desired settings.



This should result in the following analysis.


In summary:

  1. Deploy the dashboard.js, visualizations.js and visualizations.css files to WebLogic
  2. From a dashboard text section, source in dashboard.js, which will in turn include visualizations.js and visualizations.css
  3. Add the <rm-treemap> tag to the Narrative field of a Narrative view.

As you can see, implementing custom HTML tags to serve as the interface for a D3 visualization will save your dashboard designers from having to sift through dozens if not hundreds of lines of confusing code.  This will reduce implementation errors, as the syntax is much simpler than JavaScript, and will promote conformity, as all visualizations will be sourced from a common library.  Hopefully this post was informative and will inspire you to consider this pattern, or a similarly purposed one, to make your code easier to implement.

Categories: BI & Warehousing

Why and How to use Oracle Metadata Management 12c. Part 1: Getting Started

Thu, 2015-01-15 07:34

At OOW 2014, Oracle announced the new Oracle Metadata Management (OMM) solution, and later, in the middle of October, released its first version.

At the end of November 2014, the second version of OMM was released, with new features and some bugs fixed.

But first things first: what is Oracle Metadata Management, and why would we want to use it?

One of the biggest problems that we face today is the proliferation of different systems, data sources, and solutions for BI, ETL and so on within the same company. It is quite difficult, not only for end users but also for technical people (from SysAdmins, Data Scientists and Data Stewards to Developers), to track which data is used by which applications. In some cases it is almost impossible to perform an impact analysis if someone wants to change a table, or if the way a sales measure is calculated needs to change. The more systems involved, the bigger the problem.

Oracle Metadata Management (OMM) provides a solution to this problem. It is a complete metadata management platform that can reverse engineer (harvest) and catalog metadata from any source: relational, big data, ETL, BI, data modelling, etc.

OMM allows us to perform interactive searching, data lineage, impact analysis, semantic definition and semantic usage analysis within the catalog. The really important thing is that metadata from different providers (Oracle and/or third-party) can be related (stitched) together, so you have the complete path of the data from source to report, or vice versa. In addition, it manages versioning and comparison of metadata models.

The Oracle Metadata Management solution offers two products: OEMM (Oracle Enterprise Metadata Management) and OMM for OBI (Oracle Metadata Management for Oracle Business Intelligence). With the first one we can use metadata providers from Oracle and third-party technologies; OMM for OBI is limited to metadata from databases, OBIEE, ODI and DAC.

We will see in this series of posts how to use each of these options, the differences between them, and which will be the best option depending on your environment.

In this first post we will focus on the installation process and the requirements for it.

Minimum Requirements for a small test environment 

It is important to note, as is also well explained in the Readme document, that the following are the minimum requirements for a tutorial or a small business case, not for a larger system.


Any of these browsers, or newer versions of them, with at least the Adobe Flash v8 plugin can be used: Microsoft Internet Explorer (IE) v10, Mozilla Firefox v30 or newer, Google Chrome v30, Apple Safari v6.


2 GHz or higher quad core processor

4 GB RAM (8 GB for a 64-bit OS using a 64-bit Web Application Server)

10 GB of disk space (all storage is primarily in the database server)

Operating System

Microsoft Windows 2008 Server, Windows 2012 Server, Windows 7, Windows 8, or Windows 8.1. Be sure that you have full Administrator privileges when running the installer and that the Microsoft .NET Framework 3.5 or higher is installed.

Other operating systems require manual install/setup, so are not supported by this version.

Web Application Server

The installer comes with Apache Tomcat as the Web Application Server and Oracle JRE 6 as the Java Runtime Environment. Other web application servers (including Oracle WebLogic) require manual install/setup, and are not supported by this version.

Database Server

For the Database Server you can only use an Oracle Database, from 10gR2 to 12c (64-bit), as the repository for OMM. You can create a new instance or reuse your existing Oracle database server, but you need admin privileges in the database.

A very important observation is that the character set MUST be AL32UTF8 (UTF8). This is because Oracle interMedia search can only index columns of type VARCHAR or CLOB (not the national variants NVARCHAR and NCLOB respectively). Otherwise you will receive this error message when you run OMM for the first time:


To solve this, you can create a new instance of the database, or, if your database already has data, there are a couple of notes in My Oracle Support (260192.1 and 788156.1) on changing any character set to AL32UTF8.

In addition, the CTXSYS user must exist in the database. In case it doesn’t exist, the script to create it and grant its privileges can be found in <ORACLE_HOME>/ctx/admin/catctx.sql.

Preparing to install

Step 1 – Download the software. You can download the software from the OTN site.

Step 2 – Create a Database Schema as Repository. Before starting the installation, a database schema needs to be created as a repository for OMM, to keep all of its objects like models, configurations, etc. (we will see all of these objects in the next posts).

For that reason, create a user in the database:

create user MIR identified by <password> quota unlimited on users;

And give it the following grants:

grant create session to MIR;
grant create procedure to MIR;
grant create sequence to MIR;
grant create table to MIR;
grant create trigger to MIR;
grant create type to MIR;
grant create view to MIR;

We also need to grant the new user execute privileges on a package from CTXSYS and another one from SYS:

grant execute on CTXSYS.CTX_DDL to MIR;
grant execute on SYS.DBMS_LOCK to MIR;

If you prefer (and it could also be a more accurate solution) you can create specific tablespaces (a user tablespace and a temp tablespace) for that user. I asked David Allan, who is always very generous with his time and knowledge, whether this schema will be part of the RCU in future releases, but there is no plan to incorporate the MIR schema into it.

Installation and Post-Install tasks

Step 3 – Install the software. We can now run the installation. The downloaded zip file contains an exe file; double-click on it to start the installation.

In the first screen, select the type of product that you want to install: OEMM or OMM for OBI. We choose Oracle Enterprise Metadata Management and press Next.


In the next screen, you have access to the Readme document and release notes by pressing the View Readme button. After the installation you can find them in the OMM_Home/Documentation folder.



The next screen shows you the destination location, which you can change if you want. Keep the port numbers suggested on the screen that follows.


The last screen of the installation asks you to restart the computer in order to use the product.


Step 4 – Start the OMM Server as a service. After you restart the computer, you need to configure the OMM Server as a service and start it. You can do this through the option shown in the Start menu, pressing the Start button, or by going directly to the Windows services screen, right-clicking the OMM service and starting it.


Step 5 – Initialize OEMM. Run OEMM for the first time. We have everything ready to start using Oracle Metadata Management. Go to the URL http://localhost:11580/MM, execute the shortcut that was created on your desktop after the installation, or use the Windows Start Menu.

We need to enter the connection details using the schema that we created in the database. Enter MIR as the Database User Id, its password and the database URL, and then press the Test Connection button. After you receive the Successful message, press the Save button to run the initialization process, in which OEMM creates the objects in the database schema to manage the repository.


This process takes a few minutes, after which you get confirmation that the initialization was successful.


Step 6 – Start OEMM. Close the browser tab and open the OEMM URL (http://localhost:11580/MM) again. A login page appears; the default user and password are Administrator/Administrator.


This is the main page of the OEMM where we are going to harvest (reverse-engineer) the metadata from different providers in the next posts.


In case you want to change the password of the Administrator user, go to Tools > Administration at the top right of the page. Select the Administrator user and its details will appear below.


If you prefer to create another user with Administration privileges, just press the Add User button (plus icon) in the Administration page and enter the details for the new user:


We are using the Native LDAP authentication approach for this demo, but OEMM can also use an External LDAP for authentication.

You can access the product documentation through the Help option at the top right of the page. The Contents tab lists all the topics (Harvesting, Administration, etc.) separated by folder, with each folder containing all the details about its specific topic.

Installation of OMM for OBI

There are no differences in the installation process for OEMM and OMM for OBI. Just be sure to select the one that you want in the first screen of the installation. This is the page to login to the OMM for OBI.



In the next post, we will see how the harvest (metadata import) process works with different metadata providers such as OBIEE, ODI and others.

Categories: BI & Warehousing

Concurrent RPD Development in OBIEE

Wed, 2015-01-14 07:06

OBIEE is a well established product, having been around in various incarnations for well over a decade. The latest version, OBIEE 11g, was released 3.5 years ago, and there are mutterings of OBIEE 12c already. In all of this time however, one thing it has never quite nailed is the ability for multiple developers to work with the core metadata model – the repository, known as the RPD – concurrently and in isolation. Without this, development is doomed to be serialised – with the associated bottlenecks and inability to scale in line with the number of developers available.

My former colleague Stewart Bryson wrote a series of posts back in 2013 in which he outlines the criteria for a successful OBIEE SDLC (Software Development LifeCycle) method. The key points were:

  • There should be a source control tool (a.k.a version control system, VCS) that enables us to store all artefacts of the BI environment, including RPD, Presentation Catalog, etc etc. From here we can tag snapshots of the environment at a given point as being ready for release, and as markers for rollback if we take a wrong turn during development.
  • Developers should be able to do concurrent development in isolation.
    • To do this, source control is mandatory in order to enable branch-based development, also known as feature-driven development, which is a central tenet of an Agile method.

Oracle’s only answer to the SDLC question for OBIEE has always been MUDE. But MUDE falls short in several respects:

  • It only manages the RPD – there is no handling of the Presentation Catalog etc
  • It does not natively integrate with any source control
  • It puts the onus of conflict resolution on the developer rather than the “source master” who is better placed to decide the outcome.

Whilst it wasn’t great, it wasn’t bad, and MUDE was all we had. Either that, or manual integration into source control (1, 2) tools, which was clunky to say the least. The RPD remained a single object that could not be merged or managed except through the Administration Tool itself, so any kind of automatic merge strategies that the rest of the software world were adopting with source control tools were inapplicable to OBIEE. The merge would always require the manual launching of the Administration Tool, figuring out the merge candidates, before slowly dying in despair at having to repeat such a tortuous and error-prone process on a regular basis…

Then back in early 2012 Oracle introduced a new storage format for the RPD. Instead of storing it as a single binary file, closed to prying eyes, it was instead burst into a set of individual files in MDS XML format.

For example, one Logical Table was now one XML file on disk, made up of entities such as LogicalColumn, ExprText, LogicalKey and so on:

It even came with a set of configuration screens for integration with source control. It looked like the answer to all our SDLC prayers – now us OBIEE developers could truly join in with the big boys at their game. The reasoning went something like:

  1. An RPD stored in MDS XML is no longer binary
  2. git can merge code that is plain text from multiple branches
  3. Let’s merge MDS XML with git!

But how viable is MDS XML as a storage format for the RPD used in conjunction with a source control tool such as git? As we will see, it comes down to the Good, the Bad, and the Ugly…

The Good

As described here, concurrent and unrelated developments on an RPD in MDS XML format can be merged successfully by a source control tool such as git. Each logical object is a file, so git just munges (that’s the technical term) the files modified in each branch together to come up with a resulting MDS XML structure with the changes from each development in it.

The Bad

This is where the wheels start to come off. See, our automagic merging fairy dust is based on the idea that individually changed files can be spliced together, and that since MDS XML is not binary, we can trust a source control tool such as git to also work well with changes within the files themselves too.

Unfortunately this is a fallacy, and by using MDS XML we expose ourselves to greater complications than we would if we just stuck to a simple binary RPD merged through the OBIEE toolset. The problem is that whilst MDS XML is not binary, it is not unstructured either. It is structured, and it has application logic within it (the mdsid, of which more below).

Within the MDS XML structure, individual first-class objects such as Logical Tables are individual files, and structured within them in the XML are child-objects such as Logical Columns:

Source control tools such as git cannot parse it, and therefore do not understand what is a real conflict versus an unrelated change within the same object. If you stop and think for a moment (or longer) quite what would be involved in accurately parsing XML (let alone MDS XML), you’ll realise that you basically need to reverse-engineer the Administration Tool to come up with an accurate engine.

We kind of get away with merging when the file differences are within an element in the XML itself. For example, the expression for a logical column is changed in two branches, causing clashing values within ExprText and ExprTextDesc. When this happens git will throw a conflict and we can easily resolve it, because the difference is within the element(s) themselves:
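Paraphrasing what git leaves in the file (the expression text shown here is hypothetical, not taken from a real RPD), the conflicted region looks something like this:

```xml
<ExprText>
<<<<<<< HEAD
"Sales"."AMOUNT_SOLD"
=======
"Sales"."AMOUNT_SOLD" * "Sales"."EXCHANGE_RATE"
>>>>>>> feature/RM-2
</ExprText>
```

Pick the correct expression, delete the conflict markers, and the file is valid again.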

Easy enough, right?

But take a similarly “simple” merge conflict, where two independent developers add or modify different columns within the same Logical Table, and we see what a problem there is when we try to merge it back together relying on source control alone.

Obvious to a human, and obvious to the Administration Tool is that these two new columns are unrelated and can be merged into a single Logical Table without problem. In a paraphrased version of MDS XML the two versions of the file look something like this, and the merge resolution is obvious:

But a source control tool such as git looks at the MDS XML as a plaintext file, not understanding the concept of an XML tree and sibling nodes, and throws its toys out of the pram with a big scary merge conflict:

Now the developer has to roll up his or her sleeves and try to reconcile two XML files – with no GUI to support or validate the change made except loading it back into the Administration Tool each time.

So if we want to use MDS XML as the basis for merging, we need to restrict our concurrent developments to completely independent objects. But, that kind of hampers the ideal of more rapid delivery through an Agile method if we’re imposing rules and restrictions like this.

The Ugly

This is where is gets a bit grim. Above we saw that MDS XML can cause unnecessary (and painful) merge conflicts. But what about if two developers inadvertently create the same object concurrently? The behaviour we’d expect to see is a single resulting object. But what we actually get is both versions of the object, and a dodgy RPD. Uh Oh.

Here are the two concurrently developed RPDs, produced in separate branches isolated from each other:

And here’s what happens when you leave it to git to merge the MDS XML:

The duplicated objects now cannot be edited in the Administration Tool in the resulting merged RPD – any attempt to save them throws the above error.

Why does it do this? Because the MDS XML files are named after a globally unique identifier known as the mdsid, and not their corresponding RPD qualified name. And because the mdsid is unique across developments, two concurrent creations of the same object end up with different mdsid values, and thus different filenames.

Two files from separate branches with different names are going to be seen by source control as being unrelated, and so both are brought through in the resulting merge.

As with the unnecessary merge conflict above, we could define a process around same-object creation, or add in a manual equalise step. The real issue here is that the duplicates can arise without us being aware, because there is no conflict seen by the source control tool. It’s not like merging an un-equalised repository in the Administration Tool, where we’d get #1 suffixes on the duplicate objects so that at least (a) we spot the duplication and (b) the repository remains valid and the duplicate objects available to edit.

MDS XML Repository opening times

Whether a development strategy based on MDS XML is for you or not, another issue to be aware of is that for anything beyond a medium-sized RPD the opening times of an MDS XML repository are considerable. As in, a minute from the binary RPD, and 20 minutes from MDS XML. And to be fair, after 20 minutes I gave up, on the basis that no sane developer would write off that amount of their day simply waiting for the repository to open before they can even do any work on it. This rules out working with any big repositories, such as that from BI Apps, in MDS XML format.

So is MDS XML viable as a Repository storage format?

MDS XML does have two redeeming features:

  1. It reduces the size of your source control repository, because on each commit you will be storing just a delta of the overall repository change, rather than the whole binary RPD each time.
  2. For tracking granular development progress and changes you can identify what was modified through the source control tool alone – because the new & modified objects will be shown as changes:

But the above screenshots both give a hint of the trouble in store. The mdsid unique identifier is used not only in filenames – causing object duplication and strange RPD behaviour – but also within the MDS XML itself, referencing other files and objects. This means that as an RPD developer, or RPD source control overseer, you need to be confident that each time you perform a merge of branches you are correctly putting Humpty Dumpty back together in a valid manner.

If you want to use MDS XML with source control you need to view it as part of a larger solution, involving clear process and almost certainly a hybrid approach with the binary RPD still playing a part — and whatever you do, the Administration Tool within short reach. You need to be aware of the issues detailed above, decide on a process that will avoid them, and make sure you have dedicated resource that understands how it all fits together.

If not MDS XML, then what?…

Source control (e.g. git) is mandatory for any kind of SDLC, concurrent development included. But instead of storing the RPD in MDS XML, we store it as a binary RPD.

Wait wait wait, don’t go yet! … it gets better

By following the git-flow method, which dictates how feature-driven development is done in source control (git), we can write a simple script that determines when merging branches what the candidates are for an OBIEE three-way RPD merge.

In this simple example we have two concurrent developments – coded “RM–1” and “RM–2”. First off, we create two branches which take the code from our “mainline”. Development is done on the two separate features in each branch independently, and committed frequently per good source control practice. The circles represent commit points:

The first feature to be completed is “RM–1”, so it is merged back into “develop”, the mainline. Because nothing has changed in develop since RM–1 was created from it, the binary RPD file and all other artefacts can simply ‘overwrite’ what is there in develop:

Now at this point we could take “develop” and start its deployment into System Test etc, but the second feature we were working on, RM–2, is also tested and ready to go. Here comes the fancy bit! Git recognises that both RM–1 and RM–2 have made changes to the binary RPD, and as a binary RPD git cannot try to merge it. But now, instead of just collapsing in a heap and leaving it for the user to figure out, our script makes use of git and the git-flow method we have followed to work out the merge candidates for the OBIEE Administration Tool:

Even better, it invokes the Administration Tool (which can be run from the command line, or alternatively use command line tools comparerpd/patchrpd) to automatically perform the merge. If the merge is successful, it goes ahead with the commit in git of the merge into the “develop” branch. The developer has not had to do any kind of interaction to complete the merge and commit.

If the merge is not a slam-dunk, then we can launch the Administration Tool and graphically figure out the correct resolution – but using the already-identified merge candidates in order to shorten the process.
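As a rough illustration, such a script could follow the sketch below. The file names, RPD password and commit message are hypothetical, and the comparerpd/patchrpd flags should be checked against the utilities’ help output for your OBIEE version before use:

```shell
#!/bin/bash
# Three-way RPD merge driven by the git-flow branch structure.
# Merge candidates identified from git history:
#   original.rpd - common ancestor (the point RM-2 branched from develop)
#   current.rpd  - develop's RPD (already contains RM-1)
#   modified.rpd - the RM-2 feature branch's RPD
PASS=Admin123   # hypothetical RPD password

# 1. Produce an XML patch describing what RM-2 changed relative to the ancestor
comparerpd -P "$PASS" -C modified.rpd -W "$PASS" -G original.rpd -D rm2_patch.xml

# 2. Apply that patch on top of develop's RPD to create the merged repository
patchrpd -P "$PASS" -C current.rpd -Q "$PASS" -G original.rpd \
         -I rm2_patch.xml -O merged.rpd

# 3. If the patch applied cleanly, commit the result into the develop branch
if [ -f merged.rpd ]; then
    cp merged.rpd repository.rpd
    git add repository.rpd
    git commit -m "Merge feature/RM-2 RPD into develop (three-way merge)"
fi
```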

This is not perfect, but there is no perfect solution. It is the closest thing that there is to perfection though, because it will handle merges of:

  • Unique objects
  • Same objects, different modifications (c.f. two new columns on same table example above)
  • Duplicate objects – by equalisation

There is no single right answer here, nor are any of the options overly appealing.

If you want to work with OBIEE in an Agile method, using feature-driven development, you will have to adopt and learn specific processes for working with OBIEE. The decision you have to make is on how you store the RPD (binary or multiple MDS XML files, or maybe both) and how you handle merging it (git vs Administration Tool).

My personal view is that taking advantage of git-flow logic, combined with the OBIEE toolset to perform three-way merges, is sufficiently practical to warrant leaving the RPD in binary format. The MDS XML format is a lovely idea but there are too few safeguards against dodgy/corrupt RPD (and too many unnecessary merge conflicts) for me to see it as a viable option.

Whatever option you go for, make sure you are using regression testing to test the RPD after you merge changes together, and ideally automate the testing too. Here at Rittman Mead we’ve written our own suite of tools that do just this – get in touch to find out more.

Categories: BI & Warehousing

Using Oracle Big Data SQL to Add Dimensions and Attributes to Hadoop Reporting

Sun, 2015-01-11 06:18

In a previous post I looked at using Oracle’s new Big Data SQL product with ODI12c, where I used Big Data SQL to expose two Hive tables as Oracle external tables, and then join them using the BETWEEN operator, something that’s not possible with regular HiveQL. In this post I’m going to look at using Oracle Big Data SQL with OBIEE11g, to enable reporting against Hive tables without the need to use Hive ODBC drivers and to bring in reference data without having to stage it in Hive tables in the Hadoop cluster.

In this example I’ve got some webserver log activity from the Rittman Mead Blog stored as a Hive table in Hadoop, which in its raw form only has a limited amount of descriptive data and wouldn’t be all that useful to users reporting against it using OBIEE. Here’s the contents of the Hive table as displayed via SQL*Developer:


When I bring this table into OBIEE, I really want to add details of the country that each user is visiting from, and also details of the category that each post referenced in the webserver logs belongs to. Tables for these reference data items can be found in an accompanying Oracle database, like this:


The idea then is to create an ORACLE_HIVE external table over the Hive table containing the log activity, and then import all of these tables into the OBIEE RPD as regular Oracle tables. Back in SQL*Developer, connected to the database that has the link set up to the Hadoop cluster via Big Data SQL, I create the external table using the new ORACLE_HIVE external table access driver:
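The DDL follows the pattern below. The column list and Hive table name here are illustrative, following the documented ORACLE_HIVE syntax rather than reproducing the exact table in the screenshot:

```sql
CREATE TABLE bda_output.access_per_post_exttab (
  host          VARCHAR2(100),
  request_date  VARCHAR2(100),
  post_id       NUMBER,
  status        NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_HIVE
  DEFAULT DIRECTORY default_dir
  ACCESS PARAMETERS (
    com.oracle.bigdata.tablename = default.access_per_post
  )
)
REJECT LIMIT UNLIMITED;
```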


And now with the Hive table exposed as the Oracle external table BDA_OUTPUT.ACCESS_PER_POST_EXTTAB, I can import all four tables into the OBIEE repository.


I can now create joins across the two Oracle schemas and four tables:


and then create a business model and presentation model to define a simple star schema against the combined dataset:


Once the RPD is saved and made available to the Presentation layer, I can now go and create some simple reports against the Hive and Oracle tables, with the Big Data SQL feature retrieving the Hive data using SmartScan technology running directly on the Hadoop cluster – bypassing MapReduce, filtering and projecting on the cluster itself, and returning just the results dataset back to the Exadata server running the Oracle SQL query.


In the previous ODI12c and Big Data SQL posting, I used the Big Data SQL feature to enable a join between the Hive table and a table containing IP address range lookups using the BETWEEN operator, so that I could return the country name for each visitor to the website. I can do a similar thing with OBIEE, by first recreating the main incoming fact table source as a view over the ORACLE_HIVE external table and adding an IP integer calculation that I can then use for the join to the IP range lookup table (and also take the opportunity to convert the log-format date string into a proper Oracle DATE datatype):
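A sketch of that view is below. The source column names are assumptions, but it shows the two transformations described: the dotted-quad IP string converted to a single integer (so it can join to the range lookup table with BETWEEN), and the Apache log date string converted to a proper DATE:

```sql
CREATE OR REPLACE VIEW bda_output.access_per_post_v AS
SELECT post_id,
       status,
       -- convert e.g. '192.168.1.10' into one integer for range lookups
       TO_NUMBER(REGEXP_SUBSTR(host, '\d+', 1, 1)) * 16777216 +
       TO_NUMBER(REGEXP_SUBSTR(host, '\d+', 1, 2)) * 65536 +
       TO_NUMBER(REGEXP_SUBSTR(host, '\d+', 1, 3)) * 256 +
       TO_NUMBER(REGEXP_SUBSTR(host, '\d+', 1, 4))       AS ip_integer,
       -- parse the common log format timestamp, e.g. '10/Oct/2014:13:55:36'
       TO_DATE(SUBSTR(request_date, 1, 20),
               'DD/Mon/YYYY:HH24:MI:SS')                  AS access_date
FROM   bda_output.access_per_post_exttab;
```

The BETWEEN join can then be expressed as ip_integer BETWEEN ip_from AND ip_to against the lookup table.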


and then using that to join to a new table I’ve imported from the BLOG_REFDATA Oracle schema that contains the IP range lookups:


Now I can add country as a dimension, and create reports that break down site visits by country of access.


Similarly, I can break the date column in the view over the Hive external table out into its own logical dimension table, and then create some reports to show site access over time.


and with the final RPD looking like this:


If you’re interested in reading more about Oracle Big Data SQL I also covered it earlier on the blog around the launch date, with this post introducing the feature and another looking at how it extends Oracle security over your Hadoop cluster.

Categories: BI & Warehousing

Rittman Mead BI Forum 2015 Call for Papers Now Open – Closes on Jan 18th 2015

Thu, 2015-01-08 16:05

The Call for Papers for the Rittman Mead BI Forum 2015 is currently open, with abstract submissions open until January 18th 2015. As in previous years the BI Forum will run over consecutive weeks in Brighton, UK and Atlanta, GA, with the provisional dates and venues as below:

  • Brighton, UK : Hotel Seattle, Brighton, UK : May 6th – 8th 2015
  • Atlanta, GA : Renaissance Atlanta Midtown Hotel, Atlanta, USA : May 13th-15th 2015

Now in its seventh year, the Rittman Mead BI Forum is the only conference dedicated entirely to Oracle Business Intelligence, Oracle Business Analytics and the technologies and processes that support them – data warehousing, data analysis, data visualisation, big data and OLAP analysis. We’re looking for sessions around tips & techniques, project case-studies and success stories, and sessions where you’ve taken Oracle’s BI products and used them in new and innovative ways. Each year we select around eight-to-ten speakers for each event along with keynote speakers and a masterclass session, with speaker choices driven by attendee votes at the end of January, and editorial input from myself, Jon Mead, Charles Elliott and Jordan Meyer.


Last year we had a big focus on cloud, and a masterclass and several sessions on bringing Hadoop and big data to the world of OBIEE. This year we’re interested in project stories and experiences around cloud and Hadoop, and we’re keen to hear about any Oracle BI Apps 11g implementations or migrations from the earlier 7.9.x releases. Getting back to basics we’re always interested in sessions around OBIEE, Essbase and data warehouse data modelling, and we’d particularly like to encourage session abstracts on data visualization, BI project methodologies and the incorporation of unstructured, semi-structured and external (public) data sources into your BI dashboards. For an idea of the types of presentations that have been selected in the past, check out the BI Forum 2014, 2013 and 2012 homepages, or feel free to get in touch via email at

The Call for Papers entry form is here, and we’re looking for speakers for Brighton, Atlanta, or both venues if you can speak at both. All sessions this year will be 45 minutes long, and we’ll be publishing submissions and inviting potential attendees to vote on their favourite sessions towards the end of January. Other than that – have a think about abstract ideas now, and make sure you get them in by January 18th 2015, just over a week from now!

Categories: BI & Warehousing

Top 10 Rittman Mead Blog Posts from 2014

Wed, 2014-12-31 09:27

It’s the afternoon of New Year’s Eve over in the UK, so to round the year off here are the top 10 posts from 2014 on the Rittman Mead blog, based on Google Analytics stats (page views for 2014 in brackets; only articles posted in 2014 are included).

  1. Using Sqoop for Loading Oracle Data into Hadoop on the BigDataLite VM – Mark Rittman, March 22, 2014 (8466)
  2. OBIEE Dashboard prompt: at least one mandatory – Gianni Ceresa, March 17th 2014 (7683)
  3. Thoughts on Using Amazon Redshift as a Replacement for an Oracle Data Warehouse – Peter Scott, February 20th 2014 (6993)
  4. The Secret Life of Conditional Formatting in OBIEE – Gianni Ceresa, March 26th 2014 (5606)
  5. Trickle-Feeding Log Files to HDFS using Apache Flume – Mark Rittman, May 18th 2014 (5494)
  6. The State of the OBIEE11g World as of May 2014 – Mark Rittman, May 12th 2014 (4932)
  7. Date formatting in OBIEE 11g – setting the default Locale for users  – Robin Moffatt, February 12th 2014 (4840)
  8. Automated Regression Testing for OBIEE – Robin Moffatt, Jan 23rd 2014 (4040)
  9. OBIEE, Cloudera Hadoop & Hive/Impala Part 2 : Load Data into Hive Tables, Analyze using Hive & Impala – Mark Rittman, Jan 18th 2014 (3439)
  10. Introduction to Oracle BI Cloud Service : Product Overview – Mark Rittman, Sep 22nd 2014 (3190)

In all, the blog in one form or another has been going for 10 years now, and our most popular post of all time over the same period is Robin Moffatt’s “Upgrading OBIEE to” – well done Robin. To everyone else, have a Happy New Year and a prosperous 2015, and see you next year when it all starts again!

Categories: BI & Warehousing

Data Integration Tips: ODI – One Data Server with several Physical Schemas

Tue, 2014-12-30 16:25

Yes, I’m hijacking the “Data Integration Tips” series of my colleague Michael Rainey (@mRainey) and I have no shame!

This tip is intended for newcomers to the ODI world and is valid for all versions of ODI. It’s nothing new, and it has been posted by other authors on different blogs. But I see so many people struggling with it on the ODI Space on OTN that I wanted to explain it in full detail, with all the context and in my own words. So next time I can just post a link to this instead of explaining it from scratch.

The Problem

I’m loading data from one schema to another on the same Oracle database, but it’s slower than when I write a SQL insert statement manually. The bottleneck of the execution is in the steps from the LKM SQL to SQL. What should I do?

Why does it happen?

Loading Knowledge Modules (LKMs) are used to load data from one Data Server to another. An LKM usually connects to both the source and the target Data Server to execute some steps on each of them. This is required when working with different technologies or different database instances, for example. So if we define two Data Servers to connect to our two database schemas, we will need an LKM.

In this example, I will load a star schema model into the HR_DW schema, using the HR schema from the same database as a source. Let’s start with the approach using two Data Servers. Note that here we use the database schemas directly to connect to our Data Servers.

Two Data Servers connecting to the same database instance, using directly the database schema to connect.

And here are the definitions of the physical schemas :

Physical Schemas

Let’s build a simple mapping using LOCATIONS, COUNTRIES and REGIONS as sources, denormalizing them into a single flattened DIM_LOCATIONS table. We will use left outer joins to be sure we don’t miss any location, even one with no associated country or region. We will populate LOCATION_SK with a sequence and use an SCD2 IKM.

Mapping - Logical tab

If we check the Physical tab, we can see two different Execution Groups. This means the Datastores are in two different Data Servers and therefore an LKM is required. Here I used LKM SQL to SQL (Built-In), which is a quite generic one, not particularly designed for Oracle databases. Performance might be better with a technology-specific KM, like LKM Oracle to Oracle Pull (DB Link). By choosing the right KM we can leverage technology-specific concepts – here Oracle database links – which often improve performance. But still, we shouldn’t need any database link as everything lies in the same database instance.

Mapping - Physical tab


Another issue is that the temporary objects needed by the LKM and the IKM are created in the HR_DW schema. These objects are the C$_DIM_LOCATIONS table, created by the LKM to bring the data into the target Data Server, and the I$_DIM_LOCATIONS table, created by the IKM to detect whether a new row is needed or an existing row must be updated according to the SCD2 rules. Even though these objects are deleted in the clean-up steps at the end of the mapping execution, it would be better to use another schema for these temporary objects instead of the target schema, which we want to keep clean.

The Solution

If the source and target Physical Schemas are located on the same Data Server – and the technology can execute code – there is no need for an LKM. So it’s a good idea to reuse the same Data Server as much as possible for data coming from the same place. Actually, the Oracle documentation about setting up the topology recommends creating an ODI_TEMP user/schema on any RDBMS and using it to connect.

This time, let’s create only one Data Server with two Physical Schemas under it, and map them to the existing Logical Schemas. Here I will use the name ODI_STAGING instead of ODI_TEMP because I’m using the excellent ODI Getting Started virtual machine and it’s already in there.

One Data Server with two Physical Schemas under it

As you can see in the Physical Schema definitions, there is no other password provided to connect with HR or HR_DW directly. At run-time, our agent will use a single connection to ODI_STAGING and execute all the code through it, even when it needs to populate HR_DW tables. This means we need to be sure that ODI_STAGING has all the required privileges to do so.

Physical schemas

Here are the privileges I had to grant to ODI_STAGING:
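The exact list depends on your model, but following the example above the grants take roughly this shape (the sequence name is hypothetical):

```sql
-- ODI_STAGING reads the sources...
GRANT SELECT ON hr.locations TO odi_staging;
GRANT SELECT ON hr.countries TO odi_staging;
GRANT SELECT ON hr.regions   TO odi_staging;

-- ...and performs DML on the target
GRANT SELECT, INSERT, UPDATE, DELETE ON hr_dw.dim_locations TO odi_staging;

-- plus access to the sequence populating LOCATION_SK
GRANT SELECT ON hr_dw.location_sk_seq TO odi_staging;
```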



Let’s now open our mapping again and go to the Physical tab. We now have only one Execution Group and there is no LKM involved. The code generated is a simple INSERT AS SELECT (IAS) statement, selecting directly from the HR schema and loading into the HR_DW schema without any database link. Data is loaded faster and our first problem is solved.

Mapping - Physical tab without LKM

Now let’s tackle the second issue we had with temporary objects being created in HR_DW schema. If you scroll upwards to the Physical Schema definitions (or click this link, if you are lazy…) you can see that I used ODI_STAGING as Work Schema in all my Physical Schemas for that Data Server. This way, all the temporary objects are created in ODI_STAGING instead of the source or target schema. Also we are sure that we won’t have any issue with missing privileges, because our agent uses directly ODI_STAGING to connect.

So you can see it has a lot of advantages using a single Data Server when sources come from the same place. We get rid of the LKM and the schema used to connect can also be used as Work Schema so we keep the other schemas clean without any temporary objects.

The only thing you need to remember is to give the right privileges to ODI_STAGING (or ODI_TEMP) on all the objects it needs to handle. If your IKM has a step to gather statistics, you might also want to grant ANALYZE ANY. If you need to truncate a table before loading it, you have two approaches. You can grant DROP ANY TABLE to ODI_STAGING, but this might be a dangerous privilege to grant in production. A safer way is to create a stored procedure ODI_TRUNCATE in every target database schema. This procedure takes a table name as a parameter and truncates that table using an EXECUTE IMMEDIATE statement. You can then grant execute on that procedure to ODI_STAGING and edit your IKM step to execute the procedure instead of using the truncate syntax.
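A sketch of such a procedure, created in the target schema (HR_DW in this example – the DBMS_ASSERT call is my addition, to guard the dynamic SQL against injection through the parameter):

```sql
CREATE OR REPLACE PROCEDURE hr_dw.odi_truncate (p_table_name IN VARCHAR2) AS
BEGIN
  -- DBMS_ASSERT validates the identifier so the parameter cannot inject SQL
  EXECUTE IMMEDIATE 'TRUNCATE TABLE hr_dw.'
                    || DBMS_ASSERT.simple_sql_name(p_table_name);
END odi_truncate;
/

GRANT EXECUTE ON hr_dw.odi_truncate TO odi_staging;
```

The IKM truncate step then calls hr_dw.odi_truncate with the target table name instead of issuing TRUNCATE TABLE directly.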


That’s it for today. I hope this article helps some people understand the reason for that Oracle recommendation and how to implement it. Stay tuned on this blog and on Twitter (@rittmanmead, @mRainey, @markrittman, @JeromeFr, …) for more tips about Data Integration!

Categories: BI & Warehousing

Connecting OBIEE11g on Windows to a Kerberos-Secured CDH5 Hadoop Cluster using Cloudera HiveServer2 ODBC Drivers

Sun, 2014-12-28 16:55

In a few previous posts and magazine articles I’ve covered connecting OBIEE11g to a Hadoop cluster, using OBIEE and Cloudera CDH4 and CDH5 as the examples. Things get a bit complicated in that the DataDirect Apache Hive ODBC drivers that Oracle ship are only for HiveServer1 and not the HiveServer2 version that CDH4 and CDH5 use, and the Linux version of OBIEE won’t work with the Cloudera Hive ODBC drivers that you have to use to connect to Hive on CDH4/5. You can however connect OBIEE on Windows to HiveServer2 on CDH4 and CDH5 if you use the Cloudera Hive ODBC drivers for Windows, and although this isn’t supported by Oracle, in my experience it does work – albeit with the general OBIEE11g Hive restrictions and caveats detailed in the Metadata Repository Builder’s Guide, and the fact that in practice Hive is too slow to use for ad-hoc reporting.

However … most enterprise-type customers who run Hadoop on their internal networks have their clusters configured as “secured”, rather than the unsecured cluster examples that you see in most OBIEE connection examples. By default, Hadoop clusters are very trusting of incoming network and client connections and assume that whoever’s connecting is who they say they are, and HDFS and the other cluster components don’t perform any authentication themselves of incoming client connections. In addition, by default all network connections between Hadoop cluster components run in clear text and without any mutual authentication, which is great for a research cluster or PoC but not really appropriate for enterprise customers looking to use Hadoop to store and analyse customer data.

Instead, these customers configure their clusters to run in secured mode, using Kerberos authentication to secure incoming connections, encrypt network traffic and secure connections between the various services in the cluster. How this affects OBIEE though is that your Hive connections through to the cluster also need to use Kerberos authentication, and you (and the OBIEE BI Server) need to have a valid Kerberos ticket when connecting through the Hive ODBC driver. So how do we set this up, and how do we get hold of a secure Hadoop cluster using Kerberos authentication to test against? A few of our customers have asked this question recently, so I thought it’d be worth jotting down a few notes on how to set this up.

At a high-level, if you want to connect OBIEE to a secure, Kerberos-authenticated CDH cluster, there’s three main steps you need to carry out:

  1. Get hold of a Kerberos-secured CDH cluster, and establish the connection details you’ll need to use to connect to it
  2. Make sure the Kerberos server has the correct entries/principals/user details for the user you’re going to securely-connect as
  3. Configure the host environment for OBIEE to work with Kerberos authentication, and then create the connection from OBIEE to the CDH cluster using the correct Kerberos credentials for your user

In my case, I’ve got a Cloudera CDH5.3.0 cluster running in the office that’s been configured to use MIT Kerberos 5 for authentication, set up using an OEL6 VM as the KDC (Key Distribution Centre) and the cluster configured using the new Kerberos setup wizard that was introduced with CDH5.1. Using this wizard automates the creation of the various Kerberos service account and host principals in the Kerberos database, and configures each of the cluster components – YARN, Hive, HDFS and so on – to authenticate with each other using Kerberos authentication and use encrypted network connections for inter-service and inter-node communication.


Along with the secured Hadoop cluster, key bits of information and configuration data you’ll need for the OBIEE side are:

  • The krb5.conf file from the Kerberos KDC, which contains details of the Kerberos realm, URL for the KDC server, and other key connection details
  • The name of the Kerberos principal used for the Hive service name on the Hadoop cluster – typically this is “hive”; if you want to connect to Hive first using a JDBC tool such as beeline, you’ll also need the full principal name for this service, in my case “hive/”
  • The hostname (FQDN) of the node in the CDH cluster that contains the HiveServer2 RPC interface that OBIEE connects to, to run HiveQL queries
  • The port that HiveServer2 is running on – typically this is “10000” – and the Hive database name (for example, “default”)
  • The name of the Kerberos realm you’ll be connecting to – for example, MYCOMPANY.COM or in my case, RITTMANDEV.COM (usually in capitals)
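Pulling those details together, the HiveServer2 JDBC URL that beeline uses later in this post can be assembled as below. This is just an illustrative sketch: the hostname, port, database and the full “hive/host@REALM” service principal shown here are placeholder values, not details from a real cluster.

```python
# Assemble a HiveServer2 JDBC URL for a Kerberos-secured cluster.
# All values below are illustrative placeholders.
host = 'bda3node2'        # FQDN of the HiveServer2 node
port = 10000              # default HiveServer2 port
database = 'default'      # Hive database name
service = 'hive'          # Kerberos service principal short name for Hive
realm = 'RITTMANDEV.COM'  # Kerberos realm

# Secured connections append ;principal=<service>/<host>@<REALM> to the URL
jdbc_url = 'jdbc:hive2://{h}:{p}/{db};principal={s}/{h}@{r}'.format(
    h=host, p=port, db=database, s=service, r=realm)
print(jdbc_url)
```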

In my case, the krb5.conf file that is used to configure Kerberos connections to my KDC looks like this – in your company it might be a bit more complex, but this example defines a simple MIT Kerberos 5 domain:

    [logging]
     default = FILE:/var/log/krb5libs.log
     kdc = FILE:/var/log/krb5kdc.log
     admin_server = FILE:/var/log/kadmind.log

    [libdefaults]
     default_realm = RITTMANDEV.COM
     dns_lookup_realm = false
     dns_lookup_kdc = false
     ticket_lifetime = 24h
     renew_lifetime = 7d
     forwardable = true

    [realms]
     RITTMANDEV.COM = {
      kdc =
      admin_server =
     }

In my setup, the CDH Hadoop cluster has been configured to use Kerberos authentication for all communications between cluster components and any connections from the outside that use those components; the cluster itself though can still be accessed via unsecured (non-Kerberos authenticated) SSH, though of course this aspect could be secured too. To test out the Hive connectivity before we get into the OBIEE details you can use the beeline CLI that ships with CDH5. To do this you’ll need to be able to SSH into one of the cluster nodes (if you’ve not got beeline installed on your own workstation), and you’ll need an account (principal) created for you in the Kerberos database to correspond to the Linux user and HDFS/Hive user that has access to the Hive tables you’re interested in. To create such a Kerberos principal for my setup, I used the kadmin.local command on the KDC VM to create a user that matched my Linux/HDFS username and gave it a password:

kadmin.local:  addprinc mrittman
WARNING: no policy specified for mrittman@RITTMANDEV.COM; defaulting to no policy
Enter password for principal "mrittman@RITTMANDEV.COM":
Re-enter password for principal "mrittman@RITTMANDEV.COM":
Principal "mrittman@RITTMANDEV.COM" created.

SSH’ing into one of the secure CDH cluster nodes, I first have to authenticate using the kinit command which, when successful, creates a Kerberos ticket that gets cached for a set amount of time and that beeline can thereafter use as part of its own authentication process:

officeimac:.ssh markrittman$ ssh mrittman@bda3node4
mrittman@bda3node4's password: 
[mrittman@bda3node4 ~]$ kinit -p mrittman
Password for mrittman@RITTMANDEV.COM: 
[mrittman@bda3node4 ~]$

Now I can use beeline, and pass the Hive service principal name in the connection details along with the usual host, port and database name. When beeline prompts for my username and password, I use the Kerberos principal name that matches the Linux/HDFS one, and enter that principal’s password:

[mrittman@bda3node4 ~]$ beeline
Beeline version 0.13.1-cdh5.3.0 by Apache Hive
beeline> !connect jdbc:hive2://bda3node2:10000/default;principal=hive/
scan complete in 2ms
Connecting to jdbc:hive2://bda3node2:10000/default;principal=hive/
Enter username for jdbc:hive2://bda3node2:10000/default;principal=hive/ mrittman
Enter password for jdbc:hive2://bda3node2:10000/default;principal=hive/ ********
Connected to: Apache Hive (version 0.13.1-cdh5.3.0)
Driver: Hive JDBC (version 0.13.1-cdh5.3.0)
0: jdbc:hive2://bda3node2:10000/default> show tables;
+------------------+
|     tab_name     |
+------------------+
| posts            |
| things_mrittman  |
+------------------+
2 rows selected (0.162 seconds)
0: jdbc:hive2://bda3node2:10000/default> select * from things_mrittman;
+---------------------------+-----------------------------+
| things_mrittman.thing_id  | things_mrittman.thing_name  |
+---------------------------+-----------------------------+
| 1                         | Car                         |
| 2                         | Dog                         |
| 3                         | Hat                         |
+---------------------------+-----------------------------+
3 rows selected (0.251 seconds)

So at this point I’ve covered off the first two steps: established the connection details for the secure CDH cluster, and got hold of and confirmed the Kerberos principal details that I’ll need to connect to Hive – now it’s time to set up the OBIEE element.

In this particular example we’re using Windows to host OBIEE, as this is the only platform on which we can get the HiveServer2 ODBC drivers to work, in this case the Cloudera Hive ODBC drivers available on their website (a free download, though registration may be needed). Before we can get this ODBC driver to work though, we need to install the Kerberos client software on the Windows machine so that we can generate the Kerberos ticket that the ODBC driver will need to pass over as part of the authentication process.

To configure the Windows environment for Kerberos authentication, in my case I used the Kerberos for Windows 4.x client software downloadable for free from the MIT website and copied across the krb5.conf file from the KDC server, renaming it to krb5.ini and storing it in the default location of c:\ProgramData\MIT\Kerberos5.


You also need to define a system environment variable, KRB5CCNAME, to point to a directory where the Kerberos tickets can be cached; in my case I used c:\temp\krb5cache. Once this is done, reboot the Windows environment and you should then be prompted after login to authenticate yourself to the Kerberos KDC.


The ticket then stays valid for a set number of days/hours, or you can configure OBIEE itself to authenticate and cache its own ticket – for now though, we’ll create the ticket manually and connect to the secured cluster using these cached ticket details.

After installing the Cloudera Hive ODBC drivers, I create the connection using Kerberos as the Authentication Mechanism, and enter the realm name, HiveServer2 host and the Hive Kerberos principal name, like this:


In my case both the BI Administration tool and the OBIEE BI Server were on the same Windows VM, and therefore shared the same ODBC driver install, so I then moved over to the BI Administration tool to import the Hive table metadata details into the RPD and create the physical, logical and presentation layer RPD elements. Depending on how your CDH cluster is set up you might be able to test the connection now by using the View Data… menu item in BI Administration, but in my case I had to do two more things on the CDH cluster itself before I could get Hive queries under this Kerberos principal to run properly.


First, as secured CDH Hadoop clusters usually configure HiveServer2 to use “user impersonation” (connecting to Hive as the user you authenticate as, not the account that HiveServer2 itself authenticates to the Hive service as), YARN and MapReduce jobs run under your account and not the usual “hive” account that unsecured Hive connections use. This causes a problem on CDH installations on RHEL-derived platforms (RHEL, OEL, CentOS etc.): YARN normally blocks jobs running on behalf of users with a UID below 1000 (as this typically signifies a system account on other Linux distributions), but RHEL starts ordinary user UIDs at 500, so YARN blocks those users from running jobs. To fix this, you need to go into Cloudera Manager and edit the YARN configuration settings to lower this UID threshold to something under 500, for example 250:


I also needed to alter the group ownership of the temporary directory each node used for the YARN NodeManager’s user files so that YARN could write its temporary files correctly; on each node in the cluster I ran the following Linux commands as root to clear down any files YARN had created before, and recreate the directories with the correct permissions (Hive jobs would fail until I did this, with OBIEE just reporting an ODBC error):

rm -rf /yarn          # remove any files YARN created previously
mkdir -p /yarn/nm     # recreate the NodeManager local directory
chown -R yarn /yarn   # make the yarn user the owner...
chgrp -R yarn /yarn   # ...and the group owner

Once this is done, queries from the BI Administration tool and from the OBIEE BI Server should connect to the Kerberos-secured CDH cluster successfully, using the Kerberos ticket you obtained using the MIT Kerberos Ticket Manager on login and then passing across the user details under which the YARN, and then Hive job should run.


If you’re interested, you can go back to the MIT Kerberos Ticket Manager and see the other Kerberos tickets that were requested and then cached by the Cloudera Hive ODBC driver when it mutually authenticated with the HiveServer2 RPC interface – Kerberos authenticates both ways to ensure that who you’re connecting to is actually who they say they are, in this case checking that the HiveServer2 instance you’re connecting to isn’t being spoofed by someone else.


So that’s the process for connecting OBIEE to a Kerberos-secured CDH Hadoop cluster in a nutshell; in the New Year I’ll put something together on using Apache Sentry to provide role-based access control for Hive and Impala tables and as of CDH 5.3, HDFS directories, and I’ll also take a look at the new extended ACLs feature in CDH5.2 that goes beyond HDFS’s standard POSIX security model.

Categories: BI & Warehousing

Happy Christmas from Rittman Mead, and Where To Find Us in 2015

Tue, 2014-12-23 09:35

It’s the day before Christmas Eve over in the UK, and we’re finishing up for the Christmas break and looking forward to seeing friends and family over the next few days. To all of our readers, customers, friends and partners, have a great Holiday season and New Year, and we’re looking forward to seeing you at various events and on customer projects in 2015.

If you’re thinking of submitting an abstract for the Rittman Mead BI Forum 2015, the call for papers is now open, with abstracts accepted through to January 18th, 2015. As well as the BI Forum in May, you can also catch-up with us at these events in the first-half of the New Year:

That’s it for now though – have a great Christmas and New Year, and see you in 2015!

Categories: BI & Warehousing

TIMESTAMPS and Presentation Variables

Mon, 2014-12-22 07:38

TIMESTAMPS and Presentation Variables can be some of the most useful tools a report creator can use to build robust, repeatable reports while maximizing user flexibility.  I intend to transform you into an expert with these functions, and by the end of this page you will certainly be able to impress your peers and managers; you may even impress Angus MacGyver.  In this example we will create a report that displays a year over year analysis for any rolling number of periods, by week or month, from any date in time, all determined by the user.  This entire document will only use values from a date and revenue field.

Final Month DS

The TIMESTAMP is an invaluable function that allows a user to define report limits based on a moving target. If the goal of your report is to display Month-to-Date, Year-to-Date, rolling month or truly any non-static period in time, the TIMESTAMP function will allow you to get there.  Often users want to know what a report looked like at some previous point in time, to provide that level of flexibility TIMESTAMPS can be used in conjunction with Presentation Variables.

To create robust TIMESTAMP functions you will first need to understand how the TIMESTAMP works. Take the following example:

Filter Day -7 DS

Here we are saying we want to include all dates greater than or equal to 7 days before the current date.

  • The first argument, SQL_TSI_DAY, defines the TimeStamp Interval (TSI). This means that we will be working with days.
  • The second argument determines how many of that interval we will be moving, in this case -7 days.
  • The third argument defines the starting point in time, in this example, the current date.

So in the end we have created a functional filter making Date >= 1 week ago, using a TIMESTAMP that subtracts 7 days from today.
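If it helps to see the arithmetic outside of OBIEE, here is a minimal Python sketch of the same day-interval logic (for illustration only; this is not OBIEE code):

```python
from datetime import date, timedelta

def timestampadd_day(n, start):
    """Mimic TIMESTAMPADD(SQL_TSI_DAY, n, start) for day intervals."""
    return start + timedelta(days=n)

today = date(2014, 12, 18)  # stand-in for CURRENT_DATE
lower_bound = timestampadd_day(-7, today)
print(lower_bound)  # 2014-12-11; the filter keeps "Time"."Date" >= this value
```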

Results -7 Days DS

Note: it is always a good practice to include a second filter giving an upper limit like “Time”.”Date” < CURRENT_DATE. Depending on the data that you are working with you might bring in items you don’t want or put unnecessary strain on the system.

We will now start to build this basic filter into something much more robust and flexible.

To start, let’s imagine that instead of subtracting 7 days as in the filter above, the goal was to always include dates >= the first of the month. In this scenario we can use the DAYOFMONTH() function, which returns the calendar day of any date. This is useful because we can get to the first of the month from any date by simply subtracting DAYOFMONTH from that date and adding 1.

Our new filter would look like this:

DayofMonth DS

For example if today is December 18th, DAYOFMONTH(CURRENT_DATE) would equal 18. Thus, we would subtract 18 days from CURRENT_DATE, which is December 18th, and add 1, giving us December 1st.
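The same first-of-the-month arithmetic can be sketched in Python (illustration only, not OBIEE code):

```python
from datetime import date, timedelta

def first_of_month(d):
    """Mimic TIMESTAMPADD(SQL_TSI_DAY, -(DAYOFMONTH(d) - 1), d):
    subtract the calendar day-of-month, then add 1, landing on the 1st."""
    return d - timedelta(days=d.day - 1)

print(first_of_month(date(2014, 12, 18)))  # 2014-12-01
```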

MTD Dates DS

(For a list of other similar functions like DAYOFYEAR, WEEKOFYEAR etc. click here.)

To make this even better, instead of using CURRENT_DATE you could use a prompted value with the use of a Presentation Variable (for more on Presentation Variables, click here). If we call this presentation variable pDate, for prompted date, our filter now looks like this:

pDate DS

A best practice is to use default values with your presentation variables so you can run the queries you are working on from within your analysis. To add a default value all you do is add the value within braces at the end of your variable. We will use CURRENT_DATE as our default, @{pDate}{CURRENT_DATE}.  We will refer to this filter later as Filter 1.

{Filter 1}:

pDateCurrentDate DS

As you can see, the filter is starting to take shape. Now let’s say we are always going to be looking at a date range of the most recent completed 6 months. All we would need to do is create a nested TIMESTAMP function. To do this, we will “wrap” our current TIMESTAMP with another that will subtract 6 months. It will look like this:

Month -6 DS

Now we have a filter that is greater than or equal to the first day of the month of any given date (default of today) 6 months ago.

Month -6 Result DS
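As a quick sanity check of the nesting, here is an illustrative Python sketch (not OBIEE code) of wrapping the first-of-month calculation in a 6-month subtraction:

```python
from datetime import date, timedelta

def first_of_month(d):
    # Mimic TIMESTAMPADD(SQL_TSI_DAY, -(DAYOFMONTH(d) - 1), d)
    return d - timedelta(days=d.day - 1)

def add_months(d, n):
    # Mimic TIMESTAMPADD(SQL_TSI_MONTH, n, d) for a first-of-month date
    total = d.year * 12 + (d.month - 1) + n
    return date(total // 12, total % 12 + 1, d.day)

# First day of the month, 6 months before December 18th 2014:
print(add_months(first_of_month(date(2014, 12, 18)), -6))  # 2014-06-01
```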

To take this one step further, you can even allow the users to determine the number of months to include in this analysis by making the value of 6 a presentation variable, which we will call “n”, with a default of 6, @{n}{6}.  We will refer to the following filter as Filter 2:

{Filter 2}:

n DS

For more on how to create a prompt with a range of values by altering a current column, like we want to do to allow users to select a value for n, click here.

Our TIMESTAMP function is now fairly robust and will give us any date greater than or equal to the first day of the month from n months ago from any given date. Now we will see what we just created in action by creating date ranges to allow for a Year over Year analysis for any number of months.

Consider the following filter set:

 Robust1 DS

This appears to be pretty intimidating but if we break it into parts we can start to understand its purpose.

Notice we are using the exact same filters from before (Filter 1 and Filter 2).  What we have done here is filtered on two time periods, separated by the OR statement.

The first date range defines the period as being the most recent complete n months from any given prompted date value, using a presentation variable with a default of today, which we created above.

The second time period, after the OR statement, is the exact same as the first only it has been wrapped in another TIMESTAMP function subtracting 1 year, giving you the exact same time frame for the year prior.

YoY Result DS

This allows us to create a report that can run a year over year analysis for a rolling n month time frame determined by the user.

A note on nested TIMESTAMPS:

You will always want to create nested TIMESTAMPS with the smallest interval first. Due to syntax, this will always be the furthest to the right. Then you will wrap intervals as necessary. In this case our smallest increment is day, wrapped by month, wrapped by year.

Now we will start with some more advanced tricks:

  • Instead of using CURRENT_DATE as your default value, use yesterday, since most data are only as current as yesterday.  If you use real-time or near real-time reporting, using CURRENT_DATE may be how you want to proceed. Using yesterday will be especially valuable when pulling reports on the first day of the month or year, when you generally want the entire previous time period rather than the empty beginning of a new one.  So, to implement, wherever you have @{pDate}{CURRENT_DATE} replace it with @{pDate}{TIMESTAMPADD(SQL_TSI_DAY,-1,CURRENT_DATE)}
  • Presentation Variables can also be used to determine whether to display year over year values by month or by week, by inserting a variable into your SQL_TSI_MONTH and DAYOFMONTH statements.  Change MONTH to a presentation variable, SQL_TSI_@{INT}{MONTH} and DAYOF@{INT}{MONTH}, where INT is the name of our variable.  This will require you to create a dummy column in your prompt to allow users to select either MONTH or WEEK.  You can try something like this: CASE MOD(DAY(“Time”.”Date”),2) WHEN 0 THEN ‘WEEK’ WHEN 1 THEN ‘MONTH’ END



DropDown DS

In order for our interaction between Month and Week to run smoothly we have to make one more consideration.  If we take the date December 1st, 2014 and subtract one year we get December 1st, 2013; however, if we take the first day of this week, Sunday December 14, 2014, and subtract one year we get Saturday December 14, 2013.  In our analysis this will cause an extra partial week to show up for prior years.  To get around this we will add a case statement determining: if ‘@{INT}{MONTH}’ = ‘WEEK’ THEN subtract 52 weeks from the first of the week ELSE subtract 1 year from the first of the month.
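A quick Python illustration (not OBIEE code) of why subtracting 52 weeks rather than 1 year keeps the weeks aligned:

```python
from datetime import date, timedelta

week_start = date(2014, 12, 14)          # Sunday, first day of this week
print(week_start - timedelta(weeks=52))  # 2013-12-15, also a Sunday
print(date(2013, 12, 14).strftime('%A')) # Saturday: a plain "minus 1 year"
                                         # lands mid-week, causing the extra
                                         # partial week in the prior year
```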

Our final filter set will look like this:

Final Filter DS

With the use of these filters and some creative dashboarding you can end up with a report that easily allows you to view a year over year analysis from any date in time for any number of periods either by month or by week.

Final Month Chart DS

Final Week Chart DS

That really got out of hand in a hurry! Surely this will impress someone at your work, or even Angus MacGyver, if for nothing else than because they won’t understand it; hopefully, now you do!

Also, a colleague of mine, Spencer McGhin, just wrote a similar article on year over year analyses using a different approach. Feel free to review it and consider your options.


Calendar Date/Time Functions

These are functions you can use within OBIEE and within TIMESTAMPS to extract the information you need.

  • Current_Date
  • Current_Time
  • Current_TimeStamp
  • Day_Of_Quarter
  • DayName
  • DayOfMonth
  • DayOfWeek
  • DayOfYear
  • Hour
  • Minute
  • Month
  • Month_Of_Quarter
  • MonthName
  • Now
  • Quarter_Of_Year
  • Second
  • TimestampAdd
  • TimestampDiff
  • Week_Of_Quarter
  • Week_Of_Year
  • Year

Back to section


Presentation Variables

The only way you can create variables within the presentation side of OBIEE is with the use of presentation variables. They can only be defined by a report prompt. Any value selected by the prompt will then be passed to any reference to that variable throughout the dashboard page.

In the prompt:

Pres Var DS

From the “Set a variable” dropdown, select “Presentation Variable”. In the textbox below the dropdown, name your variable (named “n” above).

When calling this variable in your report, use the syntax @{n}{default}

If your variable is a string make sure to surround the variable in single quotes: ‘@{CustomerName}{default}’

Also, when using your variable in your report, it is good practice to assign a default value so that you can work with your report before publishing it to a dashboard. For variable n, if we want a default of 6 it would look like this @{n}{6}
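As a toy model of how the @{variable}{default} pattern behaves (hypothetical Python purely to illustrate the fallback behaviour; this is not how OBIEE is implemented internally):

```python
import re

def expand(expression, prompt_values):
    """Replace each @{name}{default} token with the prompted value if one
    was supplied, otherwise with the default. Illustrative only."""
    def repl(match):
        name, default = match.group(1), match.group(2)
        return str(prompt_values.get(name, default))
    return re.sub(r'@\{(\w+)\}\{([^}]*)\}', repl, expression)

expr = 'TIMESTAMPADD(SQL_TSI_MONTH, -@{n}{6}, CURRENT_DATE)'
print(expand(expr, {}))         # no prompt value: the default 6 is used
print(expand(expr, {'n': 12}))  # prompted value 12 wins over the default
```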

Presentation variables can be called in filters, formulas and even text boxes.

Back to section


Dummy Column Prompt

For situations where you would like users to select a numerical value for a presentation variable, like we do with @{n}{6} above, you can convert something like a date field into values up to 365 by using the function DAYOFYEAR(“Time”.”Date”).

As you can see we are returning the SQL Choice List Values of DAYOFYEAR(“Time”.”Date”) <= 52.  Make sure to include an ORDER BY statement to ensure your values are well sorted.

Dummy Script DS

Back to Section

Categories: BI & Warehousing

OBIEE Enterprise Security

Fri, 2014-12-19 05:35

The Rittman Mead Global Services team have recently been involved in a number of security architecture implementations and have produced a security model which meets a diverse set of requirements.  Using our experience and standards we have been able to deliver a robust model that addresses the common questions we routinely receive around security, such as:

“What considerations do I need to make when exposing Oracle BI to the outside world?”


“How can I make a flexible security model which is robust enough to meet the demands of my organisation but easy to maintain?”

The first question is based on a standard enterprise security model where the Oracle BI server is exposed by a web host, enabling SSL and tightening up access security.  This request can be complex to achieve but is something that we have implemented many times now.

The second question is much harder to answer, but our experience with numerous clients has led us to develop a multi-dimensional inheritance security model that has yielded excellent results.

What is a Multi-dimensional Inheritance Security Model?

The wordy title is actually a simple concept that incorporates 5 key areas:

  • Easy to setup and maintain
  • Flexible
  • Durable
  • Expandable
  • Consistent throughout the product

While there are numerous ways of implementing a security model in Oracle BI, by sticking to the key concepts above we ensure we get it right.  The largest challenge we face in BI is the different types of security required, and all three need to work in harmony:

  • Application security
  • Content security
  • Data security
Understanding the organisation makeup

The first approach is to consider the makeup of a common organisation and build our security around it.


This diagram shows different Departments (Finance, Marketing, Sales) whose data is specific to them, so normally the departmental users should only see their own data that is relevant to them.  In contrast the IT department who are developing the system need visibility across all data and so do the Directors.


What types of users do I have?

Next is to consider the types of users we have:

  1. BI Consumer: This will be the most basic and common user who needs to access the system for information.
  2. BI Analyst: As an Analyst the user will be expected to generate more bespoke queries and need ways to represent them. They will also need an area to save these reports.
  3. BI Author: The BI Author will be able to create content and publish that content for the BI Consumers and BI Analysts.
  4. BI Department Admin: The BI Department Admin will be responsible for permissions for their department as well as act as a focal point user.
  5. BI Developer: The BI Developer can be thought of as the person(s) who creates models in the RPD and will need additional access to the system for testing of their models. They might also be responsible for delivering Answers Requests or Dashboards in order to ‘Prove’ the model they created.
  6. BI Administrator:  The Administrator will be responsible for the running of the BI system and will have access to every role.  Most administrator tasks will not require SQL or data warehousing skills, and the role is generally separated from the BI Developer role.

The types of users here are a combination of every requirement we have seen and might not be required by every client.  The order they are in shows the implied inheritance, so the BI Analyst inherits permissions and privileges from the BI Consumer and so on.
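The implied inheritance can be pictured with a small sketch (hypothetical Python, using just the four default groups for brevity; this is a mental model of the chain, not product code, and the permission names are made up):

```python
# Each role grants its own permissions plus everything inherited from the
# roles below it in the chain.
ROLE_PERMISSIONS = {
    'BI Consumer':      {'view dashboards'},
    'BI Analyst':       {'access answers'},
    'BI Author':        {'create dashboards'},
    'BI Administrator': {'administer system'},
}
CHAIN = ['BI Consumer', 'BI Analyst', 'BI Author', 'BI Administrator']

def effective_permissions(role):
    perms = set()
    for r in CHAIN[:CHAIN.index(role) + 1]:
        perms |= ROLE_PERMISSIONS[r]
    return perms

print(sorted(effective_permissions('BI Author')))
# ['access answers', 'create dashboards', 'view dashboards']
```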

What Types do I need?

The size of the organization determines what types of user groups are required. By default Oracle ships with:

  1. BI Consumer
  2. BI Author
  3. BI Administrator

Typically we would recommend inserting the BI Analyst into the default groups:

  1. BI Consumer
  2. BI Analyst
  3. BI Author
  4. BI Administrator

This works well when there is a central BI team who develop content for the whole organization. The structure would look like this:



For larger organizations where dashboard development and permissions are handled across multiple BI teams, the BI Department Admin group can be used to locally manage permissions for each department.  Typically we see the BI team as a central Data Warehouse team who deliver the BI model (RPD) to the multiple BI teams.  In a large organization the administration of Oracle BI should be handled by someone who isn’t the BI Developer; the structure could look like:




Permissions on groups

Each of the groups will require different permissions; at a high level the permissions would be:

BI Consumer
  • View Dashboards
  • Save User Selections
  • Subscribe to iBots

BI Analyst
  • Access to Answers and a standard set of views
  • Some form of storage
  • Access to Subject Areas

BI Author
  • Access to create/modify Dashboards
  • Save predefined sections
  • Access to Action Links
  • Access to Dashboard Prompts
  • Access to BI Publisher

BI Department Admin
  • Ability to apply permissions and manage the Web Catalog

BI Developer
  • Advanced access to Answers
  • Access to all departments

BI Administrator
  • Everything


Understanding the basic security mechanics in 10g and 11g

In Oracle BI 10g the majority of the security is handled in the Oracle BI Server.  This would normally be done through initialisation blocks, which would authenticate the user against an LDAP server, then run a query against a database table to populate the user into ‘Groups’ used in the RPD and ‘Web Groups’ used in the Presentation Server.  These groups would have to match at each level: database, Oracle BI Server and Oracle BI Presentation Server.

With the addition of Enterprise Manager and Weblogic, the security elements in Oracle BI 11g changed radically.  Authenticating the user in the Oracle BI Server is no longer the recommended approach and is limited on Linux. While the RPD Groups and Presentation Server Web Groups still exist, they don’t need to be used.  Users are now authenticated against Weblogic.  This can be done by using Weblogic’s own users and groups or by plugging it into a choice of LDAP servers.  The end result will be Groups and Users that exist in Weblogic.  The groups then need to be mapped to Application Roles in Enterprise Manager, which can be seen by the Oracle BI Presentation Services and Oracle BI Server.  It is recommended to create a one-to-one mapping for each group.



What does all this look like then?

Assuming this is for an SME-size organization where the dashboard development (BI Author) is done by the central BI team, the groups would look like this:




The key points are:

  • The generic BI Consumer/Analyst groups give their permissions to the department versions
  • No users should be in the generic BI Consumer/Analyst groups
  • Only users from the BI team should be in the generic BI Author/Administrator group
  • New departments can be easily added
  • The lines denote the inheritance of permissions and privileges


What’s next – The Web Catalog?

The setup of the web catalog is very important to ensure that it does not get unwieldy, so it needs to reflect the security model and we would recommend setting up some base folders which look like:



Each department has its own folder and four sub-folders. The permissions applied to each department’s root folder give BI Administrators full control across the top.  This is also true for every folder below, although each has additional explicit permissions, described below, to ensure that the department cannot create any more than the four sub-folders.

  • The Dashboard folder is where the dashboards go; the department’s BI Developers group will have full control and the department’s BI Consumer group will have read access. This allows the department’s BI Developers to create dashboards, the department’s BI Department Admins to apply permissions, and the department’s consumers and analysts to view.
  • The same permissions are applied to the Dashboard Answers folder, to the same effect.
  • The Development Answers folder has full control given to the department’s BI Developers and no access for the department’s BI Analysts or BI Consumers. This folder is mainly for the department’s BI Developers to store answers that are in the process of development.
  • The Analyst folder is where the department’s BI Analysts can save Answers, so they will need full control of this folder.

I hope this article gives some insight into security with Oracle BI.  Remember that our Global Services products offer a flexible support model where you can harness our knowledge to deliver your projects in a cost-effective manner.

Categories: BI & Warehousing

OBIEE and ODI on Hadoop : Next-Generation Initiatives To Improve Hive Performance

Thu, 2014-12-18 16:26

The other week I posted a three-part series (part 1, part 2 and part 3) on going beyond MapReduce for Hadoop-based ETL, where I looked at a typical Apache Pig dataflow-style ETL process and showed how Apache Tez and Apache Spark can potentially make these processes run faster and make better use of in-memory processing. I picked Pig as a data processing environment because its multi-step data transformations translate into lots of separate MapReduce jobs in traditional Hadoop ETL environments, but run as a single DAG (directed acyclic graph) under Tez and Spark, which can potentially use memory to pass intermediate results between steps rather than writing all those intermediate datasets to disk.

But tools such as OBIEE and ODI use Apache Hive to interface with the Hadoop world, not Pig, so it’s improvements to Hive that will have the biggest immediate impact on the tools we use today. What’s interesting is the development work that’s going on around Hive in this area, with four different “next-generation Hive” initiatives that could end up making OBIEE and ODI on Hadoop run faster:

  • Hive-on-Tez (or “Stinger”), principally championed by Hortonworks, which will also enable ACID transactions in HiveQL
  • Hive-on-Spark, a more limited port of Hive to run on Spark and backed by Cloudera amongst others
  • Spark SQL within Apache Spark, which enables SQL queries against Spark RDDs (and Hive tables), and exposes a HiveServer2-compatible Thrift Server for JDBC access
  • Vendor initiatives that build on Hive but are mainly around integration with their RDBMS engines, for example Oracle Big Data SQL

Vendor initiatives like Oracle’s Big Data SQL and Cloudera Impala have the benefit of working now (and being supported), but usually come with some sort of penalty for not working directly within the Hive framework. Oracle’s Big Data SQL, for example, can read data from Hive (very efficiently, using Exadata SmartScan-type technology) but can’t write back to Hive, and currently pulls all the Hive data into Oracle if you try to join Oracle and Hive data together. Cloudera’s Impala, on the other hand, is lightning-fast and works directly on the Hadoop platform, but doesn’t support the same ecosystem of SerDes and storage handlers that Hive supports, taking away one of the key flexibility benefits of working with Hive.

So what about the attempts to extend and improve Hive, or include Hive interfaces and compatibility in Spark? In most cases an ETL routine written as a series of Hive statements isn’t going to be as fast or resource-efficient as a custom Spark program, but if we can make Hive run faster or have a Spark application masquerade as a Hive database, we could effectively give OBIEE and ODI on Hadoop a “free” platform performance upgrade without having to change the way they access Hadoop data. So what are these initiatives about, and how usable are they now with OBIEE and ODI?

Probably the most ambitious next-generation Hive project is the Stinger initiative, backed by Hortonworks and based on the Apache Tez framework that runs on Hadoop 2.0 and YARN. Stinger aimed first to port Hive to run on Tez (which runs MapReduce jobs but enables them to potentially run as a single DAG), and then to add ACID transaction capabilities so that you can UPDATE and DELETE from a Hive table as well as INSERT and SELECT, using a transaction model that allows you to roll back uncommitted changes (diagram from the Hortonworks page).


Tez is more a set of developer APIs than the full data discovery / data analysis platform that Spark aims to provide, but it’s a technology that’s available now as part of the Hortonworks HDP2.2 platform, and as I showed in the blog post a few days ago, an existing Pig script run as-is in a Tez environment typically runs twice as fast as when it’s using MapReduce to move data around (with the usual testing caveats applying, YMMV etc). Hive should be the same, giving us the ability to take Hive transformation scripts and run them unchanged except for specifying Tez as the execution engine at the start.


Hive on Tez is probably the first of these initiatives we’ll see working with ODI and OBIEE, as ODI has just been certified for Hortonworks HDP2.1, and the new HDP2.2 release is the one that comes with Tez as an option for Pig and Hive query execution. I’m guessing ODI will need to have its Hive KMs updated to add a new option to select Tez or MapReduce as the underlying Hive execution engine, but otherwise I can see this working “out of the box” once ODI support for HDP2.2 is announced.

Going back to the last of the three blog posts I wrote on going beyond MapReduce, many in the Hadoop industry back Spark rather than Tez as the successor to MapReduce, as it’s a more mature implementation that goes beyond the developer-level APIs Tez provides to make Pig and Hive scripts run faster. As we’ll see in a moment Spark comes with its own SQL capabilities and a Hive-compatible JDBC interface, but the other “swap-out-the-execution-engine” initiative to improve Hive is Hive on Spark, a port of Hive that allows Spark to be used as Hive’s execution engine instead of just MapReduce.

Hive on Spark is at an earlier stage of development than Hive on Tez, with the first demo given at the recent Strata + Hadoop World New York, and specific builds of Spark and versions of Hive needed to get it running. Interestingly though, a post went up on the Cloudera blog a couple of days ago announcing an Amazon AWS AMI machine image that you can use to test Hive on Spark. Though it doesn’t come with a full CDH or HDP installation or features such as a HiveServer JDBC interface, it does come with a small TPC-DS dataset and some sample queries that we can use to get a feel for how it works. I used the AMI image to create an Amazon AWS m3.large instance and gave it a go.

By default, Hive in this demo environment is configured to use Spark as the underlying execution engine. Running a couple of the TPC-DS queries, first using this Spark engine and then switching back to MapReduce by running the command “set hive.execution.engine=mr” within the Hive CLI, I generally found queries using Spark as the execution engine ran 2-3x faster than the MapReduce ones.
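
The engine switch mentioned above is just a session-level Hive property; a minimal fragment of the toggle inside the Hive CLI (hedged: the spark value is only valid in a Hive build that includes the Spark port, such as this demo AMI):

```
set hive.execution.engine=spark;   -- route queries through Spark
set hive.execution.engine=mr;      -- revert to classic MapReduce
set hive.execution.engine;         -- print the current setting
```

Setting the same property in hive-site.xml instead would make the choice the default for all sessions.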


You can’t read too much into this timing as the demo AMI is really only to show off the functional features (Hive using Spark as the execution engine) and no work on performance optimisation has been done, but it’s encouraging even at this point that it’s significantly faster than the MapReduce version.

Long-term the objective is to have both Tez and Spark available as options as execution engines under Hive, along with MapReduce, as the diagram below from a presentation by Cloudera’s Szehon Ho shows; the advantage of building on Hive like this, rather than creating your own new SQL-on-Hadoop engine, is that you can make use of the library of SerDes, storage handlers and so on that you’d otherwise need to recreate for any new tool.


The third major SQL-on-Hadoop initiative I’ve been looking at is Spark SQL within Apache Spark. Unlike Hive on Spark, which aims to swap out the compiler and execution engine parts of Hive but otherwise leave the rest of the product unchanged, Apache Spark as a whole is a much more freeform, flexible data query and analysis environment that’s aimed more at analysts than business users looking to query their dataset using SQL. That said, Spark has some cool SQL and Hive integration features that make it an interesting platform for doing data analysis and ETL.

In my Spark ETL example the other day, I loaded log data and some reference data into RDDs and then filtered and transformed them using a mix of Scala functions and Spark SQL queries. Running on top of the set of core Spark APIs, Spark SQL allows you to register temporary tables within Spark that map onto RDDs, and gives you the option of querying your data using either familiar SQL relational operators or the more functional programming style of the Scala language.


You can also create connections to the Hive metastore, and create Hive tables within your Spark application for when you want to persist results to a table rather than work with the temporary tables that Spark SQL usually creates against RDDs. In the code example below, I create a HiveContext as opposed to the sqlContext that I used in the example in the previous blog post, and then use it to create a table in my Hive database, running on a Hortonworks HDP2.1 VM with Spark 1.0.0 pre-built for Hadoop 2.4.0:

scala> val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
scala> hiveContext.hql("CREATE TABLE posts_hive (post_id int, title string, postdate string, post_type string, author string, post_name string, generated_url string) row format delimited fields terminated by '|' stored as textfile")
scala> hiveContext.hql("LOAD DATA INPATH '/user/root/posts.psv' INTO TABLE posts_hive")

If I then go into the Hive CLI, I can see this new table listed there alongside the other ones:

hive> show tables;
Time taken: 0.536 seconds, Fetched: 8 row(s)

What’s even more interesting is that Spark also comes with a HiveServer2-compatible Thrift Server, making it possible for tools such as ODI that connect to Hive via JDBC to run Hive queries through Spark, with the Hive metastore providing the metadata but Spark running as the execution engine.


This is subtly different to Hive-on-Spark: Hive’s metastore and its support for SerDes and storage handlers still run under the covers, but Spark provides you with a full programmatic environment, making it possible to just expose Hive tables through the Spark layer, or mix and match data from RDDs, Hive tables and other sources before storing and then exposing the results through the Hive SQL interface. For example, you could use Oracle SQL*Developer 4.1 with the Cloudera Hive JDBC drivers to connect to this Spark SQL Thrift Server and query the tables just like any other Hive source, but crucially the Hive execution is being done by Spark rather than MapReduce, as would normally happen.


Like Hive-on-Spark, Spark SQL and Hive support within Spark SQL are at early stages, with Spark SQL not yet being supported by Cloudera whereas the core Spark API is. From the work I’ve done with it, it’s not yet possible to expose Spark SQL temporary tables through the HiveServer2 Thrift Server interface, and I can’t see a way of creating Hive tables out of RDDs unless you stage the RDD data to a file in-between. But it’s clearly a promising technology and if it becomes possible to seamlessly combine RDD data and Hive data, and expose Spark RDDs registered as tables through the HiveServer2 JDBC interface it could make Spark a very compelling platform for BI and data analyst-type applications. Oracle’s David Allen, for example, blogged about using Spark and the Spark SQL Thrift Server interface to connect ODI to Hive through Spark, and I’d imagine it’d be possible to use the Cloudera HiveServer2 ODBC drivers along with the Windows version of OBIEE to connect to Spark in this way too – if I get some spare time over the Christmas break I’ll try and get an example working.

Categories: BI & Warehousing

Rittman Mead BI Forum 2015 Call for Papers Now Open!

Wed, 2014-12-17 03:00

I’m very pleased to announce that the Call for Papers for the Rittman Mead BI Forum 2015 is now open, with abstract submissions open until January 18th 2015. As in previous years the BI Forum will run over consecutive weeks in Brighton, UK and Atlanta, GA, with the provisional dates and venues as below:

  • Brighton, UK : Hotel Seattle, Brighton, UK : May 6th – 8th 2015
  • Atlanta, GA : Renaissance Atlanta Midtown Hotel, Atlanta, USA : May 13th-15th 2015

Now in its seventh year, the Rittman Mead BI Forum is the only conference dedicated entirely to Oracle Business Intelligence, Oracle Business Analytics and the technologies and processes that support it – data warehousing, data analysis, data visualisation, big data and OLAP analysis. We’re looking for sessions around tips & techniques, project case-studies and success stories, and sessions where you’ve taken Oracle’s BI products and used them in new and innovative ways. Each year we select around eight to ten speakers for each event along with keynote speakers and a masterclass session, with speaker choices driven by attendee votes at the end of January, and editorial input from myself, Jon Mead, Charles Elliott and Jordan Meyer.


Last year we had a big focus on cloud, and a masterclass and several sessions on bringing Hadoop and big data to the world of OBIEE. This year we’re interested in project stories and experiences around cloud and Hadoop, and we’re keen to hear about any Oracle BI Apps 11g implementations or migrations from the earlier 7.9.x releases. Getting back to basics, we’re always interested in sessions around OBIEE, Essbase and data warehouse data modelling, and we’d particularly like to encourage session abstracts on data visualization, BI project methodologies and the incorporation of unstructured, semi-structured and external (public) data sources into your BI dashboards. For an idea of the types of presentations that have been selected in the past, check out the BI Forum 2014, 2013 and 2012 homepages, or feel free to get in touch via email at

The Call for Papers entry form is here, and we’re looking for speakers for Brighton, Atlanta, or both venues if you can make both events. All sessions this year will be 45 minutes long, and we’ll be publishing submissions and inviting potential attendees to vote on their favourite sessions towards the end of January. Other than that – have a think about abstract ideas now, and make sure you get them in by January 18th 2015.

Categories: BI & Warehousing

Rittman Mead’s Bar Optimizer

Tue, 2014-12-16 09:08


At Rittman Mead R&D, we have the privilege of solving some of our clients’ most challenging data problems. We recently built a set of customized data products that leverage the power of Oracle and Cloudera platforms and wanted to share some of the fun we’ve had in creating unique user experiences. We’ve been thinking about how we can lean on our efforts to help make the holidays even more special for the extended Rittman Mead family. With that inspiration, we had several questions on our minds:

  • How can we throw an amazing holiday party?
  • What gifts can we give that we can be sure our coworkers, friends, and family will enjoy?
  • What gifts would we want for ourselves?

After a discussion over drinks, the answers became clear. We decided to create a tool that uses data analytics to help you create exceptional cocktails for the holidays.

Here is how we did it. First, we analyzed the cocktail recipes of three world-renowned cocktail bars: PDT, Employees Only, and Death & Co. We then turned their drink recipes into data and got to work on the Bar Optimizer, which uses analytics on top of that data to help you make the holiday season tastier than ever before.

To use the Bar Optimizer, enter the liquors and other ingredients that you have on hand to see what drinks you can make. It then recommends additional ingredients that let you create the largest variety of new drinks. You can also use this feature to give great gifts based on others’ liquor cabinets. Finally, try using one of our optimized starter kits to stock your bar for a big holiday party. We’ve crunched the numbers to find the fewest bottles that can make the largest variety of cocktails.

Click the annotated screenshot above for details, and contact us if you would like more information about how we build products that take your data beyond dashboards.

Categories: BI & Warehousing

Linux cluster sysadmin — OS metric monitoring with colmux

Mon, 2014-12-15 12:44

In this mini-series of blog posts I’m taking a look at a few very useful tools that can make your life as the sysadmin of a cluster of Linux machines easier. This may be a Hadoop cluster, or just a plain simple set of ‘normal’ machines on which you want to run the same commands and monitoring.

First we looked at using SSH keys for intra-machine authorisation, which is a pre-requisite for executing the same command across multiple machines using PDSH, as well as for what we look at in this article – monitoring OS metrics across a cluster with colmux.

Colmux is written by Mark Seger, the same person who wrote collectl. It makes use of collectl on each target machine to report back OS metrics across a cluster to a single node.


Install collectl across the cluster

Using pdsh we can easily install collectl on each node (if it’s not already installed), which is a pre-requisite for colmux:

pdsh -w root@rnmcluster02-node0[1-4] "yum install -y collectl && service collectl start && chkconfig collectl on"

NB enabling the collectl service on each node means it will capture performance data to file locally, which colmux can replay centrally.

Then install colmux itself, which you can download from Sourceforge. It only needs to be installed on a single host, but we could push it out across the cluster with pdsh if we wanted to be able to invoke it on any node at will. Note that here I’m running it on a separate Linux box (outside of the cluster) rather than on my Mac:

cd /tmp
# Make sure you get the latest version of collectl-utils, from
# This example is hardcoded to a version and particular sourceforge mirror
curl -O
tar xf collectl-utils-4.8.2.src.tar.gz
cd collectl-utils-4.8.2
sudo ./INSTALL
# collectl-utils also includes colplot, so if you want to use it, restart
# apache (assuming it's installed)
sudo service httpd restart

Colmux and networking

A couple of important notes:

  1. The machine you run colmux from needs to have port 2655 open in order for each node’s collectl to send back the data to it.

  2. You also may encounter an issue if you have any odd networking (eg NAT on virtual machines) that causes colmux not to work, because it picks the ‘wrong’ network interface of the host to tell collectl on each node to send its data to; there is a documented workaround for this.

colmux in action – real-time view, point in time


colmux -addr 'rnmcluster02-node0[1-4]' -username root


# Mon Dec  1 22:20:40 2014  Connected: 4 of 4
#                    <--------CPU--------><----------Disks-----------><----------Network---------->
#Host                cpu sys inter  ctxsw KBRead  Reads KBWrit Writes   KBIn  PktIn  KBOut  PktOut
rnmcluster02-node01    1   1    28     36      0      0      0      0      0      2      0       2
rnmcluster02-node04    0   0    33     28      0      0     36      8      0      1      0       1
rnmcluster02-node03    0   0    15     17      0      0      0      0      0      1      0       1
rnmcluster02-node02    0   0    18     18      0      0      0      0      0      1      0       1
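
Because colmux emits plain columnar text, its output pipes cleanly into standard tools. A hedged sketch (the threshold and the canned two-host here-doc are illustrative, copied from the sample output above) that flags any host whose cpu column exceeds a threshold:

```shell
# Print hosts whose 'cpu' column (field 2) exceeds a threshold, reading
# a captured colmux sample; the here-doc stands in for real output.
threshold=0
awk -v t="$threshold" '!/^#/ && $2 > t {print $1, "cpu="$2}' <<'EOF'
rnmcluster02-node01    1   1    28     36      0      0      0      0      0      2      0       2
rnmcluster02-node04    0   0    33     28      0      0     36      8      0      1      0       1
EOF
```

In practice you would run this over captured colmux output rather than a here-doc; the awk filter is the same.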


Real-time view, persisted

-cols puts the hosts across the top and time as rows, and takes one or more column numbers from the default (no -cols) output. In this example it is the cpu value, along with the disk read/write (columns 1, 5 and 7 of the metrics as seen above):


colmux -addr 'rnmcluster02-node0[1-4]' -user root -cols 1,5,7


cpu                            KBRead                         KBWrit
 node01 node02 node03 node04 |  node01 node02 node03 node04 |  node01 node02 node03 node04
      0      0      0      0 |       0      0      0      0 |      12     28      0      0
      0      0      0      0 |       0      0      0      0 |      12     28      0      0
      1      0      1      0 |       0      0      0      0 |       0      0      0      0
      0      0      0      0 |       0      0      0      0 |       0      0      0      0
      0      0      0      0 |       0      0      0      0 |       0      0      0      0
      0      0      0      0 |       0      0      0      0 |       0     20      0      0
      0      0      0      0 |       0      0      0      0 |      52      4      0      0
      0      0      0      2 |       0      0      0      0 |       0      0      0      0
      1      0      0      0 |       0      0      0      0 |       0      0      0      0
     15     16     15     15 |       0      4      4      4 |      20     40     32     48
      0      0      1      1 |       0      0      0      0 |       0      0      4      0
      1      0      0      0 |       0      0      0      0 |       0      0      0      0


To check the numbers of the columns that you want to reference, run the command with the --test argument:

colmux -addr 'rnmcluster02-node0[1-4]' -user root --test

>>> Headers <<<
#                    <--------CPU--------><----------Disks-----------><----------Network---------->
#Host                cpu sys inter  ctxsw KBRead  Reads KBWrit Writes   KBIn  PktIn  KBOut  PktOut

>>> Column Numbering <<<
 0 #Host   1 cpu     2 sys     3 inter   4 ctxsw   5 KBRead  6 Reads   7 KBWrit
 8 Writes  9 KBIn   10 PktIn  11 KBOut  12 PktOut

And from there you get the numbers of the columns to reference in the -cols argument.
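
The numbering is simply each field’s position in the header row, counting #Host as 0; a small illustrative sketch that reproduces the mapping from the header line shown above (purely a demonstration, since colmux --test already prints this for you):

```shell
# Number the fields of a colmux header line from 0 upwards, matching
# the -cols numbering that colmux --test reports.
header='#Host cpu sys inter ctxsw KBRead Reads KBWrit Writes KBIn PktIn KBOut PktOut'
echo "$header" | tr -s ' ' '\n' | awk '{printf "%2d %s\n", NR-1, $0}'
```

This prints one field per line, from " 0 #Host" through "12 PktOut", matching the column numbering above.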

To include the timestamp, use -oT in the -command argument and offset the column numbers by 1:


colmux -addr 'rnmcluster02-node0[1-4]' -user root -cols 2,6,8 -command '-oT'


sys                            Reads                          Writes
#Time    node01 node02 node03 node04 |  node01 node02 node03 node04 |  node01 node02 node03 node04
22:24:50      0      0      0      0 |       0      0      0      0 |       0      0      0      0
22:24:51      1      0      0      0 |       0      0      0      0 |       0      0      0      0
22:24:52      0      0      0      0 |       0      0      0      0 |       0     16      0     16
22:24:53      1      0      0      0 |       0      0      0      0 |      36      0     16      0
22:24:54      0      0      0      1 |       0      0      0      0 |       0      0      0      0
22:24:55      0      0      0      0 |       0      0      0      0 |       0     20     32     20

NB There’s a bug in colmux 4.8.2 that prevents you accessing the first metric with -cols when you also enable the timestamp with -oT.

Specifying additional metrics

Collectl (which is what colmux calls to get its data) can fetch metrics from multiple subsystems on a node, and you can access all of these through colmux too. By default colmux shows cpu, disk and network metrics, but you can specify other subsystems using the -s argument followed by the subsystem identifier.

To examine the available subsystems run collectl on one of the target nodes:

[root@rnmcluster02-node01 ~]# collectl --showsubsys
The following subsystems can be specified in any combinations with -s or
--subsys in both record and playback mode.  [default=bcdfijmnstx]

These generate summary, which is the total of ALL data for a particular type
  b - buddy info (memory fragmentation)
  c - cpu
  d - disk
  f - nfs
  i - inodes
  j - interrupts by CPU
  l - lustre
  m - memory
  n - network
  s - sockets
  t - tcp
  x - interconnect (currently supported: OFED/Infiniband)
  y - slabs

From the above list we can see that if we want to also show memory detail alongside CPU we need to include m and c in the subsystem list:


colmux -addr 'rnmcluster02-node0[1-4]' -user root -command '-scm'


# Tue Dec  2 08:02:38 2014  Connected: 4 of 4
#                    <--------CPU--------><-----------Memory----------->
#Host                cpu sys inter  ctxsw Free Buff Cach Inac Slab  Map
rnmcluster02-node02    1   0    19     18  33M  15M 345M 167M  30M  56M
rnmcluster02-node04    0   0    30     24  32M  15M 345M 167M  30M  56M
rnmcluster02-node03    0   0    30     36  32M  15M 345M 165M  30M  56M
rnmcluster02-node01    0   0    16     16  29M  15M 326M 167M  27M  81M

Changing the sample frequency

To change the sample frequency use the -i syntax in -command:


colmux -addr 'rnmcluster02-node0[1-4]' -user root -command '-scm -i10 -oT' -cols 2,4

Samples every 10 seconds:

sys                            ctxsw
#Time    node01 node02 node03 node04 |  node01 node02 node03 node04
08:06:29     -1     -1     -1     -1 |      -1     -1     -1     -1
08:06:39     -1     -1     -1     -1 |      -1     -1     -1     -1
08:06:49      0      0      0      0 |      14     13     15     19
08:06:59      0      0      0      0 |      13     13     17     21
08:07:09      0      0      0      0 |      19     18     15     24
08:07:19      0      0      0      0 |      13     13     15     19
08:07:29      0      0      0      0 |      13     13     14     19
08:07:39      0      0      0      0 |      12     13     13     19

Column width

Add the -colwidth argument to change the column width:


colmux -addr 'rnmcluster02-node0[1-4]' -user root -command '-scm' -cols 1 -colwidth 20


  rnmcluster02-node01  rnmcluster02-node02  rnmcluster02-node03  rnmcluster02-node04
                   -1                   -1                   -1                   -1
                   -1                   -1                   -1                   -1
                    1                    0                    0                    0
                    0                    0                    0                    0
                    0                    1                    0                    0
                    0                    0                    1                    0
                    1                    0                    1                    0
                    0                    1                    0                    0


colmux in action – replay

As well as running interactively, collectl can run as a service and record metric samples to disk. Using colmux you can replay these from across the cluster.

Within the -command, include -p and the path to the collectl log files (this assumes the path is the same on each host). As with real-time mode, change the flags after -s for different subsystems:


colmux -addr 'rnmcluster02-node0[1-4]' -user root -command '-p /var/log/collectl/*20141201* -scmd -oD'


# 21:48:50  Reporting: 4 of 4
#                                      <--------CPU--------><-----------Memory-----------><----------Disks----------->
#Host               Date     Time      cpu sys inter  ctxsw Free Buff Cach Inac Slab  Map KBRead  Reads KBWrit Writes
rnmcluster02-node04 20141201 21:48:50    0   0    17     15  58M  10M 340M 162M  30M  39M      0      0      1      0
rnmcluster02-node03 20141201 21:48:50    0   0    11     13  58M  10M 340M 160M  30M  39M      0      0      0      0
rnmcluster02-node02 20141201 21:48:50    0   0    11     15  58M  10M 340M 163M  29M  39M      0      0      1      0
rnmcluster02-node01 20141201 21:48:50    0   0    12     14  33M  12M 342M 157M  27M  63M      0      0      1      0

# 21:49:00  Reporting: 4 of 4
#                                      <--------CPU--------><-----------Memory-----------><----------Disks----------->
#Host               Date     Time      cpu sys inter  ctxsw Free Buff Cach Inac Slab  Map KBRead  Reads KBWrit Writes
rnmcluster02-node04 20141201 21:49:00    0   0    17     15  58M  10M 340M 162M  30M  39M      0      0      4      0
rnmcluster02-node03 20141201 21:49:00    0   0    13     14  58M  10M 340M 160M  30M  39M      0      0      5      0
rnmcluster02-node02 20141201 21:49:00    0   0    12     14  58M  10M 340M 163M  29M  39M      0      0      1      0
rnmcluster02-node01 20141201 21:49:00    0   0    12     15  33M  12M 342M 157M  27M  63M      0      0      6      0

# 21:49:10  Reporting: 4 of 4
#                                      <--------CPU--------><-----------Memory-----------><----------Disks----------->
#Host               Date     Time      cpu sys inter  ctxsw Free Buff Cach Inac Slab  Map KBRead  Reads KBWrit Writes
rnmcluster02-node04 20141201 21:49:10    0   0    23     23  58M  10M 340M 162M  30M  39M      0      0      1      0
rnmcluster02-node03 20141201 21:49:10    0   0    19     24  58M  10M 340M 160M  30M  39M      0      0      2      0
rnmcluster02-node02 20141201 21:49:10    0   0    18     23  58M  10M 340M 163M  29M  39M      0      0      2      1
rnmcluster02-node01 20141201 21:49:10    0   0    18     24  33M  12M 342M 157M  27M  63M      0      0      1      0

Restrict the time frame by adding the --from and/or --thru arguments to -command:

[oracle@rnm-ol6-2 ~]$ colmux -addr 'rnmcluster02-node0[1-4]' -user root -command '-p /var/log/collectl/*20141201* -scmd -oD --from 21:40:00 --thru 21:40:10'
# 21:40:00  Reporting: 4 of 4
#                                      <--------CPU--------><-----------Memory-----------><----------Disks----------->
#Host               Date     Time      cpu sys inter  ctxsw Free Buff Cach Inac Slab  Map KBRead  Reads KBWrit Writes
rnmcluster02-node04 20141201 21:40:00    0   0    16     14  59M  10M 340M 162M  30M  39M      0      0      0      0
rnmcluster02-node03 20141201 21:40:00    0   0    12     14  58M  10M 340M 160M  30M  39M      0      0      8      1
rnmcluster02-node02 20141201 21:40:00    0   0    12     15  59M  10M 340M 162M  30M  39M      0      0      6      1
rnmcluster02-node01 20141201 21:40:00    0   0    13     16  56M  11M 341M 156M  27M  42M      0      0      7      1

# 21:40:10  Reporting: 4 of 4
#                                      <--------CPU--------><-----------Memory-----------><----------Disks----------->
#Host               Date     Time      cpu sys inter  ctxsw Free Buff Cach Inac Slab  Map KBRead  Reads KBWrit Writes
rnmcluster02-node04 20141201 21:40:10    0   0    26     33  59M  10M 340M 162M  30M  39M      1      0     10      2
rnmcluster02-node03 20141201 21:40:10    0   0    20     31  58M  10M 340M 160M  30M  39M      0      0      4      1
rnmcluster02-node02 20141201 21:40:10    0   0    23     35  59M  10M 340M 162M  30M  39M      3      0      9      2
rnmcluster02-node01 20141201 21:40:10    0   0    23     37  56M  11M 341M 156M  27M  42M      4      1      4      1

[oracle@rnm-ol6-2 ~]$
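
Replayed output is the same fixed-width text, so summarising a window across samples is straightforward. A hedged sketch that averages the cpu column per host (the two-sample here-doc is a canned stand-in for real replay output; in replay mode cpu is the fourth field, after host, date and time):

```shell
# Average the 'cpu' column (field 4 in replay output) per host from
# captured colmux replay text. Illustrative only.
awk '!/^#/ && NF {sum[$1]+=$4; n[$1]++}
     END {for (h in sum) printf "%s avg_cpu=%.1f\n", h, sum[h]/n[h]}' <<'EOF'
rnmcluster02-node04 20141201 21:40:00    0   0    16     14
rnmcluster02-node04 20141201 21:40:10    2   0    26     33
EOF
```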

colmux reference

You can find more about colmux from the website:

as well as from the built-in man page (man colmux).

Visualising collectl data with colplot

As a little bonus to the above, colmux is part of the collectl-utils package, which also includes colplot, a gnuplot-based web tool that renders collectl data into graphs. It’s pretty easy to set up, runs under Apache just fine, and just needs gnuplot installed if you haven’t got it already. It can report metrics across a cluster if you first make each node’s collectl data available locally to colplot.

Navigating to the web page shows the interface from which you can trigger graph plots based on the collectl data available:

colplot’s utilitarian graphs are a refreshing contrast to every webapp that is built nowadays promising “beautiful” visualisations (which no doubt the authors are “passionate” about making “awesome”):

The graphs are functional and can be scaled as needed, but each change is a trip back to the front page to tweak options and re-render:


For me, colplot is an excellent tool for point-in-time analysis and diagnostics, but for more generalised monitoring with drilldown into detail, it is too manual to be viable and I’ll be sticking with collectl -> graphite -> grafana with its interactive and flexible graph rendering:


Do note however that colplot specifically does not drop data points, so if there is a spike in your data you will see it. Other tools (possibly including Graphite, but I’ve not validated this) will, for larger timespans, average out data series so as to provide a smoother picture of a metric (eg instead of a point every second, maybe one every ten seconds). If you are doing close analysis of a system’s behaviour in a particular situation this may be a problem. If you want a more generalised overview of a system’s health, with the option to drill into historical data as needed, it will be less of an issue.


Summary

When working with multiple Linux machines I would first and foremost make sure SSH keys are set up in order to ease management through password-less logins.

After SSH keys, I would recommend pdsh for parallel execution of the same SSH command across the cluster. It’s a big time saver particularly when initially setting up the cluster given the installation and configuration changes that are inevitably needed.

To monitor a cluster I would always recommend collectl as the base metric collector. colmux works excellently for viewing these metrics from across the cluster in a single place from the commandline. For viewing the metrics over the longer term you can either store them in (or replay them into) Graphite/Carbon and render them in Grafana, or use colplot, which is installed as part of collectl-utils.

So now your turn – what particular tools or tips do you have for working with a cluster of Linux machines? Leave your answers in the comments below, or tweet them to me at @rmoff.

Categories: BI & Warehousing

Linux cluster sysadmin — Parallel command execution with PDSH

Tue, 2014-12-09 11:40

In this series of blog posts I’m taking a look at a few very useful tools that can make your life as the sysadmin of a cluster of Linux machines easier. This may be a Hadoop cluster, or just a plain simple set of ‘normal’ machines on which you want to run the same commands and monitoring.

Previously we looked at using SSH keys for intra-machine authorisation, which is a pre-requisite for what we'll look at here — executing the same command across multiple machines using PDSH. In the next post of the series we'll see how we can monitor OS metrics across a cluster with colmux.

PDSH is a very smart little tool that enables you to issue the same command on multiple hosts at once, and see the output. You need to have set up ssh key authentication from the client to all of the hosts, so if you followed the steps in the first section of this article you'll be good to go.

The syntax for using it is nice and simple:

  • -w specifies the addresses. You can use numerical ranges [1-4] and/or comma-separated lists of hosts. If you want to connect as a user other than the current user on the calling machine, you can specify it here (or as a separate -l argument)
  • After that is the command to run.

For example, running it against a small cluster of four machines that I have:

robin@RNMMBP $ pdsh -w root@rnmcluster02-node0[1-4] date

rnmcluster02-node01: Fri Nov 28 17:26:17 GMT 2014
rnmcluster02-node02: Fri Nov 28 17:26:18 GMT 2014
rnmcluster02-node03: Fri Nov 28 17:26:18 GMT 2014
rnmcluster02-node04: Fri Nov 28 17:26:18 GMT 2014

PDSH can be installed on the Mac under Homebrew (did I mention that Rittman Mead laptops are Macs, so I can do all of this straight from my work machine… :-) )

brew install pdsh

And if you want to run it on Linux, it's available from the EPEL yum repository (RHEL-compatible; packages for other distros are available too):

yum install pdsh

You can run it from a cluster node, or from your client machine (assuming your client machine is Mac/Linux).

Example – install and start collectl on all nodes

I started looking into pdsh when it came to setting up a cluster of machines from scratch. One of the must-have tools I like to have on any machine that I work with is the excellent collectl. This is an OS resource monitoring tool that I initially learnt of through Kevin Closson and Greg Rahn, and provides the kind of information you’d get from top etc – and then some! It can run interactively, log to disk, run as a service – and it also happens to integrate very nicely with graphite, making it a no-brainer choice for any server.

So, instead of logging into each box individually I could instead run this:

pdsh -w root@rnmcluster02-node0[1-4] yum install -y collectl
pdsh -w root@rnmcluster02-node0[1-4] service collectl start
pdsh -w root@rnmcluster02-node0[1-4] chkconfig collectl on

Yes, I know there are tools out there like puppet and chef that are designed for doing this kind of templated build of multiple servers, but the point I want to illustrate here is that pdsh enables you to do ad-hoc changes to a set of servers at once. Sure, once I have my cluster built and want to create an image/template for future builds, then it would be daft if I were building the whole lot through pdsh-distributed yum commands.

Example – setting up the date/timezone/NTPD

Often the accuracy of the clock on each server in a cluster is crucial, and we can easily sort this out with pdsh:

Install packages

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] yum install -y ntp ntpdate

Set the timezone:

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] ln -sf /usr/share/zoneinfo/Europe/London /etc/localtime

Force a time refresh:

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] ntpdate
rnmcluster02-node03: 30 Nov 20:46:22 ntpdate[27610]: step time server offset -2.928585 sec
rnmcluster02-node02: 30 Nov 20:46:22 ntpdate[28527]: step time server offset -2.946021 sec
rnmcluster02-node04: 30 Nov 20:46:22 ntpdate[27615]: step time server offset -2.915713 sec
rnmcluster02-node01: 30 Nov 20:46:25 ntpdate[29316]: rate limit response from server.
rnmcluster02-node01: 30 Nov 20:46:22 ntpdate[29316]: step time server offset -2.925016 sec

Set NTPD to start automatically at boot:

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] chkconfig ntpd on

Start NTPD:

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] service ntpd start

Example – using a HEREDOC (here-document) and sending quotation marks in a command with PDSH

Here documents (heredocs) are a nice way to embed multi-line content in a single command, enabling the scripting of a file creation rather than the clumsy instruction to “open an editor and paste the following lines into it and save the file as /foo/bar”.

Fortunately heredocs work just fine with pdsh, so long as you remember to enclose the whole command in quotation marks. And speaking of which, if you need to include quotation marks in your actual command, you need to escape them with a backslash. Here’s an example of both, setting up the configuration file for my ever-favourite gnu screen on all the nodes of the cluster:

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] "cat > ~/.screenrc <<EOF
hardstatus alwayslastline \"%{= RY}%H %{kG}%{G} Screen(s): %{c}%w %=%{kG}%c  %D, %M %d %Y  LD:%l\"
startup_message off
msgwait 1
defscrollback 100000
nethack on
EOF"

Now when I login to each individual node and run screen, I get a nice toolbar at the bottom:

Combining commands

To combine commands together that you send to each host you can use the standard bash operator semicolon ;

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] "date;sleep 5;date"
rnmcluster02-node01: Sun Nov 30 20:57:06 GMT 2014
rnmcluster02-node03: Sun Nov 30 20:57:06 GMT 2014
rnmcluster02-node04: Sun Nov 30 20:57:06 GMT 2014
rnmcluster02-node02: Sun Nov 30 20:57:06 GMT 2014
rnmcluster02-node01: Sun Nov 30 20:57:11 GMT 2014
rnmcluster02-node03: Sun Nov 30 20:57:11 GMT 2014
rnmcluster02-node04: Sun Nov 30 20:57:11 GMT 2014
rnmcluster02-node02: Sun Nov 30 20:57:11 GMT 2014

Note the use of the quotation marks to enclose the entire command string. Without them the bash interpreter will take the ; as the delimiter of the local commands, and try to run the subsequent commands locally:

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] date;sleep 5;date
rnmcluster02-node03: Sun Nov 30 20:57:53 GMT 2014
rnmcluster02-node04: Sun Nov 30 20:57:53 GMT 2014
rnmcluster02-node02: Sun Nov 30 20:57:53 GMT 2014
rnmcluster02-node01: Sun Nov 30 20:57:53 GMT 2014
Sun 30 Nov 2014 20:58:00 GMT

You can also use && and || to run subsequent commands conditionally if the previous one succeeds or fails respectively:

robin@RNMMBP $ pdsh -w root@rnmcluster02-node[01-4] "chkconfig collectl on && service collectl start"

rnmcluster02-node03: Starting collectl: [  OK  ]
rnmcluster02-node02: Starting collectl: [  OK  ]
rnmcluster02-node04: Starting collectl: [  OK  ]
rnmcluster02-node01: Starting collectl: [  OK  ]

Piping and file redirects

Similar to combining commands above, you can pipe the output of commands, and you need to use quotation marks to enclose the whole command string.

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] "chkconfig|grep collectl"
rnmcluster02-node03: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off
rnmcluster02-node01: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off
rnmcluster02-node04: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off
rnmcluster02-node02: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off

However, you can pipe the output from pdsh to a local process if you want:

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] chkconfig|grep collectl
rnmcluster02-node02: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off
rnmcluster02-node04: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off
rnmcluster02-node03: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off
rnmcluster02-node01: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off

The difference is that you'll be shifting the whole of the output across the network in order to process it locally, so if you're just grepping etc this doesn't make any sense. For utilities held locally and not on the remote server, though, it might.
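As a contrived but runnable illustration of post-processing pdsh output with a local pipe (no cluster needed — the pdsh-style lines are simulated with printf, and the hostnames are made up):

```shell
# pdsh prefixes each output line with "host: ", and lines arrive interleaved
# as hosts respond; piping locally through sort groups the stream back by
# host. Simulated here with printf so the pipeline runs without a cluster.
printf 'node03: up\nnode01: up\nnode02: up\n' | sort
```

The same shape of pipeline works with the real thing, e.g. `pdsh -w ... uptime | sort`.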

File redirects work the same way – within quotation marks and the redirect will be to a file on the remote server, outside of them it’ll be local:

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] "chkconfig>/tmp/pdsh.out"
robin@RNMMBP ~ $ ls -l /tmp/pdsh.out
ls: /tmp/pdsh.out: No such file or directory

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] chkconfig>/tmp/pdsh.out
robin@RNMMBP ~ $ ls -l /tmp/pdsh.out
-rw-r--r--  1 robin  wheel  7608 30 Nov 19:23 /tmp/pdsh.out

Cancelling PDSH operations

As you can see from above, the precise syntax of pdsh calls can be hugely important. If you run a command and it appears ‘stuck’, or if you have that heartstopping realisation that the shutdown -h now you meant to run locally you ran across the cluster, you can press Ctrl-C once to see the status of your commands:

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] sleep 30
^Cpdsh@RNMMBP: interrupt (one more within 1 sec to abort)
pdsh@RNMMBP:  (^Z within 1 sec to cancel pending threads)
pdsh@RNMMBP: rnmcluster02-node01: command in progress
pdsh@RNMMBP: rnmcluster02-node02: command in progress
pdsh@RNMMBP: rnmcluster02-node03: command in progress
pdsh@RNMMBP: rnmcluster02-node04: command in progress

and press it twice (or within a second of the first) to cancel:

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] sleep 30
^Cpdsh@RNMMBP: interrupt (one more within 1 sec to abort)
pdsh@RNMMBP:  (^Z within 1 sec to cancel pending threads)
pdsh@RNMMBP: rnmcluster02-node01: command in progress
pdsh@RNMMBP: rnmcluster02-node02: command in progress
pdsh@RNMMBP: rnmcluster02-node03: command in progress
pdsh@RNMMBP: rnmcluster02-node04: command in progress
^Csending SIGTERM to ssh rnmcluster02-node01
sending signal 15 to rnmcluster02-node01 [ssh] pid 26534
sending SIGTERM to ssh rnmcluster02-node02
sending signal 15 to rnmcluster02-node02 [ssh] pid 26535
sending SIGTERM to ssh rnmcluster02-node03
sending signal 15 to rnmcluster02-node03 [ssh] pid 26533
sending SIGTERM to ssh rnmcluster02-node04
sending signal 15 to rnmcluster02-node04 [ssh] pid 26532
pdsh@RNMMBP: interrupt, aborting.

If you’ve got threads yet to run on the remote hosts, but want to keep running whatever has already started, you can use Ctrl-C, Ctrl-Z:

robin@RNMMBP ~ $ pdsh -f 2 -w root@rnmcluster02-node[01-4] "sleep 5;date"
^Cpdsh@RNMMBP: interrupt (one more within 1 sec to abort)
pdsh@RNMMBP:  (^Z within 1 sec to cancel pending threads)
pdsh@RNMMBP: rnmcluster02-node01: command in progress
pdsh@RNMMBP: rnmcluster02-node02: command in progress
^Zpdsh@RNMMBP: Canceled 2 pending threads.
rnmcluster02-node01: Mon Dec  1 21:46:35 GMT 2014
rnmcluster02-node02: Mon Dec  1 21:46:35 GMT 2014

NB the above example illustrates the use of the -f argument to limit how many threads are run against remote hosts at once. We can see the command is left running on the first two nodes and returns the date, whilst the Ctrl-C – Ctrl-Z stops it from being executed on the remaining nodes.


By default, when you ssh to a new host for the first time you'll be prompted to validate the remote host's SSH key fingerprint.

The authenticity of host 'rnmcluster02-node02 (' can't be established.
RSA key fingerprint is 00:c0:75:a8:bc:30:cb:8e:b3:8e:e4:29:42:6a:27:1c.
Are you sure you want to continue connecting (yes/no)?

This is one of those prompts that the majority of us just hit enter at and ignore; if that includes you then you will want to make sure that your PDSH call doesn’t fall in a heap because you’re connecting to a bunch of new servers all at once. PDSH is not an interactive tool, so if it requires input from the hosts it’s connecting to it’ll just fail. To avoid this SSH prompt, you can set up the environment variable PDSH_SSH_ARGS_APPEND as follows:

export PDSH_SSH_ARGS_APPEND="-q -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"

The -q makes failures less verbose, and the -o passes in a couple of options, StrictHostKeyChecking to disable the above check, and UserKnownHostsFile to stop SSH keeping a list of host IP/hostnames and corresponding SSH fingerprints (by pointing it at /dev/null). You’ll want this if you’re working with VMs that are sharing a pool of IPs and get re-used, otherwise you get this scary failure:

Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
Please contact your system administrator.
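If you'd rather not export the environment variable in every shell, the same options can be set persistently (for plain ssh and scp as well as pdsh) with a host-pattern entry in ~/.ssh/config — a sketch using this post's node naming, where LogLevel ERROR quietens the resulting warnings in a similar spirit to -q; the same security caveats apply:

```
Host rnmcluster02-node*
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
    LogLevel ERROR
```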

For both of these above options, make sure you're aware of the security implications that you're opening yourself up to. For a sandbox environment I just ignore them; for anything where security is of importance, make sure you are aware of exactly which server you are connecting to over SSH, and protect yourself from MitM attacks.

PDSH Reference

You can find out more about PDSH at


When working with multiple Linux machines I would first and foremost make sure SSH keys are set up in order to ease management through password-less logins.

After SSH keys, I would recommend pdsh for parallel execution of the same SSH command across the cluster. It’s a big time saver particularly when initially setting up the cluster given the installation and configuration changes that are inevitably needed.

In the next article of this series we’ll see how the tool colmux is a powerful way to monitor OS metrics across a cluster.

So now your turn – what particular tools or tips do you have for working with a cluster of Linux machines? Leave your answers in the comments below, or tweet them to me at @rmoff.

Categories: BI & Warehousing

Linux cluster sysadmin — SSH keys

Tue, 2014-12-09 05:34

In this short series of blog posts I’m going to take a look at a few very useful tools that can make your life as the sysadmin of a cluster of Linux machines easier. This may be a Hadoop cluster, or just a plain simple set of ‘normal’ machines on which you want to run the same commands and monitoring.

To start with, we’re going to use the ever-awesome ssh keys to manage security on the cluster. After that we’ll look at executing the same command across multiple machines at the same time using PDSH, and then monitoring OS metrics across a cluster with colmux.

In a nutshell, ssh keys enable us to do password-less authentication in a secure way. You can find a detailed explanation of them in a previous post that I wrote, tips and tricks for OBIEE Linux sysadmin. Beyond the obvious time-saving function of not having to enter a password each time we connect to a machine, having SSH keys in place enables the use of the tools we discuss later, pdsh and colmux.

Working with SSH keys involves taking the public key from a pair, and adding that to another machine in order to allow the owner of the pair’s private key to access that machine. What we’re going to do here is generate a unique key pair that will be used as the identity across the cluster. So each node will have a copy of the private key, in order to be able to authenticate to any other node, which will be holding a copy of the public key (as well as, in turn, the same private key).

In this example I’m going to use my own client machine to connect to the cluster. You could easily use any of the cluster nodes too if a local machine would not be appropriate.
As a side-note, this is another reason why I love the fact that Rittman Mead standard-issue laptop is a MacBook, and just under the covers of Mac OS is a *nix-based command-line meaning that a lot of sysadmin work can be done natively without needing additional tools that you would on Windows (e.g. PuTTY, WinSCP, Pageant, etc etc).

SSH key strategy

We’ve several ways we could implement the SSH keys. Because it’s purely a sandbox cluster, I could use the same SSH key pair that I generate for the cluster on my machine too, so the same public/private key pair is distributed thus:

If we wanted a bit more security, a better approach might be to distribute my personal SSH key’s public key across the cluster too, and leave the cluster’s private key to truly identify cluster nodes alone. An additional benefit of this approach is that the client does not need to hold a copy of the cluster’s SSH private key, instead just continuing to use their own.

For completeness, the extreme version of the key strategy would be for each machine to have its own ssh key pair (i.e. its own security identity), with the corresponding public keys distributed to the other nodes in the cluster:

But anyway, here we’re using the second option – a unique keypair used across the cluster and the client’s public ssh key distributed across the cluster too.

Generating the SSH key pair

First, we need to generate the key. I’m going to create a folder to hold it first, because in a moment we’re going to push it and a couple of other files out to all the servers in the cluster and it’s easiest to do this from a single folder.

mkdir /tmp/rnmcluster02-ssh-keys

Note that in the ssh-keygen command below I’m specifying the target path for the key with the -f argument; if you don’t then watch out that you don’t accidentally overwrite your own key pair in the default path of ~/.ssh.

The -q -N "" flags instruct the key generation to use no passphrase for the key and to not prompt for it either. This is the lowest friction approach (you don’t need to unlock the ssh key with a passphrase before use) but also the least secure. If you’re setting up access to a machine where security matters then bear in mind that without a passphrase on an ssh key anyone who obtains it can therefore access any machine to which the key has been granted access (i.e. on which its public key has been deployed).

ssh-keygen -f /tmp/rnmcluster02-ssh-keys/id_rsa -q -N ""

This generates two files in the tmp folder – the private and public (.pub) keys of the pair:

robin@RNMMBP ~ $ ls -l /tmp/rnmcluster02-ssh-keys
total 16
-rw-------  1 robin  wheel  1675 30 Nov 17:28 id_rsa
-rw-r--r--  1 robin  wheel   400 30 Nov 17:28 id_rsa.pub
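As an aside, if you do want a passphrase on a key without paying for it on every connection, ssh-agent is the usual middle ground: you unlock the key once per session and the agent supplies it thereafter. A minimal sketch, self-contained in a scratch directory (the key here is generated passphrase-less purely so the snippet runs non-interactively — in real use you'd set one):

```shell
# ssh-agent middle ground: unlock a passphrase-protected key once, then let
# the agent answer for it. A scratch key is used so nothing real is touched;
# it is passphrase-less here only so this runs non-interactively.
d=$(mktemp -d)
ssh-keygen -f "$d/id_rsa" -q -N ""
eval "$(ssh-agent -s)" > /dev/null    # start an agent for this shell
ssh-add "$d/id_rsa" 2> /dev/null      # with a passphrase, this prompts once
ssh-add -l                            # list the loaded identities
ssh-agent -k > /dev/null              # tidy up the scratch agent
```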

Preparing the authorized_keys file

Now we’ll prepare the authorized_keys file, which is where the public SSH key of any identity permitted to access the machine is stored. Note that each user on a machine has their own authorized_keys file, in ~/.ssh/. So for example, the root user has the file in /root/.ssh/authorized_keys, and the owner of any public key listed in that file will be able to connect to the server as the root user. Be aware of the American [mis-]spelling of “authorized” – spell it [correctly] as “authorised” and you’ll not get any obvious errors, but the ssh key login won’t work either.

So we’re going to copy the public key of the unique pair that we just created for the cluster into the authorized_keys file. In addition we will copy in our own personal ssh key (and any other public key that we want to give access to all the nodes in the cluster):

cp /tmp/rnmcluster02-ssh-keys/id_rsa.pub /tmp/rnmcluster02-ssh-keys/authorized_keys
# [optional] Now add any other keys (such as your own) into the authorized_keys file just created
cat ~/.ssh/id_rsa.pub >> /tmp/rnmcluster02-ssh-keys/authorized_keys
# NB make sure the previous step is a double >> not > since the double appends to the file, a single overwrites.

Distributing the SSH artefacts

Now we’re going to push this set of SSH files out to the .ssh folder of the target user on each node, which in this case is the root user. From a security point of view it’s probably better to use a non-root user for login and then sudo as required, but we’re keeping things simple (and less secure) to start with here. So the files in our folder are:

  • id_rsa – the private key of the key pair
  • id_rsa.pub – the public key of the key pair. Strictly speaking this doesn’t need distributing to all nodes, but it’s conventional and handy to hold it alongside the private key.
  • authorized_keys – this is the file that the sshd daemon on each node will look at to validate an incoming login request’s offered private key, and so needs to hold the public key of anyone who is allowed to access the machine as this user.

To copy the files we’ll use scp, but how you get them in place doesn’t really matter so much, so long as they get to the right place:

scp -r /tmp/rnmcluster02-ssh-keys root@rnmcluster02-node01:~/.ssh

At this point you’ll need to enter the password for the target user, but rejoice! This is the last time you’ll need to enter it as subsequent logins will be authenticated using the ssh keys that you’re now configuring.

Run the scp for all nodes in the cluster. If you’ve four nodes in the cluster your output should look something like this:

$ scp -r /tmp/rnmcluster02-ssh-keys/ root@rnmcluster02-node01:~/.ssh
root@rnmcluster02-node01's password:
authorized_keys                                                  100%  781     0.8KB/s   00:00
id_rsa                                                           100% 1675     1.6KB/s   00:00
id_rsa.pub                                                       100%  400     0.4KB/s   00:00
$ scp -r /tmp/rnmcluster02-ssh-keys/ root@rnmcluster02-node02:~/.ssh
Warning: Permanently added the RSA host key for IP address '' to the list of known hosts.
root@rnmcluster02-node02's password:
authorized_keys                                                  100%  781     0.8KB/s   00:00
id_rsa                                                           100% 1675     1.6KB/s   00:00
id_rsa.pub                                                       100%  400     0.4KB/s   00:00
$ scp -r /tmp/rnmcluster02-ssh-keys/ root@rnmcluster02-node03:~/.ssh
root@rnmcluster02-node03's password:
authorized_keys                                                  100%  781     0.8KB/s   00:00
id_rsa                                                           100% 1675     1.6KB/s   00:00
id_rsa.pub                                                       100%  400     0.4KB/s   00:00
$ scp -r /tmp/rnmcluster02-ssh-keys/ root@rnmcluster02-node04:~/.ssh
root@rnmcluster02-node04's password:
authorized_keys                                                  100%  781     0.8KB/s   00:00
id_rsa                                                           100% 1675     1.6KB/s   00:00
id_rsa.pub                                                       100%  400     0.4KB/s   00:00

Testing login authenticated through SSH keys

The moment of truth. From your client machine, try to ssh to each of the cluster nodes. If you are prompted for a password, then something is not right – see the troubleshooting section below.

If you put your own public key in authorized_keys when you created it then you don’t need to specify which key to use when connecting because it’ll use your own private key by default:

robin@RNMMBP ~ $ ssh root@rnmcluster02-node01
Last login: Fri Nov 28 17:13:23 2014 from

[root@localhost ~]#

There we go – logged in automagically with no password prompt. If we’re using the cluster’s private key (rather than our own) you need to specify it with -i when you connect.

robin@RNMMBP ~ $ ssh -i /tmp/rnmcluster02-ssh-keys/id_rsa root@rnmcluster02-node01
Last login: Fri Nov 28 17:13:23 2014 from

[root@localhost ~]#

Troubleshooting SSH key connections

SSH keys are one of the best things in a sysadmin’s toolkit, but when they don’t work they can be a bit tricky to sort out. The first thing to check is that on the target machine the authorized_keys file that does all the magic (by listing the ssh keys that are permitted to connect inbound on a host to the given user) is in place:

[root@localhost .ssh]# ls -l ~/.ssh/authorized_keys
-rw-r--r-- 1 root root 775 Nov 30 18:55 /root/.ssh/authorized_keys

If you get this:

[root@localhost .ssh]# ls -l ~/.ssh/authorized_keys
ls: cannot access /root/.ssh/authorized_keys: No such file or directory

then you have a problem.

One possible issue in this specific instance could be that the above pre-canned scp assumes that the user’s .ssh folder doesn’t already exist (since it doesn’t, on brand new servers) and so specifies it as the target name for the whole rnmcluster02-ssh-keys folder. However if it does already exist then it ends up copying the rnmcluster02-ssh-keys folder into the .ssh folder:

[root@localhost .ssh]# ls -lR
.:
total 12
-rw------- 1 root root 1675 Nov 22  2013 id_rsa
-rw-r--r-- 1 root root  394 Nov 22  2013 id_rsa.pub
drwxr-xr-x 2 root root 4096 Nov 30 18:49 rnmcluster02-ssh-keys

./rnmcluster02-ssh-keys:
total 12
-rw-r--r-- 1 root root  775 Nov 30 18:49 authorized_keys
-rw------- 1 root root 1675 Nov 30 18:49 id_rsa
-rw-r--r-- 1 root root  394 Nov 30 18:49 id_rsa.pub
[root@localhost .ssh]#

To fix this simply move the authorized_keys from rnmcluster02-ssh-keys back into .ssh:

[root@localhost .ssh]# mv ~/.ssh/rnmcluster02-ssh-keys/authorized_keys ~/.ssh/

Other frequent causes of problems are file/folder permissions that are too lax on the target user’s .ssh folder (which can be fixed with chmod -R 700 ~/.ssh) or the connecting user’s ssh private key (fix: chmod 600 id_rsa). The latter will show on connection attempts very clearly:

robin@RNMMBP ~ $ ssh -i /tmp/rnmcluster02-ssh-keys/id_rsa root@rnmcluster02-node01
Permissions 0777 for '/tmp/rnmcluster02-ssh-keys/id_rsa' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: /tmp/rnmcluster02-ssh-keys/id_rsa
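The permission fixes above can be tried out safely against a scratch directory standing in for ~/.ssh, so nothing real is touched (GNU coreutils stat assumed for the final check):

```shell
# Recreate the "too open" private key problem in a scratch directory,
# then apply the fixes described above and confirm the resulting modes.
d=$(mktemp -d)                # stand-in for ~/.ssh
touch "$d/id_rsa" "$d/authorized_keys"
chmod 777 "$d/id_rsa"         # deliberately too open, as in the error above
chmod 700 "$d"                # the .ssh folder itself: owner-only
chmod 600 "$d/id_rsa"         # private key: owner read/write only
chmod 644 "$d/authorized_keys"
stat -c '%a %n' "$d/id_rsa"   # GNU stat; shows the fixed mode 600
```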

Another one that has bitten me twice over time – and that eludes the troubleshooting I’ll demonstrate in a moment – is that SELinux gets stroppy about root access using ssh keys. I always just take this as a handy reminder to disable selinux (in /etc/selinux/config, set SELINUX=disabled), having never had cause to leave it enabled. But, if you do need it enabled you’ll need to hit the interwebs to check the exact cause/solution for this problem.

So to troubleshoot ssh key problems in general, do two things. Firstly, from the client side, specify verbosity (-v for a bit of verbosity, -vvv for the most):

ssh -v -i /tmp/rnmcluster02-ssh-keys/id_rsa root@rnmcluster02-node01

You should observe ssh trying to use the private key, and if the server rejects it it’ll fall back to any other ssh private keys it can find, and then password authentication:

debug1: Offering RSA public key: /tmp/rnmcluster02-ssh-keys/id_rsa
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Next authentication method: password

Quite often the problem will be on the server side, so assuming that you can still connect to the server (eg through the physical console, or using password authentication) then go and check /var/log/secure where you’ll see all logs relating to attempted connections. Here’s the log file corresponding to the above client log, where ssh key authentication is attempted but fails, and then password authentication is used to successfully connect:

Nov 30 18:15:05 localhost sshd[13156]: Authentication refused: bad ownership or modes for file /root/.ssh/authorized_keys
Nov 30 18:15:15 localhost sshd[13156]: Accepted password for root from port 59305 ssh2
Nov 30 18:15:15 localhost sshd[13156]: pam_unix(sshd:session): session opened for user root by (uid=0)

Now we can see clearly what the problem is – “bad ownership or modes for file /root/.ssh/authorized_keys”.

The last roll of the troubleshooting dice is to get sshd (the ssh daemon that runs on the host we’re trying to connect to) to issue more verbose logs. You can either set LogLevel DEBUG1 (or DEBUG2, or DEBUG3) in /etc/ssh/sshd_config and restart the ssh daemon (service sshd restart), or you can actually run a (second) ssh daemon from the host with specific logging. This would be appropriate on a multi-user server where you can’t just go changing sshd configuration. To run a second instance of sshd you’d use:

/usr/sbin/sshd -D -d -p 2222

You have to run sshd from an absolute path (you’ll get told this if you don’t). The -D flag stops it running as a daemon and instead runs interactively, so we can see easily all the output from it. -d specifies the debug logging (-dd or -ddd for greater levels of verbosity), and -p 2222 tells sshd to listen on port 2222. Since we’re doing this on top of the existing sshd, we obviously can’t use the default ssh port (22) so pick another port that is available (and not blocked by a firewall).

Now on the client retry the connection, but pointing to the port of the interactive sshd instance:

ssh -v -p 2222 -i /tmp/rnmcluster02-ssh-keys/id_rsa root@rnmcluster02-node01

When you run the command on the client you should get both the client and host machine debug output go crackers for a second, giving you plenty of diagnostics to pore through and analyse the ssh handshake etc to get to the root of the issue.

Hopefully you’ve now sorted your SSH keys, because in the next article we’re going to see how we can use them to run commands against multiple servers at once using pdsh.


When working with multiple Linux machines I would first and foremost make sure SSH keys are set up in order to ease management through password-less logins.

We’ll see in the next couple of articles some other tools that are useful when working on a cluster:

  • pdsh
  • colmux

I’m interested in what you think – what particular tools or tips do you have for working with a cluster of Linux machines? Leave your answers in the comments below, or tweet them to me at @rmoff.

Categories: BI & Warehousing