
BI & Warehousing

OBIEE How-To: A View Selector for your Dashboard

Rittman Mead Consulting - Wed, 2014-10-29 20:00

A common problem report developers face is that different user groups have different needs and preferences, and as a consequence want to see their data presented in different ways. A classic example: some users prefer a graph while others want a table. So, how do we do this? It’s a no-brainer… we use a view selector. View selectors give us a great deal of flexibility by allowing us to swap out one analysis view for another. You might even take it a step further and use a view selector to swap out an entire compound layout for another one, giving the user an entirely different set of views to look at. Truly powerful stuff, right?

But view selectors do have one limitation… they’re only available at the analysis level. What if you wanted this selector functionality at the dashboard level, so that you could swap out an analysis from one subject area for one from a different subject area? Or what if you wanted to be able to switch one dashboard prompt for another one? You’re out of luck, it’s just not possible…

Just kidding… of course it’s possible. As it turns out, it’s fairly straightforward to build your own dashboard level view selector using other objects already provided by OBIEE out-of-the-box.

Create a dashboard variable prompt to drive the content. We need a way for the users to select the view they want to see. View selectors have a built-in dropdown prompt to accomplish this at the analysis level. To do this at the dashboard level we’re going to use a dashboard prompt.

So, the first step is to create a new dashboard prompt object and add a variable prompt. You can name the variable whatever you wish; for this example we’re just going to call it P_SECTION. You can set the User Input to whatever you want, but it’s important that only one option is selected at a time… multiple values should not be allowed. Let’s set the user input to “Choice List” and add some custom values.

What you name these custom values isn’t important, but the labels should be descriptive enough so that the users understand the different options. Just keep in mind, the values you use here will need to exactly match the values used in the analysis we create in the next step. For this example, let’s use ‘Section1’, ‘Section2’, and ‘Section3’ to keep things simple.

JFB - View Selector - P_SECTION Prompt

Create an analysis to drive the conditional logic. We need to create an analysis that will return a set number of rows for each of the options in the prompt we just created. The number of rows returned then drives which section we see on the dashboard.

Ultimately, the logic of this analysis doesn’t matter, and there are a dozen ways to accomplish this. To keep things simple, we’re just going to use CASE statements. While not an elegant solution, it’ll work just fine for our basic example.

Add three columns to the criteria; we’ll use a Time dimension and modify each column formula with one of the following CASE statements. Make sure that the text strings match the Custom Values used in the prompt.

CASE WHEN "Time"."T05 Per Name Year" IN ('2006') THEN 'Section1' END

CASE WHEN "Time"."T05 Per Name Year" IN ('2006', '2007') THEN 'Section2' END

CASE WHEN "Time"."T05 Per Name Year" IN ('2006', '2007', '2008') THEN 'Section3' END

JFB - View Selector - Table

Now we need to update the filter so that the appropriate rows are shown based upon what the user selects. Basically, we need the request to return 1, 2, or 3 rows based upon our P_SECTION presentation variable.

For our example we’re going to create a filter for each of the options and set them equal to the presentation variable we created earlier in our dashboard prompt. Only one filter will be true at a time, so the operator between these filters has been set to OR. Also, you’ll notice that the default value for the presentation variable has been set to ‘Section1’ across the board. If, for whatever reason, the P_SECTION variable isn’t set, we want the dashboard to default to the first section.

JFB - View Selector - Filter

CASE WHEN "Time"."T05 Per Name Year" IN ('2006') THEN 'Section1' END is equal to / is in @{P_SECTION}{Section1}
OR CASE WHEN "Time"."T05 Per Name Year" IN ('2006', '2007') THEN 'Section2' END is equal to / is in @{P_SECTION}{Section1}
OR CASE WHEN "Time"."T05 Per Name Year" IN ('2006', '2007', '2008') THEN 'Section3' END is equal to / is in @{P_SECTION}{Section1}

So, let’s quickly walk through how this works. The end user selects ‘Section1’ from the dashboard prompt. That selection is stored in our P_SECTION presentation variable, which is then passed to and used by our filter. With ‘Section1’ selected, only the first line of the filter holds true, which results in a single row being returned. When ‘Section2’ is chosen, the second line of the filter is true, which returns two rows, and so on.

We’re almost done; in the next step we’ll create some conditions on the individual dashboard sections and put it all together.

Create sections and set some conditions. We just need to create our sections and set some conditions so that they are shown/hidden appropriately. Create a new dashboard page. Edit the dashboard page and drag three empty sections on to the page. Create a condition on the first section using the Analysis created in the last step. The first condition we need to create should be True If Row Count is equal to 1.

JFB - View Selector - Condition

Are you beginning to see how this is going to work? The only time we’ll get a single row back is when the presentation variable is set to ‘Section1’. When P_SECTION is set to ‘Section2’ we’ll get two rows back from our analysis. Go ahead and create a second condition that is True If Row Count is equal to 2 for section 2. For section 3 create a condition that’s True If Row Count is equal to 3.

JFB - View Selector - Dashboard Editor

Since we aren’t adding content to these sections, you’ll want to make sure to enable the option to “Show Section Title” or add a couple text fields so that you can easily identify which section is rendered on the page. Lastly, drag the dashboard prompt onto the page. Save the dashboard page and let’s take a look.

When the page first renders, you should see something similar to the following screenshot. The prompt is set to ‘Section1’ and sure enough, Section 1 appears below it. If you change the selection to ‘Section2’ or ‘Section3’ and hit apply, Section 1 will be hidden and the corresponding content will appear. All that’s left now would be to go back and add content to the sections.

JFB - View Selector - Result

So, using only out-of-the-box features, we were able to create an extremely versatile and dynamic bit of functionality… and all it took was a dashboard prompt, an analysis to hold our conditional logic, and some sections and conditions.

This approach is just another tool that you can use to help deliver the dynamic content your users are looking for. It provides flexibility within the context of a single dashboard page and also limits the need to navigate (and maintain) multiple pages. Admittedly, the example we just walked through isn’t all that exciting, but hopefully you can see the potential.

Some of your users want a minimalist view allowing them to filter on just the basics, while others want to slice and dice by everything under the sun? Create two prompts, a basic and an advanced, and allow the users to switch between the two.

JFB - View Selector - BasicAdv

Want to pack a large amount of charts into a page while still minimizing scrolling for those poor souls working with 1024×768? No problem, have a low-res option of the dashboard.

JFB - View Selector - LowRes

The finance department wants to see a dashboard full of bar charts, but the payroll department is being totally unreasonable and only wants to see line graphs? Well, you get the idea…

Categories: BI & Warehousing

Part 4: DBAs guide to managing sandboxes

Keith Laker - Tue, 2014-10-28 08:11

This is the next part in my on-going series of posts on the topic of how to successfully manage sandboxes within an Oracle data warehouse environment. In Part 1 I provided an overview of sandboxing (key characteristics, deployment models) and introduced the concept of a lifecycle called BOX’D (Build, Observe, X-Charge and Drop). In Part 2 I briefly explored the key differences between data marts and sandboxes. Part 3 explored the Build-phase of our lifecycle.

Now, in this post I am going to focus on the Observe-phase. At this stage in the lifecycle we are concerned with managing our sandboxes. Most modern data warehouse environments will be running hundreds of data discovery projects, so it is vital that the DBA can monitor and control the resources that each sandbox consumes by establishing rules to control the resources available to each project, both in general terms and on a project-by-project basis.

In most cases, DBAs will set up a sandbox with dedicated resources. However, this approach does not make efficient use of resources, since unused resources cannot be shared across other projects. The key advantage of Oracle Multitenant is its unique approach to resource management. The only realistic way to support thousands of sandboxes, which in today’s analytically driven environments is entirely possible if not inevitable, is to allocate one chunk of memory and one set of background processes for each container database rather than for each individual sandbox. This provides much greater utilisation of existing IT resources and greater scalability as multiple pluggable sandboxes are consolidated into the multitenant container database.

Resources

 

Using multitenant we can now expand and reduce our resources as required to match our workloads. In the example below we are running an Oracle RAC environment with two nodes in the cluster. You can see that only certain PDBs are open on certain nodes of the cluster, and this is achieved by opening the corresponding services on those nodes as appropriate. In this way we are partitioning the SGA across the various nodes of the RAC cluster. This allows us to achieve the scalability we need for managing lots of sandboxes. At this stage we have a lot of project teams running large, sophisticated workloads, which is causing the system to run close to capacity, as represented by the little resource meters.

 

Expand 1
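
The mapping of PDBs to particular nodes is typically controlled through services. As a rough sketch (the CDB, PDB, service and instance names below are made up for illustration), a DBA might create a service for a sandbox PDB and open it on its preferred instance with srvctl:

# Create a service for the 'proj1' sandbox PDB, preferring instance cdb11 on node 1
srvctl add service -db cdb1 -service proj1_svc -pdb proj1 -preferred cdb11

# Start the service so that connections for this sandbox are directed to that node
srvctl start service -db cdb1 -service proj1_svc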

 

It would be great if our DBA could add some additional processing power to this environment to handle this increased workload. With 12c what we can do is simply drop another node into the cluster, which allows us to spread the processing of the various sandbox workloads out across the expanded cluster.

Expand 2

Now our little resource meters are showing that the load on the system is a lot more comfortable. This shows that the new multitenant feature integrates really well with RAC. It’s a symbiotic relationship whereby Multitenant makes RAC better and RAC makes Multitenant better.

So now that we can add resources to the cluster, how do we actually manage resources across each of our sandboxes? As a DBA I am sure that you are familiar with the features in Resource Manager that allow you to control system resources: CPU, sessions, parallel execution servers, Exadata I/O. If you need a quick refresher on Resource Manager then check out this presentation by Dan Norris “Overview of Oracle Resource Manager on Exadata” and the chapter on resource management in the 12c DBA guide.

With 12c, Resource Manager is now multitenant-aware. Using Resource Manager we can configure policies to control how system resources are shared across the sandboxes/projects. Policies control how resources are utilised across PDBs, creating hard limits that can enforce a “get what you pay for” model, which is an important point when we move forward to the next phase of the lifecycle: X-Charge. Within Resource Manager we have adopted an “industry standard” approach to controlling resources based on two notions:

  1. a number of shares is allocated to each PDB
  2. a maximum utilization limit may be applied to each PDB

To help DBAs quickly deploy PDBs with a pre-defined set of shares and utilisation limits there is a “Default” configuration that works, even as PDBs are added or removed. How would this work in practice? Using a simple example this is how we could specify resource plans for the allocation of CPU between three PDBs:

RM 1

 

As you can see, there are four shares in total: two for the data warehouse and one each for our two sandboxes. This means that our data warehouse is guaranteed 50% of the CPU whatever else is going on in the other sandboxes (PDBs). Similarly, each of our sandbox projects is guaranteed at least 25%. However, in this case we did not specify settings for maximum utilisation. Therefore, our marketing sandbox could use 100% of the CPU if both the data warehouse and the sales sandbox were idle.
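
If you want to experiment with this yourself, a minimal sketch of the plan above using the DBMS_RESOURCE_MANAGER package might look like the following (the plan and PDB names are made up for illustration, and this is a sketch rather than a definitive setup):

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  -- CDB-level plan covering the warehouse and the two sandbox PDBs
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'sandbox_plan',
    comment => 'Shares for DW and sandbox PDBs');

  -- Two shares for the data warehouse PDB (guaranteed roughly 50% of CPU)
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan               => 'sandbox_plan',
    pluggable_database => 'dw_pdb',
    shares             => 2);

  -- One share each for the sandboxes; no utilization_limit is set, so either
  -- sandbox can soak up spare CPU when the other PDBs are idle
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan               => 'sandbox_plan',
    pluggable_database => 'sales_sandbox',
    shares             => 1);

  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan               => 'sandbox_plan',
    pluggable_database => 'marketing_sandbox',
    shares             => 1);

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

The plan is then enabled in the root container with ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'sandbox_plan';. The “Default” profile discussed here can be adjusted with DBMS_RESOURCE_MANAGER.UPDATE_CDB_DEFAULT_DIRECTIVE so that newly plugged-in sandboxes automatically pick up sensible shares and limits.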

By using the “Default” profile we can simplify the whole process of adding and removing sandboxes/PDBs. As we add and remove sandboxes, the system resources are correctly rebalanced across all the plugged-in sandboxes/PDBs by using the settings in the default profile, as shown below.

RM 2

 

Summary

In this latest post on sandboxing I have examined the “Observe” phase of our BOX’D sandbox lifecycle. With the new multitenant-aware Resource Manager we can configure policies to control how system resources are shared across sandboxes. Using Resource Manager it is possible to configure a policy so that the first tenant on a large, powerful server experiences a realistic share of the resources that will eventually be shared as other tenants are plugged in.

In the next post I will explore the next phase of our sandbox lifecycle, X-charge, which will cover the metering and chargeback services for pluggable sandboxes. 

 


Categories: BI & Warehousing

Monitoring OBIEE with the ELK stack

Rittman Mead Consulting - Tue, 2014-10-21 09:37

Monitoring the health of an OBIEE system and diagnosing problems that may occur is a vital task for the system’s administrator and support staff. It’s one that at Rittman Mead we help customers with implementing themselves, and also provide as a managed service. In this article I am going to discuss the ELK stack, which fills a specific gap between the high-level monitoring and configuration functionality of Enterprise Manager 11g Fusion Middleware Control, and the Enterprise-grade monitoring, alerting and configuration management of Enterprise Manager 12c Cloud Control.

The ELK stack enables you to rapidly access both summary and detail information across the stack, supporting swift identification and diagnosis of any issues that may occur. The responsive interface lets you drill into time periods or any ad-hoc field or filter as you wish, to analyse and diagnose problems. Data can be summarised and grouped arbitrarily, displaying relative error rates and ensuring that genuine problems are not lost in the ‘noise’ of usual operation.

Out of the box, OBIEE ships with Enterprise Manager 11g Fusion Middleware Control (FMC), which as the name says is part of the Enterprise Manager line of tools from Oracle for managing systems. It is more of a configuration and deployment tool than it is really a monitoring and diagnostics one. The next step up is FMC’s (very) big brother, Enterprise Manager 12c Cloud Control (EM12c). This is very much its own product, requiring its own infrastructure and geared up to monitoring an organisation’s entire fleet of [Oracle] hardware and software. With this greatly enhanced functionality in EM12c also comes a license cost. The ELK stack conceptually fits perfectly alongside your existing EM FMC, providing a most excellent OBIEE monitoring dashboard and analysis tool, and allowing you to explore the kind of diagnostics and historical data that you could have access to in EM12c.

In ELK we can see at a glance what kind of relative activity there has been on the system over the past few days:

There have been some errors, and the top three nQS and ORA error codes and messages are shown. This is an important differentiator from EM, where you can search for errors but cannot see straightaway whether an error is a one-off or a repeated occurrence. By grouping by error message it’s possible to quickly see what the biggest problem on a system may currently be:

At this point we might want to drill down into what was being run when the errors were being thrown. For example, from the error summary alone we can see the biggest problem was a locked database account – but which database was being queried? Lower down the dashboard page is a list of log details, and by clicking on the search icon against an error message we can filter the results shown:

We can use the search icon again to restrict results by ECID:

And from there see all the related log entries, including which connection pool the request was against (and thus which database account is locked):

Another way of diagnosing a sudden rash of errors would be to instead drill down on time alone to take a more holistic view of the logs (useful also given that ECIDs don’t always give the full picture). Using the system activity timeline along with the events log view it is a piece of cake to do this – simply click and drag a time window on the chart to instantly zoom into it.

Taking a step back up, we can see at a glance which areas of the OBIEE metadata model (RPD) are being used, as well as where we are pulling logs from – and all of these are clickable in order to filter the results further. So it’s easy to see, for a given subject area, what’s the current error rate? Or to quickly access all the log files for a specific set of components alone (for example, BI Server and OPMN). Any field that is displayed, whether in a chart or a detailed log view, can be clicked and used as the input for an ad-hoc filter.

It’s not just errors and logs we can monitor – the current and trending performance of the system (or a part of it; note the filtering by subject area and database described above) can be observed and of course, drilled into:

The ELK stack

The ELK stack is a suite of free software made up of three tools, the first letter of each giving it its name:

  • ElasticSearch
  • Logstash
  • Kibana

At a very high level, we collect and enrich diagnostic data from log files using logstash, store it in ElasticSearch, and present and analyse it through Kibana.

  • ElasticSearch is a document store, in which data with no predefined structure can be stored. Its origins and core strength are in full text search of any of the data held within it, and it is this that differentiates it from pure document stores such as MongoDB that Mark Rittman wrote about recently. Data is loaded and retrieved from ElasticSearch through messages sent over the HTTP protocol, and one of the applications that can send data this way and works extremely well is Logstash.
  • Logstash is an innocuous looking tool that at first glance one could mistakenly write off as “just” a log parser. It does a lot more than that and a healthy ecosystem of input, filter, codec and output plugins means that it can interface between a great variety of applications, shifting data from one to another and optionally processing and enriching it along the way.
  • The final piece of the stack is Kibana, a web application that enables one to build very flexible and interactive time-based dashboards, sourcing data from ElasticSearch. Interestingly, another of my favourite tools that I have written about before – and will write about again in this article – is Grafana, which is forked from Kibana (and modified to source its data from time-series databases like graphite/carbon/whisper or InfluxDB) – thus if you’re at home with one you will be with the other.

In this article I’m going to show how to set up your own ELK stack to monitor OBIEE, based on SampleApp v406.

Who is this for?

As you will see below, setting up and configuring the ELK stack does involve rolling up one’s sleeves and diving right in. If you’re looking for an off-the-shelf monitoring solution then you should look elsewhere (such as EM12c). But if you want to have a crack at it I think you’ll be pleasantly surprised at what is possible once you get past the initial (bumpy) learning curve. The capabilities are great, and there’s an active support community, as is the case with lots of open-source tools. With a bit of work it is possible to create a monitoring environment tailored pretty much entirely to your design.

Installing the stack

ELK runs on all common Linux distributions (including Oracle Linux), as well as Mac OS. The only prerequisites are a JDK for ElasticSearch and Logstash, and a web server for Kibana; here I am using Apache.

First up, let’s install JDK 1.7 (SampleApp has 1.6, which isn’t enough):

sudo yum install -y java-1.7.0-openjdk.x86_64

Apache is already installed on SampleApp, which we can verify thus:

[oracle@demo ~]$ sudo yum install -y httpd
Loaded plugins: refresh-packagekit
Setting up Install Process
Package httpd-2.2.15-29.0.1.el6_4.x86_64 already installed and latest version
Nothing to do
[oracle@demo ~]$ sudo service httpd status
httpd is stopped

It’s shut down by default, and that’s fine because we need to update the configuration on it anyway.

ElasticSearch

The easiest way to install ElasticSearch is using the yum repository:

sudo rpm --import http://packages.elasticsearch.org/GPG-KEY-elasticsearch

cat > /tmp/elasticsearch.repo<<EOF
[elasticsearch-1.3]
name=Elasticsearch repository for 1.3.x packages
baseurl=http://packages.elasticsearch.org/elasticsearch/1.3/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
EOF

sudo mv /tmp/elasticsearch.repo /etc/yum.repos.d/
sudo yum install -y elasticsearch

I’d then set it to start at boot automagically:

sudo chkconfig elasticsearch on

and then start it up:

sudo service elasticsearch start

One final, optional, step in the installation is a plugin called kopf which gives a nice web dashboard for looking at the status of ElasticSearch:

export JAVA_HOME=/usr/lib/jvm/jre-1.7.0-openjdk.x86_64/
cd /usr/share/elasticsearch/bin
sudo ./plugin -install lmenezes/elasticsearch-kopf

logstash

There’s no repository for logstash, but it’s no biggie because there’s no install as such, just a download and unpack. Grab the download archive for logstash from the ELK download page, and then unpack it:

cd ~/Downloads
wget https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz
# Need to use sudo because /opt is owned by root
sudo tar -xf logstash-1.4.2.tar.gz --directory /opt/
sudo mv /opt/logstash-1.4.2/ /opt/logstash/
sudo chown -R oracle. /opt/logstash/

Kibana

As with logstash, Kibana just needs downloading and unpacking. There’s also a wee bit of configuration to do, so that the web server (Apache, in our case) knows to talk to it, and so that Kibana knows how to find ElasticSearch.

cd ~/Downloads/
wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.0.tar.gz
sudo tar -xf kibana-3.1.0.tar.gz --directory /opt
sudo mv /opt/kibana-3.1.0/ /opt/kibana/
sudo chown -R oracle. /opt/kibana/

Now to configure Apache, telling it where to find Kibana. If you have existing sites configured, you’ll need to sort this bit out yourself, but on a vanilla SampleApp v406 you can use the following sed command to set up the needful:

sudo sed -i'.bak' -e 's/DocumentRoot.*$/DocumentRoot "\/opt\/kibana\/"/g' /etc/httpd/conf/httpd.conf

Lastly, Kibana needs to know where to find ElasticSearch, which is where it is going to pull its data from. An important point here is that the URL of ElasticSearch must be resolvable and accessible from the web browser you run Kibana in, so if you are using a DNS name it must resolve, etc. You can update the configuration file config.js by hand (it’s the elasticsearch: definition that needs updating), or use this sed command:

sed -i'.bak' -e 's/^\s*elasticsearch:.*$/elasticsearch: "http:\/\/demo.us.oracle.com:9200",/g' /opt/kibana/config.js

Finally, [re]start Apache so that it uses the new configuration:

sudo service httpd restart

You should be able to now point your web browser at the server and see the default Kibana dashboard. So for SampleApp, if you’re running Firefox locally on it, the URL would simply be http://localhost/ (port 80, so no need to specify it in the URL). Note that if you’re doing anything funky with networking, your local web browser needs to be able to hit both Apache (port 80 by default) and ElasticSearch (port 9200 by default).

Configuring ELK end-to-end

Now that we’ve got the software installed, let’s see how it hangs together and create our first end-to-end example. There’s a good logstash tutorial here that covers a lot of the functionality. Here, I’ll just look at some of the very basics, creating a very simple logstash configuration which will prompt for input (i.e. stdin) and send it straight to ElasticSearch. The kopf plugin that we installed above can show that the data made it to ElasticSeach, and finally we will create a very simple Kibana dashboard to demonstrate its use.

Logstash works by reading a configuration file and then running continually, waiting for the configured input. As well as the input we configure an output, and optionally in between we can have a set of filters. For now we will keep it simple with just an input and output. Create the following file in /opt/logstash and call it logstash-basic.conf:

input {
        stdin {}
        }
output {
        elasticsearch {}
        }

It’s pretty obvious what it’s saying – for the input, use stdin, and send it as output to elasticsearch (which will default to the localhost).
Run this with logstash:

export JAVA_HOME=/usr/lib/jvm/jre-1.7.0-openjdk.x86_64/
cd /opt/logstash
bin/logstash -f logstash-basic.conf

After a few moments you should get a prompt. Enter some text, and nothing happens… apparently. What’s actually happened is that the text, plus some information such as the current timestamp, has been sent to ElasticSearch.

Let’s see where it went. In a web browser, go to http://localhost:9200/_plugin/kopf/. 9200 is the port on which ElasticSearch listens by default, and kopf is a plugin we can use to inspect ElasticSearch’s state and data. ElasticSearch has a concept of an index, in which documents, maybe of the same repeating structure but not necessarily, can be stored. Crudely put, this can be seen as roughly analogous to tables and rows of data respectively. When logstash sends data to ElasticSearch it creates one index per day, and in kopf now you should see an index with the current date, with a “document” for each line you entered after running logstash:

ElasticSearch can be queried using HTTP requests, and kopf gives a nice way to construct these and see the results which are in JSON format. Click on the rest (as in, REST API) menu option, leave the request as default http://localhost:9200/_search, and click send. You’ll see in the response pane a chunk of JSON in amongst which are the strings that you’ve entered to logstash:
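
If you prefer the command line to kopf, you can hit the same REST API directly with curl (the queries below are just illustrative):

# List the indices that logstash has created so far
curl 'http://localhost:9200/_cat/indices?v'

# Search across all indices for events whose message contains "test"
curl 'http://localhost:9200/_search?q=message:test&pretty'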

Enter a few more lines into the logstash prompt, and then head over to http://localhost/ where you should find the default dashboard, and click on the Logstash Dashboard option:

It’s fairly bare, because there’s very little data. Notice how you have a histogram of event rates over the past day at the top, and then details of each event at the bottom. There are two things to explore here. First up, go and enter a bit more data into logstash, so that the created events are spread out over time. Click the refresh icon on the Kibana dashboard, and then click-drag to select just the period on the chart that has data. This will zoom in on it and you’ll see in greater definition when the events were created. Go and click on one of the event messages in the lower pane and see how it expands, showing the value of each field – including message, which is what logstash sent through from its input to output.

Now let’s get some proper data in, by pointing logstash at the BI Server log (nqsserver.log). Create a new configuration file, logstash-obi.conf, and build it up as follows. First we’ll use the file input to get data from …wait for it….a file! The syntax is fairly obvious:

input {
        file {
                path => "/app/oracle/biee/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1/nqserver.log"
                }
        }

Now we need to tell Logstash how to interpret the file. By default it’ll chuck every line of the log to ElasticSearch, with the current timestamp – rather than the timestamp of the actual event.

Now it is time to introduce the wonderful world of the grok. A grok is one of the most important of the numerous filter plugins that are available in logstash. It defines expected patterns of content in the input, and maps them to fields in the output. So everything in a log message – the timestamp, user, ecid, and so on – can be extracted from the input and stored as distinct items. They can also be used for further processing – such as amending the timestamp output from the logstash event to that of the log file line, rather than the system time at which it was processed.

So, let us see how to extract the timestamp from the log line. An important part of grok’ing is patterns. Grok statements are written as regular expressions, or regex (obXKCD), so to avoid continually reinventing the wheel with regex statements for common objects (times, IP addresses, etc) logstash ships with a bunch of these predefined, and you can build your own too. Taking a line from nqsserver.log we can see the timestamp matches the ISO 8601 standard:

So our grok will use the pre-defined pattern TIMESTAMP_ISO8601, and then map everything else (“GREEDYDATA”) after the timestamp to the log message field. The timestamp is in square brackets, which I’ve escaped with the backslash character. To indicate that it’s a grok pattern we want to match, it’s enclosed in %{ } markers.

\[%{TIMESTAMP_ISO8601:timestamp}\] %{GREEDYDATA:log_message}

This can be broken down as follows:

\[                                  The opening square bracket, escaped by \
%{TIMESTAMP_ISO8601:timestamp}      Capture an ISO 8601 timestamp, store it in a field called 'timestamp'
\]                                  The closing square bracket, escaped by \
%{GREEDYDATA:log_message}           Capture everything else ('GREEDYDATA' is also a grok pattern) and store it in the 'log_message' field

A grok operator in logstash is part of the filter processing, so we need a new stanza in the configuration file, after input and before output. Note that the grok operator is matching the pattern we built above against the message field, which is pre-populated by default by the input stream. You can grok against any field though.

input {
        file {
                path => "/app/oracle/biee/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1/nqserver.log"
                }
        }
filter {
        grok {
                match => ["message","\[%{TIMESTAMP_ISO8601:timestamp}\] %{GREEDYDATA:log_message}"]
            }
        }
output {
        elasticsearch {}
        }

Now we can see in the resulting capture that we’ve extracted the timestamp to a field called “timestamp”, with the remainder of the line in “log_message”.

But the actual timestamp of the log entry that we have attached to the event stored in ElasticSearch – a special field called @timestamp – is still reflecting the time at which logstash read the logfile entry (30th September), rather than when the logfile entry was created (11th June). To fix this, we use a new filter option (grok being the first), the date filter:

date {
        match => ["timestamp", "yyyy-MM-dd'T'HH:mm:ss.SSSZZ"]
}

This matches the timestamp field that we captured with the grok, and converts it into the special @timestamp field of the event, using the mask specified. Now if we go back to kopf, the ElasticSearch admin tool, we can see that a new index has been created, named with the date of the log entry that we just parsed and from which we correctly extracted the actual date:

And over in Kibana if you set the timeframe long enough in the filter at the top you’ll be able to find the log entries showing up, now with the correct timestamp. Note that ElasticSearch accounts for the timezone of the message, storing it in UTC:

A large part of setting up your own ELK stack is this configuration of the filters in order to imbue and extract as much information from the log as you want. Grok filters are just the beginning – you can use conditional clauses to grok certain fields or logs for certain strings (think error messages and ORA codes), and you can mutate output to add in your own fields and tags based on what has been read. The point of doing this is that you enrich what is in the logs by giving the different pieces of data meaning, which you can then use to drive the Kibana dashboard and its filtering.
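
As a flavour of what such a conditional might look like – this is a minimal sketch, and the error_code field, error tag and assumed ORA-nnnnn / nQSError formats are purely illustrative – an extra grok could be applied only to lines that look like errors:

filter {
        # Only attempt the extra grok on lines that look like errors
        if [log_message] =~ /ORA-|nQSError/ {
                grok {
                        # Pull the first error code out into its own field and tag the event
                        match => ["log_message", "(?<error_code>ORA-\d+|nQSError: \d+)"]
                        add_tag => ["error"]
                }
        }
}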

Tips for using logstash

To think of the logstash .conf file as merely “configuration” in the same way a simple .ini is would be to underestimate it. In effect you are writing a bit of data transformation code, so brace yourself – but the silver lining is that whatever you want to do, you probably can. Logstash is a most flexible and powerful piece of software, and one in which the model of input, filter, codec and output work very well.

In the spirit of code development, here are some tips you may find useful:

  • Grok patterns can inherit. Study the provided patterns (in the logstash patterns folder), and build your own. Look for commonality across files and look to reuse as much as possible. If you have multiple unrelated patterns for every type of log file in FMW then you’ve done it wrong.
  • When you’re building grok statements and patterns, use the superb http://grokdebug.herokuapp.com. The advantage of this over a standard regex debugger is that it supports the grok syntax of capture groups and even custom patterns.
  • For building up and testing regex from scratch, there are some useful sites:

    On the Mac there is a useful tool called Oyster which does the same as the above sites but doesn’t require an Internet connection.

  • By default a failed grok will add the tag _grokparsefailure to the event. If you have multiple grok clauses, give each its own unique tag_on_failure value so that you can see from the output message which grok statement failed.

    grok {
        match => [  "message", "%{WLSLOG}"
        ]
        tag_on_failure => ["_grokparsefailure","grok03"]
    }

  • You can use # as a comment character, so comment your code liberally, particularly whilst you’re finding your way with the configuration language so you can find your way back from the gingerbread house.
  • grok’ing takes resources, so target your grok statements by using conditionals. For example, to only parse nqquery logs for an expected string, you could write:

    if [path] =~ /nqquery/ {
        # Do your grok here
        }

    NB the =~ denotes a regex match, and the / delineate the regex string.

  • Use a conditional to split the output, writing any grok failures to stdout and/or a file, so that it is easier to see what’s failing and when. Note the use of the rubydebug codec here to prettify output sent to stdout.

    output {
        if "_grokparsefailure" not in [tags] {
            elasticsearch { }
        } else {
            file { path => "unprocessed.out" }
            stdout { codec => rubydebug }
        }
    }

  • Use the multiline codec to work with log messages that span multiple lines, and watch out for newline characters – grok does not like them, so use a mutate gsub to strip them out (see the sketch below).
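
As a rough sketch of that last tip (assuming, as with nqserver.log, that each new log entry starts with a timestamp in square brackets):

input {
        file {
                path => "/app/oracle/biee/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1/nqserver.log"
                # Treat any line that does not start with "[" as a continuation of the previous event
                codec => multiline {
                        pattern => "^\["
                        negate => true
                        what => "previous"
                }
        }
}
filter {
        # Replace embedded newlines with spaces, since grok does not handle them well
        mutate {
                gsub => ["message", "\n", " "]
        }
}
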
Building a Kibana dashboard

Now that we’ve got the data streaming through nicely from logstash into ElasticSearch, let us take a look at building dashboards against it in Kibana. Kibana is a web application that runs within an existing web server such as Apache, and it builds dashboards from data stored in ElasticSearch.

The building blocks of a Kibana dashboard are rows, which contain panels of a given pane width, up to twelve per row. On a panel goes one of the types of object, such as a graph. We’ll see now how to build up a dashboard and the interactions that we can use for displaying and analysing data.

When you first load Kibana you get a default dashboard that links to a few other start dashboards. Here we’re going to properly start from scratch so as to build up a picture of how a dashboard is created. Click on Blank Dashboard, and then in the top-right corner click on Configure Dashboard. Click on Index and from the Timestamping option select day. What this does is tell Kibana which ElasticSearch indices it is to pull data from, in this case using the standard Logstash index naming pattern – which we observed in kopf earlier – logstash-YYYY.MM.DD. Click Save, and then select Add a row. Give the row a title such as System Activity, click on Create Row and then Save. A new row appears on the dashboard.

Now we’ll add a graph to the row. Click on Add panel to empty row, and select Histogram as the Panel Type. Note that by default the width is 4 – change this to 12. You’ll note that there are plenty of options to explore, but to start with we’ll just keep it simple, so go ahead and click Save. By default the chart will show all data, so use the Time filter dropdown option at the top of the screen to select a recent time period. Assuming your data has loaded from logstash into ElasticSearch you should see a graph similar to this:

This is a graph of the number of events (log file entries) per time period. The graph will amend the resolution according to the zoom so that a reasonable resolution of data is shown, or you can force it through the Resolution option in the graph properties. In the legend of the chart you can see the resolution currently used.

Assuming you’ve selected a broad time interval, such as the last week, you’ll presumably want to drill into the data shown. This is very intuitive in Kibana – simply click and drag horizontally over the time period you want to examine.

There are two important concepts for selecting and grouping data in Kibana, called filters and queries. A query groups data based on conditions, and we’ll explore those later. Filtering is a predicate applied to all data returned. Think of it just like a WHERE clause on a SQL query. You can see the current filter(s) applied at the top of the dashboard.

You may well want to hide these, and they can be collapsed – as can the query row and all of the dashboard rows – by clicking on the little triangle.

From the Filter area you can also add, amend, disable and remove filters.

So far all we’ve got is a graph showing system activity over time, based on events recorded in a log file. But, we’ve no way of seeing what those logs are, and this is where the Table panel comes in. Add a new row, give it a title of Log messages and add the Table panel to it specifying the span as 12. You should now see a list of messages with timestamps corresponding to the time period shown in the graph. You can customise the Table panel, for example specifying which fields to show; by default it shows _source which is the raw row returned by ElasticSearch. More useful to us is the log_message field that we parsed out using the grok in logstash earlier. You can do this by selecting the relevant field from the Fields list on the left (which can be collapsed for convenience), or editing the Table panel and specifying it in the Columns area.

From the Table panel it is possible to select the data shown even more precisely by adding additional filters based on data in the table. Clicking on a particular row will expand it and show all of the associated fields, and each field has a set of Action options. You can filter only for that value, or specifically exclude it, and you can also add each field into the table shown (just like we did above for log_message). So here I can opt to only display messages that I’ve tagged in logstash as coming from the BI Server component itself:

You’ll note in the second screenshot, once the filter has been applied, that the graph has changed and is showing less data. That is because a filter is global to a dashboard. But what if we want to show on the graph counts for all logs, but in the data table just those for the BI Server? Here is where queries come into play. A query also looks like a predicate, but rather than restricting the data returned it just identifies a set of data within what is returned. To illustrate this I’m first going to remove all existing filters except the time period one:

And now in the Query area click on the + (to the right of the line). Now there are two queries, both with a wildcard as their value, meaning they’ll each match everything. In the second query box I add the query Component: OracleBIServerComponent – note that for this to work your logstash must be sending messages to ElasticSearch with the necessary Component field. Once updated, the second query’s impact can be seen in the graph, which is showing both the “all” query and the BI Server component query. Use the View > option in the top left of the graph as a quick way of getting to the graph settings, including disabling cumulative/stack view:
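
As an aside on that Component field: one simple way to populate it is in the logstash configuration itself, derived from the path of the log file being read. This is just a sketch – adjust the path test and field value to your own directory layout and naming:

filter {
        # Tag events read from the BI Server logs with a Component field that Kibana can query on
        if [path] =~ /OracleBIServerComponent/ {
                mutate {
                        add_field => ["Component", "OracleBIServerComponent"]
                }
        }
}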

Each panel in Kibana can be configured to show all or some of the query groups that have been defined. This is most useful for creating breakdowns of data, including those that split it in different ways, where you wouldn’t want every option displayed in its entirety on every panel. You might want to group out the components, and the types of error, and then show a breakdown of system activity by one or the other – but not necessarily both. To configure which queries a panel shows, use the Configure option in the top-right of the panel and go to the Queries tab. If it’s set to all then each and every query set will be shown individually on the panel.

If it’s set to selected then you can choose one or more of the defined query sets to display.

There are about a dozen types of panel in Kibana, and I’m not going to cover them all here. The other ones particularly of interest for building this kind of OBIEE monitoring dashboard include:

  • Terms is basically a SELECT FIELD, COUNT(*) ... GROUP BY FIELD. It shows the top x number of terms for a given field, and how frequently they occurred. Results can be shown as a pie or bar chart, or just a table. From a Terms panel you can add filters by clicking on a term. In the example above, clicking on the pie segment, or table row icon, for ERROR would add a filter to show just ERROR log entries.
  • Trends shows the trend of event occurrences in a given time frame. Combined with an appropriate query you can show things like error rates.
  • Stats shows a set of statistics, so you can identify mean response times, maximum users logged on, and so on – assuming you have this data coming through from the logstash parsing.

Once you’ve built your dashboard, save it (Kibana stores it in ElasticSearch). The dashboard is defined in JSON and you can opt to download this too.

The complete OBIEE monitoring view

Parsing logs is a great way to get valuable information out of the text stored and build visualisations and metrics on top of it. However, for pure metrics alone (such as machine CPU, OBIEE DMS metrics, and so on) a close relation of Kibana, Grafana, is better suited to the task. Thus we have the text-based data going into ElasticSearch and reported through Kibana, and the pure metrics going into a time-based store such as whisper (graphite’s database) and reported through Grafana. Because Grafana is a fork of Kibana, the look and feel is very similar.

Using obi-metrics-agent the DMS metrics from OBIEE can be collected and stored in whisper, and so also graphed out in Grafana alongside the system metrics. This gives us an overall architecture like this:

Obviously, it would be nice if we could integrate the fundamental time-based nature of both Kibana and Grafana together, so that drilling into a particular time period of interest – perhaps from an error-rate point of view in the logs – would also show the system and DMS metrics for the same period. There has been discussion about this (1, 2, 3) but I don’t get the impression that it will happen soon, if ever. One other item of interest here is Marvel, which is a commercial offering for monitoring ElasticSearch – through Kibana. It makes use of stock Kibana panel types, along with some new ones including the Nodes panel type, which suits the requirements we have of monitoring OBIEE/system metrics within a Kibana view, but unfortunately it looks like it is currently going to remain within Marvel only.

One other path to consider is trying to get the metrics currently sent to graphite/whisper into ElasticSearch instead, so that Kibana can then report on them. The problem with this is twofold. Firstly, ElasticSearch is fundamentally a text-based store, whereas whisper is a much better fit for time/metric data (as would be another DB such as InfluxDB). So trying to crow-bar the two together may not be the best solution, and it would be better for this to be resolved at the front end as discussed above. Secondly, Kibana’s graphing capabilities do not conceptually extend to multiple metrics in the same graph – only multiple queries – which means that graphing something that would be simple in Grafana (such as CPU wait/user/sys) would be overly complex in Kibana.

Architecting an ELK deployment

So far I’ve shown how to configure ELK on a single server, reading logs from that same server. But there are two extra things we should consider. First, Logstash in particular can be quite a ‘heavy’ process depending on how much work you’re doing with it. If you are processing all the logs that FMW writes, and have lots of grok filters (which isn’t a bad thing; it means you’re extracting lots of good information), then you will see logstash using a lot of CPU and lots of IO, possibly to the detriment of other processes on the system – a tad ironic if the purpose of using logstash is to monitor for any system problems that occur. Secondly, ELK works very well with multiple servers. You might have a scaled-out OBIEE stack, or want to monitor multiple environments. Rather than replicating the ELK stack on each server, it’s better for each server to push its log messages to a central ELK server for processing. And since the processing takes place on the ELK server and not the server being monitored we reduce the local resource footprint too.

To implement this kind of deployment, you need something like logstash-forwarder on the OBI server which is a light-touch program, sending the messages to logstash itself on the ELK server, over a custom protocol called lumberjack. Logstash then processes the messages as before, except it is reading the input from the logstash-forwarder rather than from file. An alternative approach to this is using redis as a message broker, with logstash running on both the source (sending output to redis) and ELK server (using redis as the input). This approach is documented very well here / here, and the former of using logstash-forwarder here. Logstash-forwarder worked very well for me in my tests, and seems to fit the purpose nicely.
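
As an illustration of the forwarder approach (the hostname, port and certificate paths below are all placeholders), the monitored OBIEE server runs logstash-forwarder with a small JSON configuration along these lines:

{
  "network": {
    "servers": [ "elkserver.example.com:5043" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    { "paths": [ "/app/oracle/biee/instances/instance1/diagnostics/logs/*/*/*.log" ] }
  ]
}

and on the central ELK server a lumberjack input sits alongside (or replaces) the file input in the logstash configuration:

input {
        lumberjack {
                port => 5043
                ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
                ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
        }
}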

Conclusion

Responsive monitoring tools are crucial for successful and timely support of an OBIEE system, and the ELK stack provides an excellent basis on which to build beyond the capabilities of Enterprise Manager Fusion Middleware Control. The learning curve is a bit steep at first, and you have to be comfortable with installing unpackaged tools, but the payoff makes it worth it! If you are interested in finding out about how Rittman Mead can help with your OBIEE implementation or other areas, please contact us.

Categories: BI & Warehousing

Using rlwrap with Apache Hive beeline for improved readline functionality

Rittman Mead Consulting - Fri, 2014-10-17 06:18

rlwrap is a nice little wrapper in which you can invoke commandline utilities and get them to behave with full readline functionality just like you’d get at the bash prompt. For example, up/down arrow keys to move between commands, but also home/end to go to the start/finish of a line, and even ctrl-R to search through command history to rapidly find a command. It’s one of the standard config changes I’ll make to any system with Oracle’s sqlplus on, and it works just as nicely with Apache Hive’s commandline interface, beeline.

beeline comes with some of this functionality (up/down arrow) but not all (for me, it was ‘home’ and ‘end’ not working – printing 1~ and 5~ respectively instead – that prompted me to set up rlwrap with it).

Installing rlwrap

To install rlwrap simply add the EPEL yum packages to your repository configuration:

sudo rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/`uname -p`/epel-release-6-8.noarch.rpm

and then install rlwrap from yum:

sudo yum install -y rlwrap

Use

Once rlwrap is installed you can invoke beeline through it manually, specifying all the standard beeline options as you would normally: (I’ve used the \ line continuation character here just to make the example nice and clear)

rlwrap -a beeline \
-u jdbc:hive2://bdanode1:10000 \
-n rmoffatt -p password \
-d org.apache.hive.jdbc.HiveDriver

Now I can connect to beeline, and as before I press up arrow to access commands from when I previously used the tool, but I can also hit ctrl-R to start typing part of a command to recall it, just like I would in bash. Some other useful shortcuts:

  • Ctrl-l clears the screen but with the current line still shown
  • Ctrl-k deletes to the end of the line from the current cursor position
  • Ctrl-u deletes to the beginning of the line from the current cursor position
  • Esc-f moves forward one word
  • Esc-b moves backward one word
    (more here)

And most importantly, Home and End work just fine! (or, ctrl-a/ctrl-e if you prefer).

NB the -a argument for rlwrap is necessary because beeline already does some readline-esque functions, and we want rlwrap to forcibly override them (otherwise neither works very well). Or more formally (from man rlwrap):

Always remain in “readline mode”, regardless of command’s terminal settings. Use this option if you want to use rlwrap with commands that already use readline.

Alias

A useful thing to do is to add an alias directly in your profile so that it is always available to launch beeline under rlwrap, in this case as the rlbeeline command:

# this sets up "rlbeeline" as the command to run beeline
# under rlwrap, you can call it what you want though. 
cat >> ~/.bashrc<<EOF
alias rlbeeline='rlwrap -a beeline'
EOF
# example usage:
# rlbeeline \
# -u jdbc:hive2://bdanode1:10000 \
# -n rmoffatt -p password \
# -d org.apache.hive.jdbc.HiveDriver

If you want this alias available for all users on a machine create the above as a standalone .sh file in /etc/profile.d/.

Autocomplete

One possible downside of using rlwrap with beeline is that you lose the native auto-complete option within beeline for the HiveQL statements. But never fear – we can have the best of both worlds, with the -f argument for rlwrap, specifying a list of custom auto-completes. So this is even a level-up for beeline, because we could populate it with our own schema objects and so on that we want auto-completed.

As a quick-start, run beeline without rlwrap, hit tab twice and then ‘y’ to show all options, and paste the resulting list into a text file (eg beeline_autocomplete.txt). Now call beeline, via rlwrap, passing that file as an argument to rlwrap:

rlwrap -a -f beeline_autocomplete.txt beeline

Once connected, use auto-complete just as you would normally (hit tab after typing a character or two of the word you’re going to match):

Connecting to jdbc:hive2://bdanode1:10000
Connected to: Apache Hive (version 0.12.0-cdh5.0.1)
[...]
Beeline version 0.12.0-cdh5.0.1 by Apache Hive
0: jdbc:hive2://bdanode1:10000> SE
SECOND        SECTION       SELECT        SERIALIZABLE  SERVER_NAME   SESSION       SESSION_USER  SET
0: jdbc:hive2://bdanode1:10000> SELECT

Conclusion

rlwrap is the tool that keeps on giving; just as I was writing this article, I noticed that it also auto-highlights opening parentheses when typing the closing one. Nice!

Categories: BI & Warehousing

Are you ready for Windows 10? Dodeca Is Ready!

Tim Tow - Wed, 2014-10-15 22:38
Microsoft recently released the first preview version of its next desktop operating system, Windows 10. In our business, we have seen many large companies only just migrate to Windows 7, and the general consensus seems to be that Windows 8 is a 'consumer' operating system. In other words, it will be a while before you have to think about Windows 10, but it is our business to be thinking about this now. We have the Windows 10 preview downloaded and installed in our labs. We made a very smart decision years ago when we decided to write the Dodeca client layer in .NET technology, because it works in Windows 10 unchanged!

We are ready. Will you be ready?
Categories: BI & Warehousing

Kscope15 Abstract deadline is Wednesday, October 15!

Tim Tow - Sat, 2014-10-11 13:08
Kscope15 abstract submission will close in just a few days and the ODTUG content teams want you!

I am sure there are many of you who have never done a presentation at a conference and may not feel comfortable speaking in public. Well, I have a secret for you: despite the fact that I won the Best Speaker Award at Kscope14, I share your discomfort with speaking. What works for me, however, is that I only speak about topics that I know very well. Do you have some area of expertise that you know really well? If you do, please share it! It will be fun, and all of your peers who learn something new from you will appreciate it!

With that, take the next step and click here to submit a session!
Categories: BI & Warehousing

First-timer tips for Oracle Open World

Rittman Mead Consulting - Wed, 2014-10-08 07:16

Last week I had the great pleasure to attend Oracle Open World (OOW) for the first time, presenting No Silver Bullets – OBIEE Performance in the Real World at one of the ODTUG user group sessions on the Sunday. It was a blast, as the saying goes, but the week before OOW I was more nervous about the event itself than my presentation. Despite having been to smaller conferences before, OOW is vast in its scale, and the week beforehand felt like the week before going to university for the first time, full of uncertainty about what lay ahead and worrying that everyone would know everyone else except me! So during the week I jotted down a few things that I’d have found useful to know ahead of going, which hopefully will help others going to OOW take it all in their stride from the very beginning.

Coming and going

I arrived on the Friday at midday SF time, and it worked perfectly for me. I was jetlagged so walked around like a zombie for the remainder of the day. Saturday I had the chance to walk around SF and get my bearings geographically, culturally, and climate-wise. Sunday is “day zero” when all the user group sessions are held, along with the opening OOW keynote in the evening. I think if I’d arrived Saturday afternoon instead I’d have felt a bit thrust into it all straight away on the Sunday.

In terms of leaving, the last formal day is Thursday and it’s a full day of sessions too. I left straight after breakfast on Thursday and felt I was leaving too early. But OOW is a long few days and nights, so chances are by Thursday you’ll be beat anyway; check the schedule and plan your escape around it.

Accommodation

Book in advance! Like, at least two months in advance. There are 60,000 people descending on San Francisco, all wanting some place to stay.

Get airbnb, a lot more for your money than a hotel. Wifi is generally going to be a lot better, and having a living space in which to exist is nicer than just a hotel room. Don’t fret about the “perfect” location – anywhere walkable to Moscone (where OOW is held) is good because it means you can drop your rucksack off at the end of the day etc, but other than that the events are spread around so you’ll end up walking further to at least some of them. Or, get an Uber like the locals do!

Sessions

Go to Oak Table World (OTW), it’s great, and free. Non-marketing presentations from some of the most respected speakers in the industry. Cuts through the BS. It’s also basically on the same site as the rest of OOW, so easy to switch back and forth between OOW/OTW sessions.

Go and say hi to the speakers. In general they’re going to want to know that you liked it. Ask questions — hopefully they like what they talk about so they’ll love to speak some more about it. You’ll get more out of a five minute chat than two hours of keynote. And on that subject, don’t fret about dropping sessions — people tweet them, the slides are usually available, and in fact you could be sat at your desk instead of OOW and have missed the whole lot so just be grateful for what you do see. Chance encounters and chats aren’t available for download afterwards; most presentations are. Be strict in your selection of “must see” sessions, lest you drop one you really really did want to see.

Use the schedule builder in advance, but download it to your calendar (watch out for line-breaks in the exported file that will break the import) and sync it to your mobile phone so you can see rapidly where you need to head next. Conference mobile apps are rarely that useful and frequently bloated and/or unstable.

Don’t feel you need to book every waking moment of every day to sessions. It’s not slacking if you go to half as many but are twice as effective from not being worn out!

Dress

Dress-wise, jeans and a polo shirt are fine, with a company polo or a proper shirt for delivering presentations. Day wear is fine for evenings too; no need to dress up. Some people do wear shorts but they're in the great minority. There are lots of suits around, given it is a customer/sales conference too.

Socialising

The sessions and random conversations with people during the day are only part of OOW — the geek chat over a beer (or soda) is a big part too. Look out for the Pythian blogger meetup, meetups from your country’s user groups, companies you work with, and so on.

Register for the evening events that you get invited to (ODTUG, Pythian, etc) because often if you haven’t pre-registered you can’t get in if you change your mind, whereas if you do register but then don’t go that’s fine as they’ll bank on no-shows. The evening events are great for getting to chat to people (dare I say, networking), as are the other events that are organised like the swim in the bay, run across the bridge, etc.

Sign up for stuff like the swim in the bay – it's good fun, and I can't even swim really. The run and the bike ride across the bridge are two other organised events. Hang around on Twitter for details; people like Yury Velikanov and Jeff Smith are usually in the know, if not doing the actual organising.

General

When the busy days and long evenings start to take their toll, don't be afraid to duck out and go and decompress. Grab a shower, get a coffee, do some sightseeing. Don't forget to drink water as well as the copious quantities of coffee and soda.

Get a data package for your mobile phone in advance of going, e.g. £5 per day for unlimited data. Conference wifi is just about OK at best, and often flaky. Trying to organise short-notice meetups with other people by IM/Twitter/email gets frustrating if you only get online half an hour after the time they suggested to meet!

Don’t pack extra clothes ‘just in case’. Pack minimally because (1) you are just around the corner from Market Street with Gap, Old Navy etc so can pick up more clothes cheaply if you need to and (2) you’ll get t-shirts from exhibitors, events (eg swim in the bay) and you’ll need the suitcase space to bring them all home. Bring a suitcase with space in or that expands, don’t arrive with a suitcase that’s already at capacity.

Food

So much good food and beer. Watch out for some of the American beers; they seem to start at about 5% ABV and go upwards, compared to around 3.6% ABV here in the UK. Knocking them back at the same rate as you would at home will get messy.

In terms of food you really are spoilt; some of my favourites were:

  • Lori’s diner (map): As a Brit, I loved this American diner, and great food - yum yum. 5-10 minutes' walk from Moscone.
  • Mel’s drive-in (map): Just round the corner from Moscone, very busy but lots of seats. A great American breakfast experience! Yum.
  • Grove (map) : Good place for breakfast if you want somewhere a bit less greasy than a diner (WAT!)

 

Categories: BI & Warehousing

BI Applications in Cloud

Dylan's BI Notes - Mon, 2014-10-06 18:28
Prepackaged analytics applications are available as cloud services. The idea is that the client company does not need to use their own hardware and does not need to install the software or apply patches themselves. All they need is a browser. For the end users, there should not be much difference.   The BI apps built […]
Categories: BI & Warehousing

Adding Oracle Big Data SQL to ODI12c to Enhance Hive Data Transformations

Rittman Mead Consulting - Sun, 2014-10-05 15:29

An updated version of the Oracle BigDataLite VM came out a couple of weeks ago, and as well as updating the core Cloudera CDH software to the latest release it also included Oracle Big Data SQL, the SQL access layer over Hadoop that I covered on the blog a few months ago (here and here). Big Data SQL takes the SmartScan technology from Exadata and extends it to Hadoop, presenting Hive tables and HDFS files as Oracle external tables and pushing down the filtering and column-selection of data to individual Hadoop nodes. Any table registered in the Hive metastore can be exposed as an external table in Oracle, and a BigDataSQL agent installed on each Hadoop node gives them the ability to understand full Oracle SQL syntax rather than the cut-down SQL dialect that you get with Hive.

NewImage

There are two immediate use-cases that come to mind when you think about Big Data SQL in the context of BI and data warehousing: you can use Big Data SQL to include Hive tables in regular Oracle set-based ETL transformations, giving you the ability to reference Hive data during part of your data load; and you can also use Big Data SQL as a way to access Hive tables from OBIEE, rather than having to go through the Hive or Impala ODBC drivers. Let's start off in this post by looking at the ETL scenario using ODI12c as the data integration environment, and I'll come back to the BI example later in the week.

You may recall that in a couple of posts earlier in the year on ETL and data integration on Hadoop, I looked at a scenario where I wanted to geo-code web server log transactions using an IP address range lookup file from a company called MaxMind. To determine the country for a given IP address you need to locate the IP address of interest within the ranges listed in the lookup file, something that's easy to do with a full SQL dialect such as that provided by Oracle:

NewImage
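As an illustration of the kind of range lookup this enables – the table and column names below are hypothetical, made up for this sketch rather than taken from the actual MaxMind dataset – a set-based geocoding query in Oracle can be as simple as:

select w.ip_integer,
       g.country_name          -- country whose IP range contains this address
from   web_server_logs w,
       ip_country_ranges g
where  w.ip_integer between g.range_start_int and g.range_end_int;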

In my case, I'd want to join my Hive table of server log entries with a Hive table containing the IP address ranges, using the BETWEEN operator – except that Hive doesn't support any type of join other than an equi-join. You can use Impala and a BETWEEN clause there, but in my testing anything other than a relatively small log file Hive table took massive amounts of memory to do the join, as Impala works in-memory, which effectively ruled out doing the geo-lookup set-based. I then went on to do the lookup using Pig and a Python API into the geocoding database, but then you've got to learn Pig, and I finally came up with my best solution using Hive streaming and a Python script that called that same API – but each of these is fairly involved and requires a bit of skill and experience from the developer.

But this of course is where Big Data SQL could be useful. If I could expose the Hive table containing my log file entries as an Oracle external table and then join that within Oracle to an Oracle-native lookup table, I could do my join using the BETWEEN operator and then output the join results to a temporary Oracle table; once that's done I could then use ODI12c's Sqoop functionality to copy the results back down to Hive for the rest of the ETL process. Looking at my Hive database using SQL*Developer 4.0.3's new ability to work with Hive tables, I can see the table I'm interested in listed there:

NewImage

and I can also see it listed in the DBA_HIVE_TABLES static view that comes with Big Data SQL on Oracle Database 12c:

SQL> select database_name, table_name, location
  2  from dba_hive_tables
  3  where table_name like 'access_per_post%';

DATABASE_N TABLE_NAME                     LOCATION
---------- ------------------------------ --------------------------------------------------
default    access_per_post                hdfs://bigdatalite.localdomain:8020/user/hive/warehouse/access_per_post
default    access_per_post_categories     hdfs://bigdatalite.localdomain:8020/user/hive/warehouse/access_per_post_categories
default    access_per_post_full           hdfs://bigdatalite.localdomain:8020/user/hive/warehouse/access_per_post_full

There are various ways to create the Oracle external tables over Hive tables in the linked Hadoop cluster, including using the new DBMS_HADOOP package to create the Oracle DDL from the Hive metastore table definitions, or using SQL*Developer Data Modeler to generate the DDL from modelled Hive tables, but if you know the Hive table definition and it's not too complicated, you might as well just write the DDL statement yourself using the new ORACLE_HIVE external table access driver. In my case, to create the corresponding external table for the Hive table I want to geo-code, it looks like this:

CREATE TABLE access_per_post_categories(
  hostname varchar2(100), 
  request_date varchar2(100), 
  post_id varchar2(10), 
  title varchar2(200), 
  author varchar2(100), 
  category varchar2(100),
  ip_integer number)
organization external
(type oracle_hive
 default directory default_dir
 access parameters(com.oracle.bigdata.tablename=default.access_per_post_categories));
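Once created, the external table can be queried like any other Oracle table, with the filtering and column projection pushed down to the Hadoop nodes – for example (an illustrative query rather than one from the original post):

-- count page accesses by category, straight off the Hive-backed external table
select category, count(*) as page_accesses
from   access_per_post_categories
group by category
order by page_accesses desc;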

Then it's just a case of importing the metadata for the external table over Hive – along with the tables I'm going to join to and then load the results into – into ODI's repository, and then creating a mapping to bring them all together.

NewImage

Importantly, I can create the join between the tables using the BETWEEN clause, something I just couldn’t do when working with Hive tables on their own.

NewImage
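Expressed as hand-written SQL, the join the mapping generates is roughly equivalent to the following – a sketch only, in which the Oracle-native geocoding lookup table, its columns and the temporary target table are assumed names rather than the real ones:

-- join the Hive-backed external table to an Oracle-native IP range lookup,
-- writing the geocoded rows to a temporary Oracle table
insert into access_per_post_geocoded_tmp
select a.hostname,
       a.request_date,
       a.post_id,
       a.title,
       a.author,
       a.category,
       g.country_name
from   access_per_post_categories a,
       geoip_country_lookup g
where  a.ip_integer between g.start_ip_int and g.end_ip_int;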

Running the mapping then joins the webserver log table to the geocoding IP address range lookup table through the Oracle SQL engine, removing all the complexity of using Hive streaming, Pig or the other workaround solutions I used before. What I can then do is add a further step to the mapping to take the output of my join and use that to load the results back into Hive, like this:

NewImage

I'll then use the IKM SQL to Hive-HBase-File (SQOOP) knowledge module to set up the export from Oracle into Hive.

NewImage

Now, when I run the mapping I can see the initial table join taking place between the Oracle native table and the Hive-sourced external table, and the results then being exported back into Hadoop at the end using the Sqoop KM.

NewImage

Finally, I can view the contents of the downstream Hive table loaded via Sqoop, and see that it does in fact contain the country name for each of the page accesses.

NewImage

Oracle Big Data SQL isn’t a solution suitable for everyone; it only runs on the BDA and requires Exadata for the database access, and it’s an additional license cost on top of the base BDA software bundle. But if you’ve got it available it’s an excellent way to blend Hive and Oracle data, and a great way around some of the restrictions around HiveQL and the Hive JDBC/ODBC drivers. More on this topic later next week, when I’ll look at using Big Data SQL in conjunction with OBIEE 11g.

Categories: BI & Warehousing

News and Updates from Oracle Openworld 2014

Rittman Mead Consulting - Sat, 2014-10-04 08:48

It’s the Saturday after Oracle Openworld 2014, and I’m now home from San Francisco and back in the UK. It’s been a great week as usual, with lots of product announcements and updates to the BI, DW and Big Data products we use on current projects. Here’s my take on what was announced this last week.

New Products Announced

From a BI and DW perspective, the most significant product announcements were around Hadoop and Big Data. Up to this point most parts of an analytics-focused big data project required you to code the solution yourself, with the diagram below showing the typical three steps in a big data project – data ingestion, analysis and sharing the results.

NewImage

At the moment, all of these steps are typically performed from the command-line using languages such as Python, R, Pig, Hive and so on, with tools like Apache Flume and Apache Sqoop used to bring data into and out of the Hadoop cluster. Under the covers, these tools use technologies such as MapReduce or Spark to do their work, automatically running jobs in parallel across the cluster and making use of the easy scalability of Hadoop and NoSQL databases.

You can also neatly divide the work up on a big data project into two phases: the "discovery" phase typically performed by a data scientist, where data is loaded, analysed, correlated and otherwise "understood" to provide the initial insights, and then an "exploitation" phase where we apply governance, provide the output data in a format usable by BI tools and otherwise share the results with the wider corporate audience. The updated Information Management Reference Architecture we collaborated on with Oracle and launched back in June this year had distinct discovery and exploitation phases, and the architecture itself made a clear distinction between the Innovation part that enabled the discovery phase of a project and the Execution part that delivered the insights and data in a more governed, production setting.

NewImage

This was the theme of the product announcements around analytics, BI, data warehousing and big data during Openworld 2014, with Oracle's Omri Traub in the photo below taking us through Oracle's big data product strategy. What Oracle are doing here is productising and "democratising" big data, putting it clearly in the context of their existing database, engineered systems and BI products, and linking them all together into an overall information management architecture and delivery process.

NewImage

So, working through from ingestion to data analysis: these steps have typically been performed by data scientists using scripting tools and rudimentary data visualisation engines, making them labour-intensive and reliant on a small set of people conversant with these tools and processes. Oracle Big Data Discovery is aimed squarely at these steps, and combines Apache Spark-based data preparation and transformation capabilities with an analysis and visualisation engine based on Endeca Server.

NewImage

Key features of Big Data Discovery include:

  • Ability to analyse, parse, explore and “wrangle” data using graphical tools and a Spark-based transformation engine
  • Create a catalog of the data on your Hadoop cluster, and then search that catalog using Endeca Server search technologies
  • Create recommendations of other datasets that might interest you, based on what you’re looking at now
  • Visualize your datasets to help understand what they contain, and discover new insights

Under the covers it comprises two parts: the data loading, transformation and profiling part, which uses Apache Spark to do its work in parallel across all the nodes in the cluster, and the analysis part, which takes data prepared by Apache Spark and loads it into the Endeca Server in-memory engine to perform the analysis, aggregation and data visualisation. Unlike the Spark part, the Endeca Server element runs on just one node and limits the size of the analysis dataset to what can run in-memory in the Endeca Server engine, but in practice you're going to work with a sample of the data rather than the entire dataset at that stage (in time the assumption is that the Endeca Server engine will be unbundled and run natively on YARN, giving it the same scalability as the Spark-based data ingestion and transformation part). Initially Big Data Discovery will run on-premise with a cloud version later on, and it's not dependent on Big Data Appliance – expect to see something later this year / early next year.

Another new product that addresses the discovery phase and discovery lab part of a big data project is Oracle Data Enrichment Cloud Service, from the Oracle Data Integration team and designed to complement ODI and Oracle EDQ. Whilst Oracle positioned ODECS as something you’d use as well as Big Data Discovery and typically upstream from BDD, to me there seemed to be a fair bit of overlap between the products, with both tools doing data profiling and transformation but BDD being more focused on the exploration and discovery part, and ODECS being more focused on early-stage data profiling and transformation.

NewImage

ODECS is clearly more of an ETL tool complement and runs natively in the cloud, right from the start. It’s most probably aimed at customers with their Hadoop dataset already in the cloud, maybe using Amazon Elastic MapReduce or Oracle’s new Hadoop-as-a-Service and has more in common with the old Data Quality Option for Oracle Warehouse Builder than Endeca’s search-first analytic interface. It’s got a very nice interface including a mobile-enabled website and the ability to include and merge in external datasets, including Oracle’s own Data as a Service platform offering. Along with the new Metadata Management tool Oracle also launched at Openworld it’s a great addition to the Oracle Data Integration product suite, but I can’t help thinking that its initial availability only on Oracle’s public cloud platform is going to limit its use with Oracle’s typical customers – we’ll have to just wait and see.

The other major product that addresses big data projects was Oracle Big Data SQL. Partly addressing the discovery phase of big data projects but mostly (to my mind) addressing the exploitation phase, and the execution part of the information management architecture, Big Data SQL gives Oracle Exadata the ability to return data from Hive and NoSQL on the Big Data Appliance as well as data from its normal relational store. I covered Big Data SQL on the blog a few weeks ago and I’ll be posting some more in-depth articles on it next week, but the other main technical innovation with the product is its bringing of Exadata’s SmartScan feature to Hadoop, projecting and filtering data at the Hadoop storage node level and also giving Hadoop the ability to understand regular Oracle SQL, rather than the cut-down version you get with HiveQL.

NewImage

Where this then leaves us is with the ability to do most of a big data project using (Oracle) tools, bringing big data analysis within reach of organisations with Oracle-style budgets but without access to rare data scientist-type resources. Going back to my diagram earlier, a post-OOW big data project using the new products launched in this last week could look something like this:

NewImage

Big Data SQL is out now and depends on BDA and Exadata for its use; Big Data Discovery should be out in a few months time, runs on-premise but doesn’t require BDA, whilst ODECS is cloud-only and runs on a BDA in the background. Expect more news and more integration/alignment from the products as 2014 ends and 2015 starts, and we’re looking forward to using them on Oracle-centric Hadoop projects in the near future. 

Product Updates for BI, Data Integration, Exalytics, BI Applications and OBIEE

Other news announced over the week for products we more commonly use on projects include:

Finally, something that we were particularly pleased to see was the updated Oracle Information Management Architecture I mentioned earlier referenced in most of the analytics sessions, with Oracle’s Balaji Yelamanchili for example introducing it in his big data and business analytics general session mid-way through the week. 

NewImage  

We love the way this brings together the big data components and puts them in the context of the wider data warehouse and analytic processes. Compared to a few years ago, when Hadoop and big data were considered completely separate from data warehousing and BI and done by staff completely different to the core business analytics team, this new reference architecture puts it squarely within the world of BI and analytics we work in. It also emphasises the new abilities Hadoop, NoSQL databases and big data can bring us – support for wider sets of data sources with dynamic schemas, the ability to economically work with and analyse much larger datasets, and support for discovery-type upfront analysis work. Finally, it recognises that to get true value out of analysis that starts on Hadoop, you eventually need to add proper data governance, make the results more widely available using full SQL tools, and use the right tools – relational databases, OLAP servers and the like – to analyse the data once it's in a more structured form.

If you missed our write-up on the updated Information Management Reference Architecture you can read our two-part blog post here and here, read the Oracle white paper, or listen to the podcast with OTN Archbeat's Bob Rhubart. For now though I'm looking forward to seeing the family after a week and a half away in San Francisco – thanks to OTN and the Oracle ACE Director Program for sponsoring my visit over to SF for Openworld, and we'll post our conference presentation slides later next week when we're back in the UK and US offices.

Categories: BI & Warehousing

Multi Sheet Excel Output

Tim Dexter - Thu, 2014-10-02 17:28

I'm on a roll with posts. This blog can be rebuilt ...

I received a question today from Camilo in Colombia asking how to achieve the following.

‘What are my options to deliver Excel files with multiple sheets? I know we can split 1 report into multiple sheets with the BIP Advanced Options, but what if I want to have 1 report / sheet? Where each report in each sheet has an independent data model ….’

Well, it's not going to be easy if you have to have completely separate data models for each sheet. That would require generating multiple Excel outputs and then merging them, somehow.

However, if you can live with a single data model with multiple data sets – i.e. queries that connect to separate data sources – something like this:
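A minimal sketch of what those data sets might look like – the queries and table names here are made up for illustration, with each one in practice pointing at its own data source:

-- Data set G1: department data
select department_id, department_name from departments

-- Data set G2: employee data
select employee_id, last_name, salary from employees

-- Data set G3: sales summary
select product_id, sum(amount_sold) amount_sold from sales group by product_id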


Then we can help. Each query returns its own data set, but they will all be presented together in a single output that BIP can then render. Our data structure in the XML output would be:

<DS>
 <G1>
  ...
 </G1>
 <G2>
  ...
 </G2>
 <G3>
  ...
 </G3>
</DS>

Three distinct data sets within the same data output.

To get each to sit on a separate sheet within the Excel output is pretty simple. It depends on how much native Excel functionality you want.

Using an RTF template you just create the layouts for each data set on a page (or pages), separated by a page break (Ctrl-Enter). At runtime, BIP will place each output onto a separate sheet in the workbook. If you want to name each sheet you can use the <?spreadsheet-sheet-name: xpath-expression?> command. More info here. That's as sophisticated as it gets with the RTF templates. No calcs, no formulas, etc. Just put the output on a sheet, bam!

Using an Excel template you can get more sophisticated with the layout.



This time though, you create the layout for each data set on a separate sheet. In my example, sheet 1 holds the department data, sheet 2 the employee data, and so on. Some conditional formatting has snuck in there too.

I have zipped up the sample files here.

FIN!

Categories: BI & Warehousing

Working Capital and Inventory Management

Dylan's BI Notes - Sun, 2014-09-28 23:24
The importance of inventory management can best be described via working capital management. Cash is the blood of the company. The cash conversion cycle is measured as (Accounts Receivable + Inventory – Accounts Payable) / (Total Revenue / 365). These YouTube videos from Mark West from Redstack Analytics describe this CCC metric very […]
Categories: BI & Warehousing

Supply Chain Questions

Dylan's BI Notes - Sun, 2014-09-28 22:42
The key questions are about inventory and orders. What is our inventory status, by product and by location? Do we have enough inventory on hand and in process to meet customer demand? How can we maximize customer satisfaction while minimizing the cost and size of inventory on hand? Both overstocking and understocking are bad […]
Categories: BI & Warehousing

Oracle Exalytics X4-4 - Bigger, Better, Stronger

Look Smarter Than You Are - Sun, 2014-09-28 10:49
X4-4 - Same price as the X3-4 but with more power

The big announcement about it is today at OpenWorld (it would be awesome if they mentioned it during the Intel keynote tonight), but the Exalytics X4-4 is actually available now. It's the same price as the X3-4 ($175,000 at list, not including software, maintenance, tax, title, license, yada yada). This does mean the X3 is - effective immediately - no longer available, but then again, since the new one is the same price, I'm not sure why anyone would want the older one. No word yet on whether you can upgrade an X3 to an X4, but since they did offer an upgrade kit from X2 to X3 (though I never heard of anyone buying it), I'm guessing there will be one for those wanting to make an X3 into an X4.
X4-4 Specs

The main improvement over the X3 is the number of cores: it's still 4 Intel chips, but those chips all now have 15 cores on them, meaning the X4 has 60 cores compared to the X3's 40 cores. Here are the important details:

  • 4 Intel Xeon E7-8895v2 processors running at 2.8 - 3.6 GHz
  • 8 - 60 cores (capacity on demand, more on this in a second)
  • 2 TB of RAM
  • 2.4 TB of PCI flash
  • 7.2 TB of hard disk running at 10K RPMs (not that fast these days)
  • 2 Infiniband ports running at 40 Gb/s
  • 4 Ethernet ports running at up to 10 Gb/s
Cool Thing 1: Variable Speed & Cores
You probably heard about this last July. Oracle worked with Intel to design a line of their Xeon E7-889x chips specifically for Oracle. What we didn't realize until we saw it show up on the X4 spec sheet was that the chips were going in the Exalytics X4. Simply put, on the fly, Exalytics can vary how many cores it uses, and when it uses fewer cores, the speed goes up. If it's running 15 cores per chip, Intel sets the speed to 2.8 GHz. If it's only using 2 cores per chip the speed goes all the way to 3.6 GHz (a GHz is one billion clock ticks per second).

But wait, you math geniuses say.  Isn't 3.6 * 2 less than 2.8 * 15 (so why wouldn't Oracle just always leave all 60 cores on at the slower speed)?  Well, yes, if you're actually using all those cores, and this is where you know the chip was apparently designed for Essbase (though it did premiere in Exadata first).  As much as I love my Essbase, there are still transactions that end up single threading (or using far less than the available cores on the box).
Say I'm running a massive allocation and despite my best efforts (and FIXPARALLEL), it's still single threading or running at 8 CPUs or fewer.  In this case, Exalytics is now smart enough to talk to those impressive new E7-8895v2 chips, scale down to as few cores as are actually needed, and in the process, up the clock speed for the remaining cores.  Take that, commodity hardware.  This really is the killer feature that makes Exalytics do something no other server running Essbase can do.
On a side note, Intel seems to drop the power on the unused cores to nearly zero, meaning the power consumption of your Exalytics box actually lowers on demand. So if your boss won't sign off on your new Exalytics X4, tell her she hates the planet.
Cool Thing 2: You Don't Need BIFS

Per the current Engineered Systems Price List (buried down in note 13), you no longer have to purchase BIFS (BI Foundation Suite) to buy Exalytics (either the X4 or T5). You can now own BIFS, OBIEE, Essbase+, or Hyperion Planning+ without having to get a VP to sign off on a special exemption. That's right, Planning people preferring to purchase pure premium power: you can now buy Exalytics. With this change, I presume that any new Planning customer looking for the best user experience will be buying Exalytics X4 along with Planning.
Also buried in the footnotes, you apparently can now buy Exalytics for as few as 20 named users.  Last time I checked (and I don't read every edition of the footnotes, haters who think I have no life), the minimum was 100 named users.
What's Next: HFM on Exalytics

We heard about it on the opening developer's day at Kscope: HFM should finally run on Exalytics in version 11.1.2.4 (which we're hoping to see by the end of 2014). I'm not sure if it will run on both the T5 (Solaris) and the X4 (Linux) by year-end, but Linux is almost a given. That said, I don't work for Oracle, so don't base any buying decisions on the belief that HFM will definitely run on the X4. Just when it happens, be pleasantly surprised that you can now consolidate all your major Oracle Business Analytics apps together.
So any T5 news? Not at the moment. It's still available, running its 128 cores with 4 TB of RAM (and other cool things), so if you're looking for major horsepower and server consolidation, look to the T5.
I'll be updating this post after the OpenWorld keynote to include any new Exalytics news, but if you hear any other Exalytics updates in the meantime, post them in the comments.
Categories: BI & Warehousing

EPM and BI Meetup at Next Week’s Openworld (and details of our Oracle DI Speakeasy)

Rittman Mead Consulting - Fri, 2014-09-26 10:08

Just a short note to help publicise the Oracle Openworld 2014 EPM and BI Meetup that’s running next week, organised by Cameron Lackpour and Tim Tow from the ODTUG board.

This is an excellent opportunity for EPM and BI developers and customers to get together and network over drinks and food, and chat with members of the ODTUG board and maybe some of the EPM and BI product management team. It's running at Piattini, located at 2331 Mission St. (between 19th St & 20th St), San Francisco, CA 94110 from 7pm to late, and there are more details at this blog post by Cameron. The turnout should be pretty good, and if you're an EPM or BI developer looking to meet up with others in your area this is a great opportunity to do so. Attendance is free and you just need to register using this form.

Similarly, if you’re into data warehousing and data integration you might be interested in our Rittman Mead / Oracle Data Integration’s Speakeasy event, running on the same evening (Tuesday September 30th 2014) from 7pm – 9pm at Local Edition, 691 Market St, San Francisco, CA. Aimed at ODI, OWB and data integration developers and customers and featuring members of the Rittman Mead team and Oracle’s Data Integration product team, again this is a great opportunity to meet with your peers and share stories and experiences. Registration is free and done through this registration form, with spaces still open at the time of posting.

Categories: BI & Warehousing

Introduction to Oracle BI Cloud Service : Service Administration

Rittman Mead Consulting - Thu, 2014-09-25 20:17

Earlier in the week we looked at the developer features within Oracle BI Cloud Service (BICS), aimed at departmental users who want the power of OBIEE 11g without the need to stand up their own infrastructure. We looked at the process of uploading spreadsheets and other data to the Oracle Database Schema Service that accompanies BICS, how you create the BI Repository that translates the tables and columns you upload into measures, attributes and hierarchies, and then took a brief look at how dashboards and reports are created and then shared with other users in your department. If you're coming in late, here are the links to the previous posts in the series:

One of the design goals for BICS was to reduce the amount of administration work an end-user has to perform, and to simplify and consolidate any tasks that they do have to do. Behind the scenes BICS actually comprises a BI environment, and a database environment, with most of the administration work being concerned with the BI one. Let’s start by looking at the service administration page that you see when you first log into the BICS environment as an administrator, with the screenshot below showing the overview page for the overall service.

NewImage

Oracle BI Cloud Service is part of Oracle’s overall Oracle Platform-as-a-Service (PaaS) offering, with BICS being made up of a database service and a BI service. The screenshot above shows the overall availability of these two services over the past two weeks, and you click on either the database service or the BI service to drill into more detail. Let’s click on the BI service first.

NewImage

The BI service dashboard page shows the same availability statuses again, along with a few graphs to show usage over that period. Also on this page are details of the start and end dates for the service contract, details of the SFTP user account you'll need for some import/archive operations, and a link to Presentation Services for this instance, to launch the OBIEE Home Page.

The OBIEE home page, as we saw in previous posts in this series, has menu items for model editing, data uploading and creating reports and dashboards. What it also has though is a Manage menu item, as shown in the screenshot below, that takes you through to an administration function that lets you set up application roles and backup/restore the system.

NewImage

Application roles are the way that OBIEE groups permissions and privileges and then assigns them to sets of users. With on-premise OBIEE the only way to manage application roles is through Enterprise Manager Fusion Middleware Control, but with BICS this functionality has been moved into OBIEE proper so that non-system administrators can perform this task. The users you work with are the ones defined for your service (tenancy), and using this tool you can assign them to existing application roles, create new ones, or group one set of roles within another. Users themselves are created as part of the instance creation process, with the minimum (license) number of users for an instance being 10.

NewImage

The Snapshots tab on this same Service Console page provides access to a new, system-wide snapshot and restore function that provides the means to version your system, restore it from a backup and transport a dev/test environment to your production instance. As I mentioned in previous postings in the series, each tenant for BICS comes with two instances, one for dev/test and one for prod, and the snapshot facility gives you a means to copy everything from one environment into another, for when you've completed development and testing and want to put your dashboards into production.

NewImage

Taking a snapshot, as shown in the screenshot above, creates an archive file containing your RPD, the catalog and all the security settings, and you can store a number of snapshots within each environment, giving you a (very coarse-grained) versioning ability. What you can also do is download these snapshots as what are called "BI Archive" files, as shown in the screenshot below, and it's these archive files that you can then upload into your other instance to give you your code promotion process – note however that applying an archive file overwrites everything that was there before, so you'll need to be careful doing this once users start creating reports in your production environment. Really, it's a once-only code promotion facility, followed by a way of backing up and restoring your environments.

NewImage

Note also that you'll separately need to back up and restore any database elements, as these aren't automatically included in the BI archive process. Backup and restoration of database elements is done via the separate database instance service page shown below, where you can export the whole schema or just parts of it, and then retrieve the export file via an SFTP transfer.

NewImage

So that's it in terms of BICS administration, and for our initial look at the BI Cloud Service platform. Rittman Mead are of course offering services around BICS and cloud BI in general, so contact us if you'd like to give BICS a spin, and keep an eye on the blog over the next few weeks where we'll take you through the example BICS application we built, reporting against Salesforce.com data using their REST API.

Categories: BI & Warehousing

Database Links

Tim Dexter - Thu, 2014-09-25 13:39

Yeah, it's been a while; moving on ...

I got a question a week back asking how BI Publisher could handle db links. The customer currently has db links from DB1 to DB2 and uses them in their queries. Could BIP handle the syntax and pass it on to the database in its SQL, or could it handle the link another way?

select e1.emp_name
, e1.emp_id
,e2.manager
from emps e1
, emps@db2 e2
where e1.manager_id = e2.id

Well, there is the obvious way to create the join in BIP. Just get rid of the db link altogether and create two separate database connections (db1 and db2). Write query A against db1 and query B against db2. Then just create a join between the two queries – simple.

But what if you wanted to use the db link? Well, BIP would choke on the @db2 you would have in the SQL – some silly security rules that, no, you cannot turn off even if you want to. But there are ways around it; the choking, not the security. Create an alias at the database level for emps@db2, so that BIP can parse the resulting query. Let's assume I create an alias in the db for my db-linked table called 'managers'. Now my query becomes:

select e1.emp_name
, e1.emp_id
,e2.manager
from emps e1
, managers e2
where e1.manager_id = e2.id

BIP will not choke; it will just pass the query through, and the db can handle the linking for it.
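The alias itself can be as simple as a synonym or a view over the db-linked table, created once in db1 – something along these lines (a sketch using the same names as above):

-- option 1: a synonym pointing at the remote table
create synonym managers for emps@db2;

-- option 2: a view, if you only want to expose the columns the report needs
create or replace view managers as
select id, manager from emps@db2;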

That's it, that's all I got on db links. See you in 6 months :)




Categories: BI & Warehousing

Introduction to Oracle BI Cloud Service : Building Dashboards & Reports

Rittman Mead Consulting - Thu, 2014-09-25 04:00

This week we've been looking at the new Oracle BI Cloud Service (BICS), the cloud version of OBIEE11g that went GA at the start of this week. Rittman Mead were part of the beta program for BICS and spent a couple of weeks building a sample BICS application to put the product through its paces, creating a reporting application for Salesforce.com that pulled in its data via the Salesforce REST API and staged it in the Oracle Database Schema Service that comes with BICS. Earlier in the week we looked at how data was uploaded or transferred into the accompanying database schema, and yesterday looked at how the repository was created using the new thin-client data modeller. Today, we'll look at how you create the dashboards and reports that your users will use, using the Analysis and Dashboard Editors that are part of the service. If you're arriving mid-way through the series, here are the links to the other posts in the series:

In fact, creating analyses and dashboards is the part of BICS that has changed least compared to the on-premise version. In keeping with the "self-service" theme for BICS there's an introductory set of guidance notes when you first connect to BICS, like this:

NewImage

and the dashboard and analysis editors are available as menu options on the Home page, along with a link to the Catalog view, like this:

NewImage

From that point on though it's standard Answers and Dashboards, with the normal four-tab editor view within Answers (the Analysis Editor) and the ability to create views, calculations, filters and so on. Anyone familiar with Answers will be at home within the cloud version, and there's a new visualisation – the heat map view, as shown in the final screenshot later in this article – that hints at other visualisations that we'll see featured first in the cloud version of OBIEE, which is expected to be updated more frequently than the on-premise version (one of the major selling points for customers looking to adopt new features as soon as possible with OBIEE).

NewImage

What's missing from this environment though are features like Agents and alerts, scorecards and BI Publisher, or the ability to create actions other than links to other web pages or catalog content.

NewImage

These are features that Oracle are saying they'll add back in time as the underlying infrastructure for BICS builds out, and of course the whole UI is likely to go through a rev with the 12c release of OBIEE due sometime in 2015. Dashboards are also created in the same way as with on-premise OBIEE, with the same Dashboard Editor and access to features like conditional display of sections and support for presentation variables.

NewImage

So, that wraps-up our quick tour around the analysis and dashboard creation parts of Oracle BI Cloud Service; tomorrow, to finish-up the series we’ll look at the administration elements of BICS including new self-service application role provisioning, tools for administering and monitoring the instance and for backing-up and migrating content from one instance to another.

Categories: BI & Warehousing

Introduction to Oracle BI Cloud Service : Creating the Repository

Rittman Mead Consulting - Wed, 2014-09-24 04:00

Earlier in this series we've looked at the overall product proposition for Oracle BI Cloud Service (BICS), and how you upload data to the Database Schema Service that comes with it. Today, we're going to look at what's involved in creating the BI Repository that holds the metadata about your logical tables, calculations and dimension hierarchies, using the new thin-client data modeller that, like the rest of BICS, runs entirely within your web browser. For anyone coming into the series mid-way, here are the links to the other posts in the series:

So anyone familiar with OBIEE will know that a central part of the product, and the part that makes it easy for users to work with their data, is the business-orientated semantic model that you create over your source data. Held within what's called the "BI Repository" and made up of physical, logical and presentation layers, the semantic model turns what can be a complex set of source tables, joins and cross-application links into a simple-to-understand set of subject areas made up of fact tables and dimensions. Regular on-premise OBIEE semantic models can get pretty complex, with joins across different database types and logical tables with several different ways of providing their data – for example at detail level from an Oracle data warehouse, and at summary level from an Essbase cube – and to edit them you use a dedicated Windows development tool called BI Administration.

Allowing these complex data models, and having a dependency on a Windows-based development tool, poses two main issues for any consumer-style version of OBIEE. First, if the aim of the service is to attract customers who want to create their systems "self-service", you've got to make the repository development process a lot simpler than it currently is – you can't expect customers to go on a course or buy my excellent book when they just want to get a dashboard up and running with the minimum of fuss. You also can't realistically expect them to install a Windows-only development tool back at the office, as most of the target customers won't have admin privileges on their workstations, or they might even be using Macs or working out of a browser; and then, even if they get it installed, you'll need to ensure there's a network connection available to the BI Server in the cloud through their corporate firewall. Clearly, a browser-based repository creation tool was needed, ideally one that did some of the basic work automatically for the user and didn't need hours or days of training to understand. Of course, the risk is that you create a repository editing tool that's too "dumbed-down" for most developers to find useful, and we'll consider that possibility later in the article.

So following the data upload process that we covered in yesterday’s post, we’re now in a position where we’ve got a number of tables sitting in Oracle Database Schema Service, and we’re ready to build a repository to report against them. To access the thin-client data modeller you click on the Model menu item on the BICS homepage, as shown in the screenshot below.

NewImage

The modeller itself supports a simplified subset of what you can create with the full BI Administration tool. You've got a single source, the Oracle Database Schema Service, and a single business model. Business model tables have a logical table source as you'd normally expect, but just the one LTS is currently supported. Calculations within logical tables are supported, but they're logical-level only (i.e. post-aggregation), with no support yet for physical-level (pre-aggregation) calculations.

NewImage

Level-based hierarchies within the business model are supported, including skip-level and ragged ones, and there’s support for time-series dimensions including their own editor.

NewImage

Where possible, introspection is used when creating the business model components, with table joins and matching column names used to create candidate logical joins. Static and dynamic repository variables, along with session variables, are supported, with the front-end also supporting presentation and request variables – so all good there.

NewImage

Under the covers, each tenant within BICS has their own RPD and their own catalog, and any edits to the repository that you perform are effectively "online" edits. To make edits to an existing model the developer therefore has to first "lock" the model, make their changes and add their new entries, then validate them, and then either revert the model or publish the changes.

NewImage

In the background BICS updates the RPD using the metadata web service API for the BI Server, with the RPD it creates being the same format as the ones we create on-premise, just with a smaller set of features supported through the thin-client admin tool.

As I mentioned in the first post in the series, each tenant install of BICS comes with two instances: one for development or pre-prod and one for production. To move a completed repository out of one environment into another, a new feature called a "BI Archive" is used – a snapshot of your BICS system that includes the repository, the catalog and any security objects you create. In this first version of BICS each import is total and overwrites everything that was in the instance beforehand, so there's no incremental import or ability to selectively import just certain objects or certain reports into a new environment, meaning that you'll lose any reports or dashboards created in production if you subsequently refresh it from dev/pre-prod – something to bear in mind.

One other thing to be aware of is that there's no ability to create alias tables or opaque views in the thin-client modeller, so if you want to create additional copies of a dimension table for more than one dimension role, or you want to create a table using an arbitrary SELECT statement, you'll need to go into ApEx and create a database view instead – not a huge imposition as ApEx comes with tools for creating these pretty easily, but something that will lead to a more complex database model in time. The screenshot below shows one such database view then exposed through the thin-client modeller, where you can see the SELECT statement behind it (but not alter or amend it except through ApEx).

NewImage
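As a minimal sketch of that workaround – the view and the underlying table are hypothetical names, not the one in the screenshot – a second "role" copy of a date dimension is just a database view created through ApEx's SQL Workshop:

-- expose the same underlying date table a second time, for use as a separate dimension role
create or replace view ship_date_dim as
select date_key,
       calendar_date,
       calendar_month,
       calendar_year
from   order_date_dim;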

Finally, the thin-client modeller supports row-level and subject area security, using filters or object permissions to set up manually or create by reference to application roles granted to your users. We’ll look at what’s involved in setting up security and application roles in the final post in this series, where we look at administering your BICS instance.

So, that’s a high-level view of the repository creation process; in tomorrow’s post, we’ll look at what’s involved in creating reports and dashboards.

Categories: BI & Warehousing

Introduction to Oracle BI Cloud Service : Provisioning Data

Rittman Mead Consulting - Tue, 2014-09-23 04:00

In the first post in this series I looked at the new Oracle BI Cloud Service, which went GA over the weekend and which Rittman Mead have been using these past few weeks as part of a beta release. In that post I looked at what BICS is and who it's aimed at in this initial release, and went through the features at a high level; over the rest of the week I'll be looking at the features in detail, starting today with the data upload and provisioning process. Here are the links to the rest of the series, with the items getting updated over the week as I post each entry in the series:

As I mentioned in that first post, "Introduction to Oracle BI Cloud Service : Product Overview", BICS in this initial release is to my mind aimed at departmental use-cases where someone wants to quickly upload and analyse an offline dataset and share the results with other members of their team. BICS comes bundled with Oracle Database Schema Service and 50GB of storage, and OBIEE in this setup reports just against this data source, with no ability to reach out dynamically to other data sources or blend those sources with the main one in Oracle's cloud database. It's aimed really at users with a single source of data to work with, who've probably obtained it as an export from some other system and just want to be able to report against it, though as we'll see later in this post it is possible to link to other SaaS sources with a bit of PL/SQL wizardry.

So the first task you're likely to perform when working with BICS is to upload some data to report on. There are three main options for uploading data to BICS, two of which are browser-based and aimed at end-users, and one that uses SQL*Developer and is more aimed at developers. BICS itself comes with a menu item on the home page for uploading data, and this is what we think users will use most as it's built into the tool and fairly prominent.

NewImage

Clicking on this menu item launches an ApEx application hosted in the Database Schema Service that comes with BICS, and which allows you to upload and parse XLS and delimited file-types to the database cloud instance and then store the contents in database tables.

NewImage

Oracle Database Schema Service also comes with Application Express (ApEx) as a front-end, and ApEx has similar tools for uploading datasets into the service, with additional features for creating views and PL/SQL packages to process and manipulate the data – something we used in our beta program example to connect to Salesforce.com and download data using their REST API. In theory you shouldn't need to use these features much, but SIs and partners such as ourselves will no doubt use ApEx a lot to build out the loading infrastructure, data cleansing and other features that you might want for a packaged cloud app – so get your PL/SQL books out and brush up on ApEx development.

NewImage
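As a hedged sketch of that pattern – the REST endpoint, the staging table and the lack of any payload parsing are all placeholder simplifications, not our actual Salesforce.com loader – a PL/SQL block in ApEx can pull data from a REST API with the APEX_WEB_SERVICE package and stage the response for later processing:

declare
  l_response clob;
begin
  -- call a (hypothetical) REST endpoint and get the response body back as a CLOB
  l_response := apex_web_service.make_rest_request(
                  p_url         => 'https://example.com/api/accounts',
                  p_http_method => 'GET');

  -- a real loader would parse the JSON or XML payload here before
  -- inserting rows into the tables the BI model reports against
  insert into stg_rest_payloads (loaded_at, payload)
  values (sysdate, l_response);

  commit;
end;
/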

The other way to get data into BICS is to use Oracle SQL Developer, which has a special Oracle Cloud connector type that allows you to view and work with the cloud database objects as if they were regular local ones, and to upload data to the cloud in the form of "carts". I'd imagine these options will get extended over time, either by tools or utilities Oracle release for this v1.0 BICS release, or by BICS eventually supporting the full Oracle Database Instance Service that'll support regular SQL*Net connections from ETL tools.

NewImage

So once you've got some data uploaded into Database Schema Service, you'll end up with a set of source tables from which you can create your BI Repository. Check back tomorrow for more details on how BICS's new thin-client data modeller works and how you create your business model against this cloud data source, including how the repository editing and checkout process works in this new, potentially multi-user development environment.

 

Categories: BI & Warehousing