Rittman Mead Consulting
Most readers of this blog will be aware of Oracle Openworld, Oracle’s annual conference and exhibition held in San Francisco, California. Openworld is a particularly important event for us as it’s when most of the key new product announcements are made, and it’s also a chance for us to catch up with our customers, colleagues and friends in the industry, and share our experiences implementing Oracle’s BI and analytics products.
This year we’re pleased to have six speakers representing us at the Openworld conference delivering sixteen sessions in total, with Jon, Mark, Stewart and Venkat joined by James Knight (Head of Advanced Analytics) and Jérôme Françoisse, who’s been heading-up our ODI 12c beta program participation in Europe. We’re pleased to be taking part in the ODTUG, EOUC, APOUC and IOUG User Group Sunday Symposiums on the Sunday, and presenting as part of the main conference from Monday through to Friday. Mark, Venkat and Stewart will also be taking part in the Oracle ACE Director / Oracle ACE activities, and we’ll be at the Authors Symposium and the DW Global Leaders’ forum during the main week – so say hello if you see one of us!
We’ve put together a page on the website outlining our sessions and where you can find us over the Openworld week, with links to the relevant Content Catalog entries and some general information about Oracle Openworld. We’ll also be running a series of webinars after Openworld closes, so if you’d like some more information on Rittman Mead at Oracle Openworld 2013, check out the website page.
A recent release of Qlikview added a new feature called “Direct Discovery” that allows Qlikview reports to access a database data source directly, on-demand. By default, Qlikview loads all required data up-front into its own in-memory column-store database, but this new Direct Discovery feature allows it, in addition, to retrieve data from a SQL data source on-demand, to supplement this in-memory dataset.
This should allow larger datasets to be accessed from within Qlikview reports, and I therefore began to wonder if I could use this new functionality to access some of the very large data sets that we often see with OBIEE reports. In fact – could I use the Qlikview UI but with the OBIEE RPD as the data source, potentially replacing the Qlikview in-memory cache completely, instead using OBIEE’s BI Repository, federated query capability and its own caching options? If so, not only would this give me access to all of the joined-up heterogeneous data sources contained in the RPD, but also the comprehensive security model that OBIEE provides. So let’s first see if I can use Qlikview to access data via an OBIEE RPD; I’ll then cover the Direct Discovery feature in the next post.
To do this, I’m going to use a local copy of the SampleApp V305 Virtual Machine, which contains an OBIEE environment that uses a number of different data sources available via the RPD. I created a simple OBIEE report containing Product, Date and Revenue from the I – Sample TimesTen folder, which I’m going to replicate using Qlikview with the OBIEE RPD as the data source:–
The first task is to use the ODBC driver for the BI Server, installed as part of the standard OBIEE installation, to create a system DSN that gives Qlikview access to the RPD.
I started Qlikview and selected File > New. By default this starts a wizard that expects a file-based data source; using a database data source is a much more manual task, so cancel this wizard and choose File > Edit Script to open the Script editor.
Click the Connect button and select the appropriate BIServer ODBC connection, supply the user name and password, click OK, and a connection statement will be added to the script.
Click the Select button and you will see the folders and objects you normally see in the Presentation layer of the RPD, shown as tables and columns. Using this wizard to create SQL select statements for Products, Time and Revenue, I created the following script:–
ODBC CONNECT TO QlikViewTest (XUserId is LMUNCIJOPDZcWSFPDD, XPassword is HECQSARMNLacWYIGTRNA);
SQL SELECT * FROM "I - Sample TimesTen".Products;
SQL SELECT * FROM "I - Sample TimesTen"."Time";
SQL SELECT "1- Revenue" FROM "I - Sample TimesTen"."Base Facts";
Running this script shows that data is retrieved, but when it finishes nothing is displayed. This is because Qlikview does not have a default presentation, so I add some report objects and get the following:–
I can see some data, but what I cannot do is the classic Qlikview thing of clicking on a data item and having the rest of the data change to reflect that choice. It’s useful to look at the table joins: select File > Table Viewer to see what Qlikview has done with the datasets:–
It is easy to see that none of the datasets are linked, and all of the datasets need to be linked to allow Qlikview to highlight associated data.
Going back to the Script editor, I can see that the select that retrieves the Revenue does not have any key columns that Qlikview can use to link the values to the other data. Qlikview automatically links datasets using common field names.
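To illustrate the linking principle with a deliberately made-up example (these tables are not part of the SampleApp RPD), aliasing a column so that two loads share a field name is all it takes:

SQL SELECT CustomerID, CustomerName FROM Customers;
SQL SELECT "Cust ID" AS CustomerID, OrderValue FROM Orders;

Because both result sets now contain a field called CustomerID, Qlikview associates the two datasets automatically; with no shared field names, as in our Revenue select, the datasets remain disconnected islands.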
I reopen the Script Editor, highlight the Revenue select and click the Select button, then scroll down to the Base Facts table and examine the available fields. Using this wizard I can only select the fields that have been put into the folder in the presentation layer, and these do not include the necessary product and time keys. This gives me two options:–
- Add suitable key values to the appropriate presentation layer folders
- Manually create a suitable piece of logical SQL
I want to avoid putting key values into measure folders in the presentation layer, as that goes against best practice and harms usability. A much easier option is to manually create some logical SQL; in fact, OBIEE provides this on the Advanced tab of the Analysis editor.
I don’t need all of this generated logical SQL, but it is a good starting point. I copy it into my Qlikview script editor and edit it to retrieve just the data I need, so my script is now:–
ODBC CONNECT TO QlikViewTest (XUserId is LMUNCIJOPDZcWSFPDD, XPassword is HECQSARMNLacWYIGTRNA);
SQL SELECT * FROM "I - Sample TimesTen".Products;
SQL SELECT * FROM "I - Sample TimesTen"."Time";
SQL SELECT
  "I - Sample TimesTen"."Products"."P4 Brand" "P4 Brand",
  "I - Sample TimesTen"."Time"."T05 Per Name Year" "T05 Per Name Year",
  "I - Sample TimesTen"."Base Facts"."1- Revenue" "1- Revenue"
FROM "I - Sample TimesTen";
Notice that I have removed some of the generated SQL and changed the aliases to the actual column names to help Qlikview link the data. If I execute the script I now get a much more Qlikview-style report, where I select a Brand and Qlikview shows me the related years and the total revenue:–
A quick look at the table view shows one of the most important parts of a Qlikview report, the dataset links:–
Now that I’ve got Qlikview using the RPD and can reproduce the data I saw in OBIEE, I want to extend the logical SQL and make a more interesting report. So I manually edit the SQL to get this:–
ODBC CONNECT TO QlikViewTest (XUserId is cJbQJYJOPDZcWSFPCC, XPassword is CHAeeARMNLacWYIGTBUA);
ProductTable:
LOAD Product, Brand;
SQL SELECT "P1 Product" as Product, "P2 Product Type", "P4 Brand" as Brand FROM "A - Sample Sales".Products;
SQL SELECT "E - Sample Essbase"."Time"."T05 Per Name Year" as Year FROM "E - Sample Essbase";
RevenueFact:
SQL SELECT
  "I - Sample TimesTen"."Products"."P1 Product" as Product,
  "I - Sample TimesTen"."Time"."T05 Per Name Year" as Year,
  "I - Sample TimesTen"."Base Facts"."1- Revenue" as Revenue
FROM "I - Sample TimesTen";
With this script, I’ve only selected the data objects I need to link and display, and I’ve used the power of the RPD to select the product data from an Oracle database, the time details from an Essbase cube, and the revenue data from a TimesTen database. These separate datasets are then combined into a single Qlikview report:–
From this brief article it can be seen that we can successfully use the Qlikview UI with the OBIEE RPD to create reports, but three things are necessary:–
- an ODBC connection to the BIServer
- an appreciation of the Presentation layer, so that the correct objects can be accessed
- manual creation or editing of generated logical SQL, to enable Qlikview to successfully link datasets together
In the next part, I will look more closely at the different logical SQL options and have a look at the Direct Discovery feature to retrieve data on-demand from the RPD, thereby avoiding duplication of data and potentially using TimesTen or Essbase as the underlying in-memory database.
A couple of months ago Peter Scott and I wrote a series of posts on using Exalytics with the Oracle BI Apps, with the idea being to try and use Exalytics’ in-memory database cache to speed up BI Apps dashboards. We looked at a number of approaches, including using OBIEE11g’s Summary Advisor to replicate commonly-used BI Apps aggregates into TimesTen, whilst Pete looked at the more manual route of using TimesTen’s ttImportFromOracle utility, which I expanded on a couple of weeks later. But there’s another way you can replicate data into TimesTen that doesn’t involve the Summary Advisor, scripts or indeed anything outside of a regular BI Apps installation – it’s the DAC, which in the 11g release has a feature for replicating tables into the TimesTen in-memory database. So how does this work, and is it a viable solution for moving your BI Apps data warehouse in-memory?
As a bit of background for anyone new to Exalytics and TimesTen who knows the BI Apps, there are two typical use-cases for TimesTen when used in the context of OBIEE:
- As an in-memory aggregate cache, automatically populated and managed by OBIEE’s Summary Advisor, as part of the Exalytics hardware and software package
- As a regular database but stored in-memory, used to hold detail-level data warehouse data plus any aggregate tables you choose to add-in
Strictly speaking, the second use-case can be done outside of Exalytics, but you’ll most probably need the TimesTen table compression feature, only available within Exalytics, if you want to store any significant volume of data in TimesTen. In fact, given the typical size of data warehouses, including the BI Apps data warehouse, it’s unlikely that you’ll ever get your whole dataset into TimesTen as it has to store everything in-memory, so you’ve then got to choose one of the following:
- Just store aggregates in TimesTen, which is the route Oracle went down with Exalytics
- Only try this with data marts or other small data warehouses
- Put just the “hot” data from your data warehouse into TimesTen – for example, the last couple of years’ data across the data warehouse, or a particular fact table and its dimensions
It’s this third approach that the new DAC 11g is most suited to, with functionality to mark one or more BI Apps DW tables as “in-memory”, then replicate them into TimesTen using an ETL engine actually built-in to the DAC, so there’s no need to add new Informatica routines or otherwise customise your main load routine. It does, however, come with some significant limitations, some of which you can work around but others you’d need to just live with. Let’s go through what’s involved though, using BI Apps 7.9.6.4 and DAC 11g as currently downloadable from OTN.
DAC 11g introduces a number of new concepts, and the one we’re interested in for this example is the “external executor”. This allows the DAC to use ETL engines other than Informatica to do work for it, so you could use ODI, for example, to do some ELT-centric work in support of a wider Informatica load if that made more sense. Another type of external executor that you can register is called “Data Copy”, a special additional data movement feature that’s actually built-in to DAC 11g and uses parallel Java processes to copy data from one table to another. The TimesTen replication feature in DAC 11g uses this to move data from the BI Apps data warehouse into TimesTen tables, so let’s register this Data Copy external executor before we do anything else, by switching to the Setup view within the DAC console and creating a new entry within the External Executors section.
Notice the ODI 11g Embedded Agent option that’s also there; for now, though, I give the external executor a name, press the Generate button to create the parameters for this particular external executor type, then review the default settings for the parallel data load engine. DAC 11g executes the data load using multiple Java JDBC calls that take subsets of the source table’s data and copy them into the target table, and you’ll most probably need to fiddle around with these settings for your own environment, to get the best mix of concurrent data load vs. overall capacity vs. avoiding TimesTen table locks.
Now we can go and mark some tables as being “in-memory”, ready for replicating them into TimesTen using this new feature.
Space is of course at a premium with TimesTen, so it’d make sense to start off with a small source BI Apps DW table and try and replicate that, to get a sense of the space it takes up and how that might extrapolate to the wider source data set. W_AR_BALANCE_F in my sample BI Apps data warehouse has just 62 rows in it, and it joins to a number of dimension tables, one of which (W_PROFIT_CENTER_D) has just 162 rows of data in it. So let’s start with these two, and see where things go.
To select tables for replicating into TimesTen, switch to the Design view, then use the Tables tab to locate the tables, and check the In Memory checkbox, like this:
I then do the same for the W_PROFIT_CENTER_D table, with the DAC Console creating copies of the table metadata in the new source system container I’d set up prior to this exercise.
For each of these tables, I now also need to define the datatype lengths and precisions of each column. To do this, with each of the two tables selected, click on the Columns tab and then navigate to the In Memory Length and In-Memory Precision columns, and either copy across the default values for each column, or use a utility such as ttImportFromOracle (if your data warehouse source is the Oracle Database) to first generate optimal datatype lengths, then copy those settings into the in-memory column definitions.
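For reference, a ttImportFromOracle run looks something like the following – the connection details are placeholders and the exact flags vary between TimesTen releases, so treat this as a sketch and check the utility’s help output:

ttImportFromOracle -oraConn dwh_user/password@orcl -tables W_AR_BALANCE_F W_PROFIT_CENTER_D -typeMap 2,1 -outputDir /home/oracle/ttscripts

The scripts written to the output directory then contain the optimised column definitions that you can copy into the DAC’s in-memory length and precision fields.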
Now that the TimesTen tables are defined in the DAC repository, I need to set up a connection from the DAC to the TimesTen server. To do so, I perform the following steps:
1. Ensure TimesTen 11.2.2.4+ (ideally 11.2.2.5+) is installed, either on the server running OBIA, or on another server with the TimesTen client installed on the OBIA server.
2. Copy across the TimesTen library files from $TIMESTEN_HOME/lib to $DAC_HOME/dac/lib – I just copied and pasted all of the .jar and .lib files from the TimesTen /lib directory into the DAC one (see the one-liner after this list).
3. If you’re running the DAC server on a 64-bit machine, and you’ve got 64-bit TimesTen installed, alter the DAC config file so that it points to a 64-bit JVM rather than the default 32-bit one that gets installed with the DAC client – I pointed it to the JVM that ships with the 64-bit OBIEE installation on this server.
4. Create the required TimesTen client ODBC DSN on your OBIA server, and the TimesTen Data Manager server DSN if you’ve got both the TimesTen server and client on your OBIA machine.
5. Go into the DAC client, select Setup, then the Physical Data Sources tab, then create a new data source that points through to the TimesTen Client ODBC DSN.
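For step 2, on Linux the copy is a single command, assuming $TIMESTEN_HOME and $DAC_HOME point at the respective installs:

cp $TIMESTEN_HOME/lib/*.jar $TIMESTEN_HOME/lib/*.lib $DAC_HOME/dac/lib/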
At the end of these steps you should have an additional physical data source connection registered in the DAC console that successfully connects through to your TimesTen database, like this:
Now you need to create a logical data source that we’ll use later to point to this physical connection. To do so, from the DAC Client select Tools > Seed Data > Logical Data Sources, then create a new logical data source called DBConnection_TimesTen.
To create the data copy tasks, I do the following:
1. Switch to the DAC Client Design view, select the Tables tab, press Query, check the In Memory checkbox, then press Go. The list of tables marked as in-memory, to be replicated into TimesTen, is then displayed – in this case, the W_AR_BALANCE_F and W_PROFIT_CENTER_D tables.
2. Right-click anywhere in this list of tables, and select Generate Data Copy Tasks. When prompted, select the All Records in the List radio button.
3. When the Generate Data Copy Tasks dialog is shown, select the following values:
Primary Logical Source: DBConnection_OLAP
Primary Logical Target: DBConnection_TimesTen
Task Phase: Post Load
Note that the Task Phase can be changed to suit where you want the data copying to happen in a real-life project, and that you can check the Enable Incremental Data Copy checkbox if you want to update just the new and changed data in your TimesTen tables – see the online docs on this feature for more details. Once done, press OK to have the DAC Client create the data copy tasks for you.
You’ll also need to create the corresponding TimesTen tables, a task you can also do from the DAC Client. With the same set of In Memory-flagged tables displayed, this time select the Generate DW Table Scripts option, like this:
When the Generate DW Table Scripts dialog is shown, select TimesTen as the Physical Data Source Type, check the Execute checkbox, and then select the tt_dac option for the Physical Data Source, like this:
The DAC Client will now generate the TimesTen table DDL scripts and execute them, to create the tables that will hold your replicated DW data. Assuming all is well, the Generate DW Table Scripts dialog should show a success message like this:
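To give an idea of what comes out of this step, the generated DDL is plain TimesTen SQL along these lines – an illustrative, trimmed-down sketch only, with made-up column details, as the actual datatypes reflect the in-memory settings entered earlier:

CREATE TABLE W_PROFIT_CENTER_D (
  ROW_WID            TT_BIGINT NOT NULL,  -- standard BI Apps surrogate key
  PROFIT_CENTER_NUM  VARCHAR2(30) INLINE, -- illustrative attribute column
  W_UPDATE_DT        DATE
);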
You should then repeat this to create the DW index scripts, which the DAC Client will base on the indexes on the original BI Apps DW tables. As I talked about in my blog post on indexing TimesTen tables a few weeks ago, you should follow up this replication process with the TimesTen Index Advisor, which generates further index recommendations based on the actual query workload on your system, and can dramatically improve the performance of queries against TimesTen tables compared to what you’d get with the default DAC-generated indexes.
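If you’ve not used the Index Advisor before, the capture runs from ttIsql using a set of built-in procedures, broadly like this (from memory, so verify the exact arguments against the TimesTen documentation):

call ttIndexAdviceCaptureStart(0, 0);  -- start capturing for this connection
-- run a representative dashboard or query workload here
call ttIndexAdviceCaptureEnd(0);       -- stop the capture
call ttIndexAdviceCaptureOutput(0);    -- list the recommended CREATE INDEX statements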
Now back to the data copy tasks. These new tasks now need to be added to an existing, or a new, subject area so that you can then include them in an execution plan. In this instance, I’m going to create a new subject area for the tasks, then add the new tasks to this subject area – in reality, you might choose to include these tasks in your existing DAC subject areas, so that the TimesTen replication happens as the rest of the load takes place, but in my example I’m going to keep this replication separate from my main data load to keep things simple. To create this subject area and add the data copy tasks to it, I do the following:
1. From the DAC Client, select the Design view, then the Subject Areas tab. Press New to create a new subject area and call it Data Copy Tasks, then press Save.
2. With the subject area still open for editing, select the Tasks tab, press the Add / Remove button, then type in “DataCopy*” into the Name field and press Go. The two data copy tasks should then be displayed. Press the Add All button to add these tasks to the right-hand list of selected tasks, then press OK.
Now it’s a case of either adding this subject area to an existing DAC execution plan or creating a new execution plan just for this data copy subject area. I’m going to do the latter.
1. Switch to the Execute view, select the Execution Plans tab, then press New, and call the new execution plan DataCopy Tasks, and press Save.
2. Now with that execution plan still open for editing, switch to the Subject Areas tab and use the Add / Remove button to add the new subject area you just created to this execution plan.
3. Now switch to the Connectivity Parameters tab and press the Generate button. Once the parameter generation process completes, select the following values for the two parameters:
This connects the tasks’ logical data source connections to the physical ones, including the physical TimesTen database connection I created earlier in the process.
4. Finally, for this stage, press the Build button above the list of execution plans with this one still selected, to create the list of ordered tasks for this new execution plan.
Now we’re ready to run the data copy process and replicate these two DW tables into the TimesTen database. To do so, again within the Execute view and with the new execution plan selected, press Run. Once the execution plan completes, you should see it listed under the Run History tab, hopefully with a success message next to it.
Finally, if you move over to SQL*Developer and create a connection through to the TimesTen database, you should be able to see the replicated data in these two tables.
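A couple of quick counts in SQL*Developer confirm the copy worked, with the row totals matching the source tables mentioned earlier:

SELECT COUNT(*) FROM w_ar_balance_f;     -- expecting 62 rows
SELECT COUNT(*) FROM w_profit_center_d;  -- expecting 162 rows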
So there you have it – replication of BI Apps DW data into equivalent TimesTen tables, all handled by the DAC and its internal JDBC-based ETL engine. But … what happens next, and how do you get these tables into the BI Apps repository and make them available to the BI Apps dashboards, so that they can be used in-place of the regular Oracle tables to speed up queries? Well – you’ll have to wait for the follow up…. ;-)
We are honoured to announce that Rittman Mead have been nominated in 5 categories for the UKOUG Partner of the Year Awards. We have been nominated in:
- Business Intelligence Partner of the Year
- Training Partner of the Year
- Managed Services (Outsourcing & Operations) Partner of the Year
- Emerging (New Products) Partner of the Year
- Engineered Systems Partner of the Year
We believe these awards should be a recognition both of the services we provide to our customers, and of the information and thought leadership we provide to the wider community through activities like the blog and conference presentations. These awards are voted for by Oracle end users and community members.
The fact that we have been nominated for 5 awards hopefully shows the growth of the company, and moreover the growth in the significance of data, analytics and Business Intelligence to companies as a whole.
Below is a little bit of information about the categories and what we have been doing in each to get nominated. If you’d like to vote, please click here.

Business Intelligence
Business Intelligence is our core competency, and a category we have won three times in the last five years. We pride ourselves on our deep technical ability, delivery methodology, thought leadership and knowledge sharing.

Training
Ever since we started the company, training has been at its heart. We have trained hundreds of people in the Business Intelligence and Data Integration community, both in paid-for bootcamp-style courses, and in workshops, labs, and various sessions at conferences and other similar events.
We believe our blog is one of the key sources of OBIEE knowledge globally; pretty much everyone we meet in the industry has used it at some point in their career.

Managed Services
Last year we put significant focus and investment into creating a global services team with the goal of providing different levels of business as usual and post implementation support to our clients.
The types of service include operational continuity/business relevance, expert advisory, engineered systems (Exalytics) and a virtual Business Intelligence Competency Centre (BICC).

Emerging (New Products)
Rittman Mead has always looked to balance its skills across Oracle’s core Business Intelligence and Data Integration tools and the bleeding-edge ones. The last 12 to 18 months have seen a significant change in the landscape of these tools, especially with the advent of Big Data and Oracle’s acquisition of Endeca.
Rittman Mead implemented the first post-acquisition UK-based Endeca project, are running the first Real Time Decisions (RTD) project of this Oracle fiscal year, and have done a significant amount of work with R, including a pioneering study of the New York transit system.

Engineered Systems
Rittman Mead has been at the forefront of the Business Intelligence Engineered Systems (Exalytics) wave. Following its announcement at Oracle Open World 2011, we ordered the first Exalytics server to be bought in the UK. Since then we have been working with Oracle and several of our customers to fully understand it and work out how to maximise its benefits for clients of all sizes.
Awards like this are great recognition for all the hard work our team put in, so I would ask you to register your vote here.
A few months ago I posted a series of articles on the new BI Apps 11.1.1.7.1 release, which uses ODI11g in the background to perform data loads, rather than Informatica PowerCenter. Since then, I’ve collaborated with Accenture’s Kevin McGinley on a “getting started” BI Apps 11.1.1.7.1 installation cookbook, where I went through the install and Kevin did the data load, with the aim being to provide readers with a “minimum viable install” OBIA 11.1.1.7.1 system. For me though, the most interesting part of this new release is how it embeds ODI11g into the platform infrastructure, and how elements of ODI are used (and extended) to provide a proper, embedded ETL tool that Oracle can presumably take forward in a way that they couldn’t with Informatica. In the next couple of blog posts, then, I’m going to look at this ODI integration in a bit more detail, starting today with “lifting up the lid” on the configuration and data load process, to see just how the various BI Apps platform tools integrate with ODI and control how it works.
If you’ve worked with earlier, Informatica-based releases of the BI Apps, you’ll be familiar with the many, many manual steps involved in configuring and integrating the various platform components, including two initial configuration phases that are particularly involved and prone to operator error:
- Linking all of the ETL, ETL control (DAC), database and other elements together, so that the DAC can run tasks that in turn run Informatica Workflows that in turn load data into the BI Apps data warehouse
- Preparing lots of CSV files to contain the various domain values, segment codes, flex field details and so on for your source systems, even though you know these same details are held in tables in EBS, for example.
BI Apps 11.1.1.7.1 simplifies this setup process greatly, though, by automatically pushing all source and target configuration details entered into the web-based Configuration Manager application directly into ODI’s repository. You can see this in action when defining the EBS data source in our cookbook example, where in the screenshot below connection details to the EBS database are entered into Configuration Manager, and then appear thereafter in ODI Studio’s Topology navigator.
Configuration Manager stores copies of these settings in its own metadata tables, in this instance C_DATA_SERVER in a schema called DEV_BIACOMP, which can be thought of as similar to some of the DAC repository tables in BI Apps 7.9.x; in this instance though, Configuration Manager automatically pushes the values through to the ODI repository held in the DEV_BIA_ODIREPO schema, rather than you having to manually create the same entries in Informatica Workflow Manager yourself.
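If you’re curious, you can see the stored connection rows for yourself with a quick query against the Configuration Manager schema – the schema and table names are as above, though the exact column list will vary by release:

SELECT * FROM dev_biacomp.c_data_server;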
It’s a similar story with the setting up of domain values – what happens with BI Apps 11.1.1.7.1 is that you configure a special type of load plan, analogous to BI Apps 7.9.x’s execution plans, to read from the various EBS (in this instance) metadata tables and set up the domain values automatically. In the screenshots below, the one on the left shows one of the special “domain extract only” load plans being set up for a particular set of fact table groups, while the screenshot on the right shows the same load plan being executed, loading domain values into the Configuration Manager metadata tables.
Notice also that what ODI11g is using to load data into these tables is a standard ODI11g 11.1.1.5+ load plan, and it’s presumably to support the BI Apps and their more complex loading routines and dependencies that load plans were introduced. And, because these are standard load plans, when they go wrong they’re a lot easier for an ODI developer to diagnose and debug than Informatica/DAC load routines, which left log files all over the place and used Informatica terminology for the load, rather than ODI’s familiar ELT approach. In the screenshots below, this domain-load process has failed at the point where data starts to get read from the EBS instance, and looking at ODI’s error message view, you can quickly see that the error was caused by the EBS server being unreachable.
So far the screenshots of ODI’s internals that you’ve seen are from ODI Studio, the Java thick-client developer tool that all ODI developers are familiar with. But the BI Apps tries to hide the details of the ETL tool from you, treating it as an embedded, “headless” server that ideally you administer as much as possible from Configuration Manager (for system-wide configuration) and Functional Setup Manager (for application-specific configuration). To achieve this, Configuration Manager can run ODI Console embedded within its web view, so it’s possible to click on a load plan, view its status and drill into the individual steps, all from your web browser, with no ODI Studio install needed.
So how do these load plans get generated, when there’s no DAC and no historic way within ODI of dynamically generating load plans based on metadata? What enables this dynamic creation of load plans is a new utility included with BI Apps 11g called “Load Plan Generator”, which ships as a JEE library within the WebLogic domain and as a plug-in to ODI Studio, for creating test load plans as part of the ETL customisation process.
This blog post by Oracle’s Saurabh Verma describes the Load Plan Generator JEE back-end utility in a fair bit of detail, but in summary you can think of the load plan generation process as going like this:
1. The ODI developer decides to create a new load plan, for loading data about inventories from EBS, for example. As shown in the screenshots before, domain values for this area within EBS will need to have been loaded prior to this, but assuming this is in place, the developer first selects one or more fact table groups, with each fact group a kind of “subject area” containing one or more data warehouse fact tables.
2. In the background, ODI doesn’t have the concept of fact groups; instead it uses flex field metadata for each fact table to specify which fact group each belongs to. You can see what’s included in each fact group by looking at the warehouse domains view in Configuration Manager, and you can see the fact table flex fields in ODI Studio, when you view details of the table (or “model”) in the Designer navigator.
3. So when the BI Apps ODI developer tells Configuration Manager to create a new load plan, the steps it goes through and the metadata it consults look like this:
4. This gives you, in the end, a load plan with a standard set of phases, and optimally-selected and orchestrated load plan steps to load data into the required fact groups.
5. Then, once the load plan runs, you’ve got the same in-built restartability capabilities that you get in all other load plans, together with the concept of load plan instances, exceptions and so forth, so again as an ODI developer all of this is fairly obvious and fairly transparent to debug.
In the background, WebLogic Server hosts the ODI agent within an ODI-specific managed server and within the overall BI Apps WebLogic domain, with ODI’s security linked to WebLogic so that the same developer and operator IDs work across the whole system, and with a set of additional Fusion Middleware Security application roles for the various levels of administrative access.
So – you could say that BI Apps 11.1.1.7.1 is a “developer’s dream” for ODI professionals, as it brings the productivity and source-system compatibility benefits of the BI Apps to the familiar, more efficient world of ODI. Everything is connected, and you can see where everything is stored in the various underlying repository tables. But – and this is probably the most obvious next thought from experienced BI Apps developers – how do you go about customising these ETL routines, adding new data sources, and upgrading the system over time? And where does the new integration with GoldenGate come in, giving us the ability to break out of restrictive load windows and potentially move BI Apps into the cloud – watch this space for more details.
A few months ago Charles Elliott and I were tasked with assessing OBIEE performance for a client here in the US. Queries were taking hours to run (literally) and obviously users weren’t happy. In the context of BI, there are many places where performance can be improved (database tuning, data warehouse modeling, query writing, etc.) but we decided to start with the RPD.
One of our resources at this client pointed out to us that every query they ran used a database view (no, not a materialized view) which in turn sourced a handful of physical tables. These tables were all joined in the view using outer joins. So every single query the BI Server ran had to go through all the joins contained in this view, even if it didn’t need data from all the tables in question. We called this the “Inception Effect” (remember the movie?) because you always had to go five levels deep, and each one took longer than the last.
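To make the problem concrete, here’s a deliberately simplified sketch of that kind of view – not the client’s actual object, just the general shape:

CREATE VIEW sales_reporting_v AS
SELECT o.order_id, o.amount, c.customer_name, p.product_name, s.store_name
FROM   orders o
LEFT OUTER JOIN customers c ON o.customer_id = c.customer_id
LEFT OUTER JOIN products  p ON o.product_id  = p.product_id
LEFT OUTER JOIN stores    s ON o.store_id    = s.store_id;

Ask only for order amounts, and the database still pays for all three outer joins, because the view is the only source the query knows about.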
In this post, I want to demonstrate this problem and how we can “trick” the BI Server into behaving the way we want.
IMPORTANT: Please keep in mind that for this first example, we are using version 11.1.1.6.0. This detail will be very relevant later.

The Problem
For this test, we’re obviously not using our client’s data. We will be working with data from the 2010 FIFA World Cup, and our tables are GOALS, PLAYERS and MATCHES. The GOALS table will be the source for both the logical dimension and fact, while PLAYERS and MATCHES will also provide additional information for our logical dimension. This is our physical model:
In our (very simple) business model, we have a fact table called Fact: Goals which uses the GOALS physical table as its only source:
And we also have a dimension table called Dim: GPM, which has one LTS with all three physical tables as sources (the columns in this table were named for easy identification of the source table). Note the join types for this logical table source:
So, let’s see what this looks like when we create a very simple query in Answers.
Note that we have a column from the GOALS table, one from the PLAYERS table and one from our fact. As expected, our query should include one outer join (remember that the LTS for our dimension included a couple of outer joins), but upon closer examination, we see that it actually includes both outer joins, even though it didn’t need to include the MATCHES table.
Furthermore, if we remove the ‘Players – Name’ column as well (leaving only columns that come from the GOALS table), the two outer joins still show up:
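The physical SQL generated in this case looks something like the following (simplified, with hypothetical physical column names):

SELECT   t1.goal_desc, SUM(t1.goals)
FROM     GOALS t1
LEFT OUTER JOIN PLAYERS t2 ON t1.player_id = t2.player_id
LEFT OUTER JOIN MATCHES t3 ON t1.match_id  = t3.match_id
GROUP BY t1.goal_desc;

Both joins are present even though neither PLAYERS nor MATCHES contributes a single column to the result.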
You may be wondering: why do I care, as long as the result is still correct? Well, in our client assessment this was extremely important, because the outer joins included in every query were causing them to run much slower than they should.

The Solution
The solution in these situations is to “trick” the BI Server into using the outer-joined logical table sources only when needed. It requires a little more development time, but in the end it is well worth it. Let’s take a look at what our model looked like with this approach and what the results were.
As you can guess from the LTS names, the GOALS LTS includes only the GOALS table as its physical source, the GOALS_PLAYERS LTS includes the GOALS and PLAYERS tables, while the GOALS_PLAYERS_MATCHES LTS includes all three of them. The order of the logical table sources matters, as you want your first choice on top and your last choice at the bottom. Alternatively, you can use Priority Groups to determine the order in which the BI Server will try to use each LTS based on the query criteria. Let’s see if it actually works.
In our multiple LTS test, I added an extra column to our model called ‘LTS ID’ that will help us identify in Answers the LTS being used for each query.
So this is our first result set:
This query includes only columns from the GOALS table, plus our LTS ID column, which shows the LTS being used. The important thing, however, is that when we look at the SQL issued by the BI Server, we see that no outer joins were included, indicating once more that the BI Server was smart enough to choose the most efficient LTS for this query.
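In sketch form (again with hypothetical column names), the generated SQL collapses to just the fact source:

SELECT   t1.goal_desc, SUM(t1.goals)
FROM     GOALS t1
GROUP BY t1.goal_desc;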
If we include a column from the PLAYERS table to our request, this is what we see:
And the SQL validates our theory, including only one outer join:
Lastly, if we add a column from the MATCHES table to the request:
Our SQL should include both outer joins, as it is using the LTS seen in the request, with all 3 tables:
Recently, we heard a rumor from Oracle Product Management that with the 11.1.1.7 release of OBIEE, the BI Server would be a little smarter about pruning outer-joined logical table sources. We decided to put it to the test, and the results were encouraging.
We went back to the one LTS idea:
But in this case, if the theory is correct, the BI Server should be able to prune the unnecessary tables despite the outer joins.
When we run a request in Answers with columns from all three tables:
The BI Server will issue both outer joins in the query as expected:
However, if we remove the ‘Matches – Venue’ column:
The BI Server will remove the MATCHES table from the query, therefore eliminating one of the outer joins:
And finally, if we leave only columns from the GOALS table in our query, no outer joins are included:
And all this, using one single LTS in 11.1.1.7.0.
Now, I’m not claiming that the newer versions of OBIEE will be able to prune every unnecessary outer join in every single situation. This was a simple example, with only a few tables, but it does show that Oracle has been working on this. And that’s very good news if you are an RPD developer.
Until next time!
A few months ago I wrote a series of posts about the new 12cR2 release of Oracle Enterprise Manager Cloud Control, which re-instated the BI Management Pack used for monitoring OBIEE 11g systems. Since then, the 12cR3 update for Cloud Control has been released, and one of the new features it added to the BI Management Pack was support for adding Exalytics systems and monitoring them as a whole. So how does this new feature work, and what’s involved in setting it up?
If you read my previous post on registering host systems for use with the BI Management Pack, the process for registering an Exalytics system is more or less the same, except that there’s now a specific target type of “Exalytics System” that can be used with the “Add Targets using Guided Process” registration step. In this instance, I’m using EM12cR3 to register an Exalytics server that’s running OBIEE 11g and the rest of the Exalytics software stack, including Essbase and TimesTen.
Then, assuming the Exalytics server doesn’t already have the EM Management Agent installed, one is installed on the Exalytics server automatically by the EM Management Server, so that the Exalytics server can then be monitored at a hardware and software level from the main Enterprise Manager web console.
Then, you can either use the auto-discovery feature in EM to “sniff-out” the various middleware components on the Exalytics server, or you can register them manually, this time registering the WebLogic domain on the Exalytics server and automatically picking-up all of the OBIEE components on the server.
Essbase Server gets picked up as part of the discovery process for the OBIEE Fusion Middleware components, but TimesTen has to be separately registered through a plug-in that you download into your EM environment. I covered the TimesTen plug-in download, registration and deployment process back in my earlier post on EM12cR2, and it’s the same with 12cR3: the TimesTen plugin is deployed to the Oracle Management Server in the EM environment, and then to the Management Agent on the Exalytics server. Once you’ve registered both the OBIEE elements and the TimesTen elements for your Exalytics server, you can see them together as part of the overall Exalytics “system”, as shown in the screenshot below.
The key addition here in 12cR3 is that Exalytics is now a first-class EM target type, and each Exalytics server is registered as an EM “system”, grouping together all of the hardware and software elements into a single management group.
Some basic host (server) monitoring comes out of the box with the BI Management Pack, and there are additional plug-ins and management packs you can use to get further insights into the hardware side, ILOM and so on.
Then, for each of the OBIEE11g and TimesTen elements, you can drill further into their various monitoring and management pages, just as you would with the 12cR2 version of Cloud Control and the BI Management Pack.
As I mentioned before, registering Exalytics Server targets is part of the new 12cR3 release of Enterprise Manager Cloud Control, and is part of the pay-extra BI Management Pack, which also gives you additional insights and management features for OBIEE and Essbase. All versions of Exalytics can be registered and managed by EM using this feature, but earlier releases will need a configuration file added to their servers to enable the EM discovery process to recognise them as proper Exalytics servers. All in all, not a bad extra feature for the BI Management Pack, and it brings Exalytics into line with the other Oracle Engineered Systems in terms of EM target registration – it’s also a pre-requisite if you’re planning to virtualise Exalytics using OVM and the “trusted partitions” licensing scheme.
The vast majority of our customers use Oracle Database as the underlying data source for OBIEE 11g, sometimes on Exadata but almost always the Enterprise Edition of the database. OBIEE typically sells best into customers that already extensively use Oracle products, and it’s often bundled in with a wider technology or apps deal including database and middleware products. But for some customers, the cost of the Oracle Database as a data source can be prohibitive, and in many cases they’re also looking at purchasing and deployment options that don’t involve upfront perpetual license deals and running the software on-premise. One of our longest-standing customers came to us with just such a scenario a few weeks ago, and so I thought it’d be interesting to take a look at what alternatives there are to licensing Oracle Database Enterprise Edition, and whether running these alternatives in the cloud is a practical option.
To set the context: imagine that you wanted to deploy a small-to-medium sized data warehouse on two servers:
- A “production” server with 64GB RAM and 4 x 4-core Intel x86_64 CPUs, running Oracle Linux 6 (for licensing purposes, 8 processors)
- A “dev/test” server with the same spec, onto which we’d install two separate databases
For now we’ll ignore the ETL element, but if we were licensing Oracle Data Integrator Enterprise Edition, we’d need to license it for these two servers too, at around $23k/processor.
So what are the alternatives to Oracle Database Enterprise Edition as a data source? To my mind, there are three main options:
- Trading down to fewer processors but keeping Oracle Database EE, or going down to a lower-specced version such as Standard Edition or Standard Edition One
- Using an alternative RDBMS, probably either open-source or a “community edition”
- The left-field option – using Hadoop, NoSQL, Hive and the other new “big data” technologies.
Let’s take a look at the Oracle options first of all. Based on 8 processors (4 x 4-core Intel x86_64 CPUs, 0.5 core multiplication factor), the cost of licensing Oracle DB EE plus commonly-used database options based on the current Tech Price List would be as follows (at list price):
- Oracle Database Enterprise Edition ($47.5k x 8 = $380k)
- Partitioning Option ($11.5k x 8 = $92k)
- EM Tuning Pack ($5k x 8 = $40k)
- EM Diagnostics Pack ($5k x 8 = $40k)
Giving us a total of $552k for 8 processors (covering both of our servers). Add on the standard 22% support and software updates, and you’re looking at an initial payment of $673k for the perpetual license, and an ongoing $121k/year for annual maintenance. Factor in a DBA and the hardware as well, and you’ll probably not get much change out of $800k in year one, and $250k in the years afterwards.
You could switch down to one of the cheaper editions of Oracle Database, which also don’t come with the add-on packs, taking you down to $5.8k/processor for SE1 or $17.5k for SE, but you’ll then have to do without most of the high-end data warehousing features in the Oracle database, and work within socket restrictions when it comes to SE1. Now of course you get what you pay for with Oracle Database, including potentially lower ongoing admin costs because much of the tuning and optimisation work is either unnecessary or automated, but if price is the issue and the alternative is a cut-down version of Oracle, what else is on offer – and what works with OBIEE?
If you start to look around at the alternatives to Oracle, and assuming you’re discounting products like IBM DB/2 or Microsoft SQL Server, you’ll find there are two main “conventional” types of RDBMS that you could use with OBIEE:
- General-purpose RDBMSs, such as MySQL and PostgreSQL
- “Analytical” databases, typically in-memory, column-store and shared-nothing, such as Greenplum and Vertica
If you look under the covers of the analytical databases, in most cases you’ll find that the core RDBMS technology they use is in fact PostgreSQL, with the (open-source) code then forked and adapted to work with the particular technology angle each vendor chooses to emphasise. Most developers tend to think that PostgreSQL is a more “enterprise-scale” free RDBMS than MySQL, and given that it also provides the core DB technology in the other products, how might this work with OBIEE?
As a bit of background, PostgreSQL is an open-source database of a similar vintage to Oracle Database, and you can download it for free or install it as part of most Linux distributions. Many developers swear by PostgreSQL as being all they need for data warehouse scenarios, but a couple of vendors have taken the PostgreSQL code and commercialised it, adding their own management tools and support options around the core open-source code. The most famous of these is EnterpriseDB, who package up PostgreSQL into a number of options, one of which is called Postgres Plus Advanced Server. If you’re looking for an open-source-based, low-cost but mature RDBMS engine for your database, this is more or less the flagship option, so I set about creating a new OBIEE 11g VM for testing purposes, with the aim of using Postgres Plus Advanced Server as my data source.
In fact, the first issue you hit when trying to go “non-Oracle” for the database is that you need an Oracle, SQL Server or IBM DB/2 database to install the RCU schemas into. PostgreSQL isn’t an option, but in reality most customers using OBIEE will have a small Oracle database available somewhere, so I cheated here and installed the RCU schemas into a spare Oracle database, and then went on with the install.
Installing Postgres Plus Advanced Server itself was pretty straightforward, with the installer separately downloading the core PostgreSQL files and the Apache HTTP server used for its web-based admin tools. On Windows, once you’ve installed the software there’s a new Start Menu entry with the EnterpriseDB-specific (value-add) tools, including a console-based Enterprise Manager-type tool called pgAdmin.
Opening up pgAdmin gives you an environment similar to what you’d find in tools such as TOAD, SQL*Developer and the old Java thick-client based Enterprise Manager, and using the tool I was able to connect to my PostgreSQL database, create a few tables, and run some test SQL queries.
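The tables and queries themselves were trivial – something along these lines, my own made-up example rather than anything from an Oracle schema:

CREATE TABLE sales_fact (
  sale_id   integer PRIMARY KEY,
  sale_date date,
  amount    numeric(10,2)
);
INSERT INTO sales_fact VALUES (1, DATE '2013-08-01', 150.00);

SELECT date_trunc('month', sale_date) AS sale_month, SUM(amount) AS total
FROM   sales_fact
GROUP  BY 1;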
It’s pretty basic stuff compared to Oracle’s Enterprise Manager product, but it may be all you need if you’re looking for a tactical, or departmental solution. EnterpriseDB price Postgres Plus Advanced Server at just under $5k/processor/year for the core database (one processor = one socket, however many cores), with their Enterprise Manager product costing $1500/year for two monitored hosts, and their “Solution Pack” another $5k/year, so based on the same server sizing as we used for the Oracle costings, the price of an EnterpriseDB solution looks a bit like this:
- Postgres Plus Advanced Server ($5k x 8 = $40k)
- Enterprise Manager ($1.5k)
- Solutions Pack ($5k)
Coming out at around $47k/year for the two servers. Now, the Oracle licenses were perpetual, not annual, but even so, the total cost of the on-premise PostgreSQL solution is about a third of just the annual maintenance cost of the Oracle software, let alone the software license cost, so it’s considerably cheaper. So what don’t you get with PostgreSQL – either core, or commercialised via EnterpriseDB – that you get with Oracle?
Now as a disclaimer, I’m mainly an Oracle person and can’t really speak with any great depth on PostgreSQL, but the major things that are missing in PostgreSQL compared to Oracle Database 11g are:
- Proper, in-built support for parallel query – PostgreSQL has an open-source GridSQL project but this looks more like a federated, shared-nothing solution rather than PQ within a single server
- Built-in ETL in the form of Oracle Warehouse Builder
- Anything like Oracle’s built in MOLAP server, Advanced Analytics and the like
- All the DR options, security options and so forth
And no doubt the core DW features – bitmap indexing, the cost-based optimiser, support for VLDBs and so on – are not nearly as developed, or as easy to manage, as with the Oracle database. But given that not all customers need these features, and that many customers even on 10g and 11g are still using the database as if it were Oracle 7 or 8i, lack of such features may not be a show-stopper if money is tight.
Now, so far we’ve been talking about running PostgreSQL “on-premise”, more or less as a direct substitute for Oracle Database; however, it’s also possible to run Postgres Plus Advanced Server on Amazon’s AWS cloud platform, with no local install of the software, simplified administration, and access to the same core Postgres Plus Advanced Server features billed at an hourly rate, giving you an architecture like the diagram below:
EnterpriseDB and AWS provide a free 24-hour trial, so I set up another database, this time on the Amazon cloud, provisioned it and created the same set of tables.
Connecting to the cloud-based Postgres Plus database was the same as connecting to a regular, on-premise one, and in fact this is more or less just commercial PostgreSQL on Amazon AWS. Instead of paying for software and support up-front as with regular Oracle or on-premise Postgres Plus Advanced Server, you’re charged, AWS-style, an hourly rate based on the size of instance – one comparable to the Oracle servers mentioned beforehand comes in at $6.48/hour or $56k/year, or $112k for the two servers we’d need, still overall less than even the Oracle software maintenance charge but covering hosting as well (though you’ll need to pay Amazon storage and bandwidth charges on top, too).
Connecting OBIEE 11g to these two databases is also fairly straightforward – the DataDirect Postgres ODBC drivers that come with OBIEE work fine, with the connection process involving creating an ODBC DSN to the Postgres database, either on-premise or on AWS, then importing the tables into the BI Repository and modelling them as normal. Connecting to the AWS cloud as a data source worked fine and wasn’t laggy, but you’d obviously need to test it at greater volume and with a realistic network connection. But – it works and seems to perform OK, at least with three rows…
So something like Postgres Plus Advanced Server in the cloud would probably appeal to a customer with a temporary requirement, a departmental budget, or an infrastructure that’s already in the cloud, most probably on Amazon AWS. But you can also run Oracle Database on Amazon’s AWS cloud as well, through a partnership between Oracle and Amazon’s RDS (“Relational Database Service”). In this case, you go to the Amazon AWS website, sign up for the service, put in your credit card details, and within a few minutes you’ve got a provisioned Standard Edition One database ready to connect to via all the usual tools. And what this is, again, is the standard Oracle Database (or with EnterpriseDB’s offering, Postgres Plus Advanced Server) but hosted on AWS, charged hourly, with two main options for pricing:
- A license-included option (Oracle Database Standard Edition One) at $3.14/hour for a reasonably-specced instance, coming out at about $27k/year, or $54k for the two servers we’d need
- A “bring-your-own-license” (BYOL) option, where you provide the Oracle licenses and Amazon AWS hosts them, at around $19k/year for the same size instance, or $38k/year for the two servers we’d need
So if you work on the assumption that Oracle Database SE1 is roughly equivalent to Postgres Plus Advanced Server in terms of DW features, it’s not a bad option, and probably a better one if you’re already heavily invested in Oracle and have lots of skills there. The BYOL option is really just about hosting, but at $38k/year to host the two servers, back them up and so on, it’s not a bad option compared to buying some servers and putting them somewhere.
So, with Amazon RDS or Postgres Plus Advanced Server in the cloud, if you’re happy to take the hit on features and scalability it’s possible to bring your database costs down considerably, which may be an option when your project first starts off, with the database platform reviewed, say, in a couple of years’ time to see whether it’s worth trading up to Oracle Database Enterprise Edition, or bringing the whole thing back in-house. But Postgres and Oracle aren’t the only players in the analytical database market – there are other on-premise vendors such as Greenplum and Vertica who’ll sell you either a full-priced, Oracle-comparable database or a cut-down, community edition for single-node installs, and there are new players such as Amazon again who are bringing out cloud-native analytic databases, again based on instance sizes and hourly charging – so let’s look at one of them, Amazon’s “Redshift”.
Amazon Redshift is actually based on an analytic database product called ParAccel, a column-store database in the same vein as Vertica but with major bits cut out in its Amazon Redshift form. That said, it’s a lot more interesting a starting point than core PostgreSQL or Oracle Database SE1, and given that it’s a core Amazon AWS product, it’s no doubt going to receive a lot of investment, developer evangelism and usage over the next few years. It’s also Postgres-compatible in terms of ODBC drivers, so viable for OBIEE, though like Postgres it’s unsupported, so you’ll not get much in the way of BI Server optimisations or help from Oracle Support if it goes wrong.
To connect OBIEE to Amazon Redshift there’s a set of ODBC and JDBC drivers that you can download, and once you’ve opened up the port to Redshift in the AWS firewall, you can create an ODBC connection on the OBIEE side just like most other PostgreSQL connections, and then import the Redshift metadata into OBIEE’s BI Repository.
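Because Redshift speaks the Postgres wire protocol (on port 5439 by default), you can also sanity-check connectivity with an ordinary psql client before going anywhere near the BI Repository – the cluster endpoint, user and database names below are placeholders:

psql -h mycluster.abc123xyz.us-east-1.redshift.amazonaws.com -p 5439 -U admin -d dev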
Redshift is priced around the same mark as Postgres Plus Advanced Server, but without the Amazon AWS storage and bandwidth charges. It’s a bit higher-risk than Postgres Plus, and it’s not got the Oracle Database compatibility features EnterpriseDB added to core Postgres, but it’s column-store, cloud-native and backed fully by Amazon.
So – with all of this in mind, what are the conclusions? Well, it’s fair to say that, based on what we’ve seen here and OBIEE’s pre-existing heterogeneous database support, you can connect non-Oracle databases to OBIEE as well as Oracle ones, but you’ll need to bear in mind that the newer cloud-based ones, and the open-source ones, won’t necessarily come with support from Oracle. If you’re looking to keep as close to an Oracle solution as possible but cut costs dramatically, using Amazon’s RDS service, particularly with the license-included SE1 option, could bring down the cost considerably, from $800k or so to around $50k/year, but you’ll need to work within the product constraints of SE/SE1 – so no materialised views, partitioning option and the like – so it’s probably a matter of preference whether you’d go for Oracle SE1 or a Postgres-based solution such as EnterpriseDB’s Postgres Plus Advanced Server (I’d probably go with Amazon RDS and Oracle Database SE1, if only because I can upgrade in time to Oracle Database Enterprise Edition and my Oracle skills can be re-used).
Going forward, analytic databases-as-a-service certainly sound interesting, with Amazon Redshift looking particularly promising. How much customers take up cloud-based data warehousing databases in general, though, will probably be limited until OBIEE itself goes into the cloud – either through cloud-friendly licensing policies that charge by the hour and instance capacity, or with a cloud-native version of OBIEE designed to interoperate with Oracle’s Cloud Database offering, which up until now doesn’t provide external SQL*Net access and is really designed for powering cloud-based Java apps. Watch this space, as they say – and it’ll be interesting to hear what Oracle announce in this area at next month’s Oracle Openworld.
Just a quick blog post to give an update about our Masterclass tour in India, run in partnership with ODTUG. As you probably know, Mark Rittman (Oracle ACE Director & Rittman Mead Co-Founder), Stewart Bryson (Oracle ACE, Rittman Mead America Managing Director) and myself (Oracle ACE, Rittman Mead India Managing Director) will be running a masterclass event in three cities – Bangalore, Hyderabad and Mumbai – in October 2013. If you want to hear about and see the latest in Oracle BI technologies from experts like Mark and Stewart, this is the event to attend. It runs immediately after Oracle Open World 2013, and amongst the three of us we have close to ten sessions at OOW2013, so if you would like to know the planned direction of the Oracle BI/EPM products, we will all be coming fresh out of Open World (watch out for the BI Analytics/EPM Product Update from Mark in particular).
The following topics are covered in the one-day event (in each city):
9.30am – 10.00am: Registration and Welcome
10.00am – 10.30am: Oracle BI, Analytics and EPM Product Update – Mark Rittman
10.30am – 11.30am: Reporting against Transactional Schemas using OBIEE11g – Stewart Bryson
11.30am – 12.30pm: OBIEE 11g Integration with the Oracle EPM Stack – Venkatakrishnan J
12.30pm – 1.30pm: Lunch & Networking
1.30pm – 2.30pm: OBIEE and Essbase on Exalytics Development & Deployment Best Practices – Mark Rittman
2.30pm – 3.30pm: Oracle BI Multi-user Development: MDS XML versus MUDE – Stewart Bryson
3.30pm – 4.00pm: Coffee Break & Networking
4.00pm – 5.00pm: Intro and tech deep dive into BI Apps 11g + ODI
5.00pm – 6.00pm: Metadata & Data loads to EPM using Oracle Data Integrator – Venkatakrishnan J
We have a few slots left in each event. If you would like to register yourself or your team, you can do so directly using the links below.
- Bangalore, Fortune Select Trinity Hotel, Whitefield: October 15th 2013, 10am – 6pm (IST)
- Hyderabad, Westin Mindspace, Hyderabad, Hitec City: October 17th 2013, 10am – 6pm (IST)
- Mumbai, Courtyard Marriott, Mumbai: October 19th 2013, 10am – 6pm (IST)
The events are also being run in partnership with ODTUG, a worldwide user group focusing on Oracle tools and technology including BI and EPM. We’re very pleased to have ODTUG’s involvement in the event, and we’ll be giving away ODTUG materials and associate memberships at each event.
More details about the event can be found here.
A few days ago you might have noticed the flurry of activity from Oracle around a new OBIEE 11g add-on called the “Oracle BI Mobile App Designer“. Designed to complement Oracle BI Mobile HD, Oracle’s “native client” mobile BI client, BI Mobile App Designer instead creates HTML5-based web apps that work not only with the iPad and iPhone, but with all HTML5-capable browsers including those in recent Android phones and tablets. So what is Oracle BI Mobile App Designer, how do you get it installed, and how do its mobile BI apps differ from Oracle BI Mobile HD?
To use the BI Mobile App Designer, all of the installation and configuration happens server-side, as the apps it creates are 100% thin-client, web-based apps. Therefore, there are no App Store downloads or additional software that you’ll need to get cleared with your security team, but you’ll need to ensure you’re on the very latest version of OBIEE 11g – 11.1.1.7.1 – and also apply some patches on top to enable the BI Mobile App feature. From what I can see, the complete set of patches is only currently available for Linux x86_64 and Windows x64, and assuming you’re on OBIEE 11.1.1.7.0 at the moment, the patches you’ll need to apply are as follows (for repeat installs, a scripted sketch follows the list):
- 16556157 (OBIEE 11.1.1.7.1 patchset, which also requires you to download another patch, 16569379)
- 17004920, which enables support for BI Mobile App Designer in the catalog, and the Presentation Services UI
- 17220944, the actual BI Mobile App Designer app, enabled through extending the OBIEE WebLogic domain
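If you’re repeating the install across several environments, you might script the OPatch runs; below is a rough, purely illustrative Python sketch. All paths are assumptions, the 11.1.1.7.1 patchset (16556157) is deliberately left out because its sub-patches each have their own README steps, and OPatch flags vary by version – so treat each patch README as the authority:

# Hypothetical sketch: apply the two BI Mobile App Designer patches with
# OPatch, once the 11.1.1.7.1 patchset itself is on. Paths are assumptions.
import os
import subprocess

ORACLE_HOME = "/u01/app/oracle/middleware/Oracle_BI1"  # assumed OBIEE Oracle home
PATCH_DIRS = [
    "/u01/patches/17004920",  # catalog and Presentation Services support
    "/u01/patches/17220944",  # App Designer itself; also needs the WebLogic
                              # domain extension step from its README afterwards
]

env = dict(os.environ, ORACLE_HOME=ORACLE_HOME)
opatch = os.path.join(ORACLE_HOME, "OPatch", "opatch")

for patch_dir in PATCH_DIRS:
    # OPatch expects to be run from inside the unpacked patch directory
    print("Applying patch in %s ..." % patch_dir)
    subprocess.check_call([opatch, "apply"], cwd=patch_dir, env=env)

# confirm what ended up in the inventory
subprocess.check_call([opatch, "lsinventory"], env=env)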
As the first patch actually contains six sub-patches, it’s quite a marathon to get all of them installed, but on my Oracle Linux 5 installation it all went smoothly and I was up-and-running in a couple of hours. Once you’ve installed everything, applied the security settings and started the BI Mobile App Designer application for the first time, you’ll now see it listed as a “create” option in the OBIEE 11g web interface, like this:
Access to BI Mobile App Designer apps is granted through the usual BIConsumer application role, with the ability to create new apps granted through BIAuthor (all of which, of course, can be customised). So let’s get on then and create our first mobile app.
The first choice you’re prompted with after selecting New > Mobile App is the device type: either Phone or Tablet.
I select Tablet, and I’m then prompted to select the data source type. At this point, it’s clear what technology BI Mobile App Designer is based on – BI Publisher 11g. This ability to source data either from a custom data model, or directly from the OBIEE BI Repository, is a feature of BI Publisher (direct access to the RPD came with an earlier 11g release), and in fact the best way to understand the role of BI Mobile App Designer compared to BI Mobile HD is to consider how BI Publisher complements OBIEE’s regular dashboards and analyses. BI Publisher accesses the same data sources as analyses and dashboards (more, in fact) and gives you a lot more flexibility and room to be creative than regular dashboards, with regular dashboards having the edge when it comes to “power-user” analysis and access to features such as action links, maps, KPIs and scorecards. By basing BI Mobile App Designer on BI Publisher technology (and keeping BI Mobile HD as it is), some interesting additional mobile analytics possibilities open up:
- You’ve got a lot more control and flexibility over the layout, allowing you to create “executive information system”-type dashboards, or ones focused on a particular role or LOB
- You can include images, text, custom navigation features
- You can support Android and other non-iOS devices, with no need for App Store downloads
- There’s no need to get the mobile app signed or approved by corporate security, as it’s a 100% thin-client, no-install application, with no data stored on the device
Let’s move on then, and create the application proper.
After you’ve initially saved the application to the Presentation Services catalog, you’re then presented with a cover page, a list of data items down the left-hand side, and options to create new pages and application elements across the top – you can see how this editor is based on the Online Layout Editor used by BI Publisher 11g. As an aside, I used the latest version of Safari on the Mac for this session, and didn’t hit any compatibility problems creating and viewing reports.
A BI Mobile App typically contains a number of pages, including this cover page (for which you can change the image, title and so on). As with BI Publisher templates, each page you create can have a grid-type layout, to which you then add components such as charts, tables, pivot tables and navigation items. In the example below, I’ve added a page using the Navigation page style, which automatically adds a filter object to the left-hand side, and allows me to drop charts and other components to the right of it, each of which automatically gets filtered by the selected column value.
A new component type introduced with this application is the Tile component, which takes a grouping column, then displays a set of charts or other components grouped by that column. The screenshot below on the left shows the page in design mode, whilst on the right I’ve previewed it in display mode, showing all of the tiles for that set of LOBs.
As well as filtering data using Filter components, there’s also what’s called the “Accordion” component – this lists out a set of column values plus subtotals, but when you click on a column value, a component such as a chart is then displayed for that value. In the screenshot below I’ve created an accordion navigation component for LOB, with each click of a column value showing sales broken down by product type within that LOB (and five points for whoever remembers what the computer on the right-hand side of the screen is).
There’s also the concept of “sub-pages”, pages which are linked to from the master page and which filter on the value that’s clicked-on, similar to the click-filtering that you get with BI Publisher but across two pages. In the example page above, the bar chart shown within the LOB accordion chart can now be clicked on to link to the LOB Detail Analysis page I’ve also set up, like this:
So once you’ve created your application, it’s time to try it out. You can either preview the app in your regular desktop web browser, or open it via a QR code on your tablet. Authentication is carried out as per OBIEE’s regular dashboard web application, but you can create a link directly to the BI Mobile App rather than have the user go through the standard OBIEE login page – or you can use a new URL, http://[server_name:port]/mobile/appstore, which presents them with the BI Mobile App Designer login page and then an “App Store” of mobile apps they can select and “subscribe” to, adding them to their personal app page.
Then, it’s a case of viewing and navigating around the mobile app, using the navigation items added earlier on, and viewing the charts, tables and other components contained within each page.
So – not bad at all, and a very interesting complement to the native iOS clients, especially if you’re on Android, you want a more custom, LOB-style app, or you’re concerned about the security implications of a thick-client app on your mobile device. No doubt we’ll see more imaginative mobile app layouts on the web and from Rittman Mead in the future, but for now – it looks good.
A few months ago I posted a series of articles on our blog about the new 11.1.1.7.1 release of the BI Applications, focusing on the new ODI 11g-based architecture and its supporting utilities. At the same time, Accenture’s Kevin McGinley was also investigating and writing about this new release, and co-presented on it with Oracle’s Florian Schouten at this year’s BI Forum event in Atlanta. At the time, the three of us agreed that what developers really needed was a step-by-step guide, or “cookbook”, for getting BI Apps 11.1.1.7.1 up and running in a test environment, as so much of the architecture was new and the official docs covered all eventualities rather than just the steps needed to get a “minimal install” up and running.
Kevin and I therefore went away and put such a guide together, based on a Windows-only (typically VM) environment, using Oracle E-Business Suite 12.1.3 as the data source and aiming to get a basic set of data into the BI Apps tables and dashboards. It’s just now been published on OTN as “Cookbook: Installing and Configuring Oracle BI Applications 11.1.1.7.1”, and whilst it’s by no means an exhaustive guide (ignoring Linux installs for example, or multi-tier installs), it takes you through the download, install, configuration and then load of some basic EBS data into the BI Apps data warehouse. For anyone who’s interested, I put the architecture and install piece together, with Kevin then tackling the parts around system configuration and the initial data load. Hopefully it’ll give readers a head-start in pulling a BI Apps 11.1.1.7.1 system together, show you how the ODI part works, and show how Informatica and the DAC are replaced by ODI and some web-based configuration tools.
Finally, as we put the article together around the time of ODTUG KScope’13, Rittman Mead and Accenture decided to donate the article fee to ODTUG’s chosen KScope’13 charity – Wine to Water, a 501(c)(3) non-profit aid organization focused on providing clean water to needy people around the world. If you were lucky enough to be at KScope’13 you’ll probably know all about Wine to Water, but if not, full details on the excellent work they do can be found on their website.
One of the most exciting aspects of the current information management revolution is business users getting faster access to wider sets of data, and therefore being able to understand business events and make decisions more effectively and quickly than before.
This is happening through a number of different tools, some more aligned to Big Data, others more related to business enablement. One of the catch-all terms used for this latter approach is Information Discovery, and as the name suggests, this is one area in which Oracle Endeca Information Discovery (OEID) is strong.
OEID has a new feature called Data Provisioning that allows users to provide their own data as a source for reporting. Traditionally, if business users wanted to report on a new data source they would need to contact the IT function and sponsor a project or work package to bring that data into the reporting system.
Furthermore, the nature of requirements in any Business Intelligence project means that the business user cannot be sure:
- if that data does actually bring value to the organisation, or
- how to model or represent the data to get the inherent value from it.
This often leads to a disconnect between business users and their IT department, accompanied by long development times, and has given rise to a number of desktop-based tools such as Qlikview, Spotfire and Tableau as business users demand more control over their data sources.
Data Provisioning puts control back in the hands of the business user. They can easily set up a new data source or add a data source to an existing report.
OEID’s model-less approach, combined with its easy-to-use, search-based interface, makes it an ideal tool for Data Provisioning. Users can either create simple reporting solutions or data mash-ups, exploring and then adding new sources as they analyse the data.
At the moment Oracle Endeca Information Discovery’s Data Provisioning capability is in its early stages. The current limitations are that the only data source you can provision is a spreadsheet, and that data mash-ups are not yet possible. Additionally, once applications are created it is not easy to refresh their data.
Future releases of Endeca are likely to address these limitations, with users being able to update data sources and integrate them on the fly into existing reports, giving business users the true capability of doing data mash-ups.
We will hopefully see the data sources expand with the introduction of JDBC, JSON, OBIEE and Hive some time in the near future.
If this sounds like something your organisation could benefit from, or you just want to try it out to get an idea of what this service offers, we have set up an Endeca data provisioning service. You can sign up for an account, enabling you to then upload a spreadsheet in order to start building your own app. Please note we can only keep apps for 30 days.
We have also posted a brief YouTube video of how to get started once you have registered for an account. More videos will follow over the next few weeks showing you how to further build out your application.
To create an account click here or for more information on Endeca please go to http://www.rittmanmead.com/endeca/ and http://www.rittmanmead.com/category/oracle-endeca/.