
BI & Warehousing

Database Links

Tim Dexter - Thu, 2014-09-25 13:39

Yeah, it’s been a while, moving on ...

I got a question a week back asking about how BI Publisher could handle dblinks. The customer currently has db links from DB1 to DB2 and uses them in their queries. Could BIP handle the syntax and pass it on to the database in its SQL or could it handle the link another way?

select e1.emp_name
, e1.emp_id
, e2.manager
from emps e1
, emps@db2 e2
where e1.manager_id = e2.id

Well, there is the obvious way to create the join in BIP. Just get rid of the db link altogether and create two separate database connections (db1 and db2). Write query A against db1 and query B against db2. Then just create a join between the two queries, simple.
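For illustration, the two split queries might look something like this (just a sketch, assuming the same EMPS structure exists on both connections):

-- Query A, against the db1 connection
select e1.emp_name
, e1.emp_id
, e1.manager_id
from emps e1

-- Query B, against the db2 connection
select e2.id
, e2.manager
from emps e2

In the BIP data model you would then link query A’s manager_id to query B’s id to recreate the original join.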

But, what if you wanted to use the dblink? Well, BIP would choke on the @db2 you would have in the SQL. Some silly security rules that, no, you cannot turn off even if you want to. But there are ways around it (the choking, not the security). Create an alias at the database level for emps@db2, so that BIP can parse the resulting query. Let’s assume I create an alias in the db for my db-linked table called ‘managers’. Now my query becomes:

select e1.emp_name
, e1.emp_id
, e2.manager
from emps e1
, managers e2
where e1.manager_id = e2.id

BIP will not choke; it will just pass the query through and the db can handle the linking for it.
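For reference, the ‘managers’ alias itself can be created on db1 as a synonym or a view over the linked table, something like this (a sketch, assuming the link is named db2):

-- option 1: a synonym pointing at the remote table
create synonym managers for emps@db2;

-- option 2: a view, handy if you also want to restrict or rename columns
create or replace view managers as
select id, manager
from emps@db2;

Either way, BIP only ever sees the local name ‘managers’ and the database resolves the link.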

That’s it, that’s all I’ve got on db links. See you in 6 months :)




Categories: BI & Warehousing

Introduction to Oracle BI Cloud Service : Building Dashboards & Reports

Rittman Mead Consulting - Thu, 2014-09-25 04:00

This week we’ve been looking at the new Oracle BI Cloud Service (BICS), the cloud version of OBIEE11g that went GA at the start of this week. Rittman Mead were part of the beta program for BICS and spent a couple of weeks building a sample BICS application to put the product through its paces, creating a reporting application for Salesforce.com that pulled in its data via the Salesforce REST API and staged it in the Oracle Database Schema Service that comes with BICS. Earlier in the week we looked at how data is uploaded or transferred into the accompanying database schema, and yesterday we looked at how the repository is created using the new thin-client data modeller. Today, we’ll look at how you create the dashboards and reports that your users will use, using the Analysis and Dashboard editors that are part of the service. If you’re arriving mid-way through the series, here are the links to the other posts in the series:

In fact, creating analyses and dashboards is the part of BICS that has changed least compared to the on-premise version. In keeping with the “self-service” theme for BICS there’s an introductory set of guidance notes when you first connect, and the dashboard and analysis editors are available as menu options on the Home page, along with a link to the Catalog view. From that point on though it’s standard Answers and Dashboards, with the normal four-tab editor view within Answers (the Analysis Editor) and the ability to create views, calculations, filters and so on. Anyone familiar with Answers will be at home within the cloud version, and there’s a new visualisation, the heat map view (shown in the final screenshot later in this article), that hints at other visualisations we’ll see featured first in the cloud version of OBIEE, which is expected to be updated more frequently than the on-premise version (one of the major selling points for customers looking to adopt new features as soon as possible with OBIEE). What’s missing from this environment are features like Agents and alerts, scorecards and BI Publisher, and the ability to create actions other than links to other web pages or catalog content. These are features that Oracle say they’ll add back in time as the underlying infrastructure for BICS builds out, and of course the whole UI is likely to go through a revision with the 12c release of OBIEE due sometime in 2015. Dashboards are also created in the same way as with on-premise OBIEE, with the same Dashboard Editor and access to features like conditional display of sections and support for presentation variables.


So, that wraps up our quick tour around the analysis and dashboard creation parts of Oracle BI Cloud Service; tomorrow, to finish up the series, we’ll look at the administration elements of BICS, including the new self-service application role provisioning, tools for administering and monitoring the instance, and ways of backing up and migrating content from one instance to another.

Categories: BI & Warehousing

Introduction to Oracle BI Cloud Service : Creating the Repository

Rittman Mead Consulting - Wed, 2014-09-24 04:00

Earlier in this series we’ve looked at the overall product proposition for Oracle BI Cloud Service (BICS), and at how you upload data to the Database Schema Service that comes with it. Today, we’re going to look at what’s involved in creating the BI Repository that holds the metadata about your logical tables, calculations and dimension hierarchies, using the new thin-client data modeller that, like the rest of BICS, runs entirely within your web browser. For anyone coming into the series mid-way, here are the links to the other posts in the series:

So anyone familiar with OBIEE will know that a central part of the product, and the part of it that makes it easy for users to work with their data, is the business-orientated semantic model that you create over your source data. Held within what’s called the “BI Repository” and made up of physical, logical and presentation layers, the semantic model turns what can be a complex set of source tables, joins and cross-application links into a simple-to-understand set of subject areas made up of fact tables and dimensions. Regular on-premise OBIEE semantic models can get pretty complex, with joins across different database types and logical tables that can source their data in several different ways (for example, at detail level from an Oracle data warehouse and at summary level from an Essbase cube), and to edit them you use a dedicated Windows development tool called BI Administration.

Allowing these complex data models, and having a dependency on a Windows-based development tool, poses two main issues for any consumer-style version of OBIEE. First, if the aim of the service is to attract customers who want to create their systems “self-service”, you’ve got to make the repository development process a lot simpler than it currently is – you can’t expect customers to go on a course or buy my excellent book when they just want to get a dashboard up and running with the minimum of fuss. You also can’t realistically expect them to install a Windows-only development tool back at the office, as most of the target customers won’t have admin privileges on their workstations, or they might even be using Macs or working out of a browser; and then, even if they get it installed, you’ll need to ensure there’s a network connection available to the BI Server in the cloud through their corporate firewall. Clearly, a browser-based repository creation tool was needed, ideally one that did some of the basic work automatically for the user and didn’t need hours or days of training to understand. Of course, the risk is that you create a repository editing tool that’s too “dumbed-down” for most developers to find useful, and we’ll consider that possibility later in the article.

So following the data upload process that we covered in yesterday’s post, we’re now in a position where we’ve got a number of tables sitting in Oracle Database Schema Service, and we’re ready to build a repository to report against them. To access the thin-client data modeller you click on the Model menu item on the BICS homepage, as shown in the screenshot below.


The modeller itself supports a simplified subset of what you can create with the full BI Administration tool. You’ve got a single source, the Oracle Database Schema Service, and a single business model. Business model tables have a logical table source as you’d normally expect, but just the one LTS is currently supported. Calculations within logical tables are supported, but they’re logical-level only (i.e. post-aggregation), with no support for physical-level (pre-aggregation) calculations at this point.
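To make the distinction concrete in plain SQL terms rather than BICS expressions (a sketch over a hypothetical SALES table):

-- logical-level (post-aggregation) calculation: the ratio is evaluated
-- after the measures have been summed to the query grain
select prod_category
, sum(revenue) / sum(quantity) as avg_selling_price
from sales
group by prod_category;

-- physical-level (pre-aggregation) calculation: the expression is evaluated
-- per source row before aggregation; this is the kind of calculation the
-- thin-client modeller doesn't currently support
select prod_category
, avg(revenue / quantity) as avg_row_level_price
from sales
group by prod_category;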


Level-based hierarchies within the business model are supported, including skip-level and ragged ones, and there’s support for time-series dimensions including their own editor.


Where possible, introspection is used when creating the business model components, with table joins and matching column names used to create candidate logical joins. Static and dynamic repository variables, along with session variables, are supported, with the front-end also supporting presentation and request variables – so all good there.


Under the covers, each tenant within BICS has their own RPD and their own catalog, and any edits to the repository that you perform are effectively “online” edits. To make edits to an existing model the developer therefore has to first “lock” the model, make their changes and add any new entries, validate them, and then either revert the model or publish the changes.


In the background BICS updates the RPD using the metadata web service API for the BI Server, with the RPD it creates being the same format as the ones we create on-premise, just with a smaller set of features supported through the thin-client admin tool.

As I mentioned in the first post in the series, each tenant install of BICS comes with two instances; one for development or pre-prod and one for production. To move a completed repository out of one environment into another, a new feature called a “BI Archive” is used: a snapshot of your BICS system that includes the repository, the catalog and any security objects you create. In this first version of BICS each import is total and overwrites everything that was in the instance beforehand, so there’s no incremental import or ability to selectively import just certain objects or certain reports into a new environment, meaning that you’ll lose any reports or dashboards created in production if you subsequently refresh it from dev/pre-prod – something to bear in mind.

One other thing to be aware of is that there’s no ability to create alias tables or opaque views in the thin-client modeller, so if you want to create additional copies of a dimension table for more than one dimension role, or you want to create a table using an arbitrary SELECT statement, you’ll need to go into ApEx and create a database view instead – not a huge imposition, as ApEx comes with tools for creating these pretty easily, but something that will lead to a more complex database model in time. The screenshot below shows one such database view then exposed through the thin-client modeller, where you can see the SELECT statement behind it (but not alter or amend it except through ApEx).
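For example, to role-play a single date dimension table as two separate dimensions, or to wrap an arbitrary SELECT statement, you could create views in ApEx SQL Workshop along these lines (just a sketch; the DATES and SALES table names are made up for illustration):

create or replace view order_dates as
select * from dates;

create or replace view ship_dates as
select * from dates;

create or replace view sales_by_month as
select prod_id
, trunc(sale_date, 'MM') as sale_month
, sum(amount) as amount
from sales
group by prod_id, trunc(sale_date, 'MM');

Each view then appears in the thin-client modeller as just another source table to model against.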


Finally, the thin-client modeller supports row-level and subject area security, using filters or object permissions that you either set up manually or create by reference to application roles granted to your users. We’ll look at what’s involved in setting up security and application roles in the final post in this series, where we look at administering your BICS instance.

So, that’s a high-level view of the repository creation process; in tomorrow’s post, we’ll look at what’s involved in creating reports and dashboards.

Categories: BI & Warehousing

Introduction to Oracle BI Cloud Service : Provisioning Data

Rittman Mead Consulting - Tue, 2014-09-23 04:00

In the first post in this series I looked at the new Oracle BI Cloud Service, which went GA over the weekend and which Rittman Mead have been using these past few weeks as part of a beta release. In that first post I looked at what BICS is and who it’s aimed at in this initial release, and went through the features at a high level; over the rest of the week I’ll be looking at the features in detail, starting today with the data upload and provisioning process. Here are the links to the rest of the series, with the items getting updated over the week as I post each entry in the series:

As I mentioned in that first post, “Introduction to Oracle BI Cloud Service : Product Overview”, BICS in this initial release is, to my mind, aimed at departmental use-cases where someone wants to quickly upload and analyse an offline dataset and share the results with other members of their team. BICS comes bundled with Oracle Database Schema Service and 50GB of storage, and OBIEE in this setup reports just against this data source, with no ability to reach out dynamically to other data sources or blend those sources with the main one in Oracle’s cloud database. It’s aimed really at users with a single source of data to work with, who’ve probably obtained it as an export from some other system and just want to be able to report against it, though as we’ll see later in this post it is possible to link to other SaaS sources with a bit of PL/SQL wizardry.

So the first task you’re likely to perform when working with BICS is to upload some data to report on. There are three main options for uploading data to BICS, two of which are browser-based and aimed at end-users, and one that uses SQL Developer and is aimed more at developers. BICS itself comes with a menu item on the home page for uploading data, and this is what we think users will use most as it’s built into the tool and fairly prominent.


Clicking on this menu item launches an ApEx application hosted in the Database Schema Service that comes with BICS, which allows you to upload and parse XLS and delimited file types and then store their contents in tables in the cloud database instance.


Oracle Database Schema Service also comes with Application Express (ApEx) as a front-end, and ApEx has similar tools for uploading datasets into the service, with additional features for creating views and PL/SQL packages to process and manipulate the data, something we used in our beta program example to connect to Salesforce.com and download data using their REST API. In theory you shouldn’t need to use these features much, but SIs and partners such as ourselves will no doubt use ApEx a lot to build out the loading infrastructure, data cleansing and other features that you might want for a packaged cloud app – so get your PL/SQL books out and brush up on ApEx development.
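To give a flavour of what that PL/SQL might look like (a rough sketch only, not the code from our beta application; the URL, header value and staging table below are made up for illustration), ApEx’s APEX_WEB_SERVICE package can call a REST endpoint and hand back the response for staging:

declare
  l_response clob;
begin
  -- authentication is glossed over here; in practice you would obtain an
  -- OAuth access token first and pass it in the Authorization header
  apex_web_service.g_request_headers(1).name  := 'Authorization';
  apex_web_service.g_request_headers(1).value := 'Bearer <access_token>';

  l_response := apex_web_service.make_rest_request(
                  p_url         => 'https://na1.salesforce.com/services/data/v31.0/query/?q=SELECT+Id,Name+FROM+Account',
                  p_http_method => 'GET');

  -- stage the raw response for later parsing into relational tables
  insert into sfdc_raw_responses (loaded_at, payload)
  values (systimestamp, l_response);
  commit;
end;
/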


The other way to get data into BICS is to use Oracle SQL Developer, which has a special Oracle Cloud connection type that allows you to view and work with database objects in the cloud as if they were regular local ones, and to upload data to the cloud in the form of “carts”. I’d imagine these options will get extended over time, either by tools or utilities Oracle release for this v1.0 BICS release, or by BICS eventually supporting the full Oracle Database Instance Service that’ll support regular SQL*Net connections from ETL tools.


So once you’ve got some data uploaded into Database Schema Services, you’ll end up with a set of source tables from which you can create your BI Repository. Check back tomorrow for more details on how BICS’s new thin-client data modeller works and how you create your business model against this cloud data source, including how the repository editing and checkout process works in this new potentially multi-user development environment.

 

Categories: BI & Warehousing

Introduction to Oracle BI Cloud Service : Product Overview

Rittman Mead Consulting - Mon, 2014-09-22 05:02

Long-term readers of this blog will probably know that I’m enthusiastic about the possibilities around running OBIEE in the cloud, and over the past few weeks Rittman Mead have been participating in the beta program for release one of Oracle’s Business Intelligence Cloud Service (BICS). BICS went GA over the weekend and is now live on Oracle’s public cloud site, so all of this week we’ll be running a special five-part series on what BI Cloud Service is, how it works and how you go about building a simple application. I’m also presenting on BICS and our beta program experiences at Oracle OpenWorld this week (Oracle BI in the Cloud: Getting Started, Deployment Scenarios, and Best Practices [CON2659], Monday Sep 29 10:15 AM – 11:00 AM, Moscone West 3014), so if you’re at the event and want to hear our thoughts, come along.

Over the next five days I’ll be covering the following topics, and I’ll update the list with hyperlinks once the articles are published:

So what is Oracle BI Cloud Service, and how does it relate to regular, on-premise OBIEE11g?

On the Oracle BI Cloud Service homepage, Oracle position the product as “Agile Business Intelligence in the Cloud for Everyone”, and there’s a couple of key points in this positioning that describe the product well.


The “agile” part refers to the fact that, being cloud-based, there’s no on-premise infrastructure to stand up, so you can get started a lot quicker than if you needed to procure servers, get the infrastructure installed, configure the software and get it accepted by the IT department. Agile also refers to the fact that you don’t need to purchase perpetual or one/two-year term licenses for the software, so you can use OBIEE for more tactical projects without having to worry about expensive long-term license deals. The final way that BICS is “agile” is in the simplified, user-focused tools that you use to build your cloud-based dashboards, with BICS adopting a more consumer-like user interface that in theory should mean you don’t have to attend a course to use it.

BICS is built around standard OBIEE 11g, with an updated user interface that’ll roll out across on-premise OBIEE in the next release, and the standard Analysis Editor, Dashboard Editor and repository (RPD) under the covers. Your initial OBIEE homepage is a modified version of the standard OBIEE homepage that lists standard developer functions down the left-hand side as a series of menu items, and the BI Administration tool is replaced with an online, thin-client repository editor that provides a subset of the full BI Administration tool functionality.


Customers who license BICS in this initial release get two environments (or instances) to work with; a pre-prod or development environment to create their applications in initially, and a production environment into which they deploy each release of their work. BICS is also bundled with Oracle Database Schema Service, a single-schema Oracle Database service with an ApEx front-end into which you store the data that BICS reports on, with ApEx and BICS itself having tools to upload data into it. This is, however, the only data source that BICS supports in version 1, so any data that your cloud-based dashboards report on has to be loaded into Database Schema Service before you can use it, and you have to use Oracle’s provided tools to do this as regular ETL tools won’t connect. We’ll get onto the data provisioning process in the next article in this five-part series.

BICS dashboards and reports currently support a subset of what’s available in the on-premise version. The Analysis Editor (“Answers”) is the same as on-premise OBIEE with the catalog view on the left-hand side, tabs for Results and so on, and the same set of view types (and in fact a new one, for heat maps). There’s currently no access to Agents, Scorecards, BI Publisher or any other Presentation Services features that require a database back-end though, or any Essbase database in the background as you get with on-premise OBIEE 11.1.1.7+.


What does become easier to deploy, though, is Oracle BI Mobile HD, as every BICS instance is by definition accessible over the internet. Last time I checked, the current version of BI Mobile HD on Apple’s App Store couldn’t yet connect, but I’m presuming an update will be out shortly to deal with BICS’s login process, which gets you to enter a BICS username and password along with an “identity domain” that specifies the particular company tenant ID that you use.


I’ll cover the thin-client data modeller later in this series in more detail, but at a high-level what this does is remove the need for you to download and install Oracle BI Administration to set up your BI Repository, something that would have been untenable for Oracle if they were serious about selling a cloud-based BI tool. The thin-client data modeller takes the most important (to casual users) features of BI Administration and makes them available in a browser-based environment, so that you can create simple repository models against a single data source and add features like dimension hierarchies, calculations, row-based and subject-area security using a point-and-click environment.


Features that are excluded in this initial release include the ability to define multiple logical table sources for a logical table, creating multiple business areas, creating calculations using physical (vs. logical) tables and so on, and there’s no way to upload on-premise RPDs to BICS, or download BICS ones to use on-premise, at this stage. What you do get with BICS is a new import and export format called a “BI Archive” which bundles up the RPD, the catalog and the security settings into a single archive file, and which you use to move applications between your two instances and to store backups of what you’ve created.

So what market is BICS aimed at in this initial release, and what can it be used for? I think it’s fair to say that in this initial release it’s not a drop-in replacement for on-premise OBIEE 11g, with only a subset of the on-premise features initially supported and some fairly major limitations, such as only being able to report against a single database source and no access to Agents, BI Publisher, Essbase and so on. But like the first iteration of the iPhone, or any consumer version of a previously enterprise-only tool, it’s trying to do a few things well and aiming at a particular market: in this case, departmental users who want to stand up an OBIEE environment quickly, maybe only for a limited amount of time, and who are familiar with OBIEE and would like to carry on using it. In some ways its target market is those OBIEE customers who might otherwise have used QlikView, Tableau or one of the new SaaS BI services such as GoodData, who most probably have some data exports in the form of Excel spreadsheets or CSV documents, want to upload them to a BI service without getting all of IT involved, and then share the results in the form of dashboards and reports with their team. Pricing-wise this appears to be who Oracle are aiming the service at (minimum 10 users, $3,500/month including 50GB of database storage), and with the product being so close to standard OBIEE functionality in terms of how you use it, it’s most likely to appeal to customers who already use OBIEE 11g in their organisation.

That said, I can see partners and ISVs adopting BICS to deliver cloud-based SaaS BI applications to their customers, either as stand-alone analysis apps or as add-ons to other SaaS apps that need reporting functionality. Oracle BI Cloud Service is part of the wider Oracle Platform-as-a-Service (PaaS) that includes Java (WebLogic), Database, Documents, Compute and Storage, so I can see companies such as ourselves developing reporting applications for the likes of Salesforce, Oracle Sales Cloud and other SaaS apps and then selling them, hosting included, through Oracle’s cloud platform; I’ll cover our initial work in this area, developing a reporting application for Salesforce.com data, later in this series.


Of course it’s been possible to deploy OBIEE in the cloud for some time, with this presentation of mine from BIWA 2014 covering the main options; indeed, Rittman Mead host OBIEE instances for customers in Amazon AWS and do most of our development and training in the cloud, including our exclusive “ExtremeBI in the Cloud” agile BI service. But BICS has two major advantages for customers looking to cloud-deploy OBIEE:

  • It’s entirely thin-client, with no need for local installs of BI Administration and so forth. There’s also no need to get involved with Enterprise Manager Fusion Middleware Control for adding users to application roles, defining application role mappings and so on
  • You can license it monthly, including data storage. No other on-premise license option lets you do this, with the shortest term license being one year

such that we’ll be offering it as an alternative to AWS hosting for our ExtremeBI product, for customers who in particular want the monthly license option.

So, an interesting start. As I said, I’ll be covering the detail of how BICS works over the next five days, starting with the data upload and provisioning process in tomorrow’s post – check back tomorrow for the next instalment.

Categories: BI & Warehousing

Getting The Users’ Trust – Part 2

Rittman Mead Consulting - Thu, 2014-09-18 04:35

Last time I wrote about the performance aspects of a BI system and how they could affect a user’s confidence. I concluded by mentioning that incorrect data might be generated by poorly coded ETL routines causing data loss or duplication. This time I am looking more at the quality of the data we load (or don’t load).

Back in the 1990s I worked with a 4.5 TB DWH that had a single source for fact and reference data; that is, the data loaded was self-consistent. These days we find the single-source DWH to be less and less the case; we are adding multiple data sources (both internal and external). Customers can now appear on CRM, ERP, social media, credit referencing, loyalty, and a whole host of other systems. This proliferation of data sources gives rise to a variety of issues we need to be at least aware of, and in reality should be actively managing. Some of these issues require us to work out processing rules within our data warehouse, such as what we do with fact data that arrives before its supporting reference data; I once had a system where our customer source could only be extracted once a week but purchases made by new customers would appear in our fact feed immediately after customer registration. Obviously, it is a business call on whether we publish facts that involve yet-to-be-loaded customers straight away or defer those loads until the customer has been processed in the DWH. In the case of my example we needed to auto-create new customers in the data warehouse with just the minimum of data, the surrogate key and the business key, and then do an SCD type 1 update when the full customer data profile is loaded the following week. Technical issues such as these are trivial; we formulate and agree a business rule to define our actions and we implement it in our ETL or, possibly, the reporting code. In my opinion the bigger issues to resolve are in Data Governance and Data Quality.
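A sketch of the sort of logic involved, with hypothetical table and column names; first the skeleton row created as the early-arriving fact is loaded, then the type 1 refresh when the weekly customer extract turns up:

-- auto-create a minimal customer row for an early-arriving fact
merge into dim_customer d
using (select :customer_bk as customer_bk from dual) s
on (d.customer_bk = s.customer_bk)
when not matched then
  insert (customer_sk, customer_bk)
  values (dim_customer_seq.nextval, s.customer_bk);

-- SCD type 1 update once the full profile arrives in the weekly feed
update dim_customer d
set (customer_name, segment, region) =
    (select s.customer_name, s.segment, s.region
     from stg_customer s
     where s.customer_bk = d.customer_bk)
where exists
    (select 1 from stg_customer s where s.customer_bk = d.customer_bk);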

Some people combine Data Quality and Governance together as a single topic and believe that a single solution will put everything right. However, to my mind, they are completely separate issues. Data quality is about the content of the data; governance is about ownership, provenance and business management of the data. Today, Data Governance is increasingly becoming a regulatory requirement, especially in finance.

Governance is much more than the data lineage tools we might access in ETL tools such as ODI and even OWB. ETL lineage is about source to target mappings; our ability to say that ‘bank branch name’ comes from this source attribute, travels through these multiple ODI mappings and finally updates that column in our BANK_BRANCH dimension table. In true Data Governance we probably do some or all of these:

  • Create a dictionary of approved business terms. This will define every attribute in business terms and also provide translations between geographic and business-unit centric ways of viewing data. In finance one division may talk about “customer”, another division will say “investor”, a third says “borrower”; in all three cases we are really talking about the same kind of object, a person. This dictionary should go down to the level of individual attributes and measures and include the type of data being held, such as text, currency or date-time; these data types are logical types and not physical types as seen on the actual sources. It is important that this dictionary is shared throughout the organisation and is “the true definition” of what is reported.
  • Define ownership (or stewardship) for the approved business data item.
  • Map business data sources and targets to our approved list of terms (at attribute level). It is very possible that some attributes will have multiple potential sources, in such cases we must specify which source will be the master source.
  • Define processes to keep our business data aligned.  
  • Define ownership of the sources for design change accountability (and, for static data such as ISO country codes, for content change accountability). Possibly integrate this into the change notification mechanism of the change process.
  • Define data release processes for approved external reference data.
  • Define data access and redaction rules for compliance purposes.
  • Build-in audit and control.
As you can see we are not, in the main, talking about data content; instead we are improving our description of the business data over what is already held in database data dictionaries and XSD files. This is still metadata and is almost certainly best managed in some kind of Data Governance application. One tool we might consider for this is Oracle Data Relationship Management from the Hyperion family of products. If we want to go more DIY it may be possible to leverage some of the data responsibility features of Oracle SQL Developer Data Modeller.

Whereas governance is about using the right data and having processes and people to guarantee it is correctly sourced, Data Quality is much finer in grain and looks at the actual content. Here a tool such as Oracle Enterprise Data Quality (OEDQ) is invaluable. By the way, I have noticed that OEDQ version 12 has recently been released; I have a blog on this in the pipeline.

I tend to divide Data Quality into three disciplines:

  • Data Profiling is always going to be our first step. Before we fix things we need to know what to fix! Generally, we try to profile a sample of the data and assess it column by column, row by row, to build a picture of the actual content. Typically we look at data ranges, nulls, the number of distinct values and, in the case of text data, the character types used (alpha, letter case, numeric, accents, punctuation etc) and regular expressions. From this we develop a plan to tackle quality; for example, on a data entry web page we may want to tighten processing rules to prevent certain “anticipated” errors, but more usually we come up with business rules to apply in our next stage (a simple SQL sketch of this kind of profiling follows this list).
  • Data Assessment. Here we test the full dataset against the developed rules to identify data that conforms or needs remedy. This remedy could be referring the data back to the source system owner for correction, providing a set of data fixes to apply to the source which can be validated and applied as a batch, creating processes to “fix” data on the source at initial data entry, or (and I would strongly advise against this for governance reasons) dynamically fixing it in an ETL process. The reason I am against fixing data downstream in ETL is that the data we report on in our Data Warehouse is not going to match the source, and this will be problematic when we try to validate whether our data warehouse fits reality.
  • Data de-duplication. This final discipline of our DQ process is the most difficult: identifying data that is potentially duplicated in our data feed. In data quality terms a duplicate is where two or more rows refer to what is probably (statistically) the same item; this is a lot fuzzier than an exact match in database terms. People miskey data, call centre staff mis-hear names, companies merge and combine data sets; I have even seen customers registering a new email address because they cannot be bothered to reset their password on an e-selling website. De-duplication is important to improve the accuracy of BI in general, and it is nigh-on mandatory for organisations that need to manage risk and prevent fraud.
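A very simple flavour of the column-by-column profiling mentioned above can be knocked up in plain SQL (a sketch against a hypothetical CUSTOMERS table; a dedicated tool such as OEDQ does this far more thoroughly):

select count(*)                  as row_count
,      count(email)              as non_null_emails
,      count(distinct email)     as distinct_emails
,      min(date_of_birth)        as min_dob
,      max(date_of_birth)        as max_dob
,      sum(case when regexp_like(email, '^[^@]+@[^@]+\.[^@]+$')
                then 0 else 1 end) as suspect_emails
from   customers;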
Data Quality is so important to trusted BI; without it we run the risk that our dimensions do not roll up correctly and that we under-report by separating our duplicates. However, being correct at the data warehouse is only part of the story; these corrections also need to be made on the sources, and to do that we have to implement processes and disciplines throughout the organisation.

For BI that users can trust we need to combine both data management disciplines. From governance we need to be sure that we are using the correct business terms for all attributes and that the data displayed in those attributes has made the correct journey from the original source. From quality we gain confidence that we are correctly aggregating data in our reporting.

At the end of the day we need to be right to be trusted.

 

 

Categories: BI & Warehousing

Getting The Users’ Trust – Part 1

Rittman Mead Consulting - Wed, 2014-09-17 03:02

Looking back over some of my truly ancient Rittman Mead blogs (so old in fact that they came with me when I joined the company soon after Rittman Mead was launched), I see recurrent themes on why people “do” BI and what makes for successful implementations. After all, why would an organisation wish to invest serious money in a project if it does not give value, either in terms of cost reduction or increased profitability through smart decisions? This requires technology to provide answers and a workforce that is both able to use this technology and has faith that the answers returned allow them to do their jobs better. Giving users this trust in the BI platform generally boils down to resolving three issues: ease of use of the reporting tool, speed of data return, and “accuracy” or validity of the response. These last two issues are a fundamental part of my work here at Rittman Mead and underpin all that I do in terms of BI architecture, performance, and data quality. Even today, as we adapt our BI systems to include Big Data and Advanced Analytics, I follow the same sound approaches to ensure usable, reliable data and the ability to analyse it in a reasonable time.

Storage is cheap, so don’t aggregate away your knowledge. If my raw data feed is sales by item by store by customer by day and I only store it in my data warehouse as sales by month by state, I can’t go back to do any analysis on my customers, my stores or my products. Remember that the UNGROUP BY only existed in my April Fools’ post. Where you choose to store your ‘unaggregated’ data may well be different these days, with Hadoop and schema-on-read paradigms often being a sensible approach. Mark Rittman has been looking at architectures where both the traditional DWH and Big Data happily co-exist.

When improving performance I tend to avoid tuning specific queries; instead I aim to make frequent access patterns work well. Tuning individual queries is almost always not a sustainable approach in BI; this week’s hot, ‘we need the answer immediately’ query may have no business focus next week. Indexes that we create to make a specific query fly may have no positive effect on other queries; indeed, indexes may degrade other aspects of BI performance, such as increasing data load times, and have subtle effects such as changing a query plan cost so that groups of materialized views are no longer candidates for query rewrite (this is especially true when you use nested views and the base view is no longer accessed).

My favoured performance improvement techniques are: correctly placing the data, be it clustering, partitioning, compressing, table pinning, in-memory or whatever, and making sure that the query optimiser knows all about the nature of the data; again and again, “right” optimiser information is key to good performance. Right is not just about running DBMS_STATS.gather_XXX over tables or schemas every now and then; it is also about telling the optimiser about relationships between data items. Constraints describe the data, for example which columns allow NULL values and which columns are part of parent-child relationships (foreign keys). Extended table statistics can help describe relationships between columns in a single table; for example, in a product dimension table the product sub-category and product category columns will have an interdependence, and without that knowledge cardinality estimates can be very wrong and favour nested-loop-style plans that could perform very poorly on large data sets.
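For example, a column-group extended statistic on those two product columns can be created and populated like this (a sketch; the PRODUCTS table and its column names are illustrative):

-- create a column group so the optimiser knows the columns are correlated
select dbms_stats.create_extended_stats(
         ownname   => user,
         tabname   => 'PRODUCTS',
         extension => '(PROD_CATEGORY, PROD_SUBCATEGORY)') as extension_name
from dual;

-- re-gather statistics so the new column group gets populated
exec dbms_stats.gather_table_stats(user, 'PRODUCTS', method_opt => 'for all columns size auto')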

Sometimes we will need to create aggregates to answer queries quickly; I tend to build ‘generic’ aggregates, those that can be used by many queries. Often I find that although data is loaded frequently, even near-real-time, many business users wish to look at larger time windows such as week, month or quarter; in practice I see little need for day-level aggregates over the whole data warehouse timespan, although there will always be specific cases that require day-level summaries. If I build summary tables or use Materialized Views I aim to make tables that are at least 80% smaller than the base table and to avoid aggregates that partially roll up many dimensional hierarchies; customer category by product category by store region by month would probably not be the ideal aggregate for most real-user queries. That said, Oracle does allow us to use fancy grouping semantics in the building of aggregates (grouping sets, group by rollup and group by cube). The in-database Oracle OLAP cube functionality is still alive and well (and was given a performance boost in Oracle 12c); it may be more appropriate to aggregate in a cube (or its relational look-alike) rather than individual summaries.
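As an illustration of those grouping semantics, a single pass over the fact table can populate several aggregate levels at once (a sketch; the SALES star schema here is hypothetical):

create table sales_summary as
select t.calendar_month
,      st.store_region
,      p.prod_category
,      grouping_id(t.calendar_month, st.store_region, p.prod_category) as agg_level
,      sum(s.sales_amount) as sales_amount
from   sales s
join   times t    on s.time_id  = t.time_id
join   stores st  on s.store_id = st.store_id
join   products p on s.prod_id  = p.prod_id
group  by grouping sets
       ( (t.calendar_month, st.store_region)
       , (t.calendar_month, p.prod_category)
       , (t.calendar_month) );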

Getting the wrong results quickly is no good, we must be sure that the results we display are correct. As professional developers we test to prove that we are not losing or gaining data through incorrect joins and filters, but ETL coding is often the smallest factor in “incorrect results” and this brings me to part 2, Data Quality.

Categories: BI & Warehousing

Next Generation Outline Extractor Webcast - Oct 8th!

Tim Tow - Fri, 2014-09-12 16:07
I am doing a repeat of my Kscope14 'Best Speaker' award-winning presentation as part of the ODTUG webcast series. Here is the official announcement from ODTUG:
Wednesday, October 8, 2014 12:00 PM - 1:00 PM EDT

Next Generation Essbase Outline Extractor Tips and Tricks
Tim Tow, Applied OLAP
The Next Generation Outline Extractor is the follow-up to the classic OlapUnderground Essbase Outline Extractor used by thousands of Essbase customers. This session, which was the highest-rated session at Kscope14 in Seattle, explores some of the new capabilities of the Next Generation Outline Extractor, including command line operations and exports directly to a relational database. Attend this session to learn how to leverage this free utility in your company. Make sure you sign up to join me on October 8th; you can register here!
Categories: BI & Warehousing

Rittman Mead/Oracle Data Integration Speakeasy @ Oracle Open World

Rittman Mead Consulting - Thu, 2014-09-11 10:59

If you are attending Oracle Open World this year and fancy a bit of a different experience, come and join Rittman Mead and Oracle’s Data Integration teams for drinks and networking at 7pm on Tuesday 30th September at the Local Edition speakeasy on Market Street.

We will be providing a couple of hours of free drinks with the opportunity to quiz our leading data integration experts and Oracle’s data integration team about any aspect of the data integration toolset, architecture and our innovative implementation approaches, and to relax and kick back at the end of a long day. So whether you want to know about how ODI can facilitate your big data strategy, or implement data quality and data governance across your enterprise data architecture, please come along.

The Local Edition is located at 691 Market St, San Francisco, CA and the event runs from 7pm to 9pm. Please register here.

For further information on this event and the sessions we are presenting at Oracle Open World contact us at info@rittmanmead.com.

Categories: BI & Warehousing

Using Oracle GoldenGate for Trickle-Feeding RDBMS Transactions into Hive and HDFS

Rittman Mead Consulting - Wed, 2014-09-10 15:13

A few months ago I wrote a post on the blog around using Apache Flume to trickle-feed log data into HDFS and Hive, using the Rittman Mead website as the source for the log entries. Flume is a good technology to use for this type of capture requirement as it captures log entries, HTTP calls, JMS queue entries and other “event” sources easily, has a resilient architecture and integrates well with HDFS and Hive. But what if the source you want to capture activity for is a relational database, for example Oracle Database 12c? With Flume you’d need to spool the database transactions to file, whereas what you really want is a way to directly connect to the database engine and capture the changes from source.

Which is exactly what Oracle GoldenGate does, and what most people don’t realise is that GoldenGate can also load data into HDFS and Hive, as well as the usual database targets. Hive and HDFS aren’t fully-supported targets yet, but you can use the Oracle GoldenGate for Java adapter to act as the handler process and then land the data in HDFS files or Hive tables on your target Hadoop platform. My Oracle Support has two tech notes, “Integrating OGG Adapter with Hive (Doc ID 1586188.1)” and “Integrating OGG Adapter with HDFS (Doc ID 1586210.1)”, that give example implementations of the Java adapters you’d need for these two target types, with the overall end-to-end process for landing Hive data looking like the diagram below (the HDFS version just swaps the Hive handler for an HDFS one at the handler adapter stage).


This is also a good example of the sorts of technology we’d use to implement the “data factory” concept within the new Oracle Information Management Reference Architecture, the part of the architecture that moves data between the Hadoop and NoSQL-based Data Reservoir and the relationally-stored enterprise information store; in this case, trickle-feeding transactional data from the Oracle database into Hadoop, perhaps to archive it at lower cost than we could in an Oracle database, or to add transaction activity data to a Hadoop-based application.


So I asked my colleague Nelio Guimaraes to set up a GoldenGate capture process on our Cloudera CDH5.1 Hadoop cluster, using GoldenGate 12.1.2.0.0 for our source Oracle 11gR2 database and Oracle GoldenGate for Java, downloadable separately from edelivery.oracle.com under Oracle Fusion Middleware > Oracle GoldenGate Application Adapters 11.2.1.0.0 for JMS and Flat File Media Pack. In our example, we’re going to capture activity on the SCOTT.DEPT table in the Oracle database, and then perform the following steps to set up replication from it into a replica Hive table:

  1. Create a table in Hive that corresponds to the table in Oracle database.
  2. Create a table in the Oracle database and prepare the table for replication.
  3. Configure the Oracle GoldenGate Capture to extract transactions from the Oracle database and create the trail file.
  4. Configure the Oracle GoldenGate Pump to read the trail and invoke the custom adapter
  5. Configure the property file for the Hive handler
  6. Code, Compile and package the custom Hive handler
  7. Execute a test. 
Setting up the Oracle Database Source Capture

Let’s go into the Oracle database first, check the table definition, and then connect to Hadoop to create a Hive table of the same column definition.

[oracle@centraldb11gr2 ~]$ sqlplus scott/tiger
SQL*Plus: Release 11.2.0.3.0 Production on Thu Sep 11 01:08:49 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
SQL> describe DEPT
 Name Null? Type
 ----------------------------------------- -------- ----------------------------
 DEPTNO NOT NULL NUMBER(2)
 DNAME VARCHAR2(14)
 LOC VARCHAR2(13)
SQL> exit
...
[oracle@centraldb11gr2 ~]$ ssh oracle@cdh51-node1
Last login: Sun Sep 7 16:11:36 2014 from officeimac.rittmandev.com
[oracle@cdh51-node1 ~]$ hive
...
create external table dept
(
 DEPTNO string, 
 DNAME string, 
 LOC string
) row format delimited fields terminated by '\;' stored as textfile
location '/user/hive/warehouse/department'; 
exit
...

Then I install Oracle Golden Gate 12.1.2 on the source Oracle database, just as you’d do for any Golden Gate install, and make sure supplemental logging is enabled for the table I’m looking to capture. Then I go into the ggsci Golden Gate command-line utility, to first register the user it’ll be connecting as, and what table it needs to capture activity for.

[oracle@centraldb11gr2 12.1.2]$ cd /u01/app/oracle/product/ggs/12.1.2/
[oracle@centraldb11gr2 12.1.2]$ ./ggsci
$ggsci> DBLOGIN USERID sys@ctrl11g, PASSWORD password sysdba
$ggsci> ADD TRANDATA SCOTT.DEPT COLS(DEPTNO), NOKEY
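As an aside, supplemental logging at the database level can be checked and, if needed, switched on from SQL*Plus with something like the following (a sketch; the table-level logging itself is what the ADD TRANDATA command above takes care of):

SQL> select supplemental_log_data_min from v$database;
SQL> alter database add supplemental log data;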

GoldenGate uses a number of components to replicate data from source to targets, as shown in the diagram below.

For our purposes, though, there are just three that we need to configure: the Extract component, which captures table activity on the source; the Pump process, which moves data (or the “trail”) from the source database to the Hadoop cluster; and the Replicat component, which takes that activity and applies it to the target tables. In our example, the extract and pump processes will be as normal, but we need to create a custom “handler” for the target Hive table that uses the GoldenGate Java API and the Hadoop FS Java API.

The tool we use to set up the extract and capture process is ggsci, the command-line Golden Gate Software Command Interface. I’ll use it first to set up the Manager process that runs on both source and target servers, giving it a port number and connection details into the source Oracle database.

$ggsci> edit params mgr
PORT 7809
USERID sys@ctrl11g, PASSWORD password sysdba
PURGEOLDEXTRACTS /u01/app/oracle/product/ggs/12.1.2/dirdat/*, USECHECKPOINTS

Then I create two configuration files, one for the extract process and one for the pump process, and then use those to start those two processes.

$ggsci> edit params ehive
EXTRACT ehive
USERID sys@ctrl11g, PASSWORD password sysdba
EXTTRAIL /u01/app/oracle/product/ggs/12.1.2/dirdat/et, FORMAT RELEASE 11.2
TABLE SCOTT.DEPT;
$ggsci> edit params phive
EXTRACT phive
RMTHOST cdh51-node1.rittmandev.com, MGRPORT 7809
RMTTRAIL /u01/app/oracle/product/ggs/11.2.1/dirdat/rt, FORMAT RELEASE 11.2
PASSTHRU
TABLE SCOTT.DEPT;
$ggsci> ADD EXTRACT ehive, TRANLOG, BEGIN NOW
$ggsci> ADD EXTTRAIL /u01/app/oracle/product/ggs/12.1.2/dirdat/et, EXTRACT ehive
$ggsci> ADD EXTRACT phive, EXTTRAILSOURCE /u01/app/oracle/product/ggs/12.1.2/dirdat/et
$ggsci> ADD RMTTRAIL /u01/app/oracle/product/ggs/11.2.1/dirdat/rt, EXTRACT phive

As the Java event handler on the target Hadoop platform won’t be able to ordinarily get table metadata for the source Oracle database, we’ll use the defgen utility on the source platform to create the parameter file that the replicat process will need.

$ggsci> edit params dept
defsfile ./dirsql/DEPT.sql
USERID ggsrc@ctrl11g, PASSWORD ggsrc
TABLE SCOTT.DEPT;
./defgen paramfile ./dirprm/dept.prm NOEXTATTR

Note that NOEXTATTR means no extra attributes; because the version on the target is a generic and minimal one, a definition file with extra attributes won’t be interpreted. This DEPT.sql file then needs to be copied across to the target Hadoop platform where you’ve installed Oracle GoldenGate for Java, into the /dirsql folder within the GoldenGate install.

[oracle@centraldb11gr2 12.1.2]$ ssh oracle@cdh51-node1
oracle@cdh51-node1's password: 
Last login: Wed Sep 10 17:05:49 2014 from centraldb11gr2.rittmandev.com
[oracle@cdh51-node1 ~]$ cd /u01/app/oracle/product/ggs/11.2.1/
[oracle@cdh51-node1 11.2.1]$ pwd
/u01/app/oracle/product/ggs/11.2.1
[oracle@cdh51-node1 11.2.1]$ ls dirsql/
DEPT.sql

Then, going back to the source Oracle database platform, we’ll start the GoldenGate Manager process, and then the extract and pump processes.

[oracle@cdh51-node1 11.2.1]$ ssh oracle@centraldb11gr2
oracle@centraldb11gr2's password: 
Last login: Thu Sep 11 01:08:18 2014 from bdanode1.rittmandev.com
GGSCI (centraldb11gr2.rittmandev.com) 7> start mgr
Manager started.
 
GGSCI (centraldb11gr2.rittmandev.com) 8> start ehive
 
Sending START request to MANAGER ...
EXTRACT EHIVE starting
 
GGSCI (centraldb11gr2.rittmandev.com) 9> start phive
 
Sending START request to MANAGER ...
EXTRACT PHIVE starting

Setting up the Hadoop / Hive Replicat Process

Setting up the Hadoop side involves a couple of similar steps to the source capture side; first we configure the parameters for the Manager process, then configure the extract process that will pull table activity off of the trail file, sent over by the pump process on the source Oracle database.

[oracle@centraldb11gr2 12.1.2]$ ssh oracle@cdh51-node1
oracle@cdh51-node1's password: 
Last login: Wed Sep 10 21:09:38 2014 from centraldb11gr2.rittmandev.com
[oracle@cdh51-node1 ~]$ cd /u01/app/oracle/product/ggs/11.2.1/
[oracle@cdh51-node1 11.2.1]$ ./ggsci
$ggsci> edit params mgr
PORT 7809
PURGEOLDEXTRACTS /u01/app/oracle/product/ggs/11.2.1/dirdat/*, usecheckpoints, minkeepdays 3
$ggsci> add extract tphive, exttrailsource /u01/app/oracle/product/ggs/11.2.1/dirdat/rt
$ggsci> edit params tphive
EXTRACT tphive
SOURCEDEFS ./dirsql/DEPT.sql
CUserExit ./libggjava_ue.so CUSEREXIT PassThru IncludeUpdateBefores
GETUPDATEBEFORES
TABLE SCOTT.DEPT;

Now it’s time to create the Java handler that will write the trail data to the HDFS files and Hive table. The My Oracle Support Doc ID 1586188.1 that I mentioned at the start of the article has a sample Java program called SampleHandlerHive.java that writes incoming transactions into an HDFS file within the Hive directory, and also writes them to a file on the local filesystem. To get this working on our Hadoop system, we created a new Java source code file from the content in SampleHandlerHive.java, updated the path passed to hadoopConf.addResource to point to the correct location of core-site.xml, hdfs-site.xml and mapred-site.xml, and then compiled it as follows:

export CLASSPATH=/u01/app/oracle/product/ggs/11.2.1/ggjava/ggjava.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/client/*
javac -d . SampleHandlerHive.java

Successfully executing the above command created SampleHandlerHive.class under /u01/app/oracle/product/ggs/11.2.1/dirprm/com/mycompany/bigdata. To create the JAR file that the GoldenGate for Java adapter will need, I then change directory to the /dirprm directory under the GoldenGate install and run the following commands:

jar cvf myhivehandler.jar com
chmod 755 myhivehandler.jar

I also need to create a properties file for this JAR to use, in the same /dirprm directory. Amongst other things, this properties file tells the GoldenGate for Java adapter where in HDFS to write the data (the location where the Hive table keeps its data files), and it also references any other JAR files from the Hadoop distribution that the handler will need access to.

[oracle@cdh51-node1 dirprm]$ cat tphive.properties 
#Adapter Logging parameters. 
gg.log=log4j
gg.log.level=info
 
#Adapter Check pointing  parameters
goldengate.userexit.chkptprefix=HIVECHKP_
goldengate.userexit.nochkpt=true
 
# Java User Exit Property
goldengate.userexit.writers=jvm
jvm.bootoptions=-Xms64m -Xmx512M -Djava.class.path=/u01/app/oracle/product/ggs/11.2.1/ggjava/ggjava.jar:/u01/app/oracle/product/ggs/11.2.1/dirprm:/u01/app/oracle/product/ggs/11.2.1/dirprm/myhivehandler.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/client/hadoop-common-2.3.0-cdh5.1.0.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/lib/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/etc/hadoop:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/etc/hadoop/conf.dist:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/hadoop-auth-2.3.0-cdh5.1.0.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/client/hadoop-hdfs-2.3.0-cdh5.1.0.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/client/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/client/protobuf-java-2.5.0.jar
 
#Properties for reporting statistics
# Minimum number of {records, seconds} before generating a report
jvm.stats.time=3600
jvm.stats.numrecs=5000
jvm.stats.display=TRUE
jvm.stats.full=TRUE
 
#Hive Handler.  
gg.handlerlist=hivehandler
gg.handler.hivehandler.type=com.mycompany.bigdata.SampleHandlerHive
gg.handler.hivehandler.HDFSFileName=/user/hive/warehouse/department/dep_data
gg.handler.hivehandler.RegularFileName=cinfo_hive.txt
gg.handler.hivehandler.RecordDelimiter=;
gg.handler.hivehandler.mode=tx

Now, the final step on the Hadoop side is to start its GoldenGate Manager process, and then start the Replicat/apply process.

GGSCI (cdh51-node1.rittmandev.com) 5> start mgr
 
Manager started. 
 
GGSCI (cdh51-node1.rittmandev.com) 6> start tphive
 
Sending START request to MANAGER ...
EXTRACT TPHIVE starting

Testing it All Out

So now I’ve got the extract and pump processes running on the Oracle Database side, and the apply process running on the Hadoop side, let’s do a quick test and see if it’s working. I’ll start by looking at what data is in each table at the beginning.

SQL> select * from dept;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON
        50 TESTE          PORTO
        60 NELIO          STS
        70 RAQUEL         AVES

7 rows selected.

Over on the Hadoop side, there’s just one row in the Hive table:

hive> select * from customer;

OK
80  MARCIA     ST

Now I’ll go back to Oracle and insert a new row in the DEPT table:

SQL> insert into dept (deptno, dname, loc)
  2  values (75, 'EXEC','BRIGHTON'); 

1 row created. 
SQL> commit; 

Commit complete.

And, going back over to Hadoop, I can see Golden Gate has added that record to the Hive table, by the Golden Gate for Java adapter writing the transaction to the underlying HDFS file.

hive> select * from customer;

OK
80  MARCIA     ST
75  EXEC       BRIGHTON

So there you have it: GoldenGate replicating Oracle RDBMS transactions into HDFS and Hive, to complement Apache Flume’s ability to replicate log and event data into Hadoop. Moreover, as Michael Rainey explained in this three-part blog series, GoldenGate is closely integrated into the new 12c release of Oracle Data Integrator, making it even easier to manage GoldenGate replication processes within your overall data loading project, and giving Hadoop developers and GoldenGate users access to the full set of load orchestration and data quality features in that product rather than having to rely on home-grown scripting, or Oozie.

Categories: BI & Warehousing

OBIEE SampleApp in The Cloud: Importing VirtualBox Machines to AWS EC2

Rittman Mead Consulting - Wed, 2014-09-10 01:40

Virtualisation has revolutionised how we work as developers. A decade ago, using new software would mean trying to find room on a real tin server to install it, hoping it worked, and if it didn’t, picking apart the pieces probably leaving the server in a worse state than it was to begin with. Nowadays, we can just launch a virtual machine to give a clean environment and if it doesn’t work – trash it and start again.

The sting in the tail of virtualisation is that full-blown VMs are heavy: for disk we need several GB just for a blank OS, and dozens of GB if you’re talking about a software stack such as Fusion Middleware (FMW), and the host machine needs to have the RAM and CPU to support it all too. Technologies such as Linux Containers go some way to making things lighter by abstracting out a chunk of the OS, but this isn’t something that’s reached the common desktop yet.

So whilst VMs are awesome, it’s not always practical to maintain a library of all of them on your local laptop (even 1TB drives fill up pretty quickly), nor will your laptop have the grunt to run more than one or two VMs at most. VMs like this are also local to your laptop or server – but wouldn’t it be neat if you could duplicate that VM and make a server based on it instantly available to anyone in the world with an internet connection? And that’s where The Cloud comes in, because it enables us to store as much data as we can eat (and pay for), and provision “hardware” at the click of a button for just as long as we need it, accessible from anywhere.

Here at Rittman Mead we make extensive use of Amazon Web Services (AWS) and their Elastic Compute Cloud (EC2) offering. Our website runs on it, our training servers run on it, and it scales just as we need it to. A class of 3 students is as easy to provision for as a class of 24 – no hunting around for spare servers or laptops, no hardware sat idle in a cupboard as spare capacity “just in case”.

One of the challenges that we’ve faced up until now is that all servers have had to be built from scratch in the cloud. Obviously we work with development VMs on local machines too, so wouldn’t it be nice if we could build VMs locally and then push them to the cloud? Well, now we can. Amazon offer a route to import virtual machines, and in this article I’m going to show how that works. I’ll use the superb SampleApp v406 VM that Oracle provide, because this is a great real-life example of a VM that is so useful, but many developers can find too memory-intensive to be able to run on their local machines all the time.

This tutorial is based on exporting a Linux guest VM from a Linux host server. A Windows guest probably behaves differently, but a Mac or Windows host should work fine since VirtualBox is supported on both. The specifics are based on SampleApp, but the process should be broadly the same for all VMs. 

Obtain the VM

We’re going to use SampleApp, which can be downloaded from Oracle.

  1. Download the six-part archive from http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html
  2. Verify the md5 checksums against those published on the download page:
    [oracle@asgard sampleapp406]$ ll
    total 30490752
    -rw-r--r-- 1 oracle oinstall 5242880000 Sep  9 01:33 SampleAppv406.zip.001
    -rw-r--r-- 1 oracle oinstall 5242880000 Sep  9 01:30 SampleAppv406.zip.002
    -rw-r--r-- 1 oracle oinstall 5242880000 Sep  9 02:03 SampleAppv406.zip.003
    -rw-r--r-- 1 oracle oinstall 5242880000 Sep  9 02:34 SampleAppv406.zip.004
    -rw-r--r-- 1 oracle oinstall 5242880000 Sep  9 02:19 SampleAppv406.zip.005
    -rw-r--r-- 1 oracle oinstall 4977591522 Sep  9 02:53 SampleAppv406.zip.006
    [oracle@asgard sampleapp406]$ md5sum *
    2b9e11f69ada5f889088dd74b5229322  SampleAppv406.zip.001
    f8a1a5ae6162b20b3e9c6c888698c071  SampleAppv406.zip.002
    68438cfea87e8d3a2e2f15ff00dadf12  SampleAppv406.zip.003
    b71d9ace4f75951198fc8197da1cfe62  SampleAppv406.zip.004
    4f1a5389c9e0addc19dce6bbc759ec20  SampleAppv406.zip.005
    2c430f87e22ff9718d5528247eff2da4  SampleAppv406.zip.006
  3. Unpack the archive using 7zip — the instructions for SampleApp are very clear that you must use 7zip, and not another archive tool such as winzip.
    [oracle@asgard sampleapp406]$ time 7za x SampleAppv406.zip.001

    7-Zip (A) [64] 9.20  Copyright (c) 1999-2010 Igor Pavlov  2010-11-18
    p7zip Version 9.20 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,80 CPUs)
    
    Processing archive: SampleAppv406.zip.001
    
    Extracting SampleAppv406Appliance
    Extracting SampleAppv406Appliance/SampleAppv406ga-disk1.vmdk
    Extracting SampleAppv406Appliance/SampleAppv406ga.ovf
    
    Everything is Ok
    
    Folders: 1
    Files: 2
    Size: 31191990916
    Compressed: 5242880000
    
    real 1m53.685s
    user 0m16.562s
    sys 1m15.578s
  4. Because we need to change a couple of things on the VM first (see below), we’ll have to import the VM to VirtualBox so that we can boot it up and make these changes.You can import using the VirtualBox GUI, or as I prefer, the VBoxManage command line interface. I like to time all these things (just because, numbers), so stick a time command on the front:
    time VBoxManage import --vsys 0 --eula accept SampleAppv406Appliance/SampleAppv406ga.ovf

    This took 12 minutes or so, but that was on a high-spec system, so YMMV.
    [...]
    0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
    Successfully imported the appliance.
    
    real    12m15.434s
    user    0m1.674s
    sys     0m2.807s
Preparing the VM

Importing Linux VMs to Amazon EC2 will only work if the kernel is supported, which according to an AWS blog post includes Red Hat Enterprise Linux 5.1 – 6.5. Whilst SampleApp v406 is built on Oracle Linux 6.5 (which isn’t listed by AWS as supported), we have the option of telling the VM to use a kernel that is Red Hat Enterprise Linux compatible (instead of the default Unbreakable Enterprise Kernel – UEK). There are some other pre-requisites that you need to check if you’re trying this with your own VM, including a network adaptor configured to use DHCP. The aforementioned blog post has details.

  1. Boot the VirtualBox VM, which should land you straight in the desktop environment, logged in as the oracle user.
  2. We need to modify a file as root (superuser). Here’s how to do it graphically, or use vi if you’re a real programmer:
    1. Open a Terminal window from the toolbar at the top of the screen
    2. Enter
      sudo gedit /etc/grub.conf

      The sudo bit is important, because it tells Linux to run the command as root. (I’m on an xkcd-roll here: 1, 2)

    3. In the text editor that opens, you will see a header to the file and then a set of repeating sections beginning with title. These are the available kernels that the machine can run under. The default is 3, which is zero-based, so it’s the fourth title section. Note that the kernel version details include uek which stands for Unbreakable Enterprise Kernel – and is not going to work on EC2.
    4. Change the default to 0, so that we’ll instead boot to a Red Hat Compatible Kernel, which will work on EC2 (if you prefer a non-interactive route, there’s a sed sketch just after this list)
    5. Save the file
  3. Optional steps:
    1. Whilst you’ve got the server running, add your SSH key to the image so that you can connect to it easily once it is up on EC2. For more information about SSH keys, see my previous blog post here, and a step-by-step for doing it on SampleApp here.
    2. Disable non-SSH key logins (in /etc/ssh/sshd_config, set PasswordAuthentication no and PubkeyAuthentication yes), so that your server once on EC2 is less vulnerable to attack. Particularly important if you’re using the stock image with Admin123 as the root password.
    3. Set up screen, and OBIEE and the database as a Linux service, both covered in my article here.
  4. Shutdown the instance by entering this at a Terminal window:

    sudo shutdown -h now
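
If you prefer to make the grub change non-interactively (the grub.conf edit described in step 2 above), a sed one-liner does the same job. This is just a sketch and assumes the file currently reads default=3, so take a backup first and check the result:

sudo cp /etc/grub.conf /etc/grub.conf.bak
sudo sed -i 's/^default=3/default=0/' /etc/grub.conf
# confirm the default now points at a Red Hat compatible (non-uek) title entry
grep -E '^default|^title' /etc/grub.conf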

Export the VirtualBox VM to Amazon EC2

Now we’re ready to really get going. The first step is to export the VirtualBox VM to a format that Amazon EC2 can work with. Whilst they don’t explicitly support VMs from VirtualBox, they do support the VMDK format – which VirtualBox can create. You can do the export from the graphical interface, or as before, from the command line:

time VBoxManage export "OBIEE SampleApp v406" --output OBIEE-SampleApp-v406.ovf

0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Successfully exported 1 machine(s).

real    56m51.426s
user    0m6.971s
sys     0m12.162s

If you compare the result of this to what we downloaded from Oracle it looks pretty similar – an OVF file and a VMDK file. The only difference is that the VMDK file is updated with the changes we made above, including the modified kernel settings which are crucial for the success of the next step.

[oracle@asgard sampleapp406]$ ls -lh
total 59G
-rw------- 1 oracle oinstall  30G Sep  9 10:55 OBIEE-SampleApp-v406-disk1.vmdk
-rw------- 1 oracle oinstall  15K Sep  9 09:58 OBIEE-SampleApp-v406.ovf

We’re ready now to get all cloudy. For this, you’ll need:

  1. An AWS account
    1. You’ll also need your AWS account’s Access Key and Secret Key
  2. AWS EC2 commandline tools installed, along with a Java Runtime Environment (JRE) 1.7 or greater:

    wget http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
    sudo mkdir /usr/local/ec2
    sudo unzip ec2-api-tools.zip -d /usr/local/ec2
    # You might need to fiddle with the following paths and version numbers: 
    sudo yum install -y java-1.7.0-openjdk.x86_64
    # quote the heredoc delimiter so $PATH and $EC2_HOME are expanded at login time, not now
    cat >> ~/.bash_profile <<'EOF'
    export JAVA_HOME="/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.65.x86_64/jre"
    export EC2_HOME=/usr/local/ec2/ec2-api-tools-1.7.1.1/
    export PATH=$PATH:$EC2_HOME/bin
    EOF

  3. Set your credentials as environment variables:
    export AWS_ACCESS_KEY=xxxxxxxxxxxxxx
    export AWS_SECRET_KEY=xxxxxxxxxxxxxxxxxxxxxx
  4. Ideally a nice fat pipe to upload the VM file over, because at 30GB it is not trivial (not in 2014, anyway)

What’s going to happen now is we use an EC2 command line tool to upload our VMDK (virtual disk) file to Amazon S3 (a storage platform), from where it gets converted into an EBS volume (Elastic Block Store, i.e. an EC2 virtual disk), and from there attached to a new EC2 instance (a “server”/”VM”).

Before we can do the upload we need an S3 “bucket” to put the disk image in that we’re uploading. You can create one from https://console.aws.amazon.com/s3/. In this example, I’ve got one called rmc-vms – but you’ll need your own.
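
As an aside, if you have the newer AWS CLI installed (it’s separate from the ec2-api-tools used below), the bucket can also be created from the command line; the bucket name here is just the example one, and S3 bucket names must be globally unique:

# assumes the 'aws' CLI has been installed and configured with your keys (aws configure)
aws s3 mb s3://rmc-vms --region eu-west-1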

Once the bucket has been created, we build the command line upload statement using ec2-import-instance:

time ec2-import-instance OBIEE-SampleApp-v406-disk1.vmdk --instance-type m3.large --format VMDK --architecture x86_64 --platform Linux --bucket rmc-vms --region eu-west-1 --owner-akid $AWS_ACCESS_KEY --owner-sak $AWS_SECRET_KEY

Points to note:

  • m3.large is the spec for the VM. You can see the available list here. In the AWS blog post it suggests only a subset will work with the import method, but I’ve not hit this limitation yet.
  • region is the AWS Region in which the EBS volume and EC2 instance will be built. I’m using eu-west-1 (Ireland), and it makes sense to use the one geographically closest to where you or your users are located. Still waiting for uk-yorks-1.
  • architecture and platform relate to the type of VM you’re importing.

The upload process took just over 45 minutes for me, and that’s from a data centre with a decent upload:

[oracle@asgard sampleapp406]$ time ec2-import-instance OBIEE-SampleApp-v406-disk1.vmdk --instance-type m3.large --format VMDK --architecture x86_64 --platform Linux --bucket rmc-vms --region eu-west-1 --owner-akid $AWS_ACCESS_KEY --owner-sak $AWS_SECRET_KEY
Requesting volume size: 200 GB
TaskType        IMPORTINSTANCE  TaskId  import-i-fh08xcya       ExpirationTime  2014-09-16T10:07:44Z    Status  active  StatusMessage   Pending InstanceID      i-b07d3bf0
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   31191914496     VolumeSize      200     AvailabilityZone        eu-west-1a      ApproximateBytesConverted       0       Status       active  StatusMessage   Pending : Downloaded 0
Creating new manifest at rmc-vms/d77672aa-0e0b-4555-b368-79d386842112/OBIEE-SampleApp-v406-disk1.vmdkmanifest.xml
Uploading the manifest file
Uploading 31191914496 bytes across 2975 parts
0% |--------------------------------------------------| 100%
   |==================================================|
Done
Average speed was 11.088 MBps
The disk image for import-i-fh08xcya has been uploaded to Amazon S3
where it is being converted into an EC2 instance.  You may monitor the
progress of this task by running ec2-describe-conversion-tasks.  When
the task is completed, you may use ec2-delete-disk-image to remove the
image from S3.

real    46m59.871s
user    10m31.996s
sys     3m2.560s

Once the upload has finished Amazon automatically converts the VMDK (now residing on S3) into a EBS volume, and then attaches it to a new EC2 instance (i.e. a VM). You can monitor the status of this task using ec2-describe-conversion-tasks, optionally filtered on the TaskId returned by the import command above:

ec2-describe-conversion-tasks --region eu-west-1 import-i-fh08xcya

TaskType        IMPORTINSTANCE  TaskId  import-i-fh08xcya       ExpirationTime  2014-09-16T10:07:44Z    Status  active  StatusMessage   Pending InstanceID      i-b07d3bf0
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   31191914496     VolumeSize      200     AvailabilityZone        eu-west-1a      ApproximateBytesConverted       3898992128
Status  active  StatusMessage   Pending : Downloaded 31149971456

This is now an ideal time to mention, as a side note, the Linux utility watch, which simply re-issues a command for you every x seconds (2 by default). This way you can leave a window open and keep an eye on the progress of what is going to be a long-running job.

watch ec2-describe-conversion-tasks --region eu-west-1 import-i-fh08xcya

Every 2.0s: ec2-describe-conversion-tasks --region eu-west-1 import-i-fh08xcya                                                             Tue Sep  9 12:03:24 2014

TaskType        IMPORTINSTANCE  TaskId  import-i-fh08xcya       ExpirationTime  2014-09-16T10:07:44Z    Status  active  StatusMessage   Pending InstanceID      i-b07d3bf0
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   31191914496     VolumeSize      200     AvailabilityZone        eu-west-1a      ApproximateBytesConverted       5848511808
Status  active  StatusMessage   Pending : Downloaded 31149971456

And whilst we’re at it, if you’re using a remote server to do this (as I am, to take advantage of the large bandwidth), you will find screen invaluable for keeping tasks running and being able to reconnect at will. You can read more about screen and watch here.

So back to our EC2 import job. To start with, the task will be Pending: (NB unlike lots of CLI tools, you read the output of this one left-to-right, rather than as columns with headings)

$ ec2-describe-conversion-tasks --region eu-west-1
TaskType        IMPORTINSTANCE  TaskId  import-i-ffvx6z86       ExpirationTime  2014-09-12T15:32:01Z    Status  active  StatusMessage   Pending InstanceID      i-b2245ef2
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   5021144064      VolumeSize      60      AvailabilityZone        eu-west-1a      ApproximateBytesConverted       4707330352      Status  active  StatusMessage   Pending : Downloaded 5010658304

After a few moments it gets underway, and you can see a Progress percentage indicator: (scroll right in the code snippet below to see)

TaskType        IMPORTINSTANCE  TaskId  import-i-fgr0djcc       ExpirationTime  2014-09-15T15:39:28Z    Status  active  StatusMessage   Progress: 53%   InstanceID      i-c7692e87
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   5582545920      VolumeId        vol-f71368f0    VolumeSize      20      AvailabilityZone        eu-west-1a      ApproximateBytesConverted       5582536640      Status  completed

Note that at this point you’ll also see an Instance in the EC2 list, but it won’t launch (no attached disk – because it’s still being imported!)

If something goes wrong you’ll see the Status as cancelled, such as in this example here where the kernel in the VM was not a supported one (observe it is the UEK kernel, which isn’t supported by Amazon):

TaskType        IMPORTINSTANCE  TaskId  import-i-ffvx6z86       ExpirationTime  2014-09-12T15:32:01Z    Status  cancelled       StatusMessage   ClientError: Unsupported kernel version 2.6.32-300.32.1.el5uek       InstanceID      i-b2245ef2
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   5021144064      VolumeId        vol-91b1c896    VolumeSize      60      AvailabilityZone        eu-west-1a      ApproximateBytesConverted    5021128688      Status  completed

After an hour or so, the task should complete:

TaskType        IMPORTINSTANCE  TaskId  import-i-fh08xcya       ExpirationTime  2014-09-16T10:07:44Z    Status  completed       InstanceID      i-b07d3bf0
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   31191914496     VolumeId        vol-a383f8a4    VolumeSize      200     AvailabilityZone        eu-west-1a      ApproximateBytesConverted    31191855472     Status  completed

At this point you can remove the VMDK from S3 (and should do, else you’ll continue to be charged for it), following the instructions for ec2-delete-disk-image.

Booting the new server on EC2

Go to your EC2 control panel, where you should see an instance (EC2 term for “server”) in Stopped state and with no name.

Select the instance, and click Start on the Actions menu. After a few moments a Public IP will be shown in the details pane. But, we’re not home free quite yet…read on.

Firewalls

So this is where it gets a bit tricky. By default, the instance will have launched with Amazon’s Firewall (known as a Security Group) in place which – unless you have an existing AWS account and have modified the default security group’s configuration – is only open on port 22, which is for ssh traffic.

You need to head over to the Security Group configuration page, accessed in several ways but easiest is clicking on the security group name from the instance details pane:

Click on the Inbound tab and then Edit, and add “Custom TCP Rule” for the following ports:

  • 7780 (OBIEE front end)
  • 7001 (WLS Console / EM)
  • 5902 (oracle VNC)

You can make things more secure by restricting access to the WLS admin (7001) and VNC (5902) ports to a specific IP address or range only.
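
Since the EC2 API tools are already installed from earlier, the same rules can be added from the command line too. This is only a sketch: the security group name and the office IP range below are assumptions, so substitute the group shown in your instance’s details pane and your own CIDR:

# OBIEE front end open to the world; WLS console and VNC restricted to an example office range
ec2-authorize sampleapp-sg -P tcp -p 7780 -s 0.0.0.0/0 --region eu-west-1
ec2-authorize sampleapp-sg -P tcp -p 7001 -s 203.0.113.0/24 --region eu-west-1
ec2-authorize sampleapp-sg -P tcp -p 5902 -s 203.0.113.0/24 --region eu-west-1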

Whilst we’re talking about security, your server is now open to the internet and all the nefarious persons out there, so you’ll be wanting to harden your server not least by resetting all the passwords to ones which aren’t publicly documented in the SampleApp user documentation!

Once you’ve updated your Security Group, you can connect to your server! If you installed the OBIEE and database auto start scripts (and if not, why not??) you should find OBIEE running just nicely on http://[your ip]:7780/analytics – note that the port is 7780, not 9704.

2014-09-09_20-21-23

If you didn’t install the script, you will need to start the services manually per the SampleApp documentation. To connect to the server you can ssh in (using Terminal, PuTTY, etc.) or connect over VNC (Admin123 is the password). For VNC clients try Screen Share on Macs (installed by default), or RealVNC on Windows.
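
As a quick sketch of the SSH route, assuming you added your public key for the oracle user earlier on, and substituting your own key file and the instance’s public IP:

ssh -i ~/.ssh/my-ec2-key.pem oracle@203.0.113.10
# or tunnel VNC over SSH rather than opening port 5902 to the world
ssh -i ~/.ssh/my-ec2-key.pem -L 5902:localhost:5902 oracle@203.0.113.10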

Caveats & Disclaimers
  • Running a server on AWS EC2 costs real money, so watch out. Once you’ve put your credit card details in, Amazon will continue to charge your card whilst there are chargeable items on your account (EBS volumes, instances – running or not – and so on). You can get an idea of the scale of charges here.
  • As mentioned above, a server on the open internet is a lot more vulnerable than one virtualised on your local machine. You will get poked and probed, usually by automated scripts looking for open ports, weak passwords, and so on. SampleApp is designed to open the toybox of a pimped-out OBIEE deployment to you; it is not “hardened”, and you risk learning the tough way about the need for hardening if you’re not careful.
Cloning

Amazon EC2 supports taking a snapshot of a server, either for backup/rollback purposes or spinning up as a clone, using an Amazon Machine Image (AMI). From the Instances page, simply select “Create an Image” to build your AMI. You can then build another instance (or ten) from this AMI as needed, exact replicas of the server as it was at the point that you created the image.
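
If you’d rather script it, the EC2 API tools installed earlier can do the same thing; the instance ID below is the one from the import example, and the image name and description are just illustrative:

ec2-create-image i-b07d3bf0 -n "sampleapp-v406-baseline" -d "SampleApp v406 post-hardening snapshot" --region eu-west-1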

Lather, Rinse, and Repeat

There’s a whole host of VirtualBox “appliances” out there, and some of them such as the developer-tools-focused ones only really make sense as local VMs. But there are plenty that would benefit from a bit of “Cloud-isation”, where they’re too big or heavy to keep on your laptop all the time, but are handy to be able to spin up at will. A prime example of this for me is the EBS Vision demo database that we use for our BI Apps training. Oracle used to provide a pre-built Amazon image (known as an AMI) of this, but later withdrew it. However, Oracle do publish Oracle VM VirtualBox templates for EBS 12.1.3 and 12.2.3 (related blog), so from this with a bit of leg-work and a big upload pipe, it’s a simple matter to brew your own AWS version of it — ready to run whenever you need it.

Categories: BI & Warehousing

Sunday Times Tech Track 100

Rittman Mead Consulting - Tue, 2014-09-09 14:35

Over the weekend, Rittman Mead was listed in the Sunday Times Tech Track 100. We are extremely proud to get recognition for the business as well as our technical capability and expertise.

A lot of the public face of Rittman Mead focuses on the tools and technologies we work with. Since day one we have had a core policy to share as much information as possible. Even before the advent of social media, we shared pretty much everything we knew, either through our blog or by speaking at conferences, but we very rarely talked about the business itself. However, a lot of the journey we have gone through over the last 7 years has been about the growth and maintenance of a successful, sustainable, multi-national business. We have been able to talk about, educate and evangelise about the tools and technologies as a result of having the successful business to support this.

I remember during one interview we did several years ago the candidate asked (and I’m paraphrasing): “How do you guys make any money, all I see/read is people sitting in airports writing blog posts about leading edge technologies?”.

One massive benefit from this is we often face the same problems (albeit on a different scale) as those that we talk about with customers, so we have been able to better understand the underlying drivers and proposed solutions for our clients.

From a personal point of view, this has meant spending a lot more time looking at contracts as opposed to code and reading business books/blogs as opposed to technical ones. However, it has been well worth it and I would like to say thanks to all of those both inside and outside of the company who have helped contribute to this success.

Categories: BI & Warehousing

Analyzing Twitter Data using Datasift, MongoDB, Hive and ODI12c

Rittman Mead Consulting - Mon, 2014-09-08 14:39

Last week I posted an article on the blog around analysing Twitter data using Datasift, MongoDB and Pig, where I used the Datasift service to stream tweets about Rittman Mead into a MongoDB NoSQL database, and then queried the dataset using Pig. The context for this is the idea of a “data reservoir”, where we supplement the more traditional file and relational datasets we find in data warehouses with other data, typically machine generated, unstructured or very low-level, to add context to the numbers in our reporting system. In the example I quoted in the article, it’d be very interesting to take the activity we record against our blog and website and correlate that with the “conversation” that happens about it in the social media world; for example, were the hits for a particular article due to it being mentioned in a tweet, and did a spike in activity correspond to a particularly influential Twitter user retweeting something we’d tweeted?

NewImage

In that previous article I’d used Pig to access and analyse the data, in part because I saw a match between the nested datasets in a typical DataSift Twitter message and the relations, tuples and bags you get in a Pig schema. For example, if you look at the Tweet from Borkur in the screenshot below from RoboMongo, a Mac OS X client for MongoDB that I’ve found useful, you can see the author details nested inside the interaction details, and the Type attribute having many values under the Trends parent attribute – these map well onto Pig tuples and bags respectively.

NewImage

What I’d really like to do with this dataset, though, is to take certain elements of it and use that to supplement the data I’m loading using ODI12c. Whilst ODI can run arbitrary R, Pig and shell scripts using the ODI Procedure feature (as I did here to make use of Sqoop, before Oracle added Sqoop KMs to ODI12.1.3), it gets the best out of Hadoop when it can access data using Hive, the SQL layer over Hadoop that represents HDFS data as rows and columns, and allows us to SELECT and INSERT data using SQL commands – or to be precise, a dialect of SQL called HiveQL. But how will Hive cope with the nested and repeating data structures in a DataSift Twitter message, and allow us to get just the data out that we’re interested in?

In fact, the MongoDB connector for Hadoop that I used for Pig the other day also comes with Hive connectivity, in the form of a SerDe that lets Hive report against data in a MongoDB database (David Allen blogged about another MongoDB Hive storage handler a while ago, in an article about MongoDB and ODI). What’s more, this Hive connector for MongoDB is actually easier to work with than the Pig connector, as instead of worrying about Tuples and Bags you can just pick out the nested attributes that you’re interested in using a dot notation. For example, if I’m only interested in the InteractionID, username, tweet content and number of followers within a particular Twitter dataset, I can create a table that looks like this in Hive:

CREATE TABLE tweet_data(
  interactionId string,
  username string,
  content string,
  author_followers int)
ROW FORMAT SERDE 
  'com.mongodb.hadoop.hive.BSONSerDe' 
STORED BY 
  'com.mongodb.hadoop.hive.MongoStorageHandler' 
WITH SERDEPROPERTIES ( 
  'mongo.columns.mapping'='{"interactionId":"interactionId",
  "username":"interaction.interaction.author.username",
  "content":"interaction.interaction.content",
  "author_followers":"interaction.twitter.user.followers_count"}'
  )
TBLPROPERTIES (
  'mongo.uri'='mongodb://cdh51-node1:27017/datasiftmongodb.rm_tweets'
  )

And at that point, it’s pretty easy to bring the dataset into ODI12c, through the IKM Hive to Hive Control Append knowledge module, and join up the Twitter dataset with the website log data that’s coming in via Flume. ODI can connect to Hive via JDBC drivers supplied with CDH4/5, and once you register the Hive connection and reverse-engineer the Hive metastore metadata into ODI’s repository, the complexity of the underlying Hive storage is hidden and you’re just presented with tables and columns, just like any other datastore type.
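
As an aside, a quick way to sanity-check the Hive endpoint that ODI will be connecting through is to hit it with beeline from the CDH node; this assumes HiveServer2 is running on its default port, so adjust the host, port and user to suit your cluster:

# list the Hive tables over JDBC, the same route the ODI agent will take
beeline -u jdbc:hive2://cdh51-node1:10000/default -n oracle -e "show tables;"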

NewImage

Starting with the Twitter data first, I create a Hive table outside of ODI that returns the precise set of tweet attributes that I’m interested in, and then filter that dataset down to just the tweets that link to content on our website, by filtering on the tweet link’s URL matching the start of our website address.

NewImage
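
The filter itself is defined in the ODI mapping shown above, but the underlying idea is easy to sketch in HiveQL; the table and column names below (a variant of the tweet_data table with the tweet’s expanded link exposed as a column) are hypothetical and purely illustrate the shape of the predicate:

hive -e "
SELECT interactionid, username, content
FROM   tweet_data_with_links
WHERE  tweet_link LIKE 'http://www.rittmanmead.com%';
"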

Then I load up the hits from the Rittman Mead website, previously landed into Hadoop using Flume and exposed to ODI as another Hive table, filter out all the non-blog page accesses and keep just the URL part of the Apache Weblog request field, removing the transport mechanism and other bits around it.

NewImage

Then, I use a final ODI mapping to join the two datasets together, using ODI’s ability to apply HiveQL expressions to the incoming datasets so that they’ve got the same format – trailing ‘/‘ at the end of the URL, no ampersand and query text at the end of the URL, and so on. Both this and the previous transformation are great examples of where ODI can help with this sort of work, making it pretty easy to munge and correct your data so that you’re then able to match up the two different sources.

NewImage

Then it’s just a case of creating a package or load plan to sequence the mappings, and then run them using the local or standalone agent. You can see the individual KM steps running on the left-hand side, with ODI generating HiveQL queries which in turn are translated into MapReduce and run in parallel across the Hadoop cluster.

NewImage

And then, at the end of the process, I’ve got a Hive table of all of our blog articles that have been mentioned on Twitter (since we started consuming the tweet feed, a day or so ago), with the number of page requests and the number of times that page got mentioned in tweets.

NewImage

Obviously there’s a lot more we can do with this; we can access the number of followers each Twitter user has, along with their location, gender and the sentiment (positive, negative, neutral) of the tweet. From that we can work out some impact from the Twitter activity, and we can also add to it data from other sources such as Facebook, LinkedIn and so on to get a fuller picture of the activity around our site. Then, the data we’re gathering in can either be left in MongoDB, or I can use these ODI mappings to either archive it in Hive tables, or export the highlights out to Oracle Database using Sqoop or Oracle Loader for Hadoop.

Categories: BI & Warehousing

Analyzing Twitter Data using Datasift, MongoDB and Pig

Rittman Mead Consulting - Thu, 2014-09-04 17:08

If you followed our recent postings on the updated Oracle Information Management Reference Architecture, one of the key concepts we talk about is the “data reservoir”. This is a pool of additional data that you can add to your data warehouse, typically stored on Hadoop or NoSQL databases, where you store unstructured, semi-structured or unprocessed structured data in large volume and at low cost. Adding a data reservoir gives you the ability to leverage the types of data sources that previously were thought of as too “messy”, too high-volume to store for any serious amount of time, or as requiring processing or storage by tools that aren’t in the usual relational data warehouse toolset.

NewImage

By formally including them in your overall information management architecture though, with common tools, security and data governance over the entire dataset, you give your users the ability to consider the whole “360-degree view” of their customers and their interactions with the market.

To take an example, a few weeks ago I posted a series of articles on the blog where I captured user activity on our website, http://www.rittmanmead.com, transported it to one of our Hadoop clusters using Apache Flume, and then analysed it using Hive, Pig and finally Spark. In one of the articles I used Pig and a geocoding API to determine the country that each website visitor came from, and then in a final five-part series I automated the whole process using ODI12c and then copied the final output tables to Oracle using Oracle Loader for Hadoop. This is quite a nice example of ETL-offloading into Hadoop, with an element of Hadoop-native event capture using Flume, but once the processing has finished the data moves out of Hadoop and into the Oracle database.

NewImage

What would be interesting though would be to start adding data into Hadoop that’s permanent, not transitory as part of an ETL process, to start building out this concept of the “data reservoir”. Taking our website activity dataset, something that would really add context to the visits to our site would be corresponding activity on social networks, to see who’s linking to our posts, who’s discussing them, whether those discussions are positive or negative, and which wider networks those people belong to. Twitter is a good place to start with this as it’s the place we see our articles and activities most discussed, but it’d be good to build out this picture over time to add in activity on social networks such as Facebook, YouTube, LinkedIn and Google+; if we did this, we’d be able to consider a much broader and richer picture when looking at activity around Rittman Mead, potentially correlating activity and visits to our website with mentions of us in the press, comments made by our team and the wider picture of what’s going on in our world.

NewImage

There are a number of ways you can bring Twitter data into your Hadoop cluster or data warehouse, but the most convenient way we’ve found is to use DataSift, a social media aggregation site and service that license raw feeds from the likes of Twitter, Facebook, WordPress and other social media platforms, enhancing the data feeds with sentiment scores and other attributes, and then sell access to the feeds via a number of formats and APIs. Accessing Twitter data through DataSift costs money, particularly if you want to go back and look at historical activity vs. just filtering on a few keywords in new Twitter activity, but they’re very developer-friendly and able to provide greater volumes of firehose activity than the standard Twitter developer API allows.

So assuming you can get access to a stream of Twitter data on a particular topic – in our case, all mentions of our website, our team’s Twitter handles, retweets of our content etc – the question then becomes one of how to store the data. Looking at the Datasift Sample Output page, each of these streams delivers their payloads via JSON documents, XML-like structures that nest categories of tweet metadata within parent structures that make up the total tweet data and metadata dataset.

NewImage

And there’s a good reason for this; individual tweets might not use every bit of possible tweet metadata, for example not including entries under “mentions” or “retweets” if those aren’t used in a particular message. Certain bits of metadata might be repeated X number of times – @ mentions, for example – and the JSON document might have a different structure altogether if a different JSON schema version is used for a particular tweet. Altogether not an easy type of data structure for a relational database to hold – though Oracle 12.1.0.2.0 has just introduced native JSON support to the core Oracle database – but NoSQL databases in contrast find these sorts of data structures easy, and one of the most popular for this type of work is MongoDB.

MongoDB is an open-source “document” database that’s probably best known to the Oracle world through this internet cartoon; what the video is getting at is NoSQL advocates recommending databases such as MongoDB for large-scale web work when something much more mainstream like mySQL would do the job better, but where NoSQL and document-style databases come into their own is storing just this type of semi-structured, schema-on-read dataset. In fact, Datasift support MongoDB as an API end-point for their Twitter feed, so let’s go ahead and set up a MongoDB database, prepare it for the Twitter data, and then set up a Datasift feed into it.

MongoDB installation on Linux, for example to run alongside a Hadoop installation, is pretty straightforward and involves adding a YUM repository and then running “sudo yum install mongodb-org” (there’s an OS X installation too, but I wanted to run this server-side on my Hadoop cluster). Once you’ve installed the MongoDB software, you start the mongod service to enable the server element, and then log into the mongo command-shell to create a new database.
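
For reference, the install steps look something like the sketch below; the repository definition is the one used for MongoDB 2.6 on RHEL-family distributions at the time of writing, so treat the baseurl as an assumption and check the current MongoDB docs for your version:

sudo tee /etc/yum.repos.d/mongodb.repo <<'EOF'
[mongodb]
name=MongoDB Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/
gpgcheck=0
enabled=1
EOF
sudo yum install -y mongodb-org
sudo service mongod start
mongo    # drops you into the command shell used below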

MongoDB, being a schema-on-read database, doesn’t require you to set up a database schema up-front; instead, the schema comes from the data you load into it, with MongoDB’s equivalent of tables called “collections”, and with those collections made up of documents, analogous to rows in Oracle. Where it gets interesting though is that collections and databases only get created when you first start using them, and individual documents can have slightly, or even completely, different schema structures to each other – which makes them ideal for holding the sorts of datasets generated by Twitter, Facebook and DataSift.

[root@cdh51-node1 ~]# mongo
MongoDB shell version: 2.6.4
connecting to: test
> use datasift2
switched to db datasift2

Let’s create a couple of simple documents, and then add those to a collection. Note that the document becomes available just by declaring it, as does the collection when I add documents to it. Note also that the query language we’re using to work with MongoDB is Javascript, again making it particularly suited to JSON documents, and web-type environments.

> a = { name : "mark" }
{ "name" : "mark" }
> b = { product : "chair", size : "L" }
{ "product" : "chair", "size" : "L" }
> db
datasift2
> db.testData.insert(a)
WriteResult({ "nInserted" : 1 })
> db.testData.insert(b)
WriteResult({ "nInserted" : 1 })
> db.testData.find()
{ "_id" : ObjectId("54094081b5b6021fe9bc8b10"), "name" : "mark" }
{ "_id" : ObjectId("54094088b5b6021fe9bc8b11"), "product" : "chair", "size" : "L" }

And note also how the second entry (document) in the collection has a different schema to the entry above it – perfect for our semi-structured Twitter data, and something we could store as-is in MongoDB in this loose data format and then apply more formal structures and schemas to when we come to access the data – as we’ll do in a moment using Pig, and more formally using ODI and Hive in the next article in this series.

Setting up the Twitter feed from DataSift is a two-stage process, once you’ve got an account with them and an API key; first you define your search terms against a nested document model for the data source, then you activate the feed, in this case into my MongoDB database, and wait for the tweets to roll in. For my feed I selected tweets written by myself and some of the Rittman Mead team, tweets mentioning us, and tweets that included links to our blog in the main tweet contents (there’s also a graphical query designer, but I prefer to write them by hand using what DataSift call their “curated stream definition language” (CSDL)).

NewImage

You can then preview the feed, live, or go back and sample historic data if you’re interested in loading old tweets, rather than incoming new ones. Once you’re ready you then need to activate the feed, in my case by calling a URL using CURL with a bunch of parameters (our API key and other sensitive data has been masked):

curl -X POST 'https://api.datasift.com/v1/push/create' \
-d 'name=connectormongodb' \
-d 'hash=65bd9dc4943ec426b04819exxxxxxxxx' \
-d 'output_type=mongodb' \
-d 'output_params.host=rittmandev.com' \
-d 'output_params.port=27017' \
-d 'output_params.use_ssl=no' \
-d 'output_params.verify_ssl=no' \
-d 'output_params.db_name=datasiftmongodb' \
-d 'output_params.collection_name=rm_tweets' \
-H 'Auth: rittmanmead:xxxxxxxxxxxxxxxxxxxxxxxxxxx'

The “hash” in the parameter list is the specific feed to activate, and the output type is MongoDB. The collection name is new, and will be created by MongoDB when the first tweet comes in; let’s run the curl command now and sit back for a while, and wait for some twitter activity to arrive in MongoDB …

… and a couple of hours later, eight tweets have been captured by the DataSift filter, with the last of them being one from Michael Rainey about his trip tonight to the Seahawks game:

> db.rm_tweets.count()
8
> db.rm_tweets.findOne()
{
    "_id" : ObjectId("54089a879ad4ec99158b4d78"),
    "interactionId" : "1e43454b1a16a880e074e49c51369eac",
    "subscriptionId" : "f6cf211e03dca5da384786676c31fd3e",
    "hash" : "65bd9dc4943ec426b04819e6291ef1ce",
    "hashType" : "stream",
    "interaction" : {
        "demographic" : {
            "gender" : "male"
        },
        "interaction" : {
            "author" : {
                "avatar" : "http://pbs.twimg.com/profile_images/476898781821018113/YRkKyGDl_normal.jpeg",
                "id" : 14551637,
                "language" : "en",
                "link" : "https://twitter.com/mRainey",
                "name" : "Michael Rainey",
                "username" : "mRainey"
            },
            "content" : "Greyson and I will be ready for the @Seahawks game tonight! #GoHawks! #kickoff2014 #GBvsSEA http://t.co/4u16ziBhnD",
            "created_at" : "Thu, 04 Sep 2014 16:58:29 +0000",
            "hashtags" : [
                "GoHawks",
                "kickoff2014",
                "GBvsSEA"
            ],
            "id" : "1e43454b1a16a880e074e49c51369eac",
            "link" : "https://twitter.com/mRainey/status/507573423334100992",
            "mention_ids" : [
                23642374
            ],
            "mentions" : [
                "Seahawks"
            ],
            "received_at" : 1409849909.2967,
            "schema" : {
                "version" : 3
            },
            "source" : "Instagram",
            "type" : "twitter"
        },
        "language" : {
            "tag" : "en",
                "tag_extended" : "en",
            "confidence" : 98
        },
        "links" : {
            "code" : [
                200
            ],
            "created_at" : [
                "Thu, 04 Sep 2014 16:58:29 +0000"
            ],
            "meta" : {
                "charset" : [
                    "CP1252"
                ],
                "lang" : [
                    "en"
                ],
                "opengraph" : [
                    {
                        "description" : "mrainey's photo on Instagram",
                        "image" : "http://photos-d.ak.instagram.com/hphotos-ak-xfa1/10655141_1470641446544147_1761180844_n.jpg",
                        "site_name" : "Instagram",
                        "type" : "instapp:photo",
                        "url" : "http://instagram.com/p/sh_h6sQBYT/"
                    }
                ]
            },
            "normalized_url" : [
                "http://instagram.com/p/sh_h6sQBYT"
            ],
            "title" : [
                "Instagram"
            ],
            "url" : [
                "http://instagram.com/p/sh_h6sQBYT/"
            ]
        },
        "salience" : {
            "content" : {
                "sentiment" : 0,
                "topics" : [
                    {
                        "name" : "Video Games",
                        "hits" : 0,
                        "score" : 0.5354745388031,
                        "additional" : "Greyson and I will be ready for the @Seahawks game tonight!"
                    }
                ]
            }
        },
        "trends" : {
            "type" : [
                "San Jose",
                "United States"
            ],
            "content" : [
                "seahawks"
            ],
            "source" : [
                "twitter"
            ]
        },
        "twitter" : {
            "created_at" : "Thu, 04 Sep 2014 16:58:29 +0000",
            "display_urls" : [
                "instagram.com/p/sh_h6sQBYT/"
            ],
            "domains" : [
                "instagram.com"
            ],
            "filter_level" : "medium",
            "hashtags" : [
                "GoHawks",
                "kickoff2014",
                "GBvsSEA"
            ],
            "id" : "507573423334100992",
            "lang" : "en",
            "links" : [
                "http://instagram.com/p/sh_h6sQBYT/"
            ],
            "mention_ids" : [
                23642374
            ],
            "mentions" : [
                "Seahawks"
            ],
            "source" : "<a href=\"http://instagram.com\" rel=\"nofollow\">Instagram</a>",
            "text" : "Greyson and I will be ready for the @Seahawks game tonight! #GoHawks! #kickoff2014 #GBvsSEA http://t.co/4u16ziBhnD",
            "user" : {
                "created_at" : "Sat, 26 Apr 2008 21:18:01 +0000",
                "description" : "Data Integration (#ODI #GoldenGate #OBIA) consultant / blogger / speaker @RittmanMead.\nOracle ACE.\n#cycling #Seahawks #travel w/ @XiomaraRainey\n#GoCougs!",
                "favourites_count" : 746,
                "followers_count" : 486,
                "friends_count" : 349,
                "geo_enabled" : true,
                "id" : 14551637,
                "id_str" : "14551637",
                "lang" : "en",
                "listed_count" : 28,
                "location" : "Pasco, WA",
                "name" : "Michael Rainey",
                "profile_image_url" : "http://pbs.twimg.com/profile_images/476898781821018113/YRkKyGDl_normal.jpeg",
                "profile_image_url_https" : "https://pbs.twimg.com/profile_images/476898781821018113/YRkKyGDl_normal.jpeg",
                "screen_name" : "mRainey",
                "statuses_count" : 8549,
                "time_zone" : "Pacific Time (US & Canada)",
                "url" : "http://www.linkedin.com/in/rainey",
                "utc_offset" : -25200,
                "verified" : false
            }
        }
    }
}

If you’ve not looked at Twitter metadata before, it’s surprising how much metadata accompanies what’s ostensibly a 140-character tweet. As well as details on the author, where the tweet was sent from, what Twitter client sent the tweet and details of the tweet itself, there’s details and statistics on the sender, the number of followers they’ve got and where they’re located, a list of all other Twitter users mentioned in the tweet and any URLs and images referenced.

Not every tweet will use every element of metadata, and some tweets will repeat certain attributes – other Twitter users you’ve mentioned in the tweet, for example – as many times as there are mentions. Which makes Twitter data a prime candidate for analysis using Pig and Spark, which handle easily the concept of nested data structures, tuples (ordered lists of data, such as attribute sets for an entity such as “author”), and bags (sets of unordered attributes, such as the list of @ mentions in a tweet).

There’s a MongoDB connector for Hadoop on GitHub which allows MapReduce to connect to MongoDB databases, running MapReduce jobs on MongoDB storage rather than HDFS (or S3, or whatever). This gives us the ability to use languages such as Pig and Hive to filter and aggregate our MongoDB data rather than MongoDB’s Javascript API, which isn’t as fully-featured and scalable as MapReduce and has limitations in terms of the number of documents you can include in aggregations; let’s start then by connecting Pig to our MongoDB database, and reading in the documents with no Pig schema applied:

grunt> tweets = LOAD 'mongodb://cdh51-node1:27017/datasiftmongodb.rm_tweets' using com.mongodb.hadoop.pig.MongoLoader;
2014-09-05 06:40:51,773 [main] INFO  com.mongodb.hadoop.pig.MongoStorage - Initializing MongoLoader in dynamic schema mode.
2014-09-05 06:40:51,838 [main] INFO  com.mongodb.hadoop.pig.MongoStorage - Initializing MongoLoader in dynamic schema mode.
grunt> tweets_count = FOREACH (GROUP tweets ALL) GENERATE COUNT (tweets);                                             
2014-09-05 06:41:07,772 [main] INFO  com.mongodb.hadoop.pig.MongoStorage - Initializing MongoLoader in dynamic schema mode.
2014-09-05 06:41:07,817 [main] INFO  com.mongodb.hadoop.pig.MongoStorage - Initializing MongoLoader in dynamic schema mode.
grunt> dump tweets_count
...
(9)
grunt>

So there’s nine tweets in the MongoDB database now. Let’s take a look at one of the documents by creating a Pig alias containing just a single record.

grunt> tweets_limit_1 = LIMIT tweets 1;
2014-09-05 06:43:12,351 [main] INFO  com.mongodb.hadoop.pig.MongoStorage - Initializing MongoLoader in dynamic schema mode.
2014-09-05 06:43:12,443 [main] INFO  com.mongodb.hadoop.pig.MongoStorage - Initializing MongoLoader in dynamic schema mode.
grunt> dump tweets_limit_1
...
([interaction#{trends={source=(twitter), content=(seahawks), type=(San Jose,United States)}, twitter={filter_level=medium, text=Greyson and I will be ready for the @Seahawks game tonight! #GoHawks! #kickoff2014 #GBvsSEA http://t.co/4u16ziBhnD, mention_ids=(23642374), domains=(instagram.com), links=(http://instagram.com/p/sh_h6sQBYT/), lang=en, id=507573423334100992, source=<a href="http://instagram.com" rel="nofollow">Instagram</a>, created_at=Thu, 04 Sep 2014 16:58:29 +0000, hashtags=(GoHawks,kickoff2014,GBvsSEA), mentions=(Seahawks), user={profile_image_url_https=https://pbs.twimg.com/profile_images/476898781821018113/YRkKyGDl_normal.jpeg, location=Pasco, WA, geo_enabled=true, statuses_count=8549, lang=en, url=http://www.linkedin.com/in/rainey, utc_offset=-25200, id=14551637, time_zone=Pacific Time (US & Canada), favourites_count=746, verified=false, friends_count=349, description=Data Integration (#ODI #GoldenGate #OBIA) consultant / blogger / speaker @RittmanMead.
Oracle ACE.
#cycling #Seahawks #travel w/ @XiomaraRainey
#GoCougs!, name=Michael Rainey, created_at=Sat, 26 Apr 2008 21:18:01 +0000, screen_name=mRainey, id_str=14551637, profile_image_url=http://pbs.twimg.com/profile_images/476898781821018113/YRkKyGDl_normal.jpeg, followers_count=486, listed_count=28}, display_urls=(instagram.com/p/sh_h6sQBYT/)}, salience={content={topics=([score#0.5354745388031,additional#Greyson and I will be ready for the @Seahawks game tonight!,hits#0,name#Video Games]), sentiment=0}}, links={created_at=(Thu, 04 Sep 2014 16:58:29 +0000), title=(Instagram), code=(200), normalized_url=(http://instagram.com/p/sh_h6sQBYT), url=(http://instagram.com/p/sh_h6sQBYT/), meta={lang=(en), charset=(CP1252), opengraph=([image#http://photos-d.ak.instagram.com/hphotos-ak-xfa1/10655141_1470641446544147_1761180844_n.jpg,type#instapp:photo,site_name#Instagram,url#http://instagram.com/p/sh_h6sQBYT/,description#mrainey's photo on Instagram])}}, interaction={schema={version=3}, id=1e43454b1a16a880e074e49c51369eac, content=Greyson and I will be ready for the @Seahawks game tonight! #GoHawks! #kickoff2014 #GBvsSEA http://t.co/4u16ziBhnD, author={id=14551637, username=mRainey, language=en, avatar=http://pbs.twimg.com/profile_images/476898781821018113/YRkKyGDl_normal.jpeg, name=Michael Rainey, link=https://twitter.com/mRainey}, received_at=1.4098499092967E9, source=Instagram, mention_ids=(23642374), link=https://twitter.com/mRainey/status/507573423334100992, created_at=Thu, 04 Sep 2014 16:58:29 +0000, hashtags=(GoHawks,kickoff2014,GBvsSEA), type=twitter, mentions=(Seahawks)}, language={tag=en, confidence=98, tag_extended=en}, demographic={gender=male}},interactionId#1e43454b1a16a880e074e49c51369eac,_id#54089a879ad4ec99158b4d78,hash#65bd9dc4943ec426b04819e6291ef1ce,subscriptionId#f6cf211e03dca5da384786676c31fd3e,hashType#stream])

And there’s Michael’s tweet again, with all the attributes from the MongoDB JSON document appended together into a single record. But in this format the data isn’t all that useful as we can’t easily access individual elements in the Twitter record; what would be better would be to apply a Pig schema definition to the LOAD statement, using the MongoDB document field listing that we saw when we displayed a single record from the MongoDB collection earlier.

I can start by referencing the document fields that become simple Pig datatypes; ID and interactionId, for example:

grunt> tweets = LOAD 'mongodb://cdh51-node1:27017/datasiftmongodb.my_first_test' using com.mongodb.hadoop.pig.MongoLoader('id:chararray,interactionId:chararray','id');
2014-09-05 06:57:57,985 [main] INFO  com.mongodb.hadoop.pig.MongoStorage - Initializing MongoLoader in dynamic schema mode.
2014-09-05 06:57:58,022 [main] INFO  com.mongodb.hadoop.pig.MongoStorage - Initializing MongoLoader in dynamic schema mode.
grunt> describe tweets
2014-09-05 06:58:11,611 [main] INFO  com.mongodb.hadoop.pig.MongoStorage - Initializing MongoLoader in dynamic schema mode.
tweets: {id: chararray,interactionId: chararray}
grunt> tweets_limit_1 = LIMIT tweets 1;
...
(53fae22e9ad4ec93658b513e,1e42c2747542a100e074fff55100414a)
grunt>

Where the MongoDB document has fields nested within other fields, you can reference these as a tuple if they’re a set of attributes under a common header, or a bag if they’re just a list of values for a single attribute; for example, the “username” field is contained within the author tuple, which in turn is contained within the interaction tuple, so to count tweets by author I’d need to first flatten the author tuple to turn its fields into scalar fields, then project out the username and other details; then I can group the relation in the normal way on those author details, and generate a count of tweets, like this:

grunt> tweets = LOAD 'mongodb://cdh51-node1:27017/datasiftmongodb.rm_tweets' using com.mongodb.hadoop.pig.MongoLoader('id:chararray,interactionId:chararray,interaction:tuple(interaction:tuple(author:tuple(id:int,language:chararray,link:chararray,name:chararray,username:chararray)))','id');
grunt> tweets_author_tuple_flattened = FOREACH tweets GENERATE id, FLATTEN(interaction.$0);                                            
grunt> tweets_with_authors = FOREACH tweets_author_tuple_flattened GENERATE id, interaction::author.username, interaction::author.name;
grunt> tweets_author_group = GROUP tweets_with_authors by username; 
grunt> tweets_author_count = FOREACH tweets_author_group GENERATE group, COUNT(tweets_with_authors); 
...
(rmoff,1)
(dw_pete,1)
(mRainey,3)
(P_J_FLYNN,3)
(davidhuey,7)
(EdelweissK,1)
(JamesOickle,3)
(markrittman,3)
(rittmanmead,2)
(RedgraveChris,1)
grunt>

So there’s obviously a lot more we can do with the Twitter dataset as it stands, but where it’ll get really interesting is combining this with other social media interaction data – for example from Facebook, LinkedIn and so on – and then correlating that with our main site activity data. Check back in a few days when we’ll be covering this second stage in a further blog article, using ODI12c to orchestrate the process.

Categories: BI & Warehousing

Oracle 12.1.0.2 and Data Warehouses

Rittman Mead Consulting - Mon, 2014-09-01 09:21

If you follow Blogs and Tweets from the Oracle community you won’t have missed hearing about the recent release of the first patch-set for Oracle 12c. With this release there are some significant pieces of new functionality that will be of interest to Data Warehouse DBAs and architects. The headline feature that most Oracle followers will know of is the new in-memory option. In my opinion this is a game-changer for how we design reporting architectures; it gives us an effective way to build operational reporting over the reference data architecture described by Mark Rittman a few weeks ago. Of course, the database team here at Rittman Mead have been rolling up our sleeves and getting into in-memory technology for quite a while now; Mark even featured in the official launch presentation by Larry Ellison with the now famous “so easy it’s boring” quote. Last week Mark published the first of our Rittman Mead in-memory articles, with the promise of more in-memory articles to come, including my article for the next edition of UKOUG’s “Oracle Scene”.

However, the in-memory option is not the only new feature that is going to be a benefit to us in the BI/DW world. One of the new features I am going to describe is Exadata only, but the first one I am going to mention is generally available in the 12.1.0.2 database.

Typically, data warehouse queries are different from those seen in the OLTP world – in DW we tend to access a large number of rows and probably aggregate things up to answer some business question. Often we are not using indexes; instead, scanning tables or table partitions is the norm. Usually, the data we need to aggregate is widely scattered across the table or partition. Data Warehouse queries often look at records that share a set of common attributes; we look at the sales for the ‘ACME’ widget or the value of items shipped to Arizona. For us there can be great advantage if data we use together is stored together, and this is where Attribute Clustering can play a part.

Attribute Clustering is usually configured on the table at DDL time and in effect controls the ordering of data inserted by DIRECT PATH operations; Oracle does not enforce this ordering for conventional inserts. This may not be an issue in data warehouses, as bulk-batch operations typically use APPEND inserts, which are direct path inserts, or partition operations; it may be more of an issue with some of the real-time conventional path loading paradigms. In addition to Direct Path load operations, Attribute Clustering can also occur when you do ALTER TABLE MOVE type operations (this also includes operations such as PARTITION SPLIT). On the surface, Attribute Clustering sounds little different to using an ORDER BY on an append insert and hoping that Oracle actually stores the data where you expect it to. However, Attribute Clustering gives us two other possibilities in how we can order the data in the cluster.

Firstly, we can cluster on columns from JOINED dimension tables. For example, in a SALES DW we may have a sales fact with a product key at the SKU level, but we often join to the product dimension and report at the Product Category level. In this case we can cluster our sales fact table so that each product category appears in the same cluster. Suppose we have just opened a chain of supermarkets with a wide but uninspiring range of brands and products (see the tiny piece of our product dimension table below):

NewImage

As you can see, our Product PK has no relationship at all to the type of product being sold. In our Kimball-style data warehouse we typically store the product key on the fact table and join to the product dimension to obtain all of the other product attributes and hierarchy members. This is essentially what we can do with join Attribute Clustering: in our example we can cluster our fact table on PRODUCT_CATEGORY so that all of the Laundry sales are physically close to each other in the fact table.

CREATE TABLE rm_sales (
  product_id      NUMBER NOT NULL,
  store_id        NUMBER NOT NULL,
  sales_date      DATE NOT NULL,
  loyalty_card_id NUMBER,
  quantity_sold   NUMBER(3) NOT NULL,
  value_sold      NUMBER(10,2) NOT NULL
)
CLUSTERING
  rm_sales JOIN products ON (rm_sales.product_id = products.product_pk)
  BY LINEAR ORDER (sales_date, product_category, store_id);

Notice we are clustering on a join to the product dimension table’s “product_category” column. We are also clustering on sales_date, which is especially important in the case of partitioned fact tables so that the benefits of clustering align with the partitioning strategy. We are not restricted to just one join either: if we wanted to, we could also cluster our sales by store region, e.g. so that the Colorado laundry product sales are located in the same area of the sales table. To use join Attribute Clustering we need to define the PK/FK relationships between fact and dimension, but it is always good practice to have those in place anyway as they help the CBO so much with query plan evaluation.

Secondly, notice the BY LINEAR ORDER clause in the table DDL. Of the two ordering options, linear order is the most basic form of clustering: in this case our data is structured so that all the items for a sales day are clustered together; within that cluster we order by product category, and those categories are in turn ordered by store_id. The other way we can cluster is BY INTERLEAVED ORDER; here, Oracle maps a combination of dimensional values to a single cluster value using a z-order curve fitting approach. This sounds complex, but it ensures that items that are frequently queried together are co-located in the disk blocks in storage.

Interleaved ordering is probably the best choice for data warehousing as it aligns well with how we access data in our queries. Although we could include all of the dimension keys in our ordering list, it is going to be of more benefit to include just a subset of dimensions; typically for retail I’d go with DATE (or something that correlates to the time-based partition key of the fact table), the product and the store. Of course we can again join to the dimension tables and cluster at higher hierarchy levels such as product category and store region. The Oracle 12c Data Warehousing guide gives some good advice, but you can’t go far wrong if you think about clustering items together that will be queried together.
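To make that concrete, here is a minimal sketch of what an interleaved definition might look like if we had created rm_sales without its clustering clause and wanted to add it after the event. The ALTER TABLE ... ADD CLUSTERING syntax and the MOVE to re-organise existing rows are as I read them in the 12.1.0.2 documentation, so treat this as a sketch rather than a tested script:

-- Add interleaved attribute clustering to an existing fact table
-- (rm_sales and products are the hypothetical tables from the example above)
ALTER TABLE rm_sales
  ADD CLUSTERING rm_sales
      JOIN products ON (rm_sales.product_id = products.product_pk)
      BY INTERLEAVED ORDER (sales_date, product_category, store_id);

-- Existing rows only pick up the new ordering through a data movement
-- operation such as a MOVE (or a partition MOVE / SPLIT, as noted earlier)
ALTER TABLE rm_sales MOVE;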

Clustering data can give us some advantages in performance. Better data compression and improved index range scans spring to mind, but to get the most benefit we should also look at another new feature, zone maps. Unlike Attribute Clustering, Zone Maps are Engineered Systems only. In a way they are similar to the storage indexes already found on Exadata, but they have some additional advantages; they are also somewhat different from the zone maps encountered in other DB vendors’ products such as Netezza.

In Exadata, a storage index can provide the maximum and minimum values encountered for a column in a storage cell. I say “can” because there is no guarantee that the range for a given column is held in the storage index. Zone maps, on the other hand, will always provide maxima and minima for all of the columns specified at zone map creation. The zone map is organised in terms of contiguous ranges of database blocks and is materialized, so it is physically persisted in the database and thus survives database restarts. Like materialized views, materialized zone maps can become stale and need to be maintained.

We can define a zone map on one or more table columns and, just like Attribute Clustering, we may also create zone maps on table joins. As a table can only have one zone map, it is important to include all of the columns you wish to track. Zone maps are designed to work well with attribute clustering; in fact it is just a simple DDL statement to add a zone map to an attribute clustered table so that the zone map tracks the same attributes as the clustering. This is where we get the major performance boost: instead of scanning the whole table, the zone map tells us which ranges of database blocks contain data that matches our query predicates.
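To see what that looks like, here is a hedged sketch of both routes against the hypothetical rm_sales table from earlier – a standalone zone map on a couple of columns, and a zone map attached to the existing clustering definition so it tracks the same attributes. The syntax is as I read it in the 12.1.0.2 documentation rather than something tested as part of this article:

-- Standalone zone map tracking per-zone minima and maxima for the listed columns
CREATE MATERIALIZED ZONEMAP rm_sales_zmap
ON rm_sales (sales_date, store_id);

-- Or, for a table that already has attribute clustering defined, have the
-- zone map track the same attributes as the clustering
ALTER TABLE rm_sales MODIFY CLUSTERING WITH MATERIALIZED ZONEMAP;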

Zone maps with Attribute Clustering give us another powerful tool to boost DW performance on Exadata – we can do star queries without resorting to bitmap indexes, and we minimise IO when scanning fact tables because we only need to look where we know the data to be. Exciting times!

Categories: BI & Warehousing

Taking a Look at the Oracle Database 12c In-Memory Option

Rittman Mead Consulting - Fri, 2014-08-29 13:43

The In-Memory Option for Oracle Database 12c became available a few weeks ago with the 12.1.0.2 database patchset, adding column-store and in-memory capabilities to the Oracle Database. Unlike pure in-memory databases such as Oracle TimesTen, the in-memory option adds an in-memory column-store feature to the regular row-based storage in the Oracle database, creating in-memory copies of selected row-store tables in a compressed column-based storage format, with the whole process being automatic and enabled by issuing just a couple of commands – to the point where my feedback at the end of beta testing was that it was “almost boring” – said slightly tongue-in-cheek…

NewImage

But of course adding in-memory capabilities to the Oracle database is anything but boring – whilst TimesTen on Exalytics has given us the ability to store aggregate tables and “hot data” tables in a dedicated in-memory database co-located with the BI mid-tier, TimesTen is another set of skills to learn and another product to manage. Moreover, you’ve got to somehow get the data you want to aggregate or otherwise cache out of your data warehouse and into TimesTen, and then you’ve got the even more difficult task of keeping that TimesTen dataset in-sync with your main data warehouse data. Most importantly though, even with the 2TB and 4TB versions of Exalytics many data warehouses won’t fit entirely into memory, so you’re going to end up with some of your data in TimesTen and some in your data warehouse, and with only the TimesTen dataset benefiting from in-memory speeds.

So what if you could enable the in-memory option on your data warehouse, and then just run it all in-memory there? To see how this worked, I thought it’d be interesting to take the flight delays dataset that comes with the latest OBIEE 11g v406 SampleApp, and copy that across to an Oracle 12.1.0.2.0 database to test it out with the in-memory option. This dataset comes with a set of dashboard pages and analyses and contains around 6m rows of data, and in SampleApp is accelerated by an aggregate schema, also stored in Oracle, that generally reduces query times down to just a few seconds.

NewImage

So what I thought would be interesting to check was, first, whether enabling the in-memory option sped up the dashboards, and second, whether having the in-memory option removed the need for the aggregate tables altogether. I was also interested to see whether putting the aggregate tables into memory improved their access time significantly, as anecdotally I’d heard that there wasn’t much clock-time difference between accessing aggregates in a TimesTen database vs. just creating them in the same data warehouse database as the source data. To set this all up, I created the new 12.1.0.2.0 database, exported the BI_AIRLINES and BI_AIRLINES_AGG schemas out of the 12c database that comes with SampleApp v406, imported them into the same schema names in the new 12.1.0.2.0 instance on the new VM (rather than on the SampleApp VM), and then amended the connection pool details in the SampleApp RPD to point to this new, external Oracle 12.1.0.2.0 database instance. The 12.1.0.2.0 database at this point had the following memory allocation:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
 
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
 
Total System Global Area 6325010432 bytes
Fixed Size               2938448 bytes
Variable Size            1207962032 bytes
Database Buffers         5100273664 bytes
Redo Buffers             13836288 bytes
Database mounted.
Database opened.

So that I didn’t end up comparing the larger database instance on my new 12.1.0.2.0 VM with the smaller one the airlines data came from on the SampleApp VM, I created my baseline response time measurements on this new VM with none of the in-memory features enabled, running some of the dashboard pages and clicking on some of the prompt values – and the response time wasn’t bad, with just the odd analysis taking an excessive time to return. Mostly though, the results for each dashboard came back straight away, as you’d expect when summary tables have been deployed to speed up reporting.

chart1

Looking at the SQL being generated by the BI Server, you can quickly see why this is the case; the BI Server is using aggregate navigation and actually querying the BI_AIRLINES_AGG schema to return the aggregated results for the dashboard, as you can see from one of the SQL statements retrieved from the NQQuery.log file below:

select sum(T255906.Dep_Delay_00039B26) as c1,
     sum(T255906.Arr_Delay_00039B22) as c2,
     sum(T255906.Z_of_Fligh00039B28) as c3,
     substr(T255216.Carrier00039BA9 , 1, 25) as c4,
     T255216.Carrier00039BA9 as c5,
     T255216.Carrier_Co00039BAA as c6
from 
     BI_AIRLINES_AGG.SA_16_Dest00039D06 T255357,
     BI_AIRLINES_AGG.SA_Time_Mo00039CFB T255737,
     BI_AIRLINES_AGG.SA_31_Carr00039CEB T255216,
     BI_AIRLINES_AGG.FACT_AGG_OR_06M T255906
where  ( T255216.Carrier_Co00039BAA = T255906.Carrier_Co00039BAA and T255357.Dest_Airpo00039C2A = T255906.Dest_Airpo00039C2A and T255737.Dep_Month00039C07 = T255906.Dep_Month00039C07 and substr(T255216.Carrier00039BA9 , 1, 25) = 'SunFlower Airlines' and (T255357.Dest_Regio00039C31 in ('Midwest Region', 'Northeast Region', 'South Region', 'West Region')) and T255737.Month_of_Y00039C0F between 3 and 12 ) 
group by T255216.Carrier00039BA9, T255216.Carrier_Co00039BAA
order by c5, c6, c4

OBIEE will always use these aggregate tables if they’re available in the repository, so to enable me to test the reports with these aggregates disabled I borrowed the technique Robin introduced in his blog on testing aggregate navigation, and added a request variable prompt to the dashboard page that allows me to pass an INACTIVE_SCHEMAS value to the logical SQL queries issued for the dashboard analyses, and therefore tell the BI Server to ignore the aggregate schema and just use the detail-level BI_AIRLINES schema instead.

NewImage
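For reference, when the prompt value is set the logical SQL that the BI Server receives carries the request variable as a SET VARIABLE prefix. The sketch below uses made-up presentation table and column names, and I haven’t re-checked the exact quoting the variable value expects, so treat it as illustrative rather than the exact SQL from the SampleApp dashboards:

-- Illustrative logical SQL: the INACTIVE_SCHEMAS request variable tells the
-- BI Server to ignore physical tables in the named schema during aggregate
-- navigation for this query
SET VARIABLE INACTIVE_SCHEMAS='BI_AIRLINES_AGG';
SELECT
   "Flight Delays"."Carrier"       s_1,
   "Flight Delays"."Arrival Delay" s_2
FROM "Airline Traffic"
ORDER BY s_1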

I then tested the dashboards again, this time with the BI_AIRLINES_AGG schema disabled; now every dashboard took a while to display properly, with every analysis showing the “spinning clock” for twenty or thirty seconds at a time. Comparing the response times to the ones from a moment ago when aggregates were enabled, you can see the difference.

chart2

So let’s set up the in-memory option and see how it affects these two scenarios. The in-memory option for Oracle Database 12c 12.1.0.2.0 is installed by default as part of the core database, but doesn’t start working until you set a value for the INMEMORY_SIZE initialisation parameter – as you can see from the SQL*Plus output below, it’s currently set to zero, effectively disabling the feature:

SQL> conn / as sysdba
Connected.

SQL> show parameter INMEMORY

NAME                     TYPE    VALUE
------------------------------------ ----------- ------------------------------
inmemory_clause_default          string
inmemory_force               string  DEFAULT
inmemory_max_populate_servers        integer     0
inmemory_query               string  ENABLE
inmemory_size                big integer 0
inmemory_trickle_repopulate_servers_ integer     1
percent
optimizer_inmemory_aware         boolean     TRUE

The memory Oracle allocates to the in-memory option via this initialisation parameter comes out of the SGA, so before I set the INMEMORY_SIZE parameter it’d be a good idea to increase the SGA_TARGET value to accommodate the 1G I’m going to assign to INMEMORY_SIZE. I do both from the SQL*Plus command prompt and then bounce the database to bring the new values into use:

SQL> alter system set inmemory_size = 1G scope=spfile;

System altered.

SQL> show parameter sga_target

NAME                     TYPE    VALUE
------------------------------------ ----------- ------------------------------
sga_target               big integer 6032M
SQL> alter system set sga_target = 7032M scope=spfile;

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 7381975040 bytes
Fixed Size          2941480 bytes
Variable Size        1207963096 bytes
Database Buffers     5083496448 bytes
Redo Buffers           13832192 bytes
In-Memory Area       1073741824 bytes
Database mounted.
Database opened.

Now we’re at the point where we can enable the tables for in-memory access. Tables to be copied into memory can either be created with the INMEMORY attribute from the start, or you can ALTER TABLE them afterwards and specify that they go into memory (or you can configure the in-memory settings at the tablespace level, or even specify particular columns or partitions to go into memory). The underlying data still gets stored row-wise on disk, but enabling a table for in-memory access tells the Oracle database to create a column-store in-memory representation of the table and direct suitable queries to that copy, all the time keeping the copy in sync with the row-store base data in the background.
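For completeness, here are a few hedged examples of those other scopes – the table, tablespace, column and partition names below are made up, and the syntax is as documented for 12.1.0.2 rather than something I ran as part of this test:

-- Declare the in-memory attribute at create time
CREATE TABLE sales_hist (sale_id NUMBER, sale_date DATE, amount NUMBER(10,2)) INMEMORY;

-- Make in-memory the default for new objects created in a tablespace
ALTER TABLESPACE dw_data DEFAULT INMEMORY;

-- Keep a table in memory but exclude columns that are never queried
ALTER TABLE sales_hist INMEMORY NO INMEMORY (sale_id);

-- Enable just one partition of a partitioned table
ALTER TABLE sales_part MODIFY PARTITION sales_2014 INMEMORY;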

I’ll start by enabling all of the BI_AIRLINES schema for in-memory access, as the biggest pay-off would be if that meant we didn’t need to maintain the aggregate tables. After running a SELECT table_name FROM user_tables to list out the table names, I then run a script to enable all the tables for in-memory access, with the in-memory copies being populated immediately:

alter table AIRCRAFT_GROUP inmemory priority high;
alter table AIRCRAFT_TYPE inmemory priority high;
alter table AIRLINES_USER_DATA inmemory priority high;
alter table AIRLINE_ID inmemory priority high;
alter table CANCELLATION inmemory priority high;
alter table CARRIER_GROUP_NEW inmemory priority high;
alter table CARRIER_REGION inmemory priority high;
alter table DEPARBLK inmemory priority high;
alter table DISTANCE_GROUP_250 inmemory priority high;
alter table DOMESTIC_SEGMENT inmemory priority high;
alter table OBIEE_COUNTY_HIER inmemory priority high;
alter table OBIEE_GEO_AIRPORT_BRIDGE inmemory priority high;
alter table OBIEE_GEO_ORIG inmemory priority high;
alter table OBIEE_ROUTE inmemory priority high;
alter table OBIEE_TIME_DAY_D inmemory priority high;
alter table OBIEE_TIME_MTH_D inmemory priority high;
alter table ONTIME_DELAY_GROUPS inmemory priority high;
alter table PERFORMANCE inmemory priority high;
alter table PERFORMANCE_ENDECA_MV inmemory priority high;
alter table ROUTES_FOR_LINKS inmemory priority high;
alter table SCHEDULES inmemory priority high;
alter table SERVICE_CLASS inmemory priority high;
alter table UNIQUE_CARRIERS inmemory priority high;

Looking at USER_TABLES from this schema, I can see all of the tables I selected now marked for in-memory access, for immediate loading:

SQL> @display_table_inmem_details.sql
SQL> select table_name
  2  ,    inmemory
  3  ,    inmemory_priority
  4  from   user_tables
  5  /
 
TABLE_NAME           INMEMORY INMEMORY
-------------------- -------- --------
AIRCRAFT_GROUP       ENABLED  HIGH
UNIQUE_CARRIERS      ENABLED  HIGH
SERVICE_CLASS        ENABLED  HIGH
SCHEDULES            ENABLED  HIGH
ROUTES_FOR_LINKS     ENABLED  HIGH
PERFORMANCE          ENABLED  HIGH
ONTIME_DELAY_GROUPS  ENABLED  HIGH
OBIEE_TIME_MTH_D     ENABLED  HIGH
OBIEE_TIME_DAY_D     ENABLED  HIGH
OBIEE_ROUTE          ENABLED  HIGH
OBIEE_GEO_ORIG       ENABLED  HIGH
 
TABLE_NAME    INMEMORY INMEMORY
-------------------- -------- --------
OBIEE_GEO_AIRPORT_BR ENABLED  HIGH
IDGE
 
OBIEE_COUNTY_HIER    ENABLED  HIGH
DOMESTIC_SEGMENT     ENABLED  HIGH
DISTANCE_GROUP_250   ENABLED  HIGH
DEPARBLK             ENABLED  HIGH
CARRIER_REGION       ENABLED  HIGH
CARRIER_GROUP_NEW    ENABLED  HIGH
CANCELLATION         ENABLED  HIGH
AIRLINE_ID           ENABLED  HIGH
 
TABLE_NAME           INMEMORY INMEMORY
-------------------- -------- --------
AIRLINES_USER_DATA   ENABLED  HIGH
AIRLINES_PBLOB$      DISABLED
AIRLINES_PART$       DISABLED
AIRLINES_NODE_TZ$    DISABLED
AIRLINES_NODE$       DISABLED
AIRLINES_LINK_TZ$    DISABLED
AIRLINES_LINK_SCH$   DISABLED
AIRLINES_LINK$       DISABLED
AIRLINES_AIRPORT_TZ$ DISABLED
AIRCRAFT_TYPE        ENABLED  HIGH
 
30 rows selected.

And I can track the progress of the tables being copied into memory using the V$IM_SEGMENTS v$ view, like this:

SQL> @display_im_segments.sql
SQL> set echo on
SQL> set pages 200
SQL> col owner for a20
SQL> col name for a20
SQL> col status for a10
SQL> select v.owner
  2  ,      v.segment_name name
  3  ,      v.populate_status status
  4  from   v$im_segments v;

OWNER            NAME         STATUS
-------------------- -------------------- ----------
BI_AIRLINES      OBIEE_COUNTY_HIER    COMPLETED
BI_AIRLINES      PERFORMANCE      STARTED
BI_AIRLINES      UNIQUE_CARRIERS      COMPLETED
BI_AIRLINES      AIRLINES_LINK_TZ$    COMPLETED
BI_AIRLINES      OBIEE_TIME_MTH_D     COMPLETED
BI_AIRLINES      AIRLINES_LINK_SCH$   COMPLETED
BI_AIRLINES      OBIEE_ROUTE      COMPLETED
BI_AIRLINES      DOMESTIC_SEGMENT     COMPLETED
BI_AIRLINES      AIRLINES_LINK$   COMPLETED
BI_AIRLINES      AIRLINE_ID       COMPLETED
BI_AIRLINES      OBIEE_GEO_ORIG   COMPLETED
BI_AIRLINES      AIRLINES_NODE$   COMPLETED
BI_AIRLINES      OBIEE_GEO_AIRPORT_BR COMPLETED
             IDGE

BI_AIRLINES      AIRLINES_NODE_TZ$    COMPLETED
BI_AIRLINES      OBIEE_TIME_DAY_D     COMPLETED

Note that most of the tables went into memory immediately, but one (PERFORMANCE) is taking a while because it’s so big. Also note that not all tables are listed in the v$ view yet – the database either hasn’t got around to populating them, or it might choose not to populate them at all if it thinks the memory could be better used elsewhere. You can alter the priority of these in-memory copy processes if you want, and decide whether the copying happens immediately or only when the table is first scanned (accessed).
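As a sketch of what that looks like (these aren’t settings I changed for this test), the population priority can be adjusted per table – PRIORITY NONE defers population until the table is first scanned, while the higher settings queue population up as soon as the database opens:

-- Defer population until the table is first scanned (the default behaviour)
alter table DOMESTIC_SEGMENT inmemory priority none;

-- Populate this table ahead of everything else, straight after startup
alter table PERFORMANCE inmemory priority critical;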

Running the dashboards again, with the request variable prompt set to disallow the aggregate schema, gave me the response times below – the in-memory queries were much faster than the row-based, non in-memory ones, but they weren’t down to the response times of the dashboards right at the start, where all data was stored row-wise but we had aggregate tables to speed up the queries (note I’ve reordered the bars so the non in-memory queries with no aggregate tables are on the left of each series, as the slowest of all approaches).

chart3

Taking a look at one of the physical SQL queries for a cross-tab (hierarchical columns) analysis, you can see that in-memory table access is happening:

WITH 
SAWITH0 AS (select sum(T233937.ACTUALELAPSEDTIME) as c1,
     sum(T233937.ARRDELAYMINUTES) as c2,
     concat(concat(T233484.AIRPORT, ' - '), substr(T233484.TR_AIRPORT_NAME , instr(T233484.TR_AIRPORT_NAME , ': ') + 2 , 50)) as c3,
     T233820.D_NAME as c4,
     T233820.R_NAME as c5
from 
     BI_AIRLINES.OBIEE_GEO_AIRPORT_BRIDGE T233484 /* 10 GEO_AIRPORT_ORIGIN */ ,
     BI_AIRLINES.OBIEE_COUNTY_HIER T233820 /* 11 COUNTY_HIER_ORIGIN */ ,
     BI_AIRLINES.OBIEE_TIME_MTH_D T233732 /* 41 TIME MONTH */ ,
     BI_AIRLINES.PERFORMANCE T233937 /* 00 PERFORMANCE FULL */ 
where  ( T233484.AIRPORT = T233937.ORIGIN and T233484.STCTY_FIPS = T233820.SC_CODE and T233732.Cal_Month = T233937.MONTH and T233732.Cal_Qtr = T233937.QUARTER and T233732.Cal_Year = T233937.YEAR and T233732.Cal_Month between 6 and 12 and T233937.MONTH between 6 and 12 ) 
group by T233820.D_NAME, T233820.R_NAME, concat(concat(T233484.AIRPORT, ' - '), substr(T233484.TR_AIRPORT_NAME , instr(T233484.TR_AIRPORT_NAME , ': ') + 2 , 50))),
SAWITH1 AS (select sum(T233609.PASSENGERS) as c1,
     T233820.R_NAME as c2,
     T233820.D_NAME as c3,
     concat(concat(T233484.AIRPORT, ' - '), substr(T233484.TR_AIRPORT_NAME , instr(T233484.TR_AIRPORT_NAME , ': ') + 2 , 50)) as c4,
     sum(T233609.PASSENGERS_MILES) as c5
from 
     BI_AIRLINES.OBIEE_GEO_AIRPORT_BRIDGE T233484 /* 10 GEO_AIRPORT_ORIGIN */ ,
     BI_AIRLINES.OBIEE_COUNTY_HIER T233820 /* 11 COUNTY_HIER_ORIGIN */ ,
     BI_AIRLINES.OBIEE_TIME_MTH_D T233732 /* 41 TIME MONTH */ ,
     BI_AIRLINES.DOMESTIC_SEGMENT T233609 /* 01 DOMESTIC Segment */ 
where  ( T233484.AIRPORT = T233609.ORIGIN and T233484.STCTY_FIPS = T233820.SC_CODE and T233609.MONTH = T233732.Cal_Month and T233609.QUARTER = T233732.Cal_Qtr and T233609.YEAR = T233732.Cal_Year and T233609.MONTH between 6 and 12 and T233732.Cal_Month between 6 and 12 ) 
group by T233820.D_NAME, T233820.R_NAME, concat(concat(T233484.AIRPORT, ' - '), substr(T233484.TR_AIRPORT_NAME , instr(T233484.TR_AIRPORT_NAME , ': ') + 2 , 50)))
select 2 as c1,
     case  when D1.c3 is not null then D1.c3 when D2.c4 is not null then D2.c4 end  as c2,
     case  when D1.c4 is not null then D1.c4 when D2.c3 is not null then D2.c3 end  as c3,
     case  when D1.c5 is not null then D1.c5 when D2.c2 is not null then D2.c2 end  as c4,
     'All Orig Airports' as c5,
     1 as c6,
     case  when D1.c4 is not null then D1.c4 when D2.c3 is not null then D2.c3 end  as c7,
     case  when D1.c5 is not null then D1.c5 when D2.c2 is not null then D2.c2 end  as c8,
     cast(D1.c2 as  DOUBLE PRECISION  ) / nullif( D1.c1, 0) * 100 as c9,
     D2.c5 as c10,
     D2.c1 as c14
from 
     SAWITH0 D1 full outer join SAWITH1 D2 On D1.c3 = D2.c4 and  SYS_OP_MAP_NONNULL(D1.c4) = SYS_OP_MAP_NONNULL(D2.c3) 
order by c4, c3, c2
SQL> @complex_query_explain.sql
 
Explained.

SQL> set lines 200
SQL> set pages 0
SQL> select * from table(dbms_xplan.display);
Plan hash value: 3097908901

----------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name             | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |              |  2673K|  1392M|   |   112K  (1)| 00:00:05 |    |      |        |
|   1 |  PX COORDINATOR                    |              |   |   |   |        |      |    |      |        |
|   2 |   PX SEND QC (ORDER)                   | :TQ10006         |  2673K|  1392M|   |   112K  (1)| 00:00:05 |  Q1,06 | P->S | QC (ORDER) |
|   3 |    SORT ORDER BY                   |              |  2673K|  1392M|  1492M|   112K  (1)| 00:00:05 |  Q1,06 | PCWP |        |
|   4 |     PX RECEIVE                     |              |  2673K|  1392M|   | 26819   (1)| 00:00:02 |  Q1,06 | PCWP |        |
|   5 |      PX SEND RANGE                 | :TQ10005         |  2673K|  1392M|   | 26819   (1)| 00:00:02 |  Q1,05 | P->P | RANGE      |
|   6 |       VIEW                     | VW_FOJ_0         |  2673K|  1392M|   | 26819   (1)| 00:00:02 |  Q1,05 | PCWP |        |
|*  7 |        HASH JOIN FULL OUTER BUFFERED           |              |  2673K|  1392M|   | 26819   (1)| 00:00:02 |  Q1,05 | PCWP |        |
|   8 |     PX RECEIVE                 |              |   136 | 37128 |   |   144   (7)| 00:00:01 |  Q1,05 | PCWP |        |
|   9 |      PX SEND HASH                  | :TQ10003         |   136 | 37128 |   |   144   (7)| 00:00:01 |  Q1,03 | S->P | HASH       |
|  10 |       PX SELECTOR                  |              |   |   |   |        |      |  Q1,03 | SCWC |        |
|  11 |        VIEW                    |              |   136 | 37128 |   |   144   (7)| 00:00:01 |  Q1,03 | SCWC |        |
|  12 |         HASH GROUP BY              |              |   136 | 17408 |   |   144   (7)| 00:00:01 |  Q1,03 | SCWC |        |
|* 13 |          HASH JOIN                 |              |   136 | 17408 |   |   143   (6)| 00:00:01 |  Q1,03 | SCWC |        |
|  14 |           JOIN FILTER CREATE           | :BF0000          |   136 | 11288 |   |   142   (6)| 00:00:01 |  Q1,03 | PCWP |        |
|* 15 |            HASH JOIN               |              |   136 | 11288 |   |   142   (6)| 00:00:01 |  Q1,03 | SCWC |        |
|  16 |         JOIN FILTER CREATE         | :BF0001          |   136 |  4896 |   |   142   (6)| 00:00:01 |  Q1,03 | PCWP |        |
|* 17 |          HASH JOIN             |              |   136 |  4896 |   |   142   (6)| 00:00:01 |  Q1,03 | SCWC |        |
|* 18 |           TABLE ACCESS INMEMORY FULL   | OBIEE_TIME_MTH_D     |   161 |  1610 |   | 1   (0)| 00:00:01 |  Q1,03 | SCWP |        |
|* 19 |           TABLE ACCESS INMEMORY FULL   | DOMESTIC_SEGMENT     |   771 | 20046 |   |   141   (6)| 00:00:01 |  Q1,03 | SCWP |        |
|  20 |         JOIN FILTER USE            | :BF0001          |  1787 | 83989 |   | 1   (0)| 00:00:01 |  Q1,03 | PCWP |        |
|* 21 |          TABLE ACCESS INMEMORY FULL    | OBIEE_GEO_AIRPORT_BRIDGE |  1787 | 83989 |   | 1   (0)| 00:00:01 |  Q1,03 | SCWP |        |
|  22 |           JOIN FILTER USE              | :BF0000          |  3221 |   141K|   | 1   (0)| 00:00:01 |  Q1,03 | PCWP |        |
|* 23 |            TABLE ACCESS INMEMORY FULL      | OBIEE_COUNTY_HIER    |  3221 |   141K|   | 1   (0)| 00:00:01 |  Q1,03 | SCWP |        |
|  24 |     PX RECEIVE                 |              |  2255K|   587M|   | 26673   (1)| 00:00:02 |  Q1,05 | PCWP |        |
|  25 |      PX SEND HASH                  | :TQ10004         |  2255K|   587M|   | 26673   (1)| 00:00:02 |  Q1,04 | P->P | HASH       |
|  26 |       VIEW                     |              |  2255K|   587M|   | 26673   (1)| 00:00:02 |  Q1,04 | PCWP |        |
|  27 |        HASH GROUP BY               |              |  2255K|   264M|   489M| 26673   (1)| 00:00:02 |  Q1,04 | PCWP |        |
|  28 |         PX RECEIVE                 |              |  2255K|   264M|   | 26673   (1)| 00:00:02 |  Q1,04 | PCWP |        |
|  29 |          PX SEND HASH              | :TQ10002         |  2255K|   264M|   | 26673   (1)| 00:00:02 |  Q1,02 | P->P | HASH       |
|  30 |           HASH GROUP BY            |              |  2255K|   264M|   489M| 26673   (1)| 00:00:02 |  Q1,02 | PCWP |        |
|* 31 |            HASH JOIN               |              |  3761K|   441M|   |   841   (9)| 00:00:01 |  Q1,02 | PCWP |        |
|  32 |         PX RECEIVE             |              |   161 |  1610 |   | 1   (0)| 00:00:01 |  Q1,02 | PCWP |        |
|  33 |          PX SEND BROADCAST         | :TQ10000         |   161 |  1610 |   | 1   (0)| 00:00:01 |  Q1,00 | S->P | BROADCAST  |
|  34 |           PX SELECTOR              |              |   |   |   |        |      |  Q1,00 | SCWC |        |
|* 35 |            TABLE ACCESS INMEMORY FULL  | OBIEE_TIME_MTH_D     |   161 |  1610 |   | 1   (0)| 00:00:01 |  Q1,00 | SCWP |        |
|* 36 |         HASH JOIN              |              |  3773K|   406M|   |   838   (9)| 00:00:01 |  Q1,02 | PCWP |        |
|  37 |          PX RECEIVE            |              |  1787 |   160K|   | 2   (0)| 00:00:01 |  Q1,02 | PCWP |        |
|  38 |           PX SEND BROADCAST        | :TQ10001         |  1787 |   160K|   | 2   (0)| 00:00:01 |  Q1,01 | S->P | BROADCAST  |
|  39 |            PX SELECTOR             |              |   |   |   |        |      |  Q1,01 | SCWC |        |
|* 40 |             HASH JOIN              |              |  1787 |   160K|   | 2   (0)| 00:00:01 |  Q1,01 | SCWC |        |
|  41 |              TABLE ACCESS INMEMORY FULL| OBIEE_GEO_AIRPORT_BRIDGE |  1787 | 83989 |   | 1   (0)| 00:00:01 |  Q1,01 | SCWP |        |
|  42 |              TABLE ACCESS INMEMORY FULL| OBIEE_COUNTY_HIER    |  3221 |   141K|   | 1   (0)| 00:00:01 |  Q1,01 | SCWP |        |
|  43 |          PX BLOCK ITERATOR         |              |  3773K|    75M|   |   834   (9)| 00:00:01 |  Q1,02 | PCWC |        |
|* 44 |           TABLE ACCESS INMEMORY FULL   | PERFORMANCE          |  3773K|    75M|   |   834   (9)| 00:00:01 |  Q1,02 | PCWP |        |
----------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("D1"."C3"="D2"."C4" AND SYS_OP_MAP_NONNULL("D1"."C4")=SYS_OP_MAP_NONNULL("D2"."C3"))
  13 - access("T233484"."STCTY_FIPS"="T233820"."SC_CODE")
  15 - access("T233484"."AIRPORT"="T233609"."ORIGIN")
  17 - access("T233732"."CAL_MONTH"=TO_NUMBER("T233609"."MONTH") AND "T233732"."CAL_QTR"=TO_NUMBER("T233609"."QUARTER") AND
          "T233732"."CAL_YEAR"=TO_NUMBER("T233609"."YEAR"))
  18 - inmemory("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
       filter("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
  19 - inmemory(TO_NUMBER("T233609"."MONTH")>=6 AND TO_NUMBER("T233609"."MONTH")<=12)
       filter(TO_NUMBER("T233609"."MONTH")>=6 AND TO_NUMBER("T233609"."MONTH")<=12)
  21 - inmemory(SYS_OP_BLOOM_FILTER(:BF0001,"T233484"."AIRPORT"))
       filter(SYS_OP_BLOOM_FILTER(:BF0001,"T233484"."AIRPORT"))
  23 - inmemory(SYS_OP_BLOOM_FILTER(:BF0000,"T233820"."SC_CODE"))
       filter(SYS_OP_BLOOM_FILTER(:BF0000,"T233820"."SC_CODE"))
  31 - access("T233732"."CAL_MONTH"="T233937"."MONTH" AND "T233732"."CAL_QTR"="T233937"."QUARTER" AND "T233732"."CAL_YEAR"="T233937"."YEAR")
  35 - inmemory("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
       filter("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
  36 - access("T233484"."AIRPORT"="T233937"."ORIGIN")
  40 - access("T233484"."STCTY_FIPS"="T233820"."SC_CODE")
  44 - inmemory("T233937"."MONTH">=6 AND "T233937"."MONTH"<=12)
       filter("T233937"."MONTH">=6 AND "T233937"."MONTH"<=12)

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - Degree of Parallelism is 2 because of table property

80 rows selected.

Indeed, looking at the queries the BI Server is sending to the database, it’s not too surprising that the in-memory difference in this case wasn’t too dramatic. According to the docs (emphasis mine):

Storing a database object in the IM column store can improve performance significantly for the following types of operations performed on the database object:

  • A query that scans a large number of rows and applies filters that use operators such as the following: =, <, >, and IN
  • A query that selects a small number of columns from a table or materialized view with a large number of columns, such as a query that selects five columns from a table with 100 columns
  • A query that joins a small table to a large table
  • A query that aggregates data

The IM column store does not improve performance for the following types of operations:

  • Queries with complex predicates
  • Queries that select a large number of columns
  • Queries that return a large number of rows
  • Queries with multiple large table joins

and our query certainly has complex predicates, returns a fair few rows, and has lots of large table joins. Taking a simpler query that you’d be more likely to write yourself when querying a data warehouse, you can see in-memory table access being used again, but with a much simpler and cheaper explain plan:

SQL> @inmem_explain.sql
SQL> set echo on
SQL> explain plan for
  2  select /*+ INMEMORY */ sum(T233937.ACTUALELAPSEDTIME) as c1,
  3   sum(T233937.WEATHERDELAY) as c2,
  4   sum(T233937.SECURITYDELAY) as c3,
  5   sum(T233937.NASDELAY) as c4,
  6   sum(T233937.LATEAIRCRAFTDELAY) as c5,
  7   sum(T233937.ARRDELAYMINUTES) as c6,
  8   sum(T233937.CARRIERDELAY) as c7,
  9   sum(nvl(case  when T233937.CANCELLED < 1 then T233937.FLIGHTS end  , 0)) as c8,
 10   T233820.D_NAME as c9
 11  from
 12   BI_AIRLINES.OBIEE_GEO_AIRPORT_BRIDGE T233484 /* 10 GEO_AIRPORT_ORIGIN */ ,
 13   BI_AIRLINES.OBIEE_COUNTY_HIER T233820 /* 11 COUNTY_HIER_ORIGIN */ ,
 14   BI_AIRLINES.OBIEE_GEO_AIRPORT_BRIDGE T233497 /* 12 GEO_AIPORT_DEST */ ,
 15   BI_AIRLINES.OBIEE_COUNTY_HIER T233831 /* 13 COUNTY_HIER_DEST */ ,
 16   BI_AIRLINES.OBIEE_TIME_MTH_D T233732 /* 41 TIME MONTH */ ,
 17   BI_AIRLINES.DISTANCE_GROUP_250 T233594 /* 19 DISTANCE_GROUP_250 */ ,
 18   BI_AIRLINES.PERFORMANCE T233937 /* 00 PERFORMANCE FULL */
 19  where  ( T233484.AIRPORT = T233937.ORIGIN and T233484.STCTY_FIPS = T233820.SC_CODE and T233497.AIRPORT = T233937.DEST and T233497.STCTY_FIPS = T233831.SC_CODE and T233594.DESCRIPTION = '1000-1249 Miles' and T233594.CODE = T233937.DISTANCEGROUP and T233732.Cal_Month = T233937.MONTH and T233732.Cal_Qtr = T233937.QUARTER and T233732.Cal_Year = T233937.YEAR and T233831.R_NAME = 'Northeast Region' and T233732.Cal_Month between 6 and 12 and T233937.MONTH between 6 and 12 )
 20  group by T233820.D_NAME
 21  order by c9
 22  /
 
Explained.
 
SQL> set lines 300
SQL> set pages 0
SQL> select * from table(dbms_xplan.display);
Plan hash value: 3055743864

-----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name             | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                  |     9 |  1314 |   883  (13)| 00:00:01 |    |      |        |
|   1 |  PX COORDINATOR                 |                  |       |       |        |          |    |      |        |
|   2 |   PX SEND QC (ORDER)                | :TQ10006             |     9 |  1314 |   883  (13)| 00:00:01 |  Q1,06 | P->S | QC (ORDER) |
|   3 |    SORT GROUP BY                |                  |     9 |  1314 |   883  (13)| 00:00:01 |  Q1,06 | PCWP |        |
|   4 |     PX RECEIVE                  |                  |     9 |  1314 |   883  (13)| 00:00:01 |  Q1,06 | PCWP |        |
|   5 |      PX SEND RANGE              | :TQ10005             |     9 |  1314 |   883  (13)| 00:00:01 |  Q1,05 | P->P | RANGE       |
|   6 |       HASH GROUP BY             |                  |     9 |  1314 |   883  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|*  7 |        HASH JOIN                |                  | 60775 |  8665K|   882  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|   8 |     PX RECEIVE              |                  |  3221 | 99851 |     1   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|   9 |      PX SEND BROADCAST          | :TQ10000             |  3221 | 99851 |     1   (0)| 00:00:01 |  Q1,00 | S->P | BROADCAST  |
|  10 |       PX SELECTOR               |                  |       |       |        |          |  Q1,00 | SCWC |        |
|  11 |        TABLE ACCESS INMEMORY FULL       | OBIEE_COUNTY_HIER        |  3221 | 99851 |     1   (0)| 00:00:01 |  Q1,00 | SCWP |        |
|* 12 |     HASH JOIN               |                  | 60775 |  6825K|   881  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|  13 |      PX RECEIVE             |                  |  1787 | 17870 |     1   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|  14 |       PX SEND BROADCAST         | :TQ10001             |  1787 | 17870 |     1   (0)| 00:00:01 |  Q1,01 | S->P | BROADCAST  |
|  15 |        PX SELECTOR              |                  |       |       |        |          |  Q1,01 | SCWC |        |
|  16 |         TABLE ACCESS INMEMORY FULL      | OBIEE_GEO_AIRPORT_BRIDGE |  1787 | 17870 |     1   (0)| 00:00:01 |  Q1,01 | SCWP |        |
|* 17 |      HASH JOIN              |                  | 60775 |  6231K|   880  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|  18 |       PX RECEIVE                |                  |   161 |  1610 |     1   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|  19 |        PX SEND BROADCAST            | :TQ10002             |   161 |  1610 |     1   (0)| 00:00:01 |  Q1,02 | S->P | BROADCAST  |
|  20 |         PX SELECTOR             |                  |       |       |        |          |  Q1,02 | SCWC |        |
|* 21 |          TABLE ACCESS INMEMORY FULL     | OBIEE_TIME_MTH_D         |   161 |  1610 |     1   (0)| 00:00:01 |  Q1,02 | SCWP |        |
|* 22 |       HASH JOIN             |                  | 60964 |  5655K|   879  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|  23 |        JOIN FILTER CREATE           | :BF0000              |     1 |    19 |     1   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|  24 |         PX RECEIVE              |                  |     1 |    19 |     1   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|  25 |          PX SEND BROADCAST          | :TQ10003             |     1 |    19 |     1   (0)| 00:00:01 |  Q1,03 | S->P | BROADCAST  |
|  26 |           PX SELECTOR           |                  |       |       |        |          |  Q1,03 | SCWC |        |
|* 27 |            TABLE ACCESS INMEMORY FULL   | DISTANCE_GROUP_250       |     1 |    19 |     1   (0)| 00:00:01 |  Q1,03 | SCWP |        |
|* 28 |        HASH JOIN                |                  |   670K|    48M|   878  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|  29 |         JOIN FILTER CREATE          | :BF0001              |   318 |  9540 |     2   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|  30 |          PX RECEIVE             |                  |   318 |  9540 |     2   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|  31 |           PX SEND BROADCAST         | :TQ10004             |   318 |  9540 |     2   (0)| 00:00:01 |  Q1,04 | S->P | BROADCAST  |
|  32 |            PX SELECTOR          |                  |       |       |        |          |  Q1,04 | SCWC |        |
|* 33 |         HASH JOIN           |                  |   318 |  9540 |     2   (0)| 00:00:01 |  Q1,04 | SCWC |        |
|  34 |          JOIN FILTER CREATE     | :BF0002              |   217 |  4340 |     1   (0)| 00:00:01 |  Q1,04 | PCWP |        |
|* 35 |           TABLE ACCESS INMEMORY FULL| OBIEE_COUNTY_HIER        |   217 |  4340 |     1   (0)| 00:00:01 |  Q1,04 | SCWP |        |
|  36 |          JOIN FILTER USE        | :BF0002              |  1787 | 17870 |     1   (0)| 00:00:01 |  Q1,04 | PCWP |        |
|* 37 |           TABLE ACCESS INMEMORY FULL| OBIEE_GEO_AIRPORT_BRIDGE |  1787 | 17870 |     1   (0)| 00:00:01 |  Q1,04 | SCWP |        |
|  38 |         JOIN FILTER USE         | :BF0000              |  3773K|   165M|   874  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|  39 |          JOIN FILTER USE            | :BF0001              |  3773K|   165M|   874  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|  40 |           PX BLOCK ITERATOR         |                  |  3773K|   165M|   874  (13)| 00:00:01 |  Q1,05 | PCWC |        |
|* 41 |            TABLE ACCESS INMEMORY FULL   | PERFORMANCE          |  3773K|   165M|   874  (13)| 00:00:01 |  Q1,05 | PCWP |        |
-----------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("T233484"."STCTY_FIPS"="T233820"."SC_CODE")
  12 - access("T233484"."AIRPORT"="T233937"."ORIGIN")
  17 - access("T233732"."CAL_MONTH"="T233937"."MONTH" AND "T233732"."CAL_QTR"="T233937"."QUARTER" AND "T233732"."CAL_YEAR"="T233937"."YEAR")
  21 - inmemory("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
       filter("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
  22 - access("T233594"."CODE"="T233937"."DISTANCEGROUP")
  27 - inmemory("T233594"."DESCRIPTION"='1000-1249 Miles')
       filter("T233594"."DESCRIPTION"='1000-1249 Miles')
  28 - access("T233497"."AIRPORT"="T233937"."DEST")
  33 - access("T233497"."STCTY_FIPS"="T233831"."SC_CODE")
  35 - inmemory("T233831"."R_NAME"='Northeast Region')
       filter("T233831"."R_NAME"='Northeast Region')
  37 - inmemory(SYS_OP_BLOOM_FILTER(:BF0002,"T233497"."STCTY_FIPS"))
       filter(SYS_OP_BLOOM_FILTER(:BF0002,"T233497"."STCTY_FIPS"))
  41 - inmemory("T233937"."MONTH">=6 AND "T233937"."MONTH"<=12 AND SYS_OP_BLOOM_FILTER_LIST(SYS_OP_BLOOM_FILTER(:BF0001,"T233937"."DEST"),SYS
          _OP_BLOOM_FILTER(:BF0000,"T233937"."DISTANCEGROUP")))
       filter("T233937"."MONTH">=6 AND "T233937"."MONTH"<=12 AND SYS_OP_BLOOM_FILTER_LIST(SYS_OP_BLOOM_FILTER(:BF0001,"T233937"."DEST"),SYS_O
          P_BLOOM_FILTER(:BF0000,"T233937"."DISTANCEGROUP")))

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - Degree of Parallelism is 2 because of table property

75 rows selected.

If I then turn-off the in-memory feature and regenerate the execution plan, you can see without in-memory the plan is around 5x as expensive:

SQL> alter system set INMEMORY_SIZE = 0 scope = spfile;

System altered.

SQL> shutdown immediate;
ORA-01097: cannot shutdown while in a transaction - commit or rollback first
SQL> rollback;

Rollback complete.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 7398752256 bytes
Fixed Size          2941528 bytes
Variable Size        1056968104 bytes
Database Buffers     6325010432 bytes
Redo Buffers           13832192 bytes
Database mounted.
Database opened.
SQL> @noinmem_explain.sql

Explained.

SQL> set lines 300
SQL> set pages 0
SQL> select * from table(dbms_xplan.display);
Plan hash value: 2990499928

--------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name              | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |               |     9 |  1341 |  4086   (1)| 00:00:01 |        |      |        |
|   1 |  TEMP TABLE TRANSFORMATION                |               |       |       |        |          |        |      |        |
|   2 |   LOAD AS SELECT                      | SYS_TEMP_0FD9D6605_275335 |       |       |        |          |        |      |        |
|*  3 |    HASH JOIN                          |               |   318 |  9540 |    22   (0)| 00:00:01 |        |      |        |
|*  4 |     TABLE ACCESS FULL                     | OBIEE_COUNTY_HIER         |   217 |  4340 |    13   (0)| 00:00:01 |        |      |        |
|   5 |     TABLE ACCESS FULL                     | OBIEE_GEO_AIRPORT_BRIDGE  |  1787 | 17870 |     9   (0)| 00:00:01 |        |      |        |
|   6 |   PX COORDINATOR                      |               |       |       |        |          |        |      |        |
|   7 |    PX SEND QC (ORDER)                     | :TQ10008              |     9 |  1341 |  4086   (1)| 00:00:01 |  Q1,08 | P->S | QC (ORDER) |
|   8 |     SORT GROUP BY                     |               |     9 |  1341 |  4086   (1)| 00:00:01 |  Q1,08 | PCWP |        |
|   9 |      PX RECEIVE                       |               |     9 |  1341 |  4086   (1)| 00:00:01 |  Q1,08 | PCWP |        |
|  10 |       PX SEND RANGE                   | :TQ10007              |     9 |  1341 |  4086   (1)| 00:00:01 |  Q1,07 | P->P | RANGE       |
|  11 |        HASH GROUP BY                      |               |     9 |  1341 |  4086   (1)| 00:00:01 |  Q1,07 | PCWP |        |
|* 12 |     HASH JOIN                     |               |   281 | 41869 |  4085   (1)| 00:00:01 |  Q1,07 | PCWP |        |
|  13 |      PX RECEIVE                   |               |   281 | 33158 |  4072   (1)| 00:00:01 |  Q1,07 | PCWP |        |
|  14 |       PX SEND HYBRID HASH                 | :TQ10005              |   281 | 33158 |  4072   (1)| 00:00:01 |  Q1,05 | P->P | HYBRID HASH|
|  15 |        STATISTICS COLLECTOR               |               |       |       |        |          |  Q1,05 | PCWC |        |
|* 16 |         HASH JOIN BUFFERED                |               |   281 | 33158 |  4072   (1)| 00:00:01 |  Q1,05 | PCWP |        |
|  17 |          VIEW                     | VW_GBC_29             |   281 | 30348 |  4063   (1)| 00:00:01 |  Q1,05 | PCWP |        |
|  18 |           HASH GROUP BY               |               |   281 | 79523 |  4041   (1)| 00:00:01 |  Q1,05 | PCWP |        |
|  19 |            PX RECEIVE                 |               |   281 | 79523 |  4041   (1)| 00:00:01 |  Q1,05 | PCWP |        |
|  20 |         PX SEND HASH                  | :TQ10003              |   281 | 79523 |  4041   (1)| 00:00:01 |  Q1,03 | P->P | HASH    |
|  21 |          HASH GROUP BY                |               |   281 | 79523 |  4041   (1)| 00:00:01 |  Q1,03 | PCWP |        |
|* 22 |           HASH JOIN               |               | 60853 |    16M|  4039   (1)| 00:00:01 |  Q1,03 | PCWP |        |
|  23 |            BUFFER SORT                |               |       |       |        |          |  Q1,03 | PCWC |        |
|  24 |             PX RECEIVE                |               |   318 |  1272 |     2   (0)| 00:00:01 |  Q1,03 | PCWP |        |
|  25 |              PX SEND BROADCAST            | :TQ10000              |   318 |  1272 |     2   (0)| 00:00:01 |        | S->P | BROADCAST  |
|  26 |               TABLE ACCESS FULL           | SYS_TEMP_0FD9D6605_275335 |   318 |  1272 |     2   (0)| 00:00:01 |        |      |        |
|* 27 |            HASH JOIN                  |               | 60853 |    16M|  4037   (1)| 00:00:01 |  Q1,03 | PCWP |        |
|  28 |             PX RECEIVE                |               |   160 |  4640 |     4   (0)| 00:00:01 |  Q1,03 | PCWP |        |
|  29 |              PX SEND BROADCAST            | :TQ10002              |   160 |  4640 |     4   (0)| 00:00:01 |  Q1,02 | S->P | BROADCAST  |
|  30 |               PX SELECTOR             |               |       |       |        |          |  Q1,02 | SCWC |        |
|  31 |                MERGE JOIN CARTESIAN       |               |   160 |  4640 |     4   (0)| 00:00:01 |  Q1,02 | SCWC |        |
|* 32 |             VIEW                  | index$_join$_006          |     1 |    19 |     2   (0)| 00:00:01 |  Q1,02 | SCWC |        |
|* 33 |              HASH JOIN            |               |       |       |        |          |  Q1,02 | SCWC |        |
|  34 |               BITMAP CONVERSION TO ROWIDS     |               |     1 |    19 |     1   (0)| 00:00:01 |  Q1,02 | SCWC |        |
|* 35 |                BITMAP INDEX SINGLE VALUE      | M_INDEX32             |       |       |        |          |  Q1,02 | SCWP |        |
|  36 |               BITMAP CONVERSION TO ROWIDS     |               |     1 |    19 |     1   (0)| 00:00:01 |  Q1,02 | SCWC |        |
|  37 |                BITMAP INDEX FULL SCAN     | INDEX4            |       |       |        |          |  Q1,02 | SCWP |        |
|  38 |             BUFFER SORT           |               |   161 |  1610 |     2   (0)| 00:00:01 |  Q1,02 | SCWC |        |
|  39 |              BITMAP CONVERSION TO ROWIDS      |               |   161 |  1610 |     2   (0)| 00:00:01 |  Q1,02 | SCWC |        |
|* 40 |               BITMAP INDEX FAST FULL SCAN     | M_INDEX28             |       |       |        |          |  Q1,02 | SCWP |        |
|* 41 |             VIEW                  | VW_ST_167D3604        | 61043 |    14M|  4033   (1)| 00:00:01 |  Q1,03 | PCWP |        |
|  42 |              NESTED LOOPS             |               | 61043 |  4768K|  4029   (1)| 00:00:01 |  Q1,03 | PCWP |        |
|  43 |               BUFFER SORT             |               |       |       |        |          |  Q1,03 | PCWC |        |
|  44 |                PX RECEIVE             |               |       |       |        |          |  Q1,03 | PCWP |        |
|  45 |             PX SEND HASH (BLOCK ADDRESS)      | :TQ10001              |       |       |        |          |        | S->P | HASH (BLOCK|
|  46 |              BITMAP CONVERSION TO ROWIDS      |               | 61042 |  1311K|   365   (1)| 00:00:01 |        |      |        |
|  47 |               BITMAP AND              |               |       |       |        |          |        |      |        |
|  48 |                BITMAP MERGE           |               |       |       |        |          |        |      |        |
|  49 |                 BITMAP KEY ITERATION      |               |       |       |        |          |        |      |        |
|* 50 |                  VIEW             | index$_join$_255          |     1 |    19 |     2   (0)| 00:00:01 |        |      |        |
|* 51 |                   HASH JOIN           |               |       |       |        |          |        |      |        |
|  52 |                    BITMAP CONVERSION TO ROWIDS|               |     1 |    19 |     1   (0)| 00:00:01 |        |      |        |
|* 53 |                 BITMAP INDEX SINGLE VALUE | M_INDEX32             |       |       |        |          |        |      |        |
|  54 |                    BITMAP CONVERSION TO ROWIDS|               |     1 |    19 |     1   (0)| 00:00:01 |        |      |        |
|  55 |                 BITMAP INDEX FULL SCAN    | INDEX4            |       |       |        |          |        |      |        |
|* 56 |                  BITMAP INDEX RANGE SCAN      | PERF_DISTANCEGRP          |       |       |        |          |        |      |        |
|  57 |                BITMAP MERGE           |               |       |       |        |          |        |      |        |
|  58 |                 BITMAP KEY ITERATION      |               |       |       |        |          |        |      |        |
|  59 |                  TABLE ACCESS FULL        | SYS_TEMP_0FD9D6605_275335 |   318 |  1272 |     2   (0)| 00:00:01 |        |      |        |
|* 60 |                  BITMAP INDEX RANGE SCAN      | PERF_DEST             |       |       |        |          |        |      |        |
|  61 |                BITMAP MERGE           |               |       |       |        |          |        |      |        |
|* 62 |                 BITMAP INDEX RANGE SCAN   | PERF_MONTH            |       |       |        |          |        |      |        |
|  63 |               TABLE ACCESS BY USER ROWID      | PERFORMANCE           |     1 |    58 |  3669   (1)| 00:00:01 |  Q1,03 | PCWP |        |
|  64 |          PX RECEIVE                   |               |  1787 | 17870 |     9   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|  65 |           PX SEND BROADCAST               | :TQ10004              |  1787 | 17870 |     9   (0)| 00:00:01 |  Q1,04 | S->P | BROADCAST  |
|  66 |            PX SELECTOR                |               |       |       |        |          |  Q1,04 | SCWC |        |
|  67 |         TABLE ACCESS FULL             | OBIEE_GEO_AIRPORT_BRIDGE  |  1787 | 17870 |     9   (0)| 00:00:01 |  Q1,04 | SCWP |        |
|  68 |      PX RECEIVE                   |               |  3221 | 99851 |    13   (0)| 00:00:01 |  Q1,07 | PCWP |        |
|  69 |       PX SEND HYBRID HASH                 | :TQ10006              |  3221 | 99851 |    13   (0)| 00:00:01 |  Q1,06 | S->P | HYBRID HASH|
|  70 |        PX SELECTOR                    |               |       |       |        |          |  Q1,06 | SCWC |        |
|  71 |         TABLE ACCESS FULL                 | OBIEE_COUNTY_HIER         |  3221 | 99851 |    13   (0)| 00:00:01 |  Q1,06 | SCWP |        |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("T233497"."STCTY_FIPS"="T233831"."SC_CODE")
   4 - filter("T233831"."R_NAME"='Northeast Region')
  12 - access("T233484"."STCTY_FIPS"="T233820"."SC_CODE")
  16 - access("T233484"."AIRPORT"="ITEM_1")
  22 - access("C0"="ITEM_1")
  27 - access("T233732"."CAL_YEAR"="ITEM_5" AND "T233732"."CAL_QTR"="ITEM_4" AND "T233732"."CAL_MONTH"="ITEM_3" AND "T233594"."CODE"="ITEM_2")
  32 - filter("T233594"."DESCRIPTION"='1000-1249 Miles')
  33 - access(ROWID=ROWID)
  35 - access("T233594"."DESCRIPTION"='1000-1249 Miles')
  40 - filter("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
  41 - filter("ITEM_3"<=12 AND "ITEM_3">=6)
  50 - filter("T233594"."DESCRIPTION"='1000-1249 Miles')
  51 - access(ROWID=ROWID)
  53 - access("T233594"."DESCRIPTION"='1000-1249 Miles')
  56 - access("T233937"."DISTANCEGROUP"="T233594"."CODE")
  60 - access("T233937"."DEST"="C0")
  62 - access("T233937"."MONTH">=6 AND "T233937"."MONTH"<=12)

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - Degree of Parallelism is 2 because of table property
   - star transformation used for this statement

105 rows selected.

Running the actual queries in this case gives me a wall-time of around 4 seconds for the in-memory version, and 14 seconds when in-memory query is disabled … but the response time isn’t anywhere near the initial run where we had data stored row-wise but with aggregate tables, so let’s finish off the testing by putting the aggregate tables in-memory too, and see if that makes a difference.
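Enabling the aggregate schema followed the same pattern as before – the snippet below is a sketch using a few of the BI_AIRLINES_AGG tables seen in the earlier NQQuery.log extract, with the real script covering every table listed in USER_TABLES for that schema:

-- Enable the aggregate fact and its summary dimension tables for immediate population
alter table BI_AIRLINES_AGG.FACT_AGG_OR_06M inmemory priority high;
alter table BI_AIRLINES_AGG.SA_31_Carr00039CEB inmemory priority high;
alter table BI_AIRLINES_AGG.SA_16_Dest00039D06 inmemory priority high;
alter table BI_AIRLINES_AGG.SA_Time_Mo00039CFB inmemory priority high;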

And of course, it absolutely flew:

chart4

So to conclude from my look at the Oracle Database 12c In-Memory option with OBIEE11g v406 SampleApp, I’d say the following based on my initial tests:

  • For BI-type reporting where you’re typically summarising lots of data, the in-memory option doesn’t remove the need for aggregate tables – you’ll still benefit significantly from having them, in my observation
  • Where the in-memory option does benefit you is when you’re querying the detail-level data – it helps with aggregation, but its main strength is fast filtering against subsets of columns
  • Some of the more complex SQL queries issued by OBIEE’s BI Server, for example when creating lots of subtotals and totals against a dataset, reduce the effectiveness of the in-memory option – you’ll get the biggest speed improvement, at least at the moment, with queries with simpler predicates and less complex joins

There might be more to the aggregation story in the end, though. Looking at the Oracle Database 12c In-Memory Option Technical White Paper, the in-memory option should in fact help with aggregation through a new optimiser transformation called “vector group by” – a transformation likened to a star transformation, which uses CPU-efficient algorithms and a multi-dimensional array created on the fly in the PGA, called an “in-memory accumulator”.

NewImage

In fact, what we’ve heard is that many of the old Oracle OLAP team have moved over to the in-memory option team and were responsible for this feature, so I’ll be taking a closer look at in-memory aggregation over the next few months. In my tests though, I didn’t see vector group by appear in any of the query execution plans, so I’m assuming either the conditions weren’t right for it or, like star transformations, there’s some combination of settings and query factors that need to be in place before it shows up (and queries presumably run that much faster).
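When I do revisit it, one thing I’ll try is nudging the optimizer with the vector transformation hint. The query below is an untested sketch along those lines, joining the PERFORMANCE and OBIEE_TIME_MTH_D tables seen earlier – whether the hint is honoured will depend on those same settings and query factors:

-- Ask the optimizer to consider the in-memory "vector group by" transformation
-- for a simple aggregation over the in-memory fact table
SELECT /*+ VECTOR_TRANSFORM */
       t.Cal_Year,
       sum(p.ARRDELAYMINUTES) as total_arr_delay_mins
FROM   BI_AIRLINES.PERFORMANCE      p,
       BI_AIRLINES.OBIEE_TIME_MTH_D t
WHERE  t.Cal_Month = p.MONTH
AND    t.Cal_Qtr   = p.QUARTER
AND    t.Cal_Year  = p.YEAR
GROUP  BY t.Cal_Year;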

For now though – that’s my first run-through of the 12c In-Memory Option in the context of a typical BI workload. Check back later in the year for more in-memory option postings, including hopefully something more on the in-memory aggregation feature.

Categories: BI & Warehousing

Data Warehouse for Big Data: Scale-Up vs. Scale-Out

Dylan Wan - Thu, 2014-01-02 15:33

Found a very good paper: http://research.microsoft.com/pubs/204499/a20-appuswamy.pdf


This paper discusses whether using Hadoop as the analytics infrastructure is the right approach.


It is hard to argue with the industry trend. However, Hadoop is not new any more. It is time for people to calm down and rethink the real benefits.

Categories: BI & Warehousing