
BI & Warehousing

Oracle 12.1.0.2 and Data Warehouses

Rittman Mead Consulting - Mon, 2014-09-01 09:21

If you follow Blogs and Tweets from the Oracle community you won’t have missed hearing about the recent release of the first patch-set for Oracle 12c. With this release there are some significant pieces of new functionality that will be of interest to Data Warehouse DBAs and architects. The headline feature that most Oracle followers will know of is the new in-memory option. In my opinion this is a game-changer for how we design reporting architectures; it gives us an effective way to build operational reporting over the reference data architecture described by Mark Rittman a few weeks ago. Of course, the database team here at Rittman Mead have been rolling up our sleeves and getting into in-memory technology for quite a while now, Mark even featured in the official launch presentation by Larry Ellison with the now famous “so easy it’s boring” quote. Last week Mark published the first of our Rittman Mead in-memory articles, with the promise of more in-memory articles to come including my article for the next edition of UKOUG’s “Oracle Scene”.

However, the in-memory option is not the only new feature that is going to be a benefit to us in the BI/DW world. One of the new features I am going to describe is Exadata only, but the first one I am going to mention is generally available in the 12.1.0.2 database.

Typically, data warehouse queries are different from those seen in the OLTP world – in DW we tend to access a large number of rows and probably aggregate things up to answer some business question. Often we are not using indexes; instead, scanning tables or table partitions is the norm. Usually, the data we need to aggregate is widely scattered across the table or partition. Data warehouse queries often look at records that share a set of common attributes; we look at the sales for the ‘ACME’ widget or the value of items shipped to Arizona. For us there can be great advantage if data we use together is stored together, and this is where Attribute Clustering can play a part.

Attribute Clustering is usually configured on the table at DDL time and in effect controls the ordering of data inserted by DIRECT PATH operations; Oracle does not enforce this ordering for conventional inserts. This may not be an issue in data warehouses, as bulk-batch operations typically use APPEND (that is, direct path) inserts or partition operations, but it may be more of an issue with some of the real-time, conventional path loading paradigms. In addition to direct path load operations, Attribute Clustering also takes effect when you do ALTER TABLE … MOVE type operations (this includes operations such as PARTITION SPLIT). On the surface, Attribute Clustering sounds little different to using an ORDER BY on an append insert and hoping that Oracle actually stores the data where you expect it to. However, Attribute Clustering gives us two other possibilities in how we can order the data in the cluster.

Firstly, we can cluster on columns from JOINED dimension tables. For example, in a SALES DW we may have a sales fact with a product key at the SKU level, but we often join to the product dimension and report at the Product Category level. In this case we can cluster our sales fact table so that each product category appears in the same cluster. Suppose we have just opened a chain of supermarkets with a wide but uninspiring range of brands and products (see the tiny piece of our product dimension table below).

[Image: sample rows from the product dimension table]

As you can see, our Product PK has no relationship at all to the type of product being sold. In our Kimball-style data warehouse we typically store the product key on the fact table and join to the product dimension to obtain all of the other product attributes and hierarchy members. This is essentially what we can do with join Attribute Clustering: in our example we can cluster our fact table on PRODUCT_CATEGORY so that all of the Laundry sales are physically close to each other in the fact table.

CREATE TABLE rm_sales (
  product_id      NUMBER NOT NULL,
  store_id        NUMBER NOT NULL,
  sales_date      DATE NOT NULL,
  loyalty_card_id NUMBER,
  quantity_sold   NUMBER(3) NOT NULL,
  value_sold      NUMBER(10,2) NOT NULL
)
CLUSTERING
  rm_sales JOIN products ON (rm_sales.product_id = products.product_pk)
BY LINEAR ORDER (sales_date, product_category, store_id);

Notice we are clustering on a join to the product dimension table’s “product_category” column. We are also clustering on sales_date; this is especially important in the case of partitioned fact tables so that the benefits of clustering align with the partitioning strategy. We are not restricted to just one join in our clustering either: if we wanted to, we could also cluster our sales by store region, e.g. so that the Colorado laundry product sales are located in the same area of the sales table. To use join Attribute Clustering we need to define the PK/FK relationships between fact and dimension; however, it is always good practice to have those in place anyway, as they help the CBO so much with query plan evaluation.
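
As a side note on that last point, the usual data warehouse pattern is to declare the dimension key and the fact foreign key as RELY constraints so the optimizer (and the clustering clause) can trust the relationship without the cost of enforcing it on every load. A minimal sketch, assuming the rm_sales and products tables from the example above (the constraint names are mine):

-- Primary key on the dimension, enforced as normal
ALTER TABLE products ADD CONSTRAINT products_pk PRIMARY KEY (product_pk);

-- Foreign key on the fact declared RELY DISABLE NOVALIDATE, a common DW pattern:
-- the optimizer can rely on the relationship without Oracle checking every row at load time
ALTER TABLE rm_sales ADD CONSTRAINT rm_sales_products_fk
  FOREIGN KEY (product_id) REFERENCES products (product_pk)
  RELY DISABLE NOVALIDATE;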

Secondly, notice the BY LINEAR ORDER clause in the table DDL. Of the two ordering options, Linear Order is the most basic form of clustering: in this case we have our data structured so that all the items for a sales day are clustered together, within that cluster we order by product category, and those categories are in turn ordered by store_id. The other way we can cluster is BY INTERLEAVED ORDER; here, Oracle maps a combination of dimensional values to a single cluster value using a z-order curve fitting approach. This sounds complex but it ensures that items that are frequently queried together are co-located in the disk blocks in the storage.

Interleaved ordering is probably the best choice for data warehousing as it aligns well with how we access data in our queries. Although we could include all of the dimension keys in our ordering list, it is going to be more beneficial to include just a subset of dimensions; typically for retail I’d go with DATE (or something that correlates to the time-based partition key of the fact table), the product and the store. Of course we can again join to the dimension tables and cluster at higher hierarchy levels such as product category and store region. The Oracle 12c Data Warehousing Guide gives some good advice, but you can’t go far wrong if you think about clustering items together that will be queried together.
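
As a rough sketch of what an interleaved version might look like, here is the earlier clustering definition rewritten to interleave sales date, product category and store region; the stores dimension table and its store_region column are invented for illustration, and the exact column-group syntax is worth checking against the documentation for your release:

ALTER TABLE rm_sales
  ADD CLUSTERING
  rm_sales JOIN products ON (rm_sales.product_id = products.product_pk)
           JOIN stores   ON (rm_sales.store_id   = stores.store_pk)
  BY INTERLEAVED ORDER ((sales_date), (product_category), (store_region));

As with linear ordering, existing rows are only reorganised by a subsequent direct path load, partition operation or ALTER TABLE … MOVE.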

Clustering data can give us some advantages in performance; better data compression and improved index range scans spring to mind. But to get the most benefit we should also look at another new feature, zone maps. Unlike Attribute Clustering, Zone Maps are Engineered Systems only. In a way they are similar to the storage indexes already found on Exadata, but they have some additional advantages; they are also somewhat different from the zone maps encountered in other DB vendors’ products such as Netezza.

In Exadata, a storage index can provide the maximum and minimum values encountered for a column in a storage cell. I say “can” as there is no guarantee that the range for a given column is held in the storage index. Zone Maps, on the other hand, will always provide maxima and minima for all of the columns specified at zone map creation. The zone map is organised in terms of contiguous ranges of database blocks and is materialized, so it is physically persisted in the database and thus survives database restarts. Like materialized views, materialized zone maps can become stale and need to be maintained.

We can define a zone map on one or more table columns and, just like Attribute Clustering, we may also create zone maps on table joins. As a table can only have one zone map, it is important to include all of the columns you wish to track. Zone Maps are designed to work well with Attribute Clustering; in fact it is just a simple DDL statement to add a zone map to an attribute-clustered table so that the zone map tracks the same attributes as the clustering. This is where we get the major performance boost from Attribute Clustering: instead of scanning the whole table, the zone map tells us which ranges of database blocks contain data that matches our query predicates.
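
As a hedged sketch of the DDL involved (the zone map name is mine, and the clustering clause is the one from the earlier example), you can either create the clustering and the zone map together – shown here as an ALTER TABLE, so imagine rm_sales had been created without its clustering clause – or create a standalone zone map on the columns you usually filter by:

-- Clustering defined together with a zone map that tracks the same columns
ALTER TABLE rm_sales
  ADD CLUSTERING
  rm_sales JOIN products ON (rm_sales.product_id = products.product_pk)
  BY LINEAR ORDER (sales_date, product_category, store_id)
  WITH MATERIALIZED ZONEMAP;

-- Or a standalone zone map on chosen fact table columns
CREATE MATERIALIZED ZONEMAP rm_sales_zmap
  ON rm_sales (sales_date, store_id);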

Zone Maps with Attribute Clustering gives us another powerful tool to boost DW performance on Exadata – we can do star queries without resorting to bitmap indexes and we minimise IO when scanning fact tables as we only need look where we know the data to be. Exciting times!

Categories: BI & Warehousing

Taking a Look at the Oracle Database 12c In-Memory Option

Rittman Mead Consulting - Fri, 2014-08-29 13:43

The In-Memory Option for Oracle Database 12c became available a few weeks ago with the 12.1.0.2 database patchset, adding column-store and in-memory capabilities to the Oracle Database. Unlike pure in-memory databases such as Oracle TimesTen, the in-memory option adds an in-memory column-store feature to the regular row-based storage in the Oracle database, creating in-memory copies of selected row-store tables in a compressed column-based storage format, with the whole process being automatic and enabled by issuing just a couple of commands – to the point where my feedback at the end of beta testing was that it was “almost boring” – said slightly tongue-in-cheek…


But of course adding in-memory capabilities to the Oracle database is anything but boring – whilst TimesTen on Exalytics has given us the ability to store aggregate tables and “hot data” tables in a dedicated in-memory database co-located with the BI mid-tier, TimesTen is another set of skills to learn and another product to manage. Moreover, you’ve got to somehow get the data you want to aggregate or otherwise cache out of your data warehouse and into TimesTen, and then you’ve got the even more difficult task of keeping that TimesTen dataset in-sync with your main data warehouse data. Most importantly though, even with the 2TB and 4TB versions of Exalytics many data warehouses won’t fit entirely into memory, so you’re going to end-up with some of your data in TimesTen and some in your data warehouse, and with only the TimesTen dataset benefiting from in-memory speeds.

So what if you could enable the in-memory option on your data warehouse, and then just run it all in-memory there? To see how this worked, I thought it’d be interesting to take the flight delays dataset that comes with the latest OBIEE 11g v406 SampleApp, and copy that across to an Oracle 12.1.0.2.0 database to test it out with the in-memory option. This dataset comes with a set of dashboard pages and analyses and contains around 6m rows of data, and in SampleApp is accelerated by an aggregate schema, also stored in Oracle, that generally reduces query times down to just a few seconds.


So what I thought would be interesting to check would be first, whether enabling the in-memory option sped-up the dashboards, and second, whether having the in-memory option removed the need for the aggregate tables altogether. I was also interested to see whether putting the aggregate tables into memory improved their access time significantly, as anecdotally I’d heard that there wasn’t much clock-time difference between accessing aggregates in a TimesTen database vs. just creating them in the same data warehouse database as the source data. To set this all up I created the new 12.1.0.2.0 database, exported the BI_AIRLINES and BI_AIRLINES_AGG schemas out of the 12c database that comes with SampleApp v406, imported them into the same schema names in the new 12.1.0.2.0 instance on the new VM (rather than on the SampleApp VM), and then amended the connection pool details in the SampleApp RPD to point to this new, external Oracle 12.1.0.2.0 database instance. The 12.1.0.2.0 database at this point had the following memory allocation:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
 
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
 
Total System Global Area 6325010432 bytes
Fixed Size               2938448 bytes
Variable Size            1207962032 bytes
Database Buffers         5100273664 bytes
Redo Buffers             13836288 bytes
Database mounted.
Database opened.

So that I didn’t end up comparing the larger database instance on my 12.1.0.2.0 VM with the smaller one the airlines data came from on the SampleApp VM, I created my baseline response time measurements on this new VM with none of the in-memory features enabled, running some of the dashboard pages and clicking on some of the prompt values – and the response time wasn’t bad, with just the odd analysis taking an excessive time to return. Mostly though, the results for each dashboard came back straight away, which is what you’d expect when summary tables have been deployed to speed-up reporting.


Looking at the SQL being generated by the BI Server, you can quickly see why this is the case; the BI Server is using aggregate navigation and actually querying the BI_AIRLINES_AGG schema to return the aggregated results for the dashboard, as you can see from one of the SQL statements retrieved from the NQQuery.log file below:

select sum(T255906.Dep_Delay_00039B26) as c1,
     sum(T255906.Arr_Delay_00039B22) as c2,
     sum(T255906.Z_of_Fligh00039B28) as c3,
     substr(T255216.Carrier00039BA9 , 1, 25) as c4,
     T255216.Carrier00039BA9 as c5,
     T255216.Carrier_Co00039BAA as c6
from 
     BI_AIRLINES_AGG.SA_16_Dest00039D06 T255357,
     BI_AIRLINES_AGG.SA_Time_Mo00039CFB T255737,
     BI_AIRLINES_AGG.SA_31_Carr00039CEB T255216,
     BI_AIRLINES_AGG.FACT_AGG_OR_06M T255906
where  ( T255216.Carrier_Co00039BAA = T255906.Carrier_Co00039BAA and T255357.Dest_Airpo00039C2A = T255906.Dest_Airpo00039C2A and T255737.Dep_Month00039C07 = T255906.Dep_Month00039C07 and substr(T255216.Carrier00039BA9 , 1, 25) = 'SunFlower Airlines' and (T255357.Dest_Regio00039C31 in ('Midwest Region', 'Northeast Region', 'South Region', 'West Region')) and T255737.Month_of_Y00039C0F between 3 and 12 ) 
group by T255216.Carrier00039BA9, T255216.Carrier_Co00039BAA
order by c5, c6, c4

OBIEE will always use these aggregate tables if they’re available in the repository, so to enable me to test the reports with these aggregates disabled I borrowed the technique Robin introduced in his blog on testing aggregate navigation, and added a request variable prompt to the dashboard page that allows me to pass an INACTIVE_SCHEMAS value to the logical SQL queries issued for the dashboard analyses, and therefore tell the BI Server to ignore the aggregate schema and just use the detail-level BI_AIRLINES schema instead.
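
For reference, the request variable arrives at the BI Server as a SET VARIABLE prefix on the logical SQL. A hedged sketch of what that looks like is below; the subject area and column names are illustrative rather than taken from the actual SampleApp analyses, and the exact quoting of the schema name is worth checking against your own NQQuery.log:

SET VARIABLE INACTIVE_SCHEMAS='"BI_AIRLINES_AGG"';
SELECT
   "Airlines Delay"."Carrier"."Carrier" s_1,
   "Airlines Delay"."Measures"."Arr Delay Minutes" s_2
FROM "Airlines Delay";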


I then tested the dashboard again, this time with the BI_AIRLINES_AGG schema disabled; every dashboard took a while to display properly, with every analysis showing the “spinning clock” for twenty or thirty seconds at a time. Comparing the response times to the ones from a moment ago when aggregates were enabled, you can see the difference.

[Image: dashboard response times with the aggregate schema disabled, compared to the earlier aggregate-enabled runs]

So let’s set-up the in-memory option and see how it affects these two scenarios. The in-memory option for Oracle Database 12c 12.1.0.2.0 is installed by default as part of the core database, but doesn’t start working until you set a value for the INMEMORY_SIZE initialisation parameter – as you can see from the SQL*Plus output below, it’s currently set to zero, effectively disabling the feature:

SQL> conn / as sysdba
Connected.

SQL> show parameter INMEMORY

NAME                     TYPE    VALUE
------------------------------------ ----------- ------------------------------
inmemory_clause_default          string
inmemory_force               string  DEFAULT
inmemory_max_populate_servers        integer     0
inmemory_query               string  ENABLE
inmemory_size                big integer 0
inmemory_trickle_repopulate_servers_ integer     1
percent
optimizer_inmemory_aware         boolean     TRUE

The memory Oracle allocates to the in-memory option, via this initialisation parameter, comes out of the SGA, so before setting the INMEMORY_SIZE parameter it’d be a good idea to increase the SGA_TARGET value to accommodate the 1G I’m going to assign to INMEMORY_SIZE. I do both from the SQL*Plus command-prompt and then bounce the database to bring the new values into use:

SQL> alter system set inmemory_size = 1G scope=spfile;

System altered.

SQL> show parameter sga_target

NAME                     TYPE    VALUE
------------------------------------ ----------- ------------------------------
sga_target               big integer 6032M
SQL> alter system set sga_target = 7032M scope=spfile;

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 7381975040 bytes
Fixed Size          2941480 bytes
Variable Size        1207963096 bytes
Database Buffers     5083496448 bytes
Redo Buffers           13832192 bytes
In-Memory Area       1073741824 bytes
Database mounted.
Database opened.

Now we’re at the point where we can enable the tables for in-memory access. Tables to be copied into memory can either be created like that at the start, or you can ALTER TABLE them afterwards and specify that they go into memory (or you can configure the in-memory settings at the tablespace level, or even specify particular columns or partitions to go into memory). The underlying data still gets stored row-wise on disk, but enabling a table for in-memory access tells the Oracle database to create column-store in-memory representations of the table and direct suitable queries to those copies, all the time in the background keeping the copy in-sync with the row-store base data.
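
As a quick hedged sketch of those other granularities (the tablespace, partition and column names below are invented for illustration, and are not from the SampleApp schema), the same INMEMORY clause can be applied more selectively:

-- Make INMEMORY the default attribute for new tables in a tablespace
ALTER TABLESPACE dw_data DEFAULT INMEMORY;

-- Put a single partition of a table into the column store
ALTER TABLE sales MODIFY PARTITION sales_2014 INMEMORY;

-- Enable a table but exclude columns you never filter or aggregate on
ALTER TABLE sales INMEMORY NO INMEMORY (order_comments, tracking_ref);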

I’ll start by enabling all of the BI_AIRLINES schema for in-memory access, as the biggest pay-off would be if it meant we didn’t need to maintain the aggregate tables. After running a SELECT table_name FROM user_tables to list out the table names, I then run a script to enable all the tables for in-memory access, with the in-memory copies being populated immediately:

alter table AIRCRAFT_GROUP inmemory priority high;
alter table AIRCRAFT_TYPE inmemory priority high;
alter table AIRLINES_USER_DATA inmemory priority high;
alter table AIRLINE_ID inmemory priority high;
alter table CANCELLATION inmemory priority high;
alter table CARRIER_GROUP_NEW inmemory priority high;
alter table CARRIER_REGION inmemory priority high;
alter table DEPARBLK inmemory priority high;
alter table DISTANCE_GROUP_250 inmemory priority high;
alter table DOMESTIC_SEGMENT inmemory priority high;
alter table OBIEE_COUNTY_HIER inmemory priority high;
alter table OBIEE_GEO_AIRPORT_BRIDGE inmemory priority high;
alter table OBIEE_GEO_ORIG inmemory priority high;
alter table OBIEE_ROUTE inmemory priority high;
alter table OBIEE_TIME_DAY_D inmemory priority high;
alter table OBIEE_TIME_MTH_D inmemory priority high;
alter table ONTIME_DELAY_GROUPS inmemory priority high;
alter table PERFORMANCE inmemory priority high;
alter table PERFORMANCE_ENDECA_MV inmemory priority high;
alter table ROUTES_FOR_LINKS inmemory priority high;
alter table SCHEDULES inmemory priority high;
alter table SERVICE_CLASS inmemory priority high;
alter table UNIQUE_CARRIERS inmemory priority high;

Looking at USER_TABLES from this schema, I can see all of the tables I selected now marked for in-memory access, for immediate loading:

SQL> @display_table_inmem_details.sql
SQL> select table_name
  2  ,    inmemory
  3  ,    inmemory_priority
  4  from   user_tables
  5  /
 
TABLE_NAME           INMEMORY INMEMORY
-------------------- -------- --------
AIRCRAFT_GROUP       ENABLED  HIGH
UNIQUE_CARRIERS      ENABLED  HIGH
SERVICE_CLASS        ENABLED  HIGH
SCHEDULES            ENABLED  HIGH
ROUTES_FOR_LINKS     ENABLED  HIGH
PERFORMANCE          ENABLED  HIGH
ONTIME_DELAY_GROUPS  ENABLED  HIGH
OBIEE_TIME_MTH_D     ENABLED  HIGH
OBIEE_TIME_DAY_D     ENABLED  HIGH
OBIEE_ROUTE          ENABLED  HIGH
OBIEE_GEO_ORIG       ENABLED  HIGH
 
TABLE_NAME    INMEMORY INMEMORY
-------------------- -------- --------
OBIEE_GEO_AIRPORT_BR ENABLED  HIGH
IDGE
 
OBIEE_COUNTY_HIER    ENABLED  HIGH
DOMESTIC_SEGMENT     ENABLED  HIGH
DISTANCE_GROUP_250   ENABLED  HIGH
DEPARBLK             ENABLED  HIGH
CARRIER_REGION       ENABLED  HIGH
CARRIER_GROUP_NEW    ENABLED  HIGH
CANCELLATION         ENABLED  HIGH
AIRLINE_ID           ENABLED  HIGH
 
TABLE_NAME           INMEMORY INMEMORY
-------------------- -------- --------
AIRLINES_USER_DATA   ENABLED  HIGH
AIRLINES_PBLOB$      DISABLED
AIRLINES_PART$       DISABLED
AIRLINES_NODE_TZ$    DISABLED
AIRLINES_NODE$       DISABLED
AIRLINES_LINK_TZ$    DISABLED
AIRLINES_LINK_SCH$   DISABLED
AIRLINES_LINK$       DISABLED
AIRLINES_AIRPORT_TZ$ DISABLED
AIRCRAFT_TYPE        ENABLED  HIGH
 
30 rows selected.

And I can track the progress of the tables being copied into memory using the V$IM_SEGMENTS v$ view, like this:

SQL> @display_im_segments.sql
SQL> set echo on
SQL> set pages 200
SQL> col owner for a20
SQL> col name for a20
SQL> col status for a10
SQL> select v.owner
  2  ,      v.segment_name name
  3  ,      v.populate_status status
  4  from   v$im_segments v;

OWNER            NAME         STATUS
-------------------- -------------------- ----------
BI_AIRLINES      OBIEE_COUNTY_HIER    COMPLETED
BI_AIRLINES      PERFORMANCE      STARTED
BI_AIRLINES      UNIQUE_CARRIERS      COMPLETED
BI_AIRLINES      AIRLINES_LINK_TZ$    COMPLETED
BI_AIRLINES      OBIEE_TIME_MTH_D     COMPLETED
BI_AIRLINES      AIRLINES_LINK_SCH$   COMPLETED
BI_AIRLINES      OBIEE_ROUTE      COMPLETED
BI_AIRLINES      DOMESTIC_SEGMENT     COMPLETED
BI_AIRLINES      AIRLINES_LINK$   COMPLETED
BI_AIRLINES      AIRLINE_ID       COMPLETED
BI_AIRLINES      OBIEE_GEO_ORIG   COMPLETED
BI_AIRLINES      AIRLINES_NODE$   COMPLETED
BI_AIRLINES      OBIEE_GEO_AIRPORT_BR COMPLETED
             IDGE

BI_AIRLINES      AIRLINES_NODE_TZ$    COMPLETED
BI_AIRLINES      OBIEE_TIME_DAY_D     COMPLETED

Note that most of the tables went into memory immediately, but one (PERFORMANCE) is taking a while because it’s so big. Also note that not all tables are listed in the v$ view yet, as the database hasn’t got around to adding them into memory yet, or it might choose not to populate them if it feels the memory could be used better elsewhere. You can alter the priority of these in-memory copy processes if you want, and decide whether the copying happens immediately, or when the table is first scanned (accessed).
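
A related check, sketched here against the V$IM_SEGMENTS columns I’d expect to be available in 12.1.0.2, is to compare each segment’s on-disk size with its compressed in-memory footprint and see how much data is still waiting to be populated:

select segment_name,
       round(bytes/1024/1024)         as disk_mb,
       round(inmemory_size/1024/1024) as inmem_mb,
       bytes_not_populated,
       populate_status
from   v$im_segments
order  by bytes desc;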

Running the dashboards again, with the request variable prompt set to disallow the aggregate schema, gave me the response times below. The in-memory queries were much faster than the row-based, non in-memory ones, but they weren’t down to the response time of the dashboards right at the start, where all data is stored row-wise but we’ve got aggregate tables to speed up the queries (note I’ve reordered the bars so the non in-memory queries with no aggregate tables are on the left of each series, as the slowest of all approaches).

[Image: dashboard response times with in-memory enabled but aggregates disabled, compared to the earlier runs]

Taking a look at one of the physical SQL queries for a cross-tab (hierarchical columns) analysis, you can see that in-memory table access is happening:

WITH 
SAWITH0 AS (select sum(T233937.ACTUALELAPSEDTIME) as c1,
     sum(T233937.ARRDELAYMINUTES) as c2,
     concat(concat(T233484.AIRPORT, ' - '), substr(T233484.TR_AIRPORT_NAME , instr(T233484.TR_AIRPORT_NAME , ': ') + 2 , 50)) as c3,
     T233820.D_NAME as c4,
     T233820.R_NAME as c5
from 
     BI_AIRLINES.OBIEE_GEO_AIRPORT_BRIDGE T233484 /* 10 GEO_AIRPORT_ORIGIN */ ,
     BI_AIRLINES.OBIEE_COUNTY_HIER T233820 /* 11 COUNTY_HIER_ORIGIN */ ,
     BI_AIRLINES.OBIEE_TIME_MTH_D T233732 /* 41 TIME MONTH */ ,
     BI_AIRLINES.PERFORMANCE T233937 /* 00 PERFORMANCE FULL */ 
where  ( T233484.AIRPORT = T233937.ORIGIN and T233484.STCTY_FIPS = T233820.SC_CODE and T233732.Cal_Month = T233937.MONTH and T233732.Cal_Qtr = T233937.QUARTER and T233732.Cal_Year = T233937.YEAR and T233732.Cal_Month between 6 and 12 and T233937.MONTH between 6 and 12 ) 
group by T233820.D_NAME, T233820.R_NAME, concat(concat(T233484.AIRPORT, ' - '), substr(T233484.TR_AIRPORT_NAME , instr(T233484.TR_AIRPORT_NAME , ': ') + 2 , 50))),
SAWITH1 AS (select sum(T233609.PASSENGERS) as c1,
     T233820.R_NAME as c2,
     T233820.D_NAME as c3,
     concat(concat(T233484.AIRPORT, ' - '), substr(T233484.TR_AIRPORT_NAME , instr(T233484.TR_AIRPORT_NAME , ': ') + 2 , 50)) as c4,
     sum(T233609.PASSENGERS_MILES) as c5
from 
     BI_AIRLINES.OBIEE_GEO_AIRPORT_BRIDGE T233484 /* 10 GEO_AIRPORT_ORIGIN */ ,
     BI_AIRLINES.OBIEE_COUNTY_HIER T233820 /* 11 COUNTY_HIER_ORIGIN */ ,
     BI_AIRLINES.OBIEE_TIME_MTH_D T233732 /* 41 TIME MONTH */ ,
     BI_AIRLINES.DOMESTIC_SEGMENT T233609 /* 01 DOMESTIC Segment */ 
where  ( T233484.AIRPORT = T233609.ORIGIN and T233484.STCTY_FIPS = T233820.SC_CODE and T233609.MONTH = T233732.Cal_Month and T233609.QUARTER = T233732.Cal_Qtr and T233609.YEAR = T233732.Cal_Year and T233609.MONTH between 6 and 12 and T233732.Cal_Month between 6 and 12 ) 
group by T233820.D_NAME, T233820.R_NAME, concat(concat(T233484.AIRPORT, ' - '), substr(T233484.TR_AIRPORT_NAME , instr(T233484.TR_AIRPORT_NAME , ': ') + 2 , 50)))
select 2 as c1,
     case  when D1.c3 is not null then D1.c3 when D2.c4 is not null then D2.c4 end  as c2,
     case  when D1.c4 is not null then D1.c4 when D2.c3 is not null then D2.c3 end  as c3,
     case  when D1.c5 is not null then D1.c5 when D2.c2 is not null then D2.c2 end  as c4,
     'All Orig Airports' as c5,
     1 as c6,
     case  when D1.c4 is not null then D1.c4 when D2.c3 is not null then D2.c3 end  as c7,
     case  when D1.c5 is not null then D1.c5 when D2.c2 is not null then D2.c2 end  as c8,
     cast(D1.c2 as  DOUBLE PRECISION  ) / nullif( D1.c1, 0) * 100 as c9,
     D2.c5 as c10,
     D2.c1 as c14
from 
     SAWITH0 D1 full outer join SAWITH1 D2 On D1.c3 = D2.c4 and  SYS_OP_MAP_NONNULL(D1.c4) = SYS_OP_MAP_NONNULL(D2.c3) 
order by c4, c3, c2
SQL> @complex_query_explain.sql
 
Explained.

SQL> set lines 200
SQL> set pages 0
SQL> select * from table(dbms_xplan.display);
Plan hash value: 3097908901

----------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name             | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |              |  2673K|  1392M|   |   112K  (1)| 00:00:05 |    |      |        |
|   1 |  PX COORDINATOR                    |              |   |   |   |        |      |    |      |        |
|   2 |   PX SEND QC (ORDER)                   | :TQ10006         |  2673K|  1392M|   |   112K  (1)| 00:00:05 |  Q1,06 | P->S | QC (ORDER) |
|   3 |    SORT ORDER BY                   |              |  2673K|  1392M|  1492M|   112K  (1)| 00:00:05 |  Q1,06 | PCWP |        |
|   4 |     PX RECEIVE                     |              |  2673K|  1392M|   | 26819   (1)| 00:00:02 |  Q1,06 | PCWP |        |
|   5 |      PX SEND RANGE                 | :TQ10005         |  2673K|  1392M|   | 26819   (1)| 00:00:02 |  Q1,05 | P->P | RANGE      |
|   6 |       VIEW                     | VW_FOJ_0         |  2673K|  1392M|   | 26819   (1)| 00:00:02 |  Q1,05 | PCWP |        |
|*  7 |        HASH JOIN FULL OUTER BUFFERED           |              |  2673K|  1392M|   | 26819   (1)| 00:00:02 |  Q1,05 | PCWP |        |
|   8 |     PX RECEIVE                 |              |   136 | 37128 |   |   144   (7)| 00:00:01 |  Q1,05 | PCWP |        |
|   9 |      PX SEND HASH                  | :TQ10003         |   136 | 37128 |   |   144   (7)| 00:00:01 |  Q1,03 | S->P | HASH       |
|  10 |       PX SELECTOR                  |              |   |   |   |        |      |  Q1,03 | SCWC |        |
|  11 |        VIEW                    |              |   136 | 37128 |   |   144   (7)| 00:00:01 |  Q1,03 | SCWC |        |
|  12 |         HASH GROUP BY              |              |   136 | 17408 |   |   144   (7)| 00:00:01 |  Q1,03 | SCWC |        |
|* 13 |          HASH JOIN                 |              |   136 | 17408 |   |   143   (6)| 00:00:01 |  Q1,03 | SCWC |        |
|  14 |           JOIN FILTER CREATE           | :BF0000          |   136 | 11288 |   |   142   (6)| 00:00:01 |  Q1,03 | PCWP |        |
|* 15 |            HASH JOIN               |              |   136 | 11288 |   |   142   (6)| 00:00:01 |  Q1,03 | SCWC |        |
|  16 |         JOIN FILTER CREATE         | :BF0001          |   136 |  4896 |   |   142   (6)| 00:00:01 |  Q1,03 | PCWP |        |
|* 17 |          HASH JOIN             |              |   136 |  4896 |   |   142   (6)| 00:00:01 |  Q1,03 | SCWC |        |
|* 18 |           TABLE ACCESS INMEMORY FULL   | OBIEE_TIME_MTH_D     |   161 |  1610 |   | 1   (0)| 00:00:01 |  Q1,03 | SCWP |        |
|* 19 |           TABLE ACCESS INMEMORY FULL   | DOMESTIC_SEGMENT     |   771 | 20046 |   |   141   (6)| 00:00:01 |  Q1,03 | SCWP |        |
|  20 |         JOIN FILTER USE            | :BF0001          |  1787 | 83989 |   | 1   (0)| 00:00:01 |  Q1,03 | PCWP |        |
|* 21 |          TABLE ACCESS INMEMORY FULL    | OBIEE_GEO_AIRPORT_BRIDGE |  1787 | 83989 |   | 1   (0)| 00:00:01 |  Q1,03 | SCWP |        |
|  22 |           JOIN FILTER USE              | :BF0000          |  3221 |   141K|   | 1   (0)| 00:00:01 |  Q1,03 | PCWP |        |
|* 23 |            TABLE ACCESS INMEMORY FULL      | OBIEE_COUNTY_HIER    |  3221 |   141K|   | 1   (0)| 00:00:01 |  Q1,03 | SCWP |        |
|  24 |     PX RECEIVE                 |              |  2255K|   587M|   | 26673   (1)| 00:00:02 |  Q1,05 | PCWP |        |
|  25 |      PX SEND HASH                  | :TQ10004         |  2255K|   587M|   | 26673   (1)| 00:00:02 |  Q1,04 | P->P | HASH       |
|  26 |       VIEW                     |              |  2255K|   587M|   | 26673   (1)| 00:00:02 |  Q1,04 | PCWP |        |
|  27 |        HASH GROUP BY               |              |  2255K|   264M|   489M| 26673   (1)| 00:00:02 |  Q1,04 | PCWP |        |
|  28 |         PX RECEIVE                 |              |  2255K|   264M|   | 26673   (1)| 00:00:02 |  Q1,04 | PCWP |        |
|  29 |          PX SEND HASH              | :TQ10002         |  2255K|   264M|   | 26673   (1)| 00:00:02 |  Q1,02 | P->P | HASH       |
|  30 |           HASH GROUP BY            |              |  2255K|   264M|   489M| 26673   (1)| 00:00:02 |  Q1,02 | PCWP |        |
|* 31 |            HASH JOIN               |              |  3761K|   441M|   |   841   (9)| 00:00:01 |  Q1,02 | PCWP |        |
|  32 |         PX RECEIVE             |              |   161 |  1610 |   | 1   (0)| 00:00:01 |  Q1,02 | PCWP |        |
|  33 |          PX SEND BROADCAST         | :TQ10000         |   161 |  1610 |   | 1   (0)| 00:00:01 |  Q1,00 | S->P | BROADCAST  |
|  34 |           PX SELECTOR              |              |   |   |   |        |      |  Q1,00 | SCWC |        |
|* 35 |            TABLE ACCESS INMEMORY FULL  | OBIEE_TIME_MTH_D     |   161 |  1610 |   | 1   (0)| 00:00:01 |  Q1,00 | SCWP |        |
|* 36 |         HASH JOIN              |              |  3773K|   406M|   |   838   (9)| 00:00:01 |  Q1,02 | PCWP |        |
|  37 |          PX RECEIVE            |              |  1787 |   160K|   | 2   (0)| 00:00:01 |  Q1,02 | PCWP |        |
|  38 |           PX SEND BROADCAST        | :TQ10001         |  1787 |   160K|   | 2   (0)| 00:00:01 |  Q1,01 | S->P | BROADCAST  |
|  39 |            PX SELECTOR             |              |   |   |   |        |      |  Q1,01 | SCWC |        |
|* 40 |             HASH JOIN              |              |  1787 |   160K|   | 2   (0)| 00:00:01 |  Q1,01 | SCWC |        |
|  41 |              TABLE ACCESS INMEMORY FULL| OBIEE_GEO_AIRPORT_BRIDGE |  1787 | 83989 |   | 1   (0)| 00:00:01 |  Q1,01 | SCWP |        |
|  42 |              TABLE ACCESS INMEMORY FULL| OBIEE_COUNTY_HIER    |  3221 |   141K|   | 1   (0)| 00:00:01 |  Q1,01 | SCWP |        |
|  43 |          PX BLOCK ITERATOR         |              |  3773K|    75M|   |   834   (9)| 00:00:01 |  Q1,02 | PCWC |        |
|* 44 |           TABLE ACCESS INMEMORY FULL   | PERFORMANCE          |  3773K|    75M|   |   834   (9)| 00:00:01 |  Q1,02 | PCWP |        |
----------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("D1"."C3"="D2"."C4" AND SYS_OP_MAP_NONNULL("D1"."C4")=SYS_OP_MAP_NONNULL("D2"."C3"))
  13 - access("T233484"."STCTY_FIPS"="T233820"."SC_CODE")
  15 - access("T233484"."AIRPORT"="T233609"."ORIGIN")
  17 - access("T233732"."CAL_MONTH"=TO_NUMBER("T233609"."MONTH") AND "T233732"."CAL_QTR"=TO_NUMBER("T233609"."QUARTER") AND
          "T233732"."CAL_YEAR"=TO_NUMBER("T233609"."YEAR"))
  18 - inmemory("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
       filter("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
  19 - inmemory(TO_NUMBER("T233609"."MONTH")>=6 AND TO_NUMBER("T233609"."MONTH")<=12)
       filter(TO_NUMBER("T233609"."MONTH")>=6 AND TO_NUMBER("T233609"."MONTH")<=12)
  21 - inmemory(SYS_OP_BLOOM_FILTER(:BF0001,"T233484"."AIRPORT"))
       filter(SYS_OP_BLOOM_FILTER(:BF0001,"T233484"."AIRPORT"))
  23 - inmemory(SYS_OP_BLOOM_FILTER(:BF0000,"T233820"."SC_CODE"))
       filter(SYS_OP_BLOOM_FILTER(:BF0000,"T233820"."SC_CODE"))
  31 - access("T233732"."CAL_MONTH"="T233937"."MONTH" AND "T233732"."CAL_QTR"="T233937"."QUARTER" AND "T233732"."CAL_YEAR"="T233937"."YEAR")
  35 - inmemory("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
       filter("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
  36 - access("T233484"."AIRPORT"="T233937"."ORIGIN")
  40 - access("T233484"."STCTY_FIPS"="T233820"."SC_CODE")
  44 - inmemory("T233937"."MONTH">=6 AND "T233937"."MONTH"<=12)
       filter("T233937"."MONTH">=6 AND "T233937"."MONTH"<=12)

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - Degree of Parallelism is 2 because of table property

80 rows selected.

Indeed, looking at the queries the BI Server is sending to the database, it’s not too surprising that the in-memory difference in this case wasn’t more dramatic. According to the docs (emphasis mine):

Storing a database object in the IM column store can improve performance significantly for the following types of operations performed on the database object:

  • A query that scans a large number of rows and applies filters that use operators such as the following: =, <, >, and IN
  • A query that selects a small number of columns from a table or materialized view with a large number of columns, such as a query that selects five columns from a table with 100 columns
  • A query that joins a small table to a large table
  • A query that aggregates data

The IM column store does not improve performance for the following types of operations:

  • Queries with complex predicates
  • Queries that select a large number of columns
  • Queries that return a large number of rows
  • Queries with multiple large table joins

and our query certainly has complex predicates, returns a fair few rows, and has large tables and lots of joins. Taking a simpler query, more like one you’d write yourself when querying a data warehouse, you can see the in-memory table access being used again, but with a much simpler and cheaper explain plan:

SQL> @inmem_explain.sql
SQL> set echo on
SQL> explain plan for
  2  select /*+ INMEMORY */ sum(T233937.ACTUALELAPSEDTIME) as c1,
  3   sum(T233937.WEATHERDELAY) as c2,
  4   sum(T233937.SECURITYDELAY) as c3,
  5   sum(T233937.NASDELAY) as c4,
  6   sum(T233937.LATEAIRCRAFTDELAY) as c5,
  7   sum(T233937.ARRDELAYMINUTES) as c6,
  8   sum(T233937.CARRIERDELAY) as c7,
  9   sum(nvl(case when T233937.CANCELLED < 1 then T233937.FLIGHTS end , 0)) as c8,
 10   T233820.D_NAME as c9
 11  from
 12   BI_AIRLINES.OBIEE_GEO_AIRPORT_BRIDGE T233484 /* 10 GEO_AIRPORT_ORIGIN */ ,
 13   BI_AIRLINES.OBIEE_COUNTY_HIER T233820 /* 11 COUNTY_HIER_ORIGIN */ ,
 14   BI_AIRLINES.OBIEE_GEO_AIRPORT_BRIDGE T233497 /* 12 GEO_AIPORT_DEST */ ,
 15   BI_AIRLINES.OBIEE_COUNTY_HIER T233831 /* 13 COUNTY_HIER_DEST */ ,
 16   BI_AIRLINES.OBIEE_TIME_MTH_D T233732 /* 41 TIME MONTH */ ,
 17   BI_AIRLINES.DISTANCE_GROUP_250 T233594 /* 19 DISTANCE_GROUP_250 */ ,
 18   BI_AIRLINES.PERFORMANCE T233937 /* 00 PERFORMANCE FULL */
 19  where  ( T233484.AIRPORT = T233937.ORIGIN and T233484.STCTY_FIPS = T233820.SC_CODE and T233497.AIRPORT = T233937.DEST and T233497.STCTY_FIPS = T233831.SC_CODE and T233594.DESCRIPTION = '1000-1249 Miles' and T233594.CODE = T233937.DISTANCEGROUP and T233732.Cal_Month = T233937.MONTH and T233732.Cal_Qtr = T233937.QUARTER and T233732.Cal_Year = T233937.YEAR and T233831.R_NAME = 'Northeast Region' and T233732.Cal_Month between 6 and 12 and T233937.MONTH between 6 and 12 )
 20  group by T233820.D_NAME
 21  order by c9
 22  /
 
Explained.
 
SQL> set lines 300
SQL> set pages 0
SQL> select * from table(dbms_xplan.display);
Plan hash value: 3055743864

-----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name             | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                  |     9 |  1314 |   883  (13)| 00:00:01 |    |      |        |
|   1 |  PX COORDINATOR                 |                  |       |       |        |          |    |      |        |
|   2 |   PX SEND QC (ORDER)                | :TQ10006             |     9 |  1314 |   883  (13)| 00:00:01 |  Q1,06 | P->S | QC (ORDER) |
|   3 |    SORT GROUP BY                |                  |     9 |  1314 |   883  (13)| 00:00:01 |  Q1,06 | PCWP |        |
|   4 |     PX RECEIVE                  |                  |     9 |  1314 |   883  (13)| 00:00:01 |  Q1,06 | PCWP |        |
|   5 |      PX SEND RANGE              | :TQ10005             |     9 |  1314 |   883  (13)| 00:00:01 |  Q1,05 | P->P | RANGE       |
|   6 |       HASH GROUP BY             |                  |     9 |  1314 |   883  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|*  7 |        HASH JOIN                |                  | 60775 |  8665K|   882  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|   8 |     PX RECEIVE              |                  |  3221 | 99851 |     1   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|   9 |      PX SEND BROADCAST          | :TQ10000             |  3221 | 99851 |     1   (0)| 00:00:01 |  Q1,00 | S->P | BROADCAST  |
|  10 |       PX SELECTOR               |                  |       |       |        |          |  Q1,00 | SCWC |        |
|  11 |        TABLE ACCESS INMEMORY FULL       | OBIEE_COUNTY_HIER        |  3221 | 99851 |     1   (0)| 00:00:01 |  Q1,00 | SCWP |        |
|* 12 |     HASH JOIN               |                  | 60775 |  6825K|   881  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|  13 |      PX RECEIVE             |                  |  1787 | 17870 |     1   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|  14 |       PX SEND BROADCAST         | :TQ10001             |  1787 | 17870 |     1   (0)| 00:00:01 |  Q1,01 | S->P | BROADCAST  |
|  15 |        PX SELECTOR              |                  |       |       |        |          |  Q1,01 | SCWC |        |
|  16 |         TABLE ACCESS INMEMORY FULL      | OBIEE_GEO_AIRPORT_BRIDGE |  1787 | 17870 |     1   (0)| 00:00:01 |  Q1,01 | SCWP |        |
|* 17 |      HASH JOIN              |                  | 60775 |  6231K|   880  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|  18 |       PX RECEIVE                |                  |   161 |  1610 |     1   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|  19 |        PX SEND BROADCAST            | :TQ10002             |   161 |  1610 |     1   (0)| 00:00:01 |  Q1,02 | S->P | BROADCAST  |
|  20 |         PX SELECTOR             |                  |       |       |        |          |  Q1,02 | SCWC |        |
|* 21 |          TABLE ACCESS INMEMORY FULL     | OBIEE_TIME_MTH_D         |   161 |  1610 |     1   (0)| 00:00:01 |  Q1,02 | SCWP |        |
|* 22 |       HASH JOIN             |                  | 60964 |  5655K|   879  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|  23 |        JOIN FILTER CREATE           | :BF0000              |     1 |    19 |     1   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|  24 |         PX RECEIVE              |                  |     1 |    19 |     1   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|  25 |          PX SEND BROADCAST          | :TQ10003             |     1 |    19 |     1   (0)| 00:00:01 |  Q1,03 | S->P | BROADCAST  |
|  26 |           PX SELECTOR           |                  |       |       |        |          |  Q1,03 | SCWC |        |
|* 27 |            TABLE ACCESS INMEMORY FULL   | DISTANCE_GROUP_250       |     1 |    19 |     1   (0)| 00:00:01 |  Q1,03 | SCWP |        |
|* 28 |        HASH JOIN                |                  |   670K|    48M|   878  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|  29 |         JOIN FILTER CREATE          | :BF0001              |   318 |  9540 |     2   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|  30 |          PX RECEIVE             |                  |   318 |  9540 |     2   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|  31 |           PX SEND BROADCAST         | :TQ10004             |   318 |  9540 |     2   (0)| 00:00:01 |  Q1,04 | S->P | BROADCAST  |
|  32 |            PX SELECTOR          |                  |       |       |        |          |  Q1,04 | SCWC |        |
|* 33 |         HASH JOIN           |                  |   318 |  9540 |     2   (0)| 00:00:01 |  Q1,04 | SCWC |        |
|  34 |          JOIN FILTER CREATE     | :BF0002              |   217 |  4340 |     1   (0)| 00:00:01 |  Q1,04 | PCWP |        |
|* 35 |           TABLE ACCESS INMEMORY FULL| OBIEE_COUNTY_HIER        |   217 |  4340 |     1   (0)| 00:00:01 |  Q1,04 | SCWP |        |
|  36 |          JOIN FILTER USE        | :BF0002              |  1787 | 17870 |     1   (0)| 00:00:01 |  Q1,04 | PCWP |        |
|* 37 |           TABLE ACCESS INMEMORY FULL| OBIEE_GEO_AIRPORT_BRIDGE |  1787 | 17870 |     1   (0)| 00:00:01 |  Q1,04 | SCWP |        |
|  38 |         JOIN FILTER USE         | :BF0000              |  3773K|   165M|   874  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|  39 |          JOIN FILTER USE            | :BF0001              |  3773K|   165M|   874  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|  40 |           PX BLOCK ITERATOR         |                  |  3773K|   165M|   874  (13)| 00:00:01 |  Q1,05 | PCWC |        |
|* 41 |            TABLE ACCESS INMEMORY FULL   | PERFORMANCE          |  3773K|   165M|   874  (13)| 00:00:01 |  Q1,05 | PCWP |        |
-----------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("T233484"."STCTY_FIPS"="T233820"."SC_CODE")
  12 - access("T233484"."AIRPORT"="T233937"."ORIGIN")
  17 - access("T233732"."CAL_MONTH"="T233937"."MONTH" AND "T233732"."CAL_QTR"="T233937"."QUARTER" AND "T233732"."CAL_YEAR"="T233937"."YEAR")
  21 - inmemory("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
       filter("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
  22 - access("T233594"."CODE"="T233937"."DISTANCEGROUP")
  27 - inmemory("T233594"."DESCRIPTION"='1000-1249 Miles')
       filter("T233594"."DESCRIPTION"='1000-1249 Miles')
  28 - access("T233497"."AIRPORT"="T233937"."DEST")
  33 - access("T233497"."STCTY_FIPS"="T233831"."SC_CODE")
  35 - inmemory("T233831"."R_NAME"='Northeast Region')
       filter("T233831"."R_NAME"='Northeast Region')
  37 - inmemory(SYS_OP_BLOOM_FILTER(:BF0002,"T233497"."STCTY_FIPS"))
       filter(SYS_OP_BLOOM_FILTER(:BF0002,"T233497"."STCTY_FIPS"))
  41 - inmemory("T233937"."MONTH">=6 AND "T233937"."MONTH"<=12 AND SYS_OP_BLOOM_FILTER_LIST(SYS_OP_BLOOM_FILTER(:BF0001,"T233937"."DEST"),SYS
          _OP_BLOOM_FILTER(:BF0000,"T233937"."DISTANCEGROUP")))
       filter("T233937"."MONTH">=6 AND "T233937"."MONTH"<=12 AND SYS_OP_BLOOM_FILTER_LIST(SYS_OP_BLOOM_FILTER(:BF0001,"T233937"."DEST"),SYS_O
          P_BLOOM_FILTER(:BF0000,"T233937"."DISTANCEGROUP")))

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - Degree of Parallelism is 2 because of table property

75 rows selected.

If I then turn-off the in-memory feature and regenerate the execution plan, you can see without in-memory the plan is around 5x as expensive:

SQL> alter system set INMEMORY_SIZE = 0 scope = spfile;

System altered.

SQL> shutdown immediate;
ORA-01097: cannot shutdown while in a transaction - commit or rollback first
SQL> rollback;

Rollback complete.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 7398752256 bytes
Fixed Size          2941528 bytes
Variable Size        1056968104 bytes
Database Buffers     6325010432 bytes
Redo Buffers           13832192 bytes
Database mounted.
Database opened.
SQL> @noinmem_explain.sql

Explained.

SQL> set lines 300
SQL> set pages 0
SQL> select * from table(dbms_xplan.display);
Plan hash value: 2990499928

--------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name              | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |               |     9 |  1341 |  4086   (1)| 00:00:01 |        |      |        |
|   1 |  TEMP TABLE TRANSFORMATION                |               |       |       |        |          |        |      |        |
|   2 |   LOAD AS SELECT                      | SYS_TEMP_0FD9D6605_275335 |       |       |        |          |        |      |        |
|*  3 |    HASH JOIN                          |               |   318 |  9540 |    22   (0)| 00:00:01 |        |      |        |
|*  4 |     TABLE ACCESS FULL                     | OBIEE_COUNTY_HIER         |   217 |  4340 |    13   (0)| 00:00:01 |        |      |        |
|   5 |     TABLE ACCESS FULL                     | OBIEE_GEO_AIRPORT_BRIDGE  |  1787 | 17870 |     9   (0)| 00:00:01 |        |      |        |
|   6 |   PX COORDINATOR                      |               |       |       |        |          |        |      |        |
|   7 |    PX SEND QC (ORDER)                     | :TQ10008              |     9 |  1341 |  4086   (1)| 00:00:01 |  Q1,08 | P->S | QC (ORDER) |
|   8 |     SORT GROUP BY                     |               |     9 |  1341 |  4086   (1)| 00:00:01 |  Q1,08 | PCWP |        |
|   9 |      PX RECEIVE                       |               |     9 |  1341 |  4086   (1)| 00:00:01 |  Q1,08 | PCWP |        |
|  10 |       PX SEND RANGE                   | :TQ10007              |     9 |  1341 |  4086   (1)| 00:00:01 |  Q1,07 | P->P | RANGE       |
|  11 |        HASH GROUP BY                      |               |     9 |  1341 |  4086   (1)| 00:00:01 |  Q1,07 | PCWP |        |
|* 12 |     HASH JOIN                     |               |   281 | 41869 |  4085   (1)| 00:00:01 |  Q1,07 | PCWP |        |
|  13 |      PX RECEIVE                   |               |   281 | 33158 |  4072   (1)| 00:00:01 |  Q1,07 | PCWP |        |
|  14 |       PX SEND HYBRID HASH                 | :TQ10005              |   281 | 33158 |  4072   (1)| 00:00:01 |  Q1,05 | P->P | HYBRID HASH|
|  15 |        STATISTICS COLLECTOR               |               |       |       |        |          |  Q1,05 | PCWC |        |
|* 16 |         HASH JOIN BUFFERED                |               |   281 | 33158 |  4072   (1)| 00:00:01 |  Q1,05 | PCWP |        |
|  17 |          VIEW                     | VW_GBC_29             |   281 | 30348 |  4063   (1)| 00:00:01 |  Q1,05 | PCWP |        |
|  18 |           HASH GROUP BY               |               |   281 | 79523 |  4041   (1)| 00:00:01 |  Q1,05 | PCWP |        |
|  19 |            PX RECEIVE                 |               |   281 | 79523 |  4041   (1)| 00:00:01 |  Q1,05 | PCWP |        |
|  20 |         PX SEND HASH                  | :TQ10003              |   281 | 79523 |  4041   (1)| 00:00:01 |  Q1,03 | P->P | HASH    |
|  21 |          HASH GROUP BY                |               |   281 | 79523 |  4041   (1)| 00:00:01 |  Q1,03 | PCWP |        |
|* 22 |           HASH JOIN               |               | 60853 |    16M|  4039   (1)| 00:00:01 |  Q1,03 | PCWP |        |
|  23 |            BUFFER SORT                |               |       |       |        |          |  Q1,03 | PCWC |        |
|  24 |             PX RECEIVE                |               |   318 |  1272 |     2   (0)| 00:00:01 |  Q1,03 | PCWP |        |
|  25 |              PX SEND BROADCAST            | :TQ10000              |   318 |  1272 |     2   (0)| 00:00:01 |        | S->P | BROADCAST  |
|  26 |               TABLE ACCESS FULL           | SYS_TEMP_0FD9D6605_275335 |   318 |  1272 |     2   (0)| 00:00:01 |        |      |        |
|* 27 |            HASH JOIN                  |               | 60853 |    16M|  4037   (1)| 00:00:01 |  Q1,03 | PCWP |        |
|  28 |             PX RECEIVE                |               |   160 |  4640 |     4   (0)| 00:00:01 |  Q1,03 | PCWP |        |
|  29 |              PX SEND BROADCAST            | :TQ10002              |   160 |  4640 |     4   (0)| 00:00:01 |  Q1,02 | S->P | BROADCAST  |
|  30 |               PX SELECTOR             |               |       |       |        |          |  Q1,02 | SCWC |        |
|  31 |                MERGE JOIN CARTESIAN       |               |   160 |  4640 |     4   (0)| 00:00:01 |  Q1,02 | SCWC |        |
|* 32 |             VIEW                  | index$_join$_006          |     1 |    19 |     2   (0)| 00:00:01 |  Q1,02 | SCWC |        |
|* 33 |              HASH JOIN            |               |       |       |        |          |  Q1,02 | SCWC |        |
|  34 |               BITMAP CONVERSION TO ROWIDS     |               |     1 |    19 |     1   (0)| 00:00:01 |  Q1,02 | SCWC |        |
|* 35 |                BITMAP INDEX SINGLE VALUE      | M_INDEX32             |       |       |        |          |  Q1,02 | SCWP |        |
|  36 |               BITMAP CONVERSION TO ROWIDS     |               |     1 |    19 |     1   (0)| 00:00:01 |  Q1,02 | SCWC |        |
|  37 |                BITMAP INDEX FULL SCAN     | INDEX4            |       |       |        |          |  Q1,02 | SCWP |        |
|  38 |             BUFFER SORT           |               |   161 |  1610 |     2   (0)| 00:00:01 |  Q1,02 | SCWC |        |
|  39 |              BITMAP CONVERSION TO ROWIDS      |               |   161 |  1610 |     2   (0)| 00:00:01 |  Q1,02 | SCWC |        |
|* 40 |               BITMAP INDEX FAST FULL SCAN     | M_INDEX28             |       |       |        |          |  Q1,02 | SCWP |        |
|* 41 |             VIEW                  | VW_ST_167D3604        | 61043 |    14M|  4033   (1)| 00:00:01 |  Q1,03 | PCWP |        |
|  42 |              NESTED LOOPS             |               | 61043 |  4768K|  4029   (1)| 00:00:01 |  Q1,03 | PCWP |        |
|  43 |               BUFFER SORT             |               |       |       |        |          |  Q1,03 | PCWC |        |
|  44 |                PX RECEIVE             |               |       |       |        |          |  Q1,03 | PCWP |        |
|  45 |             PX SEND HASH (BLOCK ADDRESS)      | :TQ10001              |       |       |        |          |        | S->P | HASH (BLOCK|
|  46 |              BITMAP CONVERSION TO ROWIDS      |               | 61042 |  1311K|   365   (1)| 00:00:01 |        |      |        |
|  47 |               BITMAP AND              |               |       |       |        |          |        |      |        |
|  48 |                BITMAP MERGE           |               |       |       |        |          |        |      |        |
|  49 |                 BITMAP KEY ITERATION      |               |       |       |        |          |        |      |        |
|* 50 |                  VIEW             | index$_join$_255          |     1 |    19 |     2   (0)| 00:00:01 |        |      |        |
|* 51 |                   HASH JOIN           |               |       |       |        |          |        |      |        |
|  52 |                    BITMAP CONVERSION TO ROWIDS|               |     1 |    19 |     1   (0)| 00:00:01 |        |      |        |
|* 53 |                 BITMAP INDEX SINGLE VALUE | M_INDEX32             |       |       |        |          |        |      |        |
|  54 |                    BITMAP CONVERSION TO ROWIDS|               |     1 |    19 |     1   (0)| 00:00:01 |        |      |        |
|  55 |                 BITMAP INDEX FULL SCAN    | INDEX4            |       |       |        |          |        |      |        |
|* 56 |                  BITMAP INDEX RANGE SCAN      | PERF_DISTANCEGRP          |       |       |        |          |        |      |        |
|  57 |                BITMAP MERGE           |               |       |       |        |          |        |      |        |
|  58 |                 BITMAP KEY ITERATION      |               |       |       |        |          |        |      |        |
|  59 |                  TABLE ACCESS FULL        | SYS_TEMP_0FD9D6605_275335 |   318 |  1272 |     2   (0)| 00:00:01 |        |      |        |
|* 60 |                  BITMAP INDEX RANGE SCAN      | PERF_DEST             |       |       |        |          |        |      |        |
|  61 |                BITMAP MERGE           |               |       |       |        |          |        |      |        |
|* 62 |                 BITMAP INDEX RANGE SCAN   | PERF_MONTH            |       |       |        |          |        |      |        |
|  63 |               TABLE ACCESS BY USER ROWID      | PERFORMANCE           |     1 |    58 |  3669   (1)| 00:00:01 |  Q1,03 | PCWP |        |
|  64 |          PX RECEIVE                   |               |  1787 | 17870 |     9   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|  65 |           PX SEND BROADCAST               | :TQ10004              |  1787 | 17870 |     9   (0)| 00:00:01 |  Q1,04 | S->P | BROADCAST  |
|  66 |            PX SELECTOR                |               |       |       |        |          |  Q1,04 | SCWC |        |
|  67 |         TABLE ACCESS FULL             | OBIEE_GEO_AIRPORT_BRIDGE  |  1787 | 17870 |     9   (0)| 00:00:01 |  Q1,04 | SCWP |        |
|  68 |      PX RECEIVE                   |               |  3221 | 99851 |    13   (0)| 00:00:01 |  Q1,07 | PCWP |        |
|  69 |       PX SEND HYBRID HASH                 | :TQ10006              |  3221 | 99851 |    13   (0)| 00:00:01 |  Q1,06 | S->P | HYBRID HASH|
|  70 |        PX SELECTOR                    |               |       |       |        |          |  Q1,06 | SCWC |        |
|  71 |         TABLE ACCESS FULL                 | OBIEE_COUNTY_HIER         |  3221 | 99851 |    13   (0)| 00:00:01 |  Q1,06 | SCWP |        |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("T233497"."STCTY_FIPS"="T233831"."SC_CODE")
   4 - filter("T233831"."R_NAME"='Northeast Region')
  12 - access("T233484"."STCTY_FIPS"="T233820"."SC_CODE")
  16 - access("T233484"."AIRPORT"="ITEM_1")
  22 - access("C0"="ITEM_1")
  27 - access("T233732"."CAL_YEAR"="ITEM_5" AND "T233732"."CAL_QTR"="ITEM_4" AND "T233732"."CAL_MONTH"="ITEM_3" AND "T233594"."CODE"="ITEM_2")
  32 - filter("T233594"."DESCRIPTION"='1000-1249 Miles')
  33 - access(ROWID=ROWID)
  35 - access("T233594"."DESCRIPTION"='1000-1249 Miles')
  40 - filter("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
  41 - filter("ITEM_3"<=12 AND "ITEM_3">=6)
  50 - filter("T233594"."DESCRIPTION"='1000-1249 Miles')
  51 - access(ROWID=ROWID)
  53 - access("T233594"."DESCRIPTION"='1000-1249 Miles')
  56 - access("T233937"."DISTANCEGROUP"="T233594"."CODE")
  60 - access("T233937"."DEST"="C0")
  62 - access("T233937"."MONTH">=6 AND "T233937"."MONTH"<=12)

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - Degree of Parallelism is 2 because of table property
   - star transformation used for this statement

105 rows selected.

Running the actual queries in this case gives me a wall-time of around 4 seconds for the in-memory version, and 14 seconds when in-memory query is disabled … but the response time is nowhere near as fast as the initial run, where the data was stored row-wise but we had aggregate tables in place; so let's finish off the testing by putting the aggregate table in-memory too and see if that makes a difference.
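For reference, putting a table into the In-Memory column store is just an ALTER TABLE away – the sketch below is a hypothetical example rather than the exact statements from my SampleApp tests (the aggregate table name is made up, and the compression and priority settings are just a reasonable starting point), with a query against V$IM_SEGMENTS to check the segment has actually been populated:

-- Mark the (hypothetical) aggregate table for the In-Memory column store
ALTER TABLE agg_flights_by_month
  INMEMORY MEMCOMPRESS FOR QUERY LOW
  PRIORITY HIGH;

-- Confirm the segment has been populated into the column store
SELECT segment_name, populate_status, inmemory_size, bytes_not_populated
FROM   v$im_segments
WHERE  segment_name = 'AGG_FLIGHTS_BY_MONTH';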

And of course, it absolutely flew:

NewImage

So to conclude from my look at the Oracle Database 12c In-Memory option with OBIEE11g v406 SampleApp, I’d say the following based on my initial tests:

  • For BI-type reporting where you’re typically summarising lots of data, the in-memory option doesn’t remove the need for aggregate tables – you’ll still benefit significantly from having them, in my observation
  • Where the in-memory option does benefit you is when you're querying the detail-level data – it helps with aggregation, but its main strength is fast filtering against subsets of columns
  • Some of the more complex SQL queries issued by OBIEE's BI Server, for example when creating lots of subtotals and totals against a dataset, reduce the effectiveness of the in-memory option – you'll get the biggest speed improvement, at least at the moment, with queries that have simpler predicates and less complex joins

There might be more to the aggregation story in the end, though. Looking at the Oracle Database 12c In-Memory Option Technical White Paper, the in-memory option should in fact help with aggregation through a new optimiser transformation called "vector group by", a transformation likened to a star transformation, which uses CPU-efficient algorithms and a multi-dimensional array created on-the-fly in the PGA called an "in-memory accumulator".

NewImage
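I haven't tested this part myself yet, but as a rough sketch of how you might look for it, you can encourage the transformation with the VECTOR_TRANSFORM hint and then check the execution plan for KEY VECTOR and VECTOR GROUP BY operations. The query below is only loosely based on the flight delays dataset – the dimension tables, columns and join keys are made up for illustration:

-- Hypothetical star-style aggregation against the airlines fact table;
-- if in-memory aggregation kicks in, the plan should show KEY VECTOR CREATE/USE
-- and VECTOR GROUP BY steps
SELECT /*+ VECTOR_TRANSFORM */
       t.cal_year, g.region_name, SUM(f.flights) AS total_flights
FROM   performance f
JOIN   time_dim t ON f.month_key = t.month_key
JOIN   geog_dim g ON f.dest      = g.airport
GROUP BY t.cal_year, g.region_name;

SELECT * FROM TABLE(dbms_xplan.display_cursor);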

In fact, what we've heard is that many of the old Oracle OLAP team have moved over to the in-memory option team and were responsible for this feature, so I'll be taking a closer look at in-memory aggregation over the next few months. In my tests though, I didn't see vector group by in any of the query execution plans, so I'm assuming either the conditions weren't right for it or, like star transformations, there's some combination of settings and query factors that need to be in place before it'll appear in the execution plan (and queries presumably run that much faster).

For now though – that’s my first run-through of the 12c In-Memory Option in the context of a typical BI workload. Check back later in the year for more in-memory option postings, including hopefully something more on the in-memory aggregation feature.

Categories: BI & Warehousing

Rittman Mead in the OTN TOUR Latin America 2014

Rittman Mead Consulting - Tue, 2014-08-26 11:28

Another OTN Tour Latin America has come and gone. This is the most important technical event in the region, visiting 12 countries and attracting more than 2,500 attendees over two weeks.

This year Rittman Mead was part of the OTN Tour in Buenos Aires (Argentina) and Montevideo (Uruguay) presenting about ODI and OGG.

We started in Buenos Aires on August 11 for the first day of the OTN Tour in Argentina. I talked about the integration of ODI and OGG 12c, explaining all the technical details of how to configure and implement it. Most of the attendees hadn't worked with these tools (but were curious about them), so I personalised the presentation a little and first gave them an introduction to ODI and OGG.

As the vice-president of the UYOUG (Uruguayan Oracle User Group) I'm part of the organisation of the OTN Tour in my country, so we needed to come back on the same Monday to sort out some final details and have everything ready for the event in Uruguay.

Most of the speakers came on Wednesday, and we spent a great day with Michelle Malcher, Kamran Agayev, Hans Forbrich and Mike Dietrich. First we went to lunch at the Mercado del Puerto, an emblematic place full of "parrillas" (a kind of barbecue), and then we gave them a little city tour which included a visit to El Cerro de Montevideo. Finally we visited one of the most important wineries in Uruguay, Bodega Bouza, where we had a wine tour followed by an amazing tasting of a variety of wines, including Tannat, our signature grape. You know… it is important to be relaxed before a conference :-)

luch_allCerro_Montevideo

parrilla

The first day of the event in Uruguay was dedicated exclusively to technical sessions, and on the second we had the hands-on labs. The conference covered a wide range of topics, from BI Mobile and e-Business Suite to upgrading to Oracle Database 12c, Oracle Virtualization and Oracle RAC. All the sessions were packed with attendees.

Mike session Edel session

The next day we had labs on PCs with the software already installed, though attendees could also bring their own laptops and install everything needed for the hands-on sessions. We had the famous RAC Attack!, led by Kamran with the help of the ninjas Michelle, Hans and Nelson Calero, and a seven-hour Oracle Virtualization lab by Hernan Petitti!

Rac AttackRAC Attack2

It was a great event. You can see more pictures here and download the presentations here. The attendees, as well as all the speakers, were really happy with the result. And so were we.

This is only the beginning for Rittman Mead in Latin America. There are a lot of things to come, so stay tuned!

Categories: BI & Warehousing

Upcoming Big Data and Hadoop for Oracle BI, DW and DI Developers Presentations

Rittman Mead Consulting - Tue, 2014-08-26 03:34

If you’ve been following our postings on the blog over the past year, you’ll probably have seen quite a lot of activity around big data and Hadoop and in particular, what these technologies bring to the world of Oracle Business Intelligence, Oracle Data Warehousing and Oracle Data Integration. For anyone who’s not had a chance to read the posts and articles, the three links below are a great introduction to what we’ve been up to:

In addition, we recently took part in an OTN ArchBeat podcast with Stewart Bryson and Andrew Bond on the updated Oracle Information Management Reference Architecture we co-developed with Oracle’s Enterprise Architecture team, where you can hear me talk with Stewart and Andrew about how the updated architecture came about, the thinking behind it, and how concepts like the data reservoir and data factory can be delivered in an agile way.

I’m also pleased to be delivering a number of presentations and seminars over the next few months, on Oracle and Cloudera’s Hadoop technology and how it applies to Oracle BI, DW and DI developers – if you’re part of a local Oracle user group and you’d like me to deliver one of them for your group, drop me an email at mark.rittman@rittmanmead.com.

Slovenian Oracle User Group / Croatian Oracle User Group Conferences, October 2014

These two events run over consecutive days in Slovenia and Croatia, and I’m delivering the keynote at each on Analytics and Big Data, and a one-day seminar running on the Tuesday in Slovenia, and over the Wednesday and Thursday in Croatia. The theme of the seminar is around applying Hadoop and big data technologies to Oracle BI, DW and data integration, and is made up of four sessions:

Part 1 : Introduction to Hadoop and Big Data Technologies for Oracle BI & DW Developers

“In this session we’ll introduce some key Hadoop concepts including HDFS, MapReduce, Hive and NoSQL/HBase, with the focus on Oracle Big Data Appliance and Cloudera Distribution including Hadoop. We’ll explain how data is stored on a Hadoop system and the high-level ways it is accessed and analysed, and outline Oracle’s products in this area including the Big Data Connectors, Oracle Big Data SQL, and Oracle Business Intelligence (OBI) and Oracle Data Integrator (ODI).”

Part 2 : Hadoop and NoSQL Data Ingestion using Oracle Data Integrator 12c and Hadoop Technologies

“There are many ways to ingest (load) data into a Hadoop cluster, from file copying using the Hadoop Filesystem (FS) shell through to real-time streaming using technologies such as Flume and Hadoop streaming. In this session we’ll take a high-level look at the data ingestion options for Hadoop, and then show how Oracle Data Integrator and Oracle GoldenGate leverage these technologies to load and process data within your Hadoop cluster. We’ll also consider the updated Oracle Information Management Reference Architecture and look at the best places to land and process your enterprise data, using Hadoop’s schema-on-read approach to hold low-value, low-density raw data, and then use the concept of a “data factory” to load and process your data into more traditional Oracle relational storage, where we hold high-density, high-value data.”

Part 3 : Big Data Analysis using Hive, Pig, Spark and Oracle R Enterprise / Oracle R Advanced Analytics for Hadoop

“Data within a Hadoop cluster is typically analysed and processed using technologies such as Pig, Hive and Spark before being made available for wider use using products like Oracle Big Data SQL and Oracle Business Intelligence. In this session, we’ll introduce Pig and Hive as key analysis tools for working with Hadoop data using MapReduce, and then move on to Spark as the next-generation analysis platform typically being used on Hadoop clusters today. We’ll also look at the role of Oracle’s R technologies in this scenario, using Oracle R Enterprise and Oracle R Advanced Analytics for Hadoop to analyse and understand larger datasets than we could normally accommodate with desktop analysis environments.”

Part 4 : Visualizing Hadoop Datasets using Oracle Business Intelligence, Oracle BI Publisher and Oracle Endeca Information Discovery

“Once insights and analysis have been produced within your Hadoop cluster by analysts and technical staff, it’s usually the case that you want to share the output with a wider audience in the organisation. Oracle Business Intelligence has connectivity to Hadoop through Apache Hive compatibility, and other Oracle tools such as Oracle BI Publisher and Oracle Endeca Information Discovery can be used to visualise and publish Hadoop data. In this final session we’ll look at what’s involved in connecting these tools to your Hadoop environment, and also consider where data is optimally located when large amounts of Hadoop data need to be analysed alongside more traditional data warehouse datasets.”

Oracle Openworld 2014 (ODTUG Sunday Symposium), September 2014

Along with another session later in the week on the upcoming Oracle BI Cloud Services, I’m doing a session on the User Group Sunday for ODTUG on ODI12c and the Big Data Connectors for ETL on Hadoop:

Deep Dive into Big Data ETL with Oracle Data Integrator 12c and Oracle Big Data Connectors [UGF9481]

“Much of the time required to work with big data sources is spent in the data acquisition, preparation, and transformation stages of a project before your data reaches a state suitable for analysis by your users. Oracle Data Integrator, together with Oracle Big Data Connectors, provides a means to efficiently load and unload data to and from Oracle Database into a Hadoop cluster and perform transformations on the data, either in raw form or in technologies such as Apache Hive or R. This presentation looks at how Oracle Data Integrator can form the centerpiece of your big data ETL strategy, within either a custom-built big data environment or one based on Oracle Big Data Appliance.”

UK Oracle User Group Tech’14 Conference, December 2014

I'm delivering an extended version of my OOW presentation at the UKOUG Tech'14 "Super Sunday" event, this time over 90 minutes rather than the 45 at OOW, giving me a bit more time for demos and discussion:

Deep-Dive into Big Data ETL using ODI12c and Oracle Big Data Connectors

"Much of the time required to work with Big Data sources is spent in the data acquisition, preparation and transformation stages of a project, before your data is in a state suitable for analysis by your users. Oracle Data Integrator, together with Oracle Big Data Connectors, provides a means to efficiently load and unload data from Oracle Database into a Hadoop cluster, and perform transformations on the data either in raw form or in technologies such as Apache Hive or R. In this presentation, we will look at how ODI can form the centrepiece of your Big Data ETL strategy, either within a custom-built Big Data environment or one based on Oracle Big Data Appliance."

Oracle DW Global Leaders’ Meeting, Dubai, December 2014

The Oracle DW Global Leaders forum is an invite-only group organised by Oracle and attended by select customers and associate partners, one of which is Rittman Mead. I’ll be delivering the technical seminar at the end of the second day, which will run over two sessions and will be based on the main points from the one-day seminars I’m running in Croatia and Slovenia.

From Hadoop to dashboards, via ODI and the BDA – the complete trail : Part 1 and Part 2

“Join Rittman Mead for this afternoon workshop, taking you through data acquisition and transformation in Hadoop using ODI, Cloudera CDH and Oracle Big Data Appliance, through to reporting on that data using OBIEE, Endeca and Oracle Big Data SQL. Hear our project experiences, and tips and techniques based on real-world implementations”

Keep an eye out for more Hadoop and big data content over the next few weeks, including a look at MongoDB and NoSQL-type databases, and how they can be used alongside Oracle BI, DW and data integration tools.

 

Categories: BI & Warehousing

Smoothing the Transition – The New Smart View 11.1.2.1.102 for Microsoft Office and OBIEE

Rittman Mead Consulting - Fri, 2014-08-15 12:24

Introductions

There's a good chance that, if you're reading this, you perform some reporting, analytics or data stewardship role, or some combination of all three. And be it for a large corporation or a small company, there are likely standards and practices that govern how those jobs are performed on a day-to-day basis; not easily changed, and perpetually validated by big budgets and long careers. It is equally likely that deeply ingrained within these reporting practices lies some moderate-to-heavy implementation of Excel. It wasn't long ago that I found myself utilizing the spreadsheet program on a daily basis, for hours upon hours at a time.

What this essentially amounted to:

  • Pulling down large amounts of data from our department's data model using large SQL queries that could take most of the day to formulate, let alone run – waiting on the query to yield results could easily warrant a bathroom break, a phone call or, if you were feeling adventurous, catching up on email
  • Validating your results
  • Exporting to Excel (key step here!)
  • Massaging and formatting your data by implementing innumerable and often unwieldy functions that deserved their own slot on your schedule for the day just to figure out
  • Proofing your analysis so that it got to management in ship shape
  • Hoping that the figures from an analyst in another department, who used the same metric on their report and would be at the same meeting, actually coincided with yours

                   

Fast forward a bit and I'm sitting here, writing this blog as a sort of proverbial white flag in the great battle between Excel and the behemoth that is OBIEE. And just what is this white flag? Why, it's Oracle's most recent iteration of Smart View, which provides expanded functionality and support for the Microsoft Office suite of programs – namely its golden boy, Excel. That's right, Excel, the darling of office staff everywhere, the program upon which empires rise and fall. To paraphrase a statistic from www.cfo.com, some 64% of public and private companies still use Excel and other "manual" solutions to perform their finance functions. So, in the world of the spreadsheet, when does it make sense to cross that blurry line from cell to subject area? Smart View now makes answering that question much easier. It seems that Oracle has really gotten a grasp on the formatting shortcomings of the last version and made up for them in spades. Or so they claim, at least.

The Test Run – OBIEE to Excel 

The example below illustrates a simple import via Smart View. I generated a dashboard in Answers which mimics an Excel design I found online – thanks to the good folks over at www.chandoo.org for their excellent Excel dashboarding skills and for providing plenty of great examples. The dashboard contains a table with a selection of KPIs that the user may sort on via a View Selector (each view has been sorted on a different KPI and sits on a different Compound Layout). Upon selecting a KPI, the analysis then displays the Top 10 products by the KPI selected. In addition, the table contains conditional formatting which alerts users to the variance between the different KPIs and their targets. Lastly, there is a scatter plot view which displays our Product dimension through the lens of Revenue and Quantity. Per the most recent Oracle documentation, we shouldn't have any trouble including the current selections of a dashboard prompt either. Let's see how it performs when we move it over to Excel.

 “OBIEE report and page prompts are fully supported as part of the import process. Dashboards can be imported through Oracle Smart View on a per page basis or the entire dashboard. Prompts are applied at the current state of the logged in user. Future releases of the product will support dashboard prompts directly through Microsoft Office.”

SVB 2

The Results

And there you have it! Excel displays our table and graph views as per the most recent selection from the dashboard prompt. But wait! Our conditional formatting seems to be missing – and to prove this isn't just a Smart View quirk, the same thing happens when the analysis view is exported directly as an Excel workbook.

SVB 3

Conditional Formatting

For our second scenario, let’s see how Excel handles a simpler, heat map style conditional formatting. I’ve made a simple table on our dashboard that measures Revenue, Quantity Sold, and the Average Order in $. I set up conditional formatting around the Average Order measure to see how Excel handles importing the color scheme for the currently selected Time parameters on the dashboard.

In contrast to the first scenario, Smart View has preserved this simpler, heat-map style of conditional formatting when the analysis is imported from OBIEE. So perhaps it is the lack of a corresponding Excel graphic in the previous example that caused the migration snafu – the arrow graphics don't even seem to make it across, despite what the documentation claims.

“Oracle BI Customizations and View Standards – The Import of Oracle BI content can leverage the customizations and view standards used within an OBIEE environment. All view designed modifications such as conditional formatting, background colors or data configuration is automatically translated to the Microsoft Office environment.”

SVB 4

SVB 5

 

Excel to OBIEE

Let’s see what the latest edition of Smart View offers when moving an analysis from Excel to OBIEE.
Because we weren't able to import our full table view, why don't we construct it using the View Designer? The interface looks clean and provides an intuitive approach to producing basic Answers views. Accessing our subject area, I simply selected the columns that matched those on our Answers analysis. After clicking 'OK', sorting on our Revenue column from largest to smallest and doing a little deleting, we have a pseudo 'Top 10' analysis by Revenue. Given the aesthetic attributes of our Answers analysis, let's see how we're going to replicate them in Excel.

 

SVB 6

SVB 8

SVB 7

After selecting the table, we can navigate to the Design tab under 'Table Tools' and select an alternating grey scheme, which gives us the 'Enable Alternate Styling' design quality. Now let's add some formulas and conditional formatting that will give us our calculated column equivalents. We can insert two columns, one between Revenue and Target and one between Qty and Target, to make room for conditional formatting and Excel's Icon Sets feature. We then create a simple formula that subtracts Revenue and Quantity from their respective targets in the column between the two, assign the conditional formatting and voila! Excel even has a check box that lets you show the arrow only.

 

SVB 9

SVB 10

From Excel, we can select Publish View to deposit our analysis into our Shared Folder. The results indicate a sort of 'two-way street' between OBIEE and Excel: neither fully supports the formatting capabilities of the other, although it's as if Smart View is giving ground with every new release. In this blog we've taken a look at how Smart View handles some mildly complex conditional formatting, and what it takes to replicate this feature in native Excel. In a user environment where reports are flying back and forth between the two platforms, Smart View definitely makes sense; however, it might be advisable to simply deliver the minimum of what is needed and let end users make any formatting-based modifications. After all, who would want to do all that work only to have it lost in translation?

Categories: BI & Warehousing

The Business Value In Training

Rittman Mead Consulting - Mon, 2014-08-11 14:59

One of the main things I get asked to do here at Rittman Mead is deliver the OBIEE front-end training course (TRN 202). This is a great course that has served both us and our clients well over the years. It has always been in high demand and has always been delivered to great feedback from those in attendance. However, as with all things in life and business, there is going to be room for improvement and opportunities to provide even more value to our clients. Of all the feedback I receive from delivering the course, my favorite is that we do an incredible job of both delivering the content and providing real business scenarios showing how we have used this tool in the consulting field. Attendees will ask me how a feature works, and how I have used it with current and former clients, 100% of the time.

This year at KScope '14 in Seattle, we were asked to deliver a 2-hour front-end training course. Our normal front-end course runs over two days and covers just about every feature you can use, all the way from Answers and Dashboards to BI Publisher. Before the invitation to KScope '14, we had been toying with the idea of delivering a course that not only teaches attendees how to navigate OBIEE and use its features, but also emphasizes the business value behind why those features exist in the first place. We felt that too often users are given a quick overview of what the tool includes, but are left to figure out on their own how to extract the most value. It is one thing to create a graph in Answers, and another to know what the best graph to use might be. So in preparation for the KScope session, we decided to build the content around not only how to develop in OBIEE, but also why, as a business user, you would choose one layout/graph/feature over another. As you would expect, the turnout for the session was fantastic: we had over 70 people pre-register, with another 10 on the waiting list. This was proof that there is as pressing a need to pull business value out of the tool as there is to simply learn how to use it. We were so encouraged by the attendance and feedback from this event that we spent the next several weeks developing what we call the "Business Enablement Bootcamp". It is a 3-day course that covers Answers, Dashboards, the Action Framework, BI Publisher, and the new Mobile App Designer. This is an exciting time for us, in that we not only get to show people how to use all of the great features built into the tool, but also get to incorporate years of consulting experience and hundreds of client engagements right into the content. Below I have listed a breakdown of the material and the value it will provide.

Answers

Whenever we deliver our OBIEE 5-day bootcamp, which covers everything from the infrastructure to the front end, Answers is one of the key components that we teach. Answers is the building block for analysis in OBIEE. While this portion of the tool is relatively intuitive to get started with, there are many valuable nuances and settings that can get overlooked without proper instruction. In order to get the most out of the tool, a business user needs to be able not only to create basic analyses, but also to use many of the advanced features such as hierarchical columns, master-detail linking, and selection steps. Knowing how and why to use these features is a key component of gaining valuable insight for your business users.

Dashboards

This one in particular is dear to my heart. To create an analysis and share it on a dashboard is one thing, but to tell a particular story with a series of visualizations strategically placed on a dashboard is something entirely different. Like anything else in business intelligence, optimal visualization and best practices are learned skills that take time and practice. Valuable skills like making the most of your white space, choosing the correct visualizations, and formatting will be covered. When you provide your user base with the knowledge and skills to tell the best story, there will be no time wasted on clumsy iterations and guesswork as to the best way to present your data. This training will provide some simple parameters to work within, so that users can quickly gather requirements and develop dashboards with more polish and relevance than ever before.

 Dashboard

Action Framework

Whenever I deliver any form of front-end training, I always feel like this piece of OBIEE is either overlooked, undervalued, or both. This is because most users are either unaware of its existence, or don't have a clear idea of its value and functionality. It tends to be viewed as an add-on, in the sense that it is simply a nice extra feature. However, once users are properly taught how to navigate the action framework, or given a demonstration of its value, it becomes an invaluable piece of the stack. In order to get the most out of your catalog, users need to be shown how to strategically place action links, giving them the ability to drill across to related analyses and add more context for discovery. These are just a few of the capabilities within the action framework that, when users are shown how and when to use them, can add valuable insight (not to mention convenience) to an organization.

BI Publisher/Mobile App Designer

Along with the action framework, this particular piece of the tool has a tendency to get overlooked, or to simply give users cold feet about implementing it to complement Answers. I actually would have agreed with those feelings before the release of 11.1.1.7; before this release, a user needed a pretty advanced knowledge of data modeling. Now, however, users can simply pick any subject area and use the report creation wizard to be off and running, creating pixel-perfect reports in no time. The new Mobile App Designer on top of the Publisher platform is another welcome addition to the tool. Being the visual person that I am, I think this is where this pixel-perfect tool really shines: objects just look a lot more polished right out of the box, without having to spend a lot of time formatting the way you would in Answers. During training, attendees will be exposed to many of the new features within BIP and MAD, as well as how to use them to complement Answers and dashboards.

Third Party Visualizations

While having the ability to implement third-party visualizations like D3 and Flot in OBIEE is more of an advanced skill, the market demand for it seems to be growing. While Oracle has done some good things in recent releases with new visualizations like performance tiles and waterfall charts, we all know that business requirements can be demanding at times and may require going elsewhere to appease the masses. You can visit https://github.com/mbostock/d3/wiki/Gallery to see some of the visualizations available beyond what ships with OBIEE. During training, attendees will learn when and why external visualizations might be useful, as well as getting a high-level view of how they can be implemented.

Bullet Chart

Users often make the mistake of viewing each piece of the front-end stack as a separate entity, and without proper training this is very understandable. Even though they are separate pieces of the product, they are all meant to work together to enhance the "Business Intelligence" of an organization. Without training the business to use one piece to complement another, it will always be viewed as just another frustrating tool that they don't have enough time to learn on their own. This tool is meant to empower your organization to have everything it needs to make the most informed and timely decisions; let us use our experience to enable your business.

Categories: BI & Warehousing

Rittman Mead and Oracle Big Data Appliance

Rittman Mead Consulting - Mon, 2014-08-11 07:00

Over the past couple of years Rittman Mead have been broadening our skills and competencies out from core OBIEE, ODI and Oracle data warehousing into the new "emerging" analytic platforms: R and database advanced analytics, Hadoop, cloud and clustered/distributed systems. As we talked about in the recent series of updated Oracle Information Management Reference Architecture blog posts and my initial look at the Oracle Big Data SQL product, our customers are increasingly looking to complement their core Oracle analytics platform with ones that handle unstructured and big data, and as technologists we're always interested in what else we can use to help our customers get more insight out of their (total) dataset.

An area we've particularly focused on over the past year has been Hadoop and R analysis, with the recent announcement of our partnership with Cloudera and the recruitment of a big data and advanced analytics team operating out of our Brighton, UK office. We've also started to work on a number of projects and proofs of concept with customers in the UK and Europe, working mainly with core Oracle BI, DW and ETL customers looking to make their first move into Hadoop and big data. The usual pattern is for us to engage with some business users looking to analyse a dataset hitherto too large or too unstructured to load into their Oracle data warehouse, or where they recognise the need for more advanced analytics tools such as R, MapReduce and Spark but need some help getting started. Most often we put together a PoC Hadoop cluster for them using virtualization technology on existing hardware they own, allowing them to get started quickly and with no initial licensing outlay, with our preferred Hadoop distribution being Cloudera CDH, the same Hadoop distribution that comes on the Oracle Big Data Appliance. Projects then typically move on to Hadoop running directly on physical hardware, in a couple of cases Oracle's Big Data Appliance, usually in conjunction with Oracle Database, Oracle Exadata and Oracle Exalytics for reporting.

One such project started off with the customer wanting to analyse a dataset that was too large for the space available in their Oracle database and that they couldn't easily process or analyse using the SQL-based tools they usually used; in addition, like most large organisations, database and hardware provisioning took a long time and they needed to get the project moving quickly. We came in and quickly put together a virtualised Hadoop cluster for them, on re-purposed hardware and using the free (Standard) edition of Cloudera CDH4, and then used the trial version of Oracle Big Data Connectors, along with SFTP transfers, to get data into the cluster for analysis.

NewImage

The PoC itself then ran for just over a month with the bulk of the analysis being done using Oracle R Advanced Analytics for Hadoop, an extension to R that allows you to use Hive tables as a data source and create MapReduce jobs from within R itself; the output from the exercise was a series of specific-answer-to-specific-question R graphs that solved an immediate problem for the client, and showed the value of further investment in the technology and our services – the screenshot below shows a typical ORAAH session, in this case analyzing the flight delays dataset that you can also find on the Exalytics server and in smaller form in OBIEE 11g’s SampleApp dataset.

NewImage

That project has now moved on to a larger phase of work, with Oracle Big Data Appliance used as the Hadoop platform rather than VMs, and Cloudera Hadoop upgraded from the free, unsupported Standard version to Cloudera Enterprise. The VMs in fact worked pretty well and had the advantage that they could be quickly spun up and housed temporarily on an existing server, but they were restricted by the RAM we could assign to each VM – 2GB initially, quickly upgraded to 8GB per VM – and by the fact that they were sharing CPU and IO resources. Big Data Appliance, by contrast, has 64GB of RAM per node – something that's increasingly important now that in-memory tools like Impala are being used – and has InfiniBand networking between the nodes as well as fast network connections out to the wider network, something that's often overlooked when speccing up a Hadoop system.

The support setup for the BDA is pretty good as well; from a sysadmin perspective there’s a lights-out ILOM console for low-level administration, as well as plugins for Oracle Enterprise Manager 12c (screenshot below), and Oracle support the whole package, typically handling the hardware support themselves and delegating to Cloudera for more Hadoop-specific queries. I’ve raised several SRs on client support contracts since starting work on BDAs, and I’ve not had any problem with questions not being answered or buck-passing between Oracle and Cloudera.

NewImage

One thing that's been interesting is the amount of actual work you need to do with the Big Data Appliance, beyond the initial installation and configuration by Oracle, to "on-board" it into a typical enterprise environment. BDAs are left with customers in a fully-working state, but like Exalytics and Exadata, the initial install and configuration is just the start: you've then got to integrate the platform with your corporate systems and get developers on-boarded onto the platform. Tasks we've typically provided assistance with on projects like these include:

  • Configuring Cloudera Manager and Hue to connect to the corporate LDAP directory, and working with their security team to create LDAP groups for developer and administrative access that we then used to restrict and control access to these tools
  • Configuring other tools such as RStudio Server so that developers can be more productive on the platform
  • Putting in place an HDFS directory structure to support incoming data loads and data archiving, as well as directories to hold the output datasets from the analysis work we're doing – all within the POSIX security setup that HDFS currently uses, which limits us to granting just owner, group and world permissions on directories
  • Working with the client's infrastructure team on things like alerting, troubleshooting and setting up backup and recovery – something that's surprisingly tricky in the Hadoop world, as Cloudera's backup tools only back up from Hadoop to Hadoop, and by definition your Hadoop system is going to hold a lot of data, the volume of which your current backup tools aren't going to handle easily

Once things are set up though, you've got a pretty comprehensive platform that can be expanded from the initial six nodes our customers' systems typically start with up to the full eighteen-node cluster, and that can use tools such as ODI for data loading and movement, Spark and MapReduce to process and analyse data, and Hive, Impala and Pig to provide end-user access. The diagram below shows a typical future-state architecture we propose for clients on this initial BDA "starter config", where we've moved up to CDH5.x, with Spark and YARN generally used as the processing framework and with additional products such as MongoDB used for document-type storage and analysis:

NewImage

 

Something that's turned out to be more of an issue on projects than I'd originally anticipated is complying with corporate security policies. By definition, most customers who buy an Oracle Big Data Appliance are going to be large customers with an existing Oracle database estate, and if they deal with the public they're going to have pretty strict security and privacy rules you'll need to adhere to. Something that therefore surprises most customers new to Hadoop is how insecure, or at least how easily compromised, the average Hadoop cluster is, with Hadoop FS shell security relying on trusted networks, and incoming user connections and interfaces such as ODBC not checking passwords at all.

Hadoop and the BDA only become what's termed "secure" when you link them to a Kerberos server, but not every customer has Kerberos set up, and unless you enable this feature right at the start when you set up the BDA, it's a fairly involved task to add retrospectively. Moreover, customers are used to fine-grained access control over their data, a single security model over their data and a good understanding in their heads of how security works on their database, whereas Hadoop is still a collection of fairly loosely-coupled components with pretty primitive access controls, and no easy way to delete or redact data when, for example, a particular country's privacy laws in theory mandate this.

Like everything, there's a solution if you're creative enough, with tools such as Apache Sentry providing role-based access control over Hive and Impala tables, alternative storage tools like HBase that permit read, write, update and delete operations on data rather than just HDFS's insert and (table or partition-level) delete, and tools like Cloudera Navigator and BDA features like Oracle Audit Vault that provide administrators with some sort of oversight as to who's accessing what data and when. As I mentioned in my blog post a couple of weeks ago, Oracle's Big Data SQL product addresses this requirement pretty well, potentially allowing us to apply Oracle security over both relational, and Hadoop, datasets, but for now we're working within current CDH4 capabilities and planning on introducing Apache Sentry for role-based access control to Hive and Impala in the coming weeks. We're also looking at implementing Cloudera's "secure gateway" cluster topology, with all access restricted to just a single gateway Hadoop node, and the cluster itself firewalled off with external access limited to that gateway node and HTTP / REST API access to the various cluster services, for example as shown in the diagram below:

NewImage

My main focus on Hadoop projects has been on the overall Hadoop system architecture, and interacting with the client's infrastructure and security teams to help them adopt the BDA and take over its maintenance. From the analysis side, it's been equally interesting, with a number of projects using tools such as R, Oracle R Advanced Analytics for Hadoop and core Hive/MapReduce for data analysis, Flume, Java and Python for data ingestion and processing, and most recently OBIEE 11g for publishing the results out to a wider audience. Following the development model that we outlined in the second post in our updated Information Management Reference Architecture blog series, we typically split delivery of each project's output into two distinct phases: a discovery phase, typically done using RStudio and Oracle R Advanced Analytics for Hadoop, where we explore and start understanding the dataset, presenting initial findings to the business and using their feedback and direction to inform the second phase; and a second, commercial exploitation phase where we use the discovery phase's outputs and models to drive a more structured dimensional model, with the output being in the form of OBIEE analyses and dashboards.

NewImage

We looked at several options for providing the datasets for OBIEE to query, with our initial idea being to connect OBIEE directly to Hive and Impala and let the users query the data in-place, directly on the Hadoop cluster, with an architecture like the one in the diagram below:

NewImage

In fact this turned out not to be possible: whilst OBIEE 11.1.1.7 can access Apache Hive datasources, it currently only ships with HiveServer1 ODBC support and no support for Cloudera Impala, which means we need to wait for a subsequent release of OBIEE 11g to report against the ODBC interfaces provided by CDH4 and CDH5 on the BDA (although, ironically, you can get HiveServer2 and Impala working with OBIEE 11.1.1.7 on Windows, though that platform isn't officially supported by Oracle for Hadoop access, only Linux). Either way, it soon became apparent that even if we could get Hive and Impala access working, in reality it made more sense to use Hadoop as the data ingestion and processing platform – providing access to data analysts at this point if they wanted access to the raw datasets – but with the output of this then being loaded into an Oracle Exadata database, either via Sqoop or via Oracle Loader for Hadoop and ideally orchestrated by Oracle Data Integrator 12c, and users then querying these Oracle tables rather than the Hive and Impala ones on the BDA, as shown in the diagram below.

NewImage

In practice, Oracle SQL is far more complete and expressive than HiveQL and Impala SQL, and it makes more sense to use Oracle as the query platform for the vast majority of users, with data analysts and data scientists still able to access the raw data on Hadoop using tools like Hive, R and (when we move to CDH5) Spark.
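To give one contrived example of what I mean (the monthly summary table and its columns below are made up), 12c's SQL pattern matching is the kind of query you simply couldn't express in HiveQL or Impala SQL at the time – here, finding airports with three or more consecutive month-on-month drops in flight volumes:

-- Hypothetical summary table: one row per airport per month, with a flight count
SELECT airport, start_month, end_month, months_falling
FROM   monthly_airport_flights
MATCH_RECOGNIZE (
  PARTITION BY airport
  ORDER BY cal_month
  MEASURES FIRST(cal_month) AS start_month,
           LAST(cal_month)  AS end_month,
           COUNT(*)         AS months_falling
  ONE ROW PER MATCH
  PATTERN ( down{3,} )
  DEFINE down AS flights < PREV(flights)
);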

The final thing that’s been interesting about working on Hadoop and Big Data Appliance projects is that 80% of it, in my opinion, is just the same as working on large enterprise data warehouse projects, with 20% being “the magic”. A large portion of your time is spent on analysing and setting up feeds into the system, just in this case you use tools like Flume instead of GoldenGate (though GoldenGate can also load into HDFS and Hive, something that’s useful for transactional database data sources vs. Flume’s focus on file and server log data sources). Another big part of the work is data processing, ingestion, reformatting and combining, again skills an ETL developer would have (though there’s much more reliance, at this point, on command-line tools and Unix utilities, albeit with a place for tools like ODI once you get to the set-based filtering, joining and aggregating phase). In most cases, the output of your analysis and processing will be Hive and Impala tables so that results can be analysed using tools such as OBIEE, and you therefore need skills in areas such as dimensional modelling, business analysis and dashboard prototyping as well as tool-specific skills such as OBIEE RPD development.

Where the "magic" happens, of course, is the data preparation and analysis that you do once the data is loaded, quite intensively and interactively in the discovery phase and then in the form of MapReduce and Spark jobs, Sqoop loads and Oozie workflows once you know what you're after and need to process the data into something more tabular for tools like OBIEE to access. We're building up a team competent in techniques such as large-scale data analysis, data visualisation, statistical analysis, text classification and sentiment analysis, and use of NoSQL and JSON-type data sources, which combined with our core BI, DW and ETL teams allows us to cover a project from end to end. It's still relatively early days, but we're encouraged by the response from our project customers so far and, to be honest, by the quality of the Oracle big data products and the Cloudera platform they're based around – and we're looking forward to helping other Oracle customers get the most out of their adoption of these new technologies.

If you're an Oracle customer looking to make your first move into the worlds of Hadoop, big data and advanced analytics techniques, feel free to drop me an email at mark.rittman@rittmanmead.com for some initial advice and guidance – the fact that we come from an Oracle-centric background typically makes it easier for us to relate these new concepts to the ones you're already familiar with. Similarly, if you're about to bring on board an Oracle Big Data Appliance system and want to know how best to integrate it with your existing Oracle BI, DW, data integration and systems management estate, get in contact and I'd be happy to share experiences and our delivery approach.

Categories: BI & Warehousing

Vote for Rittman Mead at the UKOUG Partner of the Year Awards 2014!

Rittman Mead Consulting - Mon, 2014-08-11 03:00

Rittman Mead are proud to announce that we've been nominated by UKOUG members and Oracle customers for five categories in the upcoming UKOUG Partner of the Year Awards 2014: Business Intelligence, Training, Managed Services, Operating Systems Storage and Hardware, and Emerging Partner, reflecting the range of products and services we now offer for customers in the UK and around the world.

NewImage

Although Rittman Mead are a worldwide organisation with offices in the US, India, Australia and now South Africa, our main operation is in the UK and for many years we’ve been a partner member of the UK Oracle User Group (UKOUG). Our consultants speak at UKOUG Special Interest Group events as well as the Tech and Apps conferences in December each year, we write articles for Oracle Scene, the UKOUG members’ magazine, and several of our team including Jon and myself have held various roles including SIG chair and deputy chair, board member and even editor of Oracle Scene.

Partners, along with Oracle customers and of course Oracle themselves, are a key part of the UK Oracle ecosystem and to recognise their contribution the UKOUG recently brought in their Partner of the Year Awards that are voted on by UKOUG members and Oracle customers in the region. As these awards are voted on by actual users and customers we’ve been especially pleased over the years to win several Oracle Business Intelligence Partner of the Year Gold awards, and last year we were honoured to receive awards in five categories, including Business Intelligence Partner of the Year, Training Partner of the Year and Engineered Systems Partner of the Year.

This year we've been nominated again in five categories, and if you like what we do we'd really appreciate your vote, which you can cast at any time up to the closing date, September 15th 2014. Voting is open to UKOUG members and Oracle customers and only takes a few minutes – the voting form is here and you don't need to be a UKOUG member, only an Oracle end-user or customer. These awards are a great recognition for the hard work our team puts in, so thanks in advance for any votes you can put in for us!

Categories: BI & Warehousing

Why Oracle Big Data SQL Potentially Solves a Big Issue with Hadoop Security

Rittman Mead Consulting - Sat, 2014-07-26 07:21

Oracle announced their Big Data SQL product a couple of weeks ago, which effectively extends Exadata's query-offloading to Hadoop data sources. I covered the launch a few days afterwards, focusing on how it implements Exadata's SmartScan on Hive and NoSQL data sources and provides a single metadata catalog over both relational, and Hadoop, data sources. In a Twitter conversation later in the day though, I made the comment that in my opinion the biggest benefit of Big Data SQL will be its ability to extend Oracle's security model to Hadoop data sources, because Hadoop security is still a bit of a mess:

To me the greatest benefit of Big Data SQL is the single security model; even with Sentry, Hadoop security is fragmented and a mess (IMO)

— Mark Rittman (@markrittman) July 17, 2014

I’ve been working on an Oracle Big Data Appliance project over the past few weeks, as the technical architect and initial sysadmin for the cluster, and it’s given me a first-hand experience of what security’s like on a Hadoop cluster. Over the past few weeks I’ve had to come up with a security policy covering HDFS, Hive and the Cloudera management tools (Cloudera Manager, Hue etc), and try and implement an access and authorisation approach that ensures only designated people can log in, and when they’re in, they can only see the data they’re supposed to see. Hadoop at this point, to my mind, suffers from a couple of major issues when it comes to security:

  • It’s fragmented, in that each tool or Hadoop product tends to have its own security setup, and the documentation is all split up, rapidly goes out of date, and is more of a reference than a tutorial (Cloudera’s Security documentation is one of the better examples, but it still splits the key information you need over several sections and several other docs)
  • It's full of holes, such that the default security setup is considered insecure in terms of users being able to spoof their details, and making it more secure is again an exercise in hunting through docs, with some pretty complex configuration steps you need to perform (for example, configuring Kerberos authentication, a prerequisite for things like Apache Sentry)

If we take a typical security policy that a large enterprise customer’s going to want to put in place, it’ll look something like this:

  • Users should only be able to log in via their corporate LDAP account, and we’ll want that login process to be secure so it can’t easily be bypassed
  • We want to be able to secure our datasets, so that only authorised users can view particular datasets, and there’s likely to be some groups we grant read-only access to, and others we grant read-write
  • The data loading processes for the Hadoop cluster need to be locked-down so they can’t overwrite the datasets of other applications
  • Our security policy ideally needs to sync up with, or be an extension of, our existing enterprise security policy, not something we maintain separately
  • We need to be able to audit and review who’s actually accessing what dataset, to ensure that these policies are being followed and enforced
  • We also need the ability to obfuscate or depersonalise data before it gets into the cluster, and also have the option of encrypting the data at-rest as well as on-the-wire

Back in the early days of Hadoop these types of security policy weren’t often needed, as the users of the Hadoop cluster were typically a small set of data scientists or analysts who’d been cleared already to view and work with the data in the cluster (or more likely, they did it and just didn’t tell anyone). But as we move to enterprise information management architectures such as the one outlined in my two-part blog post series a few weeks ago (pt.1, pt.2), the users of Hadoop and other “data reservoir” data sources are likely to increase significantly in number as data from these systems becomes just another part of the general enterprise data set.

NewImage

But in practice, this is hard to do. Let’s start with HDFS first, the Hadoop Distributed File System on which most Hadoop data is stored. HDFS aims to look as similar to a Linux or Unix-type filesystem as possible, with similar commands (mkdir, ls, chmod etc) and the same POSIX permissions model, where files and directories are associated with an owner and a group and where permissions are set for that owner, the group and all others. For example, in the HDFS file listing below, the “/user/cust_segment_analysis” directory is owned by the user “mrittman” and the group “marketing”, with the directory owner having full read, write and subdirectory traversal access to the directory, the group having read-only and subdirectory traversal access, and all others having no access at all.

[root@bdanode1 ~]# hadoop fs -ls /user
Found 13 items
drwxrwxrwx   - admin    admin               0 2014-06-02 16:06 /user/admin
drwxr-x---   - mrittman marketing           0 2014-07-26 21:31 /user/cust_segment_analysis
drwxr-xr-x   - hdfs     supergroup          0 2014-05-27 13:19 /user/hdfs
drwxrwxrwx   - mapred   hadoop              0 2014-05-25 20:47 /user/history
drwxrwxr-t   - hive     hive                0 2014-06-04 16:31 /user/hive
drwxr-xr-x   - hue      hue                 0 2014-05-31 18:51 /user/hue
drwxrwxr-x   - impala   impala              0 2014-05-25 20:54 /user/impala
drwxrwxr-x   - oozie    oozie               0 2014-05-25 20:52 /user/oozie
drwxrwxrwx   - oracle   oracle              0 2014-06-09 21:38 /user/oracle
drwxr-xr-x   - root     root                0 2014-06-06 16:25 /user/root
drwxr-xr-x   - sample   sample              0 2014-05-31 18:51 /user/sample
drwxr-x--x   - spark    spark               0 2014-05-25 20:45 /user/spark
drwxrwxr-x   - sqoop2   sqoop               0 2014-05-25 20:53 /user/sqoop2

Which all sounds great until you have another group that needs read-write access to the directory, but you're limited to just one group permission setting for the directory, which you've already used to set up read-only access for that first group. If you therefore need to set up different sets of security access for different groups, you typically end up creating multiple HDFS directories and multiple copies of the dataset in question, assigning each copy to a different group, which isn't all that convenient and gives you other problems in terms of maintenance and keeping it all in sync.

What you of course need is something like the “access control lists” (ACLs) you get with operating systems like Windows NT and MacOS, where you can define an arbitrary number of user groups and then assign each of them their own permission set on the directory and the files it contains. The most recent versions of Hadoop actually implement a form of ACL for HDFS, with this feature making its way into the recently-released Cloudera CDH5.1, but these ACLs are an addition to the standard POSIX user, group, others model and aren’t recommended for all files in your HDFS filesystem as according to the Hadoop docs “Best practice is to rely on traditional permission bits to implement most permission requirements, and define a smaller number of ACLs to augment the permission bits with a few exceptional rules. A file with an ACL incurs an additional cost in memory in the NameNode compared to a file that has only permission bits.” Still, it’s better than not having them at all, and I’d imagine using this feature for particular directories and sets of files that need more than one set of group permissions configured for them.

In most cases though, the way you’ll present data out to non-technical end-users and applications is through Hive and Impala tables, or through tools like Pig and Spark. Under the covers, these tools still use HDFS permissions to control access to the data within Hive and Impala tables, but again by default you’re limited to granting access to whole HDFS directories, or the files contained within those directories. Something that addresses this issue is a product called Apache Sentry, an open-source project within the Hadoop family that enables role-based access control for Hive and Impala tables. Oracle are one of the co-founders of the Sentry project and include it in the base software on the Big Data Appliance, and using Sentry you can grant SELECT, INSERT or ALL privileges to a group on a particular Hive or Impala table, rather than on the underlying HDFS directories and files. A form of fine-grained access control can be set up using Sentry by creating views with particular row-level security settings, giving you the basics of a database-like security policy that you can apply over the main way that users access data in the cluster.
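To give a flavour of what this looks like (a sketch only – the table, role and group names are made up, and depending on the Sentry release you may be defining these rules in a policy file rather than through GRANT statements), the role-based privileges are issued as SQL through HiveServer2 or Impala:

-- Create a role, grant it read-only access to a Hive table, and map it to an LDAP/OS group
CREATE ROLE marketing_readonly;
GRANT SELECT ON TABLE cust_segment_analysis TO ROLE marketing_readonly;
GRANT ROLE marketing_readonly TO GROUP marketing;

-- A crude form of row-level security: expose a filtered view rather than the base table
CREATE VIEW cust_segment_analysis_uk AS
  SELECT * FROM cust_segment_analysis WHERE country_code = 'UK';
GRANT SELECT ON TABLE cust_segment_analysis_uk TO ROLE marketing_readonly;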

But Sentry itself has a few significant prerequisites – you have to enable Kerberos authentication on your cluster, which you should do anyway because of the risk of account spoofing, but which is still a significant thing to set up – and of course you need to link Hive and Impala to your corporate LDAP server and configure them to work in the way that Sentry requires. Most importantly though, you’re still left with two separate security setups – one for your corporate data warehouse and relational data sources, and another for data accessed on Hadoop – and with all the disparate, partially-complete open-source products involved it’s still hard to be sure that data in your Hadoop cluster really is secure (though products like Cloudera Navigator aim to provide some form of data governance and auditing over these datasets). There’s also still no straightforward way to remove individual customers’ data from the Hadoop dataset (“data redaction”), no easy way to obfuscate or mask data, and no easy way (apart from the Hive views mentioned before) to restrict users to accessing only certain columns in a Hive or Impala table.

And so this is where Oracle’s Big Data SQL product could be very interesting. Big Data SQL takes the Exadata model of moving as much filtering and column-projection as it can to the storage server, adding Oracle SmartScan functionality to the Hadoop node and allowing it to understand the full Oracle SQL dialect (and PL/SQL security functions), rather than just the subset of SQL provided by HiveQL and Impala SQL.

More importantly, it’ll enable a single unified data dictionary over both Oracle and Hadoop data sources, presenting Hive tables and NoSQL data as regular Oracle tables and allowing the DBA to create data security, redaction and row-level filtering policies over both relational and Hadoop data – giving you potentially the ability to define a single security policy across all data in your overall information management architecture.
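
For example, if a Hive table surfaces in the Oracle data dictionary as an external table, you should in principle be able to apply the same Data Redaction policies to it that you would to a normal table. The sketch below uses the standard DBMS_REDACT API against a hypothetical table and column, and is illustrative rather than something taken from the Big Data SQL documentation:

-- Fully redact the email address column for everyone without a privileged role
BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema => 'BDA',
    object_name   => 'CUSTOMER_COMMENTS_HIVE',   -- hypothetical external table over a Hive source
    column_name   => 'CUST_EMAIL',
    policy_name   => 'redact_cust_email',
    function_type => DBMS_REDACT.FULL,
    expression    => 'SYS_CONTEXT(''SYS_SESSION_ROLES'',''PII_VIEWER'') = ''FALSE'''
  );
END;
/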

So I think this is actually a “big deal”, and potentially even more game-changing than the SmartScan functionality that got most of the attention at the Big Data SQL product launch. How well it’ll work in practice, and how much of it will be enabled on day one, is hard to say, but this feature meets a real need that our customers have right now, so I’ll be very interested to try it out when the product becomes available (presumably) later in the year.

Categories: BI & Warehousing

Taking a Look at the New Oracle Big Data SQL

Rittman Mead Consulting - Thu, 2014-07-17 13:15

Oracle launched their Oracle Big Data SQL product earlier this week, and it’ll be of interest to anyone who saw our series of posts a few weeks ago about the updated Oracle Information Management Reference Architecture, where Hadoop now sits alongside traditional Oracle data warehouses to provide what’s termed a “data reservoir”. In this type of architecture, Hadoop and its underlying technologies HDFS, Hive and schema-on-read databases provide an extension to the more structured relational Oracle data warehouses, making it possible to store and analyse much larger sets of data with much more diverse data types and structures; the issue that customers face when trying to implement this architecture is that Hadoop is a bit of a “wild west” in terms of data access methods, security and metadata, making it difficult for enterprises to come up with a consistent, over-arching data strategy that works for both types of data store.

Oracle Big Data SQL attempts to address this issue by providing a SQL access layer over Hadoop, managed by the Oracle database and integrated in with the regular SQL engine within the database. Where it differs from SQL-on-Hadoop technologies such as Apache Hive and Cloudera Impala is that there’s a single unified data dictionary, a single Oracle SQL dialect and the full management capabilities of the Oracle database over both sources, giving you the ability to define access controls spanning relational and Hadoop data, and to use full Oracle SQL (including analytic functions, complex joins and the like) without having to drop down into HiveQL or other Hadoop SQL dialects. Those of you who follow the blog or work with Oracle’s big data connector products probably know of a couple of current technologies that sound like this: Oracle Loader for Hadoop (OLH) is a bulk loader that copies Hive or HDFS data into an Oracle database, typically faster than a tool like Sqoop, whilst Oracle Direct Connector for HDFS (ODCH) gives the database the ability to define external tables over Hive or HDFS data, and then query that data using regular Oracle SQL.
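
For reference, the kind of external table ODCH sets up looks roughly like the sketch below – in practice the connector generates the DDL and the location files for you, and the column list, directory objects and preprocessor path shown here are assumptions rather than exact documented values:

-- External table over files in HDFS, read via the connector's hdfs_stream preprocessor
CREATE TABLE sales_hdfs_ext (
  prod_id      NUMBER,
  cust_id      NUMBER,
  amount_sold  NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY sales_ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
    FIELDS TERMINATED BY ','
  )
  LOCATION ('osch-sales-1', 'osch-sales-2')
)
REJECT LIMIT UNLIMITED;

-- ...after which it's just regular Oracle SQL
SELECT cust_id, SUM(amount_sold) FROM sales_hdfs_ext GROUP BY cust_id;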

Where ODCH falls short is that it treats the HDFS and Hive data as a single stream, making it easy to read once but, like regular external tables, slow to access frequently as there’s no ability to define indexes over the Hadoop data; OLH is also good, but you can only use it to bulk-load data into Oracle – you can’t use it to query data in place. Oracle Big Data SQL uses an approach similar to ODCH but crucially, it uses some Exadata concepts to move processing down to the Hadoop cluster, just as Exadata moves processing down to the Exadata storage cells (so much so that the project was called “Project Exadoop” internally within Oracle up to the launch) – which also means that it’s Exadata-only, and not available for Oracle databases running on non-Exadata hardware.

As explained by the launch blog post by Oracle’s Dan McClary, Oracle Big Data SQL includes components that install on the Hadoop cluster nodes that provide the same “SmartScan” functionality that Exadata uses to reduce network traffic between storage servers and compute servers. In the case of Big Data SQL, this SmartScan functionality retrieves just the columns of data requested in the query (a process referred to as “column projection”), and also only sends back those rows that are requested by the query predicate.
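
In other words, for a query like the hypothetical one below (the table name is just for illustration), only the two referenced columns, and only the rows matching the WHERE clause, would need to travel back from the Hadoop nodes to the database tier:

-- Only CUST_ID and AMOUNT_SOLD are projected, and only 2014 rows are returned,
-- rather than whole HDFS blocks being shipped back to the database
SELECT cust_id, SUM(amount_sold)
FROM   movie_sales_hive          -- external table mapped onto a Hive table
WHERE  sale_year = 2014
GROUP  BY cust_id;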

Combined with Hive’s ability to map unstructured data sources into regular columns and tables, and Big Data SQL’s support for Oracle NoSQL database, the promise of this new technology is the ability to run queries against both relational, Hadoop and NoSQL data sources using a common data dictionary and common set of identity and data access controls.

There are a couple of potential downsides, though. First off, Big Data SQL will only be available as part of Oracle Big Data Appliance, which, though an impressive bit of hardware and software, addresses a much smaller market than the total set of Oracle customers looking to combine relational and Hadoop-based data; it’s also restricted to Oracle 12c on Exadata, meaning you’ll most probably need to do a database upgrade even if you’ve already got the required Exadata servers in place. Finally, it’s also restricted to the Oracle-specific distribution of Cloudera Hadoop, though if you’re using the BDA you’ll be using this anyway.

My other concern, though, is that Oracle now focus on SQL as their only access mechanism into Hadoop and big data, in a similar way to how they focused on SQL as their access route into OLAP when they incorporated Oracle Express into the Oracle Database back in the mid-2000s. Focusing on SQL over multidimensional languages such as Express 4GL and MDX meant you missed the real point of using a multidimensional, OLAP database – which of course was being able to use a multidimensional query language – and my concern with Big Data SQL is that we’ll end up focusing on it rather than languages such as Spark, Pig and the NoSQL query languages which, combined with schema-on-read, are the real differentiators for Hadoop-based systems. As long as Big Data SQL is positioned as a “bonus” – a convenient way of getting data out of Hadoop once it’s been processed and analysed using more Hadoop-native technologies – then it will be a great enabling and acceptance technology for enterprises, rather than one that ends up restricting them.

We’re not aware of any beta program, and I don’t think the launch webcast mentioned a specific date or BDA version when Big Data SQL will be out, but with Openworld coming up soon I’d expect to hear more about this over the next few months. We’re involved in a couple of significant Oracle Big Data Appliance implementations at the moment, and this product would address a real, pressing need for our customers, so I’m looking forward to getting more involved with it over the next few months.

This article was updated on 18th July to add the fact that Big Data SQL is only available on Exadata, and is not a generic Oracle Database 12c technology.

Categories: BI & Warehousing

GoldenGate and Oracle Data Integrator – A Perfect Match in 12c… Part 4: Start Journalizing!

Rittman Mead Consulting - Thu, 2014-07-10 15:26

In this post, the finale of the four-part series “GoldenGate and Oracle Data Integrator – A Perfect Match in 12c”, I’ll walk through the setup of the ODI Models and start journalizing in “online” mode. This will utilize our customized JKM to build the GoldenGate parameter files based on the ODI Metadata and deploy them to both the source and target GoldenGate installation locations. Before I get into all of the details, let’s recap the first 3 posts and see how we arrived at this point.

Part one of the series, Getting Started, led us through a quick review of the Oracle Reference Architecture for Information Management and the tasks we’re trying to accomplish: loading both the Raw Data Reservoir (RDR) and Foundation schemas simultaneously, using Oracle GoldenGate 12c replication, and setting it all up via Oracle Data Integrator 12c. We also reviewed the setup of the GoldenGate JAgent process, necessary for communication between ODI and GoldenGate when using the “online” version of the Journalizing Knowledge Module.

In part two, we reviewed the Journalizing Knowledge Module, “JKM Oracle to Oracle Consistent (OGG Online)”, its new features, and how much it has been improved in ODI 12c. Full integration with Oracle GoldenGate, through the use of the new ODI Tool “OdiOggCommand”, allows for the setup and configuration of GoldenGate process groups, trail file directories, and table-level supplemental logging, all from the ODI JKM.

Most recently, part 3, titled Setup Journalizing, walked us through the customizations of the “JKM Oracle to Oracle Consistent (OGG Online)” that will allow us to create the source-to-foundation replication alongside the standard source-to-RDR setup. We added a set of options, the first to control whether or not we replicate to foundation and the second to capture the ODI Logical Schema corresponding to the foundation schema. Then we added the task that will create the source-to-foundation table mapping inside the GoldenGate replicat parameter file, and set the options appropriately. I’ve been using the ODI 12c Getting Started VM, with the 12.1.2 version of ODI, for my demo setup. If you haven’t done so already, you can download the latest version of the VM, with ODI 12.1.3, from the Oracle Technology Network. That’s enough recap – now on to the final steps for GoldenGate and ODI 12c integration, and let’s start journalizing!

Setup ODI Models

Create Models

We first need to create the ODI Models and Datastores for the Source, Staging (Raw Data Reservoir) and Foundation tables. I will typically reverse engineer the source tables into a Model first, then copy them to the Staging and Foundation Models. This approach will ensure the column names and data types remain consistent with the source. I then execute a Groovy script to create the additional data warehouse audit columns in each of the Foundation Datastores.

models

Configure JKM

Unlike the 11g version of ODI, in 12c the “JKM Oracle to Oracle Consistent (OGG Online)” Knowledge Module will be set on the source Model. Open up the Model, in this example, PM_SRC, and switch to the Journalizing tab.

model-journalizing

We’ll set the Journalizing Mode to “Consistent Set” and then choose the customized JKM that we have been working with in this example, “JKM Oracle to Oracle Consistent (OGG Online) RM”, from the dropdown list. Now we are presented with the GoldenGate Process Selection parameters and a list of KM Options to configure.

Set the Process Selection parameters for the Capture Process and Delivery Process by selecting the Logical Schemas created in Part 3 - Setup GoldenGate Topology – Schemas. This setting drives the naming of the Extract, Pump, and Replicat parameter files and process groups in GoldenGate. If you plan to use GoldenGate for the initial load, select the processes here as well. I’m not setting mine, as I typically use a batch load tool, such as Oracle Datapump or insert across DBLink to perform the initial load of the target. As you can see in the image below, you can also create the Oracle GoldenGate Logical Schemas from the Model.

model-jkm-process

Next are the set of Options, including the 2 new Options added in the previous post. We can leave several of the values as the default, as they are specific to particular character sets or implementation on a multi-node Oracle RAC setup.

model-jkm-options

The Options we do want to set:

ONLINE - Set to “true” to enable the automatic GoldenGate configuration when Start Journal is run.
LOCAL_TEMP_DIR – Enter a directory local to the machine on which the Start Journal process will be executed. Be sure the user executing the Start Journal process has privileges to create/modify/remove directories and files.
APPLY_FOUNDATION – Custom Option, set to “true” to enable the addition of the source-to-foundation mapping to the GoldenGate Replicat parameter file.
FND_LSCHEMA – The Logical Schema for the Foundation layer, necessary when APPLY_FOUNDATION is true.

After the Options are set, the Model can be saved and closed. Back in the Designer Navigator, add the Datastores to CDC by either selecting each individual Datastore and adding it or by right-clicking the Model and choosing to add the entire set all at once.

model-add-to-cdc

Before we get on with using the JKM, there is one thing I forgot to mention in the previous post. When setting up the Logical Schema for the GoldenGate “Delivery” process, you must also set the target Logical Schema. If you fail to do so, you’ll get an error stating “SnpLSchema does not exist” when attempting any Change Data Capture commands on the source Model.

logical-schema-set-target

With the JKM set and the tables added to Change Data Capture, we can now add a Subscriber. Subscribers allow multiple mappings to consume the change data from the J$ tables at different intervals. For example, a table may be consumed by one mapping every hour, and then by an additional mapping each night. Two different Subscribers would be used in this case. In our example, I’ll create a single Subscriber named “PERFECT_MATCH”. Make sure the process runs successfully, then it’s time to start Journalizing.

Start Capturing Changes

With the setup and configuration out of the way, the rest is up to the JKM. We’re now able to right-click the source Model and select Change Data Capture–>Start Journal. This executes all of the Start Journal related steps in the JKM, which will create the CDC Framework (J$ tables, JV$ views, etc.), generate and deploy the GoldenGate parameter files (Extract, Pump, and Replicat), and configure and start the GoldenGate process groups. Be sure that the source and target GoldenGate Manager and JAgent processes are running prior to executing the Start Journal process. Also, make sure that the database is in ArchiveLog mode and ready for GoldenGate to capture transactions.

After the Start Journal process is successfully completed, you can browse to the source GoldenGate home directory, run GGSCI, and view the status of the OGG process groups.

ogg-src-info-all

It looks like the Extract and Pump are in place and running. Now let’s check out the Replicat on the target GoldenGate installation.

ogg-trg-info-all

Here we see that the Replicat process is in place, but not actually running. Remember from our JKM editing that we commented out the step that will start the Replicat process. This is to ensure we perform an initial load prior to applying any captured change data to the target.

The last bit of work that must be completed is the initial load. I won’t go into details here, but the way I like to do it is to load the source data into the target as of a captured SCN, using Oracle Datapump or a DBLink, rather than using GoldenGate process groups to perform the load. Note: the ODI 12c Getting Started Guide doesn’t even use the OGG initial load! Then, we can start the Replicat in GoldenGate after the captured SCN, using the same approach I wrote about in a previous blog on ODI and GoldenGate 11g.
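
As a rough sketch of that approach (the table names, database link and SCN value below are all made up for the example), you capture an SCN on the source, load the target consistently as of that SCN, and then start the Replicat from the same point:

-- On the source database, note the current SCN before the initial load
SELECT current_scn FROM v$database;

-- Load each target table over a database link, consistent as of that SCN
INSERT /*+ APPEND */ INTO edw_fnd.src_city
SELECT * FROM src_city@pm_src_dblink AS OF SCN 58739291;
COMMIT;

-- Then, in GGSCI on the target, start the Replicat after that SCN
GGSCI> START REPLICAT RPMEDWP, AFTERCSN 58739291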

JKM “Online” Mode

Beyond adding the parameter files and process groups, what exactly did the “online” version of the JKM do for us? If you browse to the directory that we set in the JKM Options under LOCAL_TEMP_DIR, you’ll find all of the GoldenGate files generated by the JKM. These files were generated locally, then uploaded to their proper GoldenGate home directory. Without “online” mode, they would have had to be copied to GoldenGate manually.

jkm-steps

Once uploaded, the obey files (batch files for GoldenGate commands) were executed.

OdiOggCommand "-LSCHEMA=EXTPMSRC" "-OPERATION=EXECUTEOBEY" "-OBEY_FILE=/home/oracle/Oracle/Middleware/oggsrc/EXTPMSRC.oby"

And finally, JKM steps were generated to perform the creation of the process groups in GoldenGate.

OdiOggCommand "-LSCHEMA=EXTPMSRC" "-OPERATION=EXECUTECMD"
add extract EXTPMSRC, tranlog, begin now 
add exttrail /home/oracle/Oracle/Middleware/oggsrc/dirdat/oc, extract EXTPMSRC, megabytes 100
stop extract EXTPMSRC
start extract EXTPMSRC

If you ever need to stop the change data capture process, either to add additional tables or make modifications to the metadata, you can run the Drop Journal process. Not only will the CDC Framework of tables and views be removed, but the “online” mode also reaches into GoldenGate and drops the process groups that were generated by the JKM.

OdiOggCommand "-LSCHEMA=EXTPMSRC" "-OPERATION=EXECUTECMD"
stop extract RPMEDWP
delete extract RPMEDWP

In conclusion, the integration between GoldenGate and Oracle Data Integrator  in 12c has been vastly improved over the 11g version. The ability to manage the entire setup process from within ODI is a big step forward, and I can only see these two products being further integrated in future releases. If you have any questions or comments about ODI or GoldenGate, or would like some help with your own implementation, feel free to add a comment below or reach out to me at michael.rainey@rittmanmead.com.

 

Categories: BI & Warehousing

Simple Ways to Simplify: Quick Fixes to Enhance OBIEE Visuals

Rittman Mead Consulting - Tue, 2014-07-08 13:18

Over the past couple of months I worked with a few clients in order to conduct an assessment on dashboard and analysis design. In most cases I was noticing that the dashboards were overflowing with content to the point that there was really no discernible “story” or “flow” to the information. Most of the dashboards were an overwhelming hub of large tables, excess dashboard prompts, and complicated charts and graphs. Many times the client complaints were that the dashboards were not being fully adopted by the user base. The users knew that something was in fact wrong with what they were looking at, but just couldn’t put their finger on why the analysis process was so frustrating. While this may seem unavoidable during a time when you are trying to provide your users with what they want and “need,” there are a few simple ways to aid this issue. During the development process, simplicity is often overlooked and undervalued, and that is why a few key design principles can drastically improve the user experience.

In many cases that I’ve seen, all dashboards start with the good intention of quickly relaying information to the user. However, as the requirements process grows in length, more and more compromise is made in keeping clean, consistent, functional design. Many of the dashboards in my assessments often hand the user everything they ask for without ever questioning the value. Developers often settle on poor design just to have something accepted by their users and released into production.

While each client is unique and has their own set of issues to resolve, there are a few consistent principles of dashboard design that can be applied for everyone.

Top-Down Analysis – The practice of allowing users to drill from summary to detail gives your user community the ability to make their dashboards as dynamic as they need to be. This method never gives the user too much, or too little, information. With detail being a choice rather than the default, the user can log in and choose to drill into further detail, rather than being presented with everything all at once. While this is a widely accepted best practice when it comes to dashboard design, this principle is ignored more than you would think. Odds are that your VP does not need to see 50 rows of detail-level information in a table every time they view their dashboard. The primary purpose of presenting this detail level is to answer a very important question that should be posed by analyzing your data at the summary level. The user can examine something simple like a trend analysis, and then decide whether further examination is needed. The benefit of this common principle is that your user is never overwhelmed and irritated with an overload of information. The dashboard should be treated with the same care and consideration you would give any other product your organization delivers. Your organization’s dashboard (product) should be a joy to use, not an experience coupled with frustration.

top down image

Simplicity and Trimming the Fat – This is another very simple, yet often ignored, design principle. From my observation, many developers will create a chart or graph and leave all the default settings – no modifications needed, right? While there is nothing inherently wrong with this, the defaults leave a lot of extra pixels on the graph, such as unneeded axis titles, shadowing, canvas size, etc. With just a little effort here, you can remove all of these unnecessary data pixels (pictured below) and provide a cleaner, more professional design. The point that I’m trying to make is, let’s not get lazy with our visualizations. Instead, we should give a lot of thought to what is useful in the visualization, and what can be discarded without hindering the message. The less cluttered we make our visualizations, the more pleasant the user experience will be. And as we all know, the happier our user community is, the easier your life as a developer will be.

default to flat

Options for Option’s Sake – The types of visualizations used for a dashboard are one of the most important criteria for user adoption. Choosing visualizations that do not adequately display the type of analysis needed, or that fail to tell the correct story about the data, can be a frustrating waste of time for the user. Just because there are a lot of available graphing options in your analytical tool does not mean they all need to be used. This mistake is often made in an attempt to visually enhance the dashboard, or to add “variety”. Try to consider things like what type of scale is in my graph (nominal or interval, pictured below), or do I want to provide summary or detail? Be sure to choose your graph based on these factors, rather than picking a graph that you think will add variety and then figuring out how you can make it useful.

interval graphs

Visually appealing dashboards are important, but only when the graphs are enhancing the user’s analytical experience. These mistakes are very costly because the overall goal of a dashboard is not to provide variety for variety’s sake, but to quickly and accurately relay a message. By focusing only on visualization variety, we run a terrible risk of rendering a dashboard useless.

There are a lot of great resources out there that provide more detail and can surely take your dashboards to the next level, so I certainly suggest reading up on information design methodology. I think the principles I’ve listed above are a great way to get started, and provide some quick fixes on the road to enhancing the user experience within your organization.

 

Categories: BI & Warehousing

GoldenGate and Oracle Data Integrator – A Perfect Match in 12c… Part 3: Setup Journalizing

Rittman Mead Consulting - Wed, 2014-07-02 23:39

After a short vacation, some exciting news, and a busy few weeks (including KScope14 in Seattle, WA), it’s time to get the “GoldenGate and Oracle Data Integrator – A Perfect Match in 12c” blog series rolling again. Hopefully readers can find some time between World Cup matches to try integrating ODI and GoldenGate on their own!

To recap my previous two posts on this subject, I first started by showing the latest Information Management Reference Architecture at a high-level (described in further detail by Mark Rittman) and worked through the JAgent configuration, necessary for communication between ODI and GoldenGate. In the second post, I walked through the changes made to the GoldenGate JKM in ODI 12c and laid out the necessary edits for loading the Foundation layer at a high-level. Now, it’s time to make the edits to the JKM and set up the ODI metadata.

Before I jump into the JKM customization, let’s go through a brief review of the foundation layer and its purpose. The foundation schema contains tables that are essentially duplicates of the source table structure, but with the addition of the foundation audit columns, described below, that allow for the storage of all transactional history in the tables.

FND_SCN (System Change Number)
FND_COMMIT_DATE (when the change was committed)
FND_DML_TYPE (DML type for the transaction: insert, update, delete)

The GoldenGate replicat parameter file must be set up to map the source transactions into the foundation tables using the INSERTALLRECORDS option. This is the same option that the replicat uses to load the J$ tables, allowing only inserts and no updates or deletes. A few changes to the JKM will allow us to choose whether or not we want to load the Foundation schema tables via GoldenGate.

Edit the Journalizing Knowledge Module

To start, make a copy of the “JKM Oracle to Oracle Consistent (OGG Online)” so we don’t modify the original. Now we’re ready to make our changes.

Add New Options

A couple of new Options will need to be added to enable the additional feature of loading the foundation schema, while still maintaining the original JKM code. Option values are set during the configuration of the JKM on the Model, but can also have a default in the JKM.

APPLY_FOUNDATION

new-option-apply-fnd

This option, when true, will enable this step during the Start Journal process, allowing it to generate the source-to-foundation mapping statement in the Replicat (apply) parameter file.

FND_LSCHEMA

new-option-fnd-schema

This option will be set to the Logical Schema name for the Foundation layer, and will be used to look up the physical database schema name that is output in the GoldenGate replicat parameter file.

Add a New Task

With the options created, we can now add the additional task to the JKM that will create the source-to-foundation table mappings in the GoldenGate replicat parameter file. The quickest way to add the task is to duplicate a current task. Open the JKM to the Tasks tab and scroll down to the “Create apply prm (3)” step. Right-click the task and select Duplicate. A copy of the task will be created in just the position we want, immediately after the step we duplicated.

Rename the step to “Create apply prm (4) RM”, adding the additional RM tag so it’s easily identifiable as a custom step. From the properties, open the Edit Expression dialog for the Target Command. The map statement, just below the OdiOutFile line, will need to be modified. First, remove the IF statement code, as the execution of this step will be driven by the APPLY_FOUNDATION option being set to true.

Here’s a look at the final code after editing.

map <%= odiRef.getObjectName("L", odiRef.getJrnInfo("TABLE_NAME"), odiRef.getOggModelInfo("SRC_LSCHEMA"), "D") %>, TARGET <%= odiRef.getSchemaName("" + odiRef.getOption("FND_LSCHEMA") + "","D") %>.<%= odiRef.getJrnInfo("TABLE_NAME") %>, KEYCOLS (<%= odiRef.getColList("", "[COL_NAME]", ", ", "", "PK") %>, FND_SCN)<%if (!odiRef.getOption("NB_APPLY_PROCESS").equals("1")) {%>, FILTER (@RANGE(#ODI_APPLY_NUMBER,<%= nbApplyProcesses %>,<%= odiRef.getColList("", "[COL_NAME]", ", ", "", "PK") %>))<% } %> INSERTALLRECORDS,
COLMAP (
USEDEFAULTS,
FND_COMMIT_DATE = @GETENV('GGHEADER' , 'COMMITTIMESTAMP'),
FND_SCN = @GETENV('TRANSACTION' , 'CSN'),
FND_DML_TYPE = @GETENV('GGHEADER' , 'OPTYPE')
);

The output of this step is going to be a mapping for each source-to-foundation table in the GoldenGate replicat parameter file, similar to this:

map PM_SRC.SRC_CITY, TARGET EDW_FND.SRC_CITY, KEYCOLS (CITY_ID, FND_SCN) INSERTALLRECORDS,
COLMAP (
USEDEFAULTS,
FND_COMMIT_DATE = @GETENV('GGHEADER' , 'COMMITTIMESTAMP'),
FND_SCN = @GETENV('TRANSACTION' , 'CSN'),
FND_DML_TYPE = @GETENV('GGHEADER' , 'OPTYPE')
);

The column mappings (COLMAP clause) are hard-coded into the JKM, with the parameter USEDEFAULTS mapping each column one-to-one. We also hard-code each foundation audit column mapping to the appropriate environment variable from the GoldenGate trail file. Learn more about the GETENV GoldenGate function here.

The bulk of the editing on this step is done to the MAP statement. The out-of-the-box JKM is setup to apply transactional changes to both the J$, or change table, and fully replicated table. Now we need to add the mapping to the foundation table. In order to do so, we first need to identify the foundation schema and table name for the target table using the ODI Substitution API.

map ... TARGET <%= odiRef.getSchemaName("" + odiRef.getOption("FND_LSCHEMA") + "", "D") %> ...

The nested Substitution API call allows us to get the physical database schema name based on the ODI Logical Schema that we will set in the option FND_LSCHEMA, during setup of the JKM on the ODI Model. Then, we concatenate the target table name with a dot (.) in between to get the fully qualified table name (e.g. EDW_FND.SRC_CITY).

... KEYCOLS (<%= odiRef.getColList("", "[COL_NAME]", ", ", "", "PK") %>, FND_SCN) ...

We also added the FND_SCN to the KEYCOLS clause, forcing the uniqueness of each row in the foundation tables. Because we only insert records into this table, the natural key will most likely be duplicated numerous times should a record be updated or deleted on the source.

Set Options

The previously created task,  “Create apply prm (4) RM”, should be set to execute only when the APPLY_FOUNDATION option is “true”. On this step, go to the Properties window and choose the Options tab. Deselect all options except APPLY_FOUNDATION, and when Start Journal is run, this step will be skipped unless APPLY_FOUNDATION is true.

jkm-set-task-option

Edit Task

Finally, we need to make a simple change to the “Execute apply commands online” task. First, add the custom step indicator (in my example, RM) to the end of the task name. In the target command expression, comment out the “start replicat …” command by using a double-dash.

--start replicat ...

This prevents GoldenGate from starting the replicat process automatically, as we’ll first need to complete an initial load of the source data to the target before we can begin replication of new transactions.

Additional Setup

The GoldenGate Manager and JAgent are ready to go, as is the customized “JKM Oracle to Oracle Consistent (OGG Online)” Journalizing Knowledge Module. Now we need to setup the Topology for both GoldenGate and the data sources.

Setup GoldenGate Topology - Data Servers

In order to properly use the “online” integration between GoldenGate and Oracle Data Integrator, a connection must be setup for the GoldenGate source and target. These will be created as ODI Data Servers, just as you would create an Oracle database connection. But, rather than provide a JDBC url, we will enter connection information for the JAgent that we configured in the initial post in the series.

First, open up the Physical Architecture under the Topology navigator and find the Oracle GoldenGate technology. Right-click and create a new Data Server.

create-ogg-dataserver

Fill out the information regarding the GoldenGate JAgent and Manager. To find the JAgent port, browse to the GG_HOME/cfg directory and open “Config.properties” in a text viewer. Down towards the bottom, the “jagent.rmi.port”, which is used when OEM is enabled, can be found.

####################################################################
## jagent.rmi.port ###
## RMI Port which EM Agent will use to connect to JAgent ###
## RMI Port will only be used if agent.type.enabled=OEM ###
####################################################################
jagent.rmi.port=5572

The rest of the connection information can be recalled from the JAgent setup.

setup-ogg-dataserver

Once completed, test the connection to ensure all of the parameters are correct. Be sure to setup a Data Server for both the source and target, as each will have its own JAgent connection information.

Setup GoldenGate Topology - Schemas

Now that the connection is set, the Physical Schema for both the GoldenGate source and target must be created. These schemas tie directly to the GoldenGate process groups and will be the name of the generated parameter files. Under the source Data Server, create a new Physical Schema. Choose the process type of “Capture”, provide a name (8 characters or less due to GoldenGate restrictions), and enter the trail file paths for the source and target trail files.

Create the Logical Schema just as you would with any other ODI Technology, and the extract process group schema is set.

For the target, or replicat, process group, perform the same actions on the GoldenGate target Data Server. This time, we just need to specify the target trail file directory, the discard directory (where GoldenGate reporting and discarded records will be stored), and the source definitions directory. The source definitions file is a GoldenGate representation of the source table structure, used when the source and target table structures do not match. The Online JKM will create and place this file in the source definitions directory.
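
In case you’ve not come across source definitions files before, they’re normally produced by GoldenGate’s defgen utility from a parameter file along these lines (the schema, credentials and filenames below are just examples – with the online JKM you don’t need to create this by hand):

-- defgen parameter file (e.g. dirprm/pm_src_defgen.prm)
DEFSFILE ./dirdef/pm_src.def, PURGE
USERID ogg_user, PASSWORD ogg_password
TABLE PM_SRC.*;

-- generate the definitions file, then copy it to the target's source definitions directory
defgen paramfile dirprm/pm_src_defgen.prm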

Again, setup the Logical Schema as usual and the connections and process group schemas are ready to go!

The final piece of the puzzle is to setup the source and target data warehouse Data Servers, Physical Schemas, and Logical Schemas. Use the standard best practices for this setup, and then it’s time to create ODI Models and start journalizing. In the next post, Part 4 of the series, we’ll walk through applying the JKM to the source Model and start journalizing using the Online approach to GoldenGate and ODI integration.

Categories: BI & Warehousing

Going Beyond the Summary Advisor with TimesTen for Exalytics

Rittman Mead Consulting - Sun, 2014-06-22 17:18

I’m over in Seattle at the moment for ODTUG KScope’14, and one of the topics I’m presenting on this week is the use of TimesTen with OBIEE; what I’m going to talk about is taking TimesTen beyond its current use as an in-memory cache for aggregates created by the Summary Advisor, and instead using it to store additional custom aggregates, “hot” data from the source data warehouse, and other custom structures created using SQL*Developer and potentially loaded using Oracle Data Integrator. The point of the session is to show one of the main benefits of TimesTen even in the world of the new In-Memory Option for Oracle Database – it’s a flexible but Oracle-compatible database that’s typically under the control of the BI team and open to much more customisation than most customers realise.

To illustrate the point, I’m going to run through an example using the Oracle Database Sales History (SH) schema as the source in this article, with the example going through five stages of development:

1. First, I’ll load TimesTen for Exalytics in the usual way, by having the Summary Advisor run some recommendations and then generate aggregates to load into TimesTen.

2. Then, I’ll add some of my own aggregates into TimesTen, by using the Aggregate Persistence Wizard, so that we’ve now got a base set of aggregates from me (ones that I know we’ll need) plus whatever the Summary Advisor recommends based on historic query patterns.

3. Next, I’ll use TimesTen to hold what we refer to as “hot data” – the actual transactional data from the source database in this case from the last six months. So now, TimesTen will contain both the full transaction-level data that users are most likely to query at detail-level, plus aggregations of the full dataset over time, giving us even more chance that TimesTen will be able to fulfil a user’s query.

4. Then I’ll supplement this data in TimesTen with some additional reference data, that I’ll bring in from file using a tool like Oracle Data Integrator or the Oracle Database external table feature, modifying the existing TimesTen table structure using SQL*Developer to hold these additional dimension attributes.

5. Finally, I’ll set up incremental refresh of all of this, in order to avoid the full table-drop-and-reload approach that the Summary Advisor and Aggregate Persistence Wizard use, and to make data refreshing more efficient

Let’s start with steps 1 and 2, creating the aggregate layer for the underlying Sales History Oracle data source.

1. Creating Initial Aggregate using the Summary Advisor

We can use OBIEE’s Summary Advisor, and the underlying Aggregate Persistence mechanism that the Summary Advisor uses, to build the in-memory aggregates for our underlying datasource. Starting with the simplest part first, we’ll use the Summary Advisor to create a set of aggregates based off of recommendations from historic query data.

Running the Summary Advisor in my case brings up a recommendation to create three aggregates:

Running the script that the Summary Advisor generates brings up one of the issues that you’re likely to hit when using this tool – your RPD has to be absolutely “rock solid” in terms of correctness, otherwise your aggregate build will fail. In the case of this example, the Summary Advisor aggregate persistence script starts off running OK, but then errors at the end when one of the aggregate tables fails to build:

c:\Middleware\Oracle_BI1\bifoundation\server\bin>nqcmd -d coreapplication_OH799
36239 -u weblogic -p welcome1 -s c:\ttscripts\summ_adv_SH_full_blog.sql

-------------------------------------------------------------------------------
Oracle BI ODBC Client
Copyright (c) 1997-2013 Oracle Corporation, All rights reserved
-------------------------------------------------------------------------------


create aggregates

"ag_561184940"
for "Sales History (Oracle + TT)"."SALES"("AMOUNT_SOLD","QUANTITY_SOLD")
at levels ("Sales History (Oracle + TT)"."CustomerDim"."City", "Sales History
Oracle + TT)"."TimesDim"."Month", "Sales History (Oracle + TT)"."ProductsDim"."Category")
using connection pool "tt_aggr_store"."Connection Pool"
in "tt_aggr_store".."EXALYTICS",

…

"ag_1162984574"
for "Sales History (Oracle + TT)"."SALES"("AMOUNT_SOLD","QUANTITY_SOLD")
at levels ("Sales History (Oracle + TT)"."TimesDim"."Year", "Sales History (Or
cle + TT)"."ProductsDim"."Category", "Sales History (Oracle + TT)"."PromoDim"."Promo Subcategory")
using connection pool "tt_aggr_store"."Connection Pool"
in "tt_aggr_store".."EXALYTICS"
[10058][State: S1000] [NQODBC] [SQL_STATE: S1000] [nQSError: 10058] A general error has occurred.
Statement preparation failed


Processed: 1 queries
Encountered 1 errors

Looking at the list of tables left in TimesTen at that point, I can see that one of the dimensions failed to build, which then cascading down to the dependent fact table build failing:

C:\Users\Administrator>ttisql

Copyright (c) 1996, 2013, Oracle and/or its affiliates. All rights reserved.

Type ? or "help" for help, type "exit" to quit ttIsql.

Command> connect "DSN=tt_exalytics_3;uid=exalytics;pwd=welcome1";

Connection successful: DSN=tt_exalytics_3;UID=exalytics;DataStore=C:\TimesTen\tt_data_3;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=US7ASCII;DRIVER=C:\TimesTen\TT1122~2\bin\ttdv1122.dll;LogDir=C:\TimesTen\tt_data_3;PermSize=4000;TempSize=2000;LockWait=60010.0;SQLQueryTimeout=60000;TypeMode=0;QueryThreshold=60000;PLSCOPE_SETTINGS=IDENTIFIERS:NONE;RangeIndexType=1;(Default setting AutoCommit=1)

Command> tables;

  EXALYTICS.SA_CATEGOR0000AFE8
  EXALYTICS.SA_CHANNEL0000B8F9
  EXALYTICS.SA_CITY0000AC6A
  EXALYTICS.SA_MONTH0000AC81
  EXALYTICS.SA_PROMO_S0000B8E8
  EXALYTICS.TEST

6 tables found.

Summary Advisor aggregates failing to build is something that regular Exalytics developers will be used to, at least from when they first used the tool. The trick is to make sure you run the Model Checker, within the BI Administration Tool, before you try to generate any aggregates; whilst your RPD might pass the regular consistency check and be valid as a repository that can go online and be used for reporting, it may not be consistent or logically correct from an aggregation standpoint. Looking at the NQQuery.log file, I can see that the first aggregate failed to build because aggregate persistence rejected the shared product dimension that all of the aggregate fact tables used.

Running the model checker on my business model, I can see there are two errors listed for the product dimension, and a bunch of warnings where I’ve not added dimension attribute columns to their relevant dimension level.

Running the logical SQL given in the two error messages at the start, I can see that there’s an error in the data that I’m using for my Products dimension data source, in that one or more of the dimension levels contains keys that are duplicates – in this case because  there are two “Accessories” product subcategories with different IDs. I go into SQL*Developer and correct this issue, and also add the logical columns listed as “warnings” into their respective logical dimension levels, so that running the model checker again gives me this message instead:

I then pick two of the aggregate recommendations and run the script for them, so in the end I’m left with the aggregate tables below in my RPD.

Thereafter, I can run the Summary Advisor again to generate some more recommendations, or I can add my own specific aggregates into the TimesTen database using the Aggregate Persistence Wizard, for example to add an aggregate I know users are going to want to use.

2. Adding Custom Aggregates using the Aggregate Persistence Wizard

Generating aggregates using the Aggregate Persistence Wizard uses the same underlying mechanism to create and populate the aggregate tables, but this time I select the specific measures, dimensions and hierarchy levels I want to aggregate by. In this instance, I use the Aggregate Persistence Wizard to create an additional aggregate using the Promotions dimension, as I know some new reports coming along will make use of this pre-aggregation.

So where I am at the end of all this is with a TimesTen aggregate data mart containing three aggregate fact tables – two recommended by the Summary Advisor, one I added myself – along with supporting dimension tables.

And this is great for handling dashboard queries that request summarised data, through a graph or pivot table analysis. But what about when users want to drill-down to the detail, or run reports against recent transactional activity? That’s where the second TimesTen use-case, “hot data”, comes in.

3. Populating TimesTen with Transactional “Hot Data”

So far we’ve looked at populating our TimesTen database with aggregate data, but what about making the most recent set of transactions available to users in TimesTen too, so that analyses running against recent activity run fast even at the detail level? To do this we can use the ttImportFromOracle utility to replicate the Sales History schema into TimesTen, and then edit the data loading script it generates to only load the last six months of data.

This utility only works for Oracle sources, and ships with recent versions of TimesTen in the /support directory. Using it, I can have the utility scan the data in my source database in order to recommend the most space-efficient TimesTen datatypes, and I can also use it to recommend compression settings for use with the TimesTen for Exalytics Columnar Compression feature.

As ttImportFromOracle requires you to have a TimesTen schema with the same name as the one you’re importing from in Oracle, I create a new schema in my TimesTen for Exalytics database, ready for importing from the Oracle source:

create user sh identified by welcome1;
grant create session to sh;
grant create table to sh;
grant select on SYS.OBJ$ to sh;
grant admin to sh;

Then I can use the utility to create the DDL and data loading scripts that I’ll then use to import the hot data from Oracle.

c:\TimesTen\tt1122_64_3\support>ttImportFromOracle.exe -oraconn sh/password@orcl  -tables CUSTOMERS PRODUCTS SALES CHANNELS PROMOTIONS TIMES -typeMap 2,1 -compression 1

Beginning processing
Resolving any tablename wildcards
Eliminating any duplicate tables
Getting metadata from source
Generating database user list
Assigning TimesTen datatypes
Analyzing source tables (this may take some time)
Analyzing table 'SH.CUSTOMERS' ...
Analyzing table 'SH.PRODUCTS' ...
Analyzing table 'SH.SALES' ...
Analyzing table 'SH.CHANNELS' ...
Analyzing table 'SH.PROMOTIONS' ...
Analyzing table 'SH.TIMES' ...
Optimizing TimesTen datatypes
Estimating compression ratios
Generating output files
Finished processing

This set of scripts creates the tables and indexes to hold the replicated data from Oracle, and I can edit the “LoadData.sql” script that comes with the DDL scripts to load just the last six months of data. To do this I split the “ttLoadFromOracle” TimesTen procedure calls in this script into ones for the dimensions and one for the fact table load, which, after amending it to load just six months of data, looks like this:

timing 1; 

call ttLoadFromOracle('SH', 'SALES', 'SELECT S.PROD_ID, S.CUST_ID, S.TIME_ID, S.CHANNEL_ID, S.PROMO_ID, S.QUANTITY_SOLD, S.AMOUNT_SOLD 
FROM SH.SALES S, SH.TIMES T 
WHERE S.TIME_ID = T.TIME_ID 
AND  T.CALENDAR_MONTH_DESC > ''2001-06''');

Then I connect to TimesTen as the SH user, providing the password to the Oracle database as part of the connect string, and then run the scripts I’ve just generated:

Command> connect "DSN=tt_exalytics_3;uid=sh;password=welcome1;oraclepwd=password";
Command> run c:\ttimportscripts\CreateTables.sql
Command> run c:\ttimportscripts\LoadData_dims.sql
Command> run c:\ttimportscripts\LoadData_fact.sql
Command> run c:\ttimportscripts\CreateIndexes.sql
Command> run c:\ttimportscripts\UpdateStats.sql

I can then go over to SQL*Developer and see the new TimesTen tables there, with the optimised datatypes provided by ttImportFromOracle:

and also import these new tables into the RPD, adding an extra schema alongside the aggregate tables:

Then, the way to make use of this six months of transactional data in TimesTen is to set up “fragmentation” in the OBIEE RPD, so that the BI Server goes to the TimesTen data source when queries require data from the past six months, and the standard Oracle datasource if data further back from that is required; in addition, because of the aggregates we’ve also set up, it’ll go to the TimesTen aggregate tables if queries request aggregated data, so TimesTen performs the role of holding both “hot” data, and aggregate data.

The first step in setting this up is to map in the new TimesTen tables as additional logical table sources in the existing business model. When you map these in, the new TimesTen logical table sources sit alongside the existing Oracle and TimesTen aggregate logical table sources; make sure you remove any additional logical table keys that might come across with the new TimesTen tables.

Then, for just the fact table within the logical model, edit the Oracle detail-level and the TimesTen “hot data” detail-level table sources, and add fragmentation conditions to define what time period each source covers.

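Roughly speaking, assuming the six-month boundary used in the load script earlier (and assuming CALENDAR_MONTH_DESC is the logical column used – the exact column name will depend on your business model), the two fragmentation conditions would be along these lines:

TimesTen “hot data” fact table source:
"Sales History (Oracle + TT)"."TIMES"."CALENDAR_MONTH_DESC" >= '2001-07'

Oracle detail-level fact table source:
"Sales History (Oracle + TT)"."TIMES"."CALENDAR_MONTH_DESC" < '2001-07'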

Do this just for the fact table logical table source; then, when a query comes through to the BI Server, if it needs to refer to the TimesTen logical table source it’ll join out to the relevant TimesTen hot data logical table sources to get the rest of the required data, and when it needs to go to the Oracle fact table logical table source, it’ll join out to the Oracle LTSs.

Then, when you include the TimesTen aggregate tables in the architecture as well, you’ve got three potential sources of data for user reports, all handled seamlessly and automatically by the BI Server: the Oracle and TimesTen “hot” data sources for detail-level reporting, with one or the other (or both) used for queries at the transaction level, and the TimesTen aggregate tables used when they would answer a query faster than rolling-up the detail-level sources on-the-fly.

4. Supplementing with Additional Reference Data

One of the advantages of using a mid-tier database like TimesTen, that’s under the control of the BI department, is that you can be a lot more “agile” in terms of bringing in new data sources, than is often the case when you’re reporting against a data warehouse that’s strictly governed and a shared corporate resource. TimesTen lends itself well to agile development too, in that you can work with it using tools like SQL*Developer and easily load data into it using ETL tools such as Oracle Data Integrator, or leverage the connectivity features in the Oracle Database and then use ttImportFromOracle.

Working with TimesTen database structures is more or less the same process as working with Oracle ones, but there are a couple of differences you need to be aware of when developing your TimesTen tables. Adding new columns to a table is possible, but these can end up stored “out of line” with the main column set, which can cause performance issues when queried, or compatibility issues when loading using tools like ttImportFromOracle. If you’re using TimesTen to store “hot” data, make sure you’ve got corresponding tables for this reference data in the source Oracle database, and make sure you select the most appropriate TimesTen datatypes for your new data (for example, TT_TINYINT and TT_INTEGER) rather than just the default Oracle-like datatypes.

Most importantly, for all of your TimesTen tables, make sure you run the Index Advisor after you start to run user reports, in order to ensure you’re using the right indexes for your particular query patterns, and make sure you gather stats on your TimesTen tables as well as the Oracle ones, so that the TimesTen query optimiser can generate the most efficient query plans.
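
As a rough guide (the exact arguments are from memory here, so double-check them against the TimesTen documentation for your release), gathering stats and capturing index advice from ttIsql looks something like this:

Command> statsupdate SH.SALES;
Command> call ttOptUpdateStats('SH.SALES', 1);

Command> call ttIndexAdviceCaptureStart(1, 0);
(run a representative set of user reports here)
Command> call ttIndexAdviceCaptureEnd(1);
Command> call ttIndexAdviceCaptureOutput(1);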

5. Setting up Incremental Refresh

The last thing we want to set up for our TimesTen analytic data mart is some way to incrementally refresh the aggregates we built using the Summary Advisor and Aggregate Persistence Wizard. The way these tools work is that they drop, and then recreate the aggregate tables each time you load them, and add and remove them from the RPD at the start and end of the data load, as shown in the diagram below:

What you can do, though, as outlined in this earlier blog post by Robin Moffatt, is take the POPULATE BI Server command that you’ll find in the nqquery.log file after an aggregate build, and use it to just refresh the aggregates in-place, without dropping the tables beforehand or removing them from the RPD, as shown in the diagram below.

Looking through the nqquery.log file I can see various POPULATE commands for the TimesTen dimension tables, like this:

populate "SA_Promo_C0000B8E4" mode ( append table connection pool "tt_aggr_store"."Connection Pool") as  select_business_model "Sales History (Oracle + TT)"."PROMOTIONS"."PROMO_CATEGORY" as "PROMO_CATE0000B8C6","Sales History (Oracle + TT)"."PROMOTIONS"."PROMO_CATEGORY_ID" as "PROMO_CATE0000B8C7", RCOUNT (1) as "Promo_C_0000B8E4SK" from "Sales History (Oracle + TT)"; [[

If I then couple that with the INACTIVE_SCHEMAS variable being set to the TimesTen physical data source, add a command to truncate the dimension table before the load, and then alter the fact table POPULATE command’s SELECT statement to only load in just this past month’s data, deleting that data down from the target fact table beforehand, I’ve got a full incremental refresh I can run from a cron job or other scheduler. The script excerpt below shows such an example, with just a few of the dimensions being reloaded.

For the “hot data” tables it’s a similar case of taking the existing LoadData.sql and either modifying it to load in just the new transactions, or reload all six months if it’s not easier to do just that.

execute physical connection pool "tt_aggr_store"."Connection Pool" truncate table SA_Categor0000AFE8;
 
SET VARIABLE INACTIVE_SCHEMAS='"tt_aggr_store".."EXALYTICS_AGG_WIZ"':
populate "SA_Categor0000AFE8" mode ( append table connection pool "tt_aggr_store"."Connection Pool") as  select_business_model "SH (Oracle)"."PRODUCTS"."PROD_CATEGORY_DESC" as "PROD_CATEG0000AFB4", RCOUNT (1) as "Categor_0000AFE8SK" from "SH (Oracle)";
 
execute physical connection pool "tt_aggr_store"."Connection Pool" truncate table SA_Month0000AC81;
 
SET VARIABLE INACTIVE_SCHEMAS='"tt_aggr_store".."EXALYTICS"':
populate "SA_Month0000AC81" mode ( append table connection pool "tt_aggr_store"."Connection Pool") as  select_business_model "SH (Oracle)"."TIMES"."CALENDAR_MONTH_DESC" as "CALENDAR_M0000AC0A","SH (Oracle)"."TIMES"."CALENDAR_QUARTER_DESC" as "CALENDAR_Q0000AC0E","SH (Oracle)"."TIMES"."CALENDAR_YEAR" as "CALENDAR_Y0000AC12", RCOUNT (1) as "Month_0000AC81SK" from "SH (Oracle)";
 
execute physical connection pool "tt_aggr_store"."Connection Pool" truncate table SA_City0000AC6A;
 
SET VARIABLE INACTIVE_SCHEMAS='"tt_aggr_store".."EXALYTICS"':
populate "SA_City0000AC6A" mode ( append table connection pool "tt_aggr_store"."Connection Pool") as  select_business_model "SH (Oracle)"."CUSTOMERS"."CUST_CITY" as "CUST_CITY0000ABC5","SH (Oracle)"."CUSTOMERS"."CUST_STATE_PROVINCE" as "CUST_STATE0000ABD4", RCOUNT (1) as "City_0000AC6ASK" from "SH (Oracle)";
 
execute physical connection pool "tt_aggr_store"."Connection Pool" delete from ag_SALES where CALENDAR_M0000AC0A = '2001-12';
 
SET VARIABLE INACTIVE_SCHEMAS='"tt_aggr_store".."EXALYTICS"':
populate "ag_SALES" mode ( append table connection pool "tt_aggr_store"."Connection Pool") as  
select_business_model "SH (Oracle)"."CUSTOMERS"."CUST_CITY" as "CUST_CITY0000ABC5",
                      "SH (Oracle)"."SALES"."AMOUNT_SOLD" as "AMOUNT_SOL0000ABF6",
                      "SH (Oracle)"."SALES"."QUANTITY_SOLD" as "QUANTITY_S0000ABFB",
                      "SH (Oracle)"."TIMES"."CALENDAR_MONTH_DESC" as "CALENDAR_M0000AC0A",
                      "SH (Oracle)"."PRODUCTS"."PROD_CATEGORY_DESC" as "PROD_CATEG0000AFB4" 
                      from "SH (Oracle)"
                      where "SH (Oracle)"."TIMES"."CALENDAR_MONTH_DESC" = '2001-12';

Conclusion

So what we’ve got here then is a much-expanded use of TimesTen in the context of Exalytics and OBIEE; we’ve not only used it as an in-memory cache for Summary Advisor aggregates, but we’ve used the same underlying mechanism to create other aggregates that we know will be useful for future queries.

We’ve also made use of the 1-2TB of RAM on the Exalytics server to also cache the last six months of detail-level transactional data, making it even more likely that TimesTen will be able to answer all of the users’ queries.

Once you’ve got your data in TimesTen, you’ve got access to the same full set of analysis functions that are available when reporting against regular Oracle databases, with TimesTen for Exalytics giving you more native aggregation and analysis functions compared to standard TimesTen, and the BI Server “functionally compensating” for anything not natively available in TimesTen by performing the calculation itself, using the raw data provided by TimesTen.

Finally, we’ve talked about how we can load additional datasets into TimesTen via ODI or ttImportFromOracle, potentially using the latter in conjunction with external tables to bring in file data, and then looked at what’s involved in trickle-feeding the TimesTen cache rather than dropping and reloading it each time.

If you’re at KScope’14 in Seattle this week, I’m presenting on this topic on the Wednesday; if not, and you’re potentially interested in us helping you get more out of your TimesTen for Exalytics install, just drop me a line at mark.rittman@rittmanmead.com.

Categories: BI & Warehousing

Rittman Mead at ODTUG KScope’14, Seattle

Rittman Mead Consulting - Fri, 2014-06-20 03:42


Next week is ODTUG KScope’14 in Seattle, USA, and Rittman Mead will be delivering a number of presentations over the week. Coming over from Europe we have Jérôme Françoisse and myself, and we’ll be joined by Michael Rainey, Daniel Adams and Charles Elliott from Rittman Mead in the US. We’re also very pleased to be co-presenting with Nicholas Hurt from IFPI, who some of you might know from the Brighton BI Forum this year and who’s talking with Michael Rainey about their Oracle data warehouse upgrade story.

Here’s details of the Rittman Mead sessions and speakers during the week:

  • Daniel Adams : “Hands-On Training: OBIEE Data Visualization: The “How” and the “Why”?” – Monday June 23rd 2014 1.15pm – 3.30pm, Room 2A/2B
  • Jérôme Françoisse : “Oracle Data Integrator 12c New features” – Monday June 23rd 1.15pm – 2.15pm, Room 616/615
  • Mark Rittman : “Deploying OBIEE in the Cloud: Options and Deployment Scenarios” – Monday June 23rd 3.45pm – 4.45pm, Room 615/616
  • Mark Rittman : “OBIEE, Hadoop and Big Data Analysis” – Tuesday June 24th 11.15am – 12.15pm, Room 602/603/604
  • Michael Rainey and Nicholas Hurt (IFPI) : “Real-Time Data Warehouse Upgrade: Success Stories” – Tuesday June 24th 2014 2pm – 3pm, Room 614
  • Charles Elliott : “OBIEE and Essbase – The Circle of Life” – Wednesday June 25th 11.15am – 12.15pm
  • Mark Rittman : “TimesTen as your OBIEE Analytic ‘Sandbox’” – Wednesday June 25th 3.15pm – 4.15pm, Room 615/616

We’ll also be around the event and Seattle all week, so if you’ve got any questions you’d like answered, or would like to talk to us about how we could help you with an Oracle BI, data integration, data warehousing or big data implementation, stop one of us for a chat or drop me a line at mark.rittman@rittmanmead.com.

Categories: BI & Warehousing

SampleApp v406 – Automatic startup of OBIEE

Rittman Mead Consulting - Tue, 2014-06-17 00:20

Last week Oracle released v406 of SampleApp. SampleApp is a one-stop-shop for a demonstration of pretty much any conceivable thing that OBIEE is capable of, and then some more on top of it. It is a VirtualBox appliance with everything on the one machine (Database, OBIEE, Endeca, TimesTen, Essbase, etc), and demonstrates basic analysis building techniques through to dashboarding, Mobile App Designer, analysis visualisations with D3, ADF, JavaScript, backend hackery with undocumented NQS calls (hi Christian!), and much, much more.

So, SampleApp is awesome, but … there’s no such thing as absolute perfection ;-) One of the things that is still missing from it is the automatic start/stop of OBIEE when you boot up or shut down the machine. Setting it up is easy to do, as I demonstrate below. I’ve also included my cheatsheet for whipping SampleApp into line for my day-to-day use, focussed on the commandline and automation (because that’s where I spend most of my time on a server, and because I’m lazy).

The OBIEE init.d service script that I demonstrate here is available for use on any Linux installation of OBIEE. For more information, see the Rittman Mead public scripts github repository here: http://ritt.md/init.d

Before we get started I’m assuming here that you’ve:

  • downloaded the 28GB worth of zip files
  • Unpacked them using 7zip
  • Found 70+GB of disc space free and imported the OVF into VirtualBox
  • Started up the VM

There are full instructions in the SampleApp_QuickDeploymentGuide–406.pdf, available in the SampleApp v406 Documentation download from the same page as the SampleApp image itself.

So to make OBIEE start automatically, the first thing we need to do is make sure that the database (where the RCU schemas are) is doing the same, by setting it up as a service (init.d). This is based on this article, if you want more details about how it works, but for now you just need to copy and paste this whole code block into the command prompt on SampleApp to create the necessary files. If you can’t copy & paste between your host and the VirtualBox guest, just go to this blog from Firefox within the SampleApp VM itself.
(Or if you know what you’re doing, SSH onto the server and paste the text into an SSH client on your host machine.)

Copy and paste this whole code block into the command prompt:

# Create startup/shutdown script files
mkdir -p /home/oracle/scripts
chown oracle.oinstall /home/oracle/scripts
cat>>/home/oracle/scripts/startup.sh<<EEOF
#!/bin/bash

# Start Listener
lsnrctl start

# Start Database
sqlplus / as sysdba << EOF
STARTUP;
ALTER PLUGGABLE DATABASE ALL OPEN;
EXIT;
EOF
EEOF

cat>>/home/oracle/scripts/shutdown.sh<<EEOF
#!/bin/bash

# Stop Database
sqlplus / as sysdba << EOF
ALTER PLUGGABLE DATABASE ALL CLOSE IMMEDIATE;
SHUTDOWN IMMEDIATE;
EXIT;
EOF

# Stop Listener
lsnrctl stop
EEOF

# Make them executable
chmod u+x /home/oracle/scripts/startup.sh /home/oracle/scripts/shutdown.sh
chown oracle.oinstall /home/oracle/scripts/startup.sh /home/oracle/scripts/shutdown.sh

# Create service script
cat>/tmp/dbora<<EOF
#!/bin/sh
# chkconfig: 345 90 25
# description: Oracle auto start-stop script.
#
# Set ORA_OWNER to the user id of the owner of the 
# Oracle database software.

ORA_OWNER=oracle

case "\$1" in
    'start')
        # Start the Oracle databases:
        # The following command assumes that the oracle login 
        # will not prompt the user for any values
        su - \$ORA_OWNER -c "/home/oracle/scripts/startup.sh >> /home/oracle/scripts/startup_shutdown.log 2>&1"
        touch /var/lock/subsys/dbora
        ;;
    'stop')
        # Stop the Oracle databases:
        # The following command assumes that the oracle login 
        # will not prompt the user for any values
        su - \$ORA_OWNER -c "/home/oracle/scripts/shutdown.sh >> /home/oracle/scripts/startup_shutdown.log 2>&1"
        rm -f /var/lock/subsys/dbora
        ;;
esac
EOF
sudo mv /tmp/dbora /etc/init.d/dbora
sudo chown root. /etc/init.d/dbora

# Make the service script executable
sudo chmod 750 /etc/init.d/dbora

# Associate the dbora service with the appropriate run levels and set it to auto-start using the following command.
sudo chkconfig --add dbora

On SampleApp v406 there is an Oracle 12c container database (CDB), with two “pluggable” databases (PDB) within it. Assuming you’ve not started the database yet, trying to connect to one of the PDBs for the RCU schema will fail:

[oracle@demo ~]$ sqlplus BIEE_BIPLATFORM/Oracle123@localhost:1521/pdborcl

SQL*Plus: Release 12.1.0.1.0 Production on Tue Jun 17 03:03:51 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

ERROR:
ORA-12541: TNS:no listener

Now run the service start (which will happen automatically at boot)

sudo service dbora start

And check the status again:

[oracle@demo ~]$ sqlplus BIEE_BIPLATFORM/Oracle123@localhost:1521/pdborcl

SQL*Plus: Release 12.1.0.1.0 Production on Tue Jun 17 03:06:12 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Tue Jun 17 2014 03:02:09 -04:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>

Now we can set up OBIEE as a service that can start automatically at boot time. This is using a script that I wrote and is shared on the Rittman Mead public scripts github repository here: http://ritt.md/init.d. Installing it on SampleApp v406 needs a couple of changes for the environment paths and dependencies etc, which I’ve incorporated in the code block below. Again, copy and paste the whole thing into a command prompt on SampleApp.

# lsof is used in the script and isn't installed by default, so let's install it: 
sudo yum -y install lsof

# Now fetch the init.d script itself
sudo wget --no-check-certificate https://raw.githubusercontent.com/RittmanMead/scripts/master/obi/init.d/obiee -O /etc/init.d/obiee

# Update the FMW_HOME path in the script
# If you're doing this manually, you just need to change the line
# "FMW_HOME=" and put in the FMW_HOME path for your installation. 
# In the case of SampleApp v406 it is /app/oracle/biee
sudo sed -i -e 's/FMW_HOME=.*$/FMW_HOME=\/app\/oracle\/biee/g' /etc/init.d/obiee

# Make the script executable
sudo chmod 750 /etc/init.d/obiee

You should now be able to run the following status command to see if OBIEE is running or not:

sudo service obiee status

and if it’s not running, start it up:

sudo service obiee start


To shut it down:

sudo service obiee stop

You can read more about the script here.

There’s one more step to run to make OBIEE start automatically at bootup:

sudo chkconfig --add obiee

Now that the service scripts are in place, restart the machine and check they work:

sudo shutdown -r now

When the VM restarts, it may appear to “hang” on boot – the progress bar and “Oracle Linux Server 6.5” text will show, but no desktop. That’s because the services are starting up, and you can switch to the console view to see this. On my Mac I press Shift-Function-Backspace to do this; it may vary on other host operating systems (try ALT + d on Windows):

Once the desktop appears you should be able to launch Firefox and go straight to OBIEE, up and running.

There are some additional tweaks I usually make to any new server I work with:

  1. Install screen (don’t know what screen is? Read this!):
    sudo yum -y install screen

    Note I’m using sudo which this version of SampleApp thankfully has configured for the oracle user – previous versions didn’t IIRC.
  2. Configure screen with a nice statusbar:

    cat > ~/.screenrc <<EOF
    # example statusbar - use whichever hardstatus string you prefer
    hardstatus alwayslastline "%H | screens: %w | %c %d/%m/%Y"
    EOF

    I’m using the bash “here document” functionality here to embed the contents of a document in a command statement. It means I can cut and paste it from my clipboard (if you’re on a Mac, you really should check out Alfred, which has a clipboard history manager, so I always have this screenrc and many other snippets available to paste within just a couple of key presses). Cutting and pasting a single command is easier (and thus less error-prone) than explaining what content to edit into which text file with which text editor.

  3. Set up SSH keys. I wrote about SSH keys previously on this blog (in fact, all of the things on this checklist are covered there). Put simply, it makes logging in much faster and removes the problem of fat-fingering the password:

    As with the screen configuration in the previous step, I use a snippet of code from my clipboard, pasted into the shell of any new server I’m working on:

    # setup ssh key
    mkdir ~/.ssh
    cat >> ~/.ssh/authorized_keys <<EOF
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMvNjJ8y1a3+NR5OEZHjAd6HhPFdkOI4mDiIIegbRDG+ABhTQmE80DuDe0vp4tex7qefS1u5FG3Xf+hmKDBzKFyegpC+9oEmrtIPkuW7b1Uf/f7Dr6EjXbyA6x1LGEu3THCrGVfw5Mkakqzr43ZfGEFf8k0RiMroHQbp3vOcR+5y0Q132Tbrr0GPd0RPuj4cVOC7QQ6jXSs7TZWepzrcqEpB4M1+N3xS/jEQ5aiCY2FTDOUVj6Y4c0Ogg93bLos6JltPKznq08KXy0ZW/rWDindAmeL0ZAMuU12Qv1ehQZJsxjUwq3jz4yNMgXs5ba3nSvMAr7ZVRzbK5nLl7+eAN3 zaphod@beeblebrox
    EOF
    #
    chmod -R 700 ~/.ssh

    As a side note, that’s my public SSH key there. One’s public SSH key is just that – public. If you want to use it, go ahead; it just means I’ll be able to log in to your server. That’s because I have the other half of the key-pair, the private key, and that is the one that should not be shared, because with it you can access any server that has been configured with the public key. Private SSH keys should be treated just like your most important passwords (and indeed, you should use a passphrase when creating your private SSH key, to add a layer of security, so that it can only be used if the passphrase is known). A quick sketch of generating such a key-pair is shown below.
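
    If you need to create a key-pair of your own, it’s a one-liner; in this sketch the comment string and output path are just examples:

    # generate a new 4096-bit RSA key-pair; you'll be prompted for a passphrase
    # (the -C comment and -f output path are examples only - change to suit)
    ssh-keygen -t rsa -b 4096 -C "oracle@sampleapp-v406" -f ~/.ssh/id_rsa

    # the *public* half (~/.ssh/id_rsa.pub) is what goes into authorized_keys on
    # the servers you want to log in to; the private half stays with you
    cat ~/.ssh/id_rsa.pub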

The next step is Installing obi-metrics-agent, Graphite, and collectl, but that’s a matter for another blog post :-)

Categories: BI & Warehousing

Introducing the Updated Oracle / Rittman Mead Information Management Reference Architecture Pt2. – Delivering the Data Factory

Rittman Mead Consulting - Mon, 2014-06-16 04:00

In my previous post on our updated Oracle Information Management Reference Architecture, jointly-developed with Oracle’s Enterprise Architecture team, we went through a conceptual and logical view of the information architecture, introducing new concepts like the Raw Data Reservoir, the Data Factory and the Discovery Lab. I said that the Data Factory, Data Reservoir, Enterprise Information Store and Reporting, together with the Discovery Lab, comprised the Information Platform, which could be used to create data applications (raw data plus the data factory), or more wide-ranging Information Solutions, with the Discovery Lab often acting as the place where new ideas and models were prototyped, and algorithms developed that could be productionized in the Execution area of the overall architecture.


And this is typically where most information architectures leave you; with a good starting point and an overall schematic, but without any real guidance on how to deliver it. For example, how do you develop small pieces of functionality that may often be throwaway, and how do you take on larger projects? Is it possible to develop this type of architecture in an agile way, and if so, can the data reservoir part be delivered in an agile way too? In our collaboration with Oracle’s Enterprise Architecture team this is where we made our major contribution, taking the ideas behind our ExtremeBI agile development approach and using them to deliver this updated Information Management architecture. Let’s go back a few steps though, and think about why an agile, or at least a different, way of developing BI content is needed, and which part of the overall architecture we’re developing for.

Within this overall architecture, the bit that developers such as ourselves typically add value for is the “data factory”; the part of the architecture that acts as the conduit, or interface, between the schema-on-read raw data reservoir and the schema-on-write enterprise information store.


There’s other code to write too; interface and streaming code to get data into the event engine, for example, and more traditional ETL code to load data in from database and application stores. But what we’re most concerned about is the data models and reports that users request in their user stories and epics. If you take a look back at the logical view of our new information architecture, you can see that information can come in at any level in the architecture – raw data reservoir, foundation or access and performance, with different pros and cons for landing and accessing data at each layer, and the ability to take data from one layer and use it to load layers higher up in the layer hierarchy.


And this is where our agile development methodology, “ExtremeBI”, comes in. ExtremeBI, for structured relational sources, typically uses Oracle GoldenGate to trickle-feed data of interest directly into the foundation layer of this information management data architecture, with developers then using that layer plus the access and performance layer, if needed, to close-off user stories as quickly as possible, as shown in the diagram below.


In some cases it’s possible to model against just the foundation layer in OBIEE, where that foundation layer contains all the metadata and history columns that you need. One of the main parts of our ExtremeBI methodology is a technical process to make this modelling possible, and more importantly allow the underlying physical model to evolve in response to user stories about performance and analysis needs, without breaking existing reports or creating an unmanageable system.


We’ll be documenting this process as part of the series of white papers we’ll be co-authoring with Oracle, and we also bring utilities and accelerators to ExtremeBI customer projects, along with continuous integration and regression testing tools, to help get usable software in the hands of end-users as soon as possible.

But what about the parts of the data factory that source data from the raw data store? Can we use similar techniques, and tools like ODI and OBIEE, to deliver working software to users within a short time span? Well, in practice it’s not quite so easy; typically, getting information out of a schema-on-read database involves writing code, and it’s easy to fall back into bad habits where everything is scripted, code is brittle and fragile, and no-one feels confident in evolving the model over time.

We think it’s possible, though, to use ExtremeBI techniques with schema-on-read sources when you combine them with tools like Flume or GoldenGate for trickle-feed extraction and loading, and new capabilities in ODI and OBIEE where Hadoop data can be accessed via Hive. Flume is the industry-standard mechanism for transporting log entries from source to the Hadoop cluster, and GoldenGate has the ability through its Java API to land data in HDFS or Hive, when the source is a relational database (see MOS Doc.IDs 1586188.1 (Hive) and 1586210.1 (HDFS) for example code).
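
By way of illustration, a minimal Flume agent definition for shipping web server log entries into HDFS might look something like the sketch below – the agent name, log file path and HDFS target directory are assumptions made for the example rather than anything mandated by the architecture:

# define a minimal Flume agent: tail an Apache access log and write it to HDFS
cat > /tmp/weblog-agent.conf <<'EOF'
a1.sources  = weblog
a1.channels = mem
a1.sinks    = hdfs1

a1.sources.weblog.type     = exec
a1.sources.weblog.command  = tail -F /var/log/httpd/access_log
a1.sources.weblog.channels = mem

a1.channels.mem.type     = memory
a1.channels.mem.capacity = 10000

a1.sinks.hdfs1.type          = hdfs
a1.sinks.hdfs1.channel       = mem
a1.sinks.hdfs1.hdfs.path     = /user/flume/weblogs
a1.sinks.hdfs1.hdfs.fileType = DataStream
EOF

# start the agent (adjust --conf to wherever your Flume configuration directory lives)
flume-ng agent --name a1 --conf /etc/flume-ng/conf --conf-file /tmp/weblog-agent.conf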

Hive, though slow for ad-hoc query access against Hadoop data, has a range of Serializer/Deserializer utilities that can translate semi-structured and NoSQL sources into more regular columns and tables, with Cloudera Impala removing the response-time issue and rapidly matching Hive in terms of plugins and SerDes. ODI, as I demonstrated in a five-part series on the blog last week, has a number of Hadoop-native knowledge modules and also leverages Oracle Loader for Hadoop to move data out of the raw data reservoir and into the foundation and access + performance layers.
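
To make the SerDe point a little more concrete, this is roughly how raw Apache log files landed by Flume could be exposed as a regular Hive table – the table name, columns and deliberately-simplified regex are purely illustrative:

# project columns onto raw web log files at read-time using the contrib RegexSerDe
# (table, columns and regex below are illustrative only)
cat > /tmp/weblog_serde.hql <<'EOF'
ADD JAR /usr/lib/hive/lib/hive-contrib.jar;

CREATE EXTERNAL TABLE IF NOT EXISTS weblog_parsed (
  host     STRING,
  log_time STRING,
  request  STRING,
  status   STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  "input.regex" = "(\\S+) \\S+ \\S+ \\[([^\\]]+)\\] \"([^\"]+)\" (\\d+) .*"
)
LOCATION '/user/flume/weblogs';

SELECT status, count(*) FROM weblog_parsed GROUP BY status;
EOF

hive -f /tmp/weblog_serde.hql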

The important thing to understand though, when working with data in the updated Information Management architecture and reporting against schema-on-read and schema-on-write sources, is that the most effective way to deliver real value for end-users is to have two phases to your development work:


  • A discovery phase, typically done in the Discovery Lab sandbox or Rapid Development Sandboxes, that focuses on understanding the data and helping the users see where the value is, with less emphasis on governance and corporate standards, and
  • A commercial exploitation phase, when our understanding of the data has significantly increased and we start to add structure to the data, and where it’s an appropriate time to add governance and address data quality issues. Typically, this would be done in the Execution part of our architecture, taking our prototype design and adding it to the data factory as one of its interface routines.

We’ll be developing these ideas out further over the summer, with some white papers and a joint presentation at Openworld 2014 to formally launch it. In the meantime, keep an eye on the blog as we work through some of the key concepts.

 

Categories: BI & Warehousing

Oracle Tours Africa and the Middle East

Look Smarter Than You Are - Sun, 2014-06-15 10:33
Happy Father's Day, everyone!  I got up early this morning to write about my recent experience traveling the world on Oracle's behalf.  I got to attend the first annual Oracle Technology Network tour of Africa and the Middle East.  It made 2 stops in North Africa (both in Tunisia), 2 stops in Saudi Arabia, and the final stop was in Dubai, UAE.

Tariq Farooq first mentioned the idea of doing a MENA (Middle East & North Africa) tour to me in Beijing last fall.  He asked if I'd be willing to travel half-way around the world to speak to people in English that primarily spoke French and Arabic, and I - of course - said "yes."  Here's Tariq being interviewed by Lillian Buziak at Collaborate 2014 (audio is a bit difficult to hear):


I had two reasons for wanting to go: I do love educating/evangelizing for Oracle EPM, BI, and Business Analytics.  The possibility of reaching new audiences for the first time was exciting. My other reason for going was that I wanted to experience totally different cultures than I ever have before.  I've spoken on 5 continents (now 6 after this tour and I'm anxiously awaiting the OTN Tour to Antarctica) and have presented everywhere from a women's college in Mumbai that was 95F with no air conditioning in the presentation room to a ballroom in the Philippines that had 3 simultaneous English sessions going on (in one room!) all happily observed by smiling Filipinos.  From China to India to Australia to Germany, I have seen some amazing slices of life, but nothing prepared me for the differences I saw on this tour.

In each of the sections below, I have linked the header to a blog from my new best German friend, Bjoern Rost.  He blogged after every stop and unlike me, he actually understood all the Oracle RDBMS sessions on the tour.  Visit http://portrix-systems.de/blog/author/brost/ to see his entertaining blog posts.  (Warning: though I think Bjoern is hilarious, being German, you may find his posts to be 'not funny.'  German humor is an acquired taste.)

I left for the first stop, Tunisia, on Memorial Day (in the USA), May 26, 2014...




Tunisia, May 27

To get to Tunisia in time for my session, I left Dallas, Texas at 10AM on Monday, flew to JFK (New York City), flew to Rome, flew to Tunis, and had a nice car waiting for me at the airport compliments of our hosts in Tunisia.  I landed at 10:30AM on Tuesday and considering I was flying to Africa, I felt that the trip went by quickly.  I made it through customs in Tunis in about 15 minutes, walked out the front door of the hotel, and was in an entirely different world.

Speaking in North America, South America, Europe, Asia, and Australia had done absolutely nothing to prepare me for Africa.  The closest thing I could compare it to in my life (but the comparison does not do it justice) was the cities of India: chaotic, dirty, cramped, foreign, and chaotic (worth mentioning again).  Now take all that, remove the cows, and make it Muslim.

My host picked me up in a nice car and we began the hour drive to Beja where the conference was being held.  We passed mounds of trash piled up in the center of the roads though thankfully the heat (high was ~75F when I arrived) didn't make the place smell horribly.  Traffic laws seemed to be non-existent, but it moved fairly quickly being in the middle of the day.  I loved looking out the window at the shops along the road and carts selling watermelon approximately every 100 feet (I was told by my host that it was watermelon season).

We left the city about 20 minutes after I got in the car and I was suddenly in Tuscany.  At least, it looked like Tuscany: fields of amber waves of grass, wide open spaces, wildflowers, lakes, olive groves, country life, gorgeous hills... truly one of the most beautiful countrysides I've seen in my entire life.  Since I had been traveling for over a day, the pleasant scenery soon lulled me to sleep.


My driver woke me up when we got to the technical university in Beja. I walked in to find Tariq trying to explain Oracle Enterprise Manager to a bunch of college students who didn't seem to understand databases let alone Oracle.  I tried to hide in one of the back seats, but Tariq immediately called me up on stage to answer a question about how to develop optimal databases.

Not long after I got there, we broke for lunch.  Our hosts took us to a traditional Tunisian restaurant in downtown Beja.  They were kind enough to make me vegetarian food.  Lunch is apparently a sacred event not-to-be-hurried in Tunisia, so we made it back to the university over 2 hours after we left, fully satiated.

I volunteered to give a session introducing Big Data and Analytics to the college kids, because I felt that it required little technical background.  It seemed to go over well.  My favorite part was when I told a joke about the differences in social media sites and only 10 people laughed.  The people on either side of those 10 then asked them to repeat in Arabic and French what I had said, which caused those people to laugh.  They then shared it around the room and it was like a disease vector of laughter that took 2 minutes to make it around to the 150+ students in the room.  In case you haven't seen it, here's essentially what I said out-loud:
After the sessions were over, I asked one of the professors why everyone listened so intently if they had no background in Oracle.  I mentally wondered if it was because I was an awesome presenter bringing the gospel of Oracle to the future of Africa.  I was told "they didn't understand a lot of what you were saying, but they love listening to people speak English."  So much for my future African disciples.
Our hosts offered to drive us to Dougga, the so-called "best preserved Roman ruins outside of Italy."  It sounded like an exaggeration, but it was actually an understatement. Dougga was once a "small" Roman town on the fringes of the empire... and the miles of town are for the most part still there.  We arrived shortly after they closed the gates for the day.  Our hosts got out of the front car in our 4-car caravan to talk to the guards.  I was in the last car and saw an interesting polite dialog when our hosts started pointing to my car.  I waved back.  The guard smiled and raised the gates for our caravan to enter.  I wondered if bribing had occurred, so I asked how we got in after hours.  Our gracious hosts explained that I was renowned historian, Edward Roske, a visiting professor from the United States of America whose sole purpose for being in Tunisia was to see the Dougga ruins.  They said I should take lots of pictures and walk around looking officially important.  I discovered later that they weren't kidding: they really did tell this to the guard, so I took at least 50 pictures and took some very official selfies to help sell my renowned historian status.




The ruins were truly majestic.  Every time I came around a bend, there was another temple, theatre, circus, road, market, tunnel, column, arch, statue, or something else 2,000 years old to be seen.  It is all in a semi-wild state with no borders separating the ruins from the countryside.  There were even wildflowers growing in the central square:
My favorite moment of the entire trip occurred when I broke away from the rest of our group to go explore some arches on the edge of the ruins.  I went to take a picture of one of the doorways, and I got photobombed:
I went through the doorway to discover a local sheepherder grazing his sheep right in the ruins:

They started on the edge of the ruins but eventually the herder marched his sheep right down the center of the 2,000 year old road leading through Dougga.  Tariq decided to join their herd:
I can't stress enough how amazing this site is.  I would encourage people to visit Tunisia if for no other reason than to see Dougga and Carthage.  You have never felt Roman society like you can wandering around the ancient town with only sheep to keep you company.

Eventually, the guards at the entrance (the only guards in the place, so far as we could tell) came to find our renowned historian group because they wanted to go home.  We stayed the night in the Golden Tulip Hotel in Carthage which was a 4-star hotel for under $200 USD per night.  I recommend it to anyone.  The next morning, the OTN MENA 5 (Tariq, Mike Ault, Bjoern, me, and Jim Czuprynski) headed for the flight to Cairo and then on to Riyadh.


Saudi Arabia, May 29-31

While Islam is a part of the culture in Tunisia, Islam is the culture in Saudi Arabia.  I have never seen a country more dominated by a single religion than Saudi Arabia.  It is one of the most difficult places in the world to get a visa (they suspended tourist visas 5 years ago) and it's even harder for a non-Muslim like me.  I spent 4 hours clearing customs which gave me a lot of time to study up on what I was in for.

My guidebook (and several websites) told me about all the things I wasn't allowed to do, say, or maybe even think when I got to Saudi.  Here's what I was told versus what actually happened:
- Muslims everywhere.  Yes, 100% true.  They have calls to prayers everywhere and we had to stop presenting when it was time for prayer.  The whole city stops, for that matter, when it's prayer time.  I was lucky enough to be in a public park for evening prayer one day.  The sounds of the call to prayers across the city were beautiful.

- Traffic fatalities.  90% true.  Saudi apparently has the highest incidence of traffic fatalities in the world because traffic laws are more like vague traffic suggestions, only to be followed if everyone has plenty of time and sort of feels like it.  I was prepared to almost die every time I got in a car, and while I saw no deaths, I saw multiple car accidents of the fender bender type each day I was in Saudi.  At one point, we were stopped at a red light to make a left turn.  With traffic coming from both directions in front of us, a car behind us who wanted to make a left-hand turn felt that our stopping was delaying his day.  He didn't honk or behave rudely in any way: he just drove around us and made the left turn on the red into oncoming traffic.  No one seemed annoyed at the man for doing it.
- Pornography.  100% nonexistent in Saudi.  They even go through the magazines in the shops and black out with a sharpie anything that's considered too revealing (shoulders, waists, knees, etc.).  They also sharply monitor the web and block out any site deemed inappropriate (including Bing.com with safe search set to anything but strict).
- Pictures of other people.  0% true.  I was told before I went that Muslims do not believe in having images taken of people.  What I actually found was the most selfie-absorbed culture I've ever seen, and I live in America with a teenage child.  I couldn't walk 10 feet at the Riyadh or Jeddah events without someone taking a picture.  I was also told that you couldn't take pictures of public buildings or in public buildings (like the national museum).  Totally untrue: people were taking pictures of just about anything except women.
- Women being covered.  100% true.  All the women wore black abbeyahs at all times.  You're not allowed to film them or talk to them.  That said, I didn't see that many.  Both the events in Riyadh and Jeddah were male-only.  We did see women in public places, particularly near retail outlets and in the city parks & museums.
- Men wearing suits.  25% true.  For the most part, the men wore traditional dress shirts and head coverings.  I wore a suit (no tie) which considering it was 110F+, was quite a sacrifice.  My fellow presenters wore ties in addition to their suits which proves they're more willing to sacrifice for the cause of Oracle.
- No alcohol. 100% true but weird.  The first night I got to Riyadh, I opened my minibar to find... a Budweiser.  Closer examination revealed it was a "Budweiser NA" signifying no alcohol.  Every restaurant we went to offered us "Saudi champagne" or "Saudi wine" which apparently means alcohol-taste without any actual alcohol.  Since I hate the taste of alcohol, I didn't think non-alcoholic alcohol would taste any better, so I avoided it.
- No narcotics.  100% true so far as I was willing to test it.  I actually panicked during customs at the thought that maybe I had some prescription drug in my laptop bag that had a narcotic in it.  Narcotics are a capital crime in Saudi, and I spent most of my 4 hours wondering if Ambien counts as a narcotic.  Luckily for me, I wasn't executed for sleep aid smuggling.

After the worst customs experience of my entire life at the airport in Riyadh, I got to the Marriott hotel for about 4 hours sleep before the day was to begin.  I got to go up to the concierge floor for an elaborate breakfast and a great view of the Riyadh skyline.

The Riyadh venue was spectacular.  The hosts (eSolutions) put on one of the best events I've ever attended on an OTN tour.  From the venue to the signage to the elaborate Lebanese food lunch to the photographer to the videographer to the speaker gifts, it was all top-notch.  Like in Beja, I presented on "Taming Big Data with Analytics" to an enthusiastic audience.  There were about 75 (all male) people there including several male children of the attendees.  They were some of the best behaved children I have ever seen even though some were elementary school age.
I got introduced at one point as "Edward Roske from the United States, a rich man who did not inherit his business from his father."  That was unique and though it left me speechless, the audience seemed very impressed at my ability to become head of a business without my father having to die first.

After the conference, our hosts took us to the National Museum.  Like much of Riyadh, it looks like it was built in the last 5 years, and in the case of a museum, this worked very well.  Since they built it all at once (versus many amazing museums around the world that were created over hundreds of years), it was able to tell a complete story from the beginning of the universe up through today (as opposed to just having rooms of collections).  It was a very Muslim-centric view of history, but I found the educational aspects fascinating.  The sign on the way in said no photography, but since the massive museum apparently only had 4 people working in it (all at the front desk), everyone ignored it.  I found a cube in the first hall that seemed Essbase-like.

My favorite exhibit was a scale model of Mecca and Medina as viewed at night.  Being non-Muslim, I will never see Mecca, so this is as close as I will ever get:
After the museum, our hosts took us to Kingdom Centre, the tallest building in Saudi Arabia (for now).  It has 30+ open stories at the top with a skybridge connecting them.  Here's a view from the ground of the building (notice the necklace-like architecture with the bridge on top) and a view from the building of the ground:

Our hosts took us to dinner (which was, unfortunately for me as a vegetarian, a meat-on-a-stick restaurant).  Since I wasn't able to eat much, I got to talk at length to the CEO of eSolutions.  He was a fascinating man who told me all about how Saudi Arabia and the Gulf Coast Community is ready for analytics and reporting.  He was a former Pakistani military man, but his love of country and family shone through in everything he said.

The following morning, we journeyed to Jeddah over on the Red Sea coast of Saudi near Mecca.  The weather was like Riyadh except with crazy humidity.  It reminded me of Houston on the hottest day of summer.  We were in Jeddah for less than 24 hours so I don't admittedly have a ton to say about it.  I saw nothing more than the airport, the drive to/from the hotel, the venue where we presented, and a fast casual restaurant (more tasty Lebanese food!) where we had dinner.

The venue was small since it was a half-day event and lightly attended.  I did ask if I could present on a different topic since I had gotten a bit bored talking Big Data all the time.  This time, I spoke on "In-Memory Databases" which is a hot topic these days thanks to the SAP guys saying "Hana" at least once every sentence.  After the morning event, the OTN MENA 5-1 (Bjoern went straight to Dubai since there weren't enough speaker slots in Jeddah) headed for the airport for the flight to the United Arab Emirates.  Getting out of Saudi took 2 minutes at Customs which goes to show, I guess, that they're a lot happier to get rid of you than let you in.

Dubai UAE, June 1

I honestly don't know where to begin with Dubai.  It is truly one of the most amazing cities on Earth and almost indescribable to anyone who hasn't actually witnessed it.  My flight landed on Saturday evening in the largest airport I think I've ever seen.  Clearing customs took minutes then I headed for the cleanest (and probably newest since it just opened in 2009) train system in the world.  The view from the Dubai Metro was stunning.  If Riyadh looks like it was built in the last 10 years, Dubai looks like it was built in the last 10 minutes.  It reminds me of New York City if they took out all the advertising, 90% of the people, and any building under 1,000 feet tall.


I was lucky enough to stay at a hotel in the Burj Khalifa, the tallest building in the world (by far: the Burj is over 200 stories tall at over 2,500 feet).  The Burj sets all kinds of "tallest" records including the tallest outdoor observation deck.  Here's a view of several buildings below that are all over 1,000 feet tall (with an Edward in it for perspective).

My room had a view of the Dubai Fountains.  No offense, Vegas, but these fountains put the Bellagio fountains to shame. They're more agile, faster, better lit, larger, and frankly, classier. And if you're staying in the hotel in the Burj Khalifa, you can listen to the fountain music through the TV:

At the base of the Burj Khalifa is the Dubai Mall, the largest mall in the world (you hear a lot in Dubai of "that's the largest _____ in the world").  It's 3-4 times the size of Mall of America if that puts it in perspective.  It has over 1,200 shops, over 150 restaurants, and some of the strangest (yet up-scale) stores you've ever seen.  There was one selling full-size metal camels, for instance (these were the only camels I saw on my trip).  I ate at the Dubai Mall every day I was in Dubai and felt I could eat there every day for a year without repeating an entree.  I am normally not a fan of malls, but I loved the Dubai Mall.  It wasn't crowded and the people that were there kept to themselves.  This could actually be said about everywhere in the Middle East: the people are nice, but they keep to themselves.  For an introvert like me, it's heaven.  You can be alone in a crowd.


The first full day I was there, I presented in the morning, burned my feet on the sands of a beautiful beach at lunch, and was skiing in the afternoon.  Yes, skiing.  The Mall of the Emirates (another mall that dwarfs anything we have in the USA) is not only massive, it has its own indoor ski resort inside.  It's not a tiny ski hill either: it's 1,200 feet long with three separate runs (a blue/intermediate run, a green/beginner run, and a small terrain park).  They also have lots of other fun activities to do including a penguin exhibit, ziplining over the ski hill, sledding, and large transparent hamster balls to roll down the hill in.

For around $50 USD, you get a lift ticket, ski pants, ski jacket, boots, skis, and poles for 2 hours.  A full-day pass is only around $15 USD more but I opted for 2 hours.  (You can only ski indoors in Dubai so much, obviously.)  There were at most 15 people skiing, but there were hundreds of people from the Middle East paying their $50 to ride up and down the ski lift basking in the glory of being cold.  The whole place is chilled to around 23F and the snow is glorious: it's soft and velvety because they actually make it snow indoors every night (unlike the ice blowers we use for man-made "snow" in the USA).  I skied for my full 2 hours without a break and I loved that I was so cold by the end that I had to go back into the mall and get a soy hot chocolate at Starbucks.

I had too many amazing experiences to recount (and this is already seeming a bit like an advertisement for Dubai) but the most fundamentally changing experience of my trip was a visit to a mosque in Dubai.  Our host, an eloquent British Muslim woman in a black abbeyah, spent an hour educating a group of Westerners all about Islam.  I was taken aback by how... peaceful their religion is.  She covered the 5 pillars of Islam in a way that made me understand 1,000,000,000+ Muslims far better than I ever have.  I think that if everyone in the Western world could attend that one hour I did, we would have a level of cultural understanding that would ease a ton of our current fears.  They let us take lots of pictures and even let us video them doing 5 minutes of prayers.  It was moving and if you are ever in Dubai, make sure you visit the Jumeirah Mosque during one of their visitation hours (it's free unlike most everything else in Dubai).

My final presentation was in Dubai and it was a bittersweet end to a whirlwind week.  Every single day was spent traveling, speaking, or both, so it was nice to finally have a break.  That said, I will consider the other members of the OTN MENA 5 to be friends for life and I miss them already.  As I finished my presentation on Big Data (for the third time on the trip), I looked out at the anxious faces in the audience and realized that I would miss the Muslim world far more than I ever expected to.  

If there's ever a 2nd OTN tour of the Middle East, Africa, or both, sign me up.  Until then, thank you for letting me be a part of the most inspirational, educational tour I've ever experienced.

Categories: BI & Warehousing

Introducing the Updated Oracle / Rittman Mead Information Management Reference Architecture Pt1. – Information Architecture and the “Data Factory”

Rittman Mead Consulting - Fri, 2014-06-13 05:39

One of the things at Rittman Mead that we’re really interested in is the architecture of “information management” systems, and how these change over time as thinking, and product capabilities, evolve. In fact we often collaborate with the Enterprise Architecture team within Oracle, giving input into the architecture designs they come up with, and more recently working on a full-blown collaboration with them to come up with a next-generation Information Management architecture. In these two posts I wanted to share some of our recent thinking in this area, looking first at our new proposed architecture, and then in the second post talking about how we’d use agile development methods, in particular our “ExtremeBI” development approach, to deliver it.

But first, some history. Back in 2009 I blogged about a first-generation DW reference architecture which introduced a couple of new concepts, based on new capabilities from tools such as OBIEE plus some thinking we, and the Enterprise Architecture team at Oracle, had been doing over the years. This reference architecture introduced the concept of “Foundation” and “Access and Performance” layers, and recognised the reality that Kimball-type star schemas were great for querying but not so good for long-term, query-neutral storage of data, whilst Inmon-style EDW models were great as a long-term, process-neutral system of record but not so good for running user queries against. This new architecture included both of these design approaches, with the foundation layer forming the “information management” layer and the access and performance layer being the “information access” layer. Most importantly, tools like OBIEE made it possible for enterprises to create metadata layers that potentially accessed all layers in this model, so users could query the foundation layer if needed as well as the access and performance layer, if the foundation layer was a better source of data for particular reports.


A second revision to this model, a couple of years later, expanded on the original one and introduced another two interesting concepts, brought about by the introduction of tools like Endeca Information Discovery and the rise of unstructured and semi-structured data sources. This new architecture added unstructured and semi-structured sources into the model, and also introduced the concept of “sandboxes”, areas of the information management model that allowed more free-form, exploratory BI applications to be built.


But in practice, this idea of “unstructured” and “semi-structured” sources wasn’t all that helpful. What really started to make an impact in the past couple of years is the increasing use of “schema-on-read” databases, where we trade off the performance and defined structure of traditional relational 3NF and star schemas for the flexibility and “time-to-value” provided by key-value store databases. The Endeca Server is a good example of this type of database: it allows rapid storage of loosely-associated datasets, and tools like Endeca Studio then apply a structure to the data at the time of analysis. Schema-on-read databases are great for fast, flexible access to datasets, but the cost of ETL is then borne by each system that accesses the data.

Probably the most well-known examples of schema-on-read sources though are Hadoop, and NoSQL databases. Coupled with their ability to store lots of detail-level data at relatively low cost, Hadoop and NoSQL databases have significantly affected the overall landscape for BI, data warehousing and business analytics, and we thought it was about time for a new reference architecture that fully incorporated the capabilities and latest thinking around this area. Back at the start of 2014, Jon Mead, Stewart Bryson and I met up with Oracle’s Andrew Bond and his team for a series of workshops, and what came out of it was an updated Information Management Architecture *and* a development methodology for delivering it. Let’s start off then by looking at this updated architecture from a conceptual view.


At a conceptual level, we build on this idea of a sandbox environment and formally separate things out into the Execution area – business-as-usual, production and development areas – and an Innovation area, where we build on the idea of a sandbox and rename it the “Discovery Lab”. The Discovery Lab is where, for want of a better word, the “data scientists” work with fewer constraints on development; its inputs are events and data, and its outputs are the discoveries that can be the prototype and inspiration for designs going into the Execution area.

The main “engine” of the Execution area is our enterprise store of data, this time broken down into four areas:

  • A “data reservoir” where we store all incoming events and data at detail-level, typically on HDFS. This blog article by Oracle’s Jean-Pierre Dijcks sets out the concept of a data reservoir well, and I like this blog by Scaleabilities’ Jeff Needham where he makes the case for calling it a “data reservoir” that can ingest, process and analyse data rather than a “data lake”, which implies a passive store.
  • An Enterprise Data Store, analogous to the enterprise data warehouses we use today, and a reporting component, typically in our case OBIEE
  • Most importantly, the new concept of a “data factory”, a conduit between the data reservoir and the enterprise information store

Together, the execution and innovation layers form our “information platform”, with the event engine feeding real-time events into the platform and outputting them into the data reservoir, and traditional ETL routines loading structured data from the enterprise into the enterprise information store.

This conceptual architecture then permits several types of information application. For example, the data reservoir and the data factory together could support what we call “data applications”, applications working on semi-structured, large and low-granularity data sets such as those used for genomic analysis.


Other applications might be more traditional BI and data warehousing applications, but with the addition of data from the data reservoir and the analysis capabilities of Hadoop.


The discovery lab can be a standalone area, or the insights and discovery it outputs can be used as inputs into the main information platform. More event-based data will typically come in via the event engine, with its output going into the data reservoir and supporting “next-best-decision” applications like Oracle Real-Time Decisions.

Another way of looking at this architecture is from a logical perspective, in particular focusing on the data layers and access/loading processes to load them. The diagram below is our latest version of the two diagrams at the start of this article, and as you can see we’ve kept the data sources and BI element much the same, and kept the concept of the sandbox, in this case refined as the “discovery lab sandbox”.


What is different this time though is the middle bit; we’ve lost the staging area and replaced it with the raw data reservoir, added a “Rapid Development Sandbox”, and drawn the main layers as a slanted set of stacked areas. So why?

What we’re trying to show with the slanted data layers is the relative cost of data ingestion (loading), and the relative cost of accessing it (information interpretation). For the raw data reservoir, for example, there’s little cost in ingesting the data – maybe copy some files to HDFS, or use Flume or GoldenGate to capture log or transaction data to HDFS or Hive – but the cost is then borne in accessing this typically “schema-on-read” data source. As you go up the stack, there’s a bit more work in landing data into the Foundation layer – ETL routines, GoldenGate routines, some data cleaning and constraint checking, for example – but it’s correspondingly easier to get data out. For the Access and Performance Layer there’s the most cost in getting data in, but then users have very little work to do when getting data out.
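
To make that trade-off concrete, landing data in the raw data reservoir can be as little work as an HDFS file copy, with the interpretation cost only paid when somebody queries it – the paths and table name below are illustrative only:

# ingestion into the raw data reservoir is cheap - just copy the files in
hadoop fs -mkdir -p /data/reservoir/weblogs
hadoop fs -put access_log-2014-06-* /data/reservoir/weblogs/

# the cost is paid at access time, when a schema is projected onto the raw files
hive -e "CREATE EXTERNAL TABLE IF NOT EXISTS weblogs_raw (line STRING)
         LOCATION '/data/reservoir/weblogs';
         SELECT count(*) FROM weblogs_raw;"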

Data can move up the stack from Raw Data Reservoir to Foundation, or directly into Access and Performance, or it could be landed at levels above Raw Data Reservoir, for example in our ExtremeBI approach where we use GoldenGate to replicate source system tables directly into Foundation without going through a staging layer. The Rapid Development Sandboxes are there to support agile, iterative development, with the output from them either being the result in itself, or their designs and insights being used to create more formal projects and data structures.

From a more product-centric perspective, you can overlay these types of diagrams with specific schematics for example enterprises. For example, in the diagram below you can see Oracle NoSQL Database being used with HDFS and the Oracle Big Data Connectors to capture and store events from Complex Event Processing, with outputs from CEP also being fed into a more traditional, “high density data” store as well as directly into a decision engine.


So this all sounds great, but how do you build it? Do we have to use the (discredited) step-by-step, waterfall method to build this type of architecture, and in particular the key “data factory” element that provides connectivity between the Raw Data Reservoir and the Enterprise Information Store? And can we apply agile methods to big data sources, as well as regular databases and applications? Check back on Monday for our thoughts on how this should be done.

Categories: BI & Warehousing

End-to-End ODI12c ETL on Oracle Big Data Appliance Pt.5 : Bulk Unload to Oracle

Rittman Mead Consulting - Fri, 2014-06-13 04:00

All week I’ve been looking at what’s involved in moving data around Hadoop on the Oracle Big Data Appliance, using ODI12c to orchestrate the end-to-end process. Using web log data from the Rittman Mead website, I first landed the log data on HDFS using Apache Flume, then transformed the data in several stages using Hive transformations built using ODI mappings and knowledge modules. The links below give the previous articles in this series, where you can read how I transformed data using Hive, Sqoop and Python streaming:

At the end of all these ODI12c transformations, I’m left with a Hive table containing summarised web page accesses, enhanced with reference data on the page title and author, and with the entries geocoded with the name of the country associated with the page access. In many cases, this is where I’d leave the data, but quite often it’d be useful to then export the data out of Hadoop and into an Oracle database, so that I could load it into a data warehouse, or analyse it using a richer SQL dialect than I can get with HiveQL. I could use Sqoop to get the data out of Hive and into Oracle, but for larger exports in particular I’d get better unload performance by using Oracle Loader for Hadoop, one of the Oracle Big Data Connectors that typically come with Oracle Big Data Appliance.

There are actually two Oracle Big Data Connectors that deal with getting data out of Hadoop and into Oracle; Oracle Direct Connector for HDFS (ODCH) gives you the ability to define an Oracle Database external table over an HDFS file or Hive table, and is useful if you just want to access the data in Hadoop without actually loading it into an Oracle database.


The way ODCH works is that it adds a special type of adapter to the external table feature in Oracle Database, that gives the database the ability to stream HDFS files in a similar way to how external tables stream regular file data into the database. This makes ODCH very convenient if you just want access to HDFS or Hive data from an Oracle database, but it suffers from the same performance penalty as regular external flat file access, in that you can’t index the table and so forth; so if you want high-performance access to data held in a Hadoop system, and you need this access more than just occasionally, you’re better off copying the data right into the Oracle database, and this is where Oracle Loader for Hadoop comes in.

Oracle Loader for Hadoop (OLH) is a bulk-unloader for Hadoop that uses MapReduce technology to sort, partition and organise the data extraction at the Hadoop end, and then fast-unload the data into the Oracle Database typically much faster than a Sqoop data transfer.
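
For a sense of what gets automated further down in this article, a standalone OLH job (run outside of ODI) is just a MapReduce job submission driven by an XML configuration file, along these lines – the configuration file name and path are placeholders, and HADOOP_CLASSPATH needs to include the OLH jlib directory as shown in the .bashrc entries later on:

# submit an Oracle Loader for Hadoop job by hand
# (the job configuration file would name the Hive source table, the target
#  Oracle table and the JDBC or OCI connection details - a placeholder here)
export OLH_HOME=/home/oracle/oracle/product/oraloader-3.0.0-h2

hadoop jar $OLH_HOME/jlib/oraloader.jar oracle.hadoop.loader.OraLoader \
       -conf /home/oracle/olh/olh_job_conf.xml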


OLH is, however, a bit fiddly to set up, so what’s useful is that ODI has a knowledge module, IKM File/Hive to Oracle, that can set up both OLH and ODCH jobs. This knowledge module does have its own setup process that itself is a bit involved – mostly around environment variables on your Linux system – but once it’s done, each OLH data transfer is then very easy to set up, and as long as you don’t then go on to do any more transformation on the Oracle data post-unload, it’s covered by the ODI limited-use license you get when you buy it in conjunction with the BDA.

So the key things to get OLH working with ODI are, firstly, to install OLH on the BDA node that’s running the ODI12c agent (or in my case, full ODI Studio), and then to set up the Oracle user’s environment variables so OLH works, and so ODI knows where to find any additional Hadoop / Hive libraries and configuration files to work with OLH. In my case, this meant adding the following entries to my .bashrc file:

export HIVE_HOME=/usr/lib/hive
export HADOOP_CLASSPATH=/home/oracle/oracle/product/oraloader-3.0.0-h2/jlib/*:/etc/hive/conf:$HIVE_HOME/lib/hive-metastore-0.12.0-cdh5.0.1.jar:$HIVE_HOME/lib/libthrift.jar:$HIVE_HOME/lib/libfb303-0.9.0.jar:$HIVE_HOME/lib/hive-common-0.12.0-cdh5.0.1.jar:$HIVE_HOME/lib/hive-exec-0.12.0-cdh5.0.1.jar
export OLH_HOME=/home/oracle/oracle/product/oraloader-3.0.0-h2
export HADOOP_HOME=/usr/lib/hadoop
export JAVA_HOME=/usr/java/jdk1.7.0_60
export ODI_HIVE_SESSION_JARS=/usr/lib/hive/lib/hive-contrib.jar
export ODI_OLH_JARS=/home/oracle/oracle/product/oraloader-3.0.0-h2/jlib/ojdbc6.jar,/home/oracle/oracle/product/oraloader-3.0.0-h2/jlib/orai18n.jar,/home/oracle/oracle/product/oraloader-3.0.0-h2/jlib/orai18n-utility.jar,/home/oracle/oracle/product/oraloader-3.0.0-h2/jlib/orai18n-mapping.jar,/home/oracle/oracle/product/oraloader-3.0.0-h2/jlib/orai18n-collation.jar,/home/oracle/oracle/product/oraloader-3.0.0-h2/jlib/oraclepki.jar,/home/oracle/oracle/product/oraloader-3.0.0-h2/jlib/osdt_cert.jar,/home/oracle/oracle/product/oraloader-3.0.0-h2/jlib/osdt_core.jar,/home/oracle/oracle/product/oraloader-3.0.0-h2/jlib/commons-math-2.2.jar,/home/oracle/oracle/product/oraloader-3.0.0-h2/jlib/jackson-core-asl-1.8.8.jar,/home/oracle/oracle/product/oraloader-3.0.0-h2/jlib/jackson-mapper-asl-1.8.8.jar,/home/oracle/oracle/product/oraloader-3.0.0-h2/jlib/avro-1.7.3.jar,/home/oracle/oracle/product/oraloader-3.0.0-h2/jlib/avro-mapred-1.7.3-hadoop2.jar,/home/oracle/oracle/product/oraloader-3.0.0-h2/jlib/oraloader.jar,/usr/lib/hive/lib/hive-metastore.jar,/usr/lib/hive/lib/libthrift-0.9.0.cloudera.2.jar,/usr/lib/hive/lib/libfb303-0.9.0.jar,/usr/lib/hive/lib/hive-common-0.12.0-cdh5.0.1.jar,/usr/lib/hive/lib/hive-exec.jar

where OLH was installed to /home/oracle/oracle/product/oraloader-3.0.0-h2, Hadoop was installed to /usr/lib/hadoop and Hive to /usr/lib/hive. ODI_HIVE_SESSION_JARS is optional, but if you do anything fancy with SerDes in your mappings you’re best adding this entry in, pointing to the hive-contrib.jar file (or technically the symbolic link to it). Getting this right was the hardest part about setting this KM up, but once it’s done, it’s done.

Then, to set the ODI parts up, I first create a table in my Oracle 11.2.0.3 database with matching datatypes for the Hive table I’m going to unload – VARCHAR2 for string, NUMBER for int, and so on – and then create entries in the ODI topology and Designer navigator. Then, I create a new mapping in ODI12c, where I copy the columns from my Hive table into the Oracle table, like this (note that I could map from an HDFS file instead, if I wanted to):


Now it’s a case of configuring the Physical part of the mapping. There are two steps to this; first, you have to change the default LKM setting for the access table in the physical mapping, the ACCESS_F object in the TARGET_GROUP below:


By default this sets itself to LKM SQL to Oracle (as you’ve got an Oracle target), but we need to change this to a special one, LKM SQL Multi-Connect (below), that delegates the multi-connect capabilities to the downstream node, so you can use a multi-connect IKM such as IKM File/Hive to Oracle. 


 

Then, and only once you’ve selected this LKM, you can select IKM File/Hive to Oracle for the target table KM.


There’s not much you need to enable with this KM to at least get started. Set OLH_OUTPUT_MODE to JDBC (you can enable OCI later on instead, for potentially faster unloading) and set MAPRED_OUTPUT_BASE_DIR to an HDFS directory that the agent’s OS user (typically “oracle”) can read and write to – in my case, I created one specifically for this KM, using Hue’s File System browser tool. Then, if all’s set up correctly you should be able to execute the mapping and view the results in the Operator navigator – note that in my case, it took several attempts before I finally got all the environment variable settings right. Once done, you should then be able to view the Hive table output in your Oracle table, using SQL Developer.
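
If you’d rather create that HDFS directory from the command line than through Hue, something like the following does the job – the directory path is just an example rather than anything the KM mandates:

# create a scratch directory in HDFS for the KM's MapReduce output, owned by
# the OS user that runs the ODI agent (typically "oracle")
sudo -u hdfs hadoop fs -mkdir -p /user/oracle/olh_output
sudo -u hdfs hadoop fs -chown oracle:oracle /user/oracle/olh_output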


So now, we’ve got each step for our end-to-end process. The only thing left to do is to pull all of these steps, including the ODI Procedure to run the sqoop transfer, into an ODI package or load plan, so that we can run the steps in sequence:


Then, finally, I can run the whole process as a single execution, moving data from Flume capture through Hadoop ETL steps, to the unload into an Oracle table, all from within ODI12c.


So there you have it – Hadoop-native ETL processing, with the metadata-driven design, error handling and heterogeneous connectivity provided by ODI12c.

Categories: BI & Warehousing