Rittman Mead Consulting
If you follow blogs and tweets from the Oracle community you won’t have missed hearing about the recent release of the first patchset for Oracle 12c. With this release come some significant pieces of new functionality that will be of interest to data warehouse DBAs and architects. The headline feature that most Oracle followers will know of is the new in-memory option. In my opinion this is a game-changer for how we design reporting architectures; it gives us an effective way to build operational reporting over the reference data architecture described by Mark Rittman a few weeks ago. Of course, the database team here at Rittman Mead have been rolling up our sleeves and getting into in-memory technology for quite a while now; Mark even featured in the official launch presentation by Larry Ellison, with the now famous “so easy it’s boring” quote. Last week Mark published the first of our Rittman Mead in-memory articles, with the promise of more in-memory articles to come, including my article for the next edition of UKOUG’s “Oracle Scene”.
However, the in-memory option is not the only new feature that is going to benefit us in the BI/DW world. Of the two new features I am going to describe, one is Exadata-only, but the first one I am going to mention is generally available in the 12.1.0.2 database.
Typically, data warehouse queries are different from those seen in the OLTP world – in DW we tend to access a large number of rows and probably aggregate things up to answer some business question. Often we are not using indexes; instead, scanning tables or table partitions is the norm. Usually, the data we need to aggregate is widely scattered across the table or partition. Data warehouse queries often look at records that share a set of common attributes: we look at the sales for the ‘ACME’ widget, or the value of items shipped to Arizona. For us there can be great advantage if data we use together is stored together, and this is where Attribute Clustering can play a part.
Attribute Clustering is usually configured on the table at DDL time and in effect controls the ordering of data inserted by DIRECT PATH operations. Oracle does not enforce this ordering for conventional inserts. This may not be an issue in data warehouses, as bulk-batch operations typically use APPEND inserts (which are direct path inserts) or partition operations, but it may be more of an issue with some of the real-time, conventional path loading paradigms. In addition to direct path load operations, Attribute Clustering can also occur when you do ALTER TABLE MOVE type operations (this also includes operations such as PARTITION SPLIT). On the surface, Attribute Clustering sounds little different from using an ORDER BY on an append insert and hoping that Oracle actually stores the data where you expect it to.
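To make the distinction concrete, here is a minimal sketch (table names are hypothetical) of the kinds of operation that do and don’t honour a clustering clause:

-- Direct path load: rows are ordered according to the table's clustering clause
INSERT /*+ APPEND */ INTO sales_fact
SELECT * FROM stg_sales;
COMMIT;

-- A conventional-path INSERT ... VALUES, by contrast, does not enforce the clustering order

-- Reorganising existing data also (re-)applies the clustering
ALTER TABLE sales_fact MOVE;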
However, Attribute Clustering gives us two other possibilities in how we can order the data in the cluster. Firstly, we can cluster on columns from JOINED dimension tables. For example, in a SALES DW we may have a sales fact with a product key at the SKU level, but we often join to the product dimension and report at the Product Category level. In this case we can cluster our sales fact table so that each product category appears in the same cluster. Suppose we have just opened a chain of supermarkets with a wide but uninspiring range of brands and products (see the tiny piece of our product dimension table below).
As you can see, our Product PK has no relationship at all to the type of product being sold. In our Kimball-style data warehouse we typically store the product key on the fact table and join to the product dimension to obtain all of the other product attributes and hierarchy members. This is essentially what we can do with join Attribute Clustering: in our example we can cluster our fact table on PRODUCT_CATEGORY, so that all of the Laundry sales are physically close to each other in the fact table.
CREATE TABLE rm_sales
( product_id      NUMBER NOT NULL,
  store_id        NUMBER NOT NULL,
  sales_date      DATE NOT NULL,
  loyalty_card_id NUMBER,
  quantity_sold   NUMBER(3) NOT NULL,
  value_sold      NUMBER(10,2) NOT NULL
)
CLUSTERING
  rm_sales JOIN products ON (rm_sales.product_id = products.product_pk)
  BY LINEAR ORDER (sales_date, product_category, store_id);
Notice we are clustering on a join to the product dimension table’s PRODUCT_CATEGORY column. We are also clustering on sales_date; this is especially important in the case of partitioned fact tables, so that the benefits of clustering align with the partitioning strategy. Nor are we restricted to just one join in our clustering: if we wanted to, we could also cluster our sales by store region, e.g. so that the Colorado laundry product sales are located in the same area of the sales table. To use join Attribute Clustering we need to define the PK/FK relationships between fact and dimension; however, it is always good practice to have those in place anyway, as they help the CBO greatly with query plan evaluation.
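As a sketch of the constraints this needs, against the rm_sales and products tables from the example above (constraint names are my own; RELY with NOVALIDATE is a common DW pattern that declares the relationship to the optimizer without validating the existing data):

-- Dimension primary key on the join column
ALTER TABLE products
  ADD CONSTRAINT products_pk PRIMARY KEY (product_pk);

-- Fact foreign key back to the dimension
ALTER TABLE rm_sales
  ADD CONSTRAINT rm_sales_product_fk FOREIGN KEY (product_id)
  REFERENCES products (product_pk) RELY ENABLE NOVALIDATE;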
Secondly, notice the BY LINEAR ORDER clause in the table DDL. Of the two ordering options, linear order is the more basic form of clustering: in this case we have our data structured so that all the items for a sales day are clustered together, within that cluster we order by product category, and those categories are in turn ordered by store_id. The other way we can cluster is BY INTERLEAVED ORDER; here, Oracle maps a combination of dimensional values to a single cluster value using a z-order curve-fitting approach. This sounds complex, but it ensures that items that are frequently queried together are co-located in the disk blocks in the storage.
Interleaved ordering is probably the best choice for data warehousing, as it aligns well with how we access data in our queries. Although we could include all of the dimension keys in our ordering list, it is going to be of more benefit to include just a subset of dimensions; typically for retail I’d go with DATE (or something that correlates to the time-based partition key of the fact table), the product and the store. Of course we can again join to the dimension tables and cluster at higher hierarchy levels such as product category and store region. The Oracle 12c Data Warehousing guide gives some good advice, but you can’t go far wrong if you think about clustering items together that will be queried together.
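As a sketch, re-using the fact table from the example above, the clustering clause could be swapped over to interleaved ordering like this (note that existing rows aren’t reorganised until the next direct path load or an ALTER TABLE ... MOVE):

ALTER TABLE rm_sales DROP CLUSTERING;

ALTER TABLE rm_sales
  ADD CLUSTERING rm_sales JOIN products ON (rm_sales.product_id = products.product_pk)
  BY INTERLEAVED ORDER (sales_date, product_category, store_id);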
Clustering data can give us some advantages in performance – better data compression and improved index range scans spring to mind – but to get the most benefit we should also look at another new feature, Zone Maps. Unlike Attribute Clustering, Zone Maps are Engineered Systems-only. In a way they are similar to the storage indexes already found on Exadata, but they have some additional advantages; they are also somewhat different from the zone maps encountered in other DB vendors’ products, such as Netezza.
In Exadata, a storage index can provide the maximum and minimum values encountered for a column in a storage cell. I say “can”, as there is no guarantee that the range for a given column is held in the storage index. Zone Maps, on the other hand, will always provide maxima and minima for all of the columns specified at zone map creation. The zone map is organised in terms of contiguous database blocks and is materialized, so it is physically persisted in the database and thus survives DB startups. Like materialized views, zone maps can become stale and need to be maintained.
We can define a zone map on one or more table columns and, just like Attribute Clustering, we may also create zone maps on table joins. As a table can only have one zone map, it is important to include all of the columns you wish to track. Zone Maps are designed to work well with Attribute Clustering; in fact it is just a simple DDL statement to add a zone map to an attribute-clustered table so that the zone map tracks the same attributes as the clustering. This is where we get the major performance boost from Attribute Clustering: instead of looking at the whole table, the zone map tells us which ranges of database blocks contain data that matches our query predicates.
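As a hedged sketch of that DDL, again using the rm_sales example from earlier (the zone map name is my own):

-- Add a zone map that tracks the same columns as the table's clustering clause
ALTER TABLE rm_sales MODIFY CLUSTERING WITH MATERIALIZED ZONEMAP;

-- Alternatively, create one explicitly on chosen columns (remember: one zone map per table)
CREATE MATERIALIZED ZONEMAP rm_sales_zmap ON rm_sales (sales_date, store_id);

-- Like a materialized view, a stale zone map can be rebuilt
ALTER MATERIALIZED ZONEMAP rm_sales_zmap REBUILD;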
Zone Maps with Attribute Clustering gives us another powerful tool to boost DW performance on Exadata – we can do star queries without resorting to bitmap indexes and we minimise IO when scanning fact tables as we only need look where we know the data to be. Exciting times!
The In-Memory Option for Oracle Database 12c became available a few weeks ago with the 12.1.0.2 database patchset, adding column-store and in-memory capabilities to the Oracle Database. Unlike pure in-memory databases such as Oracle TimesTen, the in-memory option adds an in-memory column-store feature to the regular row-based storage in the Oracle database, creating in-memory copies of selected row-store tables in a compressed column-based storage format, with the whole process being automatic and enabled by issuing just a couple of commands – to the point where my feedback at the end of beta testing was that it was “almost boring” – said slightly tongue-in-cheek…
But of course adding in-memory capabilities to the Oracle database is anything but boring – whilst TimesTen on Exalytics has given us the ability to store aggregate tables, and “hot data” tables in a dedicated in-memory database co-located with the BI mid-tier, TimesTen is another set of skills to learn and another product to manage. Moreover, you’ve got to somehow get the data you want to aggregate or otherwise cache out of your data warehouse and into TimesTen, and then you’ve got the even more difficult task of keeping that TimesTen dataset in-sync with your main data warehouse data. Most importantly though, even with the 2TB and 4TB versions of Exalytics many data warehouses won’t fit entirely into memory, so you’re going to end-up with some of your data in TimesTen and some in your data warehouse, and with only the TimesTen dataset benefiting from in-memory speeds.
So what if you could enable the in-memory option on your data warehouse, and then just run it all in-memory there? To see how this worked, I thought it’d be interesting to take the flight delays dataset that comes with the latest OBIEE 11g v406 SampleApp, and copy that across to an Oracle 12.1.0.2.0 database to test it out with the in-memory option. This dataset comes with a set of dashboard pages and analyses and contains around 6m rows of data, and in SampleApp is accelerated by an aggregate schema, also stored in Oracle, that generally reduces query times down to just a few seconds.
So what I thought would be interesting to check was, first, whether enabling the in-memory option sped up the dashboards, and second, whether having the in-memory option removed the need for the aggregate tables altogether. I was also interested to see whether putting the aggregate tables into memory improved their access time significantly, as anecdotally I’d heard that there wasn’t much clock-time difference between accessing aggregates in a TimesTen database vs. just creating them in the same data warehouse database as the source data. To set this all up I created the new 12.1.0.2.0 database, exported the BI_AIRLINES and BI_AIRLINES_AGG schemas out of the 12c database that comes with SampleApp v406, imported them into the same schema names in the new 12.1.0.2.0 instance on the new VM (rather than on the SampleApp VM), and then amended the connection pool details in the SampleApp RPD to point to this new, external Oracle 12.1.0.2.0 database instance. The 12.1.0.2.0 database at this point had the following memory allocation:
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 6325010432 bytes
Fixed Size                  2938448 bytes
Variable Size            1207962032 bytes
Database Buffers         5100273664 bytes
Redo Buffers               13836288 bytes
Database mounted.
Database opened.
So that I didn’t end up comparing the larger database instance on my 12.1.0.2.0 VM with the smaller one the airlines data came from on the SampleApp VM, I created my baseline response time measurements on this new VM with none of the in-memory features enabled, and ran some of the dashboard pages and clicked on some of the prompt values – and the response time wasn’t bad, with just the odd analysis taking an excessive time to return. Mostly though, the results for each dashboard came back straight away, what you’d expect when summary tables have been deployed to speed up reporting.
Looking at the SQL being generated by the BI Server, you can quickly see why this is the case; the BI Server is using aggregate navigation and actually querying the BI_AIRLINES_AGG schema to return the aggregated results for the dashboard, as you can see from one of the SQL statements retrieved from the NQQuery.log file below:
select sum(T255906.Dep_Delay_00039B26) as c1,
       sum(T255906.Arr_Delay_00039B22) as c2,
       sum(T255906.Z_of_Fligh00039B28) as c3,
       substr(T255216.Carrier00039BA9, 1, 25) as c4,
       T255216.Carrier00039BA9 as c5,
       T255216.Carrier_Co00039BAA as c6
from   BI_AIRLINES_AGG.SA_16_Dest00039D06 T255357,
       BI_AIRLINES_AGG.SA_Time_Mo00039CFB T255737,
       BI_AIRLINES_AGG.SA_31_Carr00039CEB T255216,
       BI_AIRLINES_AGG.FACT_AGG_OR_06M T255906
where  ( T255216.Carrier_Co00039BAA = T255906.Carrier_Co00039BAA
         and T255357.Dest_Airpo00039C2A = T255906.Dest_Airpo00039C2A
         and T255737.Dep_Month00039C07 = T255906.Dep_Month00039C07
         and substr(T255216.Carrier00039BA9, 1, 25) = 'SunFlower Airlines'
         and (T255357.Dest_Regio00039C31 in ('Midwest Region', 'Northeast Region', 'South Region', 'West Region'))
         and T255737.Month_of_Y00039C0F between 3 and 12 )
group by T255216.Carrier00039BA9, T255216.Carrier_Co00039BAA
order by c5, c6, c4
OBIEE will always use these aggregate tables if they’re available in the repository, so to enable me to test the reports with these aggregates disabled I borrowed the technique Robin introduced in his blog on testing aggregate navigation, and added a request variable prompt to the dashboard page that allows me to pass an INACTIVE_SCHEMAS value to the logical SQL queries issued for the dashboard analyses, and therefore tell the BI Server to ignore the aggregate schema and just use the detail-level BI_AIRLINES schema instead.
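The effect is to prefix each analysis’ logical SQL with a request variable along these lines (the exact quoting here is indicative rather than definitive):

SET VARIABLE INACTIVE_SCHEMAS='"BI_AIRLINES_AGG"';

With that variable set, the BI Server ignores physical objects in the listed schema when choosing logical table sources, so aggregate navigation falls back to the detail-level BI_AIRLINES sources.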
I therefore tested the dashboards again, this time with the BI_AIRLINES_AGG schema disabled; this time, every dashboard took a while to display properly, with every analysis showing the “spinning clock” for twenty or thirty seconds at a time. Compare the response times to the ones from a moment ago when aggregates were enabled, and you can see the difference.
So let’s set up the in-memory option and see how it affects these two scenarios. The in-memory option for Oracle Database 12c 12.1.0.2.0 is installed by default as part of the core database, but doesn’t start working until you set a value for the INMEMORY_SIZE initialisation parameter – as you can see from the SQL*Plus output below, it’s currently set to zero, effectively disabling the feature:
SQL> conn / as sysdba
Connected.
SQL> show parameter INMEMORY

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
inmemory_clause_default              string
inmemory_force                       string      DEFAULT
inmemory_max_populate_servers        integer     0
inmemory_query                       string      ENABLE
inmemory_size                        big integer 0
inmemory_trickle_repopulate_servers_ integer     1
percent
optimizer_inmemory_aware             boolean     TRUE
The memory Oracle allocates to the in-memory option, via this initialisation parameter, comes out of the SGA and therefore before I set the INMEMORY_SIZE parameter, it’d be a good idea to increase the SGA_TARGET value to accommodate the 1G I’m then going to assign to INMEMORY_SIZE. I do both from the SQL*Plus command-prompt and then bounce the database to bring the new values into use:
SQL> alter system set inmemory_size = 1G scope=spfile;

System altered.

SQL> show parameter sga_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sga_target                           big integer 6032M

SQL> alter system set sga_target = 7032M scope=spfile;

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 7381975040 bytes
Fixed Size                  2941480 bytes
Variable Size            1207963096 bytes
Database Buffers         5083496448 bytes
Redo Buffers               13832192 bytes
In-Memory Area           1073741824 bytes
Database mounted.
Database opened.
Now we’re at the point where we can enable the tables for in-memory access. Tables to be copied into memory can either be created like that at the start, or you can ALTER TABLE them afterwards and specify that they go into memory (or you can configure the in-memory settings at the tablespace level, or even specify particular columns or partitions to go into memory). The underlying data still gets stored row-wise on disk, but enabling a table for in-memory access tells the Oracle database to create column-store in-memory representations of the table and direct suitable queries to those copies, all the time in the background keeping the copy in-sync with the row-store base data.
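As a quick sketch of those syntax variants (object names are hypothetical, and the compression levels are just examples):

-- declared at table creation time
CREATE TABLE sales_hot
( sale_id NUMBER,
  amount  NUMBER )
INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY HIGH;

-- retro-fitted to an existing table, excluding a column from the column store
ALTER TABLE sales_hot INMEMORY NO INMEMORY (amount);

-- or defaulted for every new table in a tablespace
ALTER TABLESPACE users DEFAULT INMEMORY MEMCOMPRESS FOR QUERY LOW;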
I’ll start by enabling all of the BI_AIRLINES schema for in-memory access, as the biggest pay-off would be if that meant we didn’t need to maintain the aggregate tables. After running a SELECT table_name FROM user_tables to list out the table names, I then ran a script to enable all the tables for in-memory access, with the in-memory copies being populated immediately:
alter table AIRCRAFT_GROUP inmemory priority high;
alter table AIRCRAFT_TYPE inmemory priority high;
alter table AIRLINES_USER_DATA inmemory priority high;
alter table AIRLINE_ID inmemory priority high;
alter table CANCELLATION inmemory priority high;
alter table CARRIER_GROUP_NEW inmemory priority high;
alter table CARRIER_REGION inmemory priority high;
alter table DEPARBLK inmemory priority high;
alter table DISTANCE_GROUP_250 inmemory priority high;
alter table DOMESTIC_SEGMENT inmemory priority high;
alter table OBIEE_COUNTY_HIER inmemory priority high;
alter table OBIEE_GEO_AIRPORT_BRIDGE inmemory priority high;
alter table OBIEE_GEO_ORIG inmemory priority high;
alter table OBIEE_ROUTE inmemory priority high;
alter table OBIEE_TIME_DAY_D inmemory priority high;
alter table OBIEE_TIME_MTH_D inmemory priority high;
alter table ONTIME_DELAY_GROUPS inmemory priority high;
alter table PERFORMANCE inmemory priority high;
alter table PERFORMANCE_ENDECA_MV inmemory priority high;
alter table ROUTES_FOR_LINKS inmemory priority high;
alter table SCHEDULES inmemory priority high;
alter table SERVICE_CLASS inmemory priority high;
alter table UNIQUE_CARRIERS inmemory priority high;
Looking at USER_TABLES from this schema, I can see all of the tables I selected now marked for in-memory access, for immediate loading:
SQL> @display_table_inmem_details.sql
SQL> select table_name
  2  ,      inmemory
  3  ,      inmemory_priority
  4  from   user_tables
  5  /

TABLE_NAME           INMEMORY INMEMORY
-------------------- -------- --------
AIRCRAFT_GROUP       ENABLED  HIGH
UNIQUE_CARRIERS      ENABLED  HIGH
SERVICE_CLASS        ENABLED  HIGH
SCHEDULES            ENABLED  HIGH
ROUTES_FOR_LINKS     ENABLED  HIGH
PERFORMANCE          ENABLED  HIGH
ONTIME_DELAY_GROUPS  ENABLED  HIGH
OBIEE_TIME_MTH_D     ENABLED  HIGH
OBIEE_TIME_DAY_D     ENABLED  HIGH
OBIEE_ROUTE          ENABLED  HIGH
OBIEE_GEO_ORIG       ENABLED  HIGH

TABLE_NAME           INMEMORY INMEMORY
-------------------- -------- --------
OBIEE_GEO_AIRPORT_BR ENABLED  HIGH
IDGE
OBIEE_COUNTY_HIER    ENABLED  HIGH
DOMESTIC_SEGMENT     ENABLED  HIGH
DISTANCE_GROUP_250   ENABLED  HIGH
DEPARBLK             ENABLED  HIGH
CARRIER_REGION       ENABLED  HIGH
CARRIER_GROUP_NEW    ENABLED  HIGH
CANCELLATION         ENABLED  HIGH
AIRLINE_ID           ENABLED  HIGH

TABLE_NAME           INMEMORY INMEMORY
-------------------- -------- --------
AIRLINES_USER_DATA   ENABLED  HIGH
AIRLINES_PBLOB$      DISABLED
AIRLINES_PART$       DISABLED
AIRLINES_NODE_TZ$    DISABLED
AIRLINES_NODE$       DISABLED
AIRLINES_LINK_TZ$    DISABLED
AIRLINES_LINK_SCH$   DISABLED
AIRLINES_LINK$       DISABLED
AIRLINES_AIRPORT_TZ$ DISABLED
AIRCRAFT_TYPE        ENABLED  HIGH

30 rows selected.
And I can track the progress of the tables being copied into memory using the V$IM_SEGMENTS v$ view, like this:
SQL> @display_im_segments.sql
SQL> set echo on
SQL> set pages 200
SQL> col owner for a20
SQL> col name for a20
SQL> col status for a10
SQL> select v.owner
  2  ,      v.segment_name name
  3  ,      v.populate_status status
  4  from   v$im_segments v;

OWNER                NAME                 STATUS
-------------------- -------------------- ----------
BI_AIRLINES          OBIEE_COUNTY_HIER    COMPLETED
BI_AIRLINES          PERFORMANCE          STARTED
BI_AIRLINES          UNIQUE_CARRIERS      COMPLETED
BI_AIRLINES          AIRLINES_LINK_TZ$    COMPLETED
BI_AIRLINES          OBIEE_TIME_MTH_D     COMPLETED
BI_AIRLINES          AIRLINES_LINK_SCH$   COMPLETED
BI_AIRLINES          OBIEE_ROUTE          COMPLETED
BI_AIRLINES          DOMESTIC_SEGMENT     COMPLETED
BI_AIRLINES          AIRLINES_LINK$       COMPLETED
BI_AIRLINES          AIRLINE_ID           COMPLETED
BI_AIRLINES          OBIEE_GEO_ORIG       COMPLETED
BI_AIRLINES          AIRLINES_NODE$       COMPLETED
BI_AIRLINES          OBIEE_GEO_AIRPORT_BR COMPLETED
                     IDGE
BI_AIRLINES          AIRLINES_NODE_TZ$    COMPLETED
BI_AIRLINES          OBIEE_TIME_DAY_D     COMPLETED
Note that most of the tables went into memory immediately, but one (PERFORMANCE) is taking a while because it’s so big. Also note that not all tables are listed in the v$ view yet, as the database hasn’t got around to adding them into memory yet, or it might choose not to populate them if it feels the memory could be used better elsewhere. You can alter the priority of these in-memory copy processes if you want, and decide whether the copying happens immediately, or when the table is first scanned (accessed).
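For instance, something like the following (my own examples) would drop the big fact table back to populate-on-first-scan, or push a key dimension to the front of the population queue:

-- populate only when the table is first full-scanned
ALTER TABLE performance INMEMORY PRIORITY NONE;

-- populate ahead of everything else
ALTER TABLE obiee_time_day_d INMEMORY PRIORITY CRITICAL;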
Running the dashboards again, with the request variable prompt set to disallow the aggregate schema, gave me the response times below – the in-memory queries were much faster than the row-based, non in-memory ones, but they weren’t down to the response time of the dashboards right at the start, where all data was stored row-wise but we had aggregate tables to speed up the queries (note I’ve reordered the bars so the non in-memory queries with no aggregate tables are on the left of each series, as the slowest of all approaches).
Taking a look at one of the physical SQL queries for a cross-tab (hierarchical columns) analysis, you can see that in-memory table access is happening:
WITH SAWITH0 AS (
  select sum(T233937.ACTUALELAPSEDTIME) as c1,
         sum(T233937.ARRDELAYMINUTES) as c2,
         concat(concat(T233484.AIRPORT, ' - '), substr(T233484.TR_AIRPORT_NAME, instr(T233484.TR_AIRPORT_NAME, ': ') + 2, 50)) as c3,
         T233820.D_NAME as c4,
         T233820.R_NAME as c5
  from   BI_AIRLINES.OBIEE_GEO_AIRPORT_BRIDGE T233484 /* 10 GEO_AIRPORT_ORIGIN */ ,
         BI_AIRLINES.OBIEE_COUNTY_HIER T233820 /* 11 COUNTY_HIER_ORIGIN */ ,
         BI_AIRLINES.OBIEE_TIME_MTH_D T233732 /* 41 TIME MONTH */ ,
         BI_AIRLINES.PERFORMANCE T233937 /* 00 PERFORMANCE FULL */
  where  ( T233484.AIRPORT = T233937.ORIGIN
           and T233484.STCTY_FIPS = T233820.SC_CODE
           and T233732.Cal_Month = T233937.MONTH
           and T233732.Cal_Qtr = T233937.QUARTER
           and T233732.Cal_Year = T233937.YEAR
           and T233732.Cal_Month between 6 and 12
           and T233937.MONTH between 6 and 12 )
  group by T233820.D_NAME, T233820.R_NAME,
           concat(concat(T233484.AIRPORT, ' - '), substr(T233484.TR_AIRPORT_NAME, instr(T233484.TR_AIRPORT_NAME, ': ') + 2, 50))),
SAWITH1 AS (
  select sum(T233609.PASSENGERS) as c1,
         T233820.R_NAME as c2,
         T233820.D_NAME as c3,
         concat(concat(T233484.AIRPORT, ' - '), substr(T233484.TR_AIRPORT_NAME, instr(T233484.TR_AIRPORT_NAME, ': ') + 2, 50)) as c4,
         sum(T233609.PASSENGERS_MILES) as c5
  from   BI_AIRLINES.OBIEE_GEO_AIRPORT_BRIDGE T233484 /* 10 GEO_AIRPORT_ORIGIN */ ,
         BI_AIRLINES.OBIEE_COUNTY_HIER T233820 /* 11 COUNTY_HIER_ORIGIN */ ,
         BI_AIRLINES.OBIEE_TIME_MTH_D T233732 /* 41 TIME MONTH */ ,
         BI_AIRLINES.DOMESTIC_SEGMENT T233609 /* 01 DOMESTIC Segment */
  where  ( T233484.AIRPORT = T233609.ORIGIN
           and T233484.STCTY_FIPS = T233820.SC_CODE
           and T233609.MONTH = T233732.Cal_Month
           and T233609.QUARTER = T233732.Cal_Qtr
           and T233609.YEAR = T233732.Cal_Year
           and T233609.MONTH between 6 and 12
           and T233732.Cal_Month between 6 and 12 )
  group by T233820.D_NAME, T233820.R_NAME,
           concat(concat(T233484.AIRPORT, ' - '), substr(T233484.TR_AIRPORT_NAME, instr(T233484.TR_AIRPORT_NAME, ': ') + 2, 50)))
select 2 as c1,
       case when D1.c3 is not null then D1.c3 when D2.c4 is not null then D2.c4 end as c2,
       case when D1.c4 is not null then D1.c4 when D2.c3 is not null then D2.c3 end as c3,
       case when D1.c5 is not null then D1.c5 when D2.c2 is not null then D2.c2 end as c4,
       'All Orig Airports' as c5,
       1 as c6,
       case when D1.c4 is not null then D1.c4 when D2.c3 is not null then D2.c3 end as c7,
       case when D1.c5 is not null then D1.c5 when D2.c2 is not null then D2.c2 end as c8,
       cast(D1.c2 as DOUBLE PRECISION) / nullif(D1.c1, 0) * 100 as c9,
       D2.c5 as c10,
       D2.c1 as c14
from   SAWITH0 D1 full outer join SAWITH1 D2
       On D1.c3 = D2.c4 and SYS_OP_MAP_NONNULL(D1.c4) = SYS_OP_MAP_NONNULL(D2.c3)
order by c4, c3, c2

SQL> @complex_query_explain.sql

Explained.
SQL> set lines 200
SQL> set pages 0
SQL> select * from table(dbms_xplan.display);

Plan hash value: 3097908901

----------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2673K| 1392M| | 112K (1)| 00:00:05 | | | |
| 1 | PX COORDINATOR | | | | | | | | | |
| 2 | PX SEND QC (ORDER) | :TQ10006 | 2673K| 1392M| | 112K (1)| 00:00:05 | Q1,06 | P->S | QC (ORDER) |
| 3 | SORT ORDER BY | | 2673K| 1392M| 1492M| 112K (1)| 00:00:05 | Q1,06 | PCWP | |
| 4 | PX RECEIVE | | 2673K| 1392M| | 26819 (1)| 00:00:02 | Q1,06 | PCWP | |
| 5 | PX SEND RANGE | :TQ10005 | 2673K| 1392M| | 26819 (1)| 00:00:02 | Q1,05 | P->P | RANGE |
| 6 | VIEW | VW_FOJ_0 | 2673K| 1392M| | 26819 (1)| 00:00:02 | Q1,05 | PCWP | |
|* 7 | HASH JOIN FULL OUTER BUFFERED | | 2673K| 1392M| | 26819 (1)| 00:00:02 | Q1,05 | PCWP | |
| 8 | PX RECEIVE | | 136 | 37128 | | 144 (7)| 00:00:01 | Q1,05 | PCWP | |
| 9 | PX SEND HASH | :TQ10003 | 136 | 37128 | | 144 (7)| 00:00:01 | Q1,03 | S->P | HASH |
| 10 | PX SELECTOR | | | | | | | Q1,03 | SCWC | |
| 11 | VIEW | | 136 | 37128 | | 144 (7)| 00:00:01 | Q1,03 | SCWC | |
| 12 | HASH GROUP BY | | 136 | 17408 | | 144 (7)| 00:00:01 | Q1,03 | SCWC | |
|* 13 | HASH JOIN | | 136 | 17408 | | 143 (6)| 00:00:01 | Q1,03 | SCWC | |
| 14 | JOIN FILTER CREATE | :BF0000 | 136 | 11288 | | 142 (6)| 00:00:01 | Q1,03 | PCWP | |
|* 15 | HASH JOIN | | 136 | 11288 | | 142 (6)| 00:00:01 | Q1,03 | SCWC | |
| 16 | JOIN FILTER CREATE | :BF0001 | 136 | 4896 | | 142 (6)| 00:00:01 | Q1,03 | PCWP | |
|* 17 | HASH JOIN | | 136 | 4896 | | 142 (6)| 00:00:01 | Q1,03 | SCWC | |
|* 18 | TABLE ACCESS INMEMORY FULL | OBIEE_TIME_MTH_D | 161 | 1610 | | 1 (0)| 00:00:01 | Q1,03 | SCWP | |
|* 19 | TABLE ACCESS INMEMORY FULL | DOMESTIC_SEGMENT | 771 | 20046 | | 141 (6)| 00:00:01 | Q1,03 | SCWP | |
| 20 | JOIN FILTER USE | :BF0001 | 1787 | 83989 | | 1 (0)| 00:00:01 | Q1,03 | PCWP | |
|* 21 | TABLE ACCESS INMEMORY FULL | OBIEE_GEO_AIRPORT_BRIDGE | 1787 | 83989 | | 1 (0)| 00:00:01 | Q1,03 | SCWP | |
| 22 | JOIN FILTER USE | :BF0000 | 3221 | 141K| | 1 (0)| 00:00:01 | Q1,03 | PCWP | |
|* 23 | TABLE ACCESS INMEMORY FULL | OBIEE_COUNTY_HIER | 3221 | 141K| | 1 (0)| 00:00:01 | Q1,03 | SCWP | |
| 24 | PX RECEIVE | | 2255K| 587M| | 26673 (1)| 00:00:02 | Q1,05 | PCWP | |
| 25 | PX SEND HASH | :TQ10004 | 2255K| 587M| | 26673 (1)| 00:00:02 | Q1,04 | P->P | HASH |
| 26 | VIEW | | 2255K| 587M| | 26673 (1)| 00:00:02 | Q1,04 | PCWP | |
| 27 | HASH GROUP BY | | 2255K| 264M| 489M| 26673 (1)| 00:00:02 | Q1,04 | PCWP | |
| 28 | PX RECEIVE | | 2255K| 264M| | 26673 (1)| 00:00:02 | Q1,04 | PCWP | |
| 29 | PX SEND HASH | :TQ10002 | 2255K| 264M| | 26673 (1)| 00:00:02 | Q1,02 | P->P | HASH |
| 30 | HASH GROUP BY | | 2255K| 264M| 489M| 26673 (1)| 00:00:02 | Q1,02 | PCWP | |
|* 31 | HASH JOIN | | 3761K| 441M| | 841 (9)| 00:00:01 | Q1,02 | PCWP | |
| 32 | PX RECEIVE | | 161 | 1610 | | 1 (0)| 00:00:01 | Q1,02 | PCWP | |
| 33 | PX SEND BROADCAST | :TQ10000 | 161 | 1610 | | 1 (0)| 00:00:01 | Q1,00 | S->P | BROADCAST |
| 34 | PX SELECTOR | | | | | | | Q1,00 | SCWC | |
|* 35 | TABLE ACCESS INMEMORY FULL | OBIEE_TIME_MTH_D | 161 | 1610 | | 1 (0)| 00:00:01 | Q1,00 | SCWP | |
|* 36 | HASH JOIN | | 3773K| 406M| | 838 (9)| 00:00:01 | Q1,02 | PCWP | |
| 37 | PX RECEIVE | | 1787 | 160K| | 2 (0)| 00:00:01 | Q1,02 | PCWP | |
| 38 | PX SEND BROADCAST | :TQ10001 | 1787 | 160K| | 2 (0)| 00:00:01 | Q1,01 | S->P | BROADCAST |
| 39 | PX SELECTOR | | | | | | | Q1,01 | SCWC | |
|* 40 | HASH JOIN | | 1787 | 160K| | 2 (0)| 00:00:01 | Q1,01 | SCWC | |
| 41 | TABLE ACCESS INMEMORY FULL| OBIEE_GEO_AIRPORT_BRIDGE | 1787 | 83989 | | 1 (0)| 00:00:01 | Q1,01 | SCWP | |
| 42 | TABLE ACCESS INMEMORY FULL| OBIEE_COUNTY_HIER | 3221 | 141K| | 1 (0)| 00:00:01 | Q1,01 | SCWP | |
| 43 | PX BLOCK ITERATOR | | 3773K| 75M| | 834 (9)| 00:00:01 | Q1,02 | PCWC | |
|* 44 | TABLE ACCESS INMEMORY FULL | PERFORMANCE | 3773K| 75M| | 834 (9)| 00:00:01 | Q1,02 | PCWP | |
----------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

 7 - access("D1"."C3"="D2"."C4" AND SYS_OP_MAP_NONNULL("D1"."C4")=SYS_OP_MAP_NONNULL("D2"."C3"))
13 - access("T233484"."STCTY_FIPS"="T233820"."SC_CODE")
15 - access("T233484"."AIRPORT"="T233609"."ORIGIN")
17 - access("T233732"."CAL_MONTH"=TO_NUMBER("T233609"."MONTH") AND "T233732"."CAL_QTR"=TO_NUMBER("T233609"."QUARTER") AND "T233732"."CAL_YEAR"=TO_NUMBER("T233609"."YEAR"))
18 - inmemory("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
     filter("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
19 - inmemory(TO_NUMBER("T233609"."MONTH")>=6 AND TO_NUMBER("T233609"."MONTH")<=12)
     filter(TO_NUMBER("T233609"."MONTH")>=6 AND TO_NUMBER("T233609"."MONTH")<=12)
21 - inmemory(SYS_OP_BLOOM_FILTER(:BF0001,"T233484"."AIRPORT"))
     filter(SYS_OP_BLOOM_FILTER(:BF0001,"T233484"."AIRPORT"))
23 - inmemory(SYS_OP_BLOOM_FILTER(:BF0000,"T233820"."SC_CODE"))
     filter(SYS_OP_BLOOM_FILTER(:BF0000,"T233820"."SC_CODE"))
31 - access("T233732"."CAL_MONTH"="T233937"."MONTH" AND "T233732"."CAL_QTR"="T233937"."QUARTER" AND "T233732"."CAL_YEAR"="T233937"."YEAR")
35 - inmemory("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
     filter("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
36 - access("T233484"."AIRPORT"="T233937"."ORIGIN")
40 - access("T233484"."STCTY_FIPS"="T233820"."SC_CODE")
44 - inmemory("T233937"."MONTH">=6 AND "T233937"."MONTH"<=12)
     filter("T233937"."MONTH">=6 AND "T233937"."MONTH"<=12)

Note
-----
- dynamic statistics used: dynamic sampling (level=AUTO)
- Degree of Parallelism is 2 because of table property

80 rows selected.
Indeed, looking at the queries the BI Server is sending to the database, it’s not too surprising that the in-memory difference in this case wasn’t too dramatic. According to the docs (emphasis mine):
“Storing a database object in the IM column store can improve performance significantly for the following types of operations performed on the database object:
- A query that scans a large number of rows and applies filters that use operators such as the following: =, <, >, and IN
- A query that selects a small number of columns from a table or materialized view with a large number of columns, such as a query that selects five columns from a table with 100 columns
- A query that joins a small table to a large table
- A query that aggregates data
The IM column store does not improve performance for the following types of operations:
- Queries with complex predicates
- Queries that select a large number of columns
- Queries that return a large number of rows
- Queries with multiple large table joins”
and our query certainly has complex predicates, returns a fair few rows, and has lots of large joins. Taking a simpler query of the sort you’d likely write yourself when querying a data warehouse, you can see the in-memory table access being used again, but with a much simpler, cheaper explain plan:
SQL> @inmem_explain.sql
SQL> set echo on
SQL> explain plan for
  2  select /*+ INMEMORY */ sum(T233937.ACTUALELAPSEDTIME) as c1,
  3         sum(T233937.WEATHERDELAY) as c2,
  4         sum(T233937.SECURITYDELAY) as c3,
  5         sum(T233937.NASDELAY) as c4,
  6         sum(T233937.LATEAIRCRAFTDELAY) as c5,
  7         sum(T233937.ARRDELAYMINUTES) as c6,
  8         sum(T233937.CARRIERDELAY) as c7,
  9         sum(nvl(case when T233937.CANCELLED < 1 then T233937.FLIGHTS end , 0)) as c8,
 10         T233820.D_NAME as c9
 11  from
 12         BI_AIRLINES.OBIEE_GEO_AIRPORT_BRIDGE T233484 /* 10 GEO_AIRPORT_ORIGIN */ ,
 13         BI_AIRLINES.OBIEE_COUNTY_HIER T233820 /* 11 COUNTY_HIER_ORIGIN */ ,
 14         BI_AIRLINES.OBIEE_GEO_AIRPORT_BRIDGE T233497 /* 12 GEO_AIPORT_DEST */ ,
 15         BI_AIRLINES.OBIEE_COUNTY_HIER T233831 /* 13 COUNTY_HIER_DEST */ ,
 16         BI_AIRLINES.OBIEE_TIME_MTH_D T233732 /* 41 TIME MONTH */ ,
 17         BI_AIRLINES.DISTANCE_GROUP_250 T233594 /* 19 DISTANCE_GROUP_250 */ ,
 18         BI_AIRLINES.PERFORMANCE T233937 /* 00 PERFORMANCE FULL */
 19  where ( T233484.AIRPORT = T233937.ORIGIN and T233484.STCTY_FIPS = T233820.SC_CODE and T233497.AIRPORT = T233937.DEST and T233497.STCTY_FIPS = T233831.SC_CODE and T233594.DESCRIPTION = '1000-1249 Miles' and T233594.CODE = T233937.DISTANCEGROUP and T233732.Cal_Month = T233937.MONTH and T233732.Cal_Qtr = T233937.QUARTER and T233732.Cal_Year = T233937.YEAR and T233831.R_NAME = 'Northeast Region' and T233732.Cal_Month between 6 and 12 and T233937.MONTH between 6 and 12 )
 20  group by T233820.D_NAME
 21  order by c9
 22  /

Explained.

SQL> set lines 300
SQL> set pages 0
SQL> select * from table(dbms_xplan.display);

Plan hash value: 3055743864

-----------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 9 | 1314 | 883 (13)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (ORDER) | :TQ10006 | 9 | 1314 | 883 (13)| 00:00:01 | Q1,06 | P->S | QC (ORDER) |
| 3 | SORT GROUP BY | | 9 | 1314 | 883 (13)| 00:00:01 | Q1,06 | PCWP | |
| 4 | PX RECEIVE | | 9 | 1314 | 883 (13)| 00:00:01 | Q1,06 | PCWP | |
| 5 | PX SEND RANGE | :TQ10005 | 9 | 1314 | 883 (13)| 00:00:01 | Q1,05 | P->P | RANGE |
| 6 | HASH GROUP BY | | 9 | 1314 | 883 (13)| 00:00:01 | Q1,05 | PCWP | |
|* 7 | HASH JOIN | | 60775 | 8665K| 882 (13)| 00:00:01 | Q1,05 | PCWP | |
| 8 | PX RECEIVE | | 3221 | 99851 | 1 (0)| 00:00:01 | Q1,05 | PCWP | |
| 9 | PX SEND BROADCAST | :TQ10000 | 3221 | 99851 | 1 (0)| 00:00:01 | Q1,00 | S->P | BROADCAST |
| 10 | PX SELECTOR | | | | | | Q1,00 | SCWC | |
| 11 | TABLE ACCESS INMEMORY FULL | OBIEE_COUNTY_HIER | 3221 | 99851 | 1 (0)| 00:00:01 | Q1,00 | SCWP | |
|* 12 | HASH JOIN | | 60775 | 6825K| 881 (13)| 00:00:01 | Q1,05 | PCWP | |
| 13 | PX RECEIVE | | 1787 | 17870 | 1 (0)| 00:00:01 | Q1,05 | PCWP | |
| 14 | PX SEND BROADCAST | :TQ10001 | 1787 | 17870 | 1 (0)| 00:00:01 | Q1,01 | S->P | BROADCAST |
| 15 | PX SELECTOR | | | | | | Q1,01 | SCWC | |
| 16 | TABLE ACCESS INMEMORY FULL | OBIEE_GEO_AIRPORT_BRIDGE | 1787 | 17870 | 1 (0)| 00:00:01 | Q1,01 | SCWP | |
|* 17 | HASH JOIN | | 60775 | 6231K| 880 (13)| 00:00:01 | Q1,05 | PCWP | |
| 18 | PX RECEIVE | | 161 | 1610 | 1 (0)| 00:00:01 | Q1,05 | PCWP | |
| 19 | PX SEND BROADCAST | :TQ10002 | 161 | 1610 | 1 (0)| 00:00:01 | Q1,02 | S->P | BROADCAST |
| 20 | PX SELECTOR | | | | | | Q1,02 | SCWC | |
|* 21 | TABLE ACCESS INMEMORY FULL | OBIEE_TIME_MTH_D | 161 | 1610 | 1 (0)| 00:00:01 | Q1,02 | SCWP | |
|* 22 | HASH JOIN | | 60964 | 5655K| 879 (13)| 00:00:01 | Q1,05 | PCWP | |
| 23 | JOIN FILTER CREATE | :BF0000 | 1 | 19 | 1 (0)| 00:00:01 | Q1,05 | PCWP | |
| 24 | PX RECEIVE | | 1 | 19 | 1 (0)| 00:00:01 | Q1,05 | PCWP | |
| 25 | PX SEND BROADCAST | :TQ10003 | 1 | 19 | 1 (0)| 00:00:01 | Q1,03 | S->P | BROADCAST |
| 26 | PX SELECTOR | | | | | | Q1,03 | SCWC | |
|* 27 | TABLE ACCESS INMEMORY FULL | DISTANCE_GROUP_250 | 1 | 19 | 1 (0)| 00:00:01 | Q1,03 | SCWP | |
|* 28 | HASH JOIN | | 670K| 48M| 878 (13)| 00:00:01 | Q1,05 | PCWP | |
| 29 | JOIN FILTER CREATE | :BF0001 | 318 | 9540 | 2 (0)| 00:00:01 | Q1,05 | PCWP | |
| 30 | PX RECEIVE | | 318 | 9540 | 2 (0)| 00:00:01 | Q1,05 | PCWP | |
| 31 | PX SEND BROADCAST | :TQ10004 | 318 | 9540 | 2 (0)| 00:00:01 | Q1,04 | S->P | BROADCAST |
| 32 | PX SELECTOR | | | | | | Q1,04 | SCWC | |
|* 33 | HASH JOIN | | 318 | 9540 | 2 (0)| 00:00:01 | Q1,04 | SCWC | |
| 34 | JOIN FILTER CREATE | :BF0002 | 217 | 4340 | 1 (0)| 00:00:01 | Q1,04 | PCWP | |
|* 35 | TABLE ACCESS INMEMORY FULL| OBIEE_COUNTY_HIER | 217 | 4340 | 1 (0)| 00:00:01 | Q1,04 | SCWP | |
| 36 | JOIN FILTER USE | :BF0002 | 1787 | 17870 | 1 (0)| 00:00:01 | Q1,04 | PCWP | |
|* 37 | TABLE ACCESS INMEMORY FULL| OBIEE_GEO_AIRPORT_BRIDGE | 1787 | 17870 | 1 (0)| 00:00:01 | Q1,04 | SCWP | |
| 38 | JOIN FILTER USE | :BF0000 | 3773K| 165M| 874 (13)| 00:00:01 | Q1,05 | PCWP | |
| 39 | JOIN FILTER USE | :BF0001 | 3773K| 165M| 874 (13)| 00:00:01 | Q1,05 | PCWP | |
| 40 | PX BLOCK ITERATOR | | 3773K| 165M| 874 (13)| 00:00:01 | Q1,05 | PCWC | |
|* 41 | TABLE ACCESS INMEMORY FULL | PERFORMANCE | 3773K| 165M| 874 (13)| 00:00:01 | Q1,05 | PCWP | |
-----------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

 7 - access("T233484"."STCTY_FIPS"="T233820"."SC_CODE")
12 - access("T233484"."AIRPORT"="T233937"."ORIGIN")
17 - access("T233732"."CAL_MONTH"="T233937"."MONTH" AND "T233732"."CAL_QTR"="T233937"."QUARTER" AND "T233732"."CAL_YEAR"="T233937"."YEAR")
21 - inmemory("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
     filter("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
22 - access("T233594"."CODE"="T233937"."DISTANCEGROUP")
27 - inmemory("T233594"."DESCRIPTION"='1000-1249 Miles')
     filter("T233594"."DESCRIPTION"='1000-1249 Miles')
28 - access("T233497"."AIRPORT"="T233937"."DEST")
33 - access("T233497"."STCTY_FIPS"="T233831"."SC_CODE")
35 - inmemory("T233831"."R_NAME"='Northeast Region')
     filter("T233831"."R_NAME"='Northeast Region')
37 - inmemory(SYS_OP_BLOOM_FILTER(:BF0002,"T233497"."STCTY_FIPS"))
     filter(SYS_OP_BLOOM_FILTER(:BF0002,"T233497"."STCTY_FIPS"))
41 - inmemory("T233937"."MONTH">=6 AND "T233937"."MONTH"<=12 AND SYS_OP_BLOOM_FILTER_LIST(SYS_OP_BLOOM_FILTER(:BF0001,"T233937"."DEST"),SYS_OP_BLOOM_FILTER(:BF0000,"T233937"."DISTANCEGROUP")))
     filter("T233937"."MONTH">=6 AND "T233937"."MONTH"<=12 AND SYS_OP_BLOOM_FILTER_LIST(SYS_OP_BLOOM_FILTER(:BF0001,"T233937"."DEST"),SYS_OP_BLOOM_FILTER(:BF0000,"T233937"."DISTANCEGROUP")))

Note
-----
- dynamic statistics used: dynamic sampling (level=AUTO)
- Degree of Parallelism is 2 because of table property

75 rows selected.
If I then turn off the in-memory feature and regenerate the execution plan, you can see that without in-memory the plan is around 5x as expensive:
SQL> alter system set INMEMORY_SIZE = 0 scope = spfile;

System altered.

SQL> shutdown immediate;
ORA-01097: cannot shutdown while in a transaction - commit or rollback first
SQL> rollback;

Rollback complete.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 7398752256 bytes
Fixed Size                  2941528 bytes
Variable Size            1056968104 bytes
Database Buffers         6325010432 bytes
Redo Buffers               13832192 bytes
Database mounted.
Database opened.
SQL> @noinmem_explain.sql

Explained.

SQL> set lines 300
SQL> set pages 0
SQL> select * from table(dbms_xplan.display);

Plan hash value: 2990499928

--------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 9 | 1341 | 4086 (1)| 00:00:01 | | | |
| 1 | TEMP TABLE TRANSFORMATION | | | | | | | | |
| 2 | LOAD AS SELECT | SYS_TEMP_0FD9D6605_275335 | | | | | | | |
|* 3 | HASH JOIN | | 318 | 9540 | 22 (0)| 00:00:01 | | | |
|* 4 | TABLE ACCESS FULL | OBIEE_COUNTY_HIER | 217 | 4340 | 13 (0)| 00:00:01 | | | |
| 5 | TABLE ACCESS FULL | OBIEE_GEO_AIRPORT_BRIDGE | 1787 | 17870 | 9 (0)| 00:00:01 | | | |
| 6 | PX COORDINATOR | | | | | | | | |
| 7 | PX SEND QC (ORDER) | :TQ10008 | 9 | 1341 | 4086 (1)| 00:00:01 | Q1,08 | P->S | QC (ORDER) |
| 8 | SORT GROUP BY | | 9 | 1341 | 4086 (1)| 00:00:01 | Q1,08 | PCWP | |
| 9 | PX RECEIVE | | 9 | 1341 | 4086 (1)| 00:00:01 | Q1,08 | PCWP | |
| 10 | PX SEND RANGE | :TQ10007 | 9 | 1341 | 4086 (1)| 00:00:01 | Q1,07 | P->P | RANGE |
| 11 | HASH GROUP BY | | 9 | 1341 | 4086 (1)| 00:00:01 | Q1,07 | PCWP | |
|* 12 | HASH JOIN | | 281 | 41869 | 4085 (1)| 00:00:01 | Q1,07 | PCWP | |
| 13 | PX RECEIVE | | 281 | 33158 | 4072 (1)| 00:00:01 | Q1,07 | PCWP | |
| 14 | PX SEND HYBRID HASH | :TQ10005 | 281 | 33158 | 4072 (1)| 00:00:01 | Q1,05 | P->P | HYBRID HASH|
| 15 | STATISTICS COLLECTOR | | | | | | Q1,05 | PCWC | |
|* 16 | HASH JOIN BUFFERED | | 281 | 33158 | 4072 (1)| 00:00:01 | Q1,05 | PCWP | |
| 17 | VIEW | VW_GBC_29 | 281 | 30348 | 4063 (1)| 00:00:01 | Q1,05 | PCWP | |
| 18 | HASH GROUP BY | | 281 | 79523 | 4041 (1)| 00:00:01 | Q1,05 | PCWP | |
| 19 | PX RECEIVE | | 281 | 79523 | 4041 (1)| 00:00:01 | Q1,05 | PCWP | |
| 20 | PX SEND HASH | :TQ10003 | 281 | 79523 | 4041 (1)| 00:00:01 | Q1,03 | P->P | HASH |
| 21 | HASH GROUP BY | | 281 | 79523 | 4041 (1)| 00:00:01 | Q1,03 | PCWP | |
|* 22 | HASH JOIN | | 60853 | 16M| 4039 (1)| 00:00:01 | Q1,03 | PCWP | |
| 23 | BUFFER SORT | | | | | | Q1,03 | PCWC | |
| 24 | PX RECEIVE | | 318 | 1272 | 2 (0)| 00:00:01 | Q1,03 | PCWP | |
| 25 | PX SEND BROADCAST | :TQ10000 | 318 | 1272 | 2 (0)| 00:00:01 | | S->P | BROADCAST |
| 26 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6605_275335 | 318 | 1272 | 2 (0)| 00:00:01 | | | |
|* 27 | HASH JOIN | | 60853 | 16M| 4037 (1)| 00:00:01 | Q1,03 | PCWP | |
| 28 | PX RECEIVE | | 160 | 4640 | 4 (0)| 00:00:01 | Q1,03 | PCWP | |
| 29 | PX SEND BROADCAST | :TQ10002 | 160 | 4640 | 4 (0)| 00:00:01 | Q1,02 | S->P | BROADCAST |
| 30 | PX SELECTOR | | | | | | Q1,02 | SCWC | |
| 31 | MERGE JOIN CARTESIAN | | 160 | 4640 | 4 (0)| 00:00:01 | Q1,02 | SCWC | |
|* 32 | VIEW | index$_join$_006 | 1 | 19 | 2 (0)| 00:00:01 | Q1,02 | SCWC | |
|* 33 | HASH JOIN | | | | | | Q1,02 | SCWC | |
| 34 | BITMAP CONVERSION TO ROWIDS | | 1 | 19 | 1 (0)| 00:00:01 | Q1,02 | SCWC | |
|* 35 | BITMAP INDEX SINGLE VALUE | M_INDEX32 | | | | | Q1,02 | SCWP | |
| 36 | BITMAP CONVERSION TO ROWIDS | | 1 | 19 | 1 (0)| 00:00:01 | Q1,02 | SCWC | |
| 37 | BITMAP INDEX FULL SCAN | INDEX4 | | | | | Q1,02 | SCWP | |
| 38 | BUFFER SORT | | 161 | 1610 | 2 (0)| 00:00:01 | Q1,02 | SCWC | |
| 39 | BITMAP CONVERSION TO ROWIDS | | 161 | 1610 | 2 (0)| 00:00:01 | Q1,02 | SCWC | |
|* 40 | BITMAP INDEX FAST FULL SCAN | M_INDEX28 | | | | | Q1,02 | SCWP | |
|* 41 | VIEW | VW_ST_167D3604 | 61043 | 14M| 4033 (1)| 00:00:01 | Q1,03 | PCWP | |
| 42 | NESTED LOOPS | | 61043 | 4768K| 4029 (1)| 00:00:01 | Q1,03 | PCWP | |
| 43 | BUFFER SORT | | | | | | Q1,03 | PCWC | |
| 44 | PX RECEIVE | | | | | | Q1,03 | PCWP | |
| 45 | PX SEND HASH (BLOCK ADDRESS) | :TQ10001 | | | | | | S->P | HASH (BLOCK|
| 46 | BITMAP CONVERSION TO ROWIDS | | 61042 | 1311K| 365 (1)| 00:00:01 | | | |
| 47 | BITMAP AND | | | | | | | | |
| 48 | BITMAP MERGE | | | | | | | | |
| 49 | BITMAP KEY ITERATION | | | | | | | | |
|* 50 | VIEW | index$_join$_255 | 1 | 19 | 2 (0)| 00:00:01 | | | |
|* 51 | HASH JOIN | | | | | | | | |
| 52 | BITMAP CONVERSION TO ROWIDS| | 1 | 19 | 1 (0)| 00:00:01 | | | |
|* 53 | BITMAP INDEX SINGLE VALUE | M_INDEX32 | | | | | | | |
| 54 | BITMAP CONVERSION TO ROWIDS| | 1 | 19 | 1 (0)| 00:00:01 | | | |
| 55 | BITMAP INDEX FULL SCAN | INDEX4 | | | | | | | |
|* 56 | BITMAP INDEX RANGE SCAN | PERF_DISTANCEGRP | | | | | | | |
| 57 | BITMAP MERGE | | | | | | | | |
| 58 | BITMAP KEY ITERATION | | | | | | | | |
| 59 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6605_275335 | 318 | 1272 | 2 (0)| 00:00:01 | | | |
|* 60 | BITMAP INDEX RANGE SCAN | PERF_DEST | | | | | | | |
| 61 | BITMAP MERGE | | | | | | | | |
|* 62 | BITMAP INDEX RANGE SCAN | PERF_MONTH | | | | | | | |
| 63 | TABLE ACCESS BY USER ROWID | PERFORMANCE | 1 | 58 | 3669 (1)| 00:00:01 | Q1,03 | PCWP | |
| 64 | PX RECEIVE | | 1787 | 17870 | 9 (0)| 00:00:01 | Q1,05 | PCWP | |
| 65 | PX SEND BROADCAST | :TQ10004 | 1787 | 17870 | 9 (0)| 00:00:01 | Q1,04 | S->P | BROADCAST |
| 66 | PX SELECTOR | | | | | | Q1,04 | SCWC | |
| 67 | TABLE ACCESS FULL | OBIEE_GEO_AIRPORT_BRIDGE | 1787 | 17870 | 9 (0)| 00:00:01 | Q1,04 | SCWP | |
| 68 | PX RECEIVE | | 3221 | 99851 | 13 (0)| 00:00:01 | Q1,07 | PCWP | |
| 69 | PX SEND HYBRID HASH | :TQ10006 | 3221 | 99851 | 13 (0)| 00:00:01 | Q1,06 | S->P | HYBRID HASH|
| 70 | PX SELECTOR | | | | | | Q1,06 | SCWC | |
| 71 | TABLE ACCESS FULL | OBIEE_COUNTY_HIER | 3221 | 99851 | 13 (0)| 00:00:01 | Q1,06 | SCWP | |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

 3 - access("T233497"."STCTY_FIPS"="T233831"."SC_CODE")
 4 - filter("T233831"."R_NAME"='Northeast Region')
12 - access("T233484"."STCTY_FIPS"="T233820"."SC_CODE")
16 - access("T233484"."AIRPORT"="ITEM_1")
22 - access("C0"="ITEM_1")
27 - access("T233732"."CAL_YEAR"="ITEM_5" AND "T233732"."CAL_QTR"="ITEM_4" AND "T233732"."CAL_MONTH"="ITEM_3" AND "T233594"."CODE"="ITEM_2")
32 - filter("T233594"."DESCRIPTION"='1000-1249 Miles')
33 - access(ROWID=ROWID)
35 - access("T233594"."DESCRIPTION"='1000-1249 Miles')
40 - filter("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
41 - filter("ITEM_3"<=12 AND "ITEM_3">=6)
50 - filter("T233594"."DESCRIPTION"='1000-1249 Miles')
51 - access(ROWID=ROWID)
53 - access("T233594"."DESCRIPTION"='1000-1249 Miles')
56 - access("T233937"."DISTANCEGROUP"="T233594"."CODE")
60 - access("T233937"."DEST"="C0")
62 - access("T233937"."MONTH">=6 AND "T233937"."MONTH"<=12)

Note
-----
- dynamic statistics used: dynamic sampling (level=AUTO)
- Degree of Parallelism is 2 because of table property
- star transformation used for this statement

105 rows selected.
Running the actual queries in this case gave me a wall-time of around 4 seconds for the in-memory version, and 14 seconds when in-memory query was disabled … but the response time isn’t anywhere near the initial run where we had data stored row-wise but with aggregate tables, so let’s finish off the testing by putting the aggregate tables in-memory too, and see if that makes a difference.
And of course, it absolutely flew:
So to conclude from my look at the Oracle Database 12c In-Memory option with OBIEE11g v406 SampleApp, I’d say the following based on my initial tests:
- For BI-type reporting where you’re typically summarising lots of data, the in-memory option doesn’t remove the need for aggregate tables – you’ll still benefit significantly from having them, in my observation
- Where the in-memory option does benefit you is when you’re querying the detail-level data – it helps with aggregation, but its main strength is fast filtering against subsets of columns
- Some of the more complex SQL queries issued by OBIEE’s BI Server, for example when creating lots of subtotals and totals against a dataset, reduce the effectiveness of the in-memory option – you’ll get the biggest speed improvement, at least at the moment, from queries with simpler predicates and less complex joins
There might be more to the aggregation story in the end, though. Looking at the Oracle Database 12c In-Memory Option Technical White Paper, the in-memory option should in fact help with aggregation through a new optimiser transformation called “vector group by”, a transformation that’s likened to a star transformation and that uses CPU-efficient algorithms and a multi-dimensional array created on-the-fly in the PGA called an “in-memory accumulator”.
In fact, what we’ve heard is that many of the old Oracle OLAP team moved over to the in-memory option team and were responsible for this feature, so I’ll be taking a closer look at in-memory aggregation over the next few months. In my tests though, I didn’t see any vector group by operations in the query execution plans, so I’m assuming either conditions weren’t right for it, or, like star transformations, there’s some combination of settings and query factors that need to be in place before it’ll appear in the execution plan (and queries presumably run that much faster).
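When I do, one thing worth trying is forcing the transformation with the VECTOR_TRANSFORM hint and then checking the plan for VECTOR GROUP BY and KEY VECTOR operations; a hedged sketch against the airlines schema used earlier (column names taken from the physical SQL above, results untested here):

EXPLAIN PLAN FOR
SELECT /*+ VECTOR_TRANSFORM */
       t.Cal_Year,
       SUM(p.ARRDELAYMINUTES) AS total_arr_delay
FROM   BI_AIRLINES.PERFORMANCE p,
       BI_AIRLINES.OBIEE_TIME_MTH_D t
WHERE  t.Cal_Month = p.MONTH
AND    t.Cal_Qtr   = p.QUARTER
AND    t.Cal_Year  = p.YEAR
GROUP BY t.Cal_Year;

SELECT * FROM table(dbms_xplan.display);
-- if the transformation kicks in, the plan should show VECTOR GROUP BY / KEY VECTOR USE steps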
For now though – that’s my first run-through of the 12c In-Memory Option in the context of a typical BI workload. Check back later in the year for more in-memory option postings, including hopefully something more on the in-memory aggregation feature.
Another OTN Tour Latin America has come and gone. This is the most important technical event in the region, visiting 12 countries and attracting more than 2,500 attendees over two weeks.
This year Rittman Mead was part of the OTN Tour in Buenos Aires (Argentina) and Montevideo (Uruguay) presenting about ODI and OGG.
We started in Buenos Aires on August 11 with the first day of the OTN Tour in Argentina. I talked about the integration of ODI and OGG 12c, explaining all the technical details of how to configure and implement it. Most of the attendees hadn’t worked with these tools (but were curious about them), so I personalised the presentation a little, giving them an introduction to ODI and OGG first.
As the vice-president of the UYOUG (Uruguayan Oracle User Group) I’m part of the organisation of the OTN Tour in my country, so we needed to come back that same Monday to sort out some last details and have everything ready for the event in Uruguay.
Most of the speakers came on Wednesday, and we spent a great day with Michelle Malcher, Kamran Agayev, Hans Forbrich and Mike Dietrich. First, we went to lunch at the Mercado del Puerto, an emblematic place with lots of “parrillas” (a kind of barbecue), and then we gave them a little city tour which included a visit to El Cerro de Montevideo. Finally we visited one of the most important wineries in Uruguay, Bodega Bouza, where we had a wine tour followed by an amazing tasting of a variety of wines, including Tannat, which is our insignia grape. You know… it is important to be relaxed before a conference :-)
The first day of the event in Uruguay was dedicated exclusively to technical sessions, and on the second we had the hands-on labs. The conference covered a wide range of topics, from BI Mobile and e-Business Suite to how to upgrade to Oracle Database 12c, Oracle Virtualization and Oracle RAC. All the sessions were packed with attendees.
The next day, we had labs with PCs with the software already installed, though attendees could also come with their own laptops and install everything needed for the hands-on. We had the famous RAC Attack!, led by Kamran with the help of the ninjas Michelle, Hans and Nelson Calero, and an Oracle Virtualization lab by Hernan Petitti that ran for 7 hours!
This is only the beginning for Rittman Mead in Latin America. There are a lot of things to come, so stay tuned!
If you’ve been following our postings on the blog over the past year, you’ll probably have seen quite a lot of activity around big data and Hadoop and in particular, what these technologies bring to the world of Oracle Business Intelligence, Oracle Data Warehousing and Oracle Data Integration. For anyone who’s not had a chance to read the posts and articles, the links below are a great introduction to what we’ve been up to:
- A list of some of our “getting started with Hadoop” articles
- Rittman Mead Announce New Partnerships with Cloudera and Amazon Web Services
- Rittman Mead and Oracle Big Data Appliance
- Presentation at the recent Oracle Virtual Technology Summit on ODI12c for Hadoop / Big Data Integration
In addition, we recently took part in an OTN ArchBeat podcast with Stewart Bryson and Andrew Bond on the updated Oracle Information Management Reference Architecture we co-developed with Oracle’s Enterprise Architecture team, where you can hear me talk with Stewart and Andrew about how the updated architecture came about, the thinking behind it, and how concepts like the data reservoir and data factory can be delivered in an agile way.
- Podcast Show Notes: Redefining Information Management Architecture (including links to the three parts in the podcast)
- Introducing the Updated Oracle / Rittman Mead Information Management Reference Architecture : Part 1: Information Architecture and the Data Factory
- Introducing the Updated Oracle / Rittman Mead Information Management Reference Architecture : Part 2: Delivering the Data Factory
I’m also pleased to be delivering a number of presentations and seminars over the next few months, on Oracle and Cloudera’s Hadoop technology and how it applies to Oracle BI, DW and DI developers – if you’re part of a local Oracle user group and you’d like me to deliver one of them for your group, drop me an email at email@example.com.

Slovenian Oracle User Group / Croatian Oracle User Group Conferences, October 2014
These two events run over consecutive days in Slovenia and Croatia, and I’m delivering the keynote at each on Analytics and Big Data, and a one-day seminar running on the Tuesday in Slovenia, and over the Wednesday and Thursday in Croatia. The theme of the seminar is around applying Hadoop and big data technologies to Oracle BI, DW and data integration, and is made up of four sessions:
Part 1 : Introduction to Hadoop and Big Data Technologies for Oracle BI & DW Developers
“In this session we’ll introduce some key Hadoop concepts including HDFS, MapReduce, Hive and NoSQL/HBase, with the focus on Oracle Big Data Appliance and Cloudera Distribution including Hadoop. We’ll explain how data is stored on a Hadoop system and the high-level ways it is accessed and analysed, and outline Oracle’s products in this area including the Big Data Connectors, Oracle Big Data SQL, and Oracle Business Intelligence (OBI) and Oracle Data Integrator (ODI).”
Part 2 : Hadoop and NoSQL Data Ingestion using Oracle Data Integrator 12c and Hadoop Technologies
“There are many ways to ingest (load) data into a Hadoop cluster, from file copying using the Hadoop Filesystem (FS) shell through to real-time streaming using technologies such as Flume and Hadoop streaming. In this session we’ll take a high-level look at the data ingestion options for Hadoop, and then show how Oracle Data Integrator and Oracle GoldenGate leverage these technologies to load and process data within your Hadoop cluster. We’ll also consider the updated Oracle Information Management Reference Architecture and look at the best places to land and process your enterprise data, using Hadoop’s schema-on-read approach to hold low-value, low-density raw data, and then use the concept of a “data factory” to load and process your data into more traditional Oracle relational storage, where we hold high-density, high-value data.”
Part 3 : Big Data Analysis using Hive, Pig, Spark and Oracle R Enterprise / Oracle R Advanced Analytics for Hadoop
“Data within a Hadoop cluster is typically analysed and processed using technologies such as Pig, Hive and Spark before being made available for wider use using products like Oracle Big Data SQL and Oracle Business Intelligence. In this session, we’ll introduce Pig and Hive as key analysis tools for working with Hadoop data using MapReduce, and then move on to Spark as the next-generation analysis platform typically being used on Hadoop clusters today. We’ll also look at the role of Oracle’s R technologies in this scenario, using Oracle R Enterprise and Oracle R Advanced Analytics for Hadoop to analyse and understand larger datasets than we could normally accommodate with desktop analysis environments.”
Part 4 : Visualizing Hadoop Datasets using Oracle Business Intelligence, Oracle BI Publisher and Oracle Endeca Information Discovery
“Once insights and analysis have been produced within your Hadoop cluster by analysts and technical staff, it’s usually the case that you want to share the output with a wider audience in the organisation. Oracle Business Intelligence has connectivity to Hadoop through Apache Hive compatibility, and other Oracle tools such as Oracle BI Publisher and Oracle Endeca Information Discovery can be used to visualise and publish Hadoop data. In this final session we’ll look at what’s involved in connecting these tools to your Hadoop environment, and also consider where data is optimally located when large amounts of Hadoop data need to be analysed alongside more traditional data warehouse datasets.”
Oracle Openworld 2014 (ODTUG Sunday Symposium), September 2014
Along with another session later in the week on the upcoming Oracle BI Cloud Services, I’m doing a session on the User Group Sunday for ODTUG on ODI12c and the Big Data Connectors for ETL on Hadoop:
Deep Dive into Big Data ETL with Oracle Data Integrator 12c and Oracle Big Data Connectors [UGF9481]
“Much of the time required to work with big data sources is spent in the data acquisition, preparation, and transformation stages of a project before your data reaches a state suitable for analysis by your users. Oracle Data Integrator, together with Oracle Big Data Connectors, provides a means to efficiently load and unload data to and from Oracle Database into a Hadoop cluster and perform transformations on the data, either in raw form or in technologies such as Apache Hive or R. This presentation looks at how Oracle Data Integrator can form the centerpiece of your big data ETL strategy, within either a custom-built big data environment or one based on Oracle Big Data Appliance.”
UK Oracle User Group Tech’14 Conference, December 2014
I’m delivering an extended version of my OOW presentation at the UKOUG Tech’14 “Super Sunday” event, this time over 90 minutes rather than the 45 at OOW, giving me a bit more time for demos and discussion:
Deep-Dive into Big Data ETL using ODI12c and Oracle Big Data Connectors
“Much of the time required to work with Big Data sources is spent in the data acquisition, preparation and transformation stages of a project, before your data is in a state suitable for analysis by your users. Oracle Data Integrator, together with Oracle Big Data Connectors, provides a means to efficiently load and unload data from Oracle Database into a Hadoop cluster, and perform transformations on the data either in raw form or in technologies such as Apache Hive or R. In this presentation, we will look at how ODI can form the centrepiece of your Big Data ETL strategy, either within a custom-built Big Data environment or one based on Oracle Big Data Appliance.”
Oracle DW Global Leaders’ Meeting, Dubai, December 2014
The Oracle DW Global Leaders forum is an invite-only group organised by Oracle and attended by select customers and associate partners, one of which is Rittman Mead. I’ll be delivering the technical seminar at the end of the second day, which will run over two sessions and will be based on the main points from the one-day seminars I’m running in Croatia and Slovenia.
From Hadoop to dashboards, via ODI and the BDA – the complete trail : Part 1 and Part 2
“Join Rittman Mead for this afternoon workshop, taking you through data acquisition and transformation in Hadoop using ODI, Cloudera CDH and Oracle Big Data Appliance, through to reporting on that data using OBIEE, Endeca and Oracle Big Data SQL. Hear our project experiences, and tips and techniques based on real-world implementations”
Keep an eye out for more Hadoop and big data content over the next few weeks, including a look at MongoDB and NoSQL-type databases, and how they can be used alongside Oracle BI, DW and data integration tools.
There’s a good chance that, if you’re reading this, you perform a reporting, analytics or data stewardship role, or some combination of all three. And be it for a large corporation or a small company, there are likely standards and practices that govern how those jobs are performed on a day-to-day basis; not easily changed, and perpetually validated by big budgets and long careers. It is equally likely that deeply ingrained within these reporting practices lies some moderate to heavy implementation of Excel. It wasn’t long ago that I found myself using the spreadsheet program on a daily basis, for hours at a time.
What this essentially amounted to:
- Pulling down large amounts of data from our department’s data model using large SQL queries that could take most of the day to write, let alone waiting for the query to return results – easily long enough to warrant a bathroom break, a phone call or, if you were feeling adventurous, catching up on email.
- Validating your results
- Exporting to Excel (key step here!)
- Massaging and formatting your data by implementing innumerable and often unwieldy functions that deserved their own time slot on your schedule for the day to figure out
- Proofing your analysis so that it got to management in ship shape
- Hoping that the numbers from an analyst in another department, who used the same metric on their report and would be at the same meeting, actually coincided with yours
Fast forward a bit and I’m sitting here, writing this blog as a sort of proverbial white flag in the great battle between Excel and the behemoth that is OBIEE. And just what is this white flag? Why, it’s Oracle’s most recent iteration of Smart View, which provides expanded functionality and support for the Microsoft Office suite of programs. Namely, its golden boy, Excel. That’s right, Excel, the darling of office staff everywhere, the program upon which empires rise and fall. To paraphrase a quote from www.cfo.com, some 64% of public and private companies still use Excel and other “manual” solutions to perform their finance functions. So, in the world of the spreadsheet, when does it make sense to cross that blurry line from cell to subject area? Smart View now makes answering that question much easier. It seems that Oracle have really gotten a grasp on the formatting shortcomings of the last version and made up for them in spades. Or so, at least, they claim.
The Test Run – OBIEE to Excel
The example below illustrates a simple import via Smart View. I generated a dashboard in Answers which mimics an Excel design I found online. Thank the good folks over at www.chandoo.org for their excellent skills in Excel dashboarding and for providing plenty of great examples. The dashboard contains a table with a selection of KPIs that the user may then choose to sort on via a View Selector (each view has been sorted on a different KPI and is on a different Compound Layout). Upon selecting a KPI, the analysis will then display the Top 10 products by the KPI selected. In addition, the table contains conditional formatting which simply alerts users to the variance between different KPIs and their targets. Lastly, there is a scatter plot view which displays our Product dimension as seen through the lens of Revenue and Quantity. Per the most recent Oracle documentation, we shouldn’t have any trouble including the current selections of a dashboard prompt either. Let’s see how it performs when we move it over to Excel.
“OBIEE report and page prompts are fully supported as part of the import process. Dashboards can be imported through Oracle Smart View on a per page basis or the entire dashboard. Prompts are applied at the current state of the logged in user. Future releases of the product will support dashboard prompts directly through Microsoft Office.”
And there you have it! Excel displays our table and graph views as per the most recent selection from the dashboard prompt. But wait! Our conditional formatting seems to be missing – and this is even the case when the analysis view is exported directly as an Excel workbook.
For our second scenario, let’s see how Excel handles a simpler, heat map style conditional formatting. I’ve made a simple table on our dashboard that measures Revenue, Quantity Sold, and the Average Order in $. I set up conditional formatting around the Average Order measure to see how Excel handles importing the color scheme for the currently selected Time parameters on the dashboard.
By contrast, we see that Smart View has preserved this simpler, heat-map style of conditional formatting on import from OBIEE. So perhaps it is the lack of a corresponding Excel graphic in the previous example that caused the migration snafu? OBIEE doesn’t even seem to render our arrow graphics as per the documentation.
“Oracle BI Customizations and View Standards – The Import of Oracle BI content can leverage the customizations and view standards used within an OBIEE environment. All view designed modifications such as conditional formatting, background colors or data configuration is automatically translated to the Microsoft Office environment.”
Excel to OBIEE
Let’s see what the latest edition of Smart View offers when moving an analysis from Excel to OBIEE.
Because we weren’t able to import our full table view, why don’t we construct it using the View Designer? The interface looks clean and provides an intuitive approach to producing basic Answers views. Accessing our subject area, I simply selected the columns that matched those on our Answers analysis. After clicking ‘OK’, sorting our Revenue column from largest to smallest and doing a little deleting, we have a pseudo ‘Top 10’ analysis by Revenue. Given the aesthetic attributes of our Answers analysis, let’s see how we’re going to replicate this in Excel.
After selecting the table, we can navigate to the Design tab under ‘Table Tools’ and select an alternating grey scheme, which gives us the ‘Enable Alternate Styling’ design quality. Now let’s add some formulas and conditional formatting that will give us our calculated-column equivalents. We can insert two rows, one between Revenue and Target and one between Qty and Target, to make room for conditional formatting and Excel’s Icon Sets feature. We then create a simple formula in the column between the two that subtracts Revenue and Quantity from their respective targets, assign conditional formatting and voila! Excel even has a check box that lets you show the arrow only.
From Excel, we can select Publish View to deposit our analysis into our Shared Folder. The results indicate a sort of ‘two-way street’ between Smart View and Excel: neither fully supports the formatting capabilities of the other, as if Smart View is giving ground with every new release. In this blog, we’ve taken a look at how Smart View handles some mildly complex conditional formatting and what it takes to replicate this feature in native Excel. In a user environment where reports are flying back and forth between the two platforms, Smart View definitely makes sense; however, it might be advisable to simply deliver the minimum of what is needed and let the end user make any formatting-based modifications. After all, who would want to do all that work only to have it lost in translation?
One of the main things I get asked to do here at Rittman Mead is deliver the OBIEE front-end training course (TRN 202). This is a great course that has served both us and our clients well over the years. It has always been in high demand and always delivered to great feedback from those in attendance. However, as with all things in life and business, there is room for improvement and opportunities to provide even more value to our clients. Of all the feedback I receive from delivering the course, my favorite is that we do an incredible job of delivering both the content and real business scenarios showing how we have used the tool in the consulting field. Attendees ask me, every single time, how a feature works and how I have used it with current and former clients.
This year at KScope ’14 in Seattle, we were asked to deliver a two-hour front-end training course. Our normal front-end course runs over two days and covers just about every feature you can use, all the way from Answers and Dashboards to BI Publisher. Before the invitation to KScope ’14, we had been toying with the idea of delivering a course that not only teaches attendees how to navigate OBIEE and use its features, but also emphasizes the business value behind why those features exist in the first place. We felt that too often users are given a quick overview of what the tool includes, but are left to figure out on their own how to extract the most value. It is one thing to create a graph in Answers, and another to know what the best graph to use might be. So in preparation for the KScope session, we decided to build the content around not only how to develop in OBIEE, but also why, as a business user, you would choose one layout/graph/feature over another. As you would expect, the turnout for the session was fantastic: over 70 people pre-registered, with another 10 on the waiting list. This was proof that the need to pull as much business value out of the tool is as pressing as the need to simply learn how to use it. We were so encouraged by the attendance and feedback from this event that we spent the next several weeks developing what we call the “Business Enablement Bootcamp”. It is a 3-day course that covers Answers, Dashboards, Action Framework, BI Publisher, and the new Mobile App Designer. This is an exciting time for us, in that we not only get to show people how to use all of the great features built into the tool, but also get to incorporate years of consulting experience and hundreds of client engagements right into the content. Below I have listed a breakdown of the material and the value it will provide.
Whenever we deliver our OBIEE 5-day bootcamp, which covers everything from infrastructure to the front end, Answers is one of the key components that we teach. Answers is the building block for analysis in OBIEE. While this portion of the tool is relatively intuitive to get started with, there are so many valuable nuances and settings that can get overlooked without proper instruction. To get the most out of the tool, a business user needs to be able not only to create basic analyses, but also to use many of the advanced features such as hierarchical columns, master-detail linking, and selection steps. Knowing how and why to use these features is a key component of gaining valuable insight for your business users.
This one in particular is dear to my heart. To create an analysis and share it on a dashboard is one thing, but to tell a particular story with a series of visualizations strategically placed on a dashboard is something entirely different. Like anything else in business intelligence, optimal visualization and best practices are learned skills that take time and practice. Valuable skills like making the most of your white space, choosing the correct visualizations, and formatting will be covered. When you provide your user base with the knowledge and skills to tell the best story, there will be no time wasted on clumsy iterations and guesswork as to the best way to present your data. This training will provide some simple parameters to work within, so that users can quickly gather requirements and develop dashboards with more polish and relevance than ever before.
Whenever I deliver any form of front-end training, I always feel like this piece of OBIEE is either overlooked, undervalued, or both. This is because most users are either unaware of its existence, or don’t have a clear idea of its value and functionality. It’s as if it is viewed as an add-on, in the sense that it is simply a nice feature. In fact, when users are properly taught how to navigate the Action Framework, or given a demonstration of its value, it becomes an invaluable piece of the stack. To get the most out of your catalog, users need to be shown how to strategically place action links to give the ability to drill across to other analyses and add more context for discovery. These are just a few capabilities within the Action Framework that, when users are shown how and when to use them, can add valuable insight (not to mention convenience) to an organization.
BI Publisher/Mobile App Designer
Along with the Action Framework, this particular piece of the tool has a tendency to get overlooked, or to simply give users cold feet about implementing it to complement Answers. I would actually have agreed with these feelings before the release of 11.1.1.7. Before this release, a user needed a pretty advanced knowledge of data modeling. Now, however, users can simply pick any subject area and use the report creation wizard to be off and running, creating pixel-perfect reports in no time. The new Mobile App Designer on top of the Publisher platform is another welcome addition to the tool. Being the visual person that I am, I think this is where the pixel-perfect tool really shines. Objects just look a lot more polished right out of the box, without the time spent formatting that the same work would need in Answers. During training, attendees will be exposed to many of the new features within BIP and MAD, as well as how to use them to complement Answers and dashboards.
Third Party Visualizations
While the ability to implement third-party visualizations like D3 and Flot into OBIEE is more of an advanced skill, the market and the need for it seem to be growing. While Oracle has done some good things in past releases with new visualizations like performance tiles and waterfall charts, we all know that business requirements can be demanding at times and may require going elsewhere to appease the masses. You can visit https://github.com/mbostock/d3/wiki/Gallery to see some of the visualizations available beyond what ships with OBIEE. During training, attendees will learn the value of when and why external visualizations might be useful, as well as getting a high-level view of how they can be implemented.
Users often make the mistake of viewing each piece of the front-end stack as a separate entity, and without proper training this is very understandable. Even though they are separate pieces of the product, they are all meant to work together and enhance the “Business Intelligence” of an organization. Without training the business on how each piece complements the others, the product will always be viewed as just another frustrating tool that they don’t have enough time to learn on their own. This tool is meant to empower your organization with everything it needs to make the most informed and timely decisions; let us use our experience to enable your business.
Over the past couple of years Rittman Mead have been broadening our skills and competencies out from core OBIEE, ODI and Oracle data warehousing into the new “emerging” analytic platforms: R and database advanced analytics, Hadoop, cloud and clustered/distributed systems. As we talked about in the recent series of updated Oracle Information Management Reference Architecture blog posts and my initial look at the Oracle Big Data SQL product, our customers are increasingly looking to complement their core Oracle analytics platform with ones that handle unstructured and big data, and as technologists we’re always interested in what else we can use to help our customers get more insight out of their (total) dataset.
An area we’ve particularly focused on over the past year has been Hadoop and R analysis, with the recent announcement of our partnership with Cloudera and the recruitment of a big data and advanced analytics team operating out of our Brighton, UK office. We’ve also started to work on a number of projects and proofs of concept with customers in the UK and Europe, working mainly with core Oracle BI, DW and ETL customers looking to make their first move into Hadoop and big data. The usual pattern is for us to engage with business users looking to analyse a dataset hitherto too large or too unstructured to load into their Oracle data warehouse, or who recognise the need for more advanced analytics tools such as R, MapReduce and Spark but need some help getting started. Most often we put together a PoC Hadoop cluster for them using virtualization technology on existing hardware they own, allowing them to get started quickly and with no initial licensing outlay, with our preferred Hadoop distribution being Cloudera CDH, the same Hadoop distribution that comes on the Oracle Big Data Appliance. Projects then typically move on to Hadoop running directly on physical hardware – in a couple of cases Oracle’s Big Data Appliance – usually in conjunction with Oracle Database, Oracle Exadata and Oracle Exalytics for reporting.
One such project started with the customer wanting to analyse a dataset that was too large for the space available in their Oracle database and that they couldn’t easily process or analyse using the SQL-based tools they usually used; in addition, like most large organisations, database and hardware provisioning took a long time and they needed to get the project moving quickly. We came in and quickly put together a virtualised Hadoop cluster for them, on re-purposed hardware and using the free (Standard) edition of Cloudera CDH4, and then used the trial version of Oracle Big Data Connectors along with SFTP transfers to get data into the cluster for analysis.
The PoC itself then ran for just over a month, with the bulk of the analysis being done using Oracle R Advanced Analytics for Hadoop, an extension to R that allows you to use Hive tables as a data source and create MapReduce jobs from within R itself; the output from the exercise was a series of specific-answer-to-specific-question R graphs that solved an immediate problem for the client, and showed the value of further investment in the technology and our services – the screenshot below shows a typical ORAAH session, in this case analysing the flight delays dataset that you can also find on the Exalytics server and in smaller form in OBIEE 11g’s SampleApp dataset.
That project has now moved on to a larger phase of work, with Oracle Big Data Appliance used as the Hadoop platform rather than VMs, and Cloudera Hadoop upgraded from the free, unsupported Standard version to Cloudera Enterprise. The VMs in fact worked pretty well and had the advantage that they could be quickly spun up and housed temporarily on an existing server, but they were restricted by the RAM we could assign to each VM – 2GB initially, quickly upgraded to 8GB per VM – and by the fact that they shared CPU and IO resources. Big Data Appliance, by contrast, has 64GB of RAM per node – something that’s increasingly important now that in-memory tools like Impala are being used – and has InfiniBand networking between the nodes as well as fast network connections out to the wider network, something that’s often overlooked when speccing up a Hadoop system.
The support setup for the BDA is pretty good as well; from a sysadmin perspective there’s a lights-out ILOM console for low-level administration, as well as plugins for Oracle Enterprise Manager 12c (screenshot below), and Oracle support the whole package, typically handling the hardware support themselves and delegating to Cloudera for more Hadoop-specific queries. I’ve raised several SRs on client support contracts since starting work on BDAs, and I’ve not had any problem with questions not being answered or buck-passing between Oracle and Cloudera.
One thing that’s been interesting is the amount of work you need to do with the Big Data Appliance, beyond the installation and initial configuration by Oracle, to “on-board” it into the typical enterprise environment. BDAs are left with customers in a fully-working state, but as with Exalytics and Exadata, the initial install and configuration is just the start: you’ve then got to integrate the platform with your corporate systems and get developers on-boarded onto the platform. Tasks we’ve typically provided assistance with on projects like these include:
- Configuring Cloudera Manager and Hue to connect to the corporate LDAP directory, and working with their security team to create LDAP groups for developer and administrative access that we then used to restrict and control access to these tools
- Configuring other tools such as RStudio Server so that developers can be more productive on the platform
- Putting in place an HDFS directory structure to support incoming data loads and data archiving, as well as directories to hold the output datasets from the analysis work we’re doing – all within the POSIX security setup that HDFS currently uses, which limits us to granting owner, group and world permissions on directories (see the sketch just after this list)
- Working with the client’s infrastructure team on things like alerting, troubleshooting and setting up backup and recovery – something that’s surprisingly tricky in the Hadoop world, as Cloudera’s backup tools only back up from Hadoop to Hadoop, and by definition your Hadoop system is going to hold a lot of data, a volume your current backup tools aren’t going to handle easily
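To make the HDFS directory-structure point concrete, here’s a minimal sketch of the kind of setup involved – all paths, service accounts and group names are hypothetical examples, not a client’s actual layout:

    hadoop fs -mkdir -p /data/incoming /data/archive /data/analysis_output
    # each area gets an owning service account and a developer or analyst group...
    hadoop fs -chown etl_svc:developers /data/incoming
    hadoop fs -chown etl_svc:developers /data/archive
    hadoop fs -chown analyst_svc:analysts /data/analysis_output
    # ...and then we rely on the POSIX permission bits: owner read-write,
    # group read and traverse, no access at all for everyone else
    hadoop fs -chmod 750 /data/incoming /data/archive /data/analysis_output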
Once things are set up, though, you’ve got a pretty comprehensive platform that can be expanded from the initial six nodes our customers’ systems typically start with to the full eighteen-node cluster, and can use tools such as ODI to do data loading and movement, Spark and MapReduce to process and analyse data, and Hive, Impala and Pig to provide end-user access. The diagram below shows a typical future-state architecture we propose for clients on this initial BDA “starter config”, where we’ve moved up to CDH5.x, with Spark and YARN generally used as the processing framework and with additional products such as MongoDB used for document-type storage and analysis:
Something that’s turned out to be more of an issue on projects than I’d originally anticipated is complying with corporate security policies. By definition, most customers who buy an Oracle Big Data Appliance are going to be large customers with an existing Oracle database estate, and if they deal with the public they’re going to have pretty strict security and privacy rules you’ll need to adhere to. Something that’s surprising, therefore, to most customers new to Hadoop is how insecure, or at least easily compromised, the average Hadoop cluster is, with Hadoop FS shell security relying on trusted networks, and incoming user connections and interfaces such as ODBC not checking passwords at all.
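As an illustration of just how little the default setup checks, here’s a minimal sketch of user spoofing on a non-Kerberos (“simple” authentication) cluster – the directory name is hypothetical, but the technique is exactly why trusted networks are the only real control: HDFS simply believes whatever username the client asserts.

    # run from any machine that can reach the NameNode – no password is
    # ever asked for, we just claim to be the hdfs superuser
    export HADOOP_USER_NAME=hdfs
    hadoop fs -rm -r /user/cust_segment_analysis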
Hadoop and the BDA only become what’s termed “secure” when you link them to a Kerberos server, but not every customer has Kerberos set up, and unless you enable this feature right at the start when you set up the BDA, it’s a fairly involved task to add retrospectively. Moreover, customers are used to fine-grained access control over their data, a single security model covering their data, and a good understanding in their heads of how security works on their database, whereas Hadoop is still a collection of fairly loosely-coupled components with pretty primitive access controls, and no easy way to delete or redact data when, for example, a particular country’s privacy laws in theory mandate this.
Like everything, there’s a solution if you’re creative enough, with tools such as Apache Sentry providing role-based access control over Hive and Impala tables; alternative storage tools like HBase that permit read, write, update and delete operations on data, rather than just HDFS’s insert and (table or partition-level) delete; and tools like Cloudera Navigator and BDA features like Oracle Audit Vault that give administrators some oversight of who’s accessing what data and when. As I mentioned in my blog post a couple of weeks ago, Oracle’s Big Data SQL product addresses this requirement pretty well, potentially allowing us to apply Oracle security over both relational and Hadoop datasets, but for now we’re working within current CDH4 capabilities and planning to introduce Apache Sentry for role-based access control to Hive and Impala in the coming weeks. We’re also looking at implementing Cloudera’s “secure gateway” cluster topology, with all access restricted to a single gateway Hadoop node and the cluster itself firewalled off, external access being limited to that gateway node and HTTP / REST API access to the various cluster services, for example as shown in the diagram below:
My main focus on Hadoop projects has been on the overall Hadoop system architecture, and on interacting with the client’s infrastructure and security teams to help them adopt the BDA and take over its maintenance. The analysis side has been equally interesting, with a number of projects using tools such as R, Oracle R Advanced Analytics for Hadoop and core Hive/MapReduce for data analysis, Flume, Java and Python for data ingestion and processing, and most recently OBIEE 11g for publishing the results out to a wider audience. Following the development model that we outlined in the second post in our updated Information Management Reference Architecture blog series, we typically split delivery of each project’s output into two distinct phases: a discovery phase, typically done using RStudio and Oracle R Advanced Analytics for Hadoop, where we explore and start understanding the dataset, presenting initial findings to the business and using their feedback and direction to inform the second phase; and a commercial exploitation phase, where we use the discovery phase’s outputs and models to drive a more structured dimensional model, with output in the form of OBIEE analyses and dashboards.
We looked at several options for providing the datasets for OBIEE to query, with our initial idea being to connect OBIEE directly to Hive and Impala and let the users query the data in-place, directly on the Hadoop cluster, with an architecture like the one in the diagram below:
In fact this turned out not to be possible: whilst OBIEE 11.1.1.7 can access Apache Hive datasources, it currently only ships with HiveServer1 ODBC support, and no support for Cloudera Impala, which means we need to wait for a subsequent release of OBIEE 11g to be able to report against the ODBC interfaces provided by CDH4 and CDH5 on the BDA (although ironically, you can get HiveServer2 and Impala working on OBIEE 11.1.1.7 on Windows, though this platform isn’t officially supported by Oracle for Hadoop access, only Linux). Either way, it soon became apparent that even if we could get Hive and Impala access working, in reality it made more sense to use Hadoop as the data ingestion and processing platform – providing access to data analysts at this point if they wanted access to the raw datasets – with the output of this then being loaded into an Oracle Exadata database, either via Sqoop or via Oracle Loader for Hadoop, ideally orchestrated by Oracle Data Integrator 12c, and users then querying these Oracle tables rather than the Hive and Impala ones on the BDA, as shown in the diagram below.
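For the Sqoop route, the handover from Hadoop to Oracle can be as simple as the sketch below – the connection string, schema, table and HDFS paths are all hypothetical, and in practice this step would be wrapped in an ODI package rather than run by hand:

    # push the processed output dataset from HDFS into an Oracle table
    sqoop export \
      --connect jdbc:oracle:thin:@//exadata-scan:1521/DWPRD \
      --username DW_LOAD --password-file /user/oracle/.ora_pwd \
      --table SALES_ANALYSIS_RESULTS \
      --export-dir /user/analysis/output/sales_results \
      --input-fields-terminated-by ','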
In practice, Oracle SQL is far more complete and expressive than HiveQL and Impala SQL, and it makes more sense to use Oracle as the query platform for the vast majority of users, with data analysts and data scientists still able to access the raw data on Hadoop using tools like Hive, R and (when we move to CDH5) Spark.
The final thing that’s been interesting about working on Hadoop and Big Data Appliance projects is that 80% of it, in my opinion, is just the same as working on large enterprise data warehouse projects, with 20% being “the magic”. A large portion of your time is spent analysing and setting up feeds into the system; it’s just that in this case you use tools like Flume instead of GoldenGate (though GoldenGate can also load into HDFS and Hive, something that’s useful for transactional database data sources versus Flume’s focus on file and server log data sources). Another big part of the work is data processing, ingestion, reformatting and combining – again skills an ETL developer would have (though there’s much more reliance, at this point, on command-line tools and Unix utilities, albeit with a place for tools like ODI once you get to the set-based filtering, joining and aggregating phase). In most cases, the output of your analysis and processing will be Hive and Impala tables so that results can be analysed using tools such as OBIEE, and you therefore need skills in areas such as dimensional modelling, business analysis and dashboard prototyping, as well as tool-specific skills such as OBIEE RPD development.
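To give a feel for the Flume side of that, here’s a minimal sketch of an agent definition that tails a web server log into HDFS – the agent, file and directory names are hypothetical examples, not a production configuration:

    # /etc/flume-ng/conf/weblog-agent.conf
    a1.sources = r1
    a1.channels = c1
    a1.sinks = k1
    a1.sources.r1.type = exec
    a1.sources.r1.command = tail -F /var/log/httpd/access_log
    a1.sources.r1.channels = c1
    a1.channels.c1.type = memory
    a1.sinks.k1.type = hdfs
    a1.sinks.k1.channel = c1
    a1.sinks.k1.hdfs.path = /user/flume/weblogs/%Y-%m-%d
    a1.sinks.k1.hdfs.fileType = DataStream
    a1.sinks.k1.hdfs.useLocalTimeStamp = true

    # then start the agent:
    flume-ng agent --conf /etc/flume-ng/conf \
      --conf-file /etc/flume-ng/conf/weblog-agent.conf --name a1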
Where the “magic” happens, of course, is in the data preparation and analysis that you do once the data is loaded – quite intensively and interactively in the discovery phase, and then in the form of MapReduce and Spark jobs, Sqoop loads and Oozie workflows once you know what you’re after and need to process the data into something more tabular for tools like OBIEE to access. We’re building up a team competent in techniques such as large-scale data analysis, data visualisation, statistical analysis, text classification and sentiment analysis, and use of NoSQL and JSON-type data sources, which combined with our core BI, DW and ETL teams allows us to cover a project from end to end. It’s still relatively early days, but we’re encouraged by the response from our project customers so far and, to be honest, by the quality of the Oracle big data products and the Cloudera platform they’re based around, and we’re looking forward to helping other Oracle customers get the most out of their adoption of these new technologies.
If you’re an Oracle customer looking to make your first move into the worlds of Hadoop, big data and advanced analytics techniques, feel free to drop me an email at firstname.lastname@example.org for some initial advice and guidance – the fact that we come from an Oracle-centric background typically makes it easier for us to relate these new concepts to the ones you’re already familiar with. Similarly, if you’re about to bring on board an Oracle Big Data Appliance system and want to know how best to integrate it with your existing Oracle BI, DW, data integration and systems management estate, get in contact and I’d be happy to share experiences and our delivery approach.
Rittman Mead are proud to announce that we’ve been nominated by UKOUG members and Oracle customers for five categories in the upcoming UKOUG Partner of the Year Awards 2014: Business Intelligence, Training, Managed Services, Operating Systems Storage and Hardware, and Emerging Partner, reflecting the range of products and services we now offer for customers in the UK and around the world.
Although Rittman Mead are a worldwide organisation with offices in the US, India, Australia and now South Africa, our main operation is in the UK and for many years we’ve been a partner member of the UK Oracle User Group (UKOUG). Our consultants speak at UKOUG Special Interest Group events as well as the Tech and Apps conferences in December each year, we write articles for Oracle Scene, the UKOUG members’ magazine, and several of our team including Jon and myself have held various roles including SIG chair and deputy chair, board member and even editor of Oracle Scene.
Partners, along with Oracle customers and of course Oracle themselves, are a key part of the UK Oracle ecosystem and to recognise their contribution the UKOUG recently brought in their Partner of the Year Awards that are voted on by UKOUG members and Oracle customers in the region. As these awards are voted on by actual users and customers we’ve been especially pleased over the years to win several Oracle Business Intelligence Partner of the Year Gold awards, and last year we were honoured to receive awards in five categories, including Business Intelligence Partner of the Year, Training Partner of the Year and Engineered Systems Partner of the Year.
This year we’ve been nominated again in five categories, and if you like what we do we’d really appreciate your vote, which you can cast at any time up to the closing date, September 15th 2014. Voting is open to UKOUG members and Oracle customers and only takes a few minutes – the voting form is here, and you don’t need to be a UKOUG member, only an Oracle end-user or customer. These awards are a great recognition of the hard work our team puts in, so thanks in advance for any votes you can cast for us!
Oracle announced their Big Data SQL product a couple of weeks ago, which effectively extends Exadata’s query-offloading to Hadoop data sources. I covered the launch a few days afterwards, focusing on how it implements Exadata’s SmartScan on Hive and NoSQL data sources and provides a single metadata catalog over both relational and Hadoop data sources. In a Twitter conversation later in the day, though, I made the comment that in my opinion the biggest benefit of Big Data SQL will be its ability to extend Oracle’s security model to Hadoop data sources, because Hadoop security is still a bit of a mess:
“To me the greatest benefit of Big Data SQL is the single security model; even with Sentry, Hadoop security is fragmented and a mess (IMO)”
— Mark Rittman (@markrittman) July 17, 2014
I’ve been working on an Oracle Big Data Appliance project over the past few weeks, as the technical architect and initial sysadmin for the cluster, and it’s given me a first-hand experience of what security’s like on a Hadoop cluster. Over the past few weeks I’ve had to come up with a security policy covering HDFS, Hive and the Cloudera management tools (Cloudera Manager, Hue etc), and try and implement an access and authorisation approach that ensures only designated people can log in, and when they’re in, they can only see the data they’re supposed to see. Hadoop at this point, to my mind, suffers from a couple of major issues when it comes to security:
- It’s fragmented, in that each tool or Hadoop product tends to have its own security setup, and the documentation is all split up, rapidly goes out of date, and is more of a reference than a tutorial (Cloudera’s Security documentation is one of the better examples, but it still splits the key information you need over several sections and several other docs)
If we take a typical security policy that a large enterprise customer’s going to want to put in place, it’ll look something like this:
- Users should only be able to log in via their corporate LDAP account, and we’ll want that login process to be secure so it can’t easily be bypassed
- We want to be able to secure our datasets, so that only authorised users can view particular datasets, and there’s likely to be some groups we grant read-only access to, and others we grant read-write
- The data loading processes for the Hadoop cluster need to be locked-down so they can’t overwrite the datasets of other applications
- Our security policy ideally needs to sync-up, or be an extension of, our existing enterprise security policy, not something we maintain separately
- We need to be able to audit and review who’s actually accessing what dataset, to ensure that these policies are being followed and enforced
- We also need the ability to obfuscate or depersonalise data before it gets into the cluster, and also have the option of encrypting the data at-rest as well as on-the-wire
Back in the early days of Hadoop these types of security policy weren’t often needed, as the users of the Hadoop cluster were typically a small set of data scientists or analysts who’d been cleared already to view and work with the data in the cluster (or more likely, they did it and just didn’t tell anyone). But as we move to enterprise information management architectures such as the one outlined in my two-part blog post series a few weeks ago (pt.1, pt.2), the users of Hadoop and other “data reservoir” data sources are likely to increase significantly in number as data from these systems becomes just another part of the general enterprise data set.
But in practice, this is hard to do. Let’s start with HDFS first, the Hadoop Distributed File System on which most Hadoop data is stored. HDFS aims to look as similar to a Linux or Unix-type filesystem as possible, with similar commands (mkdir, ls, chmod etc) and the same POSIX permissions model, where files and directories are associated with an owner and a group and where permissions are set for that owner, the group and all others. For example, in the HDFS file listing below, the “/user/cust_segment_analysis” directory is owned by the user “mrittman” and the group “marketing”, with the directory owner having full read, write and subdirectory traversal access to the directory, the group having read-only and subdirectory traversal access, and all others having no access at all.
[root@bdanode1 ~]# hadoop fs -ls /user
Found 13 items
drwxrwxrwx   - admin    admin           0 2014-06-02 16:06 /user/admin
drwxr-x---   - mrittman marketing       0 2014-07-26 21:31 /user/cust_segment_analysis
drwxr-xr-x   - hdfs     supergroup      0 2014-05-27 13:19 /user/hdfs
drwxrwxrwx   - mapred   hadoop          0 2014-05-25 20:47 /user/history
drwxrwxr-t   - hive     hive            0 2014-06-04 16:31 /user/hive
drwxr-xr-x   - hue      hue             0 2014-05-31 18:51 /user/hue
drwxrwxr-x   - impala   impala          0 2014-05-25 20:54 /user/impala
drwxrwxr-x   - oozie    oozie           0 2014-05-25 20:52 /user/oozie
drwxrwxrwx   - oracle   oracle          0 2014-06-09 21:38 /user/oracle
drwxr-xr-x   - root     root            0 2014-06-06 16:25 /user/root
drwxr-xr-x   - sample   sample          0 2014-05-31 18:51 /user/sample
drwxr-x--x   - spark    spark           0 2014-05-25 20:45 /user/spark
drwxrwxr-x   - sqoop2   sqoop           0 2014-05-25 20:53 /user/sqoop2
Which all sounds great until you have another group that needs read-write access to the directory – but you’re limited to just one group permissions setting for the directory, which you’ve already used to set up read-only access for the first group. If you need to set up different sets of security access for different groups, you typically end up creating multiple HDFS directories and multiple copies of the dataset in question, assigning each copy to a different group – not all that convenient, and it gives you other problems in terms of maintenance and keeping it all in sync.
What you of course need is something like the “access control lists” (ACLs) you get with operating systems like Windows NT and MacOS, where you can define an arbitrary number of user groups and then assign each of them their own permission set on the directory and the files it contains. The most recent versions of Hadoop actually implement a form of ACL for HDFS, with this feature making its way into the recently-released Cloudera CDH5.1, but these ACLs are an addition to the standard POSIX user, group, others model and aren’t recommended for all files in your HDFS filesystem as according to the Hadoop docs “Best practice is to rely on traditional permission bits to implement most permission requirements, and define a smaller number of ACLs to augment the permission bits with a few exceptional rules. A file with an ACL incurs an additional cost in memory in the NameNode compared to a file that has only permission bits.” Still, it’s better than not having them at all, and I’d imagine using this feature for particular directories and sets of files that need more than one set of group permissions configured for them.
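Here’s a minimal sketch of what those HDFS ACLs look like in practice, granting a second group read-write access to the directory from the earlier listing without disturbing the existing owner and group permission bits – the group name is a hypothetical example, and dfs.namenode.acls.enabled needs to be set to true on the NameNode first:

    # add an extra ACL entry for a second group, plus a default entry
    # so that new files created in the directory inherit it
    hdfs dfs -setfacl -m group:finance:rwx /user/cust_segment_analysis
    hdfs dfs -setfacl -m default:group:finance:rwx /user/cust_segment_analysis
    hdfs dfs -getfacl /user/cust_segment_analysis   # verify the entries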
In most cases though, the way you’ll present data out to non-technical end-users and applications is through Hive and Impala tables, or through tools like Pig and Spark. Under the covers, these tools still use HDFS permissions to control access to the data within Hive and Impala tables, but again by default you’re limited to granting access to whole HDFS directories, or the files contained within those directories. Something that addresses this issue is a product called Apache Sentry, an open-source project within the Hadoop family that enables role-based access control for Hive and Impala tables. Oracle are one of the co-founders of the Sentry project and include it in the base software on the Big Data Appliance, and using Sentry you can grant SELECT, INSERT or ALL privileges to a group on a particular Hive or Impala table, rather than on the underlying HDFS directories and files. A form of fine-grained access control can be set up using Sentry by creating views with particular row-level security settings, giving you the basics of a database-like security policy that you can apply over the main way that users access data in the cluster.
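As a hedged sketch of what those Sentry grants look like – the role, group and table names are hypothetical examples, and the statements assume a Sentry-enabled cluster – the setup is pleasingly database-like:

    # issued here through impala-shell; equivalent grants work from Hive
    impala-shell -q "CREATE ROLE marketing_readonly;"
    impala-shell -q "GRANT ROLE marketing_readonly TO GROUP marketing;"
    impala-shell -q "GRANT SELECT ON TABLE sales.cust_segment_analysis TO ROLE marketing_readonly;"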
But Sentry itself has a few significant prerequisites – you have to enable Kerberos authentication on your cluster, which you should do anyway because of the risk of account spoofing, but which is still a significant thing to set up – and of course you need to link Hive and Impala to your corporate LDAP server and configure them to work in the way that Sentry requires. Most importantly though, you’re still left with the situation where you’ve got two separate security setups – one for your corporate data warehouse and relational data sources, and another for data accessed on Hadoop – and it’s still hard to be sure, what with all the disparate, partially-complete open-source products, whether data in your Hadoop cluster is really secure (though products like Cloudera Navigator aim to provide some form of data governance and auditing over these datasets); and there’s still no straightforward way to remove individual customers’ data from the Hadoop dataset (“data redaction”), no easy way to obfuscate or mask data, and no easy way (apart from the Hive views mentioned before) to restrict users to accessing only certain columns in a Hive or Impala table.
And so this is where Oracle’s Big Data SQL product could be very interesting. Big Data SQL takes the Exadata model of moving as much filtering and column-projection as it can to the storage server, adding Oracle SmartScan functionality to the Hadoop node and allowing it to understand the full Oracle SQL dialect (and PL/SQL security functions), rather than just the subset of SQL provided by HiveQL and Impala SQL.
More importantly, it’ll enable a single unified data dictionary over both Oracle and Hadoop data sources, presenting Hive tables and NoSQL data as regular Oracle tables and allowing the DBA to create data security, redaction and row-level filtering policies over both relational and Hadoop data – giving you potentially the ability to define a single security policy across all data in your overall information management architecture.
So I think this is actually a “big deal”, and potentially even more game-changing than the SmartScan functionality that got most of the attention at the Big Data SQL product launch. How well it’ll work in practice, and how much will be enabled on day one, is hard to say, but this feature meets a real need our customers have right now, so I’ll be very interested to try it out when the product becomes available (presumably) later in the year.
Oracle launched their Oracle Big Data SQL product earlier this week, and it’ll be of interest to anyone who saw our series of posts a few weeks ago about the updated Oracle Information Management Reference Architecture, where Hadoop now sits alongside traditional Oracle data warehouses to provide what’s termed a “data reservoir”. In this type of architecture, Hadoop and its underlying technologies HDFS, Hive and schema-on-read databases provide an extension to the more structured relational Oracle data warehouses, making it possible to store and analyse much larger sets of data with much more diverse data types and structures; the issue that customers face when trying to implement this architecture is that Hadoop is a bit of a “wild west” in terms of data access methods, security and metadata, making it difficult for enterprises to come up with a consistent, over-arching data strategy that works for both types of data store.
Oracle Big Data SQL attempts to address this issue by providing a SQL access layer over Hadoop, managed by the Oracle database and integrated in with the regular SQL engine within the database. Where it differs from SQL on Hadoop technologies such as Apache Hive and Cloudera Impala is that there’s a single unified data dictionary, single Oracle SQL dialect and the full management capabilities of the Oracle database over both sources, giving you the ability to define access controls over both sources, use full Oracle SQL (including analytic functions, complex joins and the like) without having to drop down into HiveQL or other Hadoop SQL dialects. Those of you who follow the blog or work with Oracle’s big data connector products probably know of a couple of current technologies that sound like this; Oracle Loader for Hadoop (OLH) is a bulk-unloader for Hadoop that copies Hive or HDFS data into an Oracle database typically faster than a tool like Sqoop, whilst Oracle Direct Connector for HDFS (ODCH) gives the database the ability to define external tables over Hive or HDFS data, and then query that data using regular Oracle SQL.
Where ODCH falls short is that it treats the HDFS and Hive data as a single stream, making it easy to read once but, like regular external tables, slow to access frequently as there’s no ability to define indexes over the Hadoop data; OLH is also good but you can only use it to bulk-load data into Oracle, you can’t use it to query data in-place. Oracle Big Data SQL uses an approach similar to ODCH but crucially, it uses some Exadata concepts to move processing down to the Hadoop cluster, just as Exadata moves processing down to the Exadata storage cells (so much so that the project was called “Project Exadoop” internally within Oracle up to the launch) – but also meaning that it’s Exadata only, and not available for Oracle Databases running on non-Exadata hardware.
As explained by the launch blog post by Oracle’s Dan McClary, Oracle Big Data SQL includes components that install on the Hadoop cluster nodes that provide the same “SmartScan” functionality that Exadata uses to reduce network traffic between storage servers and compute servers. In the case of Big Data SQL, this SmartScan functionality retrieves just the columns of data requested in the query (a process referred to as “column projection”), and also only sends back those rows that are requested by the query predicate.
Combined with Hive’s ability to map unstructured data sources into regular columns and tables, and Big Data SQL’s support for Oracle NoSQL Database, the promise of this new technology is the ability to run queries against relational, Hadoop and NoSQL data sources using a common data dictionary and a common set of identity and data access controls.
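As a purely speculative sketch of how that might look – the product isn’t generally available yet, so the table, directory and access-parameter names below are hypothetical guesses modelled on the existing ODCH external-table approach, not confirmed Big Data SQL syntax:

    sqlplus dw_owner@exadata <<'EOF'
    CREATE TABLE web_logs_ext (
      log_ts   VARCHAR2(30),
      user_id  VARCHAR2(20),
      url      VARCHAR2(4000)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_HIVE
      DEFAULT DIRECTORY default_dir
      ACCESS PARAMETERS (com.oracle.bigdata.tablename: bigdata.web_logs)
    )
    REJECT LIMIT UNLIMITED;

    -- once defined, ordinary Oracle SQL and security policies apply
    SELECT user_id, COUNT(*) FROM web_logs_ext GROUP BY user_id;
    EOF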
There are a couple of potential downsides, though. First off, Big Data SQL will only be available as part of Oracle Big Data Appliance, which, though an impressive bit of hardware and software, is a much smaller market than the total set of Oracle customers looking to combine relational and Hadoop-based data; it’s also restricted to Oracle 12c on Exadata, meaning you’ll most probably need to do a database upgrade even if you’ve already got the required Exadata servers in place. Finally, it’s also restricted to the Oracle-specific distribution of Cloudera Hadoop, though if you’re using the BDA you’ll be running this anyway.
My other concern is that Oracle now focus on SQL as their only access mechanism into Hadoop and big data, in a similar way to how they focused on SQL as their access route into OLAP when they incorporated Oracle Express into the Oracle Database back in the mid-2000s. Focusing on SQL over multidimensional languages such as Express 4GL and MDX meant you missed the real point of using a multidimensional OLAP database – being able to use a multidimensional query language – and my concern with Big Data SQL is that we’ll end up focusing on SQL rather than on languages such as Spark, Pig and the NoSQL query languages which, combined with schema-on-read, are the real differentiator for Hadoop-based systems. As long as Big Data SQL is positioned as a “bonus” – a convenient way of getting data out of Hadoop once it’s been processed and analysed using more Hadoop-native technologies – it will be a great enabling and acceptance technology for enterprises, rather than one that ends up restricting them.
We’re not aware of any beta program, and I don’t think the launch webcast mentioned a specific date or BDA version when Big Data SQL will be out, but with Openworld coming up soon I’d expect to hear more about this over the next few months. We’re involved in a couple of significant Oracle Big Data Appliance implementations at the moment, and this product would address a real, pressing need with our customers, so I’m looking forward to getting more involved in it over the coming months.
This article was updated on 18th July to add the fact that Big Data SQL is only available on Exadata, and is not a generic Oracle Database 12c technology.