
Feed aggregator

Oracle Cloud Friendly - Red Samurai ADF Performance Audit

Andrejus Baranovski - Mon, 2014-09-01 13:04
I have deployed our tool for ADF performance monitoring to the Oracle Java Cloud Service. It runs perfectly in the cloud, monitors slow performance and allows you to analyse the collected performance data. All data is stored in the Oracle Database Cloud.

You can access the performance monitoring dashboard using this address. Access will be available for a month or so, until my trial account expires. You would need to use the following login credentials - user: redsam, password: We1come@, identity: ltredsamuraictrial99050. You can log in and play online with our performance audit dashboard.

There is an ADF demo application deployed as well, the Summit App - accessible here (same login credentials). This standard demo application is extended with our audit listener to log slow ADF performance into the Oracle Database Cloud instance. Use it to generate some audit data, simply by navigating through the application and updating and inserting data.

The Red Samurai ADF Performance Audit tool and the standard ADF sample application Summit are deployed to Oracle Java Cloud:


This is the main screen of the Summit sample application (accessible here); I'm using it to track ADF performance. You can open this application, perform several actions and then check the performance dashboard to see if any slow actions were reported:


The Red Samurai ADF Performance Audit tool's main dashboard (accessible here) displays the various performance issues logged. The audited application is fairly basic, which is why I have set the performance thresholds to relatively low values - VO execution time to 10 milliseconds, large fetch to 50 rows and slow activation to 300 milliseconds:


You can select an issue from the list and check its details - Application Module and View Object names. SQL statements with bind variables are available for slow SQL executions. There are other very helpful features available, such as historical performance analysis and automatic application health calculation.

The Slow Activation tab displays slow activation events; this helps to locate the Application Modules with the slowest activations and possibly to eliminate them by tuning Application Module parameters:


The drill down displays large fetch and full scan issues for the selected Application Module. The most problematic VO instances are displayed first in the graph:


The same graph can be displayed in table view (very helpful for performance troubleshooting) with a special sorting method:


You can also view the weekly/monthly history of reported performance issues for an individual VO instance:


The overall system performance screen displays system statistics - queries, transactions, logged-in users, activations:


The system load summary can also be helpful; it allows you to understand the general system load and see the most frequent operations:

Oracle 12.1.0.2 and Data Warehouses

Rittman Mead Consulting - Mon, 2014-09-01 09:21

If you follow Blogs and Tweets from the Oracle community you won’t have missed hearing about the recent release of the first patch-set for Oracle 12c. With this release there are some significant pieces of new functionality that will be of interest to Data Warehouse DBAs and architects. The headline feature that most Oracle followers will know of is the new in-memory option. In my opinion this is a game-changer for how we design reporting architectures; it gives us an effective way to build operational reporting over the reference data architecture described by Mark Rittman a few weeks ago. Of course, the database team here at Rittman Mead have been rolling up our sleeves and getting into in-memory technology for quite a while now, Mark even featured in the official launch presentation by Larry Ellison with the now famous “so easy it’s boring” quote. Last week Mark published the first of our Rittman Mead in-memory articles, with the promise of more in-memory articles to come including my article for the next edition of UKOUG’s “Oracle Scene”.

However, the in-memory option is not the only new feature that is going to be a benefit to us in the BI/DW world. One of the new features I am going to describe is Exadata only, but the first one I am going to mention is generally available in the 12.1.0.2 database.

Typically, data warehouse queries are different from those seen in the OLTP world – in DW we tend to access a large number of rows and probably aggregate things up to answer some business question. Often we are not using indexes and instead scanning tables or table partitions is the norm. Usually, the data we need to aggregate is widely scattered across the table or partition. Data Warehouse queries often look at records that share a set of common attributes; we look at the sales for the ‘ACME’ widget or the value of items shipped to Arizona. For us there can be great advantage if data we use together is stored together, and this is where Attribute Clustering can play a part.

Attribute Clustering is usually configured on the table at DDL time and in effect controls the ordering of data inserted by DIRECT PATH operations. Oracle does not enforce this ordering for conventional inserts; this may not be an issue in data warehouses, as bulk-batch operations typically use APPEND inserts (which are direct path inserts) or partition operations, though it may be more of an issue with some of the real-time, conventional-path loading paradigms. In addition to direct path load operations, Attribute Clustering can also occur when you do ALTER TABLE MOVE type operations (this also includes operations such as PARTITION SPLIT). On the surface, Attribute Clustering sounds little different to using an ORDER BY on an append insert and hoping that Oracle actually stores the data where you expect it to. However, Attribute Clustering gives us two other possibilities in how we can order the data in the cluster.

Firstly, we can cluster on columns from JOINED dimension tables. For example, in a SALES DW we may have a sales fact with a product key at the SKU level, but we often join to the product dimension and report at the Product Category level. In this case we can cluster our sales fact table so that each product category appears in the same cluster. Suppose we have just opened a chain of supermarkets with a wide but uninspiring range of brands and products (see the tiny piece of our product dimension table below).

[Table excerpt: sample rows from the product dimension]

As you can see, our Product PK has no relationship at all to the type of product being sold. In our Kimball-style data warehouse we typically store the product key on the fact table and join to the product dimension to obtain all of the other product attributes and hierarchy members. This is essentially what we can do with join Attribute Clustering: in our example we can cluster our fact table on PRODUCT_CATEGORY so that all of the Laundry sales are physically close to each other in the fact table.

CREATE TABLE rm_sales (
  product_id      NUMBER NOT NULL,
  store_id        NUMBER NOT NULL,
  sales_date      DATE NOT NULL,
  loyalty_card_id NUMBER,
  quantity_sold   NUMBER(3) NOT NULL,
  value_sold      NUMBER(10,2) NOT NULL
)
CLUSTERING
  rm_sales JOIN products ON (rm_sales.product_id = products.product_pk)
  BY LINEAR ORDER (sales_date, product_category, store_id);

Notice we are clustering on a join to the product dimension table’s “product_category” column. We are also clustering on sales_date; this is especially important in the case of partitioned fact tables, so that the benefits of clustering align with the partitioning strategy. Nor are we restricted to just one join in our clustering: if we wanted to, we could also cluster our sales by store region, e.g. so that the Colorado laundry product sales are located in the same area of the sales table. To use join Attribute Clustering we need to define the PK/FK relationships between fact and dimension; however, it is always good practice to have those in place anyway, as they help the CBO so much with query plan evaluation.

Secondly, notice the BY LINEAR ORDER clause in the table DDL. Of the two ordering options, Linear Order is the most basic form of clustering; in this case we have our data structured so that all the items for a sales day are clustered together, within that cluster we order by product category, and those categories are in turn ordered by store_id. The other way we can cluster is BY INTERLEAVED ORDER; here, Oracle maps a combination of dimensional values to a single cluster value using a z-order curve fitting approach. This sounds complex but it ensures that items that are frequently queried together are co-located in the disk blocks in the storage.

Interleaved ordering is probably the best choice for data warehousing as it aligns well with how we access data in our queries. Although we could include all of the dimension keys in our ordering list, it is going to be of more benefit to include just a subset of dimensions; typically for retail I’d go with DATE (or something that correlates to the time-based partition key of the fact table), the product and the store. Of course we can again join to the dimension tables and cluster at higher hierarchy levels such as product category and store region. The Oracle 12c Data Warehousing guide gives some good advice, but you can’t go far wrong if you think about clustering items together that will be queried together.
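As a rough sketch (my own illustration, not from the original post – verify the exact syntax against the 12.1.0.2 SQL Language Reference), the same rm_sales table could be declared with interleaved ordering simply by swapping the ordering clause; column groups in an INTERLEAVED ORDER list are typically parenthesised:

CREATE TABLE rm_sales (
  product_id      NUMBER NOT NULL,
  store_id        NUMBER NOT NULL,
  sales_date      DATE NOT NULL,
  loyalty_card_id NUMBER,
  quantity_sold   NUMBER(3) NOT NULL,
  value_sold      NUMBER(10,2) NOT NULL
)
CLUSTERING
  rm_sales JOIN products ON (rm_sales.product_id = products.product_pk)
  -- interleaved (z-order) clustering rather than a strict linear sort
  BY INTERLEAVED ORDER ((sales_date), (product_category), (store_id));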

Clustering data can give us some advantages in performance. Better data compression and improved index range scans spring to mind, but to get the most benefit we should also look at another new feature, zone maps. Unlike Attribute Clustering, Zone Maps are Engineered Systems only. In a way they are similar to the storage indexes already found on Exadata, but they have some additional advantages; they are also somewhat different from the zone maps encountered in other DB vendors’ products such as Netezza.

In Exadata, a storage index can provide the maximum and minimum values encountered for a column in a storage cell. I say “can” as there is no guarantee that the range for a given column is held in the storage index. Zone Maps on the other hand will always provide maxima and minima for all of the columns specified at zone map creation. The zone map is oriented in terms of contiguous database blocks and is materialized so that it is physically persisted in the database and thus survives database restarts. Like materialized views, materialized zone maps can become stale and need to be maintained.

We can define a zone map on one or more table columns, and just like Attribute Clustering we may also create zone maps on table joins. As a table can only have one zone map it is important to include all of the columns you wish to track. Zone Maps are designed to work well with Attribute Clustering; in fact it is just a simple DDL statement to add a zone map to an attribute-clustered table so that the zone map tracks the same attributes as the clustering. This is where we get the major performance boost from attribute clustering: instead of looking at the whole table, the zone map tells us which ranges of database blocks contain data that matches our query predicates.
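To make that concrete, here is an illustrative sketch (mine, not from the original post – check the 12.1.0.2 documentation for the exact clauses). A zone map can be requested inline by appending WITH MATERIALIZED ZONEMAP to the clustering clause shown earlier, or created standalone on chosen columns:

-- Standalone zone map tracking min/max values of the listed columns
-- for each zone (range of blocks) of rm_sales
CREATE MATERIALIZED ZONEMAP rm_sales_zmap
  ON rm_sales (sales_date, store_id);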

Zone Maps with Attribute Clustering give us another powerful tool to boost DW performance on Exadata – we can do star queries without resorting to bitmap indexes, and we minimise IO when scanning fact tables as we need only look where we know the data to be. Exciting times!

Categories: BI & Warehousing

Execution Plans

Jonathan Lewis - Mon, 2014-09-01 01:40

This is the index to a series of articles I’ve been writing for redgate, published on their AllThingsOracle site, about generating and reading execution plans. I’ve completed a few articles that haven’t yet been published, but I’ll add their URLs when they’re available.

I don’t really know how many parts it’s going to end up as – there’s an awful lot that you could say about reading execution plans, even when you’re trying to cover just the basics; every time I’ve started writing an episode in the series it’s turned into two episodes.  I’ve delivered 10 parts to redgate so far; the active URLs below are the ones that are currently online.

Chapter 9 has just been published, so I’ve popped this catalogue to the top of the stack.  Episode 10 is written, but waiting for its final proof read.

It’s only going to take a couple more installments and I’ll have finished the basic introduction to execution plans – so I’m looking for some ideas of what people really need to know about reading execution plans. If you have any suggestions about what features, or functions, or patterns of execution plans need a more detailed explanation, please put them in writing in the comments and I’ll start work on addressing the commonest requirements.

 


Webcast - Oracle Multitenant Option

On their road to adoption of Oracle Database 12c, many are considering the Multitenant Database Architecture, where a multitenant container database can hold many pluggable databases, enabling...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Notes from a visit to Teradata

DBMS2 - Sun, 2014-08-31 03:17

I spent a day with Teradata in Rancho Bernardo last week. Most of what we discussed is confidential, but I think the non-confidential parts and my general impressions add up to enough for a post.

First, let’s catch up with some personnel gossip. So far as I can tell:

  • Scott Gnau runs most of Teradata’s development, product management, and product marketing, the big exception being that …
  • … Darryl McDonald runs the apps part (Aprimo and so on), and is no longer head of marketing.
  • Oliver Ratzesberger runs Teradata’s software development.
  • Jeff Carter has returned to his roots and runs the hardware part, in place of Carson Schmidt.
  • Aster founders Mayank Bawa and Tasso Argyros have left Teradata (perhaps some earn-out period ended).
  • Carson is temporarily running Aster development (in place of Mayank), and has some sort of evangelism role waiting after that.
  • With the acquisition of Hadapt, Teradata gets some attention from Dan Abadi. Also, they’re retaining Justin Borgman.

The biggest change in my general impressions about Teradata is that they’re having smart thoughts about the cloud. At least, Oliver is. All details are confidential, and I wouldn’t necessarily expect them to become clear even in October (which once again is the month for Teradata’s user conference). My main concern about all that is whether Teradata’s engineering team can successfully execute on Oliver’s directives. I’m optimistic, but I don’t have a lot of detail to support my good feelings.

In some quick-and-dirty positioning and sales qualification notes, which crystallize what we already knew before:

  • The Teradata 1xxx series is focused on cost-per-bit.
  • The Teradata 2xxx series is focused on cost-per-query. It is commonly Teradata’s “lead” product, at least for new customers.
  • The Teradata 6xxx series is supposed to be able to do “everything”.
  • The Teradata Aster “Discovery Analytics” platform is sold mainly to customers who have a specific high-value problem to solve. (Randy Lea gave me a nice round dollar number, but I won’t share it.) I like that approach, as it obviates much of the concern about “Wait — is this strategic for us long-term, given that we also have both Teradata database and Hadoop clusters?”

Also:

  • 1xxx and 2xxx systems are meant to be I/O-constrained. 6xxx systems are meant to be constrained mainly by CPU, but every system will be I/O-constrained at some point.
  • There is at least one example of a Very Well Known organization buying Teradata’s Hadoop-only appliance despite not otherwise being a Hadoop customer. Teradata concedes, however, that this is not a common occurrence.
  • Customers are increasingly using co-location rather than their own data centers. Many colo organizations charge more or less strictly by floor space. Hence, there’s a push for maximum processing density per rack, power density and weight be damned.

Speaking of not being CPU-constrained — I heard 7-10% as an estimate for typical Hadoop utilization, and also 10-15%. While I didn’t ask, I presume these figures assume traditional MapReduce types of Hadoop workloads. I’m not sure why these figures are yet lower than eBay’s long-ago estimates of Hadoop “parallel efficiency”.

Like Carson used to do, Jeff shared a variety of hardware and networking tidbits with me. In particular:

  • Jeff is confident in Moore’s Law continuing for at least 5 more years. (I think that’s a near-consensus; the 2020s, however, are another matter.)
  • Teradata still uses SAS rather than SATA for all disk (spinning or solid-state) controllers. They’re now seeing 600-700 MB/sec/device on SSDs (solid-state disks), up from 300-400.
  • SSD prices are down 60% over the past 6 months, vs. much slower declines previously.
  • Formerly a SanDisk/Pliant partisan, Teradata now thinks there are multiple vendors of good SSDs. (I’m not sure whether they’d be happy if I said which one they currently like best.)
  • Jeff foresees InfiniBand and Ethernet more or less merging. Right now Teradata is using a lot of 56 Gb/sec InfiniBand.

Since Oliver is now a Teradata mucky-muck, I asked about virtual data marts, an idea that he pretty much invented or at least popularized back in his eBay days. Comments included:

  • Teradata now calls them Data Labs.
  • Adoption is very high.
  • One major feature is “time boxing” — they expire after a period of time unless you renew them.
  • Analysis of virtual data mart usage is a good guide as to what you might want to add to your permanent data warehouse.

And I’ll stop here, although I hope that a couple more-focused posts will also eventually flow from the visit.

Categories: Other

Windows 7 error “key not valid for use in specified state”

The Oracle Instructor - Sun, 2014-08-31 00:37

When you see that error upon trying to install or upgrade something on your Windows 7 64-bit machine, chances are that it is caused by a Windows Security update that you need to uninstall. There is probably no point in messing around with the registry or the application that you want to upgrade. Instead, remove the Windows update KB2918614 like this:

Open the control panel, then click Windows Update

[Screenshot: Windows 7 control panel]

Click Update History and then Installed Updates:

[Screenshot: Update History]

Scroll down to Microsoft Windows and look for KB2918614 (I had already removed it before taking the screenshot):

[Screenshot: Uninstall KB2918614]

Finally, hide that update so you don’t get it installed again later on:

[Screenshot: Hide KB2918614]

I’m using a corporate notebook with automatic Windows security updates coming in from time to time and encountered that problem while trying to upgrade VirtualBox to version 4.3.12. It is not a VirtualBox issue, though; other installs or upgrades may fail for the same reason. For me, this was a serious problem, because I rely on virtual machines for many demonstrations. Kudos to the virtualbox.org forums! They helped me resolve that problem within a day. Thank you once again, guys! :-)


Tagged: #KB2918614
Categories: DBA Blogs

SLOB Failing To Generate AWR Reports? Red Hat Bug 919793!

Kevin Closson - Sat, 2014-08-30 12:40

This is a quick blog post to help folks that are testing with SLOB at high user (session) counts. The situation may arise where you are testing SLOB on a large configuration, with or without SQL*Net, and the SLOB driver (runit.sh) is failing to produce Automatic Workload Repository (a.k.a. AWR) reports.

This problem will generally be seen on RHEL 6 variants that implement the much maligned /etc/security/limits.d/90-nproc.conf method of preventing fork bombs. For more information on this configuration file please refer to Red Hat bug 919793.

If you are not getting AWR reports under the condition I describe then the problem is most likely due to 90-nproc.conf short-circuiting the ulimit(3) tuning you’ve established.

As an example remedy, please consider the following settings I recommended to my colleagues at VCE for performance testing of the vBlock Specialized System for High Performance Databases:

[Screenshot: recommended nproc / fork-bomb limit settings for the vBlock SS-HPDB]

 

 


Filed under: oracle

Handy pre-defined Oracle collections

Tony Andrews - Sat, 2014-08-30 05:09
Note to self: SYS.DBMS_DEBUG_VC2COLL is a handy pre-defined TABLE OF VARCHAR2(1000), and SYS.KU$_VCNT is a TABLE OF VARCHAR2(4000). Both are granted to PUBLIC. Thanks to Eddie Awad's blog for these.

Tony Andrews - http://tonyandrews.blogspot.com/2014/08/handy-pre-defined-oracle-collections.html
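As a quick illustration (my own example, not part of the original note), such a pre-defined collection can be used directly in a TABLE() expression without having to declare your own type:

-- Turn an ad-hoc list of strings into rows using the pre-defined SYS collection
SELECT column_value AS colour
FROM   TABLE(sys.dbms_debug_vc2coll('RED', 'GREEN', 'BLUE'));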

Simplified UI Rapid Development Kit Sends Oracle Partners Soaring in the Oracle Applications Cloud

Usable Apps - Sat, 2014-08-30 04:15

A glimpse into the action at the Oracle HCM Cloud Building Simplified UIs workshop with Hitachi Consulting by Georgia Price (@writeprecise)

Building stylish, modern, and simplified UIs just got a whole lot easier. That’s thanks to a new kit developed by the Oracle Applications User Experience (OAUX) team that’s now available for all from the Usable Apps website.

The Oracle Applications Cloud Simplified User Interface Rapid Development Kit is a collection of code samples from the Oracle Platform Technology Solutions (PTS) Code Accelerator Kit, coded page templates and Oracle ADF components, wireframe stencils and examples, coding best practices, and user experience design patterns and guidance. It’s designed to help Oracle partners and developers quickly build—in a matter of hours—simplified UIs for their Oracle Applications Cloud use cases using Oracle ADF page types and components.

Simplified UI eBook

A key component of the simplified UI Rapid Development Kit—the Simplified User Experience Design Patterns for the Oracle Applications Cloud Service eBook—in use. Pic: Sasha Boyko

The kit was put to the test last week by a group of Hitachi Consulting Services team members at an inaugural workshop on building simplified UIs for the Oracle HCM Cloud that was hosted by the OAUX team in the Oracle headquarters usability labs.

The results: impressive.

During the workshop, a broad range of participants—Hitachi Consulting VPs, senior managers, developers, designers, and architects—learned about the simplified UI design basics of glance, scan, commit and how to identify use cases for their business. Then, they collaboratively designed and built—from wireframe to actual code—three lightweight, tablet-first, intuitive solutions that simplify common, everyday HCM tasks.

Sona Manzo (@sonajmanzo), Hitachi Consulting VP leading the company’s Oracle HCM Cloud practice, said, “This workshop was a fantastic opportunity for our team to come together and use the new Rapid Development Kit’s tools and techniques to build actual solutions that meet specific customer use cases. We were able to take what was conceptual to a whole different level.”

Sona Manzo of Hitachi Consulting

Great leadership. Hitachi Consulting's Sona Manzo gets the whole team into the spirit of building simplified UIs. Pic: Martin Taylor

Workshop organizer and host Ultan O’Broin (@ultan), Director, OAUX, was pleased with the outcome as well: “That a key Oracle HCM Cloud solution partner came away with three wireframed or built simplified UIs and now understands what remains to be done to take that work to completion as a polished, deployed solution is a big win for all.”

Anna and Ultan Facilitate at the Workshop

OAUX Principal Interaction Designer Anna Budovsky (left) and Ultan O'Broin (right) facilitate Hitachi Consulting team members in working out solutions for customer use cases. Pics: Martin Taylor

Equally importantly, said Ultan, is what the OAUX team learned about “what such an Oracle partner needs to do or be able to do next to be successful.”

According to Misha Vaughan (@mishavaughan), Director of the OAUX Communications and Outreach team, folks are lining up to attend other building simplified UI workshops.

“The Oracle Applications Cloud partner community is catching wind of the new simplified UI rapid development kit. I'm delighted by the enthusiasm for the kit. If a partner is designing a cloud UI, they should be building with this kit,” said Misha.

Ultan isn’t surprised by the response. “The workshop and kit respond to a world that’s demanding easy ways to build superior, flexible, and yet simple enterprise user experiences using data in the cloud.”

The Oracle Applications Cloud Simplified User Interface Rapid Development Kit will now be featured at Oracle OpenWorld 2014 OAUX events and in OAUX communications and outreach worldwide. 

OSCON 2014: Complete Video Compilation

Surachart Opun - Sat, 2014-08-30 03:30
OSCON 2014 - Today, it's not only developers, system administrators or organizations that use open source; businesses have established themselves on open source as well, so you cannot ignore it. At OSCON, you'll encounter the open source ecosystem and can dig deep into the business of open source.

Five reasons to attend OSCON:

  • Get straight to the epicenter of all things open source and get better at what you do
  • Learn from the best and make valuable connections
  • Get solutions to your biggest challenges that you can apply today
  • See the latest developments, products, services, and career trends
  • Hear it first at OSCON

If you missed OSCON 2014, it's still a very good idea to catch up on it, which is why I mention the OSCON 2014: Complete Video Compilation. You can download these videos or view them through the HD player, and learn about open source from more than 350 presenters, including Matthew McCullough (GitHub), Leslie Hawthorn (Elasticsearch), James Turnbull (Docker), Andrei Alexandrescu (Facebook), Tim Berglund (DataStax), Paco Nathan (Zettacap), Kirsten Hunter (Akamai), Matt Ray (Chef Software, Inc.), and Damian Conway (Thoughtstream). The videos cover a wide range of tracks (Business, Cloud, Community, Computational Thinking, Databases & Datastores, Education, Emerging Languages, Geek Lifestyle, Java & JVM, JavaScript - HTML5 - Web, Mobile Platforms, Open Hardware, Operations & System Admin, Perl, PHP, Python, Security, Tools & Techniques, User Experience).

As mentioned, you will be able to learn from many tracks. O'Reilly has also improved video streaming and downloading, and the playback speed control and mobile viewing are very useful additions.

Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Geo What?

Bradley Brown - Fri, 2014-08-29 23:54
You might hear the term "geo something" from time to time.  Geospatial is the full word; it means data associated with a particular location.  A location in the world can be identified by a latitude and longitude pair.

Maybe you have heard the terms geo blocking and/or geo white lists.  Geo blocking means that you want to block a specific location in the world.  A geo white list means that you only want to allow specific locations (or areas) in the world to have access.  In other words, you either allow access to everyone unless they are blocked, or you deny access to everyone unless their location is explicitly allowed.

Geo Blocking Example
Let's say you have a deal where you have distribution rights for everywhere BUT the Philippines because that area is owned by someone else.  This is an example of geo blocking.  Everywhere in the world is allowed to buy your product EXCEPT someone located in the Philippines.

Geo White List Example
Let's say you have a deal where you have distribution rights for North America only (i.e. nowhere but Canada, United States and Mexico).  This is an example of a geo white list.  Everywhere in the world is NOT allowed to buy your product UNLESS someone is located in North America.
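A minimal sketch of the two rules in SQL (illustrative only - the country codes just mirror the examples above, and this is not InteliVideo's actual implementation):

-- Geo blocking: everyone may buy EXCEPT buyers in the listed countries
SELECT CASE WHEN :buyer_country IN ('PH') THEN 'DENY' ELSE 'ALLOW' END AS decision
FROM   dual;

-- Geo white list: nobody may buy EXCEPT buyers in the listed countries
SELECT CASE WHEN :buyer_country IN ('CA', 'US', 'MX') THEN 'ALLOW' ELSE 'DENY' END AS decision
FROM   dual;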

At InteliVideo we support both Geo Blocking and Geo White Lists.

Using #DB12c PDB as a Repository for #EM12c…. Not so fast!

DBASolved - Fri, 2014-08-29 14:51

Last year, I wrote a post on how to set up Oracle Enterprise Manager 12c (12.1.0.3) with Oracle Database 12c (12.1.0.1) and what you could expect.  At the time the configuration was not supported by Oracle and I only did it to see if it could be done.  If you would like to read that post it can be found here.

This past May (2014), I attended the Oracle Customer Advisory Board (CAB) for Oracle Enterprise Manager 12c. It was a great gathering of power users and customers of the Oracle Enterprise Manager product line with a lot of useful information being shared.  One thing that came out in the CAB is that Oracle Enterprise Manager 12c (12.1.0.4) is now fully supported against Oracle Database 12c (12.1.0.1 or later).  During the discussion, I asked if Pluggable Databases (PDBs) would be supported for the repository database; I was surprised when the initial answer was “Yes”.  I was excited to say the least and wanted to try it out.  The ability to house multiple monitoring tools in a single CDB via multiple PDBs would shrink the database footprint of many monitoring applications within organizations.

Unfortunately, Oracle and I don’t share the same outlook on that use case.  After doing more research and installations, and discussing the use case with a few friends inside and outside of Oracle, I can safely say that Pluggable Databases (PDBs) are not supported as an Oracle Management Repository (OMR).  Although a PDB is not supported as the OMR, Oracle Enterprise Manager 12c (12.1.0.4) can still be run with an Oracle Database 12c (12.1.0.1 or later) OMR as long as it is a traditional (non-CDB) database.

In summary, I’ve answered two (2) questions for you.

1. Are Pluggable Databases (PDBs) supported as Oracle Management Repositories for EM12c?  -> NO

2. Can an Oracle Database 12c (12.1.0.1 or later), in traditional (non-CDB) mode, be used as an Oracle Management Repository? -> YES
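If you want to confirm which flavour of 12c database the OMR is running in, the CDB column of V$DATABASE tells you (a quick check of my own, not from the original post):

-- Returns CDB = NO for a traditional (non-CDB) database, YES for a container database
SELECT name, cdb FROM v$database;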

In Image 1, you can see that I have EM12c (12.1.0.4) running with a DB12c repository.

Image 1:

12104_db12_oem.png

Enjoy!

about.me: http://about.me/dbasolved

 


Filed under: Database, OEM
Categories: DBA Blogs

Taking a Look at the Oracle Database 12c In-Memory Option

Rittman Mead Consulting - Fri, 2014-08-29 13:43

The In-Memory Option for Oracle Database 12c became available a few weeks ago with the 12.1.0.2 database patchset, adding column-store and in-memory capabilities to the Oracle Database. Unlike pure in-memory databases such as Oracle TimesTen, the in-memory option adds an in-memory column-store feature to the regular row-based storage in the Oracle database, creating in-memory copies of selected row-store tables in a compressed column-based storage format, with the whole process being automatic and enabled by issuing just a couple of commands – to the point where my feedback at the end of beta testing was that it was “almost boring” – said slightly tongue-in-cheek…


But of course adding in-memory capabilities to the Oracle database is anything but boring – whilst TimesTen on Exalytics has given us the ability to store aggregate tables and “hot data” tables in a dedicated in-memory database co-located with the BI mid-tier, TimesTen is another set of skills to learn and another product to manage. Moreover, you’ve got to somehow get the data you want to aggregate or otherwise cache out of your data warehouse and into TimesTen, and then you’ve got the even more difficult task of keeping that TimesTen dataset in-sync with your main data warehouse data. Most importantly though, even with the 2TB and 4TB versions of Exalytics many data warehouses won’t fit entirely into memory, so you’re going to end up with some of your data in TimesTen and some in your data warehouse, and with only the TimesTen dataset benefiting from in-memory speeds.

So what if you could enable the in-memory option on your data warehouse, and then just run it all in-memory there? To see how this worked, I thought it’d be interesting to take the flight delays dataset that comes with the latest OBIEE 11g v406 SampleApp, and copy that across to an Oracle 12.1.0.2.0 database to test it out with the in-memory option. This dataset comes with a set of dashboard pages and analyses and contains around 6m rows of data, and in SampleApp is accelerated by an aggregate schema, also stored in Oracle, that generally reduces query times down to just a few seconds.


So what I thought would be interesting to check would be, first, whether enabling the in-memory option sped up the dashboards, and second, whether having the in-memory option removed the need for the aggregate tables altogether. I was also interested to see whether putting the aggregate tables into memory improved their access time significantly, as anecdotally I’d heard that there wasn’t much clock-time difference between accessing aggregates in a TimesTen database vs. just creating them in the same data warehouse database as the source data. To set this all up I created the new 12.1.0.2.0 database, exported the BI_AIRLINES and BI_AIRLINES_AGG schemas out of the 12c database that comes with SampleApp v406, imported them into the same schema names in the new 12.1.0.2.0 instance on the new VM (rather than on the SampleApp VM), and then amended the connection pool details in the SampleApp RPD to point to this new, external Oracle 12.1.0.2.0 database instance. The 12.1.0.2.0 database at this point had the following memory allocation:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
 
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
 
Total System Global Area 6325010432 bytes
Fixed Size               2938448 bytes
Variable Size            1207962032 bytes
Database Buffers         5100273664 bytes
Redo Buffers             13836288 bytes
Database mounted.
Database opened.

So that I don’t end up comparing the larger database instance on my 12.1.0.2.0 VM with the smaller one the airlines data came from, I created my baseline response time measurements on this new VM with none of the in-memory features enabled, ran some of the dashboard pages and clicked on some of the prompt values – and the response time wasn’t bad, with just the odd analysis taking an excessive time to return. Mostly though, the results for each dashboard came back straight away, which is what you’d expect when summary tables have been deployed to speed up reporting.


Looking at the SQL being generated by the BI Server, you can quickly see why this is the case; the BI Server is using aggregate navigation and actually querying the BI_AIRLINES_AGG schema to return the aggregated results to the dashboard, as you can see from one of the SQL statements retrieved from the NQQuery.log file below:

select sum(T255906.Dep_Delay_00039B26) as c1,
     sum(T255906.Arr_Delay_00039B22) as c2,
     sum(T255906.Z_of_Fligh00039B28) as c3,
     substr(T255216.Carrier00039BA9 , 1, 25) as c4,
     T255216.Carrier00039BA9 as c5,
     T255216.Carrier_Co00039BAA as c6
from 
     BI_AIRLINES_AGG.SA_16_Dest00039D06 T255357,
     BI_AIRLINES_AGG.SA_Time_Mo00039CFB T255737,
     BI_AIRLINES_AGG.SA_31_Carr00039CEB T255216,
     BI_AIRLINES_AGG.FACT_AGG_OR_06M T255906
where  ( T255216.Carrier_Co00039BAA = T255906.Carrier_Co00039BAA and T255357.Dest_Airpo00039C2A = T255906.Dest_Airpo00039C2A and T255737.Dep_Month00039C07 = T255906.Dep_Month00039C07 and substr(T255216.Carrier00039BA9 , 1, 25) = 'SunFlower Airlines' and (T255357.Dest_Regio00039C31 in ('Midwest Region', 'Northeast Region', 'South Region', 'West Region')) and T255737.Month_of_Y00039C0F between 3 and 12 ) 
group by T255216.Carrier00039BA9, T255216.Carrier_Co00039BAA
order by c5, c6, c4

OBIEE will always use these aggregate tables if they’re available in the repository, so to enable me to test the reports with these aggregates disabled I borrowed the technique Robin introduced in his blog on testing aggregate navigation, and added a request variable prompt to the dashboard page that allows me to pass an INACTIVE_SCHEMAS value to the logical SQL queries issued for the dashboard analyses, and therefore tell the BI Server to ignore the aggregate schema and just use the detail-level BI_AIRLINES schema instead.
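For reference, the effect of that prompt is roughly the same as prefixing the analysis’ logical SQL with a request variable; a sketch of the idea looks like this (the subject area and column names below are made up for illustration, and the exact quoting of the schema name should be checked):

-- Tell the BI Server to skip the aggregate schema during aggregate navigation
SET VARIABLE INACTIVE_SCHEMAS='"BI_AIRLINES_AGG"';
SELECT "Time"."Dep Month", "Delays"."Arr Delay Minutes" FROM "Airlines Delays";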


I then tested the dashboards again, this time with the BI_AIRLINES_AGG schema disabled; now every dashboard took a while to display properly, with every analysis showing the “spinning clock” for twenty or thirty seconds at a time. Compare the response times to the ones from a moment ago when aggregates were enabled and you can see the difference.

[Chart: dashboard response times with and without the aggregate schema]

So let’s set up the in-memory option and see how it affects these two scenarios. The in-memory option for Oracle Database 12c 12.1.0.2.0 is installed by default as part of the core database, but doesn’t start working until you set a value for the INMEMORY_SIZE initialisation parameter – as you can see from the SQL*Plus output below, it’s currently set to zero, effectively disabling the feature:

SQL> conn / as sysdba
Connected.

SQL> show parameter INMEMORY

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
inmemory_clause_default              string
inmemory_force                       string      DEFAULT
inmemory_max_populate_servers        integer     0
inmemory_query                       string      ENABLE
inmemory_size                        big integer 0
inmemory_trickle_repopulate_servers_ integer     1
percent
optimizer_inmemory_aware             boolean     TRUE

The memory Oracle allocates to the in-memory option, via this initialisation parameter, comes out of the SGA, so before I set the INMEMORY_SIZE parameter it’d be a good idea to increase the SGA_TARGET value to accommodate the 1G I’m then going to assign to INMEMORY_SIZE. I do both from the SQL*Plus command prompt and then bounce the database to bring the new values into use:

SQL> alter system set inmemory_size = 1G scope=spfile;

System altered.

SQL> show parameter sga_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sga_target                           big integer 6032M
SQL> alter system set sga_target = 7032M scope=spfile;

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 7381975040 bytes
Fixed Size          2941480 bytes
Variable Size        1207963096 bytes
Database Buffers     5083496448 bytes
Redo Buffers           13832192 bytes
In-Memory Area       1073741824 bytes
Database mounted.
Database opened.

Now we’re at the point where we can enable the tables for in-memory access. Tables to be copied into memory can either be created like that at the start, or you can ALTER TABLE them afterwards and specify that they go into memory (or you can configure the in-memory settings at the tablespace level, or even specify particular columns or partitions to go into memory). The underlying data still gets stored row-wise on disk, but enabling a table for in-memory access tells the Oracle database to create column-store in-memory representations of the table and direct suitable queries to those copies, all the time in the background keeping the copy in-sync with the row-store base data.
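For reference, the finer-grained variants mentioned above look roughly like this (a hedged sketch with made-up partition and column names, not commands from the original post – check the 12.1.0.2 syntax before relying on it):

-- Default every segment created in a tablespace into the column store
ALTER TABLESPACE users DEFAULT INMEMORY;

-- Put just one partition of a table into memory (partition name is hypothetical)
ALTER TABLE performance MODIFY PARTITION perf_2014 INMEMORY;

-- Exclude a column we never filter or aggregate on (column name is hypothetical)
ALTER TABLE performance INMEMORY NO INMEMORY (tail_number);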

I’ll start by enabling all of the BI_AIRLINES schema for in-memory access, as the biggest pay-off would be if that meant we didn’t need to maintain the aggregate tables. After running a SELECT table_name FROM user_tables to list out the table names, I then run a script to enable all the tables for in-memory access, with the in-memory copies being populated immediately:

alter table AIRCRAFT_GROUP inmemory priority high;
alter table AIRCRAFT_TYPE inmemory priority high;
alter table AIRLINES_USER_DATA inmemory priority high;
alter table AIRLINE_ID inmemory priority high;
alter table CANCELLATION inmemory priority high;
alter table CARRIER_GROUP_NEW inmemory priority high;
alter table CARRIER_REGION inmemory priority high;
alter table DEPARBLK inmemory priority high;
alter table DISTANCE_GROUP_250 inmemory priority high;
alter table DOMESTIC_SEGMENT inmemory priority high;
alter table OBIEE_COUNTY_HIER inmemory priority high;
alter table OBIEE_GEO_AIRPORT_BRIDGE inmemory priority high;
alter table OBIEE_GEO_ORIG inmemory priority high;
alter table OBIEE_ROUTE inmemory priority high;
alter table OBIEE_TIME_DAY_D inmemory priority high;
alter table OBIEE_TIME_MTH_D inmemory priority high;
alter table ONTIME_DELAY_GROUPS inmemory priority high;
alter table PERFORMANCE inmemory priority high;
alter table PERFORMANCE_ENDECA_MV inmemory priority high;
alter table ROUTES_FOR_LINKS inmemory priority high;
alter table SCHEDULES inmemory priority high;
alter table SERVICE_CLASS inmemory priority high;
alter table UNIQUE_CARRIERS inmemory priority high;

Looking at USER_TABLES from this schema, I can see all of the tables I selected now marked for in-memory access, for immediate loading:

SQL> @display_table_inmem_details.sql
SQL> select table_name
  2  ,    inmemory
  3  ,    inmemory_priority
  4  from   user_tables
  5  /
 
TABLE_NAME           INMEMORY INMEMORY
-------------------- -------- --------
AIRCRAFT_GROUP       ENABLED  HIGH
UNIQUE_CARRIERS      ENABLED  HIGH
SERVICE_CLASS        ENABLED  HIGH
SCHEDULES            ENABLED  HIGH
ROUTES_FOR_LINKS     ENABLED  HIGH
PERFORMANCE          ENABLED  HIGH
ONTIME_DELAY_GROUPS  ENABLED  HIGH
OBIEE_TIME_MTH_D     ENABLED  HIGH
OBIEE_TIME_DAY_D     ENABLED  HIGH
OBIEE_ROUTE          ENABLED  HIGH
OBIEE_GEO_ORIG       ENABLED  HIGH
 
TABLE_NAME    INMEMORY INMEMORY
-------------------- -------- --------
OBIEE_GEO_AIRPORT_BR ENABLED  HIGH
IDGE
 
OBIEE_COUNTY_HIER    ENABLED  HIGH
DOMESTIC_SEGMENT     ENABLED  HIGH
DISTANCE_GROUP_250   ENABLED  HIGH
DEPARBLK             ENABLED  HIGH
CARRIER_REGION       ENABLED  HIGH
CARRIER_GROUP_NEW    ENABLED  HIGH
CANCELLATION         ENABLED  HIGH
AIRLINE_ID           ENABLED  HIGH
 
TABLE_NAME           INMEMORY INMEMORY
-------------------- -------- --------
AIRLINES_USER_DATA   ENABLED  HIGH
AIRLINES_PBLOB$      DISABLED
AIRLINES_PART$       DISABLED
AIRLINES_NODE_TZ$    DISABLED
AIRLINES_NODE$       DISABLED
AIRLINES_LINK_TZ$    DISABLED
AIRLINES_LINK_SCH$   DISABLED
AIRLINES_LINK$       DISABLED
AIRLINES_AIRPORT_TZ$ DISABLED
AIRCRAFT_TYPE        ENABLED  HIGH
 
30 rows selected.

And I can track the progress of the tables being copied into memory using the V$IM_SEGMENTS v$ view, like this:

SQL> @display_im_segments.sql
SQL> set echo on
SQL> set pages 200
SQL> col owner for a20
SQL> col name for a20
SQL> col status for a10
SQL> select v.owner
  2  ,      v.segment_name name
  3  ,      v.populate_status status
  4  from   v$im_segments v;

OWNER            NAME         STATUS
-------------------- -------------------- ----------
BI_AIRLINES      OBIEE_COUNTY_HIER    COMPLETED
BI_AIRLINES      PERFORMANCE      STARTED
BI_AIRLINES      UNIQUE_CARRIERS      COMPLETED
BI_AIRLINES      AIRLINES_LINK_TZ$    COMPLETED
BI_AIRLINES      OBIEE_TIME_MTH_D     COMPLETED
BI_AIRLINES      AIRLINES_LINK_SCH$   COMPLETED
BI_AIRLINES      OBIEE_ROUTE      COMPLETED
BI_AIRLINES      DOMESTIC_SEGMENT     COMPLETED
BI_AIRLINES      AIRLINES_LINK$   COMPLETED
BI_AIRLINES      AIRLINE_ID       COMPLETED
BI_AIRLINES      OBIEE_GEO_ORIG   COMPLETED
BI_AIRLINES      AIRLINES_NODE$   COMPLETED
BI_AIRLINES      OBIEE_GEO_AIRPORT_BR COMPLETED
             IDGE

BI_AIRLINES      AIRLINES_NODE_TZ$    COMPLETED
BI_AIRLINES      OBIEE_TIME_DAY_D     COMPLETED

Note that most of the tables went into memory immediately, but one (PERFORMANCE) is taking a while because it’s so big. Also note that not all tables are listed in the v$ view yet, as the database hasn’t got around to adding them into memory yet, or it might choose not to populate them if it feels the memory could be used better elsewhere. You can alter the priority of these in-memory copy processes if you want, and decide whether the copying happens immediately, or when the table is first scanned (accessed).
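For instance (my own illustration, not from the post), the population priority can be changed after the fact, and PRIORITY NONE defers population until the table is first scanned:

-- Populate this table ahead of everything else when the instance starts
ALTER TABLE performance INMEMORY PRIORITY CRITICAL;

-- Only populate this one when it is first scanned
ALTER TABLE schedules INMEMORY PRIORITY NONE;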

Running the dashboards again, with the request variable prompt set to disallow the aggregate schema, gave me the response times below – the in-memory queries were much faster than the row-based non in-memory ones, but they weren’t down to the response time of the dashboards right at the start, where all data is stored row-wise but we’ve got aggregate tables to speed up the queries (note I’ve reordered the bars so that the non in-memory queries with no aggregate tables are on the left of each series, as the slowest of all approaches).

[Chart: dashboard response times – no aggregates without in-memory, no aggregates with in-memory, and with aggregate tables]

Taking a look at one of the physical SQL queries for a cross-tab (hierarchical columns) analysis, you can see that in-memory table access is happening:

WITH 
SAWITH0 AS (select sum(T233937.ACTUALELAPSEDTIME) as c1,
     sum(T233937.ARRDELAYMINUTES) as c2,
     concat(concat(T233484.AIRPORT, ' - '), substr(T233484.TR_AIRPORT_NAME , instr(T233484.TR_AIRPORT_NAME , ': ') + 2 , 50)) as c3,
     T233820.D_NAME as c4,
     T233820.R_NAME as c5
from 
     BI_AIRLINES.OBIEE_GEO_AIRPORT_BRIDGE T233484 /* 10 GEO_AIRPORT_ORIGIN */ ,
     BI_AIRLINES.OBIEE_COUNTY_HIER T233820 /* 11 COUNTY_HIER_ORIGIN */ ,
     BI_AIRLINES.OBIEE_TIME_MTH_D T233732 /* 41 TIME MONTH */ ,
     BI_AIRLINES.PERFORMANCE T233937 /* 00 PERFORMANCE FULL */ 
where  ( T233484.AIRPORT = T233937.ORIGIN and T233484.STCTY_FIPS = T233820.SC_CODE and T233732.Cal_Month = T233937.MONTH and T233732.Cal_Qtr = T233937.QUARTER and T233732.Cal_Year = T233937.YEAR and T233732.Cal_Month between 6 and 12 and T233937.MONTH between 6 and 12 ) 
group by T233820.D_NAME, T233820.R_NAME, concat(concat(T233484.AIRPORT, ' - '), substr(T233484.TR_AIRPORT_NAME , instr(T233484.TR_AIRPORT_NAME , ': ') + 2 , 50))),
SAWITH1 AS (select sum(T233609.PASSENGERS) as c1,
     T233820.R_NAME as c2,
     T233820.D_NAME as c3,
     concat(concat(T233484.AIRPORT, ' - '), substr(T233484.TR_AIRPORT_NAME , instr(T233484.TR_AIRPORT_NAME , ': ') + 2 , 50)) as c4,
     sum(T233609.PASSENGERS_MILES) as c5
from 
     BI_AIRLINES.OBIEE_GEO_AIRPORT_BRIDGE T233484 /* 10 GEO_AIRPORT_ORIGIN */ ,
     BI_AIRLINES.OBIEE_COUNTY_HIER T233820 /* 11 COUNTY_HIER_ORIGIN */ ,
     BI_AIRLINES.OBIEE_TIME_MTH_D T233732 /* 41 TIME MONTH */ ,
     BI_AIRLINES.DOMESTIC_SEGMENT T233609 /* 01 DOMESTIC Segment */ 
where  ( T233484.AIRPORT = T233609.ORIGIN and T233484.STCTY_FIPS = T233820.SC_CODE and T233609.MONTH = T233732.Cal_Month and T233609.QUARTER = T233732.Cal_Qtr and T233609.YEAR = T233732.Cal_Year and T233609.MONTH between 6 and 12 and T233732.Cal_Month between 6 and 12 ) 
group by T233820.D_NAME, T233820.R_NAME, concat(concat(T233484.AIRPORT, ' - '), substr(T233484.TR_AIRPORT_NAME , instr(T233484.TR_AIRPORT_NAME , ': ') + 2 , 50)))
select 2 as c1,
     case  when D1.c3 is not null then D1.c3 when D2.c4 is not null then D2.c4 end  as c2,
     case  when D1.c4 is not null then D1.c4 when D2.c3 is not null then D2.c3 end  as c3,
     case  when D1.c5 is not null then D1.c5 when D2.c2 is not null then D2.c2 end  as c4,
     'All Orig Airports' as c5,
     1 as c6,
     case  when D1.c4 is not null then D1.c4 when D2.c3 is not null then D2.c3 end  as c7,
     case  when D1.c5 is not null then D1.c5 when D2.c2 is not null then D2.c2 end  as c8,
     cast(D1.c2 as  DOUBLE PRECISION  ) / nullif( D1.c1, 0) * 100 as c9,
     D2.c5 as c10,
     D2.c1 as c14
from 
     SAWITH0 D1 full outer join SAWITH1 D2 On D1.c3 = D2.c4 and  SYS_OP_MAP_NONNULL(D1.c4) = SYS_OP_MAP_NONNULL(D2.c3) 
order by c4, c3, c2
SQL> @complex_query_explain.sql
 
Explained.

SQL> set lines 200
SQL> set pages 0
SQL> select * from table(dbms_xplan.display);
Plan hash value: 3097908901

----------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name             | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |              |  2673K|  1392M|   |   112K  (1)| 00:00:05 |    |      |        |
|   1 |  PX COORDINATOR                    |              |   |   |   |        |      |    |      |        |
|   2 |   PX SEND QC (ORDER)                   | :TQ10006         |  2673K|  1392M|   |   112K  (1)| 00:00:05 |  Q1,06 | P->S | QC (ORDER) |
|   3 |    SORT ORDER BY                   |              |  2673K|  1392M|  1492M|   112K  (1)| 00:00:05 |  Q1,06 | PCWP |        |
|   4 |     PX RECEIVE                     |              |  2673K|  1392M|   | 26819   (1)| 00:00:02 |  Q1,06 | PCWP |        |
|   5 |      PX SEND RANGE                 | :TQ10005         |  2673K|  1392M|   | 26819   (1)| 00:00:02 |  Q1,05 | P->P | RANGE      |
|   6 |       VIEW                     | VW_FOJ_0         |  2673K|  1392M|   | 26819   (1)| 00:00:02 |  Q1,05 | PCWP |        |
|*  7 |        HASH JOIN FULL OUTER BUFFERED           |              |  2673K|  1392M|   | 26819   (1)| 00:00:02 |  Q1,05 | PCWP |        |
|   8 |     PX RECEIVE                 |              |   136 | 37128 |   |   144   (7)| 00:00:01 |  Q1,05 | PCWP |        |
|   9 |      PX SEND HASH                  | :TQ10003         |   136 | 37128 |   |   144   (7)| 00:00:01 |  Q1,03 | S->P | HASH       |
|  10 |       PX SELECTOR                  |              |   |   |   |        |      |  Q1,03 | SCWC |        |
|  11 |        VIEW                    |              |   136 | 37128 |   |   144   (7)| 00:00:01 |  Q1,03 | SCWC |        |
|  12 |         HASH GROUP BY              |              |   136 | 17408 |   |   144   (7)| 00:00:01 |  Q1,03 | SCWC |        |
|* 13 |          HASH JOIN                 |              |   136 | 17408 |   |   143   (6)| 00:00:01 |  Q1,03 | SCWC |        |
|  14 |           JOIN FILTER CREATE           | :BF0000          |   136 | 11288 |   |   142   (6)| 00:00:01 |  Q1,03 | PCWP |        |
|* 15 |            HASH JOIN               |              |   136 | 11288 |   |   142   (6)| 00:00:01 |  Q1,03 | SCWC |        |
|  16 |         JOIN FILTER CREATE         | :BF0001          |   136 |  4896 |   |   142   (6)| 00:00:01 |  Q1,03 | PCWP |        |
|* 17 |          HASH JOIN             |              |   136 |  4896 |   |   142   (6)| 00:00:01 |  Q1,03 | SCWC |        |
|* 18 |           TABLE ACCESS INMEMORY FULL   | OBIEE_TIME_MTH_D     |   161 |  1610 |   | 1   (0)| 00:00:01 |  Q1,03 | SCWP |        |
|* 19 |           TABLE ACCESS INMEMORY FULL   | DOMESTIC_SEGMENT     |   771 | 20046 |   |   141   (6)| 00:00:01 |  Q1,03 | SCWP |        |
|  20 |         JOIN FILTER USE            | :BF0001          |  1787 | 83989 |   | 1   (0)| 00:00:01 |  Q1,03 | PCWP |        |
|* 21 |          TABLE ACCESS INMEMORY FULL    | OBIEE_GEO_AIRPORT_BRIDGE |  1787 | 83989 |   | 1   (0)| 00:00:01 |  Q1,03 | SCWP |        |
|  22 |           JOIN FILTER USE              | :BF0000          |  3221 |   141K|   | 1   (0)| 00:00:01 |  Q1,03 | PCWP |        |
|* 23 |            TABLE ACCESS INMEMORY FULL      | OBIEE_COUNTY_HIER    |  3221 |   141K|   | 1   (0)| 00:00:01 |  Q1,03 | SCWP |        |
|  24 |     PX RECEIVE                 |              |  2255K|   587M|   | 26673   (1)| 00:00:02 |  Q1,05 | PCWP |        |
|  25 |      PX SEND HASH                  | :TQ10004         |  2255K|   587M|   | 26673   (1)| 00:00:02 |  Q1,04 | P->P | HASH       |
|  26 |       VIEW                     |              |  2255K|   587M|   | 26673   (1)| 00:00:02 |  Q1,04 | PCWP |        |
|  27 |        HASH GROUP BY               |              |  2255K|   264M|   489M| 26673   (1)| 00:00:02 |  Q1,04 | PCWP |        |
|  28 |         PX RECEIVE                 |              |  2255K|   264M|   | 26673   (1)| 00:00:02 |  Q1,04 | PCWP |        |
|  29 |          PX SEND HASH              | :TQ10002         |  2255K|   264M|   | 26673   (1)| 00:00:02 |  Q1,02 | P->P | HASH       |
|  30 |           HASH GROUP BY            |              |  2255K|   264M|   489M| 26673   (1)| 00:00:02 |  Q1,02 | PCWP |        |
|* 31 |            HASH JOIN               |              |  3761K|   441M|   |   841   (9)| 00:00:01 |  Q1,02 | PCWP |        |
|  32 |         PX RECEIVE             |              |   161 |  1610 |   | 1   (0)| 00:00:01 |  Q1,02 | PCWP |        |
|  33 |          PX SEND BROADCAST         | :TQ10000         |   161 |  1610 |   | 1   (0)| 00:00:01 |  Q1,00 | S->P | BROADCAST  |
|  34 |           PX SELECTOR              |              |   |   |   |        |      |  Q1,00 | SCWC |        |
|* 35 |            TABLE ACCESS INMEMORY FULL  | OBIEE_TIME_MTH_D     |   161 |  1610 |   | 1   (0)| 00:00:01 |  Q1,00 | SCWP |        |
|* 36 |         HASH JOIN              |              |  3773K|   406M|   |   838   (9)| 00:00:01 |  Q1,02 | PCWP |        |
|  37 |          PX RECEIVE            |              |  1787 |   160K|   | 2   (0)| 00:00:01 |  Q1,02 | PCWP |        |
|  38 |           PX SEND BROADCAST        | :TQ10001         |  1787 |   160K|   | 2   (0)| 00:00:01 |  Q1,01 | S->P | BROADCAST  |
|  39 |            PX SELECTOR             |              |   |   |   |        |      |  Q1,01 | SCWC |        |
|* 40 |             HASH JOIN              |              |  1787 |   160K|   | 2   (0)| 00:00:01 |  Q1,01 | SCWC |        |
|  41 |              TABLE ACCESS INMEMORY FULL| OBIEE_GEO_AIRPORT_BRIDGE |  1787 | 83989 |   | 1   (0)| 00:00:01 |  Q1,01 | SCWP |        |
|  42 |              TABLE ACCESS INMEMORY FULL| OBIEE_COUNTY_HIER    |  3221 |   141K|   | 1   (0)| 00:00:01 |  Q1,01 | SCWP |        |
|  43 |          PX BLOCK ITERATOR         |              |  3773K|    75M|   |   834   (9)| 00:00:01 |  Q1,02 | PCWC |        |
|* 44 |           TABLE ACCESS INMEMORY FULL   | PERFORMANCE          |  3773K|    75M|   |   834   (9)| 00:00:01 |  Q1,02 | PCWP |        |
----------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("D1"."C3"="D2"."C4" AND SYS_OP_MAP_NONNULL("D1"."C4")=SYS_OP_MAP_NONNULL("D2"."C3"))
  13 - access("T233484"."STCTY_FIPS"="T233820"."SC_CODE")
  15 - access("T233484"."AIRPORT"="T233609"."ORIGIN")
  17 - access("T233732"."CAL_MONTH"=TO_NUMBER("T233609"."MONTH") AND "T233732"."CAL_QTR"=TO_NUMBER("T233609"."QUARTER") AND
          "T233732"."CAL_YEAR"=TO_NUMBER("T233609"."YEAR"))
  18 - inmemory("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
       filter("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
  19 - inmemory(TO_NUMBER("T233609"."MONTH")>=6 AND TO_NUMBER("T233609"."MONTH")<=12)
       filter(TO_NUMBER("T233609"."MONTH")>=6 AND TO_NUMBER("T233609"."MONTH")<=12)
  21 - inmemory(SYS_OP_BLOOM_FILTER(:BF0001,"T233484"."AIRPORT"))
       filter(SYS_OP_BLOOM_FILTER(:BF0001,"T233484"."AIRPORT"))
  23 - inmemory(SYS_OP_BLOOM_FILTER(:BF0000,"T233820"."SC_CODE"))
       filter(SYS_OP_BLOOM_FILTER(:BF0000,"T233820"."SC_CODE"))
  31 - access("T233732"."CAL_MONTH"="T233937"."MONTH" AND "T233732"."CAL_QTR"="T233937"."QUARTER" AND "T233732"."CAL_YEAR"="T233937"."YEAR")
  35 - inmemory("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
       filter("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
  36 - access("T233484"."AIRPORT"="T233937"."ORIGIN")
  40 - access("T233484"."STCTY_FIPS"="T233820"."SC_CODE")
  44 - inmemory("T233937"."MONTH">=6 AND "T233937"."MONTH"<=12)
       filter("T233937"."MONTH">=6 AND "T233937"."MONTH"<=12)

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - Degree of Parallelism is 2 because of table property

80 rows selected.

Indeed, looking at the queries the BI Server is sending to the database it’s not too surprising the in-memory difference in this case wasn’t too dramatic. According to the docs (emphasis mine):

Storing a database object in the IM column store can improve performance significantly for the following types of operations performed on the database object 

  • A query that scans a large number of rows and applies filters that use operators such as the following: =, <, >, and IN
  • A query that selects a small number of columns from a table or materialized view with a large number of columns, such as a query that selects five columns from a table with 100 columns
  • A query that joins a small table to a large table
  • A query that aggregates data

The IM column store does not improve performance for the following types of operations:

  • Queries with complex predicates
  • Queries that select a large number of columns
  • Queries that return a large number of rows
  • Queries with multiple large table joins

and our query certainly has complex predicates, returns a fair few rows, and has lots of large-table joins. Taking a simpler query, the sort you'd be more likely to write yourself when querying a data warehouse, you can see the in-memory table access being used again, but with a much simpler and cheaper explain plan:

SQL> @inmem_explain.sql
SQL> set echo on
SQL> explain plan for
  2  select /*+ INMEMORY */ sum(T233937.ACTUALELAPSEDTIME) as c1,
  3   sum(T233937.WEATHERDELAY) as c2,
  4   sum(T233937.SECURITYDELAY) as c3,
  5   sum(T233937.NASDELAY) as c4,
  6   sum(T233937.LATEAIRCRAFTDELAY) as c5,
  7   sum(T233937.ARRDELAYMINUTES) as c6,
  8   sum(T233937.CARRIERDELAY) as c7,
  9   sum(nvl(case when T233937.CANCELLED < 1 then T233937.FLIGHTS end, 0)) as c8,
 10   T233820.D_NAME as c9
 11  from
 12   BI_AIRLINES.OBIEE_GEO_AIRPORT_BRIDGE T233484 /* 10 GEO_AIRPORT_ORIGIN */ ,
 13   BI_AIRLINES.OBIEE_COUNTY_HIER T233820 /* 11 COUNTY_HIER_ORIGIN */ ,
 14   BI_AIRLINES.OBIEE_GEO_AIRPORT_BRIDGE T233497 /* 12 GEO_AIPORT_DEST */ ,
 15   BI_AIRLINES.OBIEE_COUNTY_HIER T233831 /* 13 COUNTY_HIER_DEST */ ,
 16   BI_AIRLINES.OBIEE_TIME_MTH_D T233732 /* 41 TIME MONTH */ ,
 17   BI_AIRLINES.DISTANCE_GROUP_250 T233594 /* 19 DISTANCE_GROUP_250 */ ,
 18   BI_AIRLINES.PERFORMANCE T233937 /* 00 PERFORMANCE FULL */
 19  where  ( T233484.AIRPORT = T233937.ORIGIN and T233484.STCTY_FIPS = T233820.SC_CODE and T233497.AIRPORT = T233937.DEST and T233497.STCTY_FIPS = T233831.SC_CODE and T233594.DESCRIPTION = '1000-1249 Miles' and T233594.CODE = T233937.DISTANCEGROUP and T233732.Cal_Month = T233937.MONTH and T233732.Cal_Qtr = T233937.QUARTER and T233732.Cal_Year = T233937.YEAR and T233831.R_NAME = 'Northeast Region' and T233732.Cal_Month between 6 and 12 and T233937.MONTH between 6 and 12 )
 20  group by T233820.D_NAME
 21  order by c9
 22  /
 
Explained.
 
SQL> set lines 300
SQL> set pages 0
SQL> select * from table(dbms_xplan.display);
Plan hash value: 3055743864

-----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name             | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                  |     9 |  1314 |   883  (13)| 00:00:01 |    |      |        |
|   1 |  PX COORDINATOR                 |                  |       |       |        |          |    |      |        |
|   2 |   PX SEND QC (ORDER)                | :TQ10006             |     9 |  1314 |   883  (13)| 00:00:01 |  Q1,06 | P->S | QC (ORDER) |
|   3 |    SORT GROUP BY                |                  |     9 |  1314 |   883  (13)| 00:00:01 |  Q1,06 | PCWP |        |
|   4 |     PX RECEIVE                  |                  |     9 |  1314 |   883  (13)| 00:00:01 |  Q1,06 | PCWP |        |
|   5 |      PX SEND RANGE              | :TQ10005             |     9 |  1314 |   883  (13)| 00:00:01 |  Q1,05 | P->P | RANGE       |
|   6 |       HASH GROUP BY             |                  |     9 |  1314 |   883  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|*  7 |        HASH JOIN                |                  | 60775 |  8665K|   882  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|   8 |     PX RECEIVE              |                  |  3221 | 99851 |     1   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|   9 |      PX SEND BROADCAST          | :TQ10000             |  3221 | 99851 |     1   (0)| 00:00:01 |  Q1,00 | S->P | BROADCAST  |
|  10 |       PX SELECTOR               |                  |       |       |        |          |  Q1,00 | SCWC |        |
|  11 |        TABLE ACCESS INMEMORY FULL       | OBIEE_COUNTY_HIER        |  3221 | 99851 |     1   (0)| 00:00:01 |  Q1,00 | SCWP |        |
|* 12 |     HASH JOIN               |                  | 60775 |  6825K|   881  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|  13 |      PX RECEIVE             |                  |  1787 | 17870 |     1   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|  14 |       PX SEND BROADCAST         | :TQ10001             |  1787 | 17870 |     1   (0)| 00:00:01 |  Q1,01 | S->P | BROADCAST  |
|  15 |        PX SELECTOR              |                  |       |       |        |          |  Q1,01 | SCWC |        |
|  16 |         TABLE ACCESS INMEMORY FULL      | OBIEE_GEO_AIRPORT_BRIDGE |  1787 | 17870 |     1   (0)| 00:00:01 |  Q1,01 | SCWP |        |
|* 17 |      HASH JOIN              |                  | 60775 |  6231K|   880  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|  18 |       PX RECEIVE                |                  |   161 |  1610 |     1   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|  19 |        PX SEND BROADCAST            | :TQ10002             |   161 |  1610 |     1   (0)| 00:00:01 |  Q1,02 | S->P | BROADCAST  |
|  20 |         PX SELECTOR             |                  |       |       |        |          |  Q1,02 | SCWC |        |
|* 21 |          TABLE ACCESS INMEMORY FULL     | OBIEE_TIME_MTH_D         |   161 |  1610 |     1   (0)| 00:00:01 |  Q1,02 | SCWP |        |
|* 22 |       HASH JOIN             |                  | 60964 |  5655K|   879  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|  23 |        JOIN FILTER CREATE           | :BF0000              |     1 |    19 |     1   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|  24 |         PX RECEIVE              |                  |     1 |    19 |     1   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|  25 |          PX SEND BROADCAST          | :TQ10003             |     1 |    19 |     1   (0)| 00:00:01 |  Q1,03 | S->P | BROADCAST  |
|  26 |           PX SELECTOR           |                  |       |       |        |          |  Q1,03 | SCWC |        |
|* 27 |            TABLE ACCESS INMEMORY FULL   | DISTANCE_GROUP_250       |     1 |    19 |     1   (0)| 00:00:01 |  Q1,03 | SCWP |        |
|* 28 |        HASH JOIN                |                  |   670K|    48M|   878  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|  29 |         JOIN FILTER CREATE          | :BF0001              |   318 |  9540 |     2   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|  30 |          PX RECEIVE             |                  |   318 |  9540 |     2   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|  31 |           PX SEND BROADCAST         | :TQ10004             |   318 |  9540 |     2   (0)| 00:00:01 |  Q1,04 | S->P | BROADCAST  |
|  32 |            PX SELECTOR          |                  |       |       |        |          |  Q1,04 | SCWC |        |
|* 33 |         HASH JOIN           |                  |   318 |  9540 |     2   (0)| 00:00:01 |  Q1,04 | SCWC |        |
|  34 |          JOIN FILTER CREATE     | :BF0002              |   217 |  4340 |     1   (0)| 00:00:01 |  Q1,04 | PCWP |        |
|* 35 |           TABLE ACCESS INMEMORY FULL| OBIEE_COUNTY_HIER        |   217 |  4340 |     1   (0)| 00:00:01 |  Q1,04 | SCWP |        |
|  36 |          JOIN FILTER USE        | :BF0002              |  1787 | 17870 |     1   (0)| 00:00:01 |  Q1,04 | PCWP |        |
|* 37 |           TABLE ACCESS INMEMORY FULL| OBIEE_GEO_AIRPORT_BRIDGE |  1787 | 17870 |     1   (0)| 00:00:01 |  Q1,04 | SCWP |        |
|  38 |         JOIN FILTER USE         | :BF0000              |  3773K|   165M|   874  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|  39 |          JOIN FILTER USE            | :BF0001              |  3773K|   165M|   874  (13)| 00:00:01 |  Q1,05 | PCWP |        |
|  40 |           PX BLOCK ITERATOR         |                  |  3773K|   165M|   874  (13)| 00:00:01 |  Q1,05 | PCWC |        |
|* 41 |            TABLE ACCESS INMEMORY FULL   | PERFORMANCE          |  3773K|   165M|   874  (13)| 00:00:01 |  Q1,05 | PCWP |        |
-----------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("T233484"."STCTY_FIPS"="T233820"."SC_CODE")
  12 - access("T233484"."AIRPORT"="T233937"."ORIGIN")
  17 - access("T233732"."CAL_MONTH"="T233937"."MONTH" AND "T233732"."CAL_QTR"="T233937"."QUARTER" AND "T233732"."CAL_YEAR"="T233937"."YEAR")
  21 - inmemory("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
       filter("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
  22 - access("T233594"."CODE"="T233937"."DISTANCEGROUP")
  27 - inmemory("T233594"."DESCRIPTION"='1000-1249 Miles')
       filter("T233594"."DESCRIPTION"='1000-1249 Miles')
  28 - access("T233497"."AIRPORT"="T233937"."DEST")
  33 - access("T233497"."STCTY_FIPS"="T233831"."SC_CODE")
  35 - inmemory("T233831"."R_NAME"='Northeast Region')
       filter("T233831"."R_NAME"='Northeast Region')
  37 - inmemory(SYS_OP_BLOOM_FILTER(:BF0002,"T233497"."STCTY_FIPS"))
       filter(SYS_OP_BLOOM_FILTER(:BF0002,"T233497"."STCTY_FIPS"))
  41 - inmemory("T233937"."MONTH">=6 AND "T233937"."MONTH"<=12 AND SYS_OP_BLOOM_FILTER_LIST(SYS_OP_BLOOM_FILTER(:BF0001,"T233937"."DEST"),SYS
          _OP_BLOOM_FILTER(:BF0000,"T233937"."DISTANCEGROUP")))
       filter("T233937"."MONTH">=6 AND "T233937"."MONTH"<=12 AND SYS_OP_BLOOM_FILTER_LIST(SYS_OP_BLOOM_FILTER(:BF0001,"T233937"."DEST"),SYS_O
          P_BLOOM_FILTER(:BF0000,"T233937"."DISTANCEGROUP")))

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - Degree of Parallelism is 2 because of table property

75 rows selected.

If I then turn off the in-memory feature and regenerate the execution plan, you can see that without in-memory the plan is around 5x as expensive:

SQL> alter system set INMEMORY_SIZE = 0 scope = spfile;

System altered.

SQL> shutdown immediate;
ORA-01097: cannot shutdown while in a transaction - commit or rollback first
SQL> rollback;

Rollback complete.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 7398752256 bytes
Fixed Size          2941528 bytes
Variable Size        1056968104 bytes
Database Buffers     6325010432 bytes
Redo Buffers           13832192 bytes
Database mounted.
Database opened.
SQL> @noinmem_explain.sql

Explained.

SQL> set lines 300
SQL> set pages 0
SQL> select * from table(dbms_xplan.display);
Plan hash value: 2990499928

--------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name              | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |               |     9 |  1341 |  4086   (1)| 00:00:01 |        |      |        |
|   1 |  TEMP TABLE TRANSFORMATION                |               |       |       |        |          |        |      |        |
|   2 |   LOAD AS SELECT                      | SYS_TEMP_0FD9D6605_275335 |       |       |        |          |        |      |        |
|*  3 |    HASH JOIN                          |               |   318 |  9540 |    22   (0)| 00:00:01 |        |      |        |
|*  4 |     TABLE ACCESS FULL                     | OBIEE_COUNTY_HIER         |   217 |  4340 |    13   (0)| 00:00:01 |        |      |        |
|   5 |     TABLE ACCESS FULL                     | OBIEE_GEO_AIRPORT_BRIDGE  |  1787 | 17870 |     9   (0)| 00:00:01 |        |      |        |
|   6 |   PX COORDINATOR                      |               |       |       |        |          |        |      |        |
|   7 |    PX SEND QC (ORDER)                     | :TQ10008              |     9 |  1341 |  4086   (1)| 00:00:01 |  Q1,08 | P->S | QC (ORDER) |
|   8 |     SORT GROUP BY                     |               |     9 |  1341 |  4086   (1)| 00:00:01 |  Q1,08 | PCWP |        |
|   9 |      PX RECEIVE                       |               |     9 |  1341 |  4086   (1)| 00:00:01 |  Q1,08 | PCWP |        |
|  10 |       PX SEND RANGE                   | :TQ10007              |     9 |  1341 |  4086   (1)| 00:00:01 |  Q1,07 | P->P | RANGE       |
|  11 |        HASH GROUP BY                      |               |     9 |  1341 |  4086   (1)| 00:00:01 |  Q1,07 | PCWP |        |
|* 12 |     HASH JOIN                     |               |   281 | 41869 |  4085   (1)| 00:00:01 |  Q1,07 | PCWP |        |
|  13 |      PX RECEIVE                   |               |   281 | 33158 |  4072   (1)| 00:00:01 |  Q1,07 | PCWP |        |
|  14 |       PX SEND HYBRID HASH                 | :TQ10005              |   281 | 33158 |  4072   (1)| 00:00:01 |  Q1,05 | P->P | HYBRID HASH|
|  15 |        STATISTICS COLLECTOR               |               |       |       |        |          |  Q1,05 | PCWC |        |
|* 16 |         HASH JOIN BUFFERED                |               |   281 | 33158 |  4072   (1)| 00:00:01 |  Q1,05 | PCWP |        |
|  17 |          VIEW                     | VW_GBC_29             |   281 | 30348 |  4063   (1)| 00:00:01 |  Q1,05 | PCWP |        |
|  18 |           HASH GROUP BY               |               |   281 | 79523 |  4041   (1)| 00:00:01 |  Q1,05 | PCWP |        |
|  19 |            PX RECEIVE                 |               |   281 | 79523 |  4041   (1)| 00:00:01 |  Q1,05 | PCWP |        |
|  20 |         PX SEND HASH                  | :TQ10003              |   281 | 79523 |  4041   (1)| 00:00:01 |  Q1,03 | P->P | HASH    |
|  21 |          HASH GROUP BY                |               |   281 | 79523 |  4041   (1)| 00:00:01 |  Q1,03 | PCWP |        |
|* 22 |           HASH JOIN               |               | 60853 |    16M|  4039   (1)| 00:00:01 |  Q1,03 | PCWP |        |
|  23 |            BUFFER SORT                |               |       |       |        |          |  Q1,03 | PCWC |        |
|  24 |             PX RECEIVE                |               |   318 |  1272 |     2   (0)| 00:00:01 |  Q1,03 | PCWP |        |
|  25 |              PX SEND BROADCAST            | :TQ10000              |   318 |  1272 |     2   (0)| 00:00:01 |        | S->P | BROADCAST  |
|  26 |               TABLE ACCESS FULL           | SYS_TEMP_0FD9D6605_275335 |   318 |  1272 |     2   (0)| 00:00:01 |        |      |        |
|* 27 |            HASH JOIN                  |               | 60853 |    16M|  4037   (1)| 00:00:01 |  Q1,03 | PCWP |        |
|  28 |             PX RECEIVE                |               |   160 |  4640 |     4   (0)| 00:00:01 |  Q1,03 | PCWP |        |
|  29 |              PX SEND BROADCAST            | :TQ10002              |   160 |  4640 |     4   (0)| 00:00:01 |  Q1,02 | S->P | BROADCAST  |
|  30 |               PX SELECTOR             |               |       |       |        |          |  Q1,02 | SCWC |        |
|  31 |                MERGE JOIN CARTESIAN       |               |   160 |  4640 |     4   (0)| 00:00:01 |  Q1,02 | SCWC |        |
|* 32 |             VIEW                  | index$_join$_006          |     1 |    19 |     2   (0)| 00:00:01 |  Q1,02 | SCWC |        |
|* 33 |              HASH JOIN            |               |       |       |        |          |  Q1,02 | SCWC |        |
|  34 |               BITMAP CONVERSION TO ROWIDS     |               |     1 |    19 |     1   (0)| 00:00:01 |  Q1,02 | SCWC |        |
|* 35 |                BITMAP INDEX SINGLE VALUE      | M_INDEX32             |       |       |        |          |  Q1,02 | SCWP |        |
|  36 |               BITMAP CONVERSION TO ROWIDS     |               |     1 |    19 |     1   (0)| 00:00:01 |  Q1,02 | SCWC |        |
|  37 |                BITMAP INDEX FULL SCAN     | INDEX4            |       |       |        |          |  Q1,02 | SCWP |        |
|  38 |             BUFFER SORT           |               |   161 |  1610 |     2   (0)| 00:00:01 |  Q1,02 | SCWC |        |
|  39 |              BITMAP CONVERSION TO ROWIDS      |               |   161 |  1610 |     2   (0)| 00:00:01 |  Q1,02 | SCWC |        |
|* 40 |               BITMAP INDEX FAST FULL SCAN     | M_INDEX28             |       |       |        |          |  Q1,02 | SCWP |        |
|* 41 |             VIEW                  | VW_ST_167D3604        | 61043 |    14M|  4033   (1)| 00:00:01 |  Q1,03 | PCWP |        |
|  42 |              NESTED LOOPS             |               | 61043 |  4768K|  4029   (1)| 00:00:01 |  Q1,03 | PCWP |        |
|  43 |               BUFFER SORT             |               |       |       |        |          |  Q1,03 | PCWC |        |
|  44 |                PX RECEIVE             |               |       |       |        |          |  Q1,03 | PCWP |        |
|  45 |             PX SEND HASH (BLOCK ADDRESS)      | :TQ10001              |       |       |        |          |        | S->P | HASH (BLOCK|
|  46 |              BITMAP CONVERSION TO ROWIDS      |               | 61042 |  1311K|   365   (1)| 00:00:01 |        |      |        |
|  47 |               BITMAP AND              |               |       |       |        |          |        |      |        |
|  48 |                BITMAP MERGE           |               |       |       |        |          |        |      |        |
|  49 |                 BITMAP KEY ITERATION      |               |       |       |        |          |        |      |        |
|* 50 |                  VIEW             | index$_join$_255          |     1 |    19 |     2   (0)| 00:00:01 |        |      |        |
|* 51 |                   HASH JOIN           |               |       |       |        |          |        |      |        |
|  52 |                    BITMAP CONVERSION TO ROWIDS|               |     1 |    19 |     1   (0)| 00:00:01 |        |      |        |
|* 53 |                 BITMAP INDEX SINGLE VALUE | M_INDEX32             |       |       |        |          |        |      |        |
|  54 |                    BITMAP CONVERSION TO ROWIDS|               |     1 |    19 |     1   (0)| 00:00:01 |        |      |        |
|  55 |                 BITMAP INDEX FULL SCAN    | INDEX4            |       |       |        |          |        |      |        |
|* 56 |                  BITMAP INDEX RANGE SCAN      | PERF_DISTANCEGRP          |       |       |        |          |        |      |        |
|  57 |                BITMAP MERGE           |               |       |       |        |          |        |      |        |
|  58 |                 BITMAP KEY ITERATION      |               |       |       |        |          |        |      |        |
|  59 |                  TABLE ACCESS FULL        | SYS_TEMP_0FD9D6605_275335 |   318 |  1272 |     2   (0)| 00:00:01 |        |      |        |
|* 60 |                  BITMAP INDEX RANGE SCAN      | PERF_DEST             |       |       |        |          |        |      |        |
|  61 |                BITMAP MERGE           |               |       |       |        |          |        |      |        |
|* 62 |                 BITMAP INDEX RANGE SCAN   | PERF_MONTH            |       |       |        |          |        |      |        |
|  63 |               TABLE ACCESS BY USER ROWID      | PERFORMANCE           |     1 |    58 |  3669   (1)| 00:00:01 |  Q1,03 | PCWP |        |
|  64 |          PX RECEIVE                   |               |  1787 | 17870 |     9   (0)| 00:00:01 |  Q1,05 | PCWP |        |
|  65 |           PX SEND BROADCAST               | :TQ10004              |  1787 | 17870 |     9   (0)| 00:00:01 |  Q1,04 | S->P | BROADCAST  |
|  66 |            PX SELECTOR                |               |       |       |        |          |  Q1,04 | SCWC |        |
|  67 |         TABLE ACCESS FULL             | OBIEE_GEO_AIRPORT_BRIDGE  |  1787 | 17870 |     9   (0)| 00:00:01 |  Q1,04 | SCWP |        |
|  68 |      PX RECEIVE                   |               |  3221 | 99851 |    13   (0)| 00:00:01 |  Q1,07 | PCWP |        |
|  69 |       PX SEND HYBRID HASH                 | :TQ10006              |  3221 | 99851 |    13   (0)| 00:00:01 |  Q1,06 | S->P | HYBRID HASH|
|  70 |        PX SELECTOR                    |               |       |       |        |          |  Q1,06 | SCWC |        |
|  71 |         TABLE ACCESS FULL                 | OBIEE_COUNTY_HIER         |  3221 | 99851 |    13   (0)| 00:00:01 |  Q1,06 | SCWP |        |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("T233497"."STCTY_FIPS"="T233831"."SC_CODE")
   4 - filter("T233831"."R_NAME"='Northeast Region')
  12 - access("T233484"."STCTY_FIPS"="T233820"."SC_CODE")
  16 - access("T233484"."AIRPORT"="ITEM_1")
  22 - access("C0"="ITEM_1")
  27 - access("T233732"."CAL_YEAR"="ITEM_5" AND "T233732"."CAL_QTR"="ITEM_4" AND "T233732"."CAL_MONTH"="ITEM_3" AND "T233594"."CODE"="ITEM_2")
  32 - filter("T233594"."DESCRIPTION"='1000-1249 Miles')
  33 - access(ROWID=ROWID)
  35 - access("T233594"."DESCRIPTION"='1000-1249 Miles')
  40 - filter("T233732"."CAL_MONTH">=6 AND "T233732"."CAL_MONTH"<=12)
  41 - filter("ITEM_3"<=12 AND "ITEM_3">=6)
  50 - filter("T233594"."DESCRIPTION"='1000-1249 Miles')
  51 - access(ROWID=ROWID)
  53 - access("T233594"."DESCRIPTION"='1000-1249 Miles')
  56 - access("T233937"."DISTANCEGROUP"="T233594"."CODE")
  60 - access("T233937"."DEST"="C0")
  62 - access("T233937"."MONTH">=6 AND "T233937"."MONTH"<=12)

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - Degree of Parallelism is 2 because of table property
   - star transformation used for this statement

105 rows selected.

Running the actual queries in this case gives me a wall-time of around 4 seconds for the in-memory version, and 14 seconds when in-memory query is disabled.

But one other thing that I haven’t done yet is optimise the column-store compression format for querying. The default settings for column-store compression aim to balance query time with space saved, but I can change the in-memory compression type for my tables like this:

SQL> conn bi_airlines/BI_AIRLINES
Connected.
SQL> alter table AIRCRAFT_GROUP inmemory memcompress for query high priority high;

Table altered.

and this brings the query times down by a few seconds once I turn the in-memory feature back on by setting INMEMORY_SIZE to 1G. But the response time still isn't anywhere near the initial run where we had data stored row-wise but with aggregate tables, so let's finish off the testing by putting the aggregate table in-memory too and see if that makes a difference.
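For reference, here's a rough sketch of what that last step looks like - the aggregate table name below is made up, so substitute whichever summary table your aggregate layer actually uses:

SQL> alter system set inmemory_size = 1G scope = spfile;
SQL> shutdown immediate
SQL> startup
SQL> alter table BI_AIRLINES.PERF_MONTH_AGG inmemory
  2  memcompress for query high priority high;
SQL> select segment_name, populate_status, bytes_not_populated
  2  from   v$im_segments
  3  where  segment_name = 'PERF_MONTH_AGG';

With PRIORITY HIGH the segment should be populated shortly after startup rather than waiting for the first scan, and V$IM_SEGMENTS lets you confirm population has finished before you re-run the report.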

And of course, it absolutely flew.


So to conclude from my look at the Oracle Database 12c In-Memory option with OBIEE11g v406 SampleApp, I’d say the following based on my initial tests:

  • For BI-type reporting where you’re typically summarising lots of data, the in-memory option doesn’t remove the need for aggregate tables – you’ll still benefit significantly from having them, in my observation
  • Where the in-memory option does benefit you is when you're querying the detail-level data – it helps with aggregation, but its main strength is fast filtering against subsets of columns
  • Some of the more complex SQL queries issued by OBIEE's BI Server, for example when creating lots of subtotals and totals against a dataset, reduce the effectiveness of the in-memory option – you'll get the biggest speed improvement, at least at the moment, from queries with simpler predicates and less complex joins

There might be more to the aggregation story in the end, though. Looking at the Oracle Database 12c In-Memory Option Technical White Paper, the in-memory option should in fact help with aggregation through a new optimiser transformation called "vector group by" – a transformation likened to a star transformation, which uses CPU-efficient algorithms and a multi-dimensional array created on-the-fly in the PGA called an "in-memory accumulator".


In fact, what we’ve heard is that many of the old Oracle OLAP team have moved over to the in-memory option team and were responsible for this feature, so I’ll be taking a closer look at in-memory aggregation in this new feature over the next few months. In my examples though, I didn’t see any examples of vector group by in the query execution plans, so I’m assuming either conditions weren’t right for it, or like star transformations there’s some combination of setting and query factors that need to be in place before it’ll appear in the execution plan (and queries presumably run that much faster).

For now though – that’s my first run-through of the 12c In-Memory Option in the context of a typical BI workload. Check back later in the year for more in-memory option postings, including hopefully something more on the in-memory aggregation feature.

Categories: BI & Warehousing

Oracle Support and Services at Oracle OpenWorld 2014

Chris Warticki - Fri, 2014-08-29 12:21
Tips and Best Practices from the Front Lines

Oracle Support and Services works side by side with millions of Oracle users around the world—we see what works and what doesn't, wins and losses. Through this experience we've captured tips and best practices to save you time, help you innovate, and get the most from your Oracle investment.

While you're at Oracle OpenWorld, we invite you to take advantage of this exclusive annual opportunity to interact with Oracle Support and Services experts. With 56 sessions, hands-on demos, special events, and more, you'll find relevant, useful information based on proven patterns of customer success.

Conference Highlights

For details on all sessions, please refer to the Oracle OpenWorld Content Catalog or the Focus On Oracle Services and Support document.

Support FocusOn Documents for Customers

End-To-End ADF Cloud Deployment Process

Andrejus Baranovski - Fri, 2014-08-29 11:37
ADF and ADF BC run perfectly on Oracle Java Cloud. You can deploy a regular ADF application straight from the familiar JDeveloper environment without any hassle. In this blog post I would like to walk through the process of migrating a DB model to the cloud and deploying an ADF application (enabled with ADF Security) there.

Here you can download the sample application - TreeComponentsCloud.zip. This application is deployed and runs on Oracle Java Cloud, accessible through this link. Online access will be available until my Oracle Java Cloud trial subscription expires (in a month or so). You can log in using the following credentials - username: redsam, password: We1come@ and identity domain: ltredsamuraictrial99050.

First of all, we should prepare the data model - basically you can migrate your local database to the cloud (including data) using JDeveloper wizards. The Database Cart wizard can be used for this purpose: simply add all required tables to the cart and set a checkbox to include the data:


You will need to enable SFTP access and note down the SFTP connection details for Oracle Database Cloud; read more about it in the Oracle Java Cloud documentation section Building the Data Model. I have defined an Oracle Database Cloud connection in JDeveloper for SFTP access:


Uploading the data model and data to the cloud is a very seamless process - it does everything with one click. The entire structure is packaged into an archive and sent over to the cloud:


When the migration process is complete, we can double-check that the data is in the cloud. You can expand the Oracle Database Cloud connection in JDeveloper and browse through the tables; the data should be accessible:


Next we should enable secure access in the cloud. Oracle Java Cloud supports a regular ADF Security setup. However, to render the Oracle Java Cloud login page, you must include an additional security constraint in web.xml (read more about ADF Security in the Configuring Security section of the Oracle Java Cloud documentation). Here you can see the security constraint implemented in the sample application's web.xml:


You should define a regular ADF Security permission for page access. I'm using a custom application role - AccountantAppRole:


There is an enterprise role, AccountantRole, defined and mapped to the application role above. This enterprise role is also defined in the Oracle Java Cloud service:


Finally, there is a user defined - redsam; the same user is defined in the Oracle Java Cloud service. This user is mapped to the AccountantRole enterprise role:


I have defined the AccountantRole role under the Users group in the Oracle Java Cloud service:


This role is mapped to the redsam user in the same Oracle Java Cloud service:


The deployment process is identical to deploying to a local WebLogic server; you can use the same JDeveloper wizard - just select Oracle Cloud as the target Application Server from the list:


Once the application is deployed, you can log in to the Oracle Java Cloud service control (it looks quite similar to Oracle Enterprise Manager) and check the application status, etc.:


Let's do a test now. I will try to log in with a user who does not have access to the application. Our sample application is protected by ADF Security; Oracle Java Cloud renders the login screen automatically (no need to implement it in your custom application):


Application access will be reported as unauthorised, as expected:


Log in with a valid user - redsam (see the login credentials listed at the beginning of this post):


We can access the application now. Browse through the tree structure and even render a colourful chart:

New Revenue Opportunity for Video Producers and Videographers

Bradley Brown - Fri, 2014-08-29 09:38
I absolutely love it when we're able to generate income for people who share their knowledge through our platform.  Take my technical training videos (on Oracle Application Express) for example.  It's so cool that I can produce a set of videos, upload it into our platform and sell the material to people around the world - and I get to maintain my brand (I'm not lost in a marketplace) on my own website.  Anyone can be making money in no time.  We see it happening EVERY day!

At the same time, most videos (at least professional videos) are produced by a videographer.  My good friend Will and I used a videographer to create our Sled Like a Pro series which teaches people how to snowmobile.  Traditionally, videographers charge a fee for creating, producing, and editing a video.  They might charge $50/hr or $200/hr (or more).  But once the video is finished, they typically turn their work over to someone who creates a DVD or simply uses it as they wish.  Photographers, on the other hand, often retain the rights to the photo and how you use it.

As a producer of video, what if you could negotiate a royalty on all revenue generated from the product you produced? What if you could do this without impacting the price of the product? With InteliVideo we allow you to do this. If you're a reseller of the InteliVideo platform, you'll receive 10% of the net revenue generated - whether it's platform fees or video/product sales.

Be a reseller for InteliVideo!  Sign up for a free account and let us know you're a videographer and that you have an interest in being a reseller.  We'll get you all set up.  We'll provide you with a URL that you can distribute in your emails, which will make sure you get credit for everyone who signs up.

Here's the best part.  Your customers will be able to provide their customers with exactly what they are looking for!  Our platform allows the end customers to watch videos anytime, anywhere.  They can download their video and watch it on a plane, train or automobile.  We protect your customer's content too, so only the app on the device can access the video.

Sign your customers up today!

Log Buffer #386, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-08-29 08:16

If you ever wanted an easy-peasy way to get a few of the best blog posts of the week from Oracle, SQL Server and MySQL, then Log Buffer Editions are the place to be.

Oracle:

The Product Management team have released a knowledge article for Enterprise Performance Management (EPM) 11.1.2.2.x and 11.1.2.3.x containing details for EPM support with Internet Explorer (IE) 11.

As if anyone needs to be reminded, there’s a ridiculous amount of hype surrounding clouds and big data. There’s always oodles of hype around any new technology that is not well understood.

By mapping an external table to some text file, you can view the file contents as if it were data in a database table.
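As a minimal sketch of that technique (the directory path, file name and column list here are invented for illustration), an external table over a comma-separated file looks something like this:

-- Point a directory object at the folder holding the text file
create directory ext_data_dir as '/u01/app/oracle/ext_data';

-- Each line of airports.csv now appears as a row of this table
create table airports_ext (
  airport_code  varchar2(10),
  airport_name  varchar2(100),
  city          varchar2(50)
)
organization external (
  type oracle_loader
  default directory ext_data_dir
  access parameters (
    records delimited by newline
    fields terminated by ','
  )
  location ('airports.csv')
);

select * from airports_ext where city = 'Denver';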

Vikram has discovered the adopreports utility in R12.2.

As a lot of the new APEX 5 features are “by developers for developers”, this one is also a nifty little thing that makes our lives easier.

SQL Server:

Data Mining: Part 15 Processing Data Mining components with SSIS.

SQL Server AlwaysOn Availability Groups Fail the Initial Failover Test.

Stairway to PowerPivot and DAX – Level 6: The DAX SUM() and SUMX() Functions.

Questions about T-SQL Expressions You Were Too Shy to Ask

SQL Server Service Engine fails to start after applying CU4 for SQL Server 2008 SP1.

MySQL:

MySQL for Visual Studio 1.2.x recently became a GA version. One of the main features included in this version was the new MySQL ASP.NET MVC Wizard.

Resources for Database Clusters: Performance Tuning for HAProxy, Support for MariaDB 10, Technical Blogs & More.

Trawling the binlog with FlexCDC and new FlexCDC plugins for MySQL

InnoDB provides a custom mutex and rw-lock implementation.

You probably already know that Sphinx supports the MySQL binary network protocol. But if you haven’t heard, Sphinx can be accessed with the regular ol’ MySQL API.

Categories: DBA Blogs

Presenting at OOW 2014

DBASolved - Fri, 2014-08-29 07:26

This year I’ll be presenting at Oracle Open World with many of the best in the industry.  If you are going to be in the San Francisco area between September 28 and October 2, 2014, stop by and check out the conference.  Registration information can be found here.

The topics which I’ll be presenting or assisting with this year are:

  • OTN RAC Attack – Sunday, September 28, 2014 – 9 am – 3 pm PST @ OTN Lounge
  • How many ways can I monitor Oracle GoldenGate – Sunday, September 28, 2014 – 3:30 pm – 4:15 pm PST @ Moscone South 309
  • Oracle Exadata’s Exachk and Oracle Enterprise Manager 12c: Keeping Up with Oracle Exadata – Thursday, October 2, 2014 10:45 am – 11:30 am PST @ Moscone South 310

Hope to see you there!

Enjoy!

about.me: http://about.me/dbasolved


Filed under: General
Categories: DBA Blogs

Remote Support for Windows/UNIX/LINUX: Additional Services Series Pt. 5 [VIDEO]

Chris Foot - Fri, 2014-08-29 06:17

Transcript

When outsourcing your operating system support, you want to know that you have expert professionals with knowledge of all your platforms handling your data. At RDX, that’s something you don’t have to worry about.

Welcome back to our Additional Services series!

Whether you use Windows, UNIX or LINUX systems, we remotely support anything and everything an admin does onsite. Our Windows OS tech support includes hardware selection, monitoring and tuning, among many other services. We assume total ownership of everything: your server’s security, performance, availability and improvement, and we understand the mutually dependent OS/DB relationship that affects all these things. The same goes for UNIX and LINUX.

Financially, you pay a single bill for both database and OS support services, and you only pay for the services you need, when you need them.

For more details on our extensive operating system support services, follow the link below. We’ll see you next time!

The post Remote Support for Windows/UNIX/LINUX: Additional Services Series Pt. 5 [VIDEO] appeared first on Remote DBA Experts.

PostgreSQL vs. MySQL: Part Two

Chris Foot - Fri, 2014-08-29 01:34

Part One outlined the histories and basic foundations of PostgreSQL and MySQL, respectively.

In Part Two, we'll focus on the benefits of using both of these structures and how remote DBA professionals use them to perform mission-critical functions for enterprises.

What is a relational database management system?
Before going into further detail on PostgreSQL and MySQL, it's important to define what an RDBMS is, as both of these systems subscribe to this model. According to DigitalOcean, an RDBMS stores information by identifying related pieces of data to form comprehensive sets, or schemas. The tables are easily queried by data analysts, applications and other consumers because they are made up of columns, which define the attributes, and rows, which hold the data.
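To make that concrete, here is a minimal sketch (the table and column names are invented for illustration) of the kind of schema an RDBMS stores; it runs unchanged on both PostgreSQL and MySQL:

-- Two related tables: columns define the attributes, rows hold the records
CREATE TABLE customers (
  customer_id INT PRIMARY KEY,
  name        VARCHAR(100) NOT NULL
);

CREATE TABLE orders (
  order_id    INT PRIMARY KEY,
  customer_id INT NOT NULL,
  order_date  DATE NOT NULL,
  FOREIGN KEY (customer_id) REFERENCES customers (customer_id)
);

-- Related data across the two tables is easy to query
SELECT c.name, COUNT(*) AS order_count
FROM   customers c
JOIN   orders o ON o.customer_id = c.customer_id
GROUP  BY c.name;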

MySQL: Support, advantages and drawbacks

As Carla Schroder of OpenLogic noted, MySQL is a solid choice for IT professionals working with Web architectures. It's capable of organizing unstructured information, such as the kind of data found on Twitter, Facebook and Wikipedia (all of which are powered by MySQL). DigitalOcean asserted that the platform possesses sound security functions for data access, and that everyday tasks are easy to perform.

As for the disadvantages, the latter source acknowledged that MySQL handles read-heavy workloads really well but falls somewhat short when it comes to read-write workloads. In addition, the platform lacks a full-text search component.

PostgreSQL: Support, advantages and drawbacks

DigitalOcean maintained PostgreSQL can handle a large variety of responsibilities quite efficiently, due to its high programmability and ACID compliance. Users can implement custom procedures, a few of which can be developed to simplify intricate, common database administration operations. Because it is object-relational, it can support nesting and other powerful features. Complex, customized tasks can be easily implemented and deployed.
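As a small, hypothetical sketch of the kind of custom procedure meant here (the table names are invented), a PL/pgSQL function can wrap an intricate maintenance task behind a single call:

-- Move orders older than a cutoff date into an archive table and
-- report how many rows were moved
CREATE OR REPLACE FUNCTION archive_old_orders(cutoff DATE)
RETURNS INTEGER AS $$
DECLARE
  moved INTEGER;
BEGIN
  INSERT INTO orders_archive
  SELECT * FROM orders WHERE order_date < cutoff;

  DELETE FROM orders WHERE order_date < cutoff;
  GET DIAGNOSTICS moved = ROW_COUNT;

  RETURN moved;
END;
$$ LANGUAGE plpgsql;

-- Usage: SELECT archive_old_orders(DATE '2014-01-01');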

What are its shortcomings? For one thing, it's difficult to find hosting services for PostgreSQL because of the sheer number of variations. Also, for simple, read-heavy operations it can be "overkill", as DigitalOcean described it.

The post PostgreSQL vs. MySQL: Part Two appeared first on Remote DBA Experts.