
Feed aggregator

Configuring Python cx_Oracle and mod_wsgi on Oracle Linux

Christopher Jones - Mon, 2014-11-17 23:16

The Web Server Gateway Interface (WSGI) is a standardized interface between web servers and Python web frameworks or applications. Many frameworks including Django support WSGI.

This post is a brief how-to about configuring Apache's mod_wsgi with Python's cx_Oracle driver for Oracle Database. The steps are for Oracle Linux.

  1. Download Instant Client Basic & SDK ZIP files from OTN. For cx_Oracle 5.1, use the ZIPs, not the RPMs.

  2. As root, unzip the files to the same directory, e.g. /opt/oracle/instantclient_12_1 (the ZIP file names below assume the 12.1.0.2 64-bit Linux release):

    mkdir /opt/oracle
    cd /opt/oracle
    unzip /tmp/instantclient-basic-linux.x64-12.1.0.2.0.zip
    unzip /tmp/instantclient-sdk-linux.x64-12.1.0.2.0.zip
  3. Configure Instant Client:

    cd /opt/oracle/instantclient_12_1
    ln -s libclntsh.so.12.1 libclntsh.so
  4. Install the pip package management tool for Python by following the pip installation instructions and downloading the get-pip.py bootstrap script. Then run:

    python get-pip.py

  5. Install cx_Oracle:

    export LD_RUN_PATH=/opt/oracle/instantclient_12_1
    export ORACLE_HOME=/opt/oracle/instantclient_12_1
    pip install cx_Oracle

    The key here is the use of LD_RUN_PATH. This obviates the need to later set LD_LIBRARY_PATH or configure ldconfig for cx_Oracle to find the Instant Client libraries. There is a cx_Oracle-specific variable FORCE_RPATH which has the same effect.

    Note the cx_Oracle installer overloads the meaning of ORACLE_HOME. This variable is not normally used with Instant Client.

    Neither ORACLE_HOME nor LD_RUN_PATH needs to be set at runtime.

    If you don't use LD_RUN_PATH or FORCE_RPATH during installation, you will need to make LD_LIBRARY_PATH available to the Apache process or use ldconfig to add Instant Client to the system wide library search path.

    Configuring ldconfig is an effective and commonly used solution. However, it has a potential problem: if multiple Oracle products exist, with possibly differing versions of Oracle libraries on the same machine, then there might be library clashes. If you wish to use it, create a file such as /etc/ld.so.conf.d/oracle-instantclient.conf (the file name is your choice) containing:

    /opt/oracle/instantclient_12_1

    Then update the linker cache by running:

    ldconfig
    Alternatively set LD_LIBRARY_PATH in Apache's environment file, /etc/sysconfig/httpd. In Oracle Linux 6 use:

    export LD_LIBRARY_PATH=/opt/oracle/instantclient_12_1

    In Oracle Linux 7 use:

    LD_LIBRARY_PATH=/opt/oracle/instantclient_12_1
    In Oracle Linux 7, don't reference variables on the right-hand side of the equals sign since they won't be expanded.

    [The Apache environment configuration file location varies between Linux distributions. On OpenSUSE see /etc/sysconfig/apache2. On Debian-based distributions look at /etc/apache2/envvars].

  6. Set any other Oracle environment variables in the Apache environment configuration file /etc/sysconfig/httpd. For example (the value here is illustrative):

    NLS_LANG=AMERICAN_AMERICA.AL32UTF8

    (Prefix any variable setting with export in Oracle Linux 6.)

  7. Install mod_wsgi:

    yum install mod_wsgi
  8. Add this line to /etc/httpd/conf/httpd.conf:

    WSGIScriptAlias /wsgi_test /var/www/html/wsgi_test.py
  9. On Oracle Linux 6, start the web server with:

    service httpd start

    On Oracle Linux 7 use:

    systemctl start httpd.service
  10. Create a test file /var/www/html/wsgi_test.py that connects to your database:

    #-*- coding: utf-8 -*-
    def query():
        import cx_Oracle
        db = cx_Oracle.connect("hr", "welcome", "localhost/orcl")
        cursor = db.cursor()
        cursor.execute("select city from locations where location_id = 2200")
        return cursor.fetchone()[0]
    def wsgi_test(environ, start_response):
        output = query()
        status = '200 OK'
        headers = [('Content-type', 'text/plain'),
                   ('Content-Length', str(len(output)))]
        start_response(status, headers)
        yield output
    application = wsgi_test
  11. Load http://localhost/wsgi_test in a browser. The city of the queried location id will be displayed.
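
If the page doesn't display, it can help to rule out the database side first. A quick sanity check in SQL*Plus, assuming the same HR sample schema and credentials as the test file above:

    sqlplus hr/welcome@localhost/orcl
    -- location_id 2200 is present in the default HR LOCATIONS data
    SQL> select city from locations where location_id = 2200;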

That's it. Let me know how it works for you.

Information on cx_Oracle can be found here.

Information on Oracle Linux can be found here.

Information on Oracle Database can be found here.

Off May Not Be Totally Off: Is Oracle In-Memory Database 12c (12.1.0.2) Faster?

Most Oracle 12c installations will NOT be using the awesome Oracle Database in-memory features available starting in version 12.1.0.2. This experiment is about the performance impact of upgrading to 12c but disabling the in-memory features.

Every experiment I have performed comparing buffer processing rates clearly shows that any version of 12c performs better than 11g. However, in my previous post, my experiment clearly showed a performance decrease after upgrading from 12.1.0.1 to 12.1.0.2.

This posting is about why this occurred and what to do about it. The bottom line is this: make sure "off" is "totally off."

Turn it totally off, not partially off
What I discovered is that by default the in-memory column store feature is not "totally disabled." My experiment clearly indicates that unless the DBA takes action, not only could there be a license agreement violation, but a partially disabled in-memory column store slightly slows logical IO processing compared to the 12c version without the in-memory column store option. Still, any 12c version processes buffers faster than 11g.

My experiment: specific and targeted
This is important: The results I published are based on a very specific and targeted test and not on a real production load. Do not use my results in making a "should I upgrade" decision. That would be stupid and an inappropriate use of my experimental results. But because I publish every aspect of my experiment and it is easily reproducible, it is a valid data point with which to have a discussion and also highlight various situations that DBAs need to know about.

You can download all my experimental results HERE. This includes the raw sqlplus output, the data values, the free R statistics package commands, spreadsheet with data nicely formatted and lots of histograms.

The instance parameter settings and results
Let me explain this by first showing the instance parameters and then the experimental results. There are some good lessons to learn!

Pay close attention to the inmemory_force and inmemory_size instance parameters.

SQL> show parameter inmemory

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
inmemory_clause_default string
inmemory_force string DEFAULT
inmemory_max_populate_servers integer 0
inmemory_query string ENABLE
inmemory_size big integer 0
inmemory_trickle_repopulate_servers_ integer 1
optimizer_inmemory_aware boolean TRUE

SQL> show sga

Total System Global Area 7600078848 bytes
Fixed Size 3728544 bytes
Variable Size 1409289056 bytes
Database Buffers 6174015488 bytes
Redo Buffers 13045760 bytes

In my experiment using the above settings, the median buffer processing rate was 549.4 LIO/ms. Looking at the inmemory_size and the SGA contents, I assumed the in-memory column store was disabled. If you look at the actual experimental result file "Full ds2-v12-1-0-2-ON.txt", which contains the explain plan of the SQL used in the experiment, there is no mention of the in-memory column store being used. My assumption, which I think is a fair one, was that the in-memory column store had been disabled.

As you'll see I was correct, but only partially correct.

The parameter settings below are when the in-memory column store was totally disabled. The key is changing the default inmemory_force parameter value from DEFAULT to OFF.

SQL> show parameter inmemory

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
inmemory_clause_default string
inmemory_force string OFF
inmemory_max_populate_servers integer 0
inmemory_query string ENABLE
inmemory_size big integer 0
inmemory_trickle_repopulate_servers_ integer 1
optimizer_inmemory_aware boolean TRUE
SQL> show sga

Total System Global Area 7600078848 bytes
Fixed Size 3728544 bytes
Variable Size 1291848544 bytes
Database Buffers 6291456000 bytes
Redo Buffers 13045760 bytes

Again, the SGA does not show any in-memory memory space. In my experiment with the above "totally off" settings, the median buffer processing rate was 573.5 LIO/ms, compared to the "partially off" rate of 549.4 LIO/ms. Lesson: Make sure off is truly off.
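
To make sure "off" is totally off on your own system, here is a minimal sketch (assuming you have ALTER SYSTEM privileges; set in the spfile, the change takes effect at the next restart):

SQL> -- change INMEMORY_FORCE from its DEFAULT value to OFF so the
SQL> -- in-memory column store is completely disabled at startup
SQL> alter system set inmemory_force=off scope=spfile;
SQL> shutdown immediate
SQL> startup
SQL> show parameter inmemory_force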

It is an unfair comparison!
It is not fair to compare the "partially off" with the "totally off" test results. Now that I know the default inmemory_force must be changed to OFF, the real comparison should be made between the non in-memory column store version (12.1.0.1) and the "totally disabled" in-memory column store version (12.1.0.2). This is what I will summarize below. And don't forget: all 12c versions showed a significant buffer processing increase compared to 11g.

The key question: Should I upgrade?
You may be thinking: if I'm NOT going to license and use the in-memory column store, should I upgrade to version 12.1.0.2? Below is a summary of my experimental results followed by the key points.

1. The non column store version (12.1.0.1) was able to process 1.1% more buffers/ms (median: 581.7 vs 573.5) compared to the "totally disabled" in-memory column store version (12.1.0.2). While this is statistically significant, a 1.1% buffer processing difference is probably not going to make-or-break your upgrade.

2. Oracle Corporation, I'm told, knows about this situation and is working on a fix. But even if they don't fix it, in my opinion my experimental "data point" would not warrant avoiding an upgrade to the in-memory column store version, even if you are NOT going to use the in-memory features.

3. Visually (see below) the non in-memory version and the "totally off" in-memory version sample sets look different. But they are pretty close. And as I mentioned above, statistically they are "different."

Note for the statistically curious: The red color non in-memory version data set is highly variable. I don't like to see this in my experiments. Usually this occurs when a mixed workload sometimes impacts performance, when I don't take enough samples, or when my sample time duration is too short. To counteract this, in this experiment I captured 31 samples. I also performed the experiment multiple times and the results were similar. What I could have done was use more application data to increase the sample duration time. Perhaps that would have made the data clearer. I could have also used another SQL statement and method to create the logical IO load.
What I learned from this experiment
To summarize this experiment, four things come to mind:

1. If you are not using an Oracle Database feature, completely disable it. My mistake was thinking the in-memory column store was disabled when I set its memory size to zero and "confirmed" it was off by looking at the SGA contents.

2. All versions of 12c I have tested are clearly faster at processing buffers than any version of 11g.

3. There is a very slight performance decrease when upgrading from Oracle Database version 12.1.0.1 to 12.1.0.2.

4. It is amazing to me that with all the new features poured into each new Oracle Database version, the developers have been able to keep the core buffer processing rate nearly at the level of the previous version. That is an incredible accomplishment. While some people may view this posting as a negative hit against the Oracle Database, it is actually a confirmation of how awesome the product is.

All the best in your Oracle performance tuning work!


Categories: DBA Blogs

Upgrading system's library/classes on 12c CDB/PDB environments

Marcelo Ochoa - Mon, 2014-11-17 17:41
Some days ago I found that the ODCI.jar included with 12c doesn't reflect the latest update of the Oracle ODCI API. This API is used when writing new domain indexes such as Scotas OLS, pipelined tables and many other cool things.

ODCI.jar includes several Java classes which are wrappers of Oracle object types such as ODCIArgDesc, among others. The jar included with the RDBMS 11g/12c seems to be outdated, maybe generated with a 10g version database; for example it doesn't include attributes such as ODCICompQueryInfo, which holds information about Composite Domain Indexes (filter by/order by push predicates).

The content of ODCI.jar is a set of classes generated by the JPublisher tool and looks like:

oracle@localhost:/u01/app/oracle/product/$ jar tvf ODCI.jar
     0 Mon Jul 07 09:12:54 ART 2014 META-INF/
    71 Mon Jul 07 09:12:54 ART 2014 META-INF/MANIFEST.MF
  3501 Mon Jul 07 09:12:30 ART 2014 oracle/ODCI/ODCIArgDesc.class
  3339 Mon Jul 07 09:12:32 ART 2014 oracle/ODCI/ODCIArgDescList.class
  1725 Mon Jul 07 09:12:32 ART 2014 oracle/ODCI/ODCIArgDescRef.class
  2743 Mon Jul 07 09:12:52 ART 2014 oracle/ODCI/ODCIStatsOptions.class
  1770 Mon Jul 07 09:12:54 ART 2014 oracle/ODCI/ODCIStatsOptionsRef.class

The complete list of classes does not reflect the list of object types that the latest 12c RDBMS has; the list is about 38 types, expanded later to more than 60 classes:

SQL> select * from dba_types where type_name like 'ODCI%';
38 rows selected

so there is a clear difference between the classes included in ODCI.jar and the actual list of object types included in the RDBMS. Obviously these classes could be re-generated using JPublisher, but I'll have to provide an input file with a template for case-sensitive names, typically used in Java.

To quickly create a JPublisher input file I'll execute this anonymous PL/SQL block on JDeveloper logged as SYS at the CDB:

set long 10000 lines 500 pages 50 timing on echo on
set serveroutput on size 1000000
begin
 for i in (select * from dba_types where type_name like 'ODCI%' order by type_name) loop
   if (i.typecode = 'COLLECTION') then
      dbms_output.put('SQL sys.'||i.type_name||' AS ');
      FOR j in (select * from dba_source where owner=i.owner AND NAME=i.type_name) loop
         if (substr(j.text,1,4) = 'TYPE') then
            dbms_output.put(substr(j.text,6,length(j.text))||' TRANSLATE ');
            dbms_output.put(upper(substr(j.text,instr(upper(j.text),' OF ')+4,length(j.text)-instr(upper(j.text),' OF ')-4))||' AS '||substr(j.text,instr(upper(j.text),' OF ')+4,length(j.text)-instr(upper(j.text),' OF ')-4));
         end if;
      end loop;
   else
      dbms_output.put('SQL sys.'||i.type_name||' AS ');
      FOR j in (select * from dba_source where owner=i.owner AND NAME=i.type_name) loop
         if (substr(j.text,1,4) = 'TYPE') then
            dbms_output.put(substr(j.text,6,length(j.text))||' TRANSLATE ');
         end if;
         if (substr(j.text,1,1) = ' ') then
            dbms_output.put(upper(substr(j.text,3,instr(j.text,' ',3)-3))||' AS '||substr(j.text,3,instr(j.text,' ',3)-3)||', ');
         end if;
      end loop;
   end if;
 end loop;
end;
/

Finally, editing this file manually to remove the last comma sign, I'll get the mapping file for JPublisher. With the above file it is possible to use an Ant task calling the JPublisher utility as:

            description="Generate a new ODCI.jar file with ODCI types wrappers using JPublisher">

By executing the above Ant task I'll have a new ODCI.jar with content like:

oracle@localhost:/u01/app/oracle/product/$ jar tvf ODCI.jar
     0 Sun Nov 16 21:07:50 ART 2014 META-INF/
   106 Sun Nov 16 21:07:48 ART 2014 META-INF/MANIFEST.MF
     0 Sat Nov 15 15:17:40 ART 2014 oracle/
     0 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/
102696 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/AnyData.class
  1993 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/AnyDataRef.class
 17435 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/AnyType.class
  1993 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/AnyTypeRef.class
  3347 Sun Nov 16 21:07:46 ART 2014 oracle/ODCI/ODCIArgDesc.class
  2814 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/ODCIArgDescList.class
  2033 Sun Nov 16 21:07:46 ART 2014 oracle/ODCI/ODCIArgDescRef.class
  2083 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/ODCITabFuncStatsRef.class
  2657 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/ODCIVarchar2List.class

Well, now the new ODCI.jar is ready for uploading into the CDB. To simplify this task I'll put it directly in the same directory as the original one:

oracle@localhost:/u01/app/oracle/product/$ mv ODCI.jar ODCI.jar.orig
oracle@localhost:/u01/app/oracle/product/$ mv /tmp/ODCI.jar ./ODCI.jar

NOTE: The next paragraphs are examples showing that this will fail; see further below for the correct way.
To upload this new file into the CDB logged as SYS I'll execute:

SQL> ALTER SESSION SET CONTAINER = CDB$ROOT;
SQL> exec sys.dbms_java.loadjava('-f -r -v -s -g public rdbms/jlib/ODCI.jar');

To check if it works OK, I'll execute:

SQL> select dbms_java.longname(object_name) from dba_objects where object_type='JAVA CLASS' and dbms_java.longname(object_name) like '%ODCI%';
63 rows selected.

I assume at this point that a new jar uploaded into the CDB root means that all PDBs will inherit this new implementation, as a new binary/library file patched at ORACLE_HOME does. But this is not how the class loading system works in the multitenant environment. To check that, I'll re-execute the above query but using the PDB$SEED container (the template used for new databases):

SQL> ALTER SESSION SET CONTAINER = PDB$SEED;
SQL> select dbms_java.longname(object_name) from dba_objects where object_type='JAVA CLASS' and dbms_java.longname(object_name) like '%ODCI%';
28 rows selected.

A similar result will be displayed in any other PDB running/mounted on that CDB. More on this: if I check Java code on some PDB, this exception will be thrown:

Exception in thread "Root Thread" java.lang.IncompatibleClassChangeError
 at oracle.jpub.runtime.MutableArray.getOracleArray(
 at oracle.jpub.runtime.MutableArray.getObjectArray(
 at oracle.jpub.runtime.MutableArray.getObjectArray(
 at oracle.ODCI.ODCIColInfoList.getArray(
 at com.scotas.solr.odci.SolrDomainIndex.ODCIIndexCreate(

This is because the code was compiled with the latest API and the container has an older one. So I'll re-load the new ODCI.jar into PDB$SEED and my PDBs, using a similar approach as in the CDB, for example:

SQL> ALTER SESSION SET CONTAINER = PDB$SEED;
SQL> exec sys.dbms_java.loadjava('-f -r -v -s -g public rdbms/jlib/ODCI.jar');
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database

This is because the PDBs are blocked from altering classes inherited from the CDB. As I mentioned above, these ways are incorrect when dealing with multitenant environments. To fix that there is a Perl script named catcon.pl; it automatically takes care of loading on ROOT first, then on PDB$SEED, then any/all open PDBs specified in the command line. In my case I'll execute:

# $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -u SYS -d $ORACLE_HOME/rdbms/admin -b initsoxx_output initsoxx.sql

Before doing that it is necessary to open all PDBs (read write, or in restricted mode) or to specify which PDBs will be patched. Note that I used the initsoxx.sql script; this script is used by default during RDBMS installation to upload ODCI.jar. Now I'll check if all PDBs have consistent ODCI classes.

SQL> ALTER SESSION SET CONTAINER = PDB$SEED;
Session altered.
SQL> select dbms_java.longname(object_name) from dba_objects where object_type='JAVA CLASS' and dbms_java.longname(object_name) like '%ODCI%';
63 rows selected.
Session altered.
SQL> select dbms_java.longname(object_name) from dba_objects where object_type='JAVA CLASS' and dbms_java.longname(object_name) like '%ODCI%';
63 rows selected.

Finally, all PDBs were patched with the new library. More information about developing Java within the RDBMS in multitenant environments is in this presentation, The impact of MultiTenant Architecture in the develop of Java within the RDBMS; for Spanish readers there is a video with audio at YouTube from my talk at OTN Tour 14 ArOUG.

Enabling Agents of Change

Linda Fishman Hoyle - Mon, 2014-11-17 15:29

The Oracle Value Chain Summit is rapidly becoming the premier Supply Chain event in the industry.  Our upcoming 2015 Value Chain Summit marks the third year that this great event has been held, and we expect this summit to be bigger and better than ever. This event is all about you, our customer. You empower your supply chain, and the Value Chain Summit promises to provide you with the insight, contacts and tools you need to become “Agents of Change” for your supply chain.

Watch this video (by Oracle's Jon Chorley, pictured left) to learn more about the great things we have planned for the 2015 Value Chain Summit. Plus, you can take advantage of the Early Bird rate through the end of November, and save $300 off the regular rate. Combine this offer with special group rates, and save BIG!  


"Why Not Oracle Cloud?" for Fast-Growing, Mid-Sized Organizations

Linda Fishman Hoyle - Mon, 2014-11-17 15:11

In the past, mid-size and smaller companies have had to settle for “lightweight or scoped down ERP solutions.” Generally, these tier-2 solutions don’t have the functionality that a company needs to grow and operate globally.

In this article in Profit magazine, Oracle’s Rondy Ng, Senior Vice President of Applications Development, advises companies to choose wisely when looking at cloud-based ERP solutions to avoid expense, risk, and disruption down the road.

Ng asserts that Oracle ERP Cloud is the ERP solution for Fortune 500 companies, as well as for those who don’t have any designs to be one. There’s no need to settle. He makes a great case for choosing cloud and choosing Oracle.

New Interaction Hub Data Sheet Available

PeopleSoft Technology Blog - Mon, 2014-11-17 14:55

In support of the recently released Revision 3 of the PeopleSoft Interaction Hub, we've just produced the latest data sheet for the Hub, which can be found here. This paper covers the highlights of the new release and describes our overall direction for the product. The prime topics we cover are as follows:

  • Setting up and running a cluster of PeopleSoft applications using the Interaction Hub
  • Content Management
  • Branding and the User Experience
  • Using the Hub with the new Fluid User Interface
There is much more collateral about the Interaction Hub on My Oracle Support.

Visualization on How the undergraduate tuition has increased over the years

Nilesh Jethwa - Mon, 2014-11-17 12:54

Average undergraduate tuition and fees and room and board rates



These figures are inflation adjusted; look at how the tuition fees alone have increased compared to the dorm and board rates.

Now comparing the rate increase for the 2-year program


So for the 2 year program, the board rates have remained at the same level compared to the dorm rates.

Now check out the interesting graph for 4 year program below



Comparing the slope of the 2 year board rates to the 4 year board rates, the 4 year shows a significantly larger increase.


If the price of meals is the same for both programs, then both the 4 year and 2 year programs should have the same slope. So why is the 4 year slope different from the 2 year?

Now, let's look at the dorm rates



And finally the 4 year vs 2 year Tuition rates


Here is the data table for the above visualization

Musings on Samsung Developer Conference 2014

Oracle AppsLab - Mon, 2014-11-17 11:18

This year some of us at the AppsLab attended the Samsung Developer Conference, aka #SDC2014. Last year was Samsung's first attempt, and we were also there. The quality and caliber of the presentations increased tenfold from last year. Frankly, Samsung is making it really hard to resist joining their ecosystem.


Here are some of the trends I observed:

Wearables and Health:

There was a huge emphasis on Samsung's commitment to wearable technology. They released a new Tizen-based smartwatch (Samsung Gear S) as well as biometric reference design hardware and software called SIMBAND. Along with their wearable strategy they also released S.A.M.I, a cloud repository to store all this data. All this ties together with their vision of "Voice of the Body."

Voice of the Body from Samsung on Vimeo.

During the second day keynote we got to hear from Mounir Zok, Senior Sports Technologist of the United States Olympic Committee. He told us how wearable technology is changing the way Olympic athletes train. It was only a couple of years ago that athletes still had to go to a lab and "fake" actual activities to get feedback. Now they can actually get real data on the field thanks to wearable technology.

Virtual Reality:

Samsung released the Gear VR in partnership with Oculus. These goggles only work with a Galaxy Note 4 mounted in the front. The gaming experiences with these VR devices are amazing. But they are also exploring other cases like virtual tourism and virtual movie experiences. They also released a 3D 360+spherical view camera called "Project Beyond."

IoT – Home Automation:

Samsung is betting big on IoT and Home Automation, and they are putting their money where their mouth is by acquiring SmartThings. The SmartThings platform is open sourced and has the ability to integrate with a myriad of other home automation products. They showcased a smart home powered by the SmartThings platform.

Mobile Innovation: 

I actually really like their new Galaxy Note Edge phablet. Samsung is showing true innovation here with the "edge" part of the device. It has its own SDK and it feels great in the hand!

Overall I’m pretty impressed with what Samsung is doing. It seems like their spaghetti-on-the-wall approach (throwing a bunch spaghetti and see what sticks) is starting to pay off.  Their whole UX across devices looks seamless. And in my humble approach they are getting ready to take off on their own without having to use Android for their mobile devices. Tizen keeps maturing, but I shall leave that for another post!

Please feel free to share your experience with Samsung devices as well!

Asteroid Hackathon – The Winning Team

Oracle AppsLab - Mon, 2014-11-17 09:57

Editorial Note: This is a guest post by friend of the ‘Lab and colleague DJ Ursal. Also be sure to check out our Hackathon entry here:


EchoUser (@EchoUser), in partnership with SpaceGAMBIT, Maui Makers, the Minor Planet Center, NASA, the SETI Institute, and Further by Design, hosted an Asteroid Hackathon. The event was in response to the NASA Grand Challenge, “focused on finding all asteroid threats to human populations and knowing what to do about them.”

I had a wonderful opportunity to participate in the Asteroid Hackathon last week. My team name was NOVA. Our team comprised 4 members: DJ Ursal, Kris Robison, Daniel Schwartz, and Raj Krishnamurthy.

We were given live data from NASA and the Minor Planet site and literally just had 5 hours to put together a working prototype and solution to the asteroid big data problem. We created a web application (which works not only on your Mac or PC but also on your iPad and the latest Nexus 7 Android devices) to help scientists, astronomers and anyone who is interested in asteroids discover, learn and share information in a fun and interactive way.


Our main theme was Finding Asteroids Before They Find Us. The goal was to help discover, learn and share asteroid information to increase awareness within the community. We created an interactive web app that allowed users to make use of chart filters to find out about the risk of possible future impacts with Earth, the distance of the asteroids to Earth, and the absolute brightness and rotation of each asteroid. It allowed users to click and drag on any chart to filter, so that they could transform the filters in a multidimensional way in order to explore, discover interesting facts, and share data on asteroids with friends and the community. We made use of Major Tom, the astronaut referenced in David Bowie's song "Space Oddity," which depicts an astronaut who casually slips the bonds of the world to journey beyond the stars. Users could post questions to Major Tom and could also play his song.

The single most important element in WINNING this hackathon strategically was team composition: having a team that works together effectively. Collaboration and communication were two of the most critical personal skills demanded of all members, as time was limited and communication and coordination of the utmost importance.

Winning TEAM NOVA: DJ Ursal, Kris Robison, Daniel Schwartz, Raj Krishnamurthy

November 20: ICA Fluor Taleo Reference Forum (Spanish)

Linda Fishman Hoyle - Mon, 2014-11-17 09:49

Join us for an Oracle Taleo Customer Reference Forum on Thursday, November 20, 2014, at 2:00 p.m. CDT with ICA Fluor, addressed to the Spanish-speaking communities. ICA Fluor is a joint venture between the Mexican construction and engineering company ICA and Fluor Corporation, one of the world’s largest engineering, procurement, and construction companies.

Since 1993, ICA Fluor has had exclusive rights for both companies for the development of industrial projects in Mexico, Central America, and the Caribbean.

In this session, Mr. Jorge Luis Becerril Sanchez, ICA Fluor's HR Manager, shares why they chose to implement Oracle Taleo Recruiting.

Invite your customers and prospects. You can register now to attend the live Spanish Forum session on Thursday, November 20 at 2:00 p.m. CDT and learn more from ICA Fluor directly.

Asteroid Hackathon

Oracle AppsLab - Mon, 2014-11-17 09:49

A couple weeks ago Jeremy Ashley (@jrwashley), Bill Kraus, Raymond Xie and I participated in the Asteroid Hackathon hosted by @EchoUser. The main focus was “to engage astronomers, other space nerds, and the general public, with information, not just data.”


As you might already know, we here at the AppsLab are big fans of hackathons as well as ShipIt days or FedEx days. The ability to get together, put our collective minds to work, and create something in a short amount of time is truly amazing. It also helps to keep us on our toes, technically and creatively.

Our team built what we called "The Daily Asteroid." The idea behind our project was to highlight the asteroid profile of the current date's closest approach to Earth, or near Earth object (NEO), data. What this means is to show which asteroid is the closest to Earth today. A user could "favorite" today's asteroid and start a conversation with other users about it, using a social network like Twitter.


We also added the ability to change the asteroid properties (size, type, velocity, angle) and play out a scenario to see what damage it could cause if it hit the Earth. And to finish up, we created an Asteroid Hotline using Twilio (@twilio) where you can call to get the latest NEO info using your phone!

We were lucky to be awarded 3rd place, or "Best Engagement," and we had a blast doing it. Considering the small amount of time we had, we came out really proud of our results.

Oracle locks: Identifying blocking sessions

Yann Neuhaus - Mon, 2014-11-17 09:17

When you have sessions blocked on locks, you probably have all the information about the waiters (they call you, and anyway their waiting session is visible in v$session or ASH). But you usually need to gather enough information to identify the blocker.

Here is a query I use to get that quickly, based on V$WAIT_CHAINS

Here is the result I want to get:

session                 wait event                                      minutes USER PRO
----------------------- ----------------------------------------------- ------- ---- ---
 ABCLBP1 '831,54109@1'  SQL*Net message from client                        13.5 SYS  sql
  ABCLBP4 '395,21891@4' enq: TX - row lock contention on TABLE             13.2 SYS  SQL
                          "SYS"."TEST_FRANCK" on rowid AAC0aCAAnAAABSCAAA

I have information about the blocking session, the waiting session, and the type of lock (here TX - row lock); because it is a row lock, I want to know the ROWID of the locked row.

Here is the query I used to get it:

column "wait event" format a50 word_wrap
column "session" format a25
column "minutes" format 9999D9
column CHAIN_ID noprint
column N noprint
column l noprint
with w as (
 select
  chain_id,rownum n,level l
  ,lpad(' ',level,' ')||(select instance_name from gv$instance where inst_id=w.instance)||' '''||w.sid||','||w.sess_serial#||'@'||w.instance||'''' "session"
  ,lpad(' ',level,' ')||w.wait_event_text ||
   case
    when w.wait_event_text like 'enq: TM%' then
     ' mode '||decode(w.p1 ,1414332418,'Row-S' ,1414332419,'Row-X' ,1414332420,'Share' ,1414332421,'Share RX' ,1414332422,'eXclusive')
      ||( select ' on '||object_type||' "'||owner||'"."'||object_name||'" ' from all_objects where object_id=w.p2 )
    when w.wait_event_text like 'enq: TX%' then
     ( select ' on '||object_type||' "'||owner||'"."'||object_name||'" on rowid '
         ||dbms_rowid.rowid_create(1,o.data_object_id,f.relative_fno,w.row_wait_block#,w.row_wait_row#)
       from all_objects o,dba_data_files f
       where o.object_id=w.row_wait_obj# and w.row_wait_file#=f.file_id )
   end "wait event"
  , w.in_wait_secs/60 "minutes"
  , s.username , s.program
 from v$wait_chains w join gv$session s on (s.sid=w.sid and s.serial#=w.sess_serial# and s.inst_id=w.instance)
 connect by prior w.sid=w.blocker_sid and prior w.sess_serial#=w.blocker_sess_serial# and prior w.instance = w.blocker_instance
 start with w.blocker_sid is null
)
select * from w where chain_id in (select chain_id from w group by chain_id having max("minutes") >= 1 and max(l)>1 )
order by n
/

This query retrieves the wait chains where a session is waiting for more than one minute on a table lock (TM) or row lock (TX) .

When it is a table lock (TM), I get the locked object_id from the P2 parameter, in order to know the table name.

When it is a row lock, I get the table and rowid from V$SESSION. Note that I have to join with dba_data_files in order to convert the absolute file_id to a relative one, and to join to dba_objects in order to convert the object_id to the data_object_id - in order to build the ROWID.

More information about ROWID, relative file number and data object id is in my previous post: From 8.0 extended rowid to 12c pluggable db: Why Oracle Database is still a great software
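
As a usage note: once the blocker is identified and confirmed to be expendable, the DBA can kill it. A hypothetical follow-up based on the sample output above (blocker sid=831, serial#=54109 on instance 1; the @instance suffix makes it RAC-aware):

alter system kill session '831,54109,@1';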


Think Stats, 2nd Edition Exploratory Data Analysis By Allen B. Downey; O'Reilly Media

Surachart Opun - Mon, 2014-11-17 08:15
There are lots of Python data analysis books. This might be a good one to help readers perform statistical analysis with programs written in Python: Think Stats, 2nd Edition Exploratory Data Analysis by Allen B. Downey (@allendowney).
This second edition of Think Stats includes the chapters from the first edition, many of them substantially revised, and new chapters on regression, time series analysis, survival analysis, and analytic methods. Additionally, it uses pandas, SciPy, and StatsModels in Python. The author developed this book using Anaconda from Continuum Analytics, and readers should use it too, as that will make things easy for them. Anyway, I tested on Ubuntu and installed the pandas, NumPy, SciPy, StatsModels, and matplotlib packages. This book has 14 chapters covering the processes the author works through with a dataset. It's for intermediate readers: readers should know how to program (the book uses Python) and have some skill in mathematics and statistics.
Each chapter includes exercises that readers can practice to deepen their understanding. Free Sampler
  • Develop an understanding of probability and statistics by writing and testing code.
  • Run experiments to test statistical behavior, such as generating samples from several distributions.
  • Use simulations to understand concepts that are hard to grasp mathematically.
  • Import data from most sources with Python, rather than rely on data that’s cleaned and formatted for statistics tools.
  • Use statistical inference to answer questions about real-world data.
surachart@surachart:~/ThinkStats2/code$ pwd
surachart@surachart:~/ThinkStats2/code$ ipython notebook  --ip= --pylab=inline &
[1] 11324
surachart@surachart:~/ThinkStats2/code$ 2014-11-17 19:39:43.201 [NotebookApp] Using existing profile dir: u'/home/surachart/.config/ipython/profile_default'
2014-11-17 19:39:43.210 [NotebookApp] Using system MathJax
2014-11-17 19:39:43.234 [NotebookApp] Serving notebooks from local directory: /home/surachart/ThinkStats2/code
2014-11-17 19:39:43.235 [NotebookApp] The IPython Notebook is running at:
2014-11-17 19:39:43.236 [NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
2014-11-17 19:39:43.236 [NotebookApp] WARNING | No web browser found: could not locate runnable browser.
2014-11-17 19:39:56.120 [NotebookApp] Connecting to: tcp://
2014-11-17 19:39:56.127 [NotebookApp] Kernel started: f24554a8-539f-426e-9010-cb3aa3386613
2014-11-17 19:39:56.506 [NotebookApp] Connecting to: tcp://
2014-11-17 19:39:56.512 [NotebookApp] Connecting to: tcp://
2014-11-17 19:39:56.516 [NotebookApp] Connecting to: tcp://
Book: Think Stats, 2nd Edition Exploratory Data Analysis
Author: Allen B. Downey (@allendowney)
Written By: Surachart Opun
Categories: DBA Blogs

Plan puzzle

Jonathan Lewis - Mon, 2014-11-17 07:43

I was in Munich a few weeks ago running a course on Designing Optimal SQL and Troubleshooting and Tuning, but just before I flew in to Munich one of the attendees emailed me with an example of a statement that behaved a little strangely and asked me if we could look at it during the course.  It displays an odd little feature, and I thought it might be interesting to write up what I did to find out what was going on. We’ll start with the problem query and execution plan:

select     s section_size,
           max(program_id) ,avg(program_id)
from       fullclones_test
cross join  (select distinct section_size s from fullclones_test)
where      section_size = (select min(section_size) from fullclones_test)
and        clone_size >= s
group by   s
order by   1; 

Since I found this a little difficult to follow (with repetitions of the same table name, and column aliases switching back and forth) I did a little cosmetic modification; all I’ve done is add table aliases and query block names, then arranged the text for my visual benefit. The aliases and query block names can help when dissecting the anomaly.

select
	/*+ qb_name(main) */
	ftv.s	section_size,
	max(ft1.program_id), avg(ft1.program_id)
from
	fullclones_test	ft1
cross join
	(
	select	/*+ qb_name(inline) */
		distinct ft2.section_size s
	from	fullclones_test	ft2
	)	ftv
where	ft1.section_size = (
		select 	/*+ qb_name(subq) */
			min(ft3.section_size)
		from	fullclones_test	ft3
	)
and	ft1.clone_size >= ftv.s
group by
	ftv.s
order by
	ftv.s
;

This query ran reasonably quickly (about half a second), producing the following execution plan:

| Id  | Operation              | Name            | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
|   0 | SELECT STATEMENT       |                 |      1 |        |   404 (100)|      4 |00:00:00.05 |    1116 |       |       |          |
|   1 |  SORT GROUP BY NOSORT  |                 |      1 |      5 |   404   (6)|      4 |00:00:00.05 |    1116 |       |       |          |
|   2 |   MERGE JOIN           |                 |      1 |  48637 |   299   (7)|  50361 |00:00:00.58 |    1116 |       |       |          |
|   3 |    SORT JOIN           |                 |      1 |      5 |   114  (11)|      5 |00:00:00.02 |     372 |  2048 |  2048 | 2048  (0)|
|   4 |     VIEW               |                 |      1 |      5 |   114  (11)|      5 |00:00:00.02 |     372 |       |       |          |
|   5 |      HASH UNIQUE       |                 |      1 |      5 |   114  (11)|      5 |00:00:00.02 |     372 |  5894K|  3254K|  884K (0)|
|   6 |       TABLE ACCESS FULL| FULLCLONES_TEST |      1 |  99999 |   105   (3)|  99999 |00:00:00.31 |     372 |       |       |          |
|*  7 |    SORT JOIN           |                 |      5 |  20000 |   185   (4)|  50361 |00:00:00.16 |     744 |   619K|   472K|  550K (0)|
|*  8 |     TABLE ACCESS FULL  | FULLCLONES_TEST |      1 |  20000 |   106   (4)|  20076 |00:00:00.09 |     744 |       |       |          |
|   9 |      SORT AGGREGATE    |                 |      1 |      1 |            |      1 |00:00:00.01 |     372 |       |       |          |
|  10 |       TABLE ACCESS FULL| FULLCLONES_TEST |      1 |  99999 |   105   (3)|  99999 |00:00:00.29 |     372 |       |       |          |

Query Block Name / Object Alias (identified by operation id):
   1 - SEL$071BB01A
   4 - INLINE       / FTV@SEL$1
   5 - INLINE   
   6 - INLINE       / FT2@INLINE   
   8 - SEL$071BB01A / FT1@SEL$1
   9 - SUBQ 
  10 - SUBQ         / FT3@SUBQ

Predicate Information (identified by operation id):
   7 - access("FT1"."CLONE_SIZE">="FTV"."S")
   8 - filter("FT1"."SECTION_SIZE"=)

As you can see by comparing the block name / object alias information, we can identify a single full tablescan being executed at line 9 to produce the min(section_size) in the subquery.

We can also see that the “select distinct” executes at lines 5/6 to produce 5 rows which are then joined with a merge join to the first full tablescan of t1.

If you’re wondering about the appearance of a sel$1 despite my efforts to name every query block, that’s the (almost) inevitable side effect of using ANSI syntax – virtually every join after the first two tables will introduce a new (unnameable) query block to introduce the next table.

Now here’s the anomaly: if we eliminate the avg() from the select list we’re going to produce a result that ought to require less work – but look what happens:

| Id  | Operation              | Name            | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
|   0 | SELECT STATEMENT       |                 |      1 |        | 10802 (100)|      4 |00:02:48.83 |    1116 |       |       |          |
|   1 |  SORT GROUP BY NOSORT  |                 |      1 |      5 | 10802  (94)|      4 |00:02:48.83 |    1116 |       |       |          |
|   2 |   MERGE JOIN           |                 |      1 |    972M| 10697  (95)|   1007M|03:21:28.41 |    1116 |       |       |          |
|   3 |    SORT JOIN           |                 |      1 |  99999 |   380   (4)|  80042 |00:00:00.39 |     372 |  2037K|   674K| 1810K (0)|
|   4 |     TABLE ACCESS FULL  | FULLCLONES_TEST |      1 |  99999 |   105   (3)|  99999 |00:00:00.26 |     372 |       |       |          |
|*  5 |    SORT JOIN           |                 |  80042 |  20000 |   185   (4)|   1007M|00:57:11.13 |     744 |   619K|   472K|  550K (0)|
|*  6 |     TABLE ACCESS FULL  | FULLCLONES_TEST |      1 |  20000 |   106   (4)|  20076 |00:00:00.11 |     744 |       |       |          |
|   7 |      SORT AGGREGATE    |                 |      1 |      1 |            |      1 |00:00:00.01 |     372 |       |       |          |
|   8 |       TABLE ACCESS FULL| FULLCLONES_TEST |      1 |  99999 |   105   (3)|  99999 |00:00:00.28 |     372 |       |       |          |

Query Block Name / Object Alias (identified by operation id):
   1 - SEL$6B65F52B
   4 - SEL$6B65F52B / FT2@INLINE
   6 - SEL$6B65F52B / FT1@SEL$1
   7 - SUBQ
   8 - SUBQ         / FT3@SUBQ

Predicate Information (identified by operation id):
   5 - access("FT1"."CLONE_SIZE">="FT2"."SECTION_SIZE")
   6 - filter("FT1"."SECTION_SIZE"=)

Ignore the timings from lines 2 and 5 – I was using the hint gather_plan_statistics to collect the rowsource execution stats, and those lines are showing a massive sampling error. The query took about 7 minutes 30 seconds to run. The key difference is that line 4 shows that the “select distinct” is NOT aggregated early – the optimizer has used complex view merging to “join then aggregate” rather than “aggregate then join”. As you can see, this was a bad choice and the join has produced over a billion (US) rows at line 2 which then have to be aggregated down to just 4 rows in line 1.

The question then is why? If I put a /*+ no_merge */ hint in the query block named “inline” the optimizer accepts the hint and goes back to the plan that aggregates early and runs very quickly – so it’s not a question of the optimizer bypassing some mechanism to avoid getting the wrong answer. I think the only option available for further investigation at this point is to examine the 10053 (optimizer) trace file to see what’s going on.
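
For reference, here is the shape of the hinted inline view – a sketch based on the rewritten query above:

	(
	select	/*+ qb_name(inline) no_merge */
		distinct ft2.section_size s
	from	fullclones_test	ft2
	)	ftv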

From the (12c) trace file where we select the avg() we see the following lines:

OJE: Begin: find best directive for query block INLINE (#0)
OJE: End: finding best directive for query block INLINE (#0)
CVM: CBQT Marking query block INLINE (#0) as valid for CVM.
CVM:   Not Merging INLINE (#0) into SEL$1 (#0) due to CBQT directive.

From the equivalent position in the trace file where we select only the max() we see the lines:

OJE: Begin: find best directive for query block INLINE (#0)
OJE: End: finding best directive for query block INLINE (#0)
CVM:   Merging SPJ view INLINE (#0) into SEL$1 (#0)

It’s always hard to decide exactly WHY things happen – but it looks as if the optimizer merges the view heuristically in the max() case “because it can”, but has a heuristic (internal directive) that stops it from merging in the avg() case. What a good job we have hints!


In cases like this it’s always possible that there’s a generic case that might produce wrong results even though the specific case won’t – so it’s worth spending a little time thinking about how the wrong results might appear. It’s also worth keeping hold of the SQL as a test case because if there’s a generic block in place to handle specific cases you may find that future enhancements allow the block to be lifted for some cases, and it’s nice to be able to check for such cases as you upgrade.

On the other hand, you can get back to the efficient plan if you change the inline view to be:

        select  /*+ qb_name(inline) */
                ft2.section_size s , count(*) ct
        from    fullclones_test ft2
        group by
                ft2.section_size
        )       ftv

That’s just a little bit surprising – but I’m not going to pursue this one any further, beyond noting that there are some interesting anomalies available with inline aggregate views, even in the latest 12c release.

Footnote 2:

If you want to experiment further, here’s the SQL to generate the data set:

create table fullclones_test (
        program_id      number(5,0),
        start_line      number,
        clone_id        number(5,0),
        clone_line      number,
        clone_size      number,
        range_start     number,
        section_size    number(4,0)
);

insert into fullclones_test (
        program_id, start_line, clone_id, clone_line,
        clone_size, range_start, section_size
)
connect by
        level <100000


exec dbms_stats.gather_table_stats(user,'fullclones_test')

Finally, for consistently reproducible results I had engineered my system stats as follows:


What Mode of Oracle 12c Unified Auditing Are You Using and Default Auditing Policies?

Continuing our blog series on Oracle 12c Unified Auditing, how do you know which mode of Unified Auditing you are using? Use the following SQL –

SELECT VALUE FROM V$OPTION WHERE PARAMETER = 'Unified Auditing';

The result will be TRUE or FALSE. If TRUE, the database is using PURE Unified Auditing. If FALSE, the database is using Mixed Mode, which is the Oracle 12c default. Remember that V$OPTION shows what database options are installed, and V$PARAMETER shows the startup parameters for the options which have been installed. Unified Auditing is enabled by being installed and not by being configured in V$PARAMETER.

Unified Auditing is configured through policies. If Oracle 12c tenant databases (PDBs) are being used, these policies can be applied to common objects in all PDBs or to individual PDBs. The table below shows the policies installed and/or enabled by default –

Unified Audit Policies Installed With Oracle 12c

Policy Name              Default Enabled  Description
-----------------------  ---------------  -----------------------------------------------------------
ORA_SECURECONFIG         Yes              Secure configuration audit options
ORA_RAS_POLICY_MGMT      No               Oracle Real Application Security administrative actions on
                                          application users, roles, and policies
ORA_RAS_SESSION_MGMT     No               Run-time Oracle Real Application Security session actions
                                          and namespace actions
ORA_ACCOUNT_MGMT         No               Commonly used user account and privilege settings for
                                          create user, role, and privilege grants
ORA_DATABASE_PARAMETER   No               Audits commonly used Oracle Database parameter settings,
                                          e.g., the initialization file (spfile) changes
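
Additional policies are created and enabled with the same mechanism; a minimal sketch of the syntax (the policy name below is hypothetical, not one of the defaults):

-- define a policy auditing logon actions, then enable it for failed logons only
CREATE AUDIT POLICY demo_logon_policy ACTIONS LOGON;
AUDIT POLICY demo_logon_policy WHENEVER NOT SUCCESSFUL;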

To query what policies have been defined you may use –

SELECT * FROM AUDIT_UNIFIED_POLICIES;

To query what policies have been enabled you may use –

SELECT * FROM AUDIT_UNIFIED_ENABLED_POLICIES;
If you have questions, please contact us at


More information on Unified Auditing can be found here:

Tags: Auditing, Oracle Database
Categories: APPS Blogs, Security Blogs

12.1.0.2 on AIX

Laurent Schneider - Mon, 2014-11-17 04:22

just released today

only in Enterprise Edition at the moment, and now available on HPUX, zLinux and AIX

#oracle12c #db12102 is out for a bunch of platform #aix #os390 #hpux #zlinux

— laurentsch (@laurentsch) November 17, 2014

This is the first and last patchset for 12cR1

#oracle 12.1.0.2 is the last patch set for Release 12.1.

— laurentsch (@laurentsch) October 9, 2014

My planning for DOAG 2014

Yann Neuhaus - Mon, 2014-11-17 01:53

I'm quickly checking the planning for DOAG these 3 days and here is the list of sessions I would like to attend.

There are still a lot of interesting ones even if my choice is limited by the language (I would like to understand German, but I'm limited to sessions in English). And I still have some concurrency issues to solve because I cannot be at two places at the same time.

PeopleSoft and Docker's value proposition

Javier Delgado - Sun, 2014-11-16 12:58
If you haven't heard yet about Docker and/or container technologies, you soon will. Docker has made one of the biggest impacts in the IT industry in 2014. Since the release of its 1.0 version this past June, it has captured the attention of many big IT vendors, including Google, Microsoft and Amazon. As far as I'm aware, Oracle has not announced any initiative with Docker, except for the Oracle Linux container. Still, Docker can be used with PeopleSoft, and it can actually simplify your PeopleSoft system administration. Let's see how.

What is Container Technology?
Docker is an open platform to build, ship, and run distributed applications. Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments. As a result, IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud.

In a way, it is similar to virtualization technologies like VMWare or Virtualbox where you can get an image of a machine and run it anywhere you have the player installed. Docker is similar except that it just virtualizes the application and its dependencies, not the full machine.

Docker virtual machines are called containers. They run as an isolated process in userspace on the host operating system, sharing the kernel with other containers. Thus, it enjoys the resource isolation and allocation benefits of VMs but is much more portable and efficient.

Docker uses a layered file system for its containers, in a way that they can be updated by just including the changes since the last update. This greatly reduces the volume of information that needs to be shipped to deliver an update.

How can it be used with PeopleSoft?
As we have seen, Docker containers are much easier to deploy than an entire virtual machine. This means that activities such as installations can be greatly simplified. All you need is to have Docker installed and then download the PeopleSoft container. Of course, this requires that you first do an installation within a Docker container, but this is not more complex than doing a usual installation; it just requires some Docker knowledge in order to take advantage of all its features. From my point of view, if you are doing a new installation, you should seriously consider Docker. At BNB we have prepared containers with the latest PeopleSoft HCM and FSCM installations so we can quickly deploy them to our customers.

Also, when you make a change to a Docker container, just the incremental changes are applied to existing running instances. This poses a great advantage when you apply a patch or run a PeopleTools upgrade. If you want to apply the patches to a new environment, you just need to make sure that you apply the latest container changes on all the servers running the environment.

Isolation between running instances is also a major advantage when you have multiple environments on the same server. Suppose you want to apply the latest Tuxedo patch just in the Development environment, which coexists with other environments on the same server. Unless you had one Tuxedo installation for each environment (which is possible but normally unlikely), you would need to go ahead and hope the patch did not break anything (to be honest, this happens very rarely with Tuxedo, but some other product patches are not so reliable). If you have a separate container for the Development environment, you can apply the patch just to it and later deploy the changes to the rest of the environments.

Last but not least, the reduced size of Docker containers compared to an entire virtual machine greatly simplifies the distribution to and from the cloud. Docker is of great help if you want to move your on premise infrastructure to the cloud (or the other way around). This is even applicable when you want to keep a contingency system in the cloud, as delivering the incremental container changes made to your on premise system requires less time than using other methods.

Not only that, Docker can be hosted in most operating systems. This means that moving a container from one public cloud facility to another is significantly easier than it was with previous technologies. Exporting a virtual machine from Amazon EC2 to Google Cloud was quite complex (and under some circumstances even not possible).

But as any other technology, Docker is no panacea. It has some limitations that may restrict its adoption for your PeopleSoft installation. The main ones I can think of are:

  • Currently there is no support for containers using Windows as a guest operating system. This is not surprising, as Docker is intimately linked to Unix/Linux capabilities. Still, Microsoft has announced a partnership with Docker that will hopefully help to overcome this limitation. For the moment, you will not be able to use Docker for certain PeopleSoft components, such as the PSNT Process Scheduler, which is bad news if you are still using Crystal Reports or Winword reports. Also, if you are using Microsoft SQL Server as your database, this may be a major limitation.

  • Docker is most useful when used for applications, but not data. Logs, traces and databases should normally be kept out of the Docker container.

Although container technology is still taking its initial steps, significant benefits are evident for maintaining and deploying applications, PeopleSoft included. Surely enough, the innovation coming in this area will have a big impact on the way PeopleSoft systems are administered.

PS: I would like to thank Nicolás Zocco for his invaluable research on this topic, particularly in installing the proof of concept using PeopleSoft and Docker.

12c SQL Plan Directive: state has changed between 12.1.0.1 and 12.1.0.2

Yann Neuhaus - Sun, 2014-11-16 09:38

SQL Plan Directives have been introduced in 12c. If you have scripts that check their states (and I have that, as I prefer to monitor closely the features that are new) you have probably seen a difference when going from 12.1.0.1 - the first release of 12c and the only one available yet in Standard Edition - to 12.1.0.2 - the first patchset. I'll explain here what the SQL Plan Directive states are and how they changed.

When a SQL Plan Directive is created, its state is NEW. It means that a misestimate has been encountered but the reason has not yet been determined. Remember that the directive is created only at execution time, so very limited information is available, except the fact that A-Rows is different from E-Rows.

On a subsequent optimization (same or different statement that uses the same table and same column predicates) the optimizer sees the SQL Plan Directive and can update it with more information. If the reason for the misestimation is that some statistics are missing, then the state of the directive is changed from NEW to MISSING_STATS.

MISSING_STATS directives lead to short term and long term solutions:

  • each new query will compensate for the missing stats by gathering more statistics with Dynamic Sampling
  • the next dbms_stats gathering will gather extended statistics to definitely fix the issue (see the sketch after this list)
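
As a sketch of that long-term fix (the DEMO owner and table name are hypothetical), the column group created by the next gathering shows up as an extension:

exec dbms_stats.gather_table_stats('DEMO','DEMO_TABLE')
-- the directive-driven column group appears as an extended statistic
select extension_name, extension
from   dba_stat_extensions
where  owner='DEMO' and table_name='DEMO_TABLE';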

Once the latter is done, we can expect to have good estimations without Dynamic Sampling anymore, thanks to the extended statistics. But that has to be checked. So the next query optimization will check it and update the SQL plan Directive accordingly:

  • HAS_STATS to show that statistics (extended statistics here) are now sufficient to get correct estimations
  • PERMANENT to show that the misestimation is still there and extended statistics have to be ignored because they were calculated for an issue that they do not solve.
Finally, the HAS_STATS SQL Plan Directives are purged after some weeks as they are not needed anymore - the issue being solved definitively.

So what has changed in 12.1.0.2?

There are only two states now:
  • 'USABLE', which covers 'NEW', 'MISSING_STATS' and 'PERMANENT': a directive is there to be evaluated by the optimizer, but the issue is not solved yet.
  • 'SUPERSEDED' when it has been solved (the 'HAS_STATS' case) or it is redundant with another directive, which means that the issue is solved somewhere else.
This is a simplification, but if you want the same level of detail that you had in 12.1.0.1, you can get it from the 'internal state', which is exposed as XML in the NOTES column.

Here is an example of two USABLE states:
SQL> select directive_id,type,state,reason,notes from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO' );

        DIRECTIVE_ID TYPE              STATE         REASON
-------------------- ---------------- ---------- ------------------------------------


which is 'NEW' and  
        DIRECTIVE_ID TYPE              STATE         REASON
-------------------- ---------------- ---------- ------------------------------------


which is 'MISSING_STATS'.   And a 'SUPERSEDED' once dbms_stats has run:  
        DIRECTIVE_ID TYPE              STATE         REASON
-------------------- ---------------- ---------- ------------------------------------


which is 'HAS_STATS'
Note that the xml tags were eaten by my blog editor. They are: internal_state,redundant,spd_text
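
If you want to query that detail directly, here is a sketch (assuming the root tag of the NOTES XML is spd_note, as in my example above):

-- expose the internal state that 12.1.0.2 folds into the NOTES column
select directive_id, state,
       extractvalue(notes,'/spd_note/internal_state') internal_state
from   dba_sql_plan_directives
where  directive_id in (select directive_id from dba_sql_plan_dir_objects where owner='DEMO');
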
We do full demos of SQL Plan Directives in our 'Oracle 12c new features workshop' and 'Oracle performance tuning workshop'. It's a great feature that brings the CBO to another level of intelligence. And there are some misconceptions about them. Some people think that they store statistics, but that's wrong. Statistics come from cardinality feedback, dynamic sampling, or object statistics. There is no need for another component to store them. The only thing that is stored by SQL Plan Directives is their state. Which makes the state very important information - and the reason for this blog post.