
Feed aggregator

node-oracledb 1.2.0 is on NPM (Node.js add-on for Oracle Database)

Christopher Jones - Fri, 2015-09-25 08:28

Version 1.2 of node-oracledb, the add-on for Node.js that powers high performance Oracle Database applications, is available on NPM.

A lot of good changes have been made.

Our thanks to Bruno Jouhier from Sage for his work on adding RAW support and for fixes for LOB stability. Thanks also go to Bill Christo for pushing us on some Installation topics - look out for his full article on Windows Installation that OTN will be publishing soon.

An annotated list of the changes in this release:

  • Added support for RAW data type.

    Bruno contributed a patch to add support for the Oracle RAW data type. This data type maps to a JavaScript Buffer for inserts, queries and for binding to PL/SQL. Binding RAW for DML RETURNING is not supported. There is an example showing inserting and querying in examples/raw1.js, and a minimal sketch follows the change list below.

  • Added a type property to the Lob class to distinguish CLOB and BLOB types.

    This small change will allow introspection on Lob instances so applications can more easily decide how to handle the data.

  • Changed write-only attributes of Connection objects to work with console.log().

    The Connection object had three write-only attributes (action, module, clientId) used for end-to-end tracing and mid-tier authentication. Because they were write-only, anyone doing a simple console.log() on the connection object got a confusing message, often leading to the impression that the connection had failed. The attributes are write-only for the reasons described in the documentation. With the change in v1.2, a Connection object can now be displayed. The three attributes will show as null (see the doc) while the non-write-only attribute stmtCacheSize will show an actual value. With hindsight the three attributes should have been set via a setter, but they aren't.

  • Added a check to make sure maxRows is greater than zero for non-ResultSet queries.

    If you want to get metaData for a query without getting rows, specify resultSet:true and prefetchRows:0 in the query options (and remember to close the ResultSet).

  • Improved installer messages for Oracle client header and library detection on Linux, OS X and Solaris.

    Some upfront checks now aid detection of invalid environments earlier.

  • Optimized CLOB memory allocation to account for different database-to-client character set expansions.

    In line with the optimization for string buffers in v1.1, users of AL32UTF8 databases will see reduced memory consumption when fetching CLOBs.

  • Fixed a crash while reading a LOB from a closed connection.

  • Fixed a crash when selecting multiple rows with LOB values.

    Another fix by Bruno.

  • Corrected the order of Stream 'end' and 'close' events when reading a LOB.

    Bruno was busy this release and sent in a pull request for this too.

  • Fixed AIX-specific REF CURSOR related failures.

  • Fixed intermittent crash while setting fetchAsString, and incorrect output while reading the value.

  • Added a check to return an NJS error when an invalid DML RETURN statement does not give an ORA error.

  • Removed non-portable memory allocation for queries that return NULL.

  • Fixed encoding issues with several files that caused compilation warnings in some Windows environments.

  • Made installation halt sooner for Node.js versions currently known to be unusable.

  • Fixed a typo in examples/dbmsoutputgetline.js.
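
As referenced in the RAW change above, here is a minimal sketch of inserting and querying a RAW column with the node-oracledb 1.2 callback API. The credentials, connect string and table name myraw are illustrative assumptions only; see examples/raw1.js for the full, supported example.

// Sketch: a RAW column binds to, and is fetched as, a Node.js Buffer.
// Connection details and the table "myraw" are assumptions for illustration.
var oracledb = require('oracledb');

oracledb.getConnection(
  { user: 'hr', password: 'welcome', connectString: 'localhost/orcl' },
  function (err, connection) {
    if (err) { console.error(err); return; }
    var data = new Buffer([0xde, 0xad, 0xbe, 0xef]);      // data to insert into the RAW column
    connection.execute(
      "INSERT INTO myraw (r) VALUES (:r)",
      { r: data },                                         // Buffer bind -> RAW
      { autoCommit: true },
      function (err) {
        if (err) { console.error(err); return; }
        connection.execute("SELECT r FROM myraw", function (err, result) {
          if (err) { console.error(err); return; }
          console.log(result.rows[0][0]);                  // logs the fetched Buffer
        });
      });
  });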

Issues and questions about node-oracledb can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line!

Installation instructions are here.

Node-oracledb documentation is here.

Log Buffer #442: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-09-25 07:24

This Log Buffer Edition collects and then showers around some of the information-rich blog posts from Oracle, SQL Server and MySQL.


Oracle:

  • Generic Java Server for Static content and Directory Listing using API mode or command line mode.
  • OEM agents on each host upload data to your management servers every few minutes.
  • exitcommit … or your career down the drain.
  • Encryption is the Easy Part; Managing those Keys is Difficult.
  • Managing the OBIEE BI Server Cache from ODI 12c.

SQL Server:

  • Email addresses are very prevalent in IT systems and often used as a natural primary key. The repetitive storage of email addresses in multiple tables is a bad design choice. Following is a design pattern to store email addresses in a single table and to retrieve them efficiently.
  • OpenStack: The Good and Not-So-Good Bits.
  • By defining server- and database-level audits, you can record just about any kind of event that occurs in SQL Server, which can be an invaluable source of security troubleshooting and forensic information when security breaches occur.
  • Koen Verbeeck shows how to easily extract metadata from files in your directories with Power Query.
  • Implementing John Conway’s Game of Life in Microsoft SQL Server.


MySQL:

  • Oracle HA, DR, data warehouse loading, and license reduction through edge apps.
  • Easy Load-balancing and High-availability using MySQL Router.
  • When hosting data on Amazon turns bloodsport.
  • MySQL 5.7 Labs — Using Loopback Fast Path With Windows 8/2012.
  • How to evaluate if MySQL table can be recovered.


Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

Categories: DBA Blogs

Using the BI Server Metadata Web Service for Automated RPD Modifications

Rittman Mead Consulting - Fri, 2015-09-25 06:33

A little-known new feature of OBIEE 11g is a web service interface to the BI Server. Called the “BI Server Metadata Web Service” it gives a route into making calls into the BI Server using SOAP-based web services calls. Why is this useful? Because it means you can make any call to the BI Server (such as SAPurgeAllCache) from any machine without needing to install any OBIEE-related artefacts such as nqcmd, JDBC drivers, etc. The IT world has been evolving over the past decade or more towards a more service-based architecture (remember the fuss about SOA?) where we can piece together the functionality we need rather than having one monolithic black box trying to do everything. Being able to make use of this approach in our BI deployments is a really good thing. We can do simple things like BI Server cache management using a web service call, but we can also do more funky things, such as actually updating repository variable values in real time – and we can do it from within our ETL jobs, as part of an automated deployment script, and so on.

Calling the BI Server Metadata Web Service

First off, let’s get the web service configured and working. The documentation for the BI Server Metadata Web Service can be found here, and it’s important to read it if you’re planning on using this. What I describe here is the basic way to get it up and running.

Configuring Security

We need to configure security on the web service to define what kind of authentication is required in order to use it. If you don’t do this, you won’t be able to make calls to it. Setting up the security is a case of attaching a security policy in WebLogic Server to the web service (called AdminService) itself. I’ve used oracle/wss_http_token_service_policy, which means that the credentials can be passed through using standard HTTP Basic authentication.

You can do this through Enterprise Manager:

Or you can do it through WLST using the attachWebServicePolicy call.
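As a minimal WLST sketch only: the attachWebServicePolicy argument names and values below (application, module and subject names) are assumptions for illustration, not verified values for the AdminService deployment – take the real names from your own domain and the Oracle WSM WLST command reference.

connect('weblogic', 'Admin123', 't3://biserver:7001')
# All names below are placeholders; only the policyURI comes from the text above
attachWebServicePolicy(application='AdminService', moduleOrCompName='AdminService',
                       moduleType='web', serviceName='AdminService',
                       subjectName='AdminServicePort',
                       policyURI='oracle/wss_http_token_service_policy')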

You also need to configure WSM, adding some entries to the credential store as detailed here.

Testing the Web Service

The great thing about web services is that they can be used from pretty much anywhere. Many software languages will have libraries for making SOAP calls, and if they don’t, it’s just HTTP under the covers so you can brew your own. For my testing I’m using the free version of SoapUI. In anger, I’d use something like curl for a standalone script or hook it up to ODI for integration in the batch.

Let’s fire up SoapUI and create a new project. Web services provide a Web Service Definition Language (WSDL) document that describes what they do and how they work, which is pretty handy. We can pass this WSDL to SoapUI for it to automatically build some sample requests for us. The WSDL for this web service is:

http://biserver:port/AdminService/AdminService?wsdl

Where biserver is your biserver (duh) and port is the managed server port (usually 9704, or 7780).

We’ve now got a short but sweet list of the methods we can invoke:

Expand out callProcedureWithResults and double click on Request 1. This is a template SOAP message that SoapUI has created for you, based on the WSDL.

Edit the XML to remove the parameters section, and just to test things out in procedureName put GetOBISVersion(). Your XML request should look like this:

<soapenv:Envelope xmlns:soapenv="" xmlns:ws="">  
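The namespace URIs and message body have been lost from the snippet above, so here is a sketch of what the full request typically looks like. The ws namespace URI is an assumption – use the value from the sample request SoapUI generated from your WSDL.

<!-- Sketch only: substitute the ws namespace URI from your own WSDL -->
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:ws="[namespace from your WSDL]">
   <soapenv:Header/>
   <soapenv:Body>
      <ws:callProcedureWithResults>
         <procedureName>GetOBISVersion()</procedureName>
      </ws:callProcedureWithResults>
   </soapenv:Body>
</soapenv:Envelope>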

If you try and run this now (green arrow or Cmd-Enter on the Mac) you’ll see the exact same XML appear in the right pane, which is strange… but click on Raw and you’ll see what the problem is:

Remember the security stuff we set up on the server previously? Well as the client we now need to keep our side of the bargain, and pass across our authentication with the SOAP call. Under the covers this is a case of sending HTTP Basic auth (since we’re using oracle/wss_http_token_service_policy), and how you do this depends on how you’re making the call to the web service. In SoapUI you click on the Auth button at the bottom of the screen, Add New Authentication, Type: Basic, and then put your OBIEE server username/password in.

Now you can click submit on the request again, and you should see details of your BI Server version in the response pane (on the right) in the XML view (you may have to scroll to the right to see it).

This simple test is just about validating the end-to-end calling of a BI Server procedure from a web service. Now we can get funky with it….

Updating Repository Variables Programmatically

Repository variables in OBIEE are “global”, in that every user session sees the same value. They’re often used for holding things like “what was the last close of business date”, or “when was the data last updated in the data warehouse”. Traditionally these have been defined as dynamic repository variables with an initialisation block that polled a database query on a predefined schedule to get the current value. This meant that to have a variable showing when your data was last loaded you’d need to (a) get your ETL to update a row in a database table with a timestamp and then (b) write an init block to poll that table to get the value. That polling of the table would have to be as frequent as you needed in order to show the correct value, so maybe every minute. It’s kinda messy, but it’s all we had. Here I’d like to show an alternative approach.

Let’s say we have a repository variable, LAST_DW_REFRESH. It’s a timestamp that we use in report predicates and titles so that when they are run we can show the correct data based on when the data was last loaded into the warehouse. Here’s a rather over-simplified example:

The Title view uses the code:

Data correct as of: @{biServer.variables['LAST_DW_REFRESH']}

Note that we’re referencing the variable here. We could also put it in the Filter clause of the analysis. Somewhat tenuously, let’s imagine we have a near-realtime system where we federate queries across the DW and directly against the OLTP source, and we just want to show data from the last load into the DW:

For the purpose of this example, the report is less important than the diagnostics it gives us when run, in nqquery.log. First we see the inbound logical request:

-------------------- SQL Request, logical request hash:  
SELECT  
   0 s_0,  
   "A - Sample Sales"."Time"."T05 Per Name Year" s_1,  
   "A - Sample Sales"."Base Facts"."1- Revenue" s_2  
FROM "A - Sample Sales"  
WHERE  
("Time"."T00 Calendar Date" <  VALUEOF("LAST_DW_REFRESH"))  

Note the VALUEOF clause. When this is parsed out we get to see the actual value of the repository variable that OBIEE is going to execute the query with:

-------------------- Logical Request (before navigation): [[
    0 as c1 GB,  
    D0 Time.T05 Per Name Year as c2 GB,  
    1- Revenue:[DAggr(F0 Sales Base Measures.1- Revenue by [ D0 Time.T05 Per Name Year] )] as c3 GB  
    D0 Time.T00 Calendar Date < TIMESTAMP '2015-09-24 23:30:00.000'  
OrderBy: c1 asc, c2 asc NULLS LAST

We can see the value through the Administration Tool Manage Sessions page too, but it’s less convenient for tracking in testing:

If we update the RPD online with the Administration Tool (nothing fancy at this stage) to change the value of this static repository variable:


And then rerun the report, we can see the value has changed:

-------------------- Logical Request (before navigation): [[
    0 as c1 GB,  
    D0 Time.T05 Per Name Year as c2 GB,  
    1- Revenue:[DAggr(F0 Sales Base Measures.1- Revenue by [ D0 Time.T05 Per Name Year] )] as c3 GB  
    D0 Time.T00 Calendar Date < TIMESTAMP '2015-09-25 00:30:00.000'  
OrderBy: c1 asc, c2 asc NULLS LAST

Now let’s do this programmatically. First off, the easy stuff, that’s been written about plenty before. Using biserverxmlgen we can create an XUDML version of the repository. Searching through this we can pull out the variable definition:

<Variable name="LAST_DW_REFRESH" id="3031:286125" uid="00000000-1604-1581-9cdc-7f0000010000">  
<Expr><![CDATA[TIMESTAMP '2015-09-25 00:30:00']]></Expr>  
</Variable>

and then wrap it in the correct XML structure and update the timestamp we want to use:

<?xml version="1.0" encoding="UTF-8" ?>  
<Repository xmlns:xsi="">  
<DECLARE>  
        <Variable name="LAST_DW_REFRESH" id="3031:286125" uid="00000000-1604-1581-9cdc-7f0000010000">  
        <Expr><![CDATA[TIMESTAMP '2015-09-26 00:30:00']]></Expr>  
        </Variable>  
</DECLARE>  
</Repository>

(you can also get this automagically by modifying the variable in the RPD, saving the RPD file, and then comparing it to the previous copy to generate a patch file with the Administration Tool or comparerpd)

Save this XML snippet, with the updated timestamp value, as last_dw_refresh.xml. Now to update the value on the BI Server in-flight, first using the OBIEE tool biserverxmlcli. This is available on all OBIEE servers and client installations – we’ll get to web services for making this update call remotely in a moment.

biserverxmlcli -D AnalyticsWeb -R Admin123 -U weblogic -P Admin123 -I last_dw_refresh.xml

Here -D is the BI Server DSN, -U / -P are the username/password credentials for the server, and -R is the RPD password.

Running the analysis again shows that it is now working with the new value of the variable:

-------------------- Logical Request (before navigation): [[
    0 as c1 GB,  
    D0 Time.T05 Per Name Year as c2 GB,  
    1- Revenue:[DAggr(F0 Sales Base Measures.1- Revenue by [ D0 Time.T05 Per Name Year] )] as c3 GB  
    D0 Time.T00 Calendar Date < TIMESTAMP '2015-09-26 00:30:00.000'  
OrderBy: c1 asc, c2 asc NULLS LAST

Getting Funky – Updating RPD from Web Service

Let’s now bring these two things together – RPD updates (in this case to update a variable value, but could be anything), and BI Server calls.

In the above web service example I called the very simple GetOBISVersion. Now we’re going to use the slightly more complex NQSModifyMetadata. This is actually documented, and what we’re going to do is pass across the same XUDML that we sent to biserverxmlcli above, but through the web service. As a side note, you could also do this over JDBC if you wanted (under the covers, AdminService is just a web app with a JDBC connector to the BI Server).

I’m going to do this in SoapUI here for clarity but I actually used SUDS to prototype it and figure out the exact usage.

So as a quick recap, this is what we’re going to do:

  1. Update the value of a repository variable, using XUDML. We generated this XUDML from biserverxmlgen and wrapped it in the correct XML structure.

    We could also have got it through comparerpd or the Administration Tool ‘create patch’ function
  2. Use the BI Server’s NQSModifyMetadata call to push the XUDML to the BI Server online.

    We saw that biserverxmlcli can also be used as an alternative to this for making online updates through XUDML.
  3. Use the BI Server Metadata Web Service (AdminService) to invoke the NQSModifyMetadata call on the BI Server, using the callProcedureWithResults method

In SoapUI create a new request:

Set up the authentication:

Edit the XML SOAP message to remove parameters and specify the basic BI Server call:

<soapenv:Envelope xmlns:soapenv="" xmlns:ws="">  

Now the tricky bit – we need to cram the XUDML into our XML SOAP message. But XUDML is its own kind of XML, and all sorts of grim things happen here if we’re not careful (because the XUDML gets swallowed up into the SOAP message, it all being XML). The solution I came up with (which may not be optimal…) is to encode the XML special characters as entities, which a tool like this does (and if you’re using a client library like SUDS this will happen automagically). So our XUDML, with another new timestamp for testing:

<?xml version="1.0" encoding="UTF-8" ?>  
<Repository xmlns:xsi="">  
<DECLARE>  
        <Variable name="LAST_DW_REFRESH" id="3031:286125" uid="00000000-1604-1581-9cdc-7f0000010000">  
        <Expr><![CDATA[TIMESTAMP '2015-09-21 00:30:00']]></Expr>  
        </Variable>  
</DECLARE>  
</Repository>


&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot; ?&gt; &lt;Repository xmlns:xsi=&quot;;&gt; &lt;DECLARE&gt; &lt;Variable name=&quot;LAST_DW_REFRESH&quot; id=&quot;3031:286125&quot; uid=&quot;00000000-1604-1581-9cdc-7f0000010000&quot;&gt; &lt;Expr&gt;&lt;![CDATA[TIMESTAMP '2015-09-21 00:30:00']]&gt;&lt;/Expr&gt; &lt;/Variable&gt; &lt;/DECLARE&gt; &lt;/Repository&gt;

We’re not quite finished yet. Because this is actually a call (NQSModifyMetadata) nested in a call (callProcedureWithResults) we need to make sure NQSModifyMetadata gets the arguments (the XUDML chunk) through intact, so we wrap it in single quotes – which also need encoding (&apos;):

&apos;&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot; ?&gt; &lt;Repository xmlns:xsi=&quot;;&gt; &lt;DECLARE&gt; &lt;Variable name=&quot;LAST_DW_REFRESH&quot; id=&quot;3031:286125&quot; uid=&quot;00000000-1604-1581-9cdc-7f0000010000&quot;&gt; &lt;Expr&gt;&lt;![CDATA[TIMESTAMP '2015-09-21 23:30:00']]&gt;&lt;/Expr&gt; &lt;/Variable&gt; &lt;/DECLARE&gt; &lt;/Repository&gt;&apos;

and then for final good measure, the single quotes around the timestamp need double-single quoting:

&apos;&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot; ?&gt; &lt;Repository xmlns:xsi=&quot;;&gt; &lt;DECLARE&gt; &lt;Variable name=&quot;LAST_DW_REFRESH&quot; id=&quot;3031:286125&quot; uid=&quot;00000000-1604-1581-9cdc-7f0000010000&quot;&gt; &lt;Expr&gt;&lt;![CDATA[TIMESTAMP ''2015-09-21 23:30:00'']]&gt;&lt;/Expr&gt; &lt;/Variable&gt; &lt;/DECLARE&gt; &lt;/Repository&gt;&apos;

Nice, huh? The WSDL suggests that parameters for these calls can be supplied within the XML message as additional entities, which I wondered might allow for proper encoding, but I couldn’t get it to work (I kept getting java.sql.SQLException: Parameter 1 is not bound).

So, stick this mess of encoding plus twiddles into your SOAP message and it should look like this: (watch out for line breaks; these can break things)

<soapenv:Envelope xmlns:soapenv="" xmlns:ws="">  
         <procedureName>NQSModifyMetadata(&apos;&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot; ?&gt; &lt;Repository xmlns:xsi=&quot;;&gt; &lt;DECLARE&gt; &lt;Variable name=&quot;LAST_DW_REFRESH&quot; id=&quot;3031:286125&quot; uid=&quot;00000000-1604-1581-9cdc-7f0000010000&quot;&gt; &lt;Expr&gt;&lt;![CDATA[TIMESTAMP ''2015-09-21 23:30:00'']]&gt;&lt;/Expr&gt; &lt;/Variable&gt; &lt;/DECLARE&gt; &lt;/Repository&gt;&apos;)</procedureName>  

Hit run, and with a bit of luck you’ll get a ‘nothing to report’ response:

If you look in nqquery.log you’ll see:

[...] NQSModifyMetadata started.  
[...] NQSModifyMetadata finished successfully.

and all-importantly when you run your report, the updated variable will be used:

-------------------- Logical Request (before navigation): [[
    0 as c1 GB,  
    D0 Time.T05 Per Name Year as c2 GB,  
    1- Revenue:[DAggr(F0 Sales Base Measures.1- Revenue by [ D0 Time.T05 Per Name Year] )] as c3 GB  
D0 Time.T00 Calendar Date < TIMESTAMP '2015-09-21 23:30:00.000'  
OrderBy: c1 asc, c2 asc NULLS LAST
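
As an aside, once the request works in SoapUI the same SOAP payload can be sent from a script as part of an ETL batch, for example with curl. The hostname, port, credentials and file name below are illustrative assumptions; save the SOAP envelope shown earlier as modify_metadata_soap.xml first.

# Send the saved SOAP envelope to the AdminService endpoint (all values are placeholders)
curl --silent --header "Content-Type: text/xml;charset=UTF-8" \
--user weblogic:Admin123 \
--data @modify_metadata_soap.xml \
http://biserver:9704/AdminService/AdminService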

If this doesn’t work … well, best of luck. Use nqquery.log and bi_server1.log (where the AdminService writes some diagnostics) to try to trace the issue. Also test calling NQSModifyMetadata from JDBC (or ODBC) directly, and then add in the additional layer of the web service call.

So, in the words of Mr Rittman, there you have it. Programmatically updating the value of repository variables, or anything else, done online, and for bonus points done through a web service call, making it possible to do without any local OBIEE client/server tools.

Categories: BI & Warehousing

Downloading VirtualBox VM “Oracle Enterprise Manager 12cR5”

Marco Gralike - Fri, 2015-09-25 06:11
For some customer training I needed an Oracle Enterprise Manager Cloud Control VirtualBox environment with…

WebLogic / Oracle FMW to RAC Database connection : Using Active GridLink ?

Online Apps DBA - Fri, 2015-09-25 05:40

Oracle WebLogic Server is an application server and one of the important products in Oracle Fusion Middleware. Oracle WebLogic Server connects to a database using a data source, and if your database is Oracle RAC (almost all big enterprises use a RAC database for high availability at the database tier), you can use one of two options to connect:

a) Multi Data Source – This is the old way of connecting Oracle WebLogic Server to a RAC Database

b) GridLink – This is the new and better way of connecting Oracle WebLogic (and FMW products) to a RAC Database

We cover both these methods of connecting WebLogic Server to a RAC Database in our Oracle Fusion Middleware Training (the next batch starts on 17th October 2015; register now for the early bird discount – limited time only). Apart from this we also cover Architecture, High Availability, Disaster Recovery, Patching, Cloning, Installation, OHS-WebLogic Integration, SOA, WebCenter and lots more.

We also provide consulting services for Oracle Fusion Middleware, where we recommended GridLink for WebLogic to RAC database connectivity to one of our customers. The customer wanted to use GridLink with WebLogic 10.3.6 but with the FAN option unchecked and no ONS details.

If you select GridLink, the configuration screen prompts you to provide FAN & ONS details (more on ONS later).



We wanted to discuss here the risk of using GridLink without providing ONS details (or without selecting the Enable FAN option), but before that, here are the features that ONS (Oracle Notification Service) and FAN (Fast Application Notification) provide:

  • Fast Connection Failover (FCF)
  • Runtime Connection Load-Balancing by distributing the runtime work requests to all active Oracle RAC instances, including those rejoining a cluster
  • Graceful shutdown of RAC instances
  • Abort and remove invalid connections from the connection pool

Connections in the pool are actively managed using the real-time information that the connection pool receives from the Oracle RAC Oracle Notification Service (ONS).

According to Bug 18467939, in WebLogic 11g (10.3.6), if you don’t configure ONS and leave the FAN option unchecked then the data source is treated as a generic data source (which means you are effectively using a single-instance database connection and not RAC).

From WebLogic Server 12c (12.1.2), a new Active GridLink Flag in the console is introduced so that it is possible to specify GridLink configuration, even if ONS is not set and FAN enabled is false.

That’s because there is a new 12c driver feature that automatically configures ONS without it being specified.

In WebLogic 11g (10.3.6) this driver is not supported; therefore, an Active GridLink data source is defined as one with both FAN enabled set to true and ONS configured. If either piece of information is missing, it is treated as generic.



Are you using WebLogic or Fusion Middleware with an Oracle RAC Database and not using GridLink?

Contact us if you need any help with your Fusion Middleware implementation or WebLogic to RAC connection.


If you need to learn Oracle Fusion Middleware (Architecture, Installation, High Availability, Disaster Recovery, Patching, Cloning, SSL etc.) then have a look at our Oracle Fusion Middleware Training (the next batch starts on 17th October 2015 – register now for the early bird discount, limited time only).

Note: We are so confident in our workshops that we provide a 100% money back guarantee. In the unlikely case that you are not happy after 2 sessions, just drop us a mail before the third session and we’ll refund your FULL money (or ask any of the hundreds of happy trainees in our private Facebook Group).

Looking for more information or FREE tips on Oracle administration? Then join our newsletter; for topics specific to Oracle Fusion Middleware, leave your name & email here (register for both if you wish to learn about Oracle Apps/Fusion Middleware/Identity & Access Management etc.)


The post WebLogic / Oracle FMW to RAC Database connection : Using Active GridLink ? appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

ORDS... Yes we can!

Dimitri Gielis - Fri, 2015-09-25 03:55
On September 13th I got a nice surprise: an email from Steven Feuerstein with the message: "You have been selected as a finalist in the ORDS category for the 2015 Oracle Database Developer Choice Awards!"

It's always nice to get recognition for your efforts, so thanks so much already for the nomination. I haven't publicised it yet in order to get some up-votes, but I hope that by reaching out to the people who read my blog, I'll gain some more up-votes :)


To be honest, I wondered why I deserved this nomination, especially in a category that I'm probably less "known" for.
After giving this more thought, I remembered that ORDS actually started out very closely linked to APEX. In the early days ORDS was even called the "APEX Listener", and I was one of the early adopters and promoters of using the APEX Listener (now ORDS) in your APEX architecture.

In 2012 I literally travelled around the world (Belgium, the Netherlands, UK, San Francisco, San Antonio, Uruguay, Brazil, Chile) to talk about why you should move to the APEX Listener.

Moving to the APEX Listener from Dimitri Gielis
Looking back at that presentation, it still stands today; you still have different options for your APEX architecture, but we no longer have to convince anyone about the benefits of choosing ORDS. ORDS is now mainstream, widely adopted and proven technology. Unless it's a legacy system, I don't really see any reason anymore why you should not use ORDS in your architecture.

ORDS has evolved a lot over time, and the new name better reflects what the core feature is: "Oracle REST Data Services". REST web services have become so important in the last years, and that is exactly what I've been blogging and talking about lately (see further).

In my presentation on Microsoft Sharepoint and Oracle APEX integration (given in San Francisco, the BeNeLux and Germany) I talk about the architecture and how you get your APEX data into Microsoft Sharepoint by using ORDS. It also works the other way round: by using the Sharepoint APIs, REST web services come into play. When you want to integrate with other systems, ORDS can really help.

Oracle Application Express (APEX) and Microsoft Sharepoint integration from Dimitri Gielis
I haven't blogged much yet about a really interesting R&D project we've been working on in the last months: using wearables to capture sensor and patient data. At the Oracle Mobile day in the Netherlands I did a presentation which explains it in more detail. We developed native smartphone applications that call REST web services, all built in ORDS. ORDS gets or pushes the data into our Oracle Database using JSON. Next to that we have dashboards in APEX to "see" and work with the data. We learned, and still learn, a lot from this project (about the volume of data, the security, etc.), but without ORDS it would not have been possible.

APEX Wearables from Dimitri Gielis
And finally, a product I'm really proud of: APEX Office Print (AOP). I always found it a challenge to get documents out of APEX. I'll do some more blog posts about it in the future, but where APEX is so declarative for building your web applications, it's far from declarative to get documents out in Word, Excel, Powerpoint or PDF (at least without BI Publisher). With APEX Office Print we hope to address that: just one APEX plugin where you define the template and which data you want to use, and presto, you get your document. Behind the scenes we use JSON, and having ORDS makes it really easy to generate it. If you want to know more about the components behind the printing solution (and do it yourself), you'll find that in my presentation about printing through Node.js, which I have presented in different countries and will present at Oracle Open World.

How to make APEX print through Node.js from Dimitri Gielis
Furthermore, if you're interested in reading more about JSON, I've done a series of blog posts and was interviewed by Oracle about it, you find the links here.

Hopefully this post shows you the power of ORDS and gives you some ideas on how best to leverage this wonderful piece of software. If you liked my "ORDS efforts" and want to give me an up-vote in the Database Developer Choice Awards, I'd really appreciate it.

Thanks so much,

Categories: Development

Links for 2015-09-24 []

Categories: DBA Blogs

Partner Webcast – Why and How Your Business changes with Oracle Cloud

We are facing a new “revolution” in the business world. The capabilities that modern technology provides awaken new expectations, needs and wishes in consumers/customers and enable businesses to...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle Priority Service Infogram for 24-SEP-2015

Oracle Infogram - Thu, 2015-09-24 15:36

Oracle Data Masking and Subsetting sessions in Open World 2015, from Oracle Data Masking and Subsetting.
Mobile @ Oracle OpenWorld 2015, from The Oracle Mobile Platform Blog.
SOA & Java
Unleash the power of Java API’s on your WLST scripts! from SOA & BPM Partner Community Blog.
From the same source: Enable SOA Composer in SOA Suite 11
From The Java Source:
Perspectives on Java Evolution!
Java ME Embedded 8.2 Release!
Concurrency on the JVM
Lots of WebLogic goodies this week from WebLogic Partner Community EMEA: Extending the Weblogic console by adding Java classes
WebLogic Partner Community Newsletter September 2015
Accessing WebLogic Logs via REST
Overview of WebLogic 12c RESTful Management Services
And from Oracle Partner Hub: ISV Migration Center Team:
WebLogic on ODA: Final release and new partner supported model
Patch Set Update: Hyperion Essbase, from Business Analytics - Proactive Support.
Configuring Secure NFS in Solaris 11, from What the krowteN?
New Demantra Log File Parser in PERL, from the Oracle Demantra blog.
From the Oracle E-Business Suite Support blog:
New EBS Oracle Receivables Period Close Analyzer
Webcast: EBS HCM Updates for Employer Shared Responsibility Reporting Under the Affordable Care Act
NEW! EBS Support Analyzer Bundle Menu Tool (Doc ID 1939637.1)
NEW!!! Submit your Enhancement Requests for Oracle Payables via the Community
From the Oracle E-Business Suite Technology blog:
Consultations with ATG Development at OpenWorld 2015

EBS Support Policy for Third-Party Client Components

Managing the OBIEE BI Server Cache from ODI 12c

Rittman Mead Consulting - Thu, 2015-09-24 13:47

I wrote recently about the OBIEE BI Server Cache and how useful it can be, but how important it is to manage it properly, both in the purging of stale data and seeding of new. In this article I want to show how to walk-the-walk and not just talk-the-talk (WAT? But you’re a consultant?!). ODI is the premier data integration tool on the market and one that we are great fans of here at Rittman Mead. We see a great many analytics implementations built with ODI for the data load (ELT, strictly speaking, rather than ETL) and then OBIEE for the analytics on top. Managing the BI Server cache from within your ODI batch makes a huge amount of sense. By purging and reseeding the cache directly after the data has been loaded into the database we can achieve optimal cache usage with no risk of stale data.

There are two options for cleanly hooking into OBIEE from ODI 12c with minimal fuss: JDBC, and Web Services. JDBC requires the OBIEE JDBC driver to be present on the ODI Agent machine, whilst Web Services have zero requirement on the ODI side, but a bit of config on the OBIEE side.

Setting up the BI Server JDBC Driver and Topology

Here I’m going to demonstrate using JDBC to connect to OBIEE from ODI. It’s a principle that was originally written up by Julien Testut here. We take the OBIEE JDBC driver bijdbc.jar from $FMW_HOME/Oracle_BI1/bifoundation/jdbc and copy it to our ODI machine. I’m just using a local agent for my testing, so put it in ~/.odi/oracledi/userlib/. For a standalone agent it should go in $AGENT_HOME/odi/agent/lib.

[oracle@ip-10-103-196-207 ~]$ cd /home/oracle/.odi/oracledi/userlib/  
[oracle@ip-10-103-196-207 userlib]$ ls -l  
total 200  
-rw-r----- 1 oracle oinstall    332 Feb 17  2014 additional_path.txt  
-rwxr-xr-x 1 oracle oinstall 199941 Sep 22 14:50 bijdbc.jar

Now fire up ODI Studio, sign in to your repository, and head to the Topology pane. Under Physical Architecture -> Technologies you’ll see Oracle BI.

Right click and select New Data Server. Give it a sensible name and put your standard OBIEE credentials (eg. weblogic) under the Connection section. Click the JDBC tab and click the search icon to the right of the JDBC Driver text box. Select the default driver, and then in the JDBC Url box put your server and port (9703, unless you’ve changed the listen port of the OBIEE BI Server).

Now click Test Connection (save the data server when prompted, and click OK at the message about creating a physical schema), and select the Local Agent with which to run it. If you get an error then click Details to find out the problem.

One common problem can be the connection through to the OBIEE server port, so to cut ODI out of the equation try this from the command prompt on your ODI machine (assuming it’s *nix):

nc -vz <obiee-host> 9703

If the host resolves correctly and the port is open then you should get:

Connection to 9703 port [tcp/*] succeeded!

If not you’ll get something like:

nc: port 9703 (tcp) failed: Connection refused

Check the usual suspects – firewall (eg iptables) on the OBIEE server, firewalls on the network between the ODI and OBIEE servers, etc.

Assuming you’ve got a working connection you now need to create a Physical Schema. Right click on the new data server and select New Physical Schema.

OBIEE’s BI Server acts as a “database” to clients, within which there are “schemas” (Subject Areas) and “tables” (Presentation Tables). On the New Physical Schema dialog you just need to set Catalog (Catalog), and when you click the drop-down you should see a list of the Subject Areas within your RPD. Pick one – it doesn’t matter which.

Save the physical schema (ignore the context message). At this point your Physical Architecture for Oracle BI should look like this:

Now under Logical Architecture locate the Oracle BI technology, right click on it and select New Logical Schema. From the Physical Schemas dropdown select the one that you’ve just created. Give a name to the Logical Schema.

Your Logical Architecture for Oracle BI should look like this:

Building the Cache Management Routine

Full Cache Purge

Over in the Designer tab go to your ODI project into which you want to integrate the OBIEE cache management functions. Right click on Procedures and select Create New Procedure. Give it a name such as OBIEE Cache – Purge All and set the Target Technology to Oracle BI

Switch to the Tasks tab and add a new Task. Give it a name, and set the Schema to the logical schema that you defined above. Under Target Command enter the call you want to make to the BI Server, which in this case is

call SAPurgeAllCache();

Save the procedure and then from the toolbar menu click on Run. Over in the Operator tab you should see the session appear and soon after complete – all being well – successfully.

You can go and check your BI Server Cache from the OBIEE Administration Tool to confirm that it is now empty:

And confirm it through Usage Tracking:

From what I can see at the default log levels, nothing gets written to either nqquery.log or nqserver.log for this action unless there is an error in your syntax in which case it is logged in nqserver.log:

(For more information on that particular error see here)

Partial Cache Purge

This is the same pattern as above – create an ODI Procedure to call the relevant OBIEE command, which for purging by table is SAPurgeCacheByTable. We’re going to get a step more fancy now, and add a variable that we can pass in so that the Procedure is reusable multiple times over throughout the ODI execution for different tables.

First off create a new ODI Variable that will hold the name of the table to purge. If you’re working with multiple RPD Physical Database/Catalog/Schema objects you’ll want variables for those too:

Now create a Procedure as before, with the same settings as above but a different Target Command, based on SAPurgeCacheByTable and passing in the four parameters as single quoted, comma separated values. Note that these are the Database/Catalog/Schema/Table as defined in the RPD. So “Database” is not your TNS or anything like that, it’s whatever it’s called in the RPD Physical layer. Same for the other three identifiers. If there’s no Catalog (and often there isn’t) just leave it blank.

When including ODI Variable(s) make sure you still single-quote them. The command should look something like this:
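
The screenshot of the command hasn’t survived here, but as a sketch – with the physical Database/Schema names and the ODI variable name being illustrative assumptions – it would be along the lines of:

call SAPurgeCacheByTable('My_DB', '', 'My_Schema', '#TABLE_NAME');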

Now let’s seed the OBIEE cache with a couple of queries, one of which uses the physical table and one of which doesn’t. When we run our ODI Procedure we should see one cache entry go and the other remain. Here’s the seeded cache:

And now after executing the procedure:

And confirmation through Usage Tracking of the command run:

Cache Seeding

As before, we use an ODI Procedure to call the relevant OBIEE command. To seed the cache we can use SASeedQuery which strictly speaking isn’t documented but a quick perusal of the nqquery.log when you run a cache-seeding OBIEE Agent shows that it is what is called in the background, so we’re going to use it here (and it’s mentioned in support documents on My Oracle Support, so it’s not a state secret). The documentation here gives some useful advice on what you should be seeding the cache with — not necessarily only exact copies of the dashboard queries that you want to get a cache hit for.

Since this is a cookie-cutter of what we just did previously you can use the Duplicate Selection option in ODI Designer to clone one of the other OBIEE Cache procedures that you’ve already created. Amend the Target Command to:
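
The screenshot is missing here too; as a sketch, the Target Command wraps the Logical SQL that you want to seed in a SASeedQuery call. The query below is illustrative only:

call SASeedQuery('SELECT "Time"."T05 Per Name Year" saw_0, "Base Facts"."1- Revenue" saw_1 FROM "A - Sample Sales"');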

When you run this you should see a positive confirmation in the nqserver.log of the cache seed:

[2015-09-23T23:23:10.000+01:00] [OracleBIServerComponent] [TRACE:3]  
[USER-42] [] [ecid: 005874imI9nFw000jzwkno0007q700008K,0] [tid: 9057d700]  
[requestid: 477a0002] [sessionid: 477a0000] [username: weblogic]  
Query Result Cache: [59124]  
The query for user 'weblogic' was inserted into the query result cache.  
The filename is '/app/oracle/biee/instances/instance1/bifoundation/OracleBIServerComponent/coreapplication_obis1/cache/NQS__735866_84190_2.TBL'. [[

A very valid alternative to calling SASeedQuery would be to call the OBIEE SOA Web Service to trigger an OBIEE Agent that populated the cache (by setting ‘Destination’ to ‘Oracle BI Server Cache (For seeding cache)’). OBIEE Agents can also be ‘daisy chained’ so that one Agent calls another on completion, meaning that ODI could kick off a single ‘master’ OBIEE Agent which then triggered multiple ‘secondary’ OBIEE Agents. The advantage of this approach over SASeedQuery is that cache seeding is more likely to change as OBIEE usage patterns do, and it is easier for OBIEE developers to maintain all the cache seeding code within ‘their’ area (OBIEE Presentation Catalog) than put in a change request to the ODI developers each time to change a procedure.

Integrating it in the ODI batch

You’ve two options here: using Packages or Load Plans. Load Plans were introduced in ODI 11g and are a clearer and more flexible way of orchestrating the batch.

To use it in a load plan create a serial step that will call a mapping followed by the procedure to purge the affected table. In the procedure step in the load plan set the value for the variable. At the end of the load plan, call the OBIEE cache seed step:

Alternatively, to integrate the above procedures into a Package instead of a load plan you need to add two steps per mapping. First, the variable is updated to hold the name of the table just loaded, and then the OBIEE cache is purged for the affected table. At the end of the flow a call is made to reseed the cache:

These are some very simple examples, but hopefully illustrate the concept and the powerful nature of integrating OBIEE calls directly from ODI. For more information about OBIEE Cache Management, see my post here.

Categories: BI & Warehousing

SpyHunter 4 serial Download

Jithin Sarath - Thu, 2015-09-24 12:31
SpyHunter 4 crack has been voted as best anti-malware software. Download SpyHunter 4 Activation Code with email and password list generator patch free.

  • Very effective at removing hijacked browser search toolbars
  • Removed spyware variants that other major brands didn’t detect
  • Customized spyware fix option available
  • Easy to use interface
  • Supports Windows 8.1
Categories: DBA Blogs

Meet Pythian at AWS re:Invent 2015

Pythian Group - Thu, 2015-09-24 12:21


We’re pumped to be sponsoring our first AWS re:Invent show on October 6-9 at The Venetian in Las Vegas!

As an AWS Advanced Consulting Partner with AWS accredited experts on our team, we’ve helped our clients design, architect, build, migrate, and manage their workloads and applications on AWS.

If you’re going to be at the show next week, stop by our booth (#1255) or book a one-on-one session with one of our leading AWS technical experts to get answers to some of your biggest cloud challenges. From strategy development and workload assessment to tool selection and advanced cloud capabilities, we can help you at any stage of your cloud journey.

Don’t miss out! Schedule a one-on-one session with one of our technical experts:

Alex Gorbachev – CTO, Oracle ACE Director, Cloudera Champion of Big Data; specializes in cloud strategy and architecture, data and big data in the cloud, securing data in the cloud, and IoT strategy.

Aaron Lee – VP Transformation Services, specializes in cloud strategy and transformation solutions, DevOps, big data, advanced analytics, and application platforms and migrations to AWS.

Dennis Walker – Director, Engineering, specializes in cloud solution architecture and DevOps solutions for AWS.

Categories: DBA Blogs

EMEA Partners: Oracle Cloud Platform: Development Workshop for Partners (JCS, DevCS, MCS)

The Oracle team is pleased to invite your Java developers and consultants to a 5-day hands-on workshop on how to develop, deploy and manage Java applications on the Oracle Cloud platform. Oracle will...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Index Advanced Compression: Multi-Column Index Part II (Blow Out)

Richard Foote - Thu, 2015-09-24 02:00
I previously discussed how Index Advanced Compression can automatically determine not only the correct number of columns to compress, but also the correct number of columns to compress within specific leaf blocks of the index. However, this doesn’t mean we can just order the columns within the index without due consideration from a “compression” perspective. As […]
Categories: DBA Blogs

OBIEE BI Server Cache Management Strategies

Rittman Mead Consulting - Wed, 2015-09-23 11:36

The OBIEE BI Server cache can be one of the most effective ways of improving response times of OBIEE dashboards. By using data already in the cache it reduces load on the database, the network, and the BI Server.

Should you be using it? I always describe it as the “icing on the cake” – it’s not a fix for a badly-designed OBIEE system, but it does make a lot of sense to use once you’re happy that the foundations for the system are in place. If the foundations are not in place? Then you’re just papering over the cracks and at some point it’s probably going to come back to bite you. As Mark Rittman put it nearly seven years ago, it’s “[…]usually the last desperate throw of the dice”. The phrase “technical debt”? Yeh, that. But, BI Server caching used after performance review and optimisation rather than instead of it – then it’s a Good Thing.

So you’ve decided to use the BI Server cache, and merrily trotted over to Enterprise Manager to enable it, restarted the BI Server, and now your work is done, right? Not quite. Because the BI Server cache will start to store data from all the queries that you run, and use it to satisfy subsequent queries. Not only will it match on a direct hit for the same query, it will use a subset of an existing cache entry where appropriate, and can even aggregate up from what’s in the cache to satisfy a query at a higher level. Clever stuff. But, what happens when you load new data into your data warehouse? Well, the BI Server continues to serve requests out of the cache, because why shouldn’t it? And herein lies the problem with “just turn caching on”. You have to have a cache management strategy.

A cache management strategy sounds grand doesn’t it? But it boils down to two things:

  1. Accuracy – Flush any data from the cache that is now stale
  2. Speed – Prime the cache so that as many queries as possible get a hit on it, first time
Maintaining an Accurate Cache

Every query that is run through the BI Server, whether from a Dashboard, Answers, or more funky routes such as custom ODBC clients or JDBC, will end up in cache. It’s possible to “seed” (“prime”/“warmup”) the cache explicitly, and this is discussed later. The only time you won’t see data in the cache is if (a) you have BI Server caching disabled, or (b) you’ve disabled the Cacheable option for a physical table that is involved in providing the data for the query being run.

You can see metadata for the current contents of the cache in the Administration Tool when connected online to the BI Server, through the Manage -> Cache menu option. This gives you lots of useful information (particularly when you come to optimising cache usage) including the size of each entry, when it was created, when it was last used, and so on.

Purging Options

So we’ve a spread of queries run that hit various dimension and fact tables and created lots of cache entries. Now we’ve loaded data into our underlying database, so we need to make sure that the next time a user runs an OBIEE query that uses the new data they can see it. Otherwise we commit the cardinal sin of any analytical system and show the user incorrect data which is a Bad Thing. It may be fast, but it’s WRONG….

We can purge the whole cache, but that’s a pretty brutal approach. The cache is persisted to disk and can hold lots of data stretching back months – to blitz all of that just because one table has some new data is overkill. A more targeted approach is to purge by physical database, physical table, or even logical query. When would you use these?

  • Purge entire cache – the nuclear option, but also the simplest. If your data model is small and a large proportion of the underlying physical tables may have changed data, then go for this
  • Purge by Physical Database – less brutal than clearing the whole cache; if you have various data sources that are loaded at different points in the batch schedule then targeting a particular physical database makes sense.
  • Purge by Physical Table – if many tables within your database have remained unchanged, whilst a large proportion of particular tables have changed (or it’s a small table) then this is a sensible option to run for each affected table
  • Purge by Query – If you add a few thousand rows to a billion row fact table, purging all references to that table from the cache would be a waste. Imagine you have a table with sales by day. You load new sales figures daily, so purging the cache by query for recent data is obviously necessary, but data from previous weeks and months may well remain untouched so it makes sense to leave queries against those in the cache. The specifics of this choice are down to you and your ETL process and business rules inherent in the data (maybe there shouldn’t be old data loaded, but what happens if there is? See above re. serving wrong data to users). This option is the most complex to maintain because you risk leaving behind in the cache data that may be stale but doesn’t match the precise set of queries that you purge against.

Which one is correct depends on

  1. your data load and how many tables you’ve changed
  2. your level of reliance on the cache (can you afford low cache hit ratio until it warms up again?)
  3. time to reseed new content

If you are heavily dependent on the cache and have large amounts of data in it, you are probably going to need to invest time in a precise and potentially complex cache purge strategy. Conversely, if you use caching as the ‘icing on the cake’ and/or it’s quick to seed new content then the simplest option is to purge the entire cache. Simple is good; OBIEE has enough moving parts without adding to its complexity unnecessarily.

Note that OBIEE itself will perform cache purges in some situations including if a dynamic repository variable used by a Business Model (e.g. in a Logical Column) gets a new value through a scheduled initialisation block.

Performing the Purge

There are several ways in which we can purge the cache. First I’ll discuss the ones that I would not recommend except for manual testing:

  1. Administration Tool -> Manage -> Cache -> Purge. Doing this every time your ETL runs is not a sensible idea unless you enjoy watching paint dry (or need to manually purge it as part of a deployment of a new RPD etc).
  2. In the Physical table, setting Cache persistence time. Why not? Because this time period starts from when the data was loaded into the cache, not when the data was loaded into your database.
    An easy mistake to make would be to think that with a daily ETL run, setting the Cache persistence time to 1 day might be a good idea. It’s not, because if your ETL runs at 06:00 and someone runs a report at 05:00, there is going to be a stale cache entry present for another 23 hours. Even if you use cache seeding, you’re still relinquishing control of the data accuracy in your cache. What happens if the ETL batch overruns or underruns?

    The only scenario in which I would use this option is if I was querying directly against a transactional system and wanted to minimise the number of hits OBIEE made against it – the trade-off being users would deliberately be seeing stale data (but sometimes this is an acceptable compromise, so long as it’s made clear in the presentation of the data).

So the two viable options for cache purging are:

  1. BI Server Cache Purge Procedures
  2. Event Polling Table
BI Server Cache Purge Procedures

These are often called “ODBC” procedures, but technically ODBC is just one of several ways that the commands can be sent to the BI Server for invocation.

As well as supporting queries for data from clients (such as Presentation Services) sent as Logical SQL, the BI Server also has its own set of procedures. Many of these are internal and mostly undocumented (Christian Berg does a great job of explaining them here, and they do creep into the documentation here and here), but there are some cache management ones that are fully supported and documented. They are:

  • SAPurgeCacheByQuery
  • SAPurgeCacheByTable
  • SAPurgeCacheByDatabase
  • SAPurgeAllCache

The names of these match up to the purge processes that I describe above. The syntax is in the documentation, but what I am interested in here is how you can invoke them. They are my preferred method for managing the BI Server cache because they enable you to tightly couple your data load (ETL) to your cache purge. Setting the cache to purge based on a drop-dead timer (whether crontab, tivoli, Agent/iBot, whatever) gives you a huge margin of error if your ETL runtime does not remain consistent. Whether it organically increases in runtime as data volumes increase, or it fails and has to be fixed and restarted, ETL does not always finish bang-on when it is ‘supposed’ to.

You can call these procedures in the several ways, including:

  1. nqcmd – one of the most common ways, repeated on many a blog, but requires nqcmd/OBIEE to be installed on the machine running it. nqcmd is a command-line ODBC client for connecting to the BI Server (a minimal sketch follows this list)
  2. ODBC – requires BI to be installed on the machine running it in order to make the OBIEE ODBC driver available
  3. JDBC – just requires the OBIEE JDBC driver, which is a single .jar file and thus portable
  4. Web Service – the OBIEE BI Server Web Service can be used to invoke these procedures from any machine with no dependencies other than some WSM configuration on the OBIEE server side.
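
A minimal sketch of the nqcmd route from option 1 above; the DSN, credentials and script file name are illustrative assumptions:

# purge.lsql contains a single line, for example:
#   call SAPurgeCacheByTable('My_DB','','My_Schema','FACT_SALES');
nqcmd -d AnalyticsWeb -u weblogic -p Admin123 -s purge.lsql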

My preference is for JDBC or Web Service, because they can be called from anywhere. In larger organisations the team building the ETL may have very little to do with OBIEE, and so asking them to install OBIEE components on their server in order to trigger cache purging can be quite an ask. Using JDBC only a single .jar needs copying onto the server, and using the web service not even that:

curl --silent --header "Content-Type: text/xml;charset=UTF-8" \
--user weblogic:Admin123 \
--data @purge_cache_soap.xml \
http://<obiee-server>:9704/AdminService/AdminService
[59118] Operation SAPurgeAllCache succeeded!
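
The contents of purge_cache_soap.xml aren’t shown above; a minimal sketch is below. The ws namespace URI is an assumption – take the real value from the sample request generated from your AdminService WSDL.

<!-- purge_cache_soap.xml (sketch): invoke SAPurgeAllCache via callProcedureWithResults -->
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:ws="[namespace from your WSDL]">
   <soapenv:Header/>
   <soapenv:Body>
      <ws:callProcedureWithResults>
         <procedureName>SAPurgeAllCache()</procedureName>
      </ws:callProcedureWithResults>
   </soapenv:Body>
</soapenv:Envelope>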

For details of configuring ODI to use the BI Server JDBC driver in order to tightly couple the cache management into an existing ODI load job, stay tuned for a future blog!

Event Polling Tables (EPT)

NB Not Event “Pooling” Tables as I’ve often seen this called

The second viable approach to automated cache purging is EPT, which is a decoupled approach to managing the cache purge, with two components:

  1. An application (your ETL) inserts a row into the table S_NQ_EPT (which is created at installation time by the RCU in the BIPLATFORM schema) with the name of the physical table in which data has been changed (a sketch of such an insert follows this list)
  2. The BI Server polls (hence the name) the S_NQ_EPT table periodically, and if it finds entries in it, purges the cache of data that is from those tables.
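
As a sketch of the ETL side (referenced in point 1 above): the insert below assumes the standard S_NQ_EPT column layout created by the RCU – verify the column names against your own BIPLATFORM schema and substitute the physical Database/Schema/Table names from your RPD.

-- Sketch only: column names are assumed from the standard RCU-created S_NQ_EPT definition
INSERT INTO S_NQ_EPT
  (UPDATE_TYPE, UPDATE_TS, DATABASE_NAME, CATALOG_NAME, SCHEMA_NAME, TABLE_NAME, OTHER_RESERVED)
VALUES
  (1, SYSDATE, 'My_DB', NULL, 'My_Schema', 'FACT_SALES', NULL);
COMMIT;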

So EPT is in a sense the equivalent of using SAPurgeCacheByTable, but in a manner that is not tightly coupled. It relies on configuring the BI Server for EPT, and there is no easy way to know from your ETL if the cache purge has actually happened. It also means that the cache remains stale potentially as long as the polling interval that you’ve configured. Depending on when you’re running your ETL and the usage patterns of your users this may not be an issue, but if you are running ETL whilst users are on the system (for example intra-day micro ETL batches) you could end up with users seeing stale data. Oracle themselves recommend not setting the polling interval any lower than 10 minutes.

EPT has the benefit of being very easy to implement on the ETL side, because it is simply a database table into which the ETL developers need to insert a row for each table that they update during the ETL.

Seeding the Cache

Bob runs an OBIEE dashboard, and the results are added to the cache so that when Bill runs the same dashboard Bill gets a great response rate because his dashboard runs straight from cache. Kinda sucks for Bob though, because his query ran slow as it wasn’t in the cache yet. What’d be nice would be if, for the first user on a dashboard, the results were already in cache. This is known as seeding the cache, or ‘priming’ it. Because the BI Server cache is not dumb and will hit the cache for queries that aren’t necessarily direct replicas of what previously ran, working out the optimal way to seed the cache can take some trial and error (read: careful research). The documentation does a good job of explaining what will and won’t qualify for a cache hit, and it’s worth reading this first.

There are several options for seeding the cache. These all assume you’ve figured out the queries that you want to run in order to load the results into cache.

  1. Run the analysis manually, which will return the analysis data to you and insert it into the BI Server Cache too.
  2. Create an Agent to run the analysis with destination set to Oracle BI Server Cache (For seeding cache), and then either:
    1. Schedule the Agent to run on a regular basis
    2. Trigger it from a Web Service in order to couple it to your ETL data load / cache purge batch steps.
  3. Use the BI Server Procedure SASeedQuery (which is what the Agent does in the background) to load the given query into cache without returning the data to the client. This is useful when calling it over JDBC/ODBC/Web Service (as discussed for purging above) – see the sketch below. You could just run the Logical SQL itself, but you probably don’t want to pull the actual data back to the client, hence the procedure call instead.
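
For example, seeding the cache with the SampleApp query used later in this post might look like this over JDBC or nqcmd – a sketch; note that any single quotes inside the Logical SQL would need doubling up:

Call SASeedQuery('SELECT 0 s_0, "A - Sample Sales"."Time"."T02 Per Name Month" s_1, "A - Sample Sales"."Base Facts"."1- Revenue" s_2 FROM "A - Sample Sales"');
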
Sidenote – Checking the RPD for Cacheable Tables

The RPD Query Tool is great for finding objects matching certain criteria. However, it seems to invert results when looking for Cacheable Physical tables – if you add a filter of Cacheable = false you get physical tables where Cacheable is enabled! And the same in reverse (Cacheable = true -> shows Physical tables where Cacheable is disabled)

Day in the Life of an OBIEE Cache Entry (Who Said BI Was Boring?)

In this example I’m running a very simple report from SampleApp v406:

The Logical SQL for this is:

SELECT
   0 s_0,  
   "A - Sample Sales"."Time"."T02 Per Name Month" s_1,  
   "A - Sample Sales"."Base Facts"."1- Revenue" s_2  
FROM "A - Sample Sales"  

Why’s that useful to know? Because when working with the cache you frequently need to resubmit queries, and doing so directly from an interface like nqcmd is much faster (for me) than going through the web GUI. Horses for courses…

So I’ve run the query and now we have a cache entry for it. How do we know? Because we see it in the nqquery.log (and if you don’t have it enabled, go and enable it now):

[2015-09-23T15:58:18.000+01:00] [OracleBIServerComponent] [TRACE:3] [USER-42]  
[] [ecid: 00586hFR07mFw000jzwkno0005Qx00007U,0] [tid: 84a35700]  
[requestid: a9730015] [sessionid: a9730000] [username: weblogic]  
Query Result Cache: [59124] The query for user 'weblogic' was inserted into 
the query result cache.  
The filename is '/app/oracle/biee/instances/instance1/bifoundation/OracleBIServerComponent/coreapplication_obis1/cache/NQS__735866_57498_0.TBL'.

We see it in Usage Tracking (again, if you don’t have this enabled, go and enable it now):


We can also see it in the Administration Tool (when connected online to the BI Server):

We can even see it and touch it (figuratively) on disk:

So we have the data in the cache. The same query run again will now use the cache entry, as seen in nqquery.log:

[2015-09-23T16:09:24.000+01:00] [OracleBIServerComponent] [TRACE:3] [USER-21]
[] [ecid: 11d1def534ea1be0:6066a19d:14f636f1dea:-8000-000000000000b948,0:1:1:5]  
[tid: 87455700] 
[requestid: a9730017] [sessionid: a9730000] [username: weblogic]  
Cache Hit on query: [[  
Matching Query: SET VARIABLE QUERY_SRC_CD='Report',SAW_SRC_PATH='/users/weblogic/Cache Test 01',PREFERRED_CURRENCY='USD';SELECT  
   0 s_0,  
   "A - Sample Sales"."Time"."T02 Per Name Month" s_1,  
   "A - Sample Sales"."Base Facts"."1- Revenue" s_2  
FROM "A - Sample Sales"  

Created by:     weblogic

and in Usage Tracking:

“Interestingly” Usage Tracking shows a count of 1 for number of DB queries run, which we would not expect for a cache hit. The nqquery.log shows the same, but no query logged as being sent to the database, so I’m minded to dismiss this as an instrumentation bug.

Now, what if we want to run a query but not use the BI Server Cache? This is an easy one, and plenty has been blogged about it elsewhere – use the Request Variable DISABLE_CACHE_HIT=1. This overrides the built-in system session variable of the same name. Here I’m running it directly against the BI Server, prefixed onto my Logical SQL – if you want to set it from within OBIEE you need the Advanced tab in the Answers editor.

SET VARIABLE SAW_SRC_PATH='/users/weblogic/Cache Test 01',DISABLE_CACHE_HIT=1;SELECT  
   0 s_0,  
   "A - Sample Sales"."Time"."T02 Per Name Month" s_1,  
   "A - Sample Sales"."Base Facts"."1- Revenue" s_2  
FROM "A - Sample Sales"  

Now we get a cache ‘miss’, because we’ve specifically told the BI Server to not use the cache. As you’d expect, Usage Tracking shows no cache hit, but it does show a cache insert – because why shouldn’t it?

If you want to run a query without seeding the cache either, you can use DISABLE_CACHE_SEED=1:

SET VARIABLE SAW_SRC_PATH='/users/weblogic/Cache Test 01',DISABLE_CACHE_SEED=1;SELECT  
   0 s_0,  
   "A - Sample Sales"."Time"."T02 Per Name Month" s_1,  
   "A - Sample Sales"."Base Facts"."1- Revenue" s_2  
FROM "A - Sample Sales"  

These request variables can be set per analysis, or per user by creating a session initialisation block to assign the required values to the respective variables.

Cache Location

The BI Server cache is held on disk, so it goes without saying that storing it on fast (eg SSD) disk is a Good Idea. There’s no harm in giving it its own filesystem on *nix to isolate it from other work (in terms of filesystems filling up) and to make monitoring it super easy.

Use the DATA_STORAGE_PATHS configuration element in NQSConfig.ini to change the location of the BI Server cache.
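
For reference, the cache settings live in the [CACHE] section of NQSConfig.ini; a sketch with illustrative values only – the path and sizes below are placeholders, not recommendations:

[CACHE]
ENABLE = YES;
DATA_STORAGE_PATHS = "/app/oracle/bi_cache" 2 GB;
MAX_ROWS_PER_CACHE_ENTRY = 100000;
MAX_CACHE_ENTRY_SIZE = 20 MB;
MAX_CACHE_ENTRIES = 5000;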

Summary

  1. Use BI Server Caching as the ‘icing on the cake’ for performance in your OBIEE system. Make sure you have your house in order first – don’t use it to try to get around bad design.
  2. Use the SAPurgeCache procedures to directly invoke a purge, or the Event Polling Tables for a more loosely-coupled approach. Decide carefully which purge approach is best for your particular caching strategy.
  3. If using the SAPurgeCache procedures, use JDBC or Web Services to call them, so that minimal or no installation is required on your ETL server.
  4. Invest time in working out an optimal cache seeding strategy, making use of Usage Tracking to track cache hit ratios.
  5. Integrate both purge and seeding into your ETL. Don’t use a schedule-based approach because it will come back to haunt you with its inflexibility and scope for error.
Categories: BI & Warehousing

iObit Driver Booster Pro 3 License Key

Jithin Sarath - Wed, 2015-09-23 10:59
iObit Driver Booster Pro 3 with license key is a powerful tool to update PC/laptop drivers. Get Driver Booster 3 serial keys, tested and verified [100% working]. Driver Booster Pro 3 key is a utility program for your PC. After installing a completely new operating system, many of us get stuck on driver issues. Microsoft usually releases Windows updates along with device drivers, but some drivers can’t be found in Windows Update. You can search the manufacturer’s website for drivers, but beginner PC users are not aware of these things. Keeping this in mind, today we are sharing Driver Booster Pro free download with serial key. You can easily update or install your drivers with this one-click application.
Categories: DBA Blogs

When hosting data on Amazon turns bloodsport

Sean Hull - Wed, 2015-09-23 10:58
There’s a strong trend to automation across the cloud. That’s a great thing for startups because it reduces operational headaches & lets them focus on building products. Join 31,000 others and follow Sean Hull on twitter @hullsean. But as that trend begins to touch the database tier, all sorts of complications emerge. Let’s take a …

tracefile tim to readable date format

Laurent Schneider - Wed, 2015-09-23 06:25

In a trace file the time is recorded; this can be used to measure the time between two timestamps.

But how do you convert 31796862227375 to a human-readable format?

This is how I proceeded:

SQL> oradebug SETMYPID
Statement processed.
SQL> oradebug tracefile_name
SQL> alter session set sql_trace=true;
Session altered.
SQL> select '&_DATE','Start' from dual;
2015-09-23_13:17:50 Start
SQL> select '&_DATE','End' from dual;
2015-09-23_13:18:38 End
SQL> alter session set sql_trace=false;
Session altered.

I used the sqlplus variable _date and waited about a minute between the two selects.

Now let’s look at the trace

PARSING IN CURSOR #4859519800 tim=31796814530524
select '2015-09-23_13:17:50','Start' from dual

PARSING IN CURSOR #4859511384 tim=31796862227375 
select '2015-09-23_13:18:38','End' from dual

Note the tim=

Between 13:17:50 and 13:18:38 there are 48 seconds; the difference between 31796862227375 and 31796814530524 is 47696851, roughly 48 million microseconds.

So the time on this specific version of Oracle is in microseconds.

Now if I subtract 31796862227375 microseconds from 2015-09-23_13:17:50, I get 2014-09-20_12:50:08 on this specific instance.

So to convert tim= to human readable date, I simply add tim microseconds to Sep 20, 2014, 12:50:08.
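
The same subtraction can be done directly in SQL; a minimal sketch using the figures above (tim is in microseconds and Oracle date arithmetic works in days):

select to_date('2015-09-23 13:17:50','YYYY-MM-DD HH24:MI:SS')
       - 31796862227375/1e6/86400 as tim_epoch
from dual;
-- returns roughly 2014-09-20 12:50:08 on this instance;
-- adding tim/1e6/86400 days to that date converts any tim= value back to a readable date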


Jonathan Lewis - Wed, 2015-09-23 01:17

Here’s a little note that I drafted (according to its date stamp) in January 2013 and then forgot to post. (Which adds a little irony to the title.)


Here’s an object lesson in (a) looking at what’s in front of you, and (b) how hard it is to remember all the details.

I ran a script today [ED: i.e. some time early Jan 2013] that I’d had no problems with in earlier versions of Oracle, but today I was running it against a newer version for the first time, and hit a problem with autotrace:

SQL> set autotrace on
ORA-28002: the password will expire within 5 days

SP2-0619: Error while connecting
SP2-0611: Error enabling STATISTICS report

Anyone who reads the preceding text closely will see immediately what the problem is – but I saw the bottom line and immediately decided that I had forgotten to set up the plustrace role in this database. So I logged on as SYS and ran the script to create it ($ORACLE_HOME/sqlplus/admin/plustrce.sql) – and discovered that the role already existed. So I decided that I hadn’t granted plustrace to my test_user role, but that didn’t help; so I decided that this was clearly a case of a role that had to be assigned directly to a user, and that didn’t help.

THEN I read the error message properly, changed my password, and everything worked as expected.

That’s part (a) of the lesson; here’s part (b): I wrote about running into the same issue nearly two years ago (only that time I had a little excuse for not spotting the problem instantly).


Oracle Database Developer Choice Awards

Denes Kubicek - Tue, 2015-09-22 23:28
Don't forget to vote for the expert of your choice in the Oracle Database Developer Choice Awards. There are a lot of good people there who deserve your vote for their activities in the community and for their selfless engagement.

Categories: Development