
Feed aggregator

Creepy Dolls - A Technology and Privacy Nightmare!

Abhinav Agarwal - Sat, 2015-09-26 10:28
This post was first published on LinkedIn on 20th May, 2015.

"Hi, I'm Chucky. Wanna play?"[1]  Fans of the horror film genre will surely recall these lines - innocent-sounding on their own, yet bone-chilling in the context of the scene in the movie - that Chucky, the possessed demonic doll, utters in the cult classic, "Child's Play". Called a "cheerfully energetic horror film" by Roger Ebert [2], the movie was released to more than a thousand screens on its debut in November 1988 [3]. It went on to spawn at least five sequels and developed a cult following of sorts over the next two decades [4].

Chucky the doll (image credit: "Child's Play")

Chucky the killer doll stays quiet around the adults - at least initially - but carries on secret conversations with Andy, and is persuasive enough to convince him to skip school and travel to downtown Chicago. Chucky understands how children think, and can evidently manipulate - or convince, depending on how you frame it - Andy into doing little favours for him. A doll that could speak, hear, see, understand, and hold a conversation with a human was, in the eighties, the stuff of science fiction - or, in the case of "Child's Play", of a horror movie.

Edison Talking Doll.
Image credit: www.davescooltoys.com

A realistic doll that could talk and converse was long the "holy grail" of dollmakers [5]. It will come as a huge surprise to many - at least it did to me - that within a few years of the invention of the phonograph by Thomas Edison in 1877, a doll with a pre-recorded voice had been developed and marketed in 1890! It didn't have a very happy debut, however. After "several years of experimentation and development", the Edison Talking Doll, when it launched in 1890, "was a dismal failure that was only marketed for a few short weeks."[6] Talking dolls seem to have made their entry into mainstream retail only with the advent of "Chatty Cathy" - released by Mattel in the 1960s - which worked on a simple pull-string mechanism. The quest to make these dolls more interactive and more "intelligent" continued; "Amazing Amanda" was another milestone in this development: it incorporated "voice-recognition and memory chips, sensory technology and facial animatronics" [7]. It was touted as "an evolutionary leap from earlier talking dolls like Chatty Cathy of the 1960's" by some analysts [8]. In some ways that assessment was not off the mark. After all, "Amazing Amanda" utilized RFID technology - among the hottest technology buzzwords a decade back. "Radio-frequency tags in Amanda's accessories - including toy food, potty and clothing - wirelessly inform the doll of what it is interacting with." This is what enabled "Amazing Amanda" to differentiate between "food" (pizza, or "cookies, pancakes and spaghetti") and "juice" [9]. However, even with all these developments and capabilities, the universe of what these toys could do was severely limited. At most they could recognize the voice of the child as its "mommy".
Amazing Amanda doll.
Image credit: amazing-amanda.fuzzup.net

They were constrained both by the high price of storage (flash storage is much sturdier than spinning hard drives, but an order of magnitude costlier; this limits the amount of storage possible) and by limited computational capability (putting a high-end microprocessor inside every doll would make them prohibitively expensive). The flip side was that what the toys spoke at home with the children stayed at home. These toys had a limited set of pre-programmed sentences and emotions they could convey, and if you wanted something different, you went out and bought a new toy, or in some cases, a different cartridge.

That's where things stood. Till now.

Screenshot of ToyFair website

Between February 14-17, 2015, the Jacob K. Javits Convention Center in New York hosted "the Western Hemisphere's largest and most important toy show"[10] - the 2015 Toy Fair. This was a trade show, which meant that "Toy Fair is not open to the public. NO ONE under the age of 18, including infants, will be admitted."[11] It featured a "record-breaking 422,000+ net square feet of exhibit space"[12] and hundreds of thousands of toys. Yet no children were allowed. Be that as it may, there was no dearth of, let's say, "innovative" toys. Apart from an "ultra creepy mechanical doll, complete with dead eyes", a fake fish pet taken to a "whole new level of weird", or a "Doo Doo Head" doll that had the shape of you-guessed-it [13], of particular interest was a "Hello Barbie" doll, launched by the Fortune 500 behemoth, Mattel. This doll had several USPs to its credit. It featured voice-recognition software, voice recording capabilities, and the ability to upload recorded conversations to a server (presumably Mattel's or ToyTalk's) in the cloud, over "Wi-Fi" - as a representative at the exhibition took pains to emphasize, repeatedly - and give "chatty responses."[14] This voice data would be processed and analyzed by the company's servers. The doll would learn the child's interests, and be able to carry on a conversation on those topics - made possible by the fact that the entire computational and learning capabilities of a server farm in the cloud could be accessed by every such toy. That the Barbie franchise is vital to Mattel cannot be overstated. The Barbie brand netted Mattel $1.2 billion in FY 2013 [15], but this represented a six per cent year-on-year decline. Mattel attributed this decline in Barbie sales in part to "product innovation not being strong enough to drive growth." The message was clear. Something very "innovative" was needed to jump-start sales. To make that technological leap forward, Mattel decided to team up with ToyTalk.

ToyTalk is a San Francisco-based start-up, and its platform powered the voice-recognition software used by "Hello Barbie". ToyTalk is headed by "CEO Oren Jacob, Pixar's former CTO, who worked at the groundbreaking animation company for 20 years" [16], and claimed "$31M in funding from Greylock Partners, Charles River Ventures, Khosla Ventures, True Ventures and First Round Capital as well as a number of angel investors." [17]

Cover of Misery, by Stephen King.
Published by Viking Press.

The voice recognition software would allow Mattel and ToyTalk to learn the preferences of the child, and over time refine the responses that Barbie would communicate back. As the Mattel representative put it, "She's going to get to know all my likes and all my dislikes..."[18] - a statement that at one level reminds one of Annie Wilkes when she says, "I'm your number one fan."[19] We certainly don't want to be in Paul Sheldon's shoes.

Hello Barbie's learning would start happening from the time the doll was switched on and connected to a Wi-Fi network. ToyTalk CEO Oren Jacob said, "we'll see week one what kids want to talk about or not" [20]. These recordings, once uploaded to the company's servers, would be used by "ToyTalk's speech recognition platform, currently powering the company's own interactive iPad apps including The Winston Show, SpeakaLegend, and SpeakaZoo", which then "allows writers to create branching dialogue based on what children will potentially actually say, and collects kids' replies in the cloud for the writers to study and use in an evolving environment of topics and responses." [20] Some unknown set of people, sitting in some unknown location, would potentially get to hear entire conversations of a child before his parents would.

If Mattel or ToyTalk did not anticipate the reaction this doll would generate, one can only put it down to the blissful disconnect from the real world that Silicon Valley entrepreneurs often develop, surrounded as they are by similar-thinking digerati. In any case, the responses were swift, and in most cases brutal. The German magazine "Stern" headlined an article on the doll "Mattel entwickelt die Stasi-Barbie" [21]. Even without the benefit of translation, the word "Stasi" stood out like a red flag. If you wondered, the headline translates to "Mattel develops the Stasi Barbie" [22]. Stern "curtly re-baptised" it "Barbie IM". "The initials stand for "Inoffizieller Mitarbeiter", informants who worked for East Germany's infamous secret police, the Stasi, during the Cold War." [23] [24] Newsweek carried a story, "Privacy Advocates Call Talking Barbie 'Surveillance Barbie'" [25]. France 24 wrote "Germans balk at new 'Soviet snitch' Barbie" [26]. The ever-acerbic The Register dug into ToyTalk's privacy policy on the company's web site, and found these gems [27]:
Screenshot of ToyTalk's Privacy page

- "When users interact with ToyTalk, we may capture photographs or audio or video recordings (the "Recordings") of such interactions, depending upon the particular application being used.
- We may use, transcribe and store such Recordings to provide and maintain the Service, to develop, test or improve speech recognition technology and artificial intelligence algorithms, and for other research and development or internal purposes."

Further reading revealed that what your child spoke to the doll in the confines of his home in, say, suburban Troy, Michigan, could end up travelling halfway across the world, to be stored on a server in a foreign country - "We may store and process personal information in the United States and other countries." [28]

What information ToyTalk would share with "Third Parties" was equally disturbing, both for the amount of information that could potentially be shared and for the vagueness in defining who these third parties could possibly be - "Personal information"; "in an aggregated or anonymized form that does not directly identify you or others;"; "in connection with, or during negotiations of, any merger, sale of company assets, financing or acquisition, or in any other situation where personal information may be disclosed or transferred as one of the business assets of ToyTalk"; "We may also share feature extracted data and transcripts that are created from such Recordings, but from which any personal information has been removed, with Service Providers or other third parties for their use in developing, testing and improving speech recognition technology and artificial intelligence algorithms and for research and development or other purposes."[28] A child's speech, words, conversation, voice - as recorded by the doll - was a "business asset" of the company.

And lest the reader have any concerns about the safety and security of the data on the company's servers, the following disclaimer put paid to any reassurances on that front too: "no security measures are perfect or impenetrable and no method of data transmission that can be guaranteed against any interception or other type of misuse."[28] If the sound of hands being washed of a matter could be put down on paper, the sentence above is what it would conceivably look like.

Apart from the firestorm of criticism described above, the advocacy group "Campaign for a Commercial Free Childhood" started a campaign petitioning Mattel CEO Christopher Sinclair to stop "Hello Barbie" immediately [29].

The brouhaha over "Hello Barbie" is, however, only symptomatic of several larger issues that have emerged and intersect each other in varying degrees, raising important questions about technology - including the cloud, big data, the Internet of Things, data mining, and analytics; privacy in an increasingly digital world; advertising and the ethics of marketing to children; law and whether it can cope with an increasingly digitized society; and the impact on children and teens, sociological as well as psychological. Technology and Moore's Law [30] have combined with the convenience of broadband to make possible what would have been in the realm of science fiction even two decades ago.

The Internet, while opening up untold avenues of betterment for society at large, has also revealed a dark side - a dilemma common to almost every transformative change in society. From the possibly alienating effects of excessive Internet addiction to the physiological changes that the very nature of the hyperlinked web engenders in humans, these are issues that are only recently beginning to attract the attention of academics and researchers. The most basic and fundamental notions of what people commonly understand as "privacy" are not only being challenged in today's digital world, but in most cases without even a modicum of understanding on the part of the affected party - you. Between those who believe technology is the only solution capable of delivering a digital nirvana for every imaginable problem in society, and the Luddites who see every bit of technology as a rabid byte (that's a bad pun) against humanity, there hopefully still lies a saner middle ground that seeks to understand and adapt technology for the betterment of humanity, society, and the world at large.

So what happened to Chucky? Well, as we know, it spawned a successful and profitable franchise of sequels and assorted spin-offs. Which direction "Hello Barbie" takes is of less interest to me than the broader questions I raised in the previous paragraph.

[2] "Child's Play" review,
[5] "A Brief History of Talking Dolls--From Bebe Phonographe to Amazing Amanda",
[6] "Edison Talking Doll",

Disclaimer: Views expressed are personal.

© 2015, Abhinav Agarwal. All rights reserved.

Oracle database developer choice awards

Adrian Billington - Fri, 2015-09-25 18:34
I've been selected as a finalist in the 2015 Oracle Database Developer Choice Awards...!

Know What Ya Got

Floyd Teter - Fri, 2015-09-25 14:34
There are two extremely bad decisions commonly made with enterprise software, and I see both take place every day.

This doesn't work the way we expect.  File a bug.
Over the years, my own experience tells me that two-thirds of bugs filed aren't really bugs.  What we really have is a user who fails to understand how the software works.  And, yes, we can always respond with RTM (or worse), but the stream of bugs that aren't bugs continues to be filed.  Stop for a second and imagine all the software development productivity lost in addressing bugs that aren't bugs.  And we wonder why it takes so long to introduce new releases and features?

This doesn't work the way we expect.  We'll have to customize the software.
We're not talking about extensions or bolt-ons.  We're talking about changing the code delivered out of the box.  Seems like around 75 percent of Oracle enterprise applications customers customize delivered code in one way or another.  SaaS will cut this number way down, but it's still widely prevalent throughout the installed customer base of Oracle enterprise applications.
Why is customization bad?  First, it means a customer must have a team of developers to build and maintain the customization through its lifecycle.  It's that maintenance part that gets really costly...each new update, each new release, each new expansion requires testing and likely some change to the customized code.  And here comes the incredibly crazy part of customization:  I would confidently bet my lungs against yours that over two-thirds of those customizations are unnecessary to accomplish the purposes of the customer.  Because the software out of the box already has the functionality to achieve the business goal in mind...but it's likely a new way of doing things, and many folks don't want to change.  Even when the change might be better.  So we customize the code.
What to do?
As a very young man, I spent some time as a Boy Scout.   I was a really lousy Boy Scout.  There was nothing I liked about the entire thing.  Later on, after I grew up, I developed a great appreciation for Scouting as a Scout Master.  Nevertheless, I was a lousy Scout and resented my folks for putting me through it.
My Scout Master was retired military.  Lots of gruffness at a time when I just wasn't ready for that. Only one thing he ever taught us stuck with me:  "when you realize you're lost, take a breath, know what ya got, and figure out how to use what you've got to get yourself unlost."
Years later, I got lost in the woods.  The advice from that Scout Master saved my life.  And it's been a great principle for me ever since, in many situations:  "know what ya got."
Most enterprise customers today don't know what they've got.  That knowledge gap leads to filing bugs that aren't really bugs and building customizations when existing functionality gets the job done.  And telling customers to RTM adds almost no value (heck, even most of the people building the software dread reading those things).  If those of us in the industry want our customers to succeed with our products, we have to help by showing and telling.  Which also means we have to earn the trust of our customers, because showing and telling achieves nothing if the audience fails to engage.
So you want to reduce the bogus bug filings?  Cut back on customizations that slow everyone down? Work with your customers.  Customer success starts there.

Attending Tech Conferences

Pythian Group - Fri, 2015-09-25 11:04


Now that I am in management, and have been for some time, I still attend tech shows and present at them a few times a year. When I came back to Pythian in February 2011, we had a handful of personnel doing sessions at the shows. Since then the number of people submitting abstracts and appearing at these shows has skyrocketed. I cannot take credit for that growth single-handedly, but I am pleased that I have been a catalyst in getting more Pythianites on the global stage where they should be. Bravo to all the new techs appearing at the shows.

I just attended the EastCoastOracle show in Raleigh/Durham USA and sat in on a steady stream of sessions and found the cloud material especially intriguing. Simon Pane, Roopesh Ramklass, Subhajit Das Chaudhuri, and Gleb Otochkin shone and had substance and pizazz in their sessions. Pythian should be proud.

The Pythian name was all over the show and we were tied with another company for most sessions. There was avid interest in what we do and I did get some “You work for Pythian!” comments from attendees. Our name is out there and getting “outer” every show.

Every show I attend I am thankful for the opportunity and Pythian's support. It sweetens the deal when I look back at the number of presenters we send now compared to 5 years ago. I'm very honoured that so many colleagues have reached out to me over the years to get over that last remaining hurdle to entering the conference presentation life …


Discover more about our expertise in Cloud and Oracle.

Categories: DBA Blogs

My Sales Journey: #4

Pythian Group - Fri, 2015-09-25 10:53


Let me start today with some great news! This week I got my first meeting… Woohoo! It happened in a totally unexpected way. It happened on Twitter. After the dozens of calls and emails I had been sending out, it happened with a quick, to-the-point tweet. It made me take a step back and look at what I was doing differently.

Sales is about reaching people with a message and hoping they get interested enough to talk to you. You have to reach people where they live. Today, people live on many different hubs. We live on Twitter, Facebook, Hootsuite, Snapchat, email, LinkedIn or in an office. People also do these things differently depending on where they are geographically. New Yorkers are more likely to hang out on social media channels than Floridians, who may love phone calls and a good chat. Reach your people where they live.

Sales is also a very creative channel. It forces you to think out of the box at all times. To be successful you must not only be creative with your messaging, you also have to be creative about how you will deliver that message. Make your message brief, relevant and fun.

I am an entrepreneur first and foremost, which makes me naturally curious about business. Reaching out to all these people is all about getting to know their business. I am genuinely interested in knowing who they are and how their business works. It is about meeting cool folks doing cool things. Be genuine. Business will follow.

It's been a good week! Do you have a story or two to share about where you meet your people? Have you had success over social media? If so, please share. I want to be inspired!



Categories: DBA Blogs

Master-Detail Pattern Implementation in ADF 12c Alta UI

Andrejus Baranovski - Fri, 2015-09-25 09:14
ADF 12c and Alta UI change the way we implement UI in ADF. Let's take a look at Master-Detail. Before Alta UI, the usual implementation for Master-Detail was based on a vertical layout with the master on top and the details below. Alta UI provides different patterns, like left-right and bottom-top, for Master-Detail implementation - Oracle Alta UI Patterns. I recommend watching a demo from Shay Shmeltzer, where he explains how to build a left-right Master-Detail - Developing Your First Oracle Alta UI page with Oracle ADF Faces. In this post I'm taking it a step further and explaining how to manage a Master-Detail relationship between different regions.

Here you can watch a demo of the sample application developed for today's post:

The country list on the left acts as a menu. The user can select any country, and this will force the regions displayed in the central area to refresh and show information based on the selected country. I'm keeping the regions in sync without Contextual Events, using a simple solution described in this post - Simple (Effective) Refresh Approach for ADF Regions.

Selection changes in the Menu region trigger list filtering in the Employees Directory tab:

Detail data displayed in the Profile tab is also filtered. The top block displays additional information for the Master record. Employees from the selected country are displayed in the form with navigation controls. The Overview block displays information about all employees from the selected country:

If the country selection is changed, detail data is refreshed to stay in sync:

The Administration tab contains a table with pagination support, also filtered based on the Master record change in Countries:

The Countries menu block is implemented in a separate region. The selection listener is overridden to process menu item selection:

The listener updates a session scope variable with the key of the selected country. The region refresh approach is implemented based on the method described in the post mentioned above:

The Content region is implemented by a TF with a router. Here we check if the Country ID TF input parameter is not empty and execute detail record filtering:

Country ID is defined as a TF input parameter:

The session scope variable updated by the Countries menu listener is set as an input value for the region parameter. Each time the session scope variable is changed (a new country is selected), the region with such an input parameter will be refreshed:

The region will be refreshed only if the Refresh property is set to ifNeeded:

It might be enough to reload only one region. If another region contains PPR-supported components (tables, forms), data can be reloaded automatically with ChangeEventPolicy = ppr set for the iterator:

Download sample application -

node-oracledb 1.2.0 is on NPM (Node.js add-on for Oracle Database)

Christopher Jones - Fri, 2015-09-25 08:28

Version 1.2 of node-oracledb, the add-on for Node.js that powers high performance Oracle Database applications, is available on NPM.

A lot of good changes have been made.

Our thanks to Bruno Jouhier from Sage for his work on adding RAW support and for fixes for LOB stability. Thanks also go to Bill Christo for pushing us on some Installation topics - look out for his full article on Windows Installation that OTN will be publishing soon.

An annotated list of the changes in this release:

  • Added support for RAW data type.

    Bruno contributed a patch to add support for the Oracle RAW data type. This data type maps to a JavaScript Buffer for inserts, queries, and for binding to PL/SQL. Binding RAW for DML RETURNING is not supported. There is an example showing inserting and querying in examples/raw1.js.

  • Added a type property to the Lob class to distinguish CLOB and BLOB types.

    This small change will allow introspection on Lob instances so applications can more easily decide how to handle the data.

  • Changed write-only attributes of Connection objects to work with console.log().

    The Connection object had three write-only attributes (action, module, clientId) used for end-to-end tracing and mid-tier authentication. Because they were write-only, anyone doing a simple console.log() on the connection object got a confusing message, often leading to the impression that the connection had failed. The attributes are write-only for the reasons described in the documentation. With the change in v1.2, a Connection object can now be displayed. The three attributes will show as null (see the doc) while the non-write-only attribute stmtCacheSize will show an actual value. With hindsight the three attributes should have been set via a setter, but they aren't.

  • Added a check to make sure maxRows is greater than zero for non-ResultSet queries.

    If you want to get metaData for a query without getting rows, specify resultSet:true and prefetchRows:0 in the query options (and remember to close the ResultSet).

  • Improved installer messages for Oracle client header and library detection on Linux, OS X and Solaris.

    Some upfront checks now aid detection of invalid environments earlier.

  • Optimized CLOB memory allocation to account for different database-to-client character set expansions.

    In line with the optimization for string buffers in v1.1, users of AL32UTF8 databases will see reduced memory consumption when fetching CLOBs.

  • Fixed a crash while reading a LOB from a closed connection

  • Fixed a crash when selecting multiple rows with LOB values.

    Another fix by Bruno.

  • Corrected the order of Stream 'end' and 'close' events when reading a LOB.

    Bruno was busy this release and sent in a pull request for this too.

  • Fixed AIX-specific REF CURSOR related failures.

  • Fixed intermittent crash while setting fetchAsString, and incorrect output while reading the value.

  • Added a check to return an NJS error when an invalid DML RETURN statement does not give an ORA error.

  • Removed non-portable memory allocation for queries that return NULL.

  • Fixed encoding issues with several files that caused compilation warnings in some Windows environments.

  • Made installation halt sooner for Node.js versions currently known to be unusable.

  • Fixed typo in examples/dbmsoutputgetline.js

Issues and questions about node-oracledb can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line!

Installation instructions are here.

Node-oracledb documentation is here.

Log Buffer #442: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-09-25 07:24

This Log Buffer Edition collects and then showers around some of the information-rich blog posts from Oracle, SQL Server and MySQL.


Oracle:

  • Generic Java Server for Static content and Directory Listing using API mode or command line mode.
  • OEM agents on each host upload data to your management servers every few minutes.
  • exitcommit … or your career down the drain.
  • Encryption is the Easy Part; Managing those Keys is Difficult.
  • Managing the OBIEE BI Server Cache from ODI 12c.

SQL Server:

  • Email addresses are very prevalent in IT systems and often used as a natural primary key. The repetitive storage of email addresses in multiple tables is a bad design choice. Following is a design pattern to store email addresses in a single table and to retrieve them efficiently.
  • OpenStack: The Good and Not-So-Good Bits.
  • By defining server- and database-level audits, you can record just about any kind of event that occurs in SQL Server, which can be an invaluable source of security troubleshooting and forensic information when security breaches occur.
  • Koen Verbeeck shows how to easily extract metadata from files in your directories with Power Query.
  • Implementing John Conway’s Game of Life in Microsoft SQL Server.


MySQL:

  • Oracle HA, DR, data warehouse loading, and license reduction through edge apps.
  • Easy Load-balancing and High-availability using MySQL Router.
  • When hosting data on Amazon turns bloodsport.
  • MySQL 5.7 Labs — Using Loopback Fast Path With Windows 8/2012.
  • How to evaluate if MySQL table can be recovered.


Learn more about Pythian’s expertise in Oracle SQL Server & MySQL.

Categories: DBA Blogs

Using the BI Server Metadata Web Service for Automated RPD Modifications

Rittman Mead Consulting - Fri, 2015-09-25 06:33

A little-known new feature of OBIEE 11g is a web service interface to the BI Server. Called the "BI Server Metadata Web Service", it gives a route into making calls to the BI Server using SOAP-based web service calls. Why is this useful? Because it means you can make any call to the BI Server (such as SAPurgeAllCache) from any machine without needing to install any OBIEE-related artefacts such as nqcmd, JDBC drivers, etc. The IT world has been evolving over the past decade or more towards a more service-based architecture (remember the fuss about SOA?) where we can piece together the functionality we need rather than having one monolithic black box trying to do everything. Being able to make use of this approach in our BI deployments is a really good thing. We can do simple things like BI Server cache management using a web service call, but we can also do more funky things, such as actually updating repository variable values in real time – and we can do it from within our ETL jobs, as part of an automated deployment script, and so on.

Calling the BI Server Metadata Web Service

First off, let’s get the web service configured and working. The documentation for the BI Server Metadata Web Service can be found here, and it’s important to read it if you’re planning on using this. What I describe here is the basic way to get it up and running.

Configuring Security

We need to configure the security against the web service to define what kind of authentication is required to use it. If you don't do this, you won't be able to make calls to it. Setting up the security is a case of attaching a security policy in WebLogic Server to the web service (called AdminService) itself. I've used oracle/wss_http_token_service_policy, which means that the credentials can be passed through using standard HTTP Basic authentication.

You can do this through Enterprise Manager:

Or you can do it through WLST using the attachWebServicePolicy call.

You also need to configure WSM, adding some entries to the credential store as detailed here.

Testing the Web Service

The great thing about web services is that they can be used from pretty much anywhere. Many software languages will have libraries for making SOAP calls, and if they don’t, it’s just HTTP under the covers so you can brew your own. For my testing I’m using the free version of SoapUI. In anger, I’d use something like curl for a standalone script or hook it up to ODI for integration in the batch.

Let’s fire up SoapUI and create a new project. Web services provide a Web Service Definition Language (WSDL) that describes what and how they work, which is pretty handy. We can pass this WSDL to SoapUI for it to automatically build some sample requests for us. The WSDL for this web service is


Where biserver is your biserver (duh) and port is the managed server port (usually 9704, or 7780).

We’ve now got a short but sweet list of the methods we can invoke:

Expand out callProcedureWithResults and double click on Request 1. This is a template SOAP message that SoapUI has created for you, based on the WSDL.

Edit the XML to remove the parameters section, and just to test things out, put GetOBISVersion() in procedureName. Your XML request should look like this:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ws="">
   <soapenv:Header/>
   <soapenv:Body>
      <ws:callProcedureWithResults>
         <procedureName>GetOBISVersion()</procedureName>
      </ws:callProcedureWithResults>
   </soapenv:Body>
</soapenv:Envelope>

If you try and run this now (green arrow or Cmd-Enter on the Mac) you'll see the exact same XML appear on the right pane, which is strange… but click on Raw and you'll see what the problem is:

Remember the security stuff we set up on the server previously? Well, as the client we now need to keep our side of the bargain, and pass across our authentication with the SOAP call. Under the covers this is a case of sending HTTP Basic auth (since we're using oracle/wss_http_token_service_policy), and how you do this depends on how you're making the call to the web service. In SoapUI you click on the Auth button at the bottom of the screen, Add New Authentication, Type: Basic, and then put your OBIEE server username/password in.

Now you can click submit on the request again, and you should see in the response pane (on the right) in the XML view details of your BI Server version (you may have to scroll to the right to see it).

This simple test is just about validating the end-to-end calling of a BI Server procedure from a web service. Now we can get funky with it….
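If you'd rather script this smoke test than click through SoapUI, the same call can be made from Python. Below is a minimal sketch using the suds library (which we'll meet again later in this article); the WSDL URL is built as described above, the credentials are placeholders, and the assumption that callProcedureWithResults takes the procedure call as a single string is worth verifying against the WSDL:

from suds.client import Client
from suds.transport.http import HttpAuthenticated

# HTTP Basic credentials, matching the oracle/wss_http_token_service_policy
# attached to the AdminService earlier
transport = HttpAuthenticated(username='weblogic', password='Password01')
client = Client('http://biserver:9704/AdminService/AdminService?wsdl',
                transport=transport)

# Same request as the SoapUI example: invoke a BI Server procedure by name
result = client.service.callProcedureWithResults('GetOBISVersion()')
print(result)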

Updating Repository Variables Programmatically

Repository variables in OBIEE are “global”, in that every user session sees the same value. They’re often used for holding things like “what was the last close of business date”, or “when was the data last updated in the data warehouse”. Traditionally these have been defined as dynamic repository variables with an initialisation block that polled a database query on a predefined schedule to get the current value. This meant that to have a variable showing when your data was last loaded you’d need to (a) get your ETL to update a row in a database table with a timestamp and then (b) write an init block to poll that table to get the value. That polling of the table would have to be as frequent as you needed in order to show the correct value, so maybe every minute. It’s kinda messy, but it’s all we had. Here I’d like to show an alternative approach.

Let’s say we have a repository variable, LAST_DW_REFRESH. It’s a timestamp that we use in report predicates and titles so that when they are run we can show the correct data based on when the data was last loaded into the warehouse. Here’s a rather over-simplified example:

The Title view uses the code:

Data correct as of: @{biServer.variables['LAST_DW_REFRESH']}

Note that we’re referencing the variable here. We could also put it in the Filter clause of the analysis. Somewhat tenuously, let’s imagine we have a near-realtime system that we’re federating a DW across direct from OLTP, and we just want to show data from the last load into the DW:

For the purpose of this example, the report is less important than the diagnostics it gives us when run, in nqquery.log. First we see the inbound logical request:

-------------------- SQL Request, logical request hash:
SELECT
   0 s_0,
   "A - Sample Sales"."Time"."T05 Per Name Year" s_1,
   "A - Sample Sales"."Base Facts"."1- Revenue" s_2
FROM "A - Sample Sales"
WHERE
("Time"."T00 Calendar Date" <  VALUEOF("LAST_DW_REFRESH"))

Note the VALUEOF clause. When this is parsed out we get to see the actual value of the repository variable that OBIEE is going to execute the query with:

-------------------- Logical Request (before navigation): [[
RqList
    0 as c1 GB,
    D0 Time.T05 Per Name Year as c2 GB,
    1- Revenue:[DAggr(F0 Sales Base Measures.1- Revenue by [ D0 Time.T05 Per Name Year] )] as c3 GB
DetailFilter: D0 Time.T00 Calendar Date < TIMESTAMP '2015-09-24 23:30:00.000'
OrderBy: c1 asc, c2 asc NULLS LAST

We can see the value through the Administration Tool Manage Sessions page too, but it’s less convenient for tracking in testing:

If we update the RPD online with the Administration Tool (nothing fancy at this stage) to change the value of this static repository variable:


And then rerun the report, we can see the value has changed:

-------------------- Logical Request (before navigation): [[
RqList
    0 as c1 GB,
    D0 Time.T05 Per Name Year as c2 GB,
    1- Revenue:[DAggr(F0 Sales Base Measures.1- Revenue by [ D0 Time.T05 Per Name Year] )] as c3 GB
DetailFilter: D0 Time.T00 Calendar Date < TIMESTAMP '2015-09-25 00:30:00.000'
OrderBy: c1 asc, c2 asc NULLS LAST

Now let’s do this programatically. First off, the easy stuff, that’s been written about plenty before. Using biserverxmlgen we can create a XUDML version of the repository. Searching through this we can pull out the variable definition:

<Variable name="LAST_DW_REFRESH" id="3031:286125" uid="00000000-1604-1581-9cdc-7f0000010000">
<Expr><![CDATA[TIMESTAMP '2015-09-25 00:30:00']]></Expr>
</Variable>

and then wrap it in the correct XML structure and update the timestamp we want to use:

<?xml version="1.0" encoding="UTF-8" ?>
<Repository xmlns:xsi="">
    <DECLARE>
        <Variable name="LAST_DW_REFRESH" id="3031:286125" uid="00000000-1604-1581-9cdc-7f0000010000">
        <Expr><![CDATA[TIMESTAMP '2015-09-26 00:30:00']]></Expr>
        </Variable>
    </DECLARE>
</Repository>

(you can also get this automagically by modifying the variable in the RPD, saving the RPD file, and then comparing it to the previous copy to generate a patch file with the Administration Tool or comparerpd)

Save this XML snippet, with the updated timestamp value, as last_dw_refresh.xml. Now to update the value on the BI Server in-flight, first using the OBIEE tool biserverxmlcli. This is available on all OBIEE servers and client installations – we’ll get to web services for making this update call remotely in a moment.

biserverxmlcli -D AnalyticsWeb -R Admin123 -U weblogic -P Admin123 -I last_dw_refresh.xml

Here -D is the BI Server DSN, -U / -P are the username/password credentials for the server, and -R is the RPD password.

Running the analysis again shows that it is now working with the new value of the variable:

-------------------- Logical Request (before navigation): [[
RqList
    0 as c1 GB,
    D0 Time.T05 Per Name Year as c2 GB,
    1- Revenue:[DAggr(F0 Sales Base Measures.1- Revenue by [ D0 Time.T05 Per Name Year] )] as c3 GB
DetailFilter: D0 Time.T00 Calendar Date < TIMESTAMP '2015-09-26 00:30:00.000'
OrderBy: c1 asc, c2 asc NULLS LAST
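Before we move on to the web service route, note that automating this biserverxmlcli approach from an ETL batch only takes a few lines of scripting. Here's a minimal Python sketch under the same assumptions as the examples above (DSN, credentials, and the variable's id/uid are the ones from this demo RPD; the xsi namespace is the standard XML Schema one, stripped from the snippets above): it stamps the current time into the XUDML and shells out to biserverxmlcli.

import subprocess
from datetime import datetime

# XUDML template based on the snippet above; id/uid come from this example RPD
XUDML = """<?xml version="1.0" encoding="UTF-8" ?>
<Repository xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <DECLARE>
        <Variable name="LAST_DW_REFRESH" id="3031:286125" uid="00000000-1604-1581-9cdc-7f0000010000">
        <Expr><![CDATA[TIMESTAMP '{ts}']]></Expr>
        </Variable>
    </DECLARE>
</Repository>"""

# Write the XUDML with the current timestamp
with open('last_dw_refresh.xml', 'w') as f:
    f.write(XUDML.format(ts=datetime.now().strftime('%Y-%m-%d %H:%M:%S')))

# Same biserverxmlcli call as above; assumes the tool is on the PATH
subprocess.check_call(['biserverxmlcli', '-D', 'AnalyticsWeb', '-R', 'Admin123',
                       '-U', 'weblogic', '-P', 'Admin123',
                       '-I', 'last_dw_refresh.xml'])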

Getting Funky – Updating RPD from Web Service

Let’s now bring these two things together – RPD updates (in this case to update a variable value, but could be anything), and BI Server calls.

In the above web service example I called the very simple GetOBISVersion. Now we’re going to use the slightly more complex NQSModifyMetadata. This is actually documented, and what we’re going to do is pass across the same XUDML that we sent to biserverxmlcli above, but through the web service. As a side note, you could also do this over JDBC if you wanted (under the covers, AdminService is just a web app with a JDBC connector to the BI Server).

I’m going to do this in SoapUI here for clarity but I actually used SUDS to prototype it and figure out the exact usage.

So as a quick recap, this is what we’re going to do:

  1. Update the value of a repository variable, using XUDML. We generated this XUDML from biserverxmlgen and wrapped it in the correct XML structure.

    We could also have got it through comparerpd or the Administration Tool ‘create patch’ function
  2. Use the BI Server’s NQSModifyMetadata call to push the XUDML to the BI Server online.

    We saw that biserverxmlcli can also be used as an alternative to this for making online updates through XUDML.
  3. Use the BI Server Metadata Web Service (AdminService) to invoke the NQSModifyMetadata call on the BI Server, using the callProcedureWithResults method

In SoapUI create a new request:

Set up the authentication:

Edit the XML SOAP message to remove parameters and specify the basic BI Server call:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ws="">
   <soapenv:Header/>
   <soapenv:Body>
      <ws:callProcedureWithResults>
         <procedureName>NQSModifyMetadata()</procedureName>
      </ws:callProcedureWithResults>
   </soapenv:Body>
</soapenv:Envelope>

Now the tricky bit – we need to cram the XUDML into our XML SOAP message. But XUDML is its own kind of XML, and all sorts of grim things happen here if we're not careful (because the XUDML gets swallowed up into the SOAP message, it all being XML). The solution I came up with (which may not be optimal…) is to encode all of the HTML entities, which a tool like this does (if you're using a client library like SUDS it will happen automagically). So our XUDML, with another new timestamp for testing:

<?xml version="1.0" encoding="UTF-8" ?>
<Repository xmlns:xsi="">
    <DECLARE>
        <Variable name="LAST_DW_REFRESH" id="3031:286125" uid="00000000-1604-1581-9cdc-7f0000010000">
        <Expr><![CDATA[TIMESTAMP '2015-09-21 00:30:00']]></Expr>
        </Variable>
    </DECLARE>
</Repository>


&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot; ?&gt; &lt;Repository xmlns:xsi=&quot;;&gt; &lt;DECLARE&gt; &lt;Variable name=&quot;LAST_DW_REFRESH&quot; id=&quot;3031:286125&quot; uid=&quot;00000000-1604-1581-9cdc-7f0000010000&quot;&gt; &lt;Expr&gt;&lt;![CDATA[TIMESTAMP '2015-09-21 00:30:00']]&gt;&lt;/Expr&gt; &lt;/Variable&gt; &lt;/DECLARE&gt; &lt;/Repository&gt;
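If you're scripting this encoding step rather than pasting into an online converter, Python's standard library produces much the same result (a sketch, reading the XUDML file we saved earlier):

from xml.sax.saxutils import escape

with open('last_dw_refresh.xml') as f:
    xudml = f.read()

# escape() encodes &, < and > by default; the extra mapping also encodes
# double quotes, matching the encoded output shown above
print(escape(xudml, {'"': '&quot;'}))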

We’re not quite finished yet. Because this is actually a call (NQSModifyMetadata) nested in a call (callProcedureWithResults) we need to make sure NQSModifyMetadata gets the arguments (the XUDML chunk) through intact, so we wrap it in single quotes – which also need encoding (&apos;):

&apos;&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot; ?&gt; &lt;Repository xmlns:xsi=&quot;;&gt; &lt;DECLARE&gt; &lt;Variable name=&quot;LAST_DW_REFRESH&quot; id=&quot;3031:286125&quot; uid=&quot;00000000-1604-1581-9cdc-7f0000010000&quot;&gt; &lt;Expr&gt;&lt;![CDATA[TIMESTAMP '2015-09-21 23:30:00']]&gt;&lt;/Expr&gt; &lt;/Variable&gt; &lt;/DECLARE&gt; &lt;/Repository&gt;&apos;

and then, for good measure, the single quotes around the timestamp need double-single quoting:

&apos;&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot; ?&gt; &lt;Repository xmlns:xsi=&quot;;&gt; &lt;DECLARE&gt; &lt;Variable name=&quot;LAST_DW_REFRESH&quot; id=&quot;3031:286125&quot; uid=&quot;00000000-1604-1581-9cdc-7f0000010000&quot;&gt; &lt;Expr&gt;&lt;![CDATA[TIMESTAMP ''2015-09-21 23:30:00'']]&gt;&lt;/Expr&gt; &lt;/Variable&gt; &lt;/DECLARE&gt; &lt;/Repository&gt;&apos;

Nice, huh? The WSDL suggests that parameters for these calls should be able to be placed within the XML message as additional entities, which I wondered might allow for proper encoding, but I couldn't get it to work (I kept getting java.sql.SQLException: Parameter 1 is not bound).

So, stick this mess of encoding plus twiddles into your SOAP message and it should look like this: (watch out for line breaks; these can break things)

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ws="">
   <soapenv:Header/>
   <soapenv:Body>
      <ws:callProcedureWithResults>
         <procedureName>NQSModifyMetadata(&apos;&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot; ?&gt; &lt;Repository xmlns:xsi=&quot;;&gt; &lt;DECLARE&gt; &lt;Variable name=&quot;LAST_DW_REFRESH&quot; id=&quot;3031:286125&quot; uid=&quot;00000000-1604-1581-9cdc-7f0000010000&quot;&gt; &lt;Expr&gt;&lt;![CDATA[TIMESTAMP ''2015-09-21 23:30:00'']]&gt;&lt;/Expr&gt; &lt;/Variable&gt; &lt;/DECLARE&gt; &lt;/Repository&gt;&apos;)</procedureName>
      </ws:callProcedureWithResults>
   </soapenv:Body>
</soapenv:Envelope>

Hit run, and with a bit of luck you'll get a "nothing to report" response:

If you look in nqquery.log you’ll see:

[...] NQSModifyMetadata started.  
[...] NQSModifyMetadata finished successfully.

and, all-importantly, when you run your report, the updated variable will be used:

-------------------- Logical Request (before navigation): [[
RqList
    0 as c1 GB,
    D0 Time.T05 Per Name Year as c2 GB,
    1- Revenue:[DAggr(F0 Sales Base Measures.1- Revenue by [ D0 Time.T05 Per Name Year] )] as c3 GB
DetailFilter: D0 Time.T00 Calendar Date < TIMESTAMP '2015-09-21 23:30:00.000'
OrderBy: c1 asc, c2 asc NULLS LAST

If this doesn’t work … well, best of luck. Use nqquery.log, bi_server1.log (where the AdminService writes some diagnostics) to try and trace the issue. Also test calling NQSModifyMetadata from JDBC (or ODBC) directly, and then add in the additional layer of the web service call.

So, in the words of Mr Rittman, there you have it. Programmatically updating the value of repository variables, or anything else, done online, and for bonus points done through a web service call, making it possible to use without any local OBIEE client/server tools.

Categories: BI & Warehousing

Downloading VirtualBox VM “Oracle Enterprise Manager 12cR5”

Marco Gralike - Fri, 2015-09-25 06:11
I needed an Oracle Enterprise Manager Cloud Control VirtualBox environment for some customer training, with…

WebLogic / Oracle FMW to RAC Database connection : Using Active GridLink ?

Online Apps DBA - Fri, 2015-09-25 05:40

Oracle WebLogic Server is an application server and one of the key products in Oracle Fusion Middleware. Oracle WebLogic Server connects to a database using a data source, and if your database is Oracle RAC (almost all big enterprises use a RAC database for high availability at the database tier), you can use one of two options to connect:

a) Multi Data Source – the old way of connecting Oracle WebLogic Server to a RAC database

b) GridLink – the new and better way of connecting Oracle WebLogic (and FMW products) to a RAC database

We cover both of these methods for the WebLogic Server to RAC Database connection in our Oracle Fusion Middleware Training (next batch starts on 17th October, 2015 - register now for the early bird discount, limited time only). Apart from this, we also cover Architecture, High Availability, Disaster Recovery, Patching, Cloning, Installation, OHS-WebLogic Integration, SOA, WebCenter and lots more.

We also provide consulting services for Oracle Fusion Middleware, where we recommended GridLink for WebLogic to RAC database connectivity to one of our customers. The customer wanted to use GridLink with WebLogic 10.3.6, but with the FAN option unchecked and no ONS details.

If you select GridLink, the configuration screen prompts you to provide FAN & ONS details (more on ONS later).



We thought we'd discuss here the risk of using GridLink without providing ONS details (or without selecting the Enable FAN option), but before that, here are the features that ONS (Oracle Notification Service) and FAN (Fast Application Notification) provide:

  • Fast Connection Failover (FCF)
  • Runtime Connection Load-Balancing by distributing the runtime work requests to all active Oracle RAC instances, including those rejoining a cluster
  • Graceful shutdown of RAC instances
  • Aborting and removing invalid connections from the connection pool

Connections in the pool are actively managed using the real-time information the connection pool receives from the RAC Oracle Notification Service (ONS).

According to Bug 18467939, in WebLogic 11g (10.3.6), if you don't configure ONS and leave the FAN option unchecked, the data source is treated as a generic data source (meaning you are effectively using a single-instance database, not RAC).

From WebLogic Server 12c (12.1.2), a new Active GridLink flag was introduced in the console, making it possible to specify a GridLink configuration even if ONS is not set and FAN enabled is false.

That’s because there is a new 12c driver feature that automatically configures ONS without it being specified.

In WebLogic 11g (10.3.6) this driver is not supported; therefore, Active GridLink is defined as both FAN enabled set to true and ONS configured. If either piece of information is missing, the data source is treated as generic.



Are you using WebLogic or Fusion Middleware with an Oracle RAC database and not using GridLink?

Contact us if you need any help with your Fusion Middleware implementation or WebLogic to RAC connection.


If you need to learn Oracle Fusion Middleware (Architecture, Installation, High Availability, Disaster Recovery, Patching, Cloning, SSL, etc.), then have a look at our Oracle Fusion Middleware Training (next batch starts on 17th October, 2015 – register now for the early bird discount, limited time only).

Note: We are so confident in our workshops that we provide a 100% money-back guarantee. In the unlikely case you are not happy after 2 sessions, just drop us a mail before the third session and we'll refund the FULL amount (or ask any of the 100s of happy trainees in our private Facebook group).

Looking for more information or FREE tips on Oracle administration? Join our newsletter; or, for topics specific to Oracle Fusion Middleware, leave your name & email here (register for both if you wish to learn about Oracle Apps/Fusion Middleware/Identity & Access Management, etc.).


The post WebLogic / Oracle FMW to RAC Database connection : Using Active GridLink ? appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

ORDS... Yes we can!

Dimitri Gielis - Fri, 2015-09-25 03:55
On September 13th I got a nice surprise: an email from Steven Feuerstein with the message "You have been selected as a finalist in the ORDS category for the 2015 Oracle Database Developer Choice Awards!"

It's always nice to get recognition for your efforts, so thanks so much for the nomination. I haven't publicised it yet in order to get some up-votes, but I hope that by reaching out to the people who read my blog, I'll gain some more :)


To be honest, I wondered why I deserved this nomination, especially in a category that I'm probably less "known" for.
After giving this more thought, I remembered that ORDS actually started out very much linked to APEX. In the early days ORDS was even called the "APEX Listener", and I was one of the early adopters and promoters of using the APEX Listener (now ORDS) in your APEX architecture.

In 2012 I literally travelled around the world (Belgium, the Netherlands, UK, San Francisco, San Antonio, Uruguay, Brazil, Chile) to talk about why you should move to the APEX Listener.

Moving to the APEX Listener from Dimitri Gielis
When looking back at that presentation, it still stands today: you still have different options for your APEX architecture, but we no longer have to convince anyone about the benefits of choosing ORDS. ORDS is now mainstream, widely adopted, proven technology. Unless it's a legacy system, I don't really see any reason why you should not use ORDS in your architecture.

ORDS evolved a lot over time, and the new name better reflects what the core feature is: "Oracle REST Data Services". REST web services have become so important in the last few years, and that is exactly what I've been blogging and talking about lately (see further).

In my presentation on Microsoft Sharepoint and Oracle APEX integration (given in San Francisco, the BeNeLux and Germany) I talk about the architecture and how you get your APEX data into Microsoft Sharepoint by using ORDS. It works the other way round too: when using Sharepoint APIs, REST web services come into play. When you want to integrate with other systems, ORDS can really help.

Oracle Application Express (APEX) and Microsoft Sharepoint integration from Dimitri Gielis
I haven't blogged much yet about a really interesting R&D project we've been working on over the last months - using wearables to capture sensor and patient data. At the Oracle Mobile day in the Netherlands I gave a presentation which explains it in more detail. We developed native smartphone applications that call REST web services, all built in ORDS. ORDS is getting or pushing the data into our Oracle Database using JSON. Next to that, we have dashboards in APEX to "see" and work with the data. We learned, and still learn, a lot from this project: about the volume of data, the security, etc. But without ORDS it would not have been possible.

APEX Wearables from Dimitri Gielis
And finally, a product I'm really proud of: APEX Office Print (AOP). I've always found it a challenge to get documents out of APEX. I'll do some more blog posts about it in the future, but where APEX is so declarative for building your web applications, it's far from declarative to get documents out in Word, Excel, Powerpoint or PDF (at least without BI Publisher). With APEX Office Print we hope to address that. Just one APEX plugin where you define the template and which data you want to use, and presto, you get your document. Behind the scenes we use JSON, and having ORDS makes it really easy to generate it. If you want to know more about the components behind the printing solution (and do it yourself), you'll find that in my presentation about Printing through Node.js, which I presented in different countries and will present at Oracle Open World.

How to make APEX print through Node.js from Dimitri Gielis
Furthermore, if you're interested in reading more about JSON, I've done a series of blog posts and was interviewed by Oracle about it; you'll find the links here.

Hopefully this post shows you the power of ORDS and gives you some ideas of how best to leverage this wonderful piece of software. If you liked my "ORDS efforts" and want to give me an up-vote in the Database Developer Choice Awards, I'd really appreciate it.

Thanks so much,

Categories: Development


Partner Webcast – Why and How Your Business changes with Oracle Cloud

We are facing a new "revolution" in the business world. The capabilities that modern technology provides awaken new expectations, needs and wishes in consumers/customers, and enable businesses to...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle Priority Service Infogram for 24-SEP-2015

Oracle Infogram - Thu, 2015-09-24 15:36

Oracle Data Masking and Subsetting sessions in Open World 2015, from Oracle Data Masking and Subsetting.
Mobile @ Oracle OpenWorld 2015, from The Oracle Mobile Platform Blog.
SOA & Java
Unleash the power of Java API’s on your WLST scripts! from SOA & BPM Partner Community Blog.
From the same source: Enable SOA Composer in SOA Suite 11
From The Java Source:
Perspectives on Java Evolution!
Java ME Embedded 8.2 Release!
Concurrency on the JVM
Lots of WebLogic goodies this week from WebLogic Partner Community EMEA: Extending the Weblogic console by adding Java classes
WebLogic Partner Community Newsletter September 2015
Accessing WebLogic Logs via REST
Overview of WebLogic 12c RESTful Management Services
And from Oracle Partner Hub: ISV Migration Center Team:
WebLogic on ODA: Final release and new partner supported model
Patch Set Update: Hyperion Essbase, from Business Analytics - Proactive Support.
Configuring Secure NFS in Solaris 11, from What the krowteN?
New Demantra Log File Parser in PERL, from the Oracle Demantra blog.
From the Oracle E-Business Suite Support blog:
New EBS Oracle Receivables Period Close Analyzer
Webcast: EBS HCM Updates for Employer Shared Responsibility Reporting Under the Affordable Care Act
NEW! EBS Support Analyzer Bundle Menu Tool (Doc ID 1939637.1)
NEW!!! Submit your Enhancement Requests for Oracle Payables via the Community
From the Oracle E-Business Suite Technology blog:
Consultations with ATG Development at OpenWorld 2015

EBS Support Policy for Third-Party Client Components

Managing the OBIEE BI Server Cache from ODI 12c

Rittman Mead Consulting - Thu, 2015-09-24 13:47

I wrote recently about the OBIEE BI Server Cache - how useful it can be, but also how important it is to manage it properly, both in the purging of stale data and the seeding of new. In this article I want to show how to walk the walk and not just talk the talk (WAT? But you're a consultant?!). ODI is the premier data integration tool on the market and one that we are great fans of here at Rittman Mead. We see a great many analytics implementations built with ODI for the data load (ELT, strictly speaking, rather than ETL) and then OBIEE for the analytics on top. Managing the BI Server cache from within your ODI batch makes a huge amount of sense. By purging and reseeding the cache directly after the data has been loaded into the database we can achieve optimal cache usage with no risk of stale data.

There are two options for cleanly hooking into OBIEE from ODI 12c with minimal fuss: JDBC, and Web Services. JDBC requires the OBIEE JDBC driver to be present on the ODI Agent machine, whilst Web Services have zero requirement on the ODI side, but a bit of config on the OBIEE side.

Setting up the BI Server JDBC Driver and Topology

Here I’m going to demonstrate using JDBC to connect to OBIEE from ODI. It’s a principle that was originally written up by Julien Testut here. We take the OBIEE JDBC driver bijdbc.jar from $FMW_HOME/Oracle_BI1/bifoundation/jdbc and copy it to our ODI machine. I’m just using a local agent for my testing, so put it in ~/.odi/oracledi/userlib/. For a standalone agent it should go in $AGENT_HOME/odi/agent/lib.

[oracle@ip-10-103-196-207 ~]$ cd /home/oracle/.odi/oracledi/userlib/  
[oracle@ip-10-103-196-207 userlib]$ ls -l  
total 200  
-rw-r----- 1 oracle oinstall    332 Feb 17  2014 additional_path.txt  
-rwxr-xr-x 1 oracle oinstall 199941 Sep 22 14:50 bijdbc.jar

Now fire up ODI Studio, sign in to your repository, and head to the Topology pane. Under Physical Architecture -> Technologies you’ll see Oracle BI.

Right click and select New Data Server. Give it a sensible name and put your standard OBIEE credentials (eg. weblogic) under the Connection section. Click the JDBC tab and click the search icon to the right of the JDBC Driver text box. Select the default driver, and then in the JDBC Url box put your server and port (9703, unless you’ve changed the listen port of the OBIEE BI Server).
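
The JDBC Url uses the BI Server’s own protocol prefix. As a sketch (the hostname below is a placeholder for your own server):

jdbc:oraclebi://obiee-server.example.com:9703/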

Now click Test Connection (save the data server when prompted, and click OK at the message about creating a physical schema), and select the Local Agent with which to run it. If you get an error then click Details to find out the problem.

One common problem can be the connection through to the OBIEE server port, so to cut ODI out of the equation try this from the command prompt on your ODI machine (assuming it’s *nix; <obiee-host> below is a placeholder for your OBIEE server’s hostname):

nc -vz <obiee-host> 9703

If the host resolves correctly and the port is open then you should get:

Connection to <obiee-host> 9703 port [tcp/*] succeeded!

If not you’ll get something like:

nc: connect to <obiee-host> port 9703 (tcp) failed: Connection refused

Check the usual suspects – firewall (eg iptables) on the OBIEE server, firewalls on the network between the ODI and OBIEE servers, etc.

Assuming you’ve got a working connection you now need to create a Physical Schema. Right click on the new data server and select New Physical Schema.

OBIEE’s BI Server acts as a “database” to clients, within which there are “schemas” (Subject Areas) and “tables” (Presentation Tables). On the New Physical Schema dialog you just need to set Catalog (Catalog), and when you click the drop-down you should see a list of the Subject Areas within your RPD. Pick one – it doesn’t matter which.
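
For illustration, this is what a logical SQL query against the BI Server looks like, selecting a Presentation Table column from a Subject Area (the names here are hypothetical):

SELECT "Dim Product"."Product Name" FROM "Sales"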

Save the physical schema (ignore the context message). At this point the new data server, with its physical schema beneath it, should be visible under Oracle BI in your Physical Architecture.

Now under Logical Architecture locate the Oracle BI technology, right click on it and select New Logical Schema. From the Physical Schemas dropdown select the one that you’ve just created. Give a name to the Logical Schema.

Your Logical Architecture for Oracle BI should now show the new logical schema.

Building the Cache Management Routine

Full Cache Purge

Over in the Designer tab go to your ODI project into which you want to integrate the OBIEE cache management functions. Right click on Procedures and select Create New Procedure. Give it a name such as OBIEE Cache – Purge All and set the Target Technology to Oracle BI.

Switch to the Tasks tab and add a new Task. Give it a name, and set the Schema to the logical schema that you defined above. Under Target Command enter the call you want to make to the BI Server, which in this case is:

call SAPurgeAllCache();
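
(SAPurgeAllCache has documented siblings – SAPurgeCacheByDatabase, SAPurgeCacheByQuery, and SAPurgeCacheByTable – the last of which we’ll use below for partial purges.)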

Save the procedure and then from the toolbar menu click on Run. Over in the Operator tab you should see the session appear and soon after complete – all being well – successfully.

You can go and check your BI Server Cache from the OBIEE Administration Tool to confirm that it is now empty, and confirm it through Usage Tracking.

From what I can see at the default log levels, nothing gets written to either nqquery.log or nqserver.log for this action unless there is an error in your syntax, in which case it is logged in nqserver.log. (For more information on that particular error see here.)

Partial Cache Purge

This is the same pattern as above – create an ODI Procedure to call the relevant OBIEE command, which for purging by table is SAPurgeCacheByTable. We’re going to get a step fancier now and add a variable that we can pass in, so that the Procedure is reusable throughout the ODI execution for different tables.

First off, create a new ODI Variable that will hold the name of the table to purge. If you’re working with multiple RPD Physical Database/Catalog/Schema objects you’ll want variables for those too.

Now create a Procedure as before, with the same settings as above but a different Target Command, based on SAPurgeCacheByTable and passing in the four parameters as single-quoted, comma-separated values. Note that these are the Database/Catalog/Schema/Table as defined in the RPD, so “Database” is not your TNS or anything like that; it’s whatever it’s called in the RPD Physical layer. The same goes for the other three identifiers. If there’s no Catalog (and often there isn’t) just leave it blank.

When including ODI variable(s) make sure you still single-quote them. Sketched with illustrative variable names, the command should look something like this:
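
call SAPurgeCacheByTable('#BIEE_DATABASE', '#BIEE_CATALOG', '#BIEE_SCHEMA', '#TABLE_NAME');

(The variable names BIEE_DATABASE, BIEE_CATALOG, BIEE_SCHEMA, and TABLE_NAME are assumptions for this sketch; substitute the ODI variables you created above.)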

Now let’s seed the OBIEE cache with a couple of queries, one of which uses the physical table and one of which doesn’t. When we run our ODI Procedure, the cache entry for the affected table is purged while the other remains, and Usage Tracking confirms the purge command that was run.

Cache Seeding

As before, we use an ODI Procedure to call the relevant OBIEE command. To seed the cache we can use SASeedQuery, which strictly speaking isn’t documented, but a quick perusal of nqquery.log when you run a cache-seeding OBIEE Agent shows that it is what gets called in the background, so we’re going to use it here (and it’s mentioned in support documents on My Oracle Support, so it’s not a state secret). The documentation here gives some useful advice on what you should be seeding the cache with: not necessarily only exact copies of the dashboard queries that you want to get a cache hit for.

Since this is a cookie-cutter of what we just did previously, you can use the Duplicate Selection option in ODI Designer to clone one of the other OBIEE Cache procedures that you’ve already created. Amend the Target Command to a SASeedQuery call such as:
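
call SASeedQuery('SELECT "Dim Product"."Product Name" FROM "Sales"');

(The logical SQL here is a hypothetical placeholder; seed with queries matching the reports you want served from the cache.)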

When you run this you should see a positive confirmation in the nqserver.log of the cache seed:

[2015-09-23T23:23:10.000+01:00] [OracleBIServerComponent] [TRACE:3]  
[USER-42] [] [ecid: 005874imI9nFw000jzwkno0007q700008K,0] [tid: 9057d700]  
[requestid: 477a0002] [sessionid: 477a0000] [username: weblogic]  
Query Result Cache: [59124]  
The query for user 'weblogic' was inserted into the query result cache.  
The filename is '/app/oracle/biee/instances/instance1/bifoundation/OracleBIServerComponent/coreapplication_obis1/cache/NQS__735866_84190_2.TBL'. [[

A very valid alternative to calling SASeedQuery would be to call the OBIEE SOA Web Service to trigger an OBIEE Agent that populates the cache (by setting ‘Destination’ to ‘Oracle BI Server Cache (For seeding cache)’). OBIEE Agents can also be ‘daisy chained’ so that one Agent calls another on completion, meaning that ODI could kick off a single ‘master’ OBIEE Agent which then triggers multiple ‘secondary’ OBIEE Agents. The advantage of this approach over SASeedQuery is that the queries worth seeding are likely to change as OBIEE usage patterns do, and it is easier for OBIEE developers to maintain all the cache-seeding logic within ‘their’ area (the OBIEE Presentation Catalog) than to raise a change request with the ODI developers each time it needs to change.

Integrating it in the ODI batch

You’ve two options here: Packages or Load Plans. Load Plans were introduced in ODI 11g and are a clearer and more flexible way of orchestrating the batch.

To use it in a Load Plan, create a serial step that calls a mapping followed by the procedure to purge the affected table; in the procedure step of the Load Plan, set the value for the variable. At the end of the Load Plan, call the OBIEE cache seed step.

Alternatively, to integrate the above procedures into a Package instead of a Load Plan, you need to add two steps per mapping: first the variable is updated to hold the name of the table just loaded, and then the OBIEE cache is purged for the affected table. At the end of the flow a call is made to reseed the cache.

These are some very simple examples, but hopefully illustrate the concept and the powerful nature of integrating OBIEE calls directly from ODI. For more information about OBIEE Cache Management, see my post here.

Categories: BI & Warehousing


Meet Pythian at AWS re:Invent 2015

Pythian Group - Thu, 2015-09-24 12:21


We’re pumped to be sponsoring our first AWS re:Invent show on October 6-9 at The Venetian in Las Vegas!

As an AWS Advanced Consulting Partner with AWS accredited experts on our team, we’ve helped our clients design, architect, build, migrate, and manage their workloads and applications on AWS.

If you’re going to be at the show next week, stop by our booth (#1255) or book a one-on-one session with one of our leading AWS technical experts to get answers to some of your biggest cloud challenges. From strategy development and workload assessment to tool selection and advanced cloud capabilities, we can help you at any stage of your cloud journey.

Don’t miss out! Schedule a one-on-one session with one of our technical experts:

Alex Gorbachev – CTO, Oracle ACE Director, Cloudera Champion of Big Data, specializes in cloud strategy and architecture, data and big data in the cloud, securing data in the cloud, and IoT strategy.

Aaron Lee – VP Transformation Services, specializes in cloud strategy and transformation solutions, DevOps, big data, advanced analytics, and application platforms and migrations to AWS.

Dennis Walker – Director, Engineering, specializes in cloud solution architecture and DevOps solutions for AWS.

Categories: DBA Blogs

EMEA Partners: Oracle Cloud Platform: Development Workshop for Partners (JCS, DevCS, MCS)

The Oracle team is pleased to invite your Java developers and consultants to a 5-day hands-on workshop on how to develop, deploy and manage Java applications on the Oracle Cloud Platform. Oracle will...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Index Advanced Compression: Multi-Column Index Part II (Blow Out)

Richard Foote - Thu, 2015-09-24 02:00
I previously discussed how Index Advanced Compression can automatically determine not only the correct number of columns to compress, but also the correct number of columns to compress within specific leaf blocks of the index. However, this doesn’t mean we can just order the columns within the index without due consideration from a “compression” perspective. As […]
Categories: DBA Blogs