
Feed aggregator

IoT Hackathon Field Report: Mexico Edition

Oracle AppsLab - Mon, 2015-05-25 12:02

I recently ventured down to Mexico to participate in an Internet of Things (IoT) hackathon organized by Laurie Pattison’s (@lsptahoe) Apps UX Innovation Events team with some of my fellow AppsLab members, Luis Galeana, Tony Orciuoli, and Osvaldo Villagrana.


Being the lone non-developer, I wasn’t sure how much I would be able to contribute—but I had done some research pertaining to our use case, so I felt I had at least that much to offer.

Our rather illustrious use case pertained to a perennial workplace problem: lines to use the bathroom. In MDC, there is a preponderance of men, so waiting can apparently be an issue. Some of my research found that elsewhere, where women outnumber men, lines for the women’s bathroom in the workplace can be a serious annoyance.

Thus was born what was originally playfully titled Bathroom Management (BM), though we ended up with Presence, which would work more generally as a presence management system that could also handle conference room reservations, among other things.


I had never been part of a hackathon before, but I quickly discovered the appeal. As a lover of deadlines, with fond memories of late-night coding (definitely the best time for it), I found the format suited me well. With free snacks and beverages, food carts for lunch and dinner, and a beautiful view from the 9th floor of the MDC office, it was an excellent setting.

I was able to help intermittently with thinking through some of the logic of our scheduling system, and with our pitch at the end, so I did feel I added something, even if the lion’s share of the work was done by the other three. Being a two-day hackathon, we had one late night, which I stuck around for, and ended up reading about and playing with Python, in the hopes it might come in handy. It didn’t, but there’s always next time.

Our presentation of Presence garnered some good laughs, which we didn’t quite expect, but at least everyone was engaged. We had a great demo showing our scheduling system for bathroom stalls, which included proximity sensors, sounds, and displays in the stall, and a web interface for scheduling, as well as IM, phone, and watch notifications when the stall you reserved becomes free.

We came in third, after two other solid entries, and took home the People’s Choice award, perhaps because our solution filled a real need in the office! I did learn a lot from the other winners, particularly on how we could have pitched it better to highlight the enterprise applicability. So again, there’s always next time.

All in all it was a great experience, and I hope I have another chance to do it again in the future.


Upgrading to APEX 5 on Oracle XE 11g

The Anti-Kyte - Mon, 2015-05-25 07:27

It’s a Bank Holiday weekend here in the UK.
This is usually a time for doing odd-jobs as a distraction from watching the rain come down.
This time around, rather than subject you to another lament about the Great British Summer ( or lack thereof), I’m going to go through the steps needed to install APEX5 on Oracle 11gXE.

Now, I know that the documentation doesn’t mention Express Edition.
I also know that the instructions that Oracle do have for upgrading APEX on XE haven’t yet been updated to account for APEX5.
I know this because I’ve spent a wet Bank Holiday finding this stuff out the hard way so that (hopefully), you don’t have to.
What I’m going to cover here is :

  • Pre-installation checks
  • Getting APEX5
  • Installation
  • Configuration

I would say “let’s get cracking before the sun comes out”, but that would only give us until around the second week in July…

The environment

I’m doing this on Oracle Database Express Edition 11gR2.
As I’m working on a Linux machine, you may see the odd indication of this, but the essential steps outlined here will apply to all operating systems.
I’m starting with a clean installation of Oracle XE, so the current version of APEX is 4.0. However, these steps should still be valid when upgrading from any APEX 4.x version.

Incidentally, if you really want to hang on to the default XE Database Management Application, you’ll probably want to have a look at the steps required to back it up prior to upgrade.

Pre-Installation checks

The documentation details several checks. However, as we’re running on XE11g, we don’t have to worry too much about some of them.

If you really want to double-check….

Oracle Database Version

The minimum version required is 11.1.0.7, which XE meets, as we can see with the following query in SQL*Plus :

select banner
from v$version
where banner like 'Oracle Database%';

Oracle Database 11g Express Edition Release - 64bit Production

Oracle XML DB

That’s there as well, by default…

select comp_name, version, status
from dba_registry
where comp_id = 'XDB';

COMP_NAME                      VERSION                        STATUS
------------------------------ ------------------------------ --------------------------------------------
Oracle XML Database                                           VALID

Web Listener Requirements

Oracle XE ships with the Embedded PL/SQL Gateway by default.
Provided you can hit the Database Home Page (e.g. by clicking on the Getting Started With Oracle Database 11g Express Edition desktop shortcut), you’re good to go.

There are some pre-requisites that you will need to verify.

Memory Target Setting

Once again, on a fresh XE11g install, you shouldn’t have any problems.
In memory terms, the smallest machine I’ve installed XE11g on had a total of 1GB RAM. Even in an environment as constrained as this, the MEMORY_TARGET should still meet the minimum requirement of 300MB.

To check, login to SQL*Plus and ….

SQL> show parameter memory_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
memory_target                        big integer 1G
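If, on a very small machine, MEMORY_TARGET does come back below the 300MB minimum, it can usually be raised (assuming MEMORY_MAX_TARGET and the RAM on the box allow it) with something along these lines :

```sql
alter system set memory_target = 300M scope = spfile;
```

…followed by a database restart for the new value to take effect.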

Space Requirements

The easy bit first. If you’re planning to download the English Language Only version of APEX5, you’ll need 250MB of space on disk.
If you’re going for the full version, this goes up to 630 MB.

At this point, the installation instructions start talking about the “Oracle Application Express tablespace”.
This is simply the default tablespace for the schema that will be the APEX owner. Once the installation happens, this will be a schema called APEX_050000. It will be created with SYSAUX as its default tablespace. This is exactly the same as for the existing APEX_040000 user that shipped with XE11g. Incidentally, we’ll also need to know the user’s temporary tablespace for the installation, so we may as well verify both of them now…

SQL> select default_tablespace, temporary_tablespace
  2  from dba_users
  3  where username = 'APEX_040000';

DEFAULT_TABLESPACE             TEMPORARY_TABLESPACE
------------------------------ ------------------------------
SYSAUX                         TEMP


The amount of space required in SYSAUX is 220MB plus 60MB per additional language installed.
There is also a requirement for 100MB in the SYSTEM tablespace.

Working out how much space is available isn’t entirely straightforward. However, we can get most of the way with the following query :

select tablespace_name, file_name,
  (maxbytes - bytes) / 1024/1024 as "Available Space MB",
  autoextensible
from dba_data_files
where tablespace_name in ('SYSAUX', 'SYSTEM');

TABLESPACE_NAME  FILE_NAME                             Available Space MB AUT
---------------- ------------------------------------- ------------------ ---
SYSAUX           /u01/app/oracle/oradata/XE/sysaux.dbf         31677.9844 YES
SYSTEM           /u01/app/oracle/oradata/XE/system.dbf                150 YES


If your query returns an Available Space MB figure less than the requirements, don’t worry too much.
Provided the tablespace is autoextensible and there is enough space on disk, it will automatically grow as more space is needed.
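The figures above only show the headroom each datafile has before it hits MAXBYTES. As a supplementary check, the standard DBA_FREE_SPACE view will tell you how much free space already exists inside the currently allocated datafiles :

```sql
select tablespace_name,
  round(sum(bytes) / 1024/1024) as "Free MB"
from dba_free_space
where tablespace_name in ('SYSAUX', 'SYSTEM')
group by tablespace_name;
```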

Browser Version

The minimum requirements for Web browsers are :

  • Firefox version 35
  • Chrome version 40
  • Safari version 7
  • IE version 9

Getting APEX5

Right, once you’re happy with the pre-requisite steps, head over to the OTN APEX5 download page and download your required version.
This will be one of :

  • Oracle Application Express 5.0 – All languages
  • Oracle Application Express 5.0 – English language only

I’ve gone for the English language only version.

As mentioned previously, the Download Page does state that :

“Application Express 5.0 can also be used with Oracle Database 11g Express Edition (XE), but is supported only through the OTN discussion forum, and not through Oracle Support Services.”

However, the installation instructions page it links to has yet to be updated for APEX5 at the time of writing.

Anyway, I now have the downloaded zip file sitting in my Downloads directory.

As I’m on Linux, I’m going to unzip and deploy this to the ORACLE_BASE directory (/u01/app/oracle).
To avoid any issues with file permissions, I’ll do this as the oracle OS user.

I should point out that it doesn’t really matter where you deploy the files to. Also, you don’t have to be oracle to do this.
I’ve just done it this way to keep things simple.

cd $HOME/Downloads
sudo su oracle
unzip -d /u01/app/oracle

You’ll then see something like …

  inflating: /u01/app/oracle/apex/core/template.plb  
  inflating: /u01/app/oracle/apex/core/dev_grants.sql  
  inflating: /u01/app/oracle/apex/apxsqler.sql  
  inflating: /u01/app/oracle/apex/apxprereq.sql  
  inflating: /u01/app/oracle/apex/apxupgrd.sql  
  inflating: /u01/app/oracle/apex/apxconf.sql  
  inflating: /u01/app/oracle/apex/coreins5.sql  
  inflating: /u01/app/oracle/apex/apxdvins.sql  
  inflating: /u01/app/oracle/apex/apxchpwd.sql  
 extracting: /u01/app/oracle/apex/apxexit.sql  
  inflating: /u01/app/oracle/apex/catapx.sql  
  inflating: /u01/app/oracle/apex/apxe102.sql  

After that, you should have a sub-directory called apex under the directory you unzipped into.
NOTE – you can stop being the oracle user now.


A bit of housekeeping to start with – we need to make sure that the APEX_PUBLIC_USER database account is unlocked :

select account_status
from dba_users
where username = 'APEX_PUBLIC_USER';

If the account_status is LOCKED then…

alter user apex_public_user account unlock;

User altered.

select account_status from dba_users where username = 'APEX_PUBLIC_USER';



NOTE – strictly speaking, you should also perform this check for the ANONYMOUS user. However, if Oracle XE is newly installed, or if you’re running the Embedded PL/SQL Gateway, it should be unlocked.
If you want to satisfy yourself that this is, in fact, the case :

select account_status
from dba_users
where username = 'ANONYMOUS';

Loading APEX5 into the database

As of APEX5, we now have the option of installing just the APEX runtime…but where’s the fun in that ?
We want the full-blown development environment…

The script we need to run to do the installation – apexins.sql – takes four parameters :

  • the default tablespace of the APEX owner schema
  • the default tablespace of the FLOWS_FILES schema
  • a temporary tablespace in the database
  • a virtual directory for APEX images

We already know that the default tablespace for the APEX owner is SYSAUX.
We also know that the temporary tablespace is called TEMP.
As for the FLOWS_FILES schema…

SQL> select default_tablespace
  2  from dba_users
  3  where username = 'FLOWS_FILES';

As for the virtual directory – “/i/” always seems to work.

Now, change directory to the apex directory you’ve created as part of the unzip step, and connect to the database as sys as sysdba.

cd /u01/app/oracle/apex
sqlplus sys as sysdba

…and run the script…

@apexins.sql SYSAUX SYSAUX TEMP /i/

After several minutes’ worth of messages whizzing up your screen you’ll get :

PL/SQL procedure successfully completed.
-- Now beginning upgrade. This will take several minutes.-------
-- Ensuring template names are unique -------

…finally, you’ll get…

Thank you for installing Oracle Application Express
Oracle Application Express is installed in the APEX_050000 schema.
The structure of the link to the Application Express administration services is as follows:
http://host:port/pls/apex/apex_admin (Oracle HTTP Server with mod_plsql)
http://host:port/apex/apex_admin     (Oracle XML DB HTTP listener with the embedded PL/SQL gateway)
http://host:port/apex/apex_admin     (Oracle REST Data Services)
The structure of the link to the Application Express development interface is as follows:
http://host:port/pls/apex (Oracle HTTP Server with mod_plsql)
http://host:port/apex     (Oracle XML DB HTTP listener with the embedded PL/SQL gateway)
http://host:port/apex     (Oracle REST Data Services)
PL/SQL procedure successfully completed.
1 row selected.
Disconnected from Oracle Database 11g Express Edition Release - 64bit Production
mike@mike-Aspire-E3-112 /u01/app/oracle/apex $

Configuration

Re-setting the ADMIN password

The next step is to set the APEX ADMIN password. Note that, even if you’ve already done this for the previously installed APEX version, you’ll need to do it again here, using the apxchpwd.sql script that’s shipped with this version of APEX.
Also, despite any configuration changes you may have made to the APEX password complexity rules, the password you set will need to conform to the following :

  • Password must contain at least 6 characters.
  • Password must contain at least one numeric character (0123456789).
  • Password must contain at least one punctuation character (!"#$%&()*+,-/:;?_).
  • Password must contain at least one upper-case alphabetic character.
  • Password must not contain username.

Bearing this in mind, connect to the database again as SYS AS SYSDBA, run the apxchpwd.sql script, and you’ll be prompted as follows….


This script can be used to change the password of an Application Express
instance administrator. If the user does not yet exist, a user record will be
created.

Enter the administrator's username [ADMIN]
User "ADMIN" exists.
Enter ADMIN's email [ADMIN]
Enter ADMIN's password []
Changed password of instance administrator ADMIN.

Load Images

One final step – we need to load the images.
As we’re running the Embedded PL/SQL Gateway, we’ll need to use the apex_epg_config.sql script.
This script takes, as a parameter, the name of the directory that you’ve extracted the apex zip into – i.e. without the /apex directory itself. As you can see from the output below, it does take a while (8 minutes in this case) :

SQL> @apex_epg_config.sql /u01/app/oracle

. Loading images directory: /u01/app/oracle/apex/images
timing for: Load Images
Elapsed: 00:08:00.92

Post Installation Steps

Just before we can go and play with our shiny new APEX version, we need to do a little tidying.
First of all, confirm the port that the PL/SQL Gateway is listening on :

select dbms_xdb.gethttpport
from dual;
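Changing the port isn’t part of the upgrade, but if 8080 clashes with something else on your machine, the gateway can be moved to another port using the matching setter in DBMS_XDB (8081 here is just an example) :

```sql
exec dbms_xdb.sethttpport(8081);
```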

Then, finally, shutdown and re-start the database.
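Still connected as SYS AS SYSDBA, the bounce itself is simply :

```sql
shutdown immediate
startup
```

Alternatively, on Linux, the init script that ships with XE will do the same job with sudo /etc/init.d/oracle-xe restart.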

Once it comes back up, point your browser to :

http://host:port/apex

…where host is your database server and port is the port number returned by the query above.

Login as user ADMIN with the password that you set when you ran apxchpwd.sql. If all goes well, you should now see…


It may be raining outside, but at least you now have a cosy APEX5 installation to play with.

Filed under: APEX, Oracle Tagged: APEX5 on Oracle XE 11g, apexins.sql, apex_epg_config.sql, apxchpwd.sql, Oracle Application Express Tablespace

Worth Considering: Students can have their own perspectives on edtech initiatives

Michael Feldstein - Sun, 2015-05-24 15:06

By Phil Hill

Triggered by Friday’s article on e-Literate TV, there have been some very interesting conversations both in the Chronicle comment thread and on the e-Literate TV site. The most, um, intense conversations have centered on the application of self-regulated learning (SRL) in combination with adaptive software (ALEKS) to redesign a remedial math course at Essex County College. Michael has been wading in very deep waters in the comment threads, trying to emphasize variations of the following point.

But that debate should be in the context of what’s actually happening in real classrooms with real students, what the educational results are, and what the teachers and students involved think of their experiences.

Right now, the “sides” are having a fight–it’s not really a debate because the sides aren’t really talking to each other–in near total absence of any rational, educator-evaluated, evidence-based conversation about what these approaches are good for. One side says they will “fix” a “broken” education system, while the other side says they will “destroy” the education system. Well, what are the students saying?

One key theme coming through from comments at the Chronicle is what I perceive as an unhealthy cynicism that prevents many people from listening to students and faculty on the front lines (the ones taking redesigned courses) on their own merits. Michael called out this situation in the same comment:

What bothers me is the seemingly complete lack of interest among the commenters in this thread about actually hearing what these teachers and students have to say, and the disregard for the value of their perspectives. It is possible to raise legitimate concerns about techno-solutionism, anti-labor practices, and other serious abuses while simultaneously acknowledging that so-called “personalized learning” approaches can have real educational value when properly applied in the appropriate context by competent and concerned educators and serious students.

One of our primary goals for e-Literate TV is to give additional access to those on the front lines, thus allowing debates and conversations about the role of ed tech and personalized learning approaches. However, it is important to recognize that students can have their own perspectives and are not just robots who are told what to say and do. Consider the following panel discussion with students. To me, the students are quite well-spoken and have real insights.

Sade: A typical day is, like, you basically come in—you go and you log on and you do your ALEKS. You do it at your own pace. Every individual works at their own pace. That’s why I like it. Because some people are ahead, and if you’re in a typical, a regular class, then you have to go with the pace of everybody else. Even if you don’t understand, you have to be—you have to try to catch up. Here, you work at your own pace.

Viviane: It’s been a very good experience for basically the same reasons. Where you just sit and you work and if you can solve 10 problems in one hour, it’s better for you if you keep working at your own pace.

And there’s also—the professor that helps you, or you can even bother one of your classmates and say, “Hey, can you help me out over here with this problem?” or something like that. I mean it’s—I feel as if it’s a very interactive and open classroom.

As per other classes, I don’t think that a regular math class would be able—I mean you wouldn’t be able to sit and ask another classmate for help or anything like that. You would have to just wait for your professor.

Most students we talked to appreciated the self-paced nature of the lab portion (working on the computers emporium-style, with faculty roaming the room for one-on-one support), but it is very clear that the technology itself was only one component of the solution. Students are reflecting back that it is the combination of self-paced design and interactive support that is critical to success. Not only that, but note how students value peer support – students helping students. That design element of courses is often overlooked.

In another segment, students explored this concept in more depth with an additional element of ownership of the learning process.

Phil: Most of the students we talked to seem to have internalized the lessons of self-regulated learning and feel empowered to learn.

Sade: It’s really good because, for example, say I’m doing a topic, and I’m stalling. Vivian is faster than I am. I could work by my own pace and then it’s a professor there that I could raise my hand. “Excuse me. I don’t understand this. Could you help me with it?”—because everybody learns at their own pace. Everybody learns at their own pace.

Khalid: Yeah, we are typically just sitting down on the computer screen, but we’re sitting next to our classmates, so if there’s a problem on it, I could ask my classmate. Like, that’s actually the best thing about ALEKS, is that there’s an explain button right there.

We would do well to listen to students more often, judging input on their own merits.

Update: Fixed first video link.

The post Worth Considering: Students can have their own perspectives on edtech initiatives appeared first on e-Literate.

Deploying Oracle BI Mobile HD within Good, Oracle OMSS and Other Secure MDM Containers

Rittman Mead Consulting - Sun, 2015-05-24 10:53

A few months ago ODTUG announced their ODTUG Mobile Day in Utrecht, Netherlands and asked if I’d like to put forward an abstract for a presentation. Something that’s been on my mind for a while is the issues some of our customers have been reporting trying to run Oracle BI Mobile HD with the Good Mobile Device Management toolkit, so I thought this might be a good opportunity to look into the topic in more detail and try to come up with some recommendations on how to get this working. As the ODTUG Mobile Day was for all Oracle developers, not just BI ones, I thought it’d be a good opportunity to look into iOS and Android development in general as it relates to Oracle, and in particular what’s involved in deploying into these “container” environments that many large enterprise customers insist on when deploying mobile applications. The presentation is available on Slideshare if you want to download it, but I thought I’d expand on some of the concepts and tips in a separate blog post as it’s all actually quite interesting.

As a bit of background, OBIEE itself can of course display its standard dashboards through mobile web browsers, with graphics and charts switching to iOS and Android-friendly formats rather than trying to render in Flash. But standard dashboards and analyses look too small and are hard to work with in mobile browsers, so using this approach to mobile-enable your dashboards isn’t generally recommended. Instead of course, as most readers will already be aware, Oracle released a number of mobile clients for OBIEE over the years, with Oracle BI Mobile and Oracle BI Mobile App Designer being the latest, suiting most end-user and customer needs for mobile access to their business analytics data.


Oracle BI Mobile App Designer apps can be deployed using SSL and SSO, whilst BI Mobile HD takes advantage of iOS and Android platform security features such as password storage in encrypted on-device keystores and remote-wipe of devices. But for some organizations that need higher levels of device control and data security, this platform-level security isn’t enough as:

  • Unknown and unsecured devices accessing their networks (for example,via VPN) is considered an unacceptable security risk
  • For Android devices in-particular, there is known malware out on the internet that can compromise “rooted” devices
  • It’s possible to cut-and-paste sensitive information from BI Mobile applications into other applications
  • Users don’t always set secure passwords, and lost or stolen devices can potentially expose BI data stored on the device to unauthorised users

To address these issues, a number of Mobile Device Management vendors provide enterprise-level solutions, typically by having the customer specially sign applications they wish to deploy to indicate they’ve been authorised for use, and by deploying those applications within managed containers that keep them separate from the general mobile apps on the users’ device. As well as specially signed and deployed applications such as BI Mobile HD, these MDM suites also typically provide secure and containerised web browsers and email clients, both of which need to work with Oracle BI and Oracle BI Mobile App Designer. Vendors in this space include Good Technology and MobileIron, and Oracle have their own MDM solution after they acquired Bitzer Mobile back in 2013. So how do these solutions work, why do some customers have trouble getting them working, and what’s the “preferred” approach if a customer asks you to just get it working for them?

To start with the simplest approach, the tested and recommended way to run Oracle BI Mobile HD, and Oracle BI Mobile App Designer through an MDM-supplied web browser, is to use Oracle Mobile Security Suite (OMSS), based on the technology Oracle acquired from Bitzer Mobile. OMSS is a complete platform for deploying mobile apps in a secure, managed container environment, and takes customer-signed iOS and Android applications and deploys them onto enterprise users’ mobile devices using a centralized console and management service.


To support deployment of Oracle BI Mobile into OMSS and other vendor MDM solutions, Oracle have made an un-packed and un-signed version of Oracle BI Mobile available for download on OTN as “Oracle Business Intelligence Mobile Security Toolkit”, and taking the iOS version as an example you can then compile this in Apple Xcode and sign it for deployment in your organisation’s OMSS installation. For some organisations this can be a bit of a challenge as they don’t generally use Macs and don’t do iOS development, but for Rittman Mead testing this out was pretty straightforward, as we all use Macs for our consulting work and some of us play around with Xcode and iOS development in our spare time. What can also be a challenge is setting up an iOS Developer Account so that you can sign the BI Mobile HD application with your organization’s own certificate, but we set up such an account a couple of years ago and were able to get it all set up with just a couple of hours’ work. What you will need then to get this running (apart from the OMSS part that I’ll cover in a moment) is the following bits of hardware and software, in this case for the iOS version of BI Mobile HD:

  • An Apple Mac that can run recent versions of Xcode
  • An Apple Developer account that can develop and sign iOS applications, in your organization’s name and not an individual developer’s – note that you generally need to be registered with Dun and Bradstreet’s business verification service to set this up, which can take a few weeks if your entry is out-of-date or not matching your current company details
  • Oracle Business Intelligence Mobile Security Toolkit
  • Apple Xcode


Then it’s a case of setting up a new project in Xcode, selecting Single View Application and Universal as the device type, entering your project and organization’s information, and then merging the Oracle Mobile Security Toolkit’s project files with the empty project you just created and setting any other project properties.


At this point you should be able to run the application within the Xcode project and test it using Xcode’s built-in iPhone and iPad simulators, so that you can check all the features still work as expected.


Now comes the point where you sign the app you’ve just created so that you can then deploy it into OMSS for distribution to your users. This point is important to security-conscious customers as it’s now the customer, rather than Oracle, that has signed the app, and you can be much more certain that no malware or other backdoors have been introduced into the code before compiling and distribution. Signing of the app takes place within Xcode, with a similar process being used for the Android version of BI Mobile HD within the Android Studio IDE.


The resulting compiled .app and .apk files are then uploaded into OMSS’s catalog for distribution to users, with provisioning taking place using emails sent to corporate users that provide download links to these containerised, managed and secured versions of Oracle’s BI Mobile HD app.


So all of this looks pretty straightforward, albeit maybe a bit more complicated for organisations that don’t use Macs and don’t generally develop iOS applications – but Oracle partners such as ourselves can help with this if you’d like to offload this part of the process to a specialist team. Where things do sometimes get a bit more complicated is when other MDM vendor technologies are used, particularly Good Dynamics MDM solution that works in a slightly different way to Oracle Mobile Security Suite.

Unlike OMSS’s approach, where you compile and sign BI Mobile HD within Apple’s own Xcode IDE, Good requires you to build and export the unsigned Oracle Mobile Security Toolkit project in Xcode as an .ipa file, and then copy it, along with your iOS Developer Program certificate and the certificate password, into Good’s own Good Control Management Console. There your application is combined with Good’s security libraries, signed with your certificate password and deployed as a “wrapped application”, to then be distributed to users using a similar method to the one OMSS takes. However, all mobile application access then goes through a Good proxy server, typically placed outside the main company network, which provides secure communications between these managed applications running outside of the company firewall and that company’s secure servers – in this case, OBIEE11g.


There’s nothing inherently wrong with this compared to how OMSS does it, and organisations often pick Good Dynamics over other MDM solutions because of the extra functionality Good’s approach provides through the insertion of their security SDK into your mobile application; but it’s when organisations take advantage of these features to provide custom integration with their own security platform that problems can sometimes occur.

For example, a common issue we hear about when deploying Oracle BI Mobile HD using Good is when the customer tries to integrate their SSO solution into the user authentication process. Good’s security SDK makes it possible to intercept user login events and route the request to the customer SSO server, but it’s essential that control is passed back to the BI Server as if this re-routing hadn’t taken place, returning the authentication details the BI Server expects; if the custom login process doesn’t quite do this, then the authentication process fails. Another issue we heard about recently involved recent versions of iOS (iOS 7) deprecating synchronous API calls that BI Mobile HD was still making; in this case Oracle supplied a patch and all calls are now made asynchronously, but until then deployment in the Good environment mysteriously failed.

What makes these issues doubly tricky to identify and resolve are the restrictions most security-conscious enterprise customers place around disclosing details of their network setup, to the point where they often aren’t allowed to tell partners or Oracle Support any of the details they’ll need to work out how traffic passes around the network and over to OBIEE and the Good MDM environment. What troubleshooting often comes down to, then, is good old-fashioned packet-sniffing and investigation by someone familiar with OBIEE (in particular, the BI Server’s authentication process), Good’s security SDK and the customer’s network setup.


So given all of this, what is our recommendation for customers looking to implement mobile OBIEE11g clients within an MDM container solution? My advice would be: where the customer doesn’t currently have an MDM solution and wants the easiest way to deploy Oracle BI Mobile within a secure container, go for the Oracle Mobile Security Suite option – it’s what Oracle support and test for, and as an Oracle solution it’s obviously going to be easier to go down this route than to troubleshoot another vendor’s solution if things go wrong. But if an organization is insisting on deploying Oracle BI Mobile in a secure container, it’s unlikely this is the first time they’ve had to do it, so there’s most probably already an MDM solution in place, and it’s likely to be from Good.

In this case, first of all remember that it should work and the two products are compatible; what you need to be aware of, though, is the correct way of linking BI Mobile HD in this environment to your corporate SSO and security platform, and you should work together with your network security team to quickly identify where the issue is coming from if things don't work first time. Engaging with an experienced OBIEE partner such as Rittman Mead can of course help, and Oracle's own product development and support teams have most probably seen most of the issues that can come up and can help if things get tricky. The team here at Rittman Mead have several customers successfully using Good and other vendors' MDM solutions along with Oracle BI Mobile, and of course we can help with the app signing and deployment process if your organization doesn't usually work with Macs or have experience with Xcode and Oracle Mobile Security Toolkit.

Finally, thanks to Chris Redgrave from the Rittman Mead team, and Oracle’s Matt Milella and Jacques Vigeant who helped me with the background research for this article and the ODTUG BI Mobile Day presentation. As I mentioned earlier the presentation from the ODTUG event is available on Slideshare, and there’s also walkthroughs for deploying BI Mobile HD within Oracle OMSS on iOS and Android on the OTN website.

Categories: BI & Warehousing

Parallel Execution -- 6 Parallel DML Restrictions

Hemant K Chitale - Sun, 2015-05-24 09:18
Here's a link to the 11.2 documentation on Restrictions on Parallel DML


Categories: DBA Blogs

Migrating the XE Database Management Application to a new version of APEX

The Anti-Kyte - Sun, 2015-05-24 08:31

I must confess to a weakness when it comes to throwing stuff away.
This is particularly true of techie stuff.
Whilst I have occasionally cannibalised an old machine for parts, there is a regrettably large part of the garage reserved for “vintage” hardware that I might just need at some point.

I’ve recently added to this hoard. I’ve finally gone and got a replacement for my ageing netbook.
As part of the configuration of the new machine, I’ve installed Oracle XE again.

I’m now poised to attempt an upgrade to a shiny new version of APEX.

First of all though, if you are similarly keen to upgrade from the venerable APEX 4.0, which XE ships with, to something more modern, your hoarding instincts may kick-in when it comes to the default Database Management Application.

Once you upgrade APEX 4 to any subsequent version, this application “disappears”.
The functionality it offers is readily available through SQL Developer (or indeed, any of the major Oracle database IDEs).
Alternatively, it’s a fairly simple matter to come up with your own, improved version.

Not convinced ? Oh well, I suppose we’d better save it for re-deployment into your new APEX environment.

What I’m going to cover here is :

  • Backing up the default XE ADMIN application
  • Tweaking the APEX export file
  • Restoring the XE ADMIN application

I’ve tested this process against both APEX 4.2 and APEX 5.0 running on Oracle XE 11g.
In the steps that follow, I’m assuming that you’re upgrading to APEX 5.0.
The main difference here is the APEX owning schema.
For APEX 4.2 the owner is APEX_040200; in APEX 5.0 it’s APEX_050000.
As the APEX upgrade takes place entirely within the database, the steps that follow are platform independent.

Incidentally, if you’re wondering exactly how you would upgrade XE11g to this APEX version, details will follow in my next post.

NOTE – I’m assuming here that you’re doing this on your own personal playground 11GXE database and have therefore not
worried too much about any security implications for some of the activities detailed below.

Right, let’s get started…

Backing up the XE ADMIN application

The script below uses the APEX 4 PL/SQL API to create an export of the application. This is simply a SQL file on which we’ll need to do some light hacking before running it against the database once the APEX upgrade is completed.
The script ( saved as export_xe_app.sql) is :

	-- Must be logged on as SYSTEM
	-- Need to grant execute on UTL_FILE to SYSTEM
	declare
		l_fh utl_file.file_type;
		l_buffer varchar2(32767);
		l_amount pls_integer := 32767;
		l_clob clob;
		l_length pls_integer;
		l_pos pls_integer := 1;
	begin
		-- Get the source code for the XE Admin APEX application
		l_clob := wwv_flow_utilities.export_application_to_clob( p_application_id => 4950);
		l_length := dbms_lob.getlength(l_clob);
		-- Now write it to a file, one 32K chunk at a time
		l_fh := utl_file.fopen( 'DATA_PUMP_DIR', 'f4950.sql', 'w');
		loop
			exit when l_pos > l_length;
			l_buffer := substr( l_clob, l_pos, l_amount);
			utl_file.put(l_fh, l_buffer);
			l_pos := l_pos + length(l_buffer);
		end loop;
		utl_file.fclose(l_fh);
	end;
	/

There are a couple of points to note prior to running this script.
The first is that it must be run as SYSTEM as this is the parsing schema for this application.
The second is that you will need to grant execute on UTL_FILE to SYSTEM.
So connect as SYS as sysdba and..

grant execute on utl_file to system;

Now that’s done, connect as system and execute the script.

Once this is done, you should have a file called f4950.sql in the DATA_PUMP_DIR.
I’ve used this directory as it’s created by default when XE is installed. If you don’t know where this maps to on disk, then you can find it by running the following query :

select directory_path
from dba_directories
where directory_name = 'DATA_PUMP_DIR';

In my case (running on Linux), this returns:


Tweaking the APEX Export

There are a couple of things that we need to change in the export script.
The first is the call to the USER_IS_DBA function.

This function was moved from the WWV_FLOW_LOGIN package to WWV_FLOW_SECURITY in APEX 4.2, where (as at APEX 5.0) it still resides.

Therefore, we need to amend line number 218 from …

  p_scheme=>'return nvl(wwv_flow.g_flow_step_id,-1) in (1,101)'||chr(10)||
'       or wwv_flow_login.user_is_dba(p_username => :APP_USER);',

…to…

  p_scheme=>'return nvl(wwv_flow.g_flow_step_id,-1) in (1,101)'||chr(10)||
'       or wwv_flow_security.user_is_dba(p_username => :APP_USER);',

The other thing to change is the name of the APEX owner.
In my case ( migrating to APEX5), this will change from APEX_040000 to APEX_050000.

So, line number 142 in the file changes from…

  p_owner => nvl(wwv_flow_application_install.get_schema,'APEX_040000'),

…to…

  p_owner => nvl(wwv_flow_application_install.get_schema,'APEX_050000'),
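If you'd rather script these two substitutions than edit the export file by hand, here's a minimal sketch in Python (my own, not from the original post):

```python
# Sketch: apply the two edits described above (the USER_IS_DBA package move
# and the APEX owning schema) to the text of an APEX export file.

def patch_apex_export(text):
    """Repoint USER_IS_DBA and the APEX owner for an APEX 5.0 target."""
    text = text.replace("wwv_flow_login.user_is_dba",
                        "wwv_flow_security.user_is_dba")
    text = text.replace("'APEX_040000'", "'APEX_050000'")
    return text

# Typical use would be to read f4950.sql, patch it and write it back:
#   with open("f4950.sql") as f:
#       src =
#   with open("f4950.sql", "w") as f:
#       f.write(patch_apex_export(src))
line = "  p_owner => nvl(wwv_flow_application_install.get_schema,'APEX_040000'),"
patched = patch_apex_export(line)
```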

Deploying the application to the new APEX version

Before running our application export against the new APEX repository, we need to grant select on the relevant views directly to the APEX owner.
NOTE – I’m indebted to this article by Jeff Eberhard for this particular step:

grant select on dba_temp_files to APEX_050000;
grant select on v_$temp_extent_pool to APEX_050000;
grant select on v_$temp_space_header to APEX_050000;
grant select on v_$system_parameter to APEX_050000;
grant select on v_$session to APEX_050000;

Now that’s done, we can simply connect as SYSTEM and run our import.
The output should look like this (ignore the two spurious errors at the start):

SQL> @f4950.sql
SP2-0734: unknown command beginning "Content-ty..." - rest of line ignored.
SP2-0734: unknown command beginning "Content-le..." - rest of line ignored.
Set Credentials...
Check Compatibility...
API Last Extended:20130101
Your Current Version:20130101
This import is compatible with version: 20100513
COMPATIBLE (You should be able to run this import without issues.)
Set Application ID...
...authorization schemes
...navigation bar entries
...application processes
...application items
...application level computations
...Application Tabs
...Application Parent Tabs
...Shared Lists of values
...Application Trees groups
...PAGE 0: 0
...PAGE 1: Home
...PAGE 2: Storage
...PAGE 4: Sessions
...PAGE 5: Backups
...PAGE 6: Parameters
...PAGE 7: Application Express
...PAGE 9: Tablespace Storage Details
...PAGE 10: Session Details
...PAGE 101: Login
...breadcrumbs templates for application: 4950
......Page template 936879405068865354
......Page template 936880509857865357
......Page template 936881728833865360
......Page template 936882328801865361
...button templates
......Button Template 936883817777865362
......Button Template 936884428414865363
...region templates
......region template 936886425092865365
......region template 936888203598865371
......region template 936889721148865373
......region template 936890925366865374
......region template 936891504925865375
......region template 936892120369865376
...List Templates
......list template 8548028083899041
......list template 936898727314865389 templates template 936900209975865391 template 936902514655865394
...label templates
......label template 936904524832865485
......label template 936904703644865485
...breadcrumb templates
......template 936904805731865485
...popup list of values templates
......template 936905603379865499
...calendar templates
...application themes
......theme 936905710643865499 options used by application 4950
...messages used by application: 4950
...dynamic translations used by application: 4950
...Language Maps for Application 4950
...web services (9iR2 or better)
...shared queries layouts
...authentication schemes
......scheme 936873424775859940

As we have imported the application with the same ID that it had originally (4950), the Getting Started with Oracle Database 11g Express Edition desktop icon should still work in exactly the same way…


NOTE – the first time you click on a tab that requires login, there is a bit of an issue.
Instead of prompting for login credentials, the text “Content-type:text/html; charset=utf8” appears in the top-left of the page.
If you click the tab a second time, you will get prompted for login credentials as expected.

Once you’ve connected, the tabs should work pretty much as usual :




As a word of caution, I’d be wary of using the Application Express tab for APEX admin on the database.
Instead, I’d use the APEX URL specific to the installed APEX version for this purpose.

Right, off to sort out the clutter in the garage.

Filed under: APEX, Oracle, PL/SQL, SQL Tagged: dba_directories, UTL_FILE, wwv_flow_utilities.export_application_to_clob

New Oracle Magazine article on Oracle BI Cloud Service

Rittman Mead Consulting - Sun, 2015-05-24 07:42

The May/June 2015 edition of Oracle Magazine is now out, and my Business Intelligence article in this edition is on Oracle BI Cloud Service (BICS). In “Upload, Model, Analyze and Report” I focus on the “departmental power-user” use-case where someone with a small team wants to share data and dashboards with others in the department, is familiar with OBIEE but wants to get something up-and-running quickly without having to include IT or the formal development process. By just uploading a spreadsheet of data and quickly modeling it into a star-schema using BICS’s simple-to-use web-based tools, you can create reports that can be shared with others in your team using OBIEE’s familiar dashboard interface.

Rittman Mead offer a number of services around Oracle BI Cloud Service and now have our own BICS “pod” for demonstration and evaluation purposes. Now that Oracle have released Visual Analyser early for BICS, we’re expecting a lot of interest in and demand for services, support and training around Oracle’s cloud version of OBIEE, so if you’re interested in moving part of your OBIEE estate into the cloud, or you’re a departmental BI lead looking to run OBIEE within your department without the need to get IT involved, drop us a line at and we’ll be pleased to help.

Categories: BI & Warehousing

ADF Goodies - Conveyor Belt Component and Alta UI

Andrejus Baranovski - Sat, 2015-05-23 09:59
It doesn't seem to have been announced, but the newly released ADF ships with Alta UI support. All you need to do is set the alta skin name in the trinidad config file. This enables applications running on the ADF 11g platform to leverage the new Oracle UI layout and to be prepared for an ADF 12c upgrade. Besides Alta UI, there are several new UI components, one of which is the Scrollable Bar (ConveyorBelt) - Displaying Components in a Scrollable Bar. This is a simple but quite useful component - I'm going to demonstrate how it works.

Conveyor Belt can render elements horizontally or vertically. It is a perfect component for presenting key information - the user can select one of the items available on the belt and get more detail fetched. This is how it looks - a conveyor of employee pictures with first/last names, rendered in a vertical belt on the left:

If there are more items in the conveyor belt than can fit into the visible UI area, the user can scroll (right/left and down/up). Here I scroll down and select another employee, and his detail data is displayed in the main section:

As you can see, the UI is displayed using the Alta skin. This is configured in the trinidad file:

Here is the structure for the implemented conveyor belt; items in the conveyor are stamped dynamically through the ADF UI iterator component. JavaScript client/server listeners are responsible for processing item selection and keeping the detail info in sync for the selected employee:

This is the source code structure for the conveyor belt UI implementation, based on the structure above:

Conveyor item selection is processed in the server listener method, and the VO current row is set based on the selected key:

If you want to try it yourself, download sample application -

Another Take on Twilio Signal 2015

Oracle AppsLab - Fri, 2015-05-22 16:21

Editor’s note: Mark (@mvilrokx) and Raymond are at it again. Earlier in the week, they each provided a take on last weekend’s Bay Area Maker Faire, and this week, they both attended Twilio’s (@twilio) first developer conference, Signal. Mark’s take is here; now, it’s Raymond’s turn. Enjoy.

Twilio is no stranger to us at AppsLab. We have embedded Twilio Voice and SMS into applications such as Taleo Interview Evaluations, the IoT call at Maker Faire 2014, the Daily Asteroid report, etc. It is a simple yet powerful way to add genuinely useful communication to some interesting projects.

But I never imagined Twilio was so big that it could host its own conference and draw thousands of enthusiastic attendees.

They have come a long way – at the conference, they announced a slew of new products, some of them timely and empowering. A couple of examples:

  • Twilio Authy – a perfect way to embed two-factor authentication into your next awesome and secure application.
  • Twilio Conference (basic, global, epic) – lets you bypass carrier lock-down and the nasty roaming charges.
  • Twilio Video – a Twilio-flavored WebRTC that lets your application preserve context across voice, text, and video. That’s contextual communication, even when you jump from your application over to a communication channel.
  • Twilio IP Messaging – lets you embed advanced messaging in all types of mobile and web apps.

I think Twilio has played to its strengths and positioned itself really well. They strive to provide composable APIs as building blocks (just like Lego), to make it easy for developers to embed communication capabilities, and to keep things friction-free for users (nothing to install).

In the current world, you pretty much have one app for each thing: one app for ordering pizza, one app for calling a taxi. Magic, a service built on Twilio, helps by letting you just text your desire to a number, and “hopefully” your wish is fulfilled :) That’s frictionless!

Another use case is “Code for America”: users can text a number and get their card balance. Such a quick, easy way to get at key information provides real “accessibility to information”.

And one more use case is the “American Red Cross” for disaster response, where they can form and coordinate ad-hoc groups of volunteers, even when group membership is fluid.

In retrospect, our Taleo Interview Evaluation demo build can be thought of as a very good use case for providing easy access to information and transactions.

With Twilio’s new releases and capabilities, I look forward to building new contextual enterprise applications for easy access and interaction.

Now here is a fun bit:

As usual, the conference gave every attendee a backpack, and this time with a twist: it has littleBits powering an 8×8 LED panel which can be attached to the backpack.


And during the $Bash event, they had cloudBits as prizes to give out. Mark and I were determined to win one, so that we could extend the LED panel display on the backpack to be controlled remotely over the Internet! We found the most efficient way to win points, which was playing the pinball games. We worked together, and of course, we got what we aimed for.


And by the way, I became the champion of the night for pinball games on the floor, by scoring over 430,000 points in one game.
That was a nice surprise to me too - it turns out I have a talent for pinball!

Fluid UI Development Training Available

PeopleSoft Technology Blog - Fri, 2015-05-22 10:56
PeopleSoft's new Fluid user interface rocks.  The response from customers has been enthusiastic, and many would like to learn more about developing with Fluid.  Training is available from Oracle University both in classroom and live virtual training.  Note that Oracle University has scheduled classroom training in Belmont, California to correspond with our Open World conference at the end of October.  If you are planning to attend Open World, you have a great opportunity to attend this valuable training as part of your trip.

Twilio Signal Conference 2015

Oracle AppsLab - Fri, 2015-05-22 08:40

Editor’s note: If you read here, you know we heart Twilio, especially Noel (@noelportugal). Remember the Rock ’em Sock ’em robot build?

This week, Twilio (@twilio) held its first Signal conference, and Raymond and I were there to see what’s new in the world of web-enabled communications and the like.


For those of you not familiar with Twilio, here’s their spiel from their About page:

Twilio powers the future of business communications.  Enabling phones, VoIP, and messaging to be embedded into web, desktop, and mobile software.

For example, they provide REST APIs that can send and receive phone calls and text messages (SMS), allowing you, as a user of their services, to implement these extremely complex features in your applications, whether they are mobile, web or desktop apps, with very little effort. They provide many more features and announced a bunch of new ones at the conference; see their website for more details.

I had no idea that Twilio was as big as it is: there were 2,000 attendees at the conference, and apparently Twilio is the second largest provider of phone numbers in the US, right behind T-Mobile.

The conference started off with a pretty impressive magician’s act in which actual Twilio APIs were used; very original, I thought. It then proceeded with a bunch of keynotes, led by the CEO of Twilio, Jeff Lawson. He stressed the importance of services, comparing them to Lego blocks that, in the right hands, allow you to build anything by composition, just like you would with Lego.

Among the lineup of key speakers was Werner Vogels, CTO of Amazon, who gave a history of how Amazon moved from a monolithic architecture to a more Service Oriented Architecture, then towards microservices, and finally towards an architecture that now aggregates these services into useful components. They had to build an infrastructure to support these changes, which eventually led to what we now know as AWS; a very interesting talk.

One other interesting topic I remember from the opening presentations was Jeff Lawson mentioning that the next big evolution in communication will be for it to become context-aware. That is, rather than having to enter your 17-digit account number on your phone and then identify yourself again and again to each agent you get transferred to, answering some weird question about the street you grew up in, this information should be available when a call gets made, leading to much better quality of service and a much higher throughput of calls.

The rest consisted of product announcements and partners getting to explain how they use Twilio in their business.  We then attended a bunch of sessions, some more interesting than others, I’ll limit myself here to the more interesting ones.


Image from Twilio

I’m a huge fan of ngrok, so I was delighted to attend a session by the maker of this tool, Alan Shreve. It turns out that it was written in Go, and Alan gave a few examples of how this language made it easier to build these types of tools. He also mentioned that rewriting an existing tool in a new language is a great way to learn that language, as you limit the scope and can focus purely on the language itself. He also stressed not to be discouraged if you discover that a tool already exists; competition is a good thing and it validates the business case.

Also very informative was a talk from Guillermo Rauch, the creator of a realtime framework of which I am also a huge fan. The talk didn’t focus on the framework itself, but on the challenges you will face when you start building the kind of realtime applications it enables: conflict resolution, throughput, diffing, etc.
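As a toy illustration of the diffing point (my own sketch, nothing from the talk): a realtime app can keep bandwidth down by computing which keys changed between two state snapshots and sending only those.

```python
# Toy state-diffing sketch: figure out the minimal "patch" (keys to set,
# keys to delete) that turns one snapshot into the next, so only the
# changes travel over the wire.

def diff_state(old, new):
    """Return the set/delete operations that transform `old` into `new`."""
    to_set = {k: v for k, v in new.items() if old.get(k) != v}
    to_delete = [k for k in old if k not in new]
    return {"set": to_set, "delete": to_delete}

before = {"cursor": 10, "user": "mark", "typing": True}
after = {"cursor": 14, "user": "mark"}
patch = diff_state(before, after)
# patch == {"set": {"cursor": 14}, "delete": ["typing"]}
```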

Kate Heddleston gave a talk about one-click deploys for service-oriented architectures, a project she worked on that allows you to deploy, with one click, a fully operational environment, including load balancers, DB servers, etc., on Amazon EC2 using Docker. It seemed like an excellent alternative to the likes of Heroku, and I will definitely check it out in the near future to see if it could be leveraged for our work in the AppsLab.

Probably the most interesting talk of the whole conference, for me at least, was by Neil Mansilla from Runscope about API testing and debugging. He didn’t just give a sales pitch about Runscope but laid out a whole bunch of tools that you can use to test APIs, from Apache Benchmark to Charles and Wireshark. I am definitely going to check out Runscope!

What I took away most from this conference, though, is that APIs are the future: IT infrastructure is turning into APIs (AWS), electronics is turning into APIs (littleBits) and telecommunication is turning into APIs (Twilio, of course, but also switch). I am convinced that enterprise apps will also evolve in this direction, and enterprise APIs will enable developers to compose and integrate easily with other, non-enterprise APIs, allowing them to build new and exciting applications, just as developers started doing with telecommunications when Twilio appeared.

Connecting OBIEE to Hive, HBase and Impala Tables for a DW-Offloading Project

Rittman Mead Consulting - Fri, 2015-05-22 07:28

In two previous posts this week I talked about a client request to offload part of their data warehouse to Hadoop, taking data from a source application and loading it into Hive tables on Hadoop for subsequent reporting by OBIEE 11g. In the first post I talked about hosting the offloaded data warehouse elements on Cloudera Hadoop CDH5.3, how I used Apache Hive and Apache HBase to support insert/update/delete activity on the fact and dimension tables, and how we’d copy the Hive-on-HBase fact table data into optimised Impala tables stored in Parquet files to make sure reports and dashboards ran fast.


In the second post I got into the detail of how we’d keep the Hive-on-HBase tables up-to-date with new and changed data from the source system, using HiveQL bulk-inserts to load up the initial table data and a Python script to handle subsequent inserts, updates and deletes by working directly with the HBase client and the HBase Thrift Server. Where this leaves us, then, is with a set of fact and dimension tables stored as optimised Impala tables and updatable Hive-on-HBase tables; our final step is to connect OBIEE 11g to them and see how they work for reporting.


As I mentioned in another post a week or so ago, the new release of OBIEE 11g supports Cloudera Impala connections from Linux servers to Hadoop, with the Linux Impala drivers shipped by Oracle as part of the Linux download, and the Windows ones used for the Admin Tool workstation downloadable directly from Cloudera. Once you’ve got all the drivers and OBIEE software set up, it’s then just a case of setting up the ODBC connections on the Windows and Linux environments, and you should then be in a position to connect it all up.


On the Impala side, I first need to create a copy of the Hive-on-HBase table I’ve been using to load the fact data into from the source system, after running the invalidate metadata command to refresh Impala’s view of the Hive metastore.

[bigdatalite.localdomain:21000] > invalidate metadata;
[bigdatalite.localdomain:21000] > create table impala_flight_delays
                                > stored as parquet
                                > as select * from hbase_flight_delays;

Next I import the Hive-on-HBase tables and the Impala table through the Impala ODBC connection – even though only one of the tables (the main fact table snapshot copy) was created using Impala, I still get the Impala speed benefit for the other three tables created in Hive (against the HBase source, no less). Once the table metadata is imported into the RPD physical layer, I can then create a business model and subject area as I would do normally, so my final RPD looks like this:


Now it’s just a case of saving the repository online and creating some reports. If you’re using an older version of Impala you may need to disable the setting where a LIMIT clause is needed for every GROUP BY (see the docs for more details, but recent (CDH5+) versions will work fine without this). Something you’ll also need to do back in Impala is compute statistics for each of the tables, like this:

[bigdatalite.localdomain:21000] > compute stats default.impala_flight_delays;
Query: compute stats default.impala_flight_delays
| summary                                 |
| Updated 1 partition(s) and 8 column(s). |
Fetched 1 row(s) in 2.73s
[bigdatalite.localdomain:21000] > show table stats impala_flight_delays;
Query: show table stats impala_flight_delays
| #Rows   | #Files | Size    | Bytes Cached | Format  | Incremental stats |
| 2514141 | 1      | 10.60MB | NOT CACHED   | PARQUET | false             |
Fetched 1 row(s) in 0.01s

Apart from being generic “good practice” and giving the Impala query optimizer better information to form a query plan with, computing statistics matters because you might hit the error below in OBIEE if you don’t do it.


If you do hit this error, go back to the Impala Shell or Hue and compute statistics, and it should go away next time. Then, finally, you can go and create some analyses and dashboards, and you should find the queries run fine against the various tables in Hadoop; moreover, the response time is excellent if you use Impala as the main query engine.


I did a fair bit of testing of OBIEE running against Cloudera Impala, and my findings were that all of the main analysis features worked (prompts, hierarchies, totals and subtotals, etc.) and the response time was comparable with a well-tuned data warehouse, maybe even Exalytics-level speed. If you take a look at the nqquery.log file for the Impala SQL queries OBIEE is sending to Impala, you can see they get fairly complex (which is good, as I didn’t hit any errors when running the dashboards), and you can also see where the BI Server takes a simpler approach to creating subtotals, nested queries etc. compared to the GROUP BY … GROUPING SETS that you get when using a full Oracle database.

select D1.c1 as c1,
     D1.c2 as c2,
     D1.c3 as c3,
     D1.c4 as c4,
     D1.c5 as c5,
     D1.c6 as c6,
     D1.c7 as c7,
     D1.c8 as c8,
     D1.c9 as c9,
     D1.c10 as c10,
     D1.c11 as c11,
     D1.c12 as c12
from
     (select 0 as c1,
               D1.c3 as c2,
               substring(cast(NULL as  STRING ), 1, 1 ) as c3,
               substring(cast(NULL as  STRING ), 1, 1 ) as c4,
               substring(cast(NULL as  STRING ), 1, 1 ) as c5,
               'All USA' as c6,
               substring(cast(NULL as  STRING ), 1, 1 ) as c7,
               1 as c8,
               substring(cast(NULL as  STRING ), 1, 1 ) as c9,
               substring(cast(NULL as  STRING ), 1, 1 ) as c10,
               D1.c2 as c11,
                D1.c1 as c12
           from
                (select sum(T44037.late) as c1,
                         sum( as c2,
                          T43925.carrier_desc as c3
                     from
                               hbase_carriers T43925 inner join 
                              impala_flight_delays T44037 On (T43925.key = T44037.carrier)
                    where  ( T43925.carrier_desc = 'American Airlines Inc.' or T43925.carrier_desc = 'Delta Air Lines Inc.' or T43925.carrier_desc = 'Southwest Airlines Co.' or T43925.carrier_desc = 'Spirit Air Lines' or T43925.carrier_desc = 'Virgin America' ) 
                    group by T43925.carrier_desc
               ) D1
          union all
          select 1 as c1,
               D1.c3 as c2,
               substring(cast(NULL as  STRING ), 1, 1 ) as c3,
               substring(cast(NULL as  STRING ), 1, 1 ) as c4,
               D1.c4 as c5,
               'All USA' as c6,
               substring(cast(NULL as  STRING ), 1, 1 ) as c7,
               1 as c8,
               substring(cast(NULL as  STRING ), 1, 1 ) as c9,
               D1.c4 as c10,
               D1.c2 as c11,
                D1.c1 as c12
           from
                (select sum(T44037.late) as c1,
                         sum( as c2,
                         T43925.carrier_desc as c3,
                          T43928.dest_state as c4
                     from
                                    hbase_carriers T43925 inner join 
                                   impala_flight_delays T44037 On (T43925.key = T44037.carrier) inner join 
                              hbase_geog_dest T43928 On (T43928.key = T44037.dest)
                    where  ( T43925.carrier_desc = 'American Airlines Inc.' or T43925.carrier_desc = 'Delta Air Lines Inc.' or T43925.carrier_desc = 'Southwest Airlines Co.' or T43925.carrier_desc = 'Spirit Air Lines' or T43925.carrier_desc = 'Virgin America' ) 
                    group by T43925.carrier_desc, T43928.dest_state
               ) D1
          union all
          select 2 as c1,
               D1.c3 as c2,
               substring(cast(NULL as  STRING ), 1, 1 ) as c3,
               D1.c4 as c4,
               D1.c5 as c5,
               'All USA' as c6,
               substring(cast(NULL as  STRING ), 1, 1 ) as c7,
               1 as c8,
               D1.c4 as c9,
               D1.c5 as c10,
               D1.c2 as c11,
                D1.c1 as c12
           from
                (select sum(T44037.late) as c1,
                         sum( as c2,
                         T43925.carrier_desc as c3,
                         T43928.dest_city as c4,
                          T43928.dest_state as c5
                     from
                                    hbase_carriers T43925 inner join 
                                   impala_flight_delays T44037 On (T43925.key = T44037.carrier) inner join 
                              hbase_geog_dest T43928 On (T43928.key = T44037.dest and T43928.dest_state = 'Georgia')
                    where  ( T43925.carrier_desc = 'American Airlines Inc.' or T43925.carrier_desc = 'Delta Air Lines Inc.' or T43925.carrier_desc = 'Southwest Airlines Co.' or T43925.carrier_desc = 'Spirit Air Lines' or T43925.carrier_desc = 'Virgin America' ) 
                    group by T43925.carrier_desc, T43928.dest_city, T43928.dest_state
               ) D1
          union all
          select 3 as c1,
               D1.c3 as c2,
               D1.c4 as c3,
               D1.c5 as c4,
               D1.c6 as c5,
               'All USA' as c6,
               D1.c4 as c7,
               1 as c8,
               D1.c5 as c9,
               D1.c6 as c10,
               D1.c2 as c11,
               D1.c1 as c12
               (select sum(T44037.late) as c1,
                         sum( as c2,
                         T43925.carrier_desc as c3,
                         T43928.dest_airport_name as c4,
                         T43928.dest_city as c5,
                         T43928.dest_state as c6
                                   hbase_carriers T43925 inner join 
                                   impala_flight_delays T44037 On (T43925.key = T44037.carrier) inner join 
                              hbase_geog_dest T43928 On (T43928.key = T44037.dest and T43928.dest_city = 'Atlanta, GA')
                    where  ( T43925.carrier_desc = 'American Airlines Inc.' or T43925.carrier_desc = 'Delta Air Lines Inc.' or T43925.carrier_desc = 'Southwest Airlines Co.' or T43925.carrier_desc = 'Spirit Air Lines' or T43925.carrier_desc = 'Virgin America' ) 
                    group by T43925.carrier_desc, T43928.dest_airport_name, T43928.dest_city, T43928.dest_state
               ) D1
     ) D1
order by c1, c6, c8, c5, c10, c4, c9, c3, c7, c2 limit 65001

Not bad, though, for a data warehouse offloaded entirely to Hadoop. It’s good to see such a system handling full updates and deletes to data as well as insert appends, and also good to see OBIEE working against an Impala datasource with such good response times. If any of this interests you as a potential customer, feel free to drop me an email, or check out our Big Data Quickstart page on the website.

Categories: BI & Warehousing

Opportunities for Cloud based BI

Dylan's BI Notes - Thu, 2015-05-21 18:33
Three opportunities for the future of Cloud-based BI: 1. Provide the tools for consolidating data. Investing in data matching and merging technologies is too costly for on-premise BI implementations; providing the services within the cloud BI will lower the cost. 2. Provide the external data. Bringing the external data into the individual on-premise env […]
Categories: BI & Warehousing

EM12c : Login to GUI with the correct password causes authentication failure

Pythian Group - Thu, 2015-05-21 16:47

So the other day I was trying to log in to my EM12c R4 environment with the SSA_ADMINISTRATOR user, and I got the error:

“Authentication failed. If problem persists, contact your system administrator”

I was quite sure that the password I had was correct, so I tried with the SYSMAN user and got the same error. I still wanted to verify that I had the correct password, so I tried logging in to the repository database with the SYSMAN user, and was successful, so I knew something was wrong there.

SQL> connect sysman/
Enter password:

So I went to the <gc_inst>/em/EMGC_OMS1/sysman/log/emoms.log and saw the following error:

2015-05-18 21:22:06,103 [[ACTIVE] ExecuteThread: '15' for queue: 'weblogic.kernel.Default (self-tuning)'] ERROR audit.AuditManager auditLog.368 - Could not Log audit data, Error:java.sql.SQLException: ORA-14400: inserted partition key does not map to any partition
ORA-06512: at "SYSMAN.MGMT_AUDIT", line 492
ORA-06512: at "SYSMAN.MGMT_AUDIT", line 406
ORA-06512: at line 1

Which led me to believe that JOB_QUEUE_PROCESSES was set to 0, but that wasn’t the case, since it was set to 50. That is still too low a value for this parameter, though, so I bumped it up to 1000 and tried to rerun the EM12c repository DBMS Scheduler jobs as per the documentation in 1498456.1:

SQL> show parameter JOB_QUEUE_PROCESSES

------------------------------------ ----------- ------------------------------
job_queue_processes integer 50
SQL> alter system set JOB_QUEUE_PROCESSES=1000 scope = both;

System altered.

SQL> show parameter both
SQL> show parameter job

------------------------------------ ----------- ------------------------------
job_queue_processes integer 1000
SQL> connect / as sysdba
SQL> alter system set job_queue_processes = 0;

System altered.

SQL> connect sysman/alyarog1605
SQL> exec emd_maintenance.remove_em_dbms_jobs;

PL/SQL procedure successfully completed.

SQL> exec gc_interval_partition_mgr.partition_maintenance;

PL/SQL procedure successfully completed.

SQL> @$OMS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_recompile_invalid.sql SYSMAN
old 11: AND owner = upper('&RECOMPILE_REPOS_USER')
new 11: AND owner = upper('SYSMAN')
old 26: dbms_utility.compile_schema(upper('&RECOMPILE_REPOS_USER'),FALSE);
new 26: dbms_utility.compile_schema(upper('SYSMAN'),FALSE);
old 41: WHERE owner = upper('&RECOMPILE_REPOS_USER')
new 41: WHERE owner = upper('SYSMAN')
old 84: AND owner = upper('&RECOMPILE_REPOS_USER')
new 84: AND owner = upper('SYSMAN')
old 104: AND ds.table_owner = upper('&RECOMPILE_REPOS_USER')
new 104: AND ds.table_owner = upper('SYSMAN')

PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.

SQL> connect / as sysdba
SQL> alter system set job_queue_processes = 1000;

System altered.

SQL> connect sysman/
Enter password:
SQL> exec emd_maintenance.submit_em_dbms_jobs;

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

After this I bounced the OMS, but I still kept getting the same error. And though it fixed the scheduler jobs, I was now seeing the following error in the emoms.log:

2015-05-18 22:29:09,573 [[ACTIVE] ExecuteThread: '15' for queue: 'weblogic.kernel.Default (self-tuning)'] WARN auth.EMRepLoginFilter doFilter.450 - InvalidEMUserException caught in EMRepLoginFilter: Failed to login using repository authentication for user: SSA_ADMIN
oracle.sysman.emSDK.sec.auth.InvalidEMUserException: Failed to login using repository authentication for user: SSA_ADMIN

So what I did was update the SYSMAN.MGMT_AUDIT_MASTER table and run the procedure MGMT_AUDIT_ADMIN.ADD_AUDIT_PARTITION, as stated in document id 1493151.1:

oracle $ sqlplus


Enter user-name: sysman
Enter password:

SQL> update mgmt_audit_master set prepopulate_days=5 where prepopulate_days is null;

1 rows updated.

SQL> select count(1) from mgmt_audit_master where prepopulate_days is null;


SQL> exec mgmt_audit_admin.add_audit_partition;

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

Once I did this, I was able to login with all my EM12c administrators without any issues:

oracle@em12cr4.localdomain [emrep] /home/oracle
oracle $ emcli login -username=ssa_admin
Enter password

Login successful


Even though JOB_QUEUE_PROCESSES was not set to 0, it was still the cause of the failure, as it was set too low for this environment. So be careful when setting this parameter, and be sure to follow the latest installation guidelines.

Note: This was originally published on

Categories: DBA Blogs

New Style Sheet Guide Posted for Working with the PeopleSoft Fluid User Experience

PeopleSoft Technology Blog - Thu, 2015-05-21 16:08

Customers and partners have asked us for guidelines and standards for developing with PeopleSoft's Fluid UI.  To support that, we've just posted the PeopleSoft Fluid User Interface CSS Guide on the FLUID UI: PeopleSoft Fluid User Interface Documentation Updates page on My Oracle Support (Doc ID 1909955.1).  This guide contains descriptions of CSS styles that PeopleSoft delivers. You'll find this information helpful for creating custom Fluid applications as well as extending current CSS features delivered in your PeopleSoft applications. The document provides descriptions of nearly a thousand CSS styles delivered with PeopleTools 8.54. The styles are divided into these categories:

  • PeopleTools System Default CSS Styles: These styles are used to control basic elements of the fluid infrastructure provided by PeopleTools, such as the NavBar, the fluid banner, homepages, tiles and so on.
  • Application Content CSS Styles: These styles are used to control application content deployed in fluid mode.

Creating fluid applications relies heavily on CSS 3.0 for the look, feel, and layout of the runtime application. If you intend to customize or create any fluid application, expert knowledge of CSS is required.  Before doing any fluid style development work, make sure you are familiar with the documentation provided in PeopleSoft PeopleTools 8.54: Fluid User Interface Developer’s Guide, “Adding Page Controls,” Applying Styles.

Refer to the FLUID UI: PeopleSoft Fluid User Interface Documentation Updates MOS page to receive important information about the PeopleSoft Fluid User Interface.

Loading, Updating and Deleting From HBase Tables using HiveQL and Python

Rittman Mead Consulting - Thu, 2015-05-21 14:32

Earlier in the week I blogged about a customer looking to offload part of the data warehouse platform to Hadoop, extracting data from a source system and then incrementally loading data into HBase and Hive before analysing it using OBIEE11g. One of the potential complications for this project was that the fact and dimension tables weren’t append-only; Hive and HDFS are generally considered write-once, read-many systems where data is inserted or appended into a file or table but generally then can’t be updated or overwritten without deleting the whole file and writing it again with the updated dataset.

To get around this problem we loaded our incoming data into HBase tables, a NoSQL key/value-store database that also runs on Hadoop and HDFS but permits update and delete operations on rows as well as selects and inserts. Later on we took the main fact table stored in Hive-on-HBase and copied its contents into Impala, to considerably improve the response time of queries against that table and the still-Hive-on-HBase dimension tables. But going back to the insert-update-delete operations on the HBase tables: how exactly does this work, and what’s the most efficient way to do it?

Taking a step back for a moment, HBase is a NoSQL, key/value-type database where each row has a key (for example, “SFO” for San Francisco airport) and then a number of columns, grouped into column families. In the Flight Delays dataset that we used in the previous blog post, an HBase table of origin airports might have a few thousand entries, with each entry, or row, keyed on a particular airport code like this:


(Note that at the start, these key values won’t be there – they’re more for illustrative purposes)

At the time of HBase table definition, you specify one or more “column families”. These are group headers for columns you might add later, and in the case of my origin airport table I just use the column family name “origin”, so that the HBase table DDL looks like this:

create 'geog_origin','origin'

and the conceptual view of the table would look like this:


Now what’s neat about NoSQL-style databases like this (and Endeca Server is the same) is that you can define individual columns just by using them. For example, I could create columns for the airport name, airport city, airport state and airport code just by using their names in a data load, prefixing those column names with the name of a previously-defined column family. Using the HBase Shell, for example, I could issue the following PUT commands to insert the first row of data into this HBase table, like this:

put 'geog_origin','SFO','origin:airport_name','San Francisco, CA: San Francisco'
put 'geog_origin','SFO','origin:city','San Francisco, CA'
put 'geog_origin','SFO','origin:state','California'
put 'geog_origin','SFO','origin:id','14771'

Now my HBase table conceptually looks like this:


If I then want to use another column under the “origin” column family for LAX, I can just do so by using it in the next set of PUT commands, like this:

put 'geog_origin','LAX','origin:airport_name','Los Angeles, CA: Los Angeles'
put 'geog_origin','LAX','origin:city','Los Angeles, CA'
put 'geog_origin','LAX','origin:state','California'
put 'geog_origin','LAX','origin:region','West Coast'
put 'geog_origin','LAX','origin:id','12892'


Each column within column families has its values individually set, retrieved and deleted using PUT, GET and DELETE commands, and as long as you prefix the column name with one of the previously-defined column-family names and provide the key value for the row you’re interested in, HBase database tables are very flexible and were designed for simple product catalog-type applications running on hundreds of sharded server nodes for companies of the likes of Amazon, Google and Facebook (see this HBase “Powered-by” page for more examples of organizations using HBase).
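The same per-cell PUT, GET and DELETE operations can also be driven from Python via the Happybase client library, which I come back to later in this post. Here's a sketch against the origin-airports table above; the `airport_row()` helper and the `bigdatalite` host name are my own illustrative assumptions, not part of the Happybase API:

```python
def airport_row(name, city, state, airport_id, region=None):
    # Build the {'family:qualifier': value} dict Happybase expects for one row;
    # columns spring into existence simply by being named under a column family.
    row = {'origin:airport_name': name,
           'origin:city': city,
           'origin:state': state,
           'origin:id': airport_id}
    if region is not None:
        row['origin:region'] = region
    return row

def demo(host='bigdatalite'):  # hypothetical host, as used elsewhere in this post
    import happybase  # pip install happybase; needs the HBase Thrift server running
    table = happybase.Connection(host).table('geog_origin')
    # PUT: set several cells of one row in a single call
    table.put('LAX', airport_row('Los Angeles, CA: Los Angeles',
                                 'Los Angeles, CA', 'California', '12892',
                                 region='West Coast'))
    # GET: fetch all cells for a row key
    print(table.row('LAX'))
    # DELETE: remove a single cell, leaving the rest of the row intact
    table.delete('LAX', columns=['origin:region'])
```

As with the shell PUTs, nothing about the `origin:region` column needs declaring in advance; only the column family does.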

But what HBase very much isn’t is a relational database like Oracle, Microsoft SQL Server or even Apache Hive, databases that we’re much more likely to store data warehouse-type data in. In the previous post I showed how Hive table structures can in fact be put over HBase tables, mapping HBase columns to Hive columns, and then HiveQL INSERT INTO TABLE … SELECT commands can be used to bulk-load these HBase tables with initial sets of data. So back to the original question: what’s the best way to then incrementally load and refresh these HBase tables, and can I still use HiveQL for this?

In my original post, I defined Hive tables over my HBase ones using the Hive-on-HBase (yum install hive-hbase) package and associated Hive storage handler; for example, the Hive table that provided SQL access over the flight_delays HBase table was defined like this:

ADD JAR /usr/lib/hive/lib/zookeeper.jar;
ADD JAR /usr/lib/hive/lib/hive-hbase-handler.jar;
ADD JAR /usr/lib/hive/lib/guava-11.0.2.jar;
ADD JAR /usr/lib/hive/lib/hbase-client.jar;
ADD JAR /usr/lib/hive/lib/hbase-common.jar;
ADD JAR /usr/lib/hive/lib/hbase-hadoop-compat.jar;
ADD JAR /usr/lib/hive/lib/hbase-hadoop2-compat.jar;
ADD JAR /usr/lib/hive/lib/hbase-protocol.jar;
ADD JAR /usr/lib/hive/lib/hbase-server.jar;
ADD JAR /usr/lib/hive/lib/htrace-core.jar;

CREATE EXTERNAL TABLE hbase_flight_delays
 (key string,
  year string,
  carrier string,
  orig string,
  dest string,
  flights string,
  late   string,
  cancelled string,
  distance string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,dims:year,dims:carrier,dims:orig,dims:dest,measures:flights,measures:late,measures:cancelled,measures:distance")
TBLPROPERTIES ("hbase.table.name" = "test1_flight_delays");

With the underlying HBase table defined with a key and two column families, one for dimension columns and one for fact (measure) ones – the key is a sequence number that I added to the source dataset to give each row a unique identifier.

create 'test1_flight_delays','dims','measures'

To initially populate the table, I’ve created another Hive table with the initial set of source data in it, and I just insert its values in to the Hive-on-HBase table, like this:

insert into table hbase_flight_delays              
select * from flight_delays_initial_load;      
Total jobs = 1
Total MapReduce CPU Time Spent: 11 seconds 870 msec
Time taken: 40.301 seconds

This initial load of 200,000 rows in this instance took 40 seconds to load; not bad, certainly acceptable for this particular project. Imagine now for every day after this we typically added another 500 or so flight records; in regular Hive this would be straightforward and we’d use the LOAD DATA or INSERT INTO TABLE … SELECT commands to add new file data to the Hive table’s underlying HDFS directories. And we can do this with the Hive-on-HBase table too, with the INSERT INTO TABLE command adding the incoming data to new rows/cells in the HBase table. Checking the row count and min/max ID values in the Hive-on-HBase table at the start, like this:

select count(*), min(cast(key as bigint)) as min_key, max(cast(key as bigint)) as max_key
from hbase_flight_delays;
Total jobs = 1
Total MapReduce CPU Time Spent: 14 seconds 660 msec
200000  1  200000
Time taken: 53.076 seconds, Fetched: 1 row(s)

I can see that there’s 200,000 rows in the HBase table, starting at key value 1 and ending at key value 200,000. The table containing new data has key values going from 200,001 to 200,500, so let’s insert that new data into the Hive-on-HBase table:

insert into table hbase_flight_delays                                              
select * from flight_delays_daily_update_500_rows;     
Total jobs = 1
Total MapReduce CPU Time Spent: 3 seconds 870 msec
Time taken: 26.368 seconds

Not bad – 26 seconds for the 500 rows, not quite as fast as the initial load but acceptable. Let’s also check that the data went in OK:

select count(*), min(cast(key as bigint)) as min_key, max(cast(key as bigint)) as max_key
from hbase_flight_delays;  
Total jobs = 1
Total MapReduce CPU Time Spent: 13 seconds 580 msec
200500   1   200500
Time taken: 44.552 seconds, Fetched: 1 row(s)

As I’d hoped, the number of rows has increased by 500 and the maximum key value is now 200,500. But how do we apply updates to the data in the table? I’ve got another source table that this time contains 1,000 randomly-selected rows from the initial data load dataset, where I’ve set the LATE column value to ‘999’:

hive> select * from flight_delays_daily_changes_1000_rows                                      
    > limit 5;
21307  2008 WN BDL  BWI  1  999  1  283
136461  2008  OO  ORD  TYS  0  999  1  475
107768  2008  WN  BWI  ORF  0  999  1  159
102393  2008  OO  SLC  ACV  0  999  1  635
110639  2008  WN  BOI  OAK  0  999  1  511
Time taken: 0.139 seconds, Fetched: 5 row(s)

In fact the way you apply these updates is just to INSERT INTO TABLE … SELECT again, and the incoming values create new versions of existing rows/cells if needed. Some versions of HBase automatically keep a number of versions of each cell value (typically 3 versions), however the version of HBase that comes with CDH5.2 and higher only keeps one version by default (you can increase this number per table, or system-wide, using the steps in the CDH5.2 release notes). Let’s try this out now, first using the HBase shell to see the values and timestamps currently held for one particular key value I know should be updated by the next dataset:

hbase(main):029:0> get 'test1_flight_delays', '102393'
COLUMN                                     CELL                                                                                                                       
 dims:carrier                              timestamp=1432236609421, value=OO                                                                                          
 dims:dest                                 timestamp=1432236609421, value=ACV                                                                                         
 dims:orig                                 timestamp=1432236609421, value=SLC                                                                                         
 dims:year                                 timestamp=1432236609421, value=2008                                                                                        
 measures:cancelled                        timestamp=1432236609421, value=1                                                                                           
 measures:distance                         timestamp=1432236609421, value=635                                                                                         
 measures:flights                          timestamp=1432236609421, value=0                                                                                           
 measures:late                             timestamp=1432236609421, value=0                                                                                           
8 row(s) in 0.0330 seconds
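If the table had been created to keep more than one version per cell, the same history the shell shows here could be read programmatically too. A small sketch using the Happybase Python client (which I use properly later in this post); the `newest()` helper is my own illustrative addition, not part of the Happybase API:

```python
def cell_history(table, row_key, column, max_versions=3):
    # Happybase's Table.cells() returns up to max_versions values for one cell,
    # newest first; include_timestamp pairs each value with its HBase timestamp.
    return table.cells(row_key, column, versions=max_versions,
                       include_timestamp=True)

def newest(history):
    # Pick the (value, timestamp) pair carrying the latest HBase timestamp
    return max(history, key=lambda pair: pair[1])
```

Remember that `cells()` only returns more than one pair when the column family was created or altered with a VERSIONS setting above the CDH5.2 default of 1.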

I’ll now use Hive to apply the updates, like this:

insert into table hbase_flight_delays                                              
select * from flight_delays_daily_changes_1000_rows;
Total jobs = 1
Total MapReduce CPU Time Spent: 4 seconds 340 msec
Time taken: 24.805 seconds
select count(*), min(cast(key as bigint)) as min_key, max(cast(key as bigint)) as max_key
from hbase_flight_delays;                                                          
Total jobs = 1
Total MapReduce CPU Time Spent: 13 seconds 430 msec
200500 1 200500
Time taken: 47.379 seconds, Fetched: 1 row(s)

Notice how this third INSERT didn’t create any new rows; the max key ID in the follow-up query hasn’t increased since the previous insert of new data. Querying one of the rows that I know was changed by this new table of data updates, I can see that the LATE column value has been changed:

select * from hbase_flight_delays where key = '102393';
Total jobs = 1
Total MapReduce CPU Time Spent: 3 seconds 600 msec
102393  2008  OO  SLC  ACV  0  999  1  635

Let’s go into the HBase shell now and take a look at the columns cells for that same key ID:

hbase(main):030:0> get 'test1_flight_delays', '102393'
COLUMN                                     CELL                                                                                                                       
 dims:carrier                              timestamp=1432236723680, value=OO                                                                                          
 dims:dest                                 timestamp=1432236723680, value=ACV                                                                                         
 dims:orig                                 timestamp=1432236723680, value=SLC                                                                                         
 dims:year                                 timestamp=1432236723680, value=2008                                                                                        
 measures:cancelled                        timestamp=1432236723680, value=1                                                                                           
 measures:distance                         timestamp=1432236723680, value=635                                                                                         
 measures:flights                          timestamp=1432236723680, value=0                                                                                           
 measures:late                             timestamp=1432236723680, value=999                                                                                         
8 row(s) in 0.0800 seconds

Notice how the timestamp for each of the cells has now updated? If I had more than the default 1 version of each cell enabled, I could query the previous versions to see the old values and timestamps. So this works pretty well, and all I need to do is use HiveQL and INSERT INTO TABLE … SELECT to initially populate, append to and even update values in the table. But what if I want to update HBase more “programmatically”, maybe as part of a process that reads directly from a source application (for example, Salesforce or a web service) and then writes directly into HBase without the intermediate step of landing the incoming data into a file? For this we can use the HBase Client API, of which there are libraries for many languages, with the most popular being the Java API. If Java is too much, though, and you’d rather interact with HBase using a language such as Python, then as this Cloudera blog post explains you can use either a REST API interface to HBase or one that goes via the Thrift interface.

In my case, my preferred way of programmatically working with HBase is to use Python and a developer library called Happybase, where I can also bring in other libraries such as ones to work with Hive and even ones to work with OBIEE and Fusion Middleware, and do my work at a much higher level of abstraction. To show how this might work, I’m going to use Python, the HBase Client API and Happybase to programmatically read from my update Hive tables (in real life I’d probably connect directly to a web service if going down this more complicated route) and write a routine that reads rows from the Hive table and loads them into HBase.

Again I’m using the Oracle Big Data Lite 4.1 VM which has Python 2.7.6 already installed, and to get ready to install the Happybase library I first need to install pip, the “preferred installer program” for Python. As per the pip installation instructions, first download pip and then install it from the command-line:

sudo python

Then use pip to install Happybase:

sudo pip install happybase

Whilst you’re there you might as well install “pyhs2”, another Python package that in this case lets us easily connect to Hive tables via the HiveServer2 interface found on CDH5+ and the Big Data Lite 4.1 VM:

sudo pip install pyhs2

Now I can put together a Python program such as the one below, that in this case creates a connection to a Hive table, selects all rows from it into a cursor and then PUTs these rows into the HBase table, via a batch process that sends data to HBase via the Thrift interface every 10,000 rows:

import pyhs2
import happybase
connection = happybase.Connection('bigdatalite')
flight_delays_hbase_table = connection.table('test1_flight_delays')
b = flight_delays_hbase_table.batch(batch_size=10000)
with pyhs2.connect(host='bigdatalite',
               database='default') as conn:
    with conn.cursor() as cur:
        # Execute query against the Hive source table
        cur.execute("select * from flight_delays_initial_load")
        # Fetch the results and PUT each row into the HBase table
        for i in cur.fetch():
            b.put(str(i[0]),{'dims:year': i[1],
                             'dims:carrier': i[2],
                             'dims:orig': i[3],
                             'dims:dest': i[4],
                             'measures:flights': i[5],
                             'measures:late': i[6],
                             'measures:cancelled': i[7],
                             'measures:distance': i[8]})
b.send()  # flush any rows left in the final partial batch

which I can then run from the command-line like this:

[oracle@bigdatalite ~]$ python ./

As I said, using this approach I could just as easily connect to a web service or read in data via Flume or Kafka, and I can delete rows as well as insert/update them and add any other logic. From my testing it’s not all that much faster than going via HiveQL and INSERT INTO TABLE … SELECT scripts (most probably because I’m still going into HBase indirectly, via the Thrift interface) but it does offer the possibility of direct inserts into HBase (and therefore Hive) from the source application without the intermediate step of writing files to disk.
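As a sketch of that delete path, again using Happybase and under the same assumptions as the loader script above: deletes go through the same Batch object as puts, so a hypothetical purge of a list of key values might look like this:

```python
def delete_keys(table, keys, batch_size=10000):
    # Delete whole rows from an HBase table, sending Thrift mutations in batches;
    # using the batch as a context manager flushes the final partial batch on exit.
    with table.batch(batch_size=batch_size) as b:
        for k in keys:
            b.delete(str(k))  # omitting the columns argument removes the entire row
```

Passing a `columns` list to `Batch.delete()` instead removes just those cells, mirroring the column-level DELETE shown earlier in the HBase shell.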

So to finish this short series, tomorrow I’ll look at how well these Hive-on-HBase tables, and the Impala table I created in the previous example, work when queried from OBIEE11g. Back tomorrow.

Categories: BI & Warehousing

Oracle Priority Support Infogram for 21-MAY-2015

Oracle Infogram - Thu, 2015-05-21 14:11

From DBA Kevlar: SQL ID Specific Performance Information.
A tale of romantic notions and cold hard bonus cash: Dinner Courtesy of the Lady: My Most Successful PL/SQL Assignment, from
From the intriguingly named PETEWHODIDNOTTWEET.COM: INSTALLING ENTERPRISE MANAGER 12C FOR TESTING (also apparently the Pete whose shift key is stuck).
From the AMIS Oracle and Java Blog: Keeping track of your licenses with OEM12C – reports
Traffic Director is now available, from Oracle Exalogic.
Data Warehousing
More on monitoring parallel processing from the Data Warehouse Insider: PX In Memory, PX In Memory IMC?
Cloud Computing
DOCS and APEX integration through REST API, from Oracle Cloud for Developers.
Now Available! Oracle Identity Management 11gR2 PS3, from Identity Management.
From Oracle DBA – Tips and Techniques: Oracle Goldengate on DBFS for RAC and Exadata
Oracle OpenStack
LIVE WEBINAR (May 28): How to Get Started with Oracle OpenStack for Oracle Linux, from the Oracle OpenStack Blog.
Creating and scaling Dynamic Clusters using wlst, from WebLogic Partner Community EMEA.
Differences between Oracle APEX 4.2 and 5.0, from get along with oracle and apex.
From Oracle Solaris Crash Analysis Tool: Oracle Solaris Crash Analysis Tool 5.5 Release
From the Oracle E-Business Suite Support blog:
Receiving Errors When Validating Invoices? APP-PO-14144: PO_ACTIONS-065 CLOSE_AUTO
Webcast: Getting Started with Document Sequencing in Oracle Receivables
Webcast: A Diagnosis on OPM - ASCP Data Collection Entities
New and Improved Shipping Execution Analyzer!
The Procurement Accrual Accounting Analyzer has been improved!
Webcast: An Overview of Construction Estimates in R12 in EAM
From the Oracle E-Business Suite Technology blog:

Firefox ESR 38 Certified with EBS 12.2 and 11i

Fixing Super LOV in Universal Theme

Dimitri Gielis - Thu, 2015-05-21 14:08
When you migrate to APEX 5.0 and the Universal Theme you might see that some plugins are not behaving correctly anymore. In this post I'll discuss the Enkitec Modal LOV plugin.

When I ran the plugin in my APEX 5.0 app with the Universal Theme it looked like this:

There's too much space in the search bar and the close button is not showing up with an icon.

Here are the steps I took to fix it. First you need to download the CSS file of the style you use and the JS file from the plugin in Shared Components. I use the smoothness.css style most of the time, so I'll use that as an example.

To fix the close icon, add !important to the png:

.ek-ml .ui-state-default .ui-icon {
background-image: url(ui-icons_888888_256x240.png) !important;
}

Note: you can do that for all of those pngs on lines 467 to 489.

To fix the height, add following css to smoothness.css:

.superlov-button-container {

And finally in enkitec_modal_lov.min.js change the height of the searchContainer from a dynamic height (r) to 0px:
Next upload those files again to the plugin.
When you run the plugin it should give you this result:

Now the bigger question is: do we still need that plugin? In APEX 5.0 there are native Modal Pages, so you could create an Interactive Report and set the page as a Modal Page. Next you can hook that up to a button or link, and you've just built your own Modal LOV.
I still like to use the plugin at the moment (as it's just one item on the page), but it could use a refresh to make it look nicer and more in line with Universal Theme.
I wonder what you think: would you build your own Modal LOV in APEX 5.0, or would you still prefer to use a plugin?
Categories: Development

Understanding SQL

Jonathan Lewis - Thu, 2015-05-21 11:12

From time to time someone publishes a query on the OTN database forum and asks how to make it go faster, and you look at it and think it’s a nice example to explain a couple of principles because it’s short, easy to understand, obvious what sort of things might be wrong, and easy to fix. Then, after you’ve made a couple of suggestions and explained a couple of ideas the provider simply fades into the distance and doesn’t tell you any more about the query, or whether they’ve taken advantage of your advice, or found some other way to address the problem.

Such a query, with its execution plan, appeared a couple of weeks ago:

	FIL_RUN_DT = Current_fil_run_dt,
	ROW_UPDT_DT = dta_cltn_end_dttm
		AND	STG_CRME.FUND_ID IN ('AAB1', '1AA2', '1BA2', 'AAB2', '1AA3', '1BA3', '1B80', '1A80')

| Id  | Operation                     | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | UPDATE STATEMENT              |                       |     1 |   148 | 12431   (2)| 00:02:30 |
|   1 |  UPDATE                       | MMR_DTL               |       |       |            |          |
|   2 |   NESTED LOOPS SEMI           |                       |     1 |   148 | 12431   (2)| 00:02:30 |
|*  3 |    HASH JOIN RIGHT SEMI       |                       |    49 |  5488 | 12375   (2)| 00:02:29 |
|   4 |     TABLE ACCESS FULL         | FCTS_TMS_MBRID_XWLK   |  6494 | 64940 |    24   (0)| 00:00:01 |
|*  5 |     TABLE ACCESS FULL         | MMR_DTL               |   304K|    29M| 12347   (2)| 00:02:29 |
|*  6 |    TABLE ACCESS BY INDEX ROWID| CRME_FUND_DTL_STG     |     1 |    36 |     5   (0)| 00:00:01 |
|*  7 |     INDEX RANGE SCAN          | IE1_CRME_FUND_DTL_STG |     8 |       |     1   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   3 - access("XWLK"."MBR_CK"="MMR_DTL"."MBRSHP_CK")
   5 - filter("CAPITN_PRCS_IND"=5 AND "HSPC_IND"='Y')
   6 - filter(("STG_CRME"."FUND_ID"='1A80' OR "STG_CRME"."FUND_ID"='1AA2' OR
              "STG_CRME"."FUND_ID"='1AA3' OR "STG_CRME"."FUND_ID"='1B80' OR "STG_CRME"."FUND_ID"='1BA2' OR
              "STG_CRME"."FUND_ID"='1BA3' OR "STG_CRME"."FUND_ID"='AAB1' OR "STG_CRME"."FUND_ID"='AAB2') AND
   7 - access("STG_CRME"."MBR_CK"="MMR_DTL"."MBRSHP_CK")

The most informative bit of narrative that went with this query said:

“The table MMR_DTL doesnt have index on these columns CAPITN_PRCS_IND , HSPC_IND .. Since this is an update stmt which will update 85k records, In a dilema whether to index these columns or not .. And this table MMR_DTL is an ever growing table. Worried about the update performance. “

This was in response to an observation that there was a full tablescan on MMR_DTL at operation 5 despite the predicate “CAPITN_PRCS_IND”=5 AND “HSPC_IND”=’Y’. You’ll note that the predicted cardinality for that scan is 304K and the update statement is going to change CAPITN_PRCS_IND from the value 5 to the value 2 – so it’s not entirely unreasonable to be worried about the impact of creating an index that included the column capitn_prcs_ind.

What more can we say about this query, given the limited information? Lots – unfortunately the owner of the query isn’t giving anything else away.

I’m going to leave this note unfinished to give people a little chance to think about the clues in the request, the questions they might ask, reasons why there might be a performance problem, and strategies they might investigate, then I’ll update the posting with a few ideas some time in the next 24 hours.

Update 1 – 24th May

There are so many ideas that spring up from a small amount of information that it’s very hard to write a concise and coherent description of what you’ve noticed, when and how far you pursued it, and how relevant the ideas might be to the problem in hand – especially when most of the thoughts require you to ask for more information. Something has come up that has taken up the free time I had to write this note, so I’m just going to have to complete it in rapidly written installments. The first bit is an outline of the immediate response I had to the initial presentation of the problem and the execution plan that went with it.

The only comment from the OP on this statement and plan was: “I couldnt optimize this query for better performance and optimized cost.. Can some one guide me on this.”

We have no idea how many rows would be updated, how long it took, or how long the OP thinks it ought to take; it’s not until a subsequent post that we learn that the number of rows targeted for update is 85,000 – which tells us that the optimizer has run into some problems with its cardinality estimates. This suggests that IF there’s a serious performance problem then POSSIBLY there’s a better execution plan, and we might get the optimizer to find it automatically if we could tell it how to adjust its cardinality estimates. It would be nice, however, to know where the query spent its time (i.e. can we re-run it with rowsource execution stats or monitoring enabled, and see the actual run-time work in the plan).
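As a sketch of how that check might be done – assuming a version that supports the gather_plan_statistics hint, and assuming the statement can safely be re-run as a diagnostic exercise (the elided parts of the original statement are left out deliberately):

```sql
-- Sketch only: re-run the update with rowsource execution statistics
-- collected, then report estimated vs. actual rows and time per operation.
update	/*+ gather_plan_statistics */ mmr_dtl
set	fil_run_dt  = current_fil_run_dt,
	row_updt_dt = dta_cltn_end_dttm
where	...		-- rest of the original statement
;

-- In the same session, immediately afterwards:
select	*
from	table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));

rollback;		-- if this was only a diagnostic run
```

The E-Rows vs. A-Rows (and A-Time) columns of that report would tell us immediately which operation loses the time and where the cardinality estimates go wrong.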

If it took a couple of minutes to update that 85,000 rows, I probably wouldn’t want to spend time making it go faster; if it took 2 hours, of which 1 hour 50 minutes was spent waiting for a transaction (row) lock, then I’d want to look at why the update collision could happen and see if that problem could be avoided – it might then be the case that the last 10 minutes was spent rolling back and restarting an update that ought to have taken 2 minutes “in vacuo”. Underlying all this, I would want to be sure (as I’ve implicitly, and I think reasonably, assumed) that it’s an update that runs only occasionally, perhaps once per day or once per week.
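A quick way to check for that sort of time distribution after the event would be a query like the following – assuming the licensing position allows it, since the active session history requires the diagnostic pack, and taking &m_sql_id as a placeholder for the statement’s sql_id:

```sql
-- Sketch: where did a given SQL_ID spend its time recently?
-- ASH samples roughly once per second, so count(*) approximates seconds.
select	session_state, event, count(*) approx_seconds
from	v$active_session_history
where	sql_id = '&m_sql_id'
group by
	session_state, event
order by
	count(*) desc;
```

A large count against an event such as “enq: TX – row lock contention” would point at the collision scenario rather than an inefficient plan.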

In the absence of hard information – let’s resort to a few hypotheticals; looking at the plan itself (and knowing the target 85,000 rows) I am prepared to make a few guesses about the run-time activity.

  1. We build an in-memory hash table from the whole of FCTS_TMS_MBRID_XWLK, a step for which the optimizer ought to be able to give a reasonable cost and cardinality – assuming (as I will from now on) that the basic stats are reasonably accurate.
  2. We scan the (fairly large) MMR_DTL table, applying a couple of filters; again the optimizer ought to do a reasonable job of estimating the cost of such a scan, and we might expect a significant fraction of the time to be spent on multiblock (possibly direct path) reads of the table. The cardinality reported is 304,000 but we note there are two predicates and both are for columns which we might guess have a small number of distinct values – one of which we are changing. Perhaps there’s a bad cardinality error there and maybe a couple of single column histograms would help, but maybe column group stats with a histogram on the pair would be even better. I also wonder when (if) HSPC_IND ever changes from Y to N, and contemplate the possibility of creating a function-based index that records ONLY the rows that match this predicate pair (see the note on indexing that will appear some time over the next week further down the page). It’s at this point that we might ask whether the number of rows returned by this scan should be very similar to the number of rows updated, or whether the scan identifies far too many rows and the two existence tests do a lot of work to eliminate the excess and, if the latter, which test should we apply first and how should we apply it.
  3. Having scanned the MMR_DTL we probe the in-memory hash table copy of FCTS_TMS_MBRID_XWLK for the first match, using an equality predicate (which will be the access predicate) and a range-based (filter) predicate which looks as if it is checking that some “start date” is between an “effective date” and a “termination date”. The estimated size of the result set is FAR too small at 49 rows when we know we have to have at least 85,000 rows survive this test; moreover, this tiny estimate comes from inputs of 6,500 and 304,000 rows being joined so we ought to wonder how such a tiny estimate could appear. A possible explanation is that the application has used some extreme dates to represent NULL values. If that’s the case then it’s possible that suitable histograms might help the optimizer recognise the extreme distribution; alternatively virtual columns that change the extreme values back to NULL and a predicate referencing the virtual columns may help.
  4. After estimating the cardinality of the intermediate result so badly, the optimizer decides that the second existence test can be performed as a semi-join using a nested loop. The thing to note here is that the optimizer “knows” that this is an expensive step – the cost of each table access operation is 5 (4 + 1) – but it’s a step that shouldn’t happen very frequently so the cost is considered acceptable. We know, however, that this step has to execute at least 85,000 times, so the optimizer’s prediction of visiting 4 blocks in the table to identify (on average) 8 rows and discard (on average) 7 of them looks nasty. Again we note that one of the predicates is range-based on a pair of dates – and in this case we might wonder whether or not most of the rows we discard are outside the date range, and whether we ought to consider (as a general point, and not just for this query) whether or not we should add one, the other, or both of the ERN_FROM_DT and ERN_THRU_DAT columns to the IE1_CRME_FUND_DTL_STG index. It’s at this point in the query that we are most ignorant of time spent at present (and in the future as the MMR_DTL table grows) – on one hand it’s possible that the rows for each MMR_DTL row are widely scattered across the CRME_FUND_DTL_STG table and this step could do a lot of random I/O, on the other hand the (assumed) time-dependent nature of the data may mean that the only MMR_DTL rows we look at are recently entered and the associated CRME_FUND_DTL_STG rows are therefore also recently entered and closely clustered – leading to a beneficial “self-caching” effect at the “high” end of the table as the query runs, which introduces an efficiency that the optimizer won’t notice. 
There is one numerical highlight in this join – we have a cost of 5 for each probe and 49 rows to test, so we might expect the incremental cost of the query to be around 250, but the difference between operations 3 and 2 is only 56 – suggesting that the optimizer does have some “self-caching” information, possibly based on there being a significant difference between the two tables for the number of distinct values of the join column.
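For the record, the sort of statements I have in mind for the column-group, “index only the interesting rows”, and virtual-column ideas above would look something like the following sketch. The table and column names in (a) and (b) come from the plan and predicates; the date column name and the extreme “null substitute” date in (c) are pure guesswork:

```sql
-- (a) Column group statistics on the two filter columns of MMR_DTL, so
--     the optimizer can estimate their combined selectivity:
select	dbms_stats.create_extended_stats(
		ownname   => user,
		tabname   => 'MMR_DTL',
		extension => '(CAPITN_PRCS_IND, HSPC_IND)'
	)
from	dual;

begin
	dbms_stats.gather_table_stats(
		ownname    => user,
		tabname    => 'MMR_DTL',
		method_opt => 'for columns (CAPITN_PRCS_IND, HSPC_IND) size 254'
	);
end;
/

-- (b) A function-based index holding entries ONLY for rows that match the
--     predicate pair; rows drop out of the index as the update changes
--     CAPITN_PRCS_IND from 5 to 2.  (The query would have to be rewritten
--     to use the same expression.)
create index mmr_dtl_todo_fbi on mmr_dtl (
	case when capitn_prcs_ind = 5 and hspc_ind = 'Y' then 'Y' end
);

-- (c) A virtual column converting a guessed extreme date back to NULL,
--     with the query then referencing the virtual column instead:
alter table fcts_tms_mbrid_xwlk add (
	trmntn_dt_v generated always as (
		case
			when trmntn_dt = date '9999-12-31' then null
			else trmntn_dt
		end
	) virtual
);
```

Each of these changes the information available to the optimizer (or the work needed at run time) rather than hinting a specific plan, which is usually the more robust strategy.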
Update 2 – 25th May

Scalability is a question that should always be considered – and there’s a scalability threat in the information we have so far. The plan shows a full tablescan of the MMR_DTL table, and while tablescans are not necessarily a bad thing we’ve been told that: “this table MMR_DTL is an ever growing table“. It’s possible that Oracle can be very quick and efficient when doing the existence tests on the rows it selects from the table – but it is inevitable that the tablescan will take longer to complete as time passes; whether or not this is likely to matter is something we can’t decide from the information given: we don’t know how long the tablescan currently takes, what fraction of the total time it accounts for, or how much larger the table will grow each day.

Another scalability detail we ought to ask about is the volume of data that we expect to update each time we run this statement. As time passes, do we expect to see the same number of rows waiting to be updated, or are we expecting the business (whatever that may be) to grow steadily each month, with an increase of a few percent in the number of rows to be updated on each execution? Our coding strategy may vary depending on the answer to that question – we might, for example, try to pre-empt a future problem by introducing some partitioning now.

The final scalability issue is one I’ve raised already and comes from the CRME_FUND_DTL_STG table. According to the plan there are about 8 rows in this table for each distinct value of MMR_DTL.MBRSHP_CK; if MMR_DTL is large and growing, is CRME_FUND_DTL_STG very large and growing even faster – as time passes, will there be more rows for each distinct value of MMR_DTL.MBRSHP_CK? Answers to these questions will help us decide whether we should use a hash join or a nested loop in the join to this table, and how to index the table to minimise random I/O.
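If the answers pointed towards keeping the nested loop semi-join, the indexing change mentioned in point 4 might look something like this. It’s a sketch only: I’m guessing that the existing index starts with MBR_CK, and that ERN_FROM_DT and ERN_THRU_DAT are the columns in the range-based filter predicate:

```sql
-- Hypothetical: add the date columns to the existing index so the
-- range-based filter can be applied in the index, eliminating the table
-- visits for the (on average) 7 discarded rows per probe:
drop index ie1_crme_fund_dtl_stg;

create index ie1_crme_fund_dtl_stg on crme_fund_dtl_stg (
	mbr_ck,		-- assumed existing leading column (access predicate)
	ern_from_dt,	-- assumed date columns from the filter predicate
	ern_thru_dat
);
```

The trade-off, of course, is a slightly larger index and extra maintenance cost on inserts into CRME_FUND_DTL_STG – which is why the growth questions above need answering first.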


To be continued …