
Feed aggregator

Update hinted for wrong index

Bobby Durrett's DBA Blog - Fri, 2015-11-13 10:52

I worked with our support team to improve the performance of a PeopleSoft Financials update statement yesterday. The update statement already contained an index hint, but the hinted index was not the best of the available indexes.

Here is the original update statement:

 BANK_CD = :2,

I listed out the columns for the indexes on the table using the “querytuning” part of my standard SQL tuning scripts.

Here are the columns for the hinted index:


The where clause includes only the first two columns.

But another similar index, PSEPYMNT_VCHR_XREF, exists with these columns:


The where clause includes all three of these columns. So why was the original query hinted the way it was? Wouldn't the E index work better than the C index? I ran a query to see how selective the condition PYMNT_SELCT_STATUS = 'N' is.

>select PYMNT_SELCT_STATUS,count(*)
 4 AND B.REMIT_VENDOR = '12345678'

- ----------
C          5
N        979
P     177343
X       5485

I included the conditions on the first two columns that both indexes share, but removed the other conditions from the original update. A count of the rows that meet only these two conditions shows how many rows the original index has to visit while checking the remaining where clause conditions.

I grouped by PYMNT_SELCT_STATUS to see how many rows met the condition PYMNT_SELCT_STATUS = 'N' and how many did not; the count for 'N' shows how many rows the new index would visit while checking the remaining where clause conditions. In short, the query tells me whether the second index would touch fewer rows than the first.
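Reconstructed, the selectivity check looks roughly like this (a sketch only: the table name PS_PYMNT_VCHR_XREF is assumed from the standard PeopleSoft naming behind the PS%PYMNT_VCHR_XREF indexes, and the filter on the other leading index column is omitted because the excerpt does not show it):

select PYMNT_SELCT_STATUS, count(*)
from PS_PYMNT_VCHR_XREF B
where B.REMIT_VENDOR = '12345678'
group by PYMNT_SELCT_STATUS;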

This query showed that only 979 of the more than 180,000 rows met the condition. That made me think the E index, which includes PYMNT_SELCT_STATUS, had a good chance of speeding up the original update. I ran a count with a hint forcing the C index and then again forcing the E index:

>set timing on
>select /*+ INDEX(B PSCPYMNT_VCHR_XREF) */ count(*)
 4 AND B.REMIT_VENDOR = '12345678'
 6 AND B.PYMNT_ID = ' '


Elapsed: 00:13:52.53
>select /*+ INDEX(B PSEPYMNT_VCHR_XREF) */ count(*)
 4 AND B.REMIT_VENDOR = '12345678'
 6 AND B.PYMNT_ID = ' '


Elapsed: 00:00:01.28

The original hint caused the select count(*) query to run for almost 14 minutes, while the new hint brought it down to just over a second. Clearly the new E index makes the query run much faster!

The developer that I was working with found the problem update statement in some PeopleCode and was able to edit the hint forcing it to use the better index. We migrated the modified code to production and the user was able to run the update statement without the web site timing out. Prior to the change the user was not able to complete the update because the SQL statement took so long it exceeded our application server timeout.



Categories: DBA Blogs

Questions Asked, Answered from Webcast: 3 Ways to Power Collaboration and Mobility

WebCenter Team - Fri, 2015-11-13 06:00

Thanks to everyone who joined us earlier this week on our live webcast “Three Ways to Power Digital Workplace Collaboration and Mobility”. For those who missed it or are interested in catching the replay, the on-demand recording is now available. You will also find a copy of the presentation for download there.

We also thank you for your questions and interest. Due to limited time, we were able to provide real-time responses to most, but not all, questions. So, for good measure, I have captured relevant questions and answers from the live webcast here. Please do keep the communication going. Feel free to post your comments and questions here, start your free trial, and visit us for more information on the solutions and potential next steps for you. We are committed to making your cloud journey easier and more meaningful.

Can Documents Cloud be used with

Yes. Oracle Documents Cloud comes with REST APIs that allow you to easily integrate it with any CRM, ERP, on-premises or SaaS application, and other systems.

Do we need to buy Oracle Mobile Cloud Service to enable mobile use of Process Cloud? Same question for Documents Cloud?

Both Oracle Documents Cloud and Oracle Process Cloud come with native mobile applications that can be easily installed from your mobile app store. The great advantage of these Oracle Platform as a Service (PaaS) solutions is that they easily plug and play together. So, if for example you are looking to build a custom mobile app for sales reps or to serve a specific function, you can easily build the mobile app with Oracle Mobile Cloud Service, drive content with Documents Cloud, and drive processes with Oracle Process Cloud.

How do you maintain data segregation for security and compliance?

Security is a key tenet for Oracle PaaS solutions. Oracle Documents Cloud supports data and user segregation. For example, you can have a public instance and a separate private instance to support your security and corporate policy compliance. You can even segregate by user type. Plus, you have granular privilege controls for folder sharing, link sharing, link expiry and such to enforce data and information security.

What is the benefit of using Process Cloud over Oracle BPM?

Oracle Process Cloud offers you a subscription-based business process management service in the Cloud. Designed for business users, it allows for rapid process design and automation from any device, without reliance on IT. One of the major benefits of using Oracle Process Cloud is that Oracle handles the installation and uptime. Other benefits include easy integration with other Oracle Cloud services (including Oracle Documents Cloud, Oracle Mobile Cloud, etc.) as well as with other Oracle and third-party on-premises or cloud systems.

Are Documents Cloud and Process Cloud integrated together?

Many of our customers, including Mythics, are indeed using these two solutions together to enhance their process automation and document collaboration activities. This means that processes can be kicked off during content/document collaboration and, in turn, you can attach a document to a process. In addition, both services have great REST APIs which allow users to drive their own integration with existing solutions and applications.

Is Documents Cloud public or private/on premise?

Oracle Documents Cloud is a service available in the Oracle public cloud, where you can also find the related platform services. Given Oracle’s pedigree in on-premises content management with Oracle WebCenter Content, we are one of the leading providers supporting hybrid, public and on-premises content management and sharing.

What branch of Oracle does the Documents Cloud Service fall under - for example, Customer Experience, Business Intelligence, or Enterprise Resource Planning?

Oracle Documents Cloud Service is part of Oracle Cloud Platform for content and process. Oracle has the distinct advantage of leveraging resources, learnings and R&D from our on-premises Oracle WebCenter and Oracle BPM solutions as foundational elements to provide you the most comprehensive, holistic and proven cloud solutions. For more information, please visit

Mythics – are you doing document sharing as part of the process with external parties too (legal, for example)?

Currently, our process is internal only. However, we often communicate with external parties, and increasingly we are recommending the Documents Cloud Service to do this. That said, this is a shift in thinking. Many people are used to sending files via email. It will take a while before this changes.

This is for Mythics – who is doing process modeling in the company? Are you involving IT or business analysts?

Mythics is an IT solutions company, so our business analyst functions overlap with IT. Compared with other industries, we are involving both sides of the company in process modeling.

Does Oracle Documents Cloud Services come with a document viewer?

Yes. Oracle Documents Cloud Service does have a document viewer that allows proper rendering of all document types on any device. In addition, the Oracle Documents Presenter app allows you to present curated content directly from your tablet. Sales reps find the Presenter app particularly useful for impactful, visually appealing presentations right from their iPads.

Does the document viewer/Documents Presenter support annotations capabilities for collaborations among users?

Yes. You can make and view annotations from the Oracle Documents Cloud web application.



Will Sites Cloud be integrated with Documents and Process Cloud Services?

Much like Oracle’s other PaaS services, including Oracle Documents Cloud Service and Oracle Process Cloud Service, Oracle Sites Cloud will support integration with other Oracle PaaS services. In this webcast, for example, you saw how community sites can benefit from pulling content, conversations and processes all into one place, all in context. You will continue to see more of this from Oracle in the coming weeks and months. Take a look at Mythics' blog post on early impressions of Oracle Sites Cloud Service, coming out of OOW:

Mythics – how do you see public institutions using Process and Documents cloud given the compliance requirements?

I recently attended Oracle OpenWorld and noted several Public Sector organizations' interest in and adoption of Platform as a Service cloud tools. Process Cloud and Documents Cloud are industry-agnostic tools and can be used in different business areas. Specific to compliance requirements, not all public sector institutions have the same requirements - some will require strict FedRAMP controls, while others have specific data security requirements. Oracle has made tremendous investments in data security across all solutions, and these Cloud Platform services are no different. I also understand that Oracle is currently working to make all PaaS tools available in their government cloud data centers.

Can we try Oracle Documents Cloud Service today?

Yes. You can start your free trial of Oracle Documents Cloud Service today. Simply register for the free trial to get started.


Confused about Oracle Identity Management : See me #Sangam15 Hyderabad on 21/22 Nov 2015

Online Apps DBA - Fri, 2015-11-13 05:36

Are you confused by the many Oracle Identity & Access Management products (OAM, OIM, OID, OVD, OAG and the list goes on…)?

Do you understand Identity Management jargon like Authentication, Authorization (fine-grained vs coarse-grained), Provisioning, Reconciliation, and SoD?

Wish you could automatically create users and assign roles/responsibilities in Oracle E-Business Suite centrally?

Interested in enhancing your skills and seeing my journey from DBA/Apps DBA to author of a book on Oracle Identity & Access Management?




I am presenting two papers on Oracle Identity & Access Management at #Sangam15 on 22nd Nov 2015 in Hyderabad, India.



The first presentation, at 10 AM on 22nd Nov (Sunday), covers the various Oracle Identity & Access Management products and the basic functionality of each.


The second presentation, at 2:40 PM on 22nd Nov (Sunday), covers Oracle Identity Management and its integration with Oracle E-Business Suite. This session also covers basic concepts like Provisioning, Reconciliation, and User, Role, and Responsibility Management in Oracle E-Business Suite Apps R12/11i using OIM.


Seats are limited for my session at #Sangam15 so register for these two sessions early to avoid disappointment.

1. Overview of Oracle IAM for terminology and all the products that are part of Oracle IAM like OAM, OIM, OID, OVD, OIF, OAG, OWSM …..



2. Integrate Oracle E-Business Suite with OIM for User, Role and Responsibility Management




Special bonus: those who attend the presentations will get my “30 Day Apps DBA Challenge” absolutely FREE (more details during the session).


Did you subscribe to our Private Facebook Group and YouTube Channel?

The post Confused about Oracle Identity Management : See me #Sangam15 Hyderabad on 21/22 Nov 2015 appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Oracle JET and ADF Faces Integration in ADF

Andrejus Baranovski - Thu, 2015-11-12 22:49
Oracle JET provides a set of UI components based on a combination of HTML and JavaScript. ADF Faces is an Ajax-enabled rich JavaServer Faces component framework that uses JavaScript to render client-side components, implement rich component functionality, validate user input and convert user input data. Although it is not officially documented, Oracle JET components can obviously be integrated into JSF pages implemented with ADF Faces. I think such integration brings many benefits to ADF Faces. Use cases where performance is critically important can be implemented with Oracle JET client components, while rich functionality can be implemented with ADF Faces, all in the same ADF application. I probably would not create dependencies between Oracle JET and ADF Faces components rendered on the same page: Oracle JET should render data coming from REST services, while ADF Faces should work with data obtained from the ADF Bindings layer.

I have managed to implement an Oracle JET component and an ADF Faces UI on the same physical ADF page. The Oracle JET group renders an input text field; the user types text and it appears in the output below (the use case is described here - Getting Started Quickly without the JET QuickStart). The ADF Faces UI group renders a regular ADF data entry form with navigation and save/cancel buttons. It all works together on the same page:

Download the sample application implemented with JDeveloper 12.2.1 - I did not delete the Oracle JET libraries from the sample application, so you can download it and run it directly. The download size is a drawback: Oracle JET is part of the application and occupies around 11 MB.

Some text is entered into the Oracle JET field; on Enter it is copied into the output below:

ADF also works as expected; for example, I can change the Salary value and commit the change with the Save button:

ADF validation also works fine: I remove the value for a mandatory field and try to save, and the required-field validation message pops up:

Take a look at the ADF application structure with the Oracle JET elements inside. Basically I just copied the Oracle JET files into the ADF ViewController public_html folder:

There is an Oracle JET JavaScript file, where a function is implemented to handle text entry and output update - a pretty simple one:

Here comes the most important part - the ADF page. Instead of using an HTML page to implement Oracle JET, I'm using a regular ADF page based on JSF and ADF Faces. The Oracle JET CSS and the RequireJS script for Oracle JET are defined at the top of the page, just above the form - this ensures Oracle JET is initialised correctly:

The Oracle JET UI (input text together with output text) is defined inside an ADF Faces Panel Header with an HTML div. This is how Oracle JET is rendered inside the ADF page, using HTML:

Stay tuned and I will be posting more use cases on Oracle JET integration topic.

Oracle OpenWorld 2015 Recap – The Forecast Includes Clouds and a Better WebCenter User Experience

Oracle OpenWorld 2015 wrapped 2 weeks ago. For those of you unable to attend, it was sunny all week with temperatures in the 70s, yet clouds were everywhere. My apologies for the pun, but you could have easily not noticed the blue skies and bright sun as The Cloud was really everywhere. Attendees even had the chance to immerse themselves in the cloud with an interactive display consisting of a few thousand plastic balls you could jump into and roll around in.

Sadly I wasn’t able to partake in this fun, but I was able to attend most of the keynotes, so I got pretty well immersed myself. The Cloud was the theme of the keynotes, with Oracle making it very clear that the future of their applications, middleware, and database technologies are in the cloud – as part of their software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS) offerings. In fact, the future is really now as Oracle announced they have over 1,300 customers already using their Oracle ERP Cloud.

All of this is very exciting from an IT and technology perspective, as the cloud makes it easier to innovate and deploy home-grown or 3rd-party applications across business functions. But what about Oracle WebCenter? Can that innovation and deployment flexibility be extended to Oracle WebCenter in the cloud? Not yet, or not entirely. WebCenter is currently in a hybrid mode when it comes to the cloud. WebCenter customers will continue to manage their content on premise, but use Oracle Documents Cloud Service for file syncing and sharing in the cloud. Despite this, Oracle did have some exciting announcements regarding Oracle WebCenter 12c, which was released on October 26th. Here are some highlights:

  • User Experience (UX) is a major priority for Oracle WebCenter. UX has been a problem area for WebCenter for many years, but a lot of enhancements have been made that customers should be excited about. I saw a lot of these demonstrated during the WebCenter Portal Strategy and Vision session, and I came away impressed with the improvements they have made with contribution, publishing, and mobility.

    Specific examples include a new inline editor (WYSIWYG) entirely written in Oracle’s Application Development Framework (ADF), which will make it easier for contributors to add content to the portal in the context of the page. Overall, the page composer is just more intuitive, and the click stream to add, edit, and publish content on the portal was easy to follow during the demonstration. Lastly, the page templates are now mobile responsive, meaning they will adjust to the form factor of the device (desktop, tablet, or smartphone) being used to view the portal.

  • WebCenter Content also received some user interface improvements. This includes a new image viewer featuring annotations so users can mark up images. And speaking of images, the imaging server has been merged with the content server.
  • And for the front-end developers out there, Oracle was heavily promoting their new JavaScript framework called JET. If you like to use frameworks like Bootstrap, you should check out JET’s component library. JET will be Oracle’s framework of choice for the cloud and its applications going forward.

Fishbowl Solutions will be sharing more about what we learned regarding Oracle WebCenter 12c, Oracle’s cloud offerings, and Oracle’s front-end design and development technologies during a webinar in early December. Check back to this blog or the Fishbowl website for more details soon.

More information on Oracle WebCenter 12c can be found on the Oracle WebCenter blog and within this press release.

For more information on Oracle’s cloud offerings, see this Forbes article.

The post Oracle OpenWorld 2015 Recap – The Forecast Includes Clouds and a Better WebCenter User Experience appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Data To Back Up Concerns Of Textbook Expenditures By First-Generation Students

Michael Feldstein - Thu, 2015-11-12 15:49

By Phil Hill

David Wiley has added to the conversation[1] over use of data on college textbook pricing and student spending patterns with “The Practical Cost of Textbooks”. The key argument is to go beyond prices and spending and look at the most direct measure: asking students themselves how textbook costs have impacted them. He then looks at the Florida Virtual Campus surveys (also included in my post), concluding:

What impact does the cost of textbooks have on students? Textbook costs cause students to occasionally or frequently take fewer courses (35% of students), to drop or withdraw from courses (24%), and to earn either poor or failing grades (26%). Regardless of whether you have historically preferred the College Board number or the student survey number, a third fact that is beyond dispute is that surveys of students indicate that the cost of textbooks negatively impacts their learning (grades) and negatively impacts their time to graduation (drops, withdraws, and credits).

And yes, we need to do something about it.

Amen. With over 18,000 students surveyed, the FVC surveys are quite important and should be on everyone’s radar.

More Out Of Data

I strongly feel that this type of discussion (as well as media quotes, policy, and legislation) should use the best data available, describe that data accurately, and ask for more data where there are holes. And it often takes multiple views into multiple data sources to get a complete picture. For the College Board / IPEDS data, it is indisputable what it says, but descriptions of the source should note that it comes from financial aid offices and not from student spending or budgets. For the student spending data (Student Monitor, NACS, Cal State), the data is solid and accurately described (that I have seen), but as reported they are missing segmentation across key student demographics.

In Mike Caulfield’s post he argues that it’s all well and good that students are creatively reducing their textbook expenses by renting, borrowing, etc, but that this data can mask other important factors.

You could use skills like this to cobble together that bag of books and say *that’s* the true cost. Look what you can get your books for if you game the system right!

But to say that, you’d have to have learned nothing in the past decade about why students fail. Requiring a non-traditional student to cobble together a bag of half-priced textbooks the way a second-generation student might is setting them up for failure.

Since I have argued that the student spending data is critical and much more relevant than the College Board data, let’s explore that data further – asking for more. Richard Hershman and NACS were kind enough to agree to requests for cross-tabs on two of their questions against sector and first-generation status.

For Fall 2014, students were asked how much they spent on required course materials.

[Chart: Fall 2014 student spending on required course materials]

It turns out that Mike is right. First-generation students spend 10% more, acquire 6% fewer textbooks, and end up paying 17% more per textbook than non-first-generation students do. This data could be used as a starting point for policy that addresses this problem - explaining the problem and providing a measurement for improvement.

For Spring 2015, students were asked to list their top 3 cost concerns for college.

[Chart: Spring 2015 top 3 college cost concerns]

First-generation students are somewhat more likely to worry about course materials (41% vs 37%) than non-first-generation students, but the view across sectors is more telling. 2-year college students are much more likely to worry about course materials (50% vs 37%) than 4-year college students. Tuition is lower at 2-year schools, and fewer students live on campus or away from home. So it makes sense that course material concerns would increase in relative terms (% listing them in top 3 concerns). It also makes sense that car payments / insurance / gas would be more important.

These are real issues of equity, and accurate understanding of multiple data sources is more likely to lead to effective policy decisions than using single sources. And you get that data by exploring it further and seeing where it takes you.

  1. My initial post, Mike Caulfield's response, Bracken Mosbacker's response, my response to Mike, Mike's follow-up

The post Data To Back Up Concerns Of Textbook Expenditures By First-Generation Students appeared first on e-Literate.

Oracle Priority Support Infogram for 12-NOV-2015

Oracle Infogram - Thu, 2015-11-12 15:22

Another OpenWorld is in the books, and now it’s time to absorb and pursue the material covered. Here’s a good place to get started: Continue Learning Post-Oracle OpenWorld
Switch off "_rowsets_enabled" in Oracle Database 12c, from Upgrade your Database - NOW!
The complete guide to tuning the appearance of NetBeans, from WebLogic Partner Community EMEA.
From the same source:  Showing HashMap values in af:table using af:iterator
Security Alert CVE-2015-4852 was released on November 10th, 2015.

This vulnerability, which involves the Apache Commons library and Oracle WebLogic Server, has received a CVSS Base Score of 7.5.

Due to the severity of CVE-2015-4852, Oracle strongly recommends applying mitigation steps and patches as soon as available.

The Security Alert Advisory for CVE-2015-4852 is the starting point for relevant information. This Security Alert provides mitigation recommendations to be implemented while awaiting the release of Oracle WebLogic Server patches. It includes links to other important documents that provide a list of affected products and the patch availability information. It is essential to review the Security Alert supporting documentation referenced in the Advisory before applying patches or mitigation instructions.

The Security Alert Advisory is available at the following location:

Mitigation instructions are available at:

WebLogic Server Patch Availability information will be updated at:

All Oracle Critical Patch Updates and Security Alerts are available on the Oracle Technology Network at:

SPARC and Solaris
Virtual HBA in Oracle VM Server for SPARC, from Virtually All The Time.
Updating an Oracle Solaris 8 system, from Ops Center.
SPARC M7 Software In Silicon - Useful Webinar, from Notes from the Main Sequence.
Last login tracking in pam_unix_session, from The Observatory.
Fusion Middleware 12c – Selective tracing, from SOA & BPM Partner Community Blog.
When is the next Java update?, from Java Platform Group, Product Management blog.
NetBeans IDE 8.1 Plugin: Entity Expander, from Geertjan’s Blog.
Oracle Retail
Announcing the New Oracle Retail Reference Library Repository, from Oracle Retail Documentation.
From the Oracle E-Business Suite Support blog:
Webcast: EAM 12.2.5 Enhancements, Nov 19, 2015 9:00 AM MT
Webcast: Item Orderability Functionality in OM, Nov 17, 2015 9:00 AM MT
Webcast: Actual Costing and Create Accounting In OPM, Nov 16, 2015 9:00 AM MT
Do you Know My Oracle Support Communities?
1099 Tax Reporting Patches for 2015 Are Now Available
From the Oracle E-Business Suite Technology blog:

Quarterly EBS Upgrade Recommendations: November 2015 Edition

OBIEE 11g and Essbase – Faking Federation Using the GoURL

Rittman Mead Consulting - Thu, 2015-11-12 14:56

This blog post addresses what happens when we can't take advantage of the Admin tool's powerful vertical federation capabilities when integrating relational stars and Essbase cubes. In the Admin tool (synonymously referred to as the RPD), vertical federation is the process of integrating an aggregate data source, in this case Essbase, with a detail level source from a data mart. This technique not only has the ability to increase query efficiency and decrease query time, it also has the added benefit of bringing together two powerful and dynamic reporting tools. But like most things, there is a pretty big caveat to this approach. Before I jump into what that is, some housework.

To start, let's make sure things don't get lost in translation when going back and forth between Essbase and OBIEE jargon. In Essbase speak, dimensions can be thought of as tables in a relational structure, Essbase generations can be thought of as columns in each table, and members are the values in each column.

Housework done, now the caveat. Often, dimensions in Essbase cubes are built in such a way as to not neatly support federation; that is, they are arranged so as to have an uneven number of generations relative to their corresponding relational dimension. It should be noted that while federation is possible with a ragged hierarchical structure, it can get kind of messy, essentially ending up in a final product that doesn't really look like something an Essbase-centric user community would readily and eagerly adopt.

So what, then, can we do when federation is out of the question? Let's frame the solution in the form of a not-atypical client scenario. Say we've got a requirement from a client, a large financial institution, to bring together the Essbase cubes they've used thus far for their standardized reporting (balance sheets, income statements and the like) with their relational source, in order to drill to the account detail behind the numbers they're seeing on those reports. They've got a pretty large user base that's fairly entrenched and happy with Smart View and Excel for getting what they want from their cubes. And why shouldn't they be? OBIEE simply can't support this level of functionality when reporting on an Essbase source, in most cases. On top of these pretty big user adoption barriers to an OBIEE solution, we've also got technology limitations to contend with.

So what are our options when faced with this dilemma? How can we wow these skeptical users with near seamless functionality between sources? The secret lies with URL Action Links! And while this solution is great for going from summary level data in Essbase to its relational counterpart, it is also a great way to simply pass values from one subject area to another. There are definitely some tricks to set this up, but more on those later. Read on.

The Scenario

In order to best demonstrate this solution, let's set up a dashboard with two pages, one for each report, and a corresponding dashboard prompt. The primary, source report, out of Essbase, will be something that could easily resemble a typical financial report, at least in structure. From this high-level chart, or similar summary level analysis, we'll be able to drill to a detail report, out of a relational source, to identify the drivers behind any figures present on the analysis. In this example, we're going to use the Sample App Sample Essbase subject area to go to the equivalent relational area, Sample Sales. Yes, you could federate these two, as they've done in Sample App; however, they'll serve well to demonstrate how the following concept could work for financial reporting against ragged or parent-child structures. Values for Product Type, in the following instance, could just as well be the descendants or children of a specific account, for example. As well, there is no equivalent relational subject area to use for the SampleApp Essbase GL subject area. In the example below, we have a summary, month-level pivot table giving us a monthly sales trend. The user can prompt on Year and Customer Segment through a dashboard prompt, but as you'll see, this could easily be any number of prompts for your given scenario.

Monthly Trend Summary:

Solution 1:

In the sales trend example above, we are going to enable our user to click on a value for a revenue figure and then navigate to a detail report that shows products sold for the month by date. Again, this all must be done while passing any chosen parameters from both the dashboard prompt and analysis along to the detail analysis.

Proof of Concept

First, let’s start with the guts of the report example above. As you can see, there is quite a bit more under the hood than meets the eye. Let’s go over the approach piece by piece to help build a more thorough understanding of the method.

Step 1: Include the Columns!

So the idea here is that we want to pass any and all dimensional information associated with the revenue figure that we pick to a detail level report that will be filtered on the set of parameters at the chosen intersection. We can hide these columns later, so your report won't be a mess. I'll add here that you might want to set any prompted values to be equal to the presentation variable on its respective dashboard prompt, with a default value set, as seen below. This will help to make the report digestible on the compound layout. The following picture shows the prompted values used to drive our summary report on Year and Customer Segment. You can do this in the filters pane on the criteria tab with the following syntax:
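As a sketch, assuming the dashboard prompt sets presentation variables named pv_Year and pv_Segment (the variable names and the segment column below are placeholders; the year column is the one used later in this post), the filters would read something like:

"Time"."T05 Per Name Year" is equal to @{pv_Year}{2012}
"Customers"."Customer Segment" is equal to @{pv_Segment}{Active Singles}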


                            All column values we want to pass need to be represented on the report:


                           Values that will be passed to detail report (in this case, BizTech, Communication, Active Singles, 2012, and 2012 / 11):

Step 2: More Columns!

In addition to the columns that comprise the report, we need to add a second copy of every column already on the report. In the pictures above, you can see that these are the columns titled with the 'URL' prefix. In the column editor, concatenate quotes to the column values by attaching the following string (this is a single quote followed by a double quote and another single quote, with NO spaces between them):

'"' || "Table"."Column Name" || '"'

While this step may seem superfluous, you'll see a bit later that it is necessary to successfully pass our column values through our URL Action Links. After you've created the custom columns, just group them along with their counterparts in the report, as in the pics above.

Step 3: An Approach to Handling Hierarchies

In the previous pictures, you can see the Products hierarchy that comprises the rows of the report. In order to pass any value from the hierarchy as well as its members, we are going to have to include its respective generations in the rows as well. For our example, we're going to use Brand, LOB, and Product Type. In this way, a user can select any sales value and have all three of these values passed as filter parameters to the detail analysis through a URL. You'll notice that we haven't given these columns a counterpart wrapped in quotes, as you were told to do previously. This is quite on purpose, as we'll see later. These columns will provide another example of how to pass values without having to implement a second column just to wrap the value in quotes.


When first placing the hierarchy on your analysis and expanding it to where you’d like it for the sake of the report, you can simply select all the column values, right click and then select ‘Keep Only’. This will establish a selection step under the Products Hierarchy to ensure that the report always opens to the specified structure from now on. So, that’s good for now, let’s get to the magic of this approach.


Step 4. Set up the Action Link

In this case, we're going to 'drill' off of the Sales column in our table, but we could really 'drill' off of anything, as you'll see. So, pop open the Interaction tab for the column and select Action Links as the primary interaction. Edit it as follows (see the URL procedure below). It used to be that we could do this via the 'P' parameters; however, that method seems to be mostly deprecated in favor of the col/val method, which we shall use below.

URL Procedure – Server URL*
Portal&Path=@{1} – path to dashboard
&Page=@{2} – dashboard page
&Action=@{3} – action to perform, in this case navigate (there are others)
&col1=@{4} – column from target analysis we wish to manipulate (our sales detail analysis)
&val1=@{5} – column from source analysis with which we are going to pass a filter parameter to target
&val4="@{11}" – we will discuss these quoted parameters later on

*Note that this value can be made into a variable in order to be moved to different environments (DEV/TEST, etc…) while maintaining link integrity
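Assembled, the action link URL ends up looking something like the sketch below. The analytics entry point at the front is the usual saw.dll address (server and port are placeholders), and the @{n} tokens are the variables defined in the Edit Action dialogue; the parameter names are exactly those listed above:

http://<server>:<port>/analytics/saw.dll?Portal&Path=@{1}&Page=@{2}&Action=@{3}&col1=@{4}&val1=@{5}&val4="@{11}"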

The picture above details how to set up the URL link as described above. The col1 value is the column from the target analysis we want to filter using the value (val1) from our source. Be sure to qualify this column from the subject area from which it originates, in this case “A – Sample Sales”.

Ex: “A – Sample Sales”.”Time”.”T05 Per Name Year”

Val1 (these parameters exist in 'sets') is the column from our source analysis we want to use to filter the target analysis. This is where our custom, quoted columns come into play. Instead of using the original column from our analysis, we're going to use its quoted counterpart. This ensures that any values passed through the URL are enclosed in quotes, as is required by the URL. Note that we're not using a value parameter in this case, but a column instead (the dropdown to the left of the text box).

Ex: '"' || "Time"."T05 Per Name Year" || '"'

You can proceed this way to pass as many values as you’d like to your detail analysis, with this coln, valn method. Again, just be sure that your columns are included in the source analysis or the values won’t get ported over. Once you’ve got all your columns and values set up, go ahead and enter them into the URL field in the Edit Action dialogue box, as above. Make sure you reference your variables using the proper syntax (similar to a presentation variable w/ an @ sign):

Ex: col1=@{4} – ‘4’ being the variable name (note that these can be named most anything)

Quoting Parameters

As an alternative to including an extra copy of each column for the sake of passing quoted column values, we can instead put quotes around the parameter in our URL, as in the example above. The limitation of this method, however, is that you can only pass a single value, as in Year, for example. In later posts, we'll address how to handle passing multiple values, as you might through a dashboard prompt.

Step 5. Set Up the Detail Analysis

For our detail analysis, we're going to set it up in much the same way as our summary. That is, we need to include the columns we want to filter on in the target report as well. Unfortunately, our target report won't simply pick them up as filters, as you might with the filters pane, without including them on the actual analysis. Again, any columns we don't want visible to a user can be hidden. Below, we simply want to see the Calendar Date, Product, and Revenue, but filtered by all of our source analysis columns.

In the criteria view for our target, detail analysis, we need to make sure that we’re also setting any filtered columns to ‘is prompted’. This will ensure that our target analysis listens to any filter parameters passed through the URL from our source, summary analysis. As a last step, we must again fully qualify our filters, as in the picture below.

This picture shows our Year 'is prompted' filter on our target, detail analysis. Note that this column is also present, albeit hidden, on this report, where it acts as a filter on the analysis. It is being 'prompted' not by a dashboard prompt, in this instance, but by our source, summary analysis.

Step 6. Testing it All Out

Now that we’ve got all the pieces of the puzzle together, let’s see if it works! To QA this thing, let’s put a filter object on the target, detail analysis to make sure that the report is picking up on any values passed. So if we click on a sales value, we should be taken to the target analysis and see that all the parameters we set up were passed. The picture below confirms this!


Hopefully this can be one more trick to keep in the tool belt when faced with a similar scenario. If you have any hiccups in your implementation of this solution or other questions, please feel free to respond to this post. Stay tuned for additional articles related to this topic that go much more in depth. How do you handle passing multiple column values? How do I keep my report query time low with all those extra columns? How do I pass values using the presentation variable syntax? Can I use the Evaluate function to extract the descendants of a filtered column?



The post OBIEE 11g and Essbase – Faking Federation Using the GoURL appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Little Things Doth Crabby Make – Part XVIV: Enterprise Manager 12c Cloud Control Install Problem.

Kevin Closson - Thu, 2015-11-12 14:42

This is a short post to help out any possible “googlers” looking for an answer to why their EM Cloud Control install is failing in the make phase with the error shown below.

Note, this EM install was taking place on an Oracle Linux 7.1 host.

The following snippet shows the text that was displayed in the dialogue box when the error was hit:

INFO: 11/12/15 12:10:37 PM PST: ----------------------------------
INFO: 11/12/15 12:10:37 PM PST: Exception thrown from action: make
Exception Name: MakefileException
Exception String: Error in invoking target 'install' of makefile '/home/oracle/app/oracle/oms12cr5/Oracle_WT/webcache/lib/'. See '/home/oracle/oraInventory/logs/cloneActions2015-11-12_12-10-18-PM.log' for details.
Exception Severity: 1
INFO: 11/12/15 12:10:37 PM PST: POPUP WARNING:Error in invoking target 'install' of makefile '/home/oracle/app/oracle/oms12cr5/Oracle_WT/webcache/lib/'. See '/home/oracle/oraInventory/logs/cloneActions2015-11-12_12-10-18-PM.log' for details.

Click "Retry" to try again.
Click "Ignore" to ignore this error and go on.
Click "Cancel" to stop this installation.
INFO: 11/12/15 12:20:14 PM PST: The output of this make operation is also available at: '/home/oracle/app/oracle/oms12cr5/Oracle_WT/install/make.log'

The following shows the simple fix:

$ diff ./app/oracle/oms12cr5/Oracle_WT/lib/sysliblist.orig ./app/oracle/oms12cr5/Oracle_WT/lib/sysliblist
< -ldl -lm -lpthread -lnsl -lirc -lipgo
---
> -ldl -lm -lpthread -lnsl -lirc -lipgo -ldms2

So if this error hath made at least one googler less crabby I’ll consider this installment in the Little Things Doth Crabby Make series all worth it.

Filed under: oracle


Jonathan Lewis - Thu, 2015-11-12 14:01

A short video that I did at the OTN lounge at RMOUG a couple of years ago has just been posted on YouTube. It’s about the improvements that appear in histograms in 12c. I’ll move this link into a more suitable posting in the near future.


How to Keep Your Business Process Looking Simple

Jan Kettenis - Thu, 2015-11-12 12:32
There are two key words in Business Process Management Notation (and Language), or BPMN for short, that very often seem to be missed. The first is "business", the second "management". In this posting I will discuss the significance of the first, and how you are in control of that.

In BPMN the word business is not meant to express that it is "just" about modeling business processes. The idea is also that these models should be understandable by, or even created by, the business. Now one can argue that with respect to the latter BPMN does not always seem to deliver on the promise, or at least not for every business. But I know of a few cases where the business analyst creates the non-technical versions of the model (level 1 and 2, as Bruce Silver would call them), and I know of a significant number of cases where the business, or at least the analyst, is able to understand BPMN process models. That is to say, if these models have not been cluttered with technical details.

Unfortunately this cluttering happens quicker than you wish, and too often the executable process models are almost beyond comprehension for the business, while there is no good reason for that. That is too bad, because you then miss the opportunity to have the executable process model validated by the business. Observing how process modeling is done on some of my projects, I unfortunately have to conclude that quite a few people are not aware of the problem or don't know how to prevent it. As I have not (yet) found any reference that gives a comprehensive overview of the options offered by the Oracle BPM Suite that can help you out, I discuss them in the following.

Embedded Sub-Process

The embedded sub-process is one of the options that most people are aware of, and one that is (generally) reasonably well used. In the example below an embedded sub-process named "Store Order" contains a script activity "Create Message Header" that constructs the header for the message to be used in the service call activity "Save Order Data". By simply collapsing the embedded sub-process, the technical details of how an order is stored can be hidden from the business, which typically does not want to know that a header needs to be created. One could argue they should not even be interested in the fact that this is done synchronously (using a service activity) instead of asynchronously (using a send and receive activity), which is also conveniently hidden by the embedded sub-process.

Apart from hiding technical details, embedded sub-processes can also be used to define a scope. This can be done from a business perspective (for example, to delimit a set of activities that might be repeated or for which multiple instances should be handled in parallel), but also from a technical perspective (for example, as a scope for temporary variables or exception handling).

The issue I often see with embedded sub-processes in practice is that developers very often do not bother to collapse them, still exposing technical details to the business.

One should be aware of a couple of aspects concerning embedded sub-processes. The first is that they are not reusable (meaning you cannot use them elsewhere in the same or any other process model). The second is that they come with a little overhead from an audit perspective, as every embedded sub-process results in 2 extra entries (one for the start and one for the end of it).

Reusable Sub-process

A reusable sub-process is created as a separate process. The only things that distinguish it from other types of processes are that it has a none start as well as a none end event, and that it cannot have an initiator activity. As the name already suggests, a reusable sub-process is never started directly, but only by calling it from some parent process. This is done with the Call activity.

Going back to the step in the example where we want to save order data: if we assume the order has to be updated more than once, that makes it a typical candidate for reuse. In the following example a reusable "Order Storage" sub-process has been created that contains this functionality. It has been made a little more complex by including a notification activity that notifies the sales representative every time an update of the order has taken place.

The reusable sub-process has access to the /project/ variables (by value), and its own /process/ variables. In other words, the reusable sub-process has access to the "order" project variable. A choice has been made to pass on the email address of the one that has been notified, as an argument. In the reusable sub-process this email address is stored in a (local) "email" process variable.

The choice to define a variable at project versus process level should be made carefully. Project variables are global variables with the following properties:
  • In case of functionality that is executed in parallel, one should be careful that the parallel threads do not make conflicting changes to the same project variable.
  • Simple type project variables are mapped to protected attributes (also known as mapped attributes or flex field), of which there is a limited number (for example 20 protected text attributes). Their values are stored in separated columns (instead of part of the process payload).
  • The lifespan of a project variable is from its initialization up to the end of the (main) process instance.
Like an embedded sub-process, a reusable sub-process is executed in the same thread. A reusable sub-process is only reusable within the same BPM project (composite) and cannot be shared with other projects. A reusable sub-process adds a little more auditing overhead than the embedded sub-process does.

Finally, up to version 12.1.2 a Call activity in a BPM project makes it incompatible with any other revision, meaning that you cannot migrate instances. Period. Not even when you deploy the same revision without changing any bit of your code. For most customers I work with, this is a major limitation, and some therefore choose not to use reusable sub-processes.

Process As a Service

The next alternative to a reusable sub-process is the process-as-a-service, which means that you start it with a message start event or send activity. Any response is returned by a message end event or receive activity. As long as the process-as-a-service is part of the same BPM project (composite) it can make use of the project variables, but only by definition, not by value. So all data has to be mapped to and from the process. You can put the process in the same composite, or put it in a composite of its own. The criterion for doing the latter would be reuse across composites. When it is in a separate composite, you cannot reuse the business objects, nor the project variable definitions.

From a functional perspective, the process-as-a-service is equivalent to a reusable sub-process. From a technical perspective it requires more work if you implement it in a separate composite, and it will add extra overhead to auditing (not only BPM auditing, but also every instance will have its own entry in the COMPOSITE_INSTANCE and CUBE_INSTANCE tables). In 11g you will also have to create some custom mechanism to propagate cancellation of the parent instance to child instances, but in 12c this is automatically done (see also

Detail Activity
Since 12c you can "detail" an activity. With that you can hide logic that is tightly related to an activity but has to be done using an activity of its own. From the outside a detailed activity looks like any other activity, and keeps the original icon associated with it. You can see that it is detailed by a + sign at the bottom, very much like an embedded sub-process. And basically that is what it is: a specialized embedded activity. You can even have local variables, and in the structure pane it is represented as an embedded sub-process. Again, to keep the business process a "business" process you should try not to get over-excited and put all sorts of logic in it that really belongs somewhere else. Use it only for logic that is tightly coupled to the main activity but is of no real interest to the business.

In the following example I have implemented a call to some service that has to happen right after the user activity. It is a technical service call that we don't want to bother the business with, as it concerns a call to a service to confirm the order to the customer. As far as the business is concerned, this is an integral part of the Contact Provider activity, and they should not care if that service is called from the UI or from the process for that matter.

Hope you can make good use of this, and let me know if you have any other suggestion!!

Developing an Application with PL/SQL, the Formspider Way

Gerger Consulting - Thu, 2015-11-12 03:05
On November 24, attend our webinar to watch Professional Consultant Nicholas Mwaura as he talks about his experience with Formspider and shows you how you can build first class applications to impress your prospective customers and end users.

Watch a short demo of the application Nicholas will show you how to build:

During the webinar the following topics will be discussed:
  • The importance of Formspider for PL/SQL developers and Oracle customers
  • Why Oracle Forms applications are very easy to migrate to Formspider
  • Introduction to the demo application
  • Building the demo application: How to design the UI, how to work with windows, panels, tabs, trees and various other Formspider components, working with LOV’s, application structure in the database. 
At the end of the webinar, the attendees will get priority access to the source code of the demo application.

During the webinar, each attendee will receive a discount offer for Formspider developer licenses! :-)
Sign up to the webinar now!
Categories: Development

Cloud Registered Level in Oracle Partner Network

As you may know, we announced a new entry level into OPN a few weeks ago at OOW15. We are now able to take partner applications via Oracle Partner Store. This means effective today, companies not...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Partner Webcast – Oracle Private Cloud Solutions for IaaS and PaaS

Even though cloud computing could make a significant difference to your business, relegating governance and control to someone else simply isn't an option. With enterprise private cloud...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle ASM Rebalance – Turn it up. To 11?

Pythian Group - Wed, 2015-11-11 14:01


If you’ve ever seen or heard of the movie This is Spinal Tap then you have likely heard the phrase Turn it up to 11.

Why bring this up?

When ASM was introduced as a method for configuring storage for Oracle, one of the features was the ability to rebalance the data across all disks when disks were added or replaced.  The value used to control how aggressively Oracle rebalances the disks is the REBALANCE POWER. And yes, the maximum value for rebalancing was 11, as an homage to the movie.

Here is an example of a command to only rebalance a disk group:

 alter diskgroup data rebalance power 11; 

That is rather straightforward, so why blog about it?

The reason is that the maximum value for REBALANCE POWER changed in a later Oracle release, as per the documentation for the ASM_POWER_LIMIT parameter.

From 11.2.0.2 on (specifically, once the disk group's compatible.asm attribute is 11.2.0.2 or higher), the maximum value is no longer 11, but 1024.
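A quick way to see which limit applies on your system is to check the instance default and the disk group compatibility on the ASM instance (a simple sketch; adjust the disk group filter as needed):

SQL> show parameter asm_power_limit

SQL> select group_number, name, value from v$asm_attribute where name = 'compatible.asm';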

I’ve asked a number of DBAs about this, and it seems that this change to the rebalance power limit is not really well known.

Why does it matter?

Imagine that an ASM diskgroup has had disks replaced, and the task took longer than expected.

Now you want to speed up the rebalance of the disk group as much as possible:

 alter diskgroup data rebalance power 11; 

Will that bit of SQL do the job?

On 10g that would be fine. But on an 11.2.0.2 or later database, that would set the POWER to just 1.07% of the maximum allowed value, having little effect on how aggressively Oracle rebalances the disks.

The correct SQL in this case would be:

 alter diskgroup data rebalance power 1024; 
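To confirm that the rebalance is actually running at the requested power, and to get a rough estimate of the time remaining, you can watch V$ASM_OPERATION on the ASM instance while it runs:

SQL> select group_number, operation, state, power, sofar, est_work, est_minutes from v$asm_operation;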

The following is a short demonstration of REBALANCE POWER on three different database versions. These examples just confirm the documented maximum values for REBALANCE POWER.


SQL> select version from v$instance;

SQL> alter diskgroup ASM_COOKED_FS rebalance power 12;
alter diskgroup ASM_COOKED_FS rebalance power 12
ERROR at line 1:
ORA-15102: invalid POWER expression

SQL> alter diskgroup ASM_COOKED_FS rebalance power 11;

Diskgroup altered.


SQL> select version from v$instance;


SQL> alter diskgroup fra rebalance power 1025;
alter diskgroup fra rebalance power 1025
ERROR at line 1:
ORA-15102: invalid POWER expression

SQL> alter diskgroup fra rebalance power 1024;

Diskgroup altered.


SQL> select version from v$instance;


SQL> alter diskgroup data rebalance power 1025;
alter diskgroup data rebalance power 1025
ERROR at line 1:
ORA-15102: invalid POWER expression

SQL> alter diskgroup data rebalance power 1024;

Diskgroup altered.


Discover more about our expertise in the world of Oracle.

Categories: DBA Blogs

Oracle Upgrade Failures due to METHOD_OPT and XDBCONFIG

Pythian Group - Wed, 2015-11-11 13:51

I recently experienced a problem, when upgrading an old Oracle database to a newer release, that had no matches in a My Oracle Support (MOS) or Google search. The problem initially presented itself during the upgrade, as the following error was reported by the upgrade script:

ERROR at line 1:
ORA-20001: invalid column name or duplicate columns/column groups/expressions
in method_opt
ORA-06512: at "SYS.UTL_RECOMP", line 865
ORA-06512: at line 4


Initially, the problem was reported in the upgrade log file for the ORACLE_OCM schema, which is not critical. However, it later caused the XDB component to become invalid and, consequently, other components that depend on XDB to become invalid as well. The error reported when trying to validate XDB was:

Warning: XDB now invalid, could not find xdbconfig


Even if not upgrading, this error could be encountered when trying to install or re-install the XDB component in an 11g database. XDB is a mandatory component as of Oracle 12c but is optional with 11g and below. Hence, it’s possible to experience this same problem if you’re trying to add the XDB component to an 11g database that didn’t already have it.


“Warning: XDB now invalid, could not find xdbconfig”

Several MOS documents already exist describing the error “Warning: XDB now invalid, could not find xdbconfig”. Those include:

  • Utlrp.sql results to “Warning: XDB Now Invalid, Could Not Find Xdbconfig” (Doc ID 1631290.1)
  • XDB Invalid after Utlrp during Activation of Extended Datatypes (Doc ID 1667689.1)
  • XDB Invalid After utl32k.sql during activation of extended datatypes (Doc ID 1667684.1)

Unfortunately, none of those applied as either the cause or the solution to the problem I encountered. Either going through the XDB installation logs or simply manually running utlrp.sql shows that the xdbconfig is missing due to the “ORA-20001: invalid column name or duplicate columns/column groups/expressions in method_opt” error.

For example:

SQL> @?/rdbms/admin/utlrp

COMP_TIMESTAMP UTLRP_BGN  2015-11-09 11:36:22

DOC>   The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC>   objects in the database. Recompilation time is proportional to the
DOC>   number of invalid objects in the database, so this command may take
DOC>   a long time to execute on a database with a large number of invalid
DOC>   objects.
DOC>   Use the following queries to track recompilation progress:
DOC>   1. Query returning the number of invalid objects remaining. This
DOC>      number should decrease with time.
DOC>         SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
DOC>   2. Query returning the number of objects compiled so far. This number
DOC>      should increase with time.
DOC>   This script automatically chooses serial or parallel recompilation
DOC>   based on the number of CPUs available (parameter cpu_count) multiplied
DOC>   by the number of threads per CPU (parameter parallel_threads_per_cpu).
DOC>   On RAC, this number is added across all RAC nodes.
DOC>   UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel
DOC>   recompilation. Jobs are created without instance affinity so that they
DOC>   can migrate across RAC nodes. Use the following queries to verify
DOC>   whether UTL_RECOMP jobs are being created and run correctly:
DOC>   1. Query showing jobs created by UTL_RECOMP
DOC>         SELECT job_name FROM dba_scheduler_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>   2. Query showing UTL_RECOMP jobs that are running
DOC>         SELECT job_name FROM dba_scheduler_running_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
ERROR at line 1:
ORA-20001: invalid column name or duplicate columns/column groups/expressions
in method_opt
ORA-06512: at "SYS.UTL_RECOMP", line 865
ORA-06512: at line 4

COMP_TIMESTAMP UTLRP_END  2015-11-09 11:36:23

DOC> The following query reports the number of objects that have compiled
DOC> with errors.
DOC> If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.


DOC> The following query reports the number of errors caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.


Function created.

PL/SQL procedure successfully completed.

Function dropped.

Warning: XDB now invalid, could not find xdbconfig
ORDIM registered 0 XML schemas.
The following XML schemas are not registered:

PL/SQL procedure successfully completed.



Hence the ORA-20001 error is the true cause of the XDB problem.


“ORA-20001: Invalid column name or duplicate columns/column groups/expressions in method_opt”

Searching My Oracle Support (MOS) for this error leads to the following notes:

  • Gather Table Statistics Fails With ORA-20001 ORA-06512 On “invalid Column Name” (Doc ID 1668579.1).
  • 11i – 12 Gather Schema Statistics fails with Ora-20001 errors after 11G database Upgrade (Doc ID 781813.1).
  • Gather Schema Statistics Fails With Error For APPLSYS Schema (Doc ID 1393184.1).
  • Performance Issue Noted in Trading Partner Field of Invoice Workbench (Doc ID 1343489.1).

Unfortunately, those are all related to specific tables from the Oracle Applications Technology Stack, Oracle EBS, or Oracle Payables – none of them were applicable here, since the application in my case was home grown.

Hence, MOS and Google searches returned no relevant results.


The Root Cause & Solution

The root cause of this problem was the METHOD_OPT parameter of DBMS_STATS.

The METHOD_OPT parameter is related to how optimizer statistic histograms are collected for columns. METHOD_OPT is set using DBMS_STATS.SET_PARAM and can be queried through DBMS_STATS.GET_PARAM or directly from the underlying base table SYS.OPTSTAT_HIST_CONTROL$.

For example:





SQL> select sname, spare4 from SYS.OPTSTAT_HIST_CONTROL$ where sname = 'METHOD_OPT';

SNAME                          SPARE4
------------------------------ ----------------------------------------



The actual root cause of the ORA-20001 error and all of the subsequent failures and invalid components is that in the problematic database, the METHOD_OPT was set to the rarely used and outdated setting of “FOR COLUMNS ID SIZE 1”. From the database that experienced this issue:





The “FOR COLUMNS ID SIZE 1” setting was sometimes used in older versions of Oracle to prevent histogram buckets from being collected for primary keys and to provide plan stability through statistic changes. However, it should not be used for modern 11g or 12c databases. In fact, it’s not even settable through the DBMS_STATS package after Oracle 10g. Executing it against a newer database will give:

SQL> exec dbms_stats.set_param('METHOD_OPT','FOR COLUMNS ID SIZE 1');
BEGIN dbms_stats.set_param('METHOD_OPT','FOR COLUMNS ID SIZE 1'); END;

ERROR at line 1:
ORA-20001: method_opt should follow the syntax "[FOR ALL [INDEXED|HIDDEN]
COLUMNS [size_caluse]]" when gathering statistics on a group of tables
ORA-06512: at "SYS.DBMS_STATS", line 13179
ORA-06512: at "SYS.DBMS_STATS", line 13268
ORA-06512: at "SYS.DBMS_STATS", line 13643
ORA-06512: at "SYS.DBMS_STATS", line 31462
ORA-06512: at line 1


It can, however, still be set by directly updating SYS.OPTSTAT_HIST_CONTROL$, which is definitely NOT recommended.

And of course this setting can be present in an 11g database that was upgraded from an older version such as a 10g release.

Reverting this parameter to “FOR ALL COLUMNS SIZE AUTO” resolved the ORA-20001 error from UTL_RECOMP, allowing the XDB component to validate and become VALID in the registry, and subsequently all of the other components that depend on XDB.



If upgrading an older database to a newer release (to remain on a supported version), it is prudent to check the setting of the METHOD_OPT parameter of the DBMS_STATS package. This isn’t mentioned in any of the pre-upgrade documents or checklists and isn’t caught by even the most recent version of Oracle’s Database Pre-Upgrade Utility (MOS Doc ID 884522.1) or the DB Upgrade/Migrate Diagnostic Information (MOS Doc ID 556610.1).

The check and solution are simple and should be incorporated into your own pre-upgrade procedure:
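
A minimal sketch of that check and fix, using the same DBMS_STATS calls discussed above (run it as a suitably privileged user before starting the upgrade):

 -- Check the current database-wide default for METHOD_OPT
 select dbms_stats.get_param('METHOD_OPT') from dual;

 -- If it returns something like 'FOR COLUMNS ID SIZE 1', revert it to the default
 exec dbms_stats.set_param('METHOD_OPT','FOR ALL COLUMNS SIZE AUTO');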









Discover more about our expertise in the world of Oracle

Categories: DBA Blogs

Dynamic MySQL Credentials with Vault

Pythian Group - Wed, 2015-11-11 13:20

Recently I have been looking at the Vault project as a means to manage secrets for applications and end-users. One of the use cases that immediately drew my attention was the ability to create dynamic role-based MySQL credentials.

Why Dynamic MySQL Credentials?

There are a few reasons why dynamic credentials would be beneficial, all of which can be handled by Vault, including:

  • The database environment is too large to manage individual users.
  • A need to authenticate on an external service, such as LDAP or GitHub organization.
  • Provide credentials to external resources, such as auditors or outside consultants that automatically expire.
  • Compliance requirements for strict audit logs for database access.

A High-Level Overview of Vault

Vault is a fairly new project by HashiCorp, the folks behind projects such as Vagrant and Consul. The goal is to decouple the handling of secrets from applications to enforce access control, encryption standards, and create an audit trail.

There are several components to Vault:

  • Authentication such as LDAP, GitHub or custom app-id.
  • Authorization using path-based ACL policies.
  • Encrypted storage backend using one of several options such as Consul, etcd, Zookeeper, S3, or MySQL.
  • Secret backends that define how secrets are stored or generated. Options include MySQL and AWS IAM credentials, among others.
  • Audit logging of token generation and secrets access.

To begin working with a Vault deployment, Vault must be initialized and unsealed. Unsealing Vault is a very important aspect of Vault’s security model, but is beyond the scope of this post.

After Vault is unsealed, users can begin interacting with Vault either by a CLI tool or HTTP API. Users, whether they are human or applications, are authenticated based on tokens. These tokens can be revoked at any time, be given a time-to-live (TTL) or a specific number of uses.

Authentication methods are associated with access control list (ACL) policies to define which secrets the token will have access to.

Certain secret backends can generate actual credentials upon request. This includes the MySQL secret backend, which creates users with specific grants based on their role and passes only the generated user credentials to the requesting token.

Creating Dynamic Credentials with Vault

Let’s generate a read-only MySQL credential using Vault. To follow along with this exercise, you will need to install Docker Toolbox and clone my vault repository. The docker-compose file will look like this:
View the code on Gist.
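
The gist boils down to a vault container alongside a mysql container; roughly along these lines (the image names, tags, ports and root password here are illustrative, not necessarily what the gist pins):

 vault:
   image: vault:latest        # illustrative image; the original gist may pin a different one
   cap_add:
     - IPC_LOCK               # lets Vault mlock memory so secrets are not swapped to disk
   ports:
     - "8200:8200"
 mysql:
   image: mysql:5.6           # illustrative MySQL version
   environment:
     MYSQL_ROOT_PASSWORD: root   # root credentials Vault will later use to manage users
   ports:
     - "3306:3306"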

Next, we will bring up the vault and mysql containers. 
View the code on Gist.

The script will initialize and unseal the vault, then set the environment variable ‘VAULT_TOKEN’ that we will use later. It also creates a ‘vault’ alias that allows us to interact with the Vault CLI within the docker container. The output will look like this:
View the code on Gist.
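
Behind the scenes, the script is doing roughly the following with the Vault 0.x CLI (the unseal keys and root token shown are placeholders):

 $ vault init                        # prints the unseal keys and an initial root token
 $ vault unseal <unseal-key-1>       # repeat with enough distinct keys to reach the threshold
 $ vault unseal <unseal-key-2>
 $ vault unseal <unseal-key-3>
 $ export VAULT_TOKEN=<initial-root-token>
 $ vault status                      # Sealed: false once the threshold is met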

Now that Vault is unsealed, we can create the MySQL backend. Vault must be provided with the credentials of a MySQL user with GRANT OPTION privilege. For the purpose of our example, we will use the root user.
View the code on Gist.
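
For reference, with the Vault 0.x CLI the backend setup looks roughly like this (the connection string and grants are illustrative, and later Vault versions replaced this with the more general database backend using different paths):

 $ vault mount mysql
 $ vault write mysql/config/connection value="root:root@tcp(mysql:3306)/"
 $ vault write mysql/roles/readonly \
     sql="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}'; GRANT SELECT ON *.* TO '{{name}}'@'%';"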

With the MySQL backend configured, we can finally create our dynamic credentials. The generated credentials will be valid for 10 minutes, then expire.
View the code on Gist.
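
Reading from the role's creds endpoint is what actually creates the MySQL user. With the lease set to 10 minutes, it looks roughly like this (paths and options follow the Vault 0.x mysql backend and may differ in newer releases):

 $ vault write mysql/config/lease lease=10m lease_max=24h
 $ vault read mysql/creds/readonly
 # returns a lease_id plus a freshly generated username and password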

Final Thoughts

Overall, Vault is a promising new technology to decouple secrets from the application. Perhaps we can use it to begin to move beyond the idea that it’s OK to store credentials in environment variables.

Of course, implementing Vault does add yet another dependency that must be highly available. Care must be taken to deploy Vault in a fault-tolerant manner.

One concern I have with the current workflow of secret handling is that each GET request to /mysql/readonly creates a new user. So a busy environment could thrash the database server with CREATE USER commands, which will be cleaned up later with DROP USER commands.

The way I expected it to work is that if the same Vault token requested the same credential endpoint, Vault would return an unexpired credential instead of generating an entirely new user.

To work around this, the user must keep track of the ‘lease-id’ returned from the initial read request and renew the lease before it expires. The user can do this up until the lease_max period.
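
With the Vault 0.x CLI that renewal is a single command (the lease id below is a placeholder copied from the earlier read output):

 $ vault renew mysql/creds/readonly/<lease-id>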


Discover more about our expertise with MySQL.

Categories: DBA Blogs

ADF 12.2.1 Responsive Dashboard with Masonry Layout

Andrejus Baranovski - Wed, 2015-11-11 11:07
ADF 12.2.1 is a breakthrough in responsive UI. Never before has it been so easy to implement good looking, high performance ADF UI. The Masonry Layout component allows you to develop a responsive dashboard made from separate tiles. Tiles can be reordered, resized and even removed. The tile layout is dynamically constructed based on the available screen size.

You should see how it works; check the video based on the sample application. Tiles are reordered as I change the screen size, and I can even reorder them myself, just with drag and drop:

This is the default dashboard layout with four tiles on a wide screen:

As I resize the screen to be more narrow, tiles are reordered to occupy the available space:

With an even more narrow screen, tiles are reordered to occupy only one column:

The user can scroll up/down on a narrow screen to see all tiles:

I can move and reorder tiles manually with drag and drop. In this example I reordered the tile with employee information to be below the employees chart:

If the screen width is resized to be even more narrow, the ADF 12.2.1 responsive UI template (check my previous post about responsive UI improvements in ADF 12.2.1 - ADF 12.2.1 Responsive UI Improvements) will reorder facets and there will be a one column UI:

Implementation is simple: use the new ADF Faces component - Masonry Layout. Add any number of UI layout components (tiles) inside. Manual reordering is handled with regular ADF Faces drag and drop functionality:

Each tile must be assigned a masonry layout style class; this controls the tile dimensions:
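
To give an idea of the structure, a stripped-down page fragment might look roughly like this (the tile contents, ids and the AFMasonryTileSize style classes shown are illustrative, and the drag and drop wiring is omitted):

 <af:masonryLayout id="ml1">
   <!-- A wide tile, for example for the employees chart -->
   <af:panelBox id="pbChart" text="Employees Chart"
                styleClass="AFMasonryTileSize2x1">
     <!-- chart goes here -->
   </af:panelBox>
   <!-- A standard square tile for the employee details form -->
   <af:panelBox id="pbInfo" text="Employee Information"
                styleClass="AFMasonryTileSize1x1">
     <!-- form goes here -->
   </af:panelBox>
 </af:masonryLayout>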

You can watch the OTN ArchBeat video where I'm talking about ADF 12.2.1 and responsive UI: