
Feed aggregator

Are open source projects run like a democracy or an oligarchy?

Sean Hull - Wed, 2015-08-19 09:32
I was reading Fred Wilson’s comments recently on The Bitcoin XT Fork. In it he discussed how open source developers manage their projects. “A group of open source core developers are a democratic system.” I was surprised by this comment because I had never thought of it as democratic. Here are my thoughts… Join 28,000 … Continue reading Are open source projects run like a democracy or an oligarchy? →


Jonathan Lewis - Wed, 2015-08-19 03:02

I made a mistake a few days ago following up a question on the OTN database forum. The question was about a problem creating a hash/list composite partitioned table, and one of the respondents suggested that perhaps the problem appeared because hash/list wasn’t a legal combination.

Spot on: so I confirmed that observation and supplied a link to the official Oracle white paper that listed the combinations that were legal in 11.2 for composite partitioning.  In fact, although I was fairly sure that hash/list wasn’t legal, I had even run up a quick test to check that the attempt would fail before I’d searched online for the document.

Fortunately other people had different ideas about the original failure, and one of them supplied a link to a thread on AskTom which actually included some SQL to create a hash/list composite partitioned table. Naturally I tested it (even though it was from Tom Kyte) and obviously it worked (after all it was from Tom Kyte) – and then I spotted the syntax error in the example I had created for myself.

Trust but verify … and then …

I had fallen into two traps – and one of them was documented in my own “Trust” posting from 2006.

The white paper was dated September 2009 (Tom’s example was dated June 2013) and as it says in my Trust note:

  • If its date is more than about 18 months old – don’t assume it’s (still) true
  • If it’s not your exact version number – don’t assume it’s (still) true

The second trap was an example of confirmation bias: I was fairly sure that my test was supposed to fail with Oracle error “ORA-00922: missing or invalid option”, so when it failed with exactly that error I didn’t check why it had failed and didn’t notice that I had swapped the order of a couple of clauses in the create table statement. It’s very easy to think you’ve done enough when testing – especially when your test results match your expectation.

Update – later that day

So I’ve had an email asking me how I got the ORA-00922. Here’s the SQL as I wrote it – don’t give the answer in the comments, but how quickly can you spot what I did wrong?

create table t1
partition by hash(object_name) partitions 4
subpartition by list(object_type)
subpartition template (
        subpartition sp1 values ('TABLE'),
        subpartition sp2 values ('INDEX'),
        subpartition sp3 values (default)
)
as
select  object_name, object_type, created
from    all_objects
;

It’s a deceptive error (to me, at least) because even though I know it’s wrong it still looks right.

Shrink/Grow Exadata diskgroups

Syed Jaffar - Wed, 2015-08-19 01:42
One of the important tasks I foresee after an initial Exadata deployment, mostly before the databases go into production, is to balance/resize the Exadata diskgroups (DATA & RECO). Generally, the space is distributed as 80/20 (DATA/RECO) or 40/60, depending on the database backup option you choose while deploying. In one of our Exadata setups we didn't need such a huge RECO size, so we shrunk RECO and increased the DATA diskgroup size. I am pretty sure many of you have done, or want to do, the same. However, shrinking/rebalancing the space is not like a normal ASM resize operation; on Exadata it needs some special consideration and tasks. The following Oracle Support notes have good explanations and examples for achieving the task.

Example of dropping RECO diskgroup and adding the space to DATA diskgroup (Doc ID 1905972.1)
Resizing Grid Disks in Exadata: Example of Recreating RECO Grid Disks in a Rolling Manner (Doc ID 1465230.1)
How to increase ASM disks size in Exadata with free space in cell disks (Doc ID 1684112.1)
Resizing Grid Disks in Exadata: Examples (Doc ID 1467056.1)
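The broad pattern those notes describe can be sketched as follows. This is only a sketch, not the exact MOS procedure: the diskgroup names, sizes and rebalance power below are assumptions, and on Exadata the underlying grid disks must first be resized on each storage cell (via CellCLI) before ASM sees the new sizes.

```sql
-- Sketch only: shrink RECO, then grow DATA into the freed cell-disk space.
-- Assumes the RECO/DATA grid disks were already resized with CellCLI on
-- every storage cell, as described in the MOS notes above.

alter diskgroup RECO resize all size 2048G rebalance power 32;

-- Wait for the rebalance to finish (monitor v$asm_operation), then let
-- DATA expand to the new size reported for its grid disks.
alter diskgroup DATA resize all rebalance power 32;
```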

August 26: Oracle EPM Cloud Customer Reference Forum with Communications Test Design, Inc. (CTDI)

Linda Fishman Hoyle - Tue, 2015-08-18 15:27
Join us for another Oracle Customer Reference Forum on August 26, 2015, at 9:00 a.m. PT to hear Paul Cardell, Vice President of Corporate Operations at Communications Test Design, Inc. (CTDI). He will explain how Oracle EPM Cloud (Planning and Budgeting Cloud) provides easy integration with current financial systems and improves the speed and accuracy of data processing at CTDI.

Cardell also will share CTDI’s lessons learned during the company's implementation and the benefits it is receiving by using Oracle EPM Cloud.

Register now to attend the live Forum on Wednesday, August 26, 9:00 a.m. PT / 6:00 p.m. CEST and learn more about CTDI’s experience with Oracle EPM Cloud.

For more information:

Customer Story: Communications Test Design Supports Accurate Daily Forecasting Across 70 Global Facilities by Using the Cloud

Customer Video: CTDI Enhances Daily Forecasting with Oracle Solutions

Fishbowl Solutions is now a Google for Work Premier Partner

As of August 1, 2015, Google promoted Fishbowl Solutions to Premier Partner status! This elevation is a testament to our commitment to selling and servicing Google Search Solutions, and we are very excited to be counted among the select companies to attain this level of partnership. Kim Negaard, the Google Product Manager at Fishbowl, took some time to talk about our history with Google and what being a Premier Partner means.


The post Fishbowl Solutions is now a Google for Work Premier Partner appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

WordPress 4.3 “Billie”

Tim Hall - Tue, 2015-08-18 14:02

WordPress 4.3 “Billie” has arrived.

I was on my blog admin pages doing something else and I noticed the update was available before the auto-updater kicked in and updated the site automatically, so I gave it a nudge and now I’m upgraded.

Like most of the recent WordPress updates, there is little new that interests me, but it’s always a good idea to be up to date, so whatever… :)



WordPress 4.3 “Billie” was first posted on August 18, 2015 at 9:02 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

New Adventures with Raspberry Pi

Oracle AppsLab - Tue, 2015-08-18 13:02

If you read here, you’ll recall that Noel (@noelportugal) and I have been supporters of the Raspberry Pi for a long time, Noel on the build side, many, many times, me on the talking-about-how-cool-and-useful-it-is side.

And we’ve been spreading the love through internal hackdays and lots of projects.

So, yeah, we love us some Raspi.

The little guy has become our go-to choice to power all our Internet of Things (IoT) projects.

Since its launch in early 2012, the little board that could has come a long way. The latest model, the Raspberry Pi 2 Model B, can even run a stripped-down Windows 10 build to do IoT stuff.

Given all that we do with Raspis, I’ve always meant to get one for my own tinkering. However, Noel scared me off long ago with stories about how long it took to get one functional and the risks.

For example, I remember reading a long post early in the Pi’s history about how choosing a Micro USB power supply was critical: amperage too high burned out the board, too low and it wouldn’t run.

The information was out there, contributed by a huge and generous community. I just never had the time to invest.

Recently, I’ve been talking to the good people at the Oracle Education Foundation (@ORCLcitizenship) about ways our team can continue to help them with their workshops, and one of their focus areas is the Raspberry Pi.

After all, the mission of the Raspi creators was to teach kids about computers, so yeah.

I figured it was finally time to overcome my fears and get my hands dirty, and thanks to Noel, I found a kit that included everything I would need: this Starter Kit from Vilros.


Vilros Raspberry Pi 2 Complete Starter Kit

Armed with this kit, I took a day and hoped that would be enough to get the little guy running. About an hour after starting, I was done.


Going from zero to functional is now ridiculously easy, thanks to these kits that include all the necessities.

So, now I have a functioning Pi running Raspbian. All I need is a project, any ideas?

Coda: Happy coincidence: as I wrote this post, I got a DM from Kellyn Pot’Vin-Gorman (@dbakevlar) asking if I knew any ways for her to use her Raspberry Pi skills in an educational capacity. Yay kismet.
Possibly Related Posts:

Virtual Technology Summit - Spotlight on Database

OTN TechBlog - Tue, 2015-08-18 13:00

Register now for OTN's new Virtual Technology Summit - September 16, 2015. Hear from Oracle ACEs, Java Champions and Oracle Product Experts, as they share their insights and expertise through Hands-on-Labs, highly technical presentations and demos that enable you to master the skills you need to meet today's IT challenges. Chat live with folks and ask your questions as you attend sessions.

Database Spotlight: Develop, Deploy and Manage Database Applications in the Oracle Cloud - Oracle has delivered the most comprehensive and powerful platform for deploying Cloud-based applications and services. From Infrastructure as a Service to Platform as a Service, Oracle delivers a fully integrated cloud: platform and applications. This track provides an in-depth look at Oracle Database Cloud Services and the enabling technologies for developing, deploying and managing applications in the Oracle Cloud. There are three sessions in the database track:

  • Using Oracle SQL Developer and Oracle REST Data Services to enable the Oracle Cloud - This session presents the latest capabilities in Oracle's popular SQL Developer and ORDS tools for database application development and deployment in the cloud. See how to clone and migrate data between Oracle cloud and on-premise implementations and other powerful techniques for developing applications for the cloud, in the cloud.
  • Developing APEX 5.0 Mobile Applications in the Oracle Cloud - This session will walk through the capabilities for rapidly building and deploying responsive web applications using APEX 5.0 showing Oracle Developer Cloud Services in action.
  • How to Deploy and Monitor Cloud Database Applications Using Oracle Enterprise Manager 12c Cloud Control - This session provides an overview of Oracle Database Cloud Services and the management capabilities of Oracle Enterprise Manager 12c Cloud Control. See how to manage Oracle cloud deployment, provisioning, security and application access and monitoring.
Register today!

Become a member of the OTN Community: Register here to start participating in our online community. Share your expertise with other community members!

NEW REWARDS! If you attend this virtual technology summit and are already a member of the Oracle Technology Network Community, you will earn 150 points towards our new Rewards and Recognition program (use the same email for both). Read all about it: Oracle Community - Rewards & Recognition FAQ.

Get a Look at the Future Oracle Cloud User Experience at Oracle OpenWorld 2015

Oracle AppsLab - Tue, 2015-08-18 12:19

Here’s the first of many OpenWorld-related posts, this one cross-posted from our colleagues and friends at VoX, the Voice of Experience for Oracle Cloud Applications. Enjoy.

Are you all set for Oracle OpenWorld 2015 (@oracleopenworld)? Even if you think you’re already booked for the event, you’ll want to squeeze in a chance to experience the future of the Oracle Applications User Experience (OAUX) — and maybe even make a few UX buddies along the way — with these sessions, demos, and speakers. We loved OOW 2014, and couldn’t wait to get ready for this year.


Lucas Jellema, AMIS & Oracle ACE Director (left), Anthony Lai, Oracle (center), Jake Kuramoto, Oracle (right) at OOW 2015 during our strategy day. Photo by Rob Hernandez.

Save the Date: Oracle Applications Cloud User Experience Strategy & Roadmap Day

The OAUX team is hosting a one-day interactive seminar ahead of Oracle OpenWorld 2015 to get select partners and customers ready for the main event. This session will focus on Oracle’s forward-looking investment in the Oracle Applications Cloud user experience.

You’ll get the opportunity to share feedback about the Oracle Applications Cloud UX in the real world. How is our vision lining up with what needs to happen in your market?

Speaking of our vision, we’ll start the session with the big-picture perspective on trends and emerging technologies we are watching and describe their anticipated effect on your end-user experiences. Attendees will take a deeper dive into specific focus areas of the Oracle Applications Cloud and learn about our impending investments in the user experience including HCM Cloud, CX Cloud, and ERP Cloud.

The team will also share with you the plans for Cloud user experience tools, including extensibility and user experience in the Platform-as-a-Service (PaaS4SaaS) world (get the latest here). We’ll close out the day with a “this-town-ain’t-big-enough” event that was extremely popular last year: the ACE Director Speaker Showdown.

Want to go?

When: 9 a.m. to 5 p.m. Wednesday, Oct. 21, 2015
Where: Oracle Conference Center, Room 202, 350 Oracle Pkwy, Redwood City, CA 94065
Who: Applications Cloud partners and customers (especially HCM, CX, or ERP Cloud), Oracle ACE Directors, and Oracle-internal Cloud thought leaders in product development, sales, or Worldwide Alliances and Channels

Register now to get on our waitlist. An active confidential disclosure agreement is required.


Chloe Arnold and Mindi Cummins, Oracle, during OOW 2014 at the OAUX Cloud Exchange.

Save the Date: Oracle Applications User Experience Cloud Exchange

Speakers and discussions are all well and good, but what is the future of the Oracle Applications UX really like? The OAUX team is providing a daylong, demo-intensive networking event at Oracle OpenWorld 2015 to show you what the results of Oracle’s UX strategy will look like.

User experience is a key differentiator for the Oracle Applications Cloud, and Oracle is investing heavily in its future. Come see what our recently released and near-release user experiences look like, and check out our research and development user experience concepts, then let us know what you think.

These experience experiments for the modern user will delve even deeper into the OAUX team’s guiding principles of simplicity, mobility, and extensibility and come from many different product areas. This is cutting-edge stuff, folks. And, since we know you’re worn out from these long, interactive days, this event will also feature refreshments.

Want to go?

When: Monday, October 26, 2015
Where: InterContinental Hotel, San Francisco
Who: Oracle Applications Cloud Partners, Customers, Oracle ACEs and ACE Directors, Analysts, Oracle-internal Cloud thought leaders in product development, sales, or Worldwide Alliances and Channels.

Register now to get on our waitlist. An active confidential disclosure agreement is required.

Possibly Related Posts:

Script to get previous month’s AWR report

Bobby Durrett's DBA Blog - Tue, 2015-08-18 11:58

We keep six weeks of history in the AWR on our databases, but I want to capture some information for long-term trending. What I really want to do is capture some metrics and put them in database tables to use to generate reports, but I have not had time to build the scripts to do that. So, instead, I built a simple set of scripts to capture an AWR report for the previous month. Since we have six weeks of history, if I run my report in the first week of a month all the days of the previous month should still be in the AWR. I have just finished building this script, so I cannot promise that there is value in keeping monthly AWR reports, but I thought it was worth sharing. Maybe something in the code will be useful to someone. Here is the script:

-- Has to be run in the first week of the month so the entire
-- previous month is available. We keep 6 weeks of awr history.

-- setup columns for snapshots

column bsnap1 new_value bsnap1s noprint;
column esnap1 new_value esnap1s noprint;
column filenm new_value filenms noprint;

-- get snap id for first day of previous month
-- (restricted to snapshots since the last startup in that month)

select min(snap_id) bsnap1
from dba_hist_snapshot
where
extract(month from END_INTERVAL_TIME)=
extract(month from (sysdate-to_number(to_char(sysdate,'DD'))))
and STARTUP_TIME=
(select max(STARTUP_TIME)
from dba_hist_snapshot
where
extract(month from END_INTERVAL_TIME)=
extract(month from (sysdate-to_number(to_char(sysdate,'DD')))));

-- get snap id for last day of previous month

select max(snap_id) esnap1
from dba_hist_snapshot
where
extract(month from END_INTERVAL_TIME)=
extract(month from (sysdate-to_number(to_char(sysdate,'DD'))));

-- get html file name

select
name||
to_char(extract(month from (sysdate-to_number(to_char(sysdate,'DD')))))||
to_char(extract(year from (sysdate-to_number(to_char(sysdate,'DD')))))||
'.html' filenm
from v$database;

-- get awr report

define report_type='html';
define begin_snap = &bsnap1s;
define end_snap = &esnap1s;
define report_name = '&filenms';

define num_days = 0;

@@?/rdbms/admin/awrrpt

undefine report_type
undefine report_name
undefine begin_snap
undefine end_snap
undefine num_days

If the database bounced during the previous month we get the last set of snapshots after the last bounce.

I am not sure whether this approach will give us any benefits but I think it may help to show how to use queries to pick begin and end snapshots and then run an AWR report.
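For reference, the same report can also be produced without awrrpt.sql by calling the DBMS_WORKLOAD_REPOSITORY package directly. This is a sketch, not part of the original script; it assumes the &bsnap1s / &esnap1s substitution variables populated by the queries above:

```sql
-- Sketch: generate the monthly AWR report in pure SQL using the
-- begin/end snapshot ids selected earlier.
select output
from   table(dbms_workload_repository.awr_report_html(
         (select dbid from v$database),
         (select instance_number from v$instance),
         &bsnap1s,
         &esnap1s));
```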

The tricky part of the code is this:

extract(month from (sysdate-to_number(to_char(sysdate,'DD'))))

It just returns the previous month as a number.  It is August now so here is what it returns today:

SQL> select
 2 extract(month from (sysdate-to_number(to_char(sysdate,'DD'))))
 3 prev_month
 4 from dual;

PREV_MONTH
----------
         7

sysdate-to_number(to_char(sysdate,'DD')) is the last day of the previous month:

SQL> select sysdate-to_number(to_char(sysdate,'DD')) last_day
 2 from dual;

LAST_DAY
---------
31-JUL-15
– Bobby

Categories: DBA Blogs

Oracle Social Cloud Expands Twitter Partnership to Enable Better Social Service and Customer Experiences

Linda Fishman Hoyle - Tue, 2015-08-18 11:42

A Guest Post by Group Vice President Meg Bear (pictured left), Oracle Social Cloud

We all know the importance of "customer service” in delivering excellent customer experiences, and Oracle Social Cloud is taking social service capabilities to a new level for its customers. During the past two weeks alone, Oracle Social has announced its new Twitter partnership for advanced customer service insights, as well as the latest Social Cloud + Service Cloud enhancements and integrations. You can read more about these social service enhancements in this latest Forbes blog.

On August 6, 2015, Twitter and Oracle Social Cloud announced an extended partnership that gives Oracle customers enriched Twitter data and functionality to create improved and transformative customer service solutions for brands. As Twitter is the primary and growing social platform for customer service, this allows Oracle to deliver next-generation social service solutions for its customers and build out stronger, more engaging customer experiences. You can read more detail about the new Twitter functionality in this Wired article and on the Oracle Social Spotlight blog.

The importance of social service is echoed by our customers: “General Motors continues to strive for excellence with our customer care capabilities, providing new ways to understand and engage with our customers like never before—and it’s a winning strategy for both our customers and our business,” said Rebecca Harris, Global Head of Social Center of Expertise at General Motors. “We interact daily with our customers on Twitter, allowing for a quicker, more personal engagement, enabling General Motors to put its customers at the center of everything we do.”

Additionally, Oracle Social and Service Cloud jointly announced the latest social service integrations and enhancements this week during the CRM Evolution industry event in NYC, where Oracle has a strong presence through sponsorship and speaking sessions with Oracle executives Meg Bear (Social Cloud), Erika Brookes (Social Cloud) and Stephen Fioretti (Service Cloud). Read more about these social service enhancements in this VentureBeat article.

Bottom line: Oracle CX is leading the market as the only platform that takes a comprehensive view of customer engagements and insights across the entire lifecycle and across every channel to drive stronger, more satisfying customer experiences.

Convert CSV file to Apache Parquet... with Drill

Tugdual Grall - Tue, 2015-08-18 08:44
Read this article on my new blog. A very common use case when working with Hadoop is to store and query simple files (CSV, TSV, ...); then, to get better performance and efficient storage, convert these files into a more efficient format, for example Apache Parquet. Apache Parquet is a columnar storage format available to any project in the Hadoop ecosystem. Apache Parquet has the following … Tugdual Grall
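The approach the full article describes boils down to two Drill statements: switch the session's output format to Parquet, then CREATE TABLE AS SELECT from the CSV. A rough sketch (the paths, workspace and column names here are made-up examples, not from the original article):

```sql
-- 1. Tell Drill to write Parquet when creating tables.
ALTER SESSION SET `store.format` = 'parquet';

-- 2. CTAS from the CSV. Drill exposes a text file's fields as the
--    `columns` array, so cast them to proper types on the way out.
CREATE TABLE dfs.tmp.`sales_parquet` AS
SELECT CAST(columns[0] AS INT)    AS id,
       columns[1]                 AS product,
       CAST(columns[2] AS DOUBLE) AS amount
FROM   dfs.`/data/sales.csv`;
```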

(X)Query OPatch Information From The Database

Marco Gralike - Tue, 2015-08-18 07:55
There are cool posts out there regarding querying the OPatch info in Oracle 12 which…

If You Were the CIO...

WebCenter Team - Tue, 2015-08-18 07:47

If you were a Chief Information Officer (CIO) of a large company, say like Oracle, what would you want from a collaboration solution? What would be your 3 biggest priorities? Would security trump convenience? How about heterogeneity? And pluggability – the ability for the solution to connect with existing systems and infrastructure? What about control? Would you trust the public Cloud?

Find out what Mark Sunday, Senior Vice President and Chief Information Officer, has to say about Oracle's cloud-based Enterprise File Sync and Share solution.


Oracle – JSON Database Patch Bundle(s)

Marco Gralike - Tue, 2015-08-18 03:58
It has been a while I patched a local development database, but nowadays stuff gets…

IFTTT Maker Channel

Oracle AppsLab - Mon, 2015-08-17 20:29


A couple of months ago IFTTT added a much-needed feature: a custom channel for generic URLs. They called it the Maker Channel. If you noticed my previous post, I used it to power an IoT Staples Easy Button.

On closer inspection, this is a very powerful feature. You can now make and receive web requests (webhooks) from any connected device to any accessible web service (API, public server, etc.). It is important to highlight that requests “may” be rate limited, so don’t start going crazy with Big Data-style pushing.


The Maker Channel also works with any of the existing Channels. You can trigger any existing Channel when you POST/GET to the Maker Channel:


(Recipe screenshots: if Maker then Hue; if Maker then LIFX; if Maker then Google Drive.)

Or you could have IFTTT POST/GET/PUT to your server when any of the existing Channels are triggered.

There seem to be hundreds of possible combinations, or “recipes.”

Do you use IFTTT? Do you find it useful? Let me know in the comments.

Possibly Related Posts:

Smart Combo Box LOV with Filter

Andrejus Baranovski - Mon, 2015-08-17 17:47
Combo box LOV with filtering is a simple but effective feature that is not used often. You should rethink your LOV design and apply combo box LOV with filtering where possible. The advantage of such an LOV implementation is pretty clear: fast access to list items, based on filter criteria. This reduces the number of times the LOV popup must be opened and improves UI performance.

The LOV VO should implement a View Criteria returning the top items to be included in the choice list. My example retrieves all jobs with a minimum salary greater than or equal to 10000. Users will see list items based on this criteria first:
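For readers who think in SQL rather than View Criteria, the filter above is roughly equivalent to this query (assuming the standard HR demo schema the example appears to use):

```sql
-- Rough SQL equivalent of the LOV View Criteria: jobs whose minimum
-- salary is at least 10000 form the initial choice list.
select job_id, job_title, min_salary
from   jobs
where  min_salary >= 10000
order by job_title;
```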

I would recommend using the List UI Hints section of the LOV VO to define the LOV UI definition. This saves time when assigning the LOV to attributes: do it once and reuse it. I have specified a combo box LOV with filtering based on the View Criteria, to include jobs with a minimum salary greater than or equal to 10000 in the initial choice list:

Time is saved when assigning the LOV to an attribute: UI Hints are set automatically, based on the LOV UI Hints definition:

This is how it looks in the UI. Top records are displayed in the choice list, based on the applied filter criteria. All list items can be accessed through the More... option:

This is how the LOV popup looks by default in ADF 12c. The height is stretched automatically to occupy all available display height, while the width is not stretched. Such a layout is not good; users don't like it:

The LOV popup can be made stretchable with a little help from ADF UI CSS. We should use the -tr-stretch-search-dialog CSS property to enable LOV popup stretching in ADF 12c (a CSS sample is provided with the example):

Another important thing to remember about LOVs in ADF 12c is the List Range Size property. By default it is set to -1. This is OK for simple choice lists, where you want to show all data at once, but it is not suitable for an LOV or combo box with LOV, where we don't want to fetch all LOV items on first load:

I would set it to 10; this will populate only the visible set of LOV records:

Take a look at how the LOV popup window looks after the improvements are applied: it is stretchable by the user, and only the initial set of list item records is fetched:

Make sure to use a combo LOV with initial filter criteria to optimise LOV behaviour. Download sample application -

It’s all about the Cloud!

OTN TechBlog - Mon, 2015-08-17 13:00

Register now for OTN's next Virtual Technology Summit:

It's all about the Cloud! Hear from Oracle ACEs, Java Champions and Oracle Product Experts, as they share their insights and expertise through Hands-on-Labs, highly technical presentations and demos that enable you to master the skills you need to meet today's IT challenges. Chat live with folks and ask your questions as you attend sessions.

This interactive, online event offers four technical tracks, each with a unique focus on specific tools, technologies, best practices and tips:

  • Java: Java 8 in Action - Java 8 has been out for over a year now. But do you really know and use Java 8 to its full potential? In this Virtual Technology Summit, learn about Java SE cloud applications, Cloud-enabled JavaScript stored procedures with Java 8 Nashorn and the Java 8 Date and Time API.
  • Operating Systems, Virtualization Technologies, and Hardware: Implementing Your Cloud - Most IT organizations have roadmaps for cloud infrastructure. Most vendors have some sort of story as to how they can get you to the cloud. Oracle specifically has committed itself to the idea that you can run your applications identically in our public cloud and your private cloud. The question is: How? In this track we'll roll up our sleeves and show you how to implement your clouds using Oracle hardware, software, and best practices.
  • Database: Develop, Deploy and Manage Database Applications in the Oracle Cloud - Oracle has delivered the most comprehensive and powerful platform for deploying Cloud-based applications and services. From Infrastructure as a Service to Platform as a Service, Oracle delivers a fully integrated cloud: platform and applications. This track provides an in-depth look at Oracle Database Cloud Services and the enabling technologies for developing, deploying and managing applications in the Oracle Cloud.
  • Middleware: Middleware in the Cloud: PaaS Gets Real - The middleware track in the Fall 2015 edition of the OTN Virtual Technology Summit puts the spotlight on Oracle's Mobile Cloud Service (MCS), Process Cloud Service (PCS), and Java Cloud Service (JCS), three of the more than two dozen new services available on the Oracle Cloud Platform. In each of the three deep-dive sessions a recognized expert from the OTN community walks you through a technical how-to to demonstrate how you can use these PaaS services, and compares each to its on-premise counterparts. PaaS services loom large in the future for developers and architects, so if you're developing enterprise mobile applications, or working with Oracle BPM or WebLogic, you'll want to make sure these #OTNVTS sessions are on your calendar.
Register today!

Become a member of the OTN Community: Register here to start participating in our online community. Share your expertise with other community members!

NEW REWARDS! If you attend this virtual technology summit and are already a member of the Oracle Technology Network Community, you will earn 150 points towards our new Rewards and Recognition program (use the same email for both). Read all about it: Oracle Community - Rewards & Recognition FAQ.

Index Usage

Jonathan Lewis - Mon, 2015-08-17 09:25

The question of how to identify indexes that could be dropped re-appeared (yet again) on the OTN database forum last week. It’s not really surprising that it recurs so regularly – the problem isn’t an easy one to solve but new (and even less new) users keep hoping that there’s a quick and easy solution.

There are, however, strategies and pointers that can help you to optimise the trade-off between effort, risk, and reward. Broadly the idea is to spend a small amount of effort finding a relatively small number of “expensive” indexes that might be safe to drop, so that when you do the detailed analysis you have a good chance that the time spent will be rewarded by a positive result.

Before we get to some results posted on OTN, it’s worth thinking about the global impact and what we’re trying to achieve, and the threats that go with our attempt to achieve it.

The key detail, of course, is that index maintenance is an expensive process. We could insert 1,000 rows into a table at a cost of writing about 25 table blocks plus a few undo blocks plus something like half a megabyte of redo (assuming, for the purposes of illustration, that each row is about 200 bytes on insert). Add one index to the table and we might have to locate and modify 1,000 separate index leaf blocks. The increment on the redo might be about a quarter of a megabyte and we may have to access 1,000 different undo blocks for read-consistency reasons, but the simple fact that we may need 1,000 buffers to be able to maintain that index is likely to be a significant extra cost on the insert. Make that 10 indexes, or 70 (as one unhappy DBA once told me) and the probability of being able to do high-speed inserts becomes rather low.

Of course we hope that our indexes will allow our queries to operate efficiently with great precision, but inevitably we get to a point where the benefit of precision is outweighed by the cost of maintenance. Our target, then, is to design the set of indexes that makes it possible for the optimizer to find good paths for all the important queries and “good enough” paths for the rest. By the time the system is live, though, it’s too late for “proper design”, and the only option is for damage limitation, a bit of guesswork, and some live testing with fingers crossed (thank goodness for invisible indexes).

The starting point is usually an attempt to identify “the indexes we are not using”, which is typically translated into “the indexes that do not appear in execution plans” – but that’s not actually a good target, for various reasons:

  • Problem 1: If we are using an index it’s possible that we shouldn’t be and that there’s an alternative index available that ought to be more efficient. A corollary to this is that if you do identify and drop such an index you may find that the optimizer doesn’t use the alternative index you were expecting it to use until you take some action to help the optimizer recognise that the alternative is a good choice.
  • Problem 2: If we aren’t using a particular index then perhaps we should be using it, and would use it if we dropped one of the other indexes on the table. (And there’s always the possibility that we didn’t happen to use it during the interval we were checking but do use it at other times.)
  • Problem 3: The optimizer is capable of using information about the number of distinct keys in a multi-column index to select an execution plan even though it may not use that index in the plan it finally chooses. We may be able to work around this problem in current versions of Oracle by creating a column group (extended statistics) that matches the definition of each index we drop – but there’s a limit of 20 column groups per table.
  • Problem 4: There are some indexes we might not be using but which must exist to avoid the “foreign key locking” problem. It should be easy enough to check, before dropping an index, whether it has to exist to match a foreign key; and even then it may be possible to show that nothing in the application would cause the locking problem to appear – and as a safety measure you could disable locks on the (child) table to ensure that the application doesn’t grind to a halt because of foreign key locking problems.
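For problems 3 and 4 there are a couple of dictionary-level checks you can script in advance. This is a sketch only – the table name t1 and the column names are hypothetical, and the column-group call is the standard dbms_stats mechanism rather than anything specific to this exercise:

```sql
-- Problem 3: before dropping an index on (col1, col2), create a matching
-- column group so the optimizer keeps the distinct-keys information.
select dbms_stats.create_extended_stats(
        ownname   => user,
        tabname   => 'T1',
        extension => '(col1, col2)'
       )
from   dual;

-- Problem 4: list the foreign key columns on the table so you can check
-- whether the index you want to drop is the only index covering them.
select cc.constraint_name, cc.column_name, cc.position
from   user_cons_columns cc
join   user_constraints  c on c.constraint_name = cc.constraint_name
where  c.constraint_type = 'R'
and    cc.table_name     = 'T1'
order by cc.constraint_name, cc.position;

-- Safety measure: stop the application hanging on foreign key locking
-- by disabling table locks on the child table.
alter table t1 disable table lock;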

Provided you remember that problems like these exist, and think carefully about the indexes that your strategy suggests, there are various ways you could approach the problem of identifying indexes that don’t get into execution plans.


v$object_usage
The ink had barely dried on the manual pages for this view before several people (including me) had written notes explaining why this view wasn’t particularly helpful. (I think I even said something about this in Practical Oracle 8i). I won’t repeat the discussion here but it revolves around the fact that an index is flagged as “used” even if it has only been used once in a single execution of a single statement – so you don’t get any idea of the real importance of the index.

v$sql_plan et al.

If you review the set of in-memory execution plans (and the AWR or Statspack equivalents) you can identify indexes which definitely have been used – but (a) it’s expensive to scan v$sql_plan frequently and (b) the AWR/Statspack repositories only capture a subset of the more expensive plans, so it’s easy to miss indexes which have been used and are relatively important but aren’t in the repository and don’t happen to be in memory at the moments you look.
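As a starting point for the cross-check, you can ask the library cache (and, licensing permitting, the AWR repository) which statements have a plan that references a given index. The index name here is illustrative:

```sql
-- Cached statements whose plans reference a given index
select distinct sql_id, child_number
from   v$sql_plan
where  object_owner = user
and    object_name  = 'T1_I1';

-- The AWR equivalent (requires the diagnostic pack licence)
select distinct sql_id
from   dba_hist_sql_plan
where  object_owner = user
and    object_name  = 'T1_I1';
```

Remember the caveat above, though: an empty result from these queries does not prove the index is unused, only that it wasn’t captured.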

Review the definitions

If you examine the index definitions you may spot indexes which look very similar. If one index starts with the same columns, in the same order, as another index, there is a good chance that you could reduce two indexes to one – especially if the whole of one of the indexes is the “leading edge” of the other – for example:

  • (dp_datetime_date)
  • (dp_datetime_date, dp_compid)

Even if the leading edges match and the trailing edges differ we might be able to collapse two indexes into one – depending on how selective the leading columns are and how the indexes are used – for example:

  • (dp_compid, ddzt, cirmhcx, ct_nxr_mhcx, dp_datetime_date)
  • (dp_compid, ddzt, cirmhcx, ct_nxr_mhcx, pnr_cfrqsj_date)

which could perhaps be replaced by one of :

  • (dp_compid, ddzt, cirmhcx, ct_nxr_mhcx, dp_datetime_date, pnr_cfrqsj_date)


  • (dp_compid, ddzt, cirmhcx, ct_nxr_mhcx, pnr_cfrqsj_date, dp_datetime_date)

Guessing about the use of a typical date column, though, it’s possible that in this example the current trailing date columns are used with a range-based predicate, so it’s possible that this strategy won’t be effective for this pair of indexes.

Even if the order of later columns in the index doesn’t match you may still find that a pair of indexes could be reduced to a single index – for example the pair:

  • (dp_datetime_date, dp_compid)
  • (dp_datetime_date, ddzdt, dp_compid, ct_nxrdh, ct_smsmobilno)

which could perhaps be replaced by just:

  • (dp_datetime_date, dp_compid, ddzdt, ct_nxrdh, ct_smsmobilno)

As a safety measure, of course, you would probably create a new index, then make the subject indexes invisible, and wait for at least a week to see whether any performance problems appear (remembering that one automatic performance threat would be the increase in workload as yet another index – temporarily – has to be maintained).
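The safety-first approach might look like this in SQL – the index names are hypothetical, but the column list is the combined index from the example above:

```sql
-- Create the proposed replacement index
create index t1_new_ix on t1(
    dp_datetime_date, dp_compid, ddzdt, ct_nxrdh, ct_smsmobilno
);

-- Hide (rather than drop) the two candidates for removal
alter index t1_old_ix1 invisible;
alter index t1_old_ix2 invisible;

-- ... wait at least a week, watching for performance problems, then
-- either drop the invisible indexes or reverse the change:
alter index t1_old_ix1 visible;
```

Note that while all three indexes exist you are paying the maintenance cost of the temporary extra index – that’s the automatic performance threat mentioned above.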

The difficulty of eliminating indexes by examination is that it takes a lot of effort to investigate all the possibilities, so you really need some way of choosing a relatively small subset of indexes that might be worth the effort. This brings me to the principal topic of this posting – using segment statistics to help you pick which indexes might be worth the effort.

v$segstat / v$segment_statistics

Oracle records a number of workload statistics for each object in memory. The view v$segstat is an efficient version of these statistics, and v$segment_statistics is a friendlier version that joins v$segstat to tables user$, obj$ and ts$, with a filter against ind$ to turn meaningless numbers into names.

SQL> desc V$segstat
 Name                    Null?    Type
 ----------------------- -------- ----------------
 TS#                              NUMBER
 OBJ#                             NUMBER
 DATAOBJ#                         NUMBER
 STATISTIC_NAME                   VARCHAR2(64)
 STATISTIC#                       NUMBER
 VALUE                            NUMBER

SQL> desc V$segment_statistics
 Name                    Null?    Type
 ----------------------- -------- ----------------
 OWNER                            VARCHAR2(30)
 OBJECT_NAME                      VARCHAR2(30)
 SUBOBJECT_NAME                   VARCHAR2(30)
 TABLESPACE_NAME                  VARCHAR2(30)
 TS#                              NUMBER
 OBJ#                             NUMBER
 DATAOBJ#                         NUMBER
 OBJECT_TYPE                      VARCHAR2(18)
 STATISTIC_NAME                   VARCHAR2(64)
 STATISTIC#                       NUMBER
 VALUE                            NUMBER

For each segment Oracle records the following statistics (according to v$segstat_name – but there are a couple more hidden statistics reported in the underlying x$ksolsstat object):

NAME                             SAMPLED
-------------------------------- -------
logical reads                    YES
buffer busy waits                NO
gc buffer busy                   NO
db block changes                 YES
physical reads                   NO
physical writes                  NO
physical read requests           NO
physical write requests          NO
physical reads direct            NO
physical writes direct           NO
optimized physical reads         NO
optimized physical writes        NO
gc cr blocks received            NO
gc current blocks received       NO
ITL waits                        NO
row lock waits                   NO
space used                       NO
space allocated                  NO
segment scans                    NO

Both Statspack (at level 7) and the AWR report have several “Top N” sections for segment statistics. If we examine these stats for all the indexes on a given table we can get some clues about which indexes are likely to be worth further investigation to see if they could be dropped.

One very simple measure is the number of “physical reads” (which, for indexes, will generally be very similar to “physical read requests”). Since a (real) physical read is generally going to take a significant amount of time, segments with very large numbers of physical reads could be contributing a lot of time to the total database time – so it’s worth knowing why an index is responsible for so many physical reads and worth cross-checking with v$sql_plan (and its historic equivalents) which statements seem to be using or modifying it.
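A query along these lines (table name illustrative) pulls out the physical read figures for all the indexes on one table, using only the columns shown in the describe above:

```sql
select ss.object_name, ss.statistic_name, ss.value
from   v$segment_statistics ss
where  ss.owner          = user
and    ss.object_type    = 'INDEX'
and    ss.object_name in (
           select index_name
           from   user_indexes
           where  table_name = 'T1'
       )
and    ss.statistic_name in ('physical reads', 'physical read requests')
order by ss.value desc;
```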

Even if it turns out that the index is absolutely necessary, you might still be able to spot opportunities to improve efficiency. If it is subject to a significant number of physical reads it may be that the index is just very large – could you make it smaller by rebuilding it with compression on some of the leading columns? Is it an index which (for some reason you can identify) tends to degenerate over time and waste a lot of space, and should you rebuild it occasionally? It might be possible (depending on the predicates used) to re-arrange the column order in such a way that the activity is focused onto a particular section of the index rather than being spread across the entire index – or you might even find that by careful choice of global partitioning (which is legal even on a non-partitioned table) you could isolate the activity to a small section of the index.

A more interesting measure, though, comes from comparing the “logical reads” with the number of “db block changes”; and that’s the point of this posting – except that I’ve spent so much time on it already that I’m going to have to write part 2 some time next week.
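As a taster, the raw comparison is easy enough to extract – interpreting the numbers is the interesting part. A sketch of the query (again restricted to one schema’s indexes, and using only columns from the describe above):

```sql
select object_name,
       max(decode(statistic_name, 'logical reads',    value)) logical_reads,
       max(decode(statistic_name, 'db block changes', value)) db_block_changes
from   v$segment_statistics
where  owner       = user
and    object_type = 'INDEX'
and    statistic_name in ('logical reads', 'db block changes')
group by object_name
order by logical_reads desc;
```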


Test MCS Custom API using SoapUI

Darwin IT - Mon, 2015-08-17 08:40
I'm at the OPN Summercamps 2015 in Lisbon for the Mobile Cloud Services workshop, doing the FixIt Fast tutorial. One of the steps is to test the API that was imported from a RAML document and then configured. In step 3 you're supposed to test it using the built-in test console of MCS, but since I have SoapUI installed I found it neat to be able to test it from there – that way you can create Test Suites/Cases to do automated tests.

What I did was to create a new SoapUI REST project based on the endpoint of the API:

Then I added a parameter named contact with value 'lynn'.
In addition, two HTTP header properties are needed:

  • Authorization: Basic TUFOVElT...(rest of very long string)
  • oracle-mobile-backend-id: 01d3b3a2-7a6b-42c8-b314-d6e8c8f3e898

Both values can be found on the settings tab of the Mobile Backend:
The 'oracle-mobile-backend-id' is found here literally. But the 'Authorization' is found under 'Anonymous Key', where initially it gives a link 'Show'. Clicking it will show a long string. Copy and paste it and prefix it with the word 'Basic' and a single space:

And then you're good to go....
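For reference, the equivalent call from the command line would look something like this – the host and resource path are placeholders for your own Mobile Backend endpoint, and the Authorization value is the (still truncated) anonymous key from the settings tab:

```
curl "https://<mcs-host>/<api-path>?contact=lynn" \
     -H "Authorization: Basic TUFOVElT...(rest of very long string)" \
     -H "oracle-mobile-backend-id: 01d3b3a2-7a6b-42c8-b314-d6e8c8f3e898"
```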