
Feed aggregator

Oracle Priority Support Infogram for 25-NOV-2015: 1000th posting!

Oracle Infogram - Wed, 2015-11-25 11:44

This marks the 1000th post to the Infogram. I am awarding myself a low-carb lollipop.

Data Warehouse
Little things to know about ... Oracle Partitioning (part one of hopefully many), from The Data Warehouse Insider.
Oracle VM
Oracle VM Performance and Tuning - Part 4, from Virtually All The Time.
Changing Appearances: Give The Apps Your Corporate Look, from Fusion Applications Developer Relations.
Offloading row level security to Smart Scan, from SmartScan Deep Dive.
Patch Set Update: Oracle Data Relationship Management, from Business Analytics - Proactive Support.
Oracle and Adaptive Case Management: Part 1, from SOA & BPM Partner Community Blog.
Remote debugging of nashorn scripts with NetBeans IDE using "debugger" statements, from A. Sundararajan's Weblog.
From the Oracle E-Business Suite Support blog:
EBS HCM December Updates To Employer Shared Responsibility Reporting Under The Affordable Care Act
From the Oracle E-Business Suite Technology blog:

WebCenter Portal Certified with E-Business Suite 12.2

Oracle Security Training In York

Pete Finnigan - Wed, 2015-11-25 09:20

We ran a five-day Oracle Security training event in York, England from September 21st to September 25th at the Holiday Inn hotel. This proved to be very successful and good fun. The event included back-to-back teaching by....[Read More]

Posted by Pete On 22/10/15 At 08:49 PM

Categories: Security Blogs

New Presentation - Building Practical Oracle Audit Trails

Pete Finnigan - Wed, 2015-11-25 09:20

I wrote a presentation on designing and building practical audit trails back in 2012 and presented it once and then never again. By chance I did not post the PDFs of these slides at that time. I did though some....[Read More]

Posted by Pete On 01/10/15 At 05:16 PM

Categories: Security Blogs

Protect Your APEX Application PL/SQL Source Code

Pete Finnigan - Wed, 2015-11-25 09:20

Oracle Application Express is a great rapid application development tool where you can write your applications functionality in PL/SQL and create the interface easily in the APEX UI using all of the tools available to create forms and reports and....[Read More]

Posted by Pete On 21/07/15 At 04:27 PM

Categories: Security Blogs

Oracle Security and Electronics

Pete Finnigan - Wed, 2015-11-25 09:20

How do Oracle security and electronics mix together? Well, I started my working life in 1979 as an apprentice electrician in a factory here in York, England, where I live. The factory designed and built trains for the national....[Read More]

Posted by Pete On 09/07/15 At 11:24 AM

Categories: Security Blogs

New Conference Speaking Dates Added

Pete Finnigan - Wed, 2015-11-25 09:20

In the last few years I have not done as many conference speaking dates as I used to. This is simply because when offered they usually clashed with pre-booked work. I spoke for the UKOUG in Dublin last year and....[Read More]

Posted by Pete On 06/07/15 At 09:40 AM

Categories: Security Blogs

Happy 10th Belated Birthday to My Oracle Security Blog

Pete Finnigan - Wed, 2015-11-25 09:20

Make a sad face.. :-( I seem to have missed my blog's tenth birthday, which happened on the 20th September 2014. My last post, last year and until very recently, was on July 23rd 2014; so actually it's been a big gap....[Read More]

Posted by Pete On 03/07/15 At 11:28 AM

Categories: Security Blogs

Oracle Database Vault 12c Paper by Pete Finnigan

Pete Finnigan - Wed, 2015-11-25 09:20

I wrote a paper about Oracle Database Vault in 12c for SANS last year and this was published in January 2015 by SANS on their website. I also prepared and did a webinar about this paper with SANS. The Paper....[Read More]

Posted by Pete On 30/06/15 At 05:38 PM

Categories: Security Blogs

Unique Oracle Security Trainings In York, England, September 2015

Pete Finnigan - Wed, 2015-11-25 09:20

I have just updated all of our Oracle Security training offerings on our company website. I have revamped all class pages and added two-page PDF flyers for each of our four training classes. I have also updated the list....[Read More]

Posted by Pete On 25/06/15 At 04:36 PM

Categories: Security Blogs

Splunk engages in stupid lawyer tricks

DBMS2 - Wed, 2015-11-25 08:14

Using legal threats as an extension of your marketing is a bad idea. At least, it’s a bad idea in the United States, where such tactics are unlikely to succeed, and are apt to backfire instead. Splunk seems to actually have had some limited success intimidating Sumo Logic. But it tried something similar against Rocana, and I was set up to potentially be collateral damage. I don’t think that’s working out very well for Splunk.

Specifically, Splunk sent a lawyer letter to Rocana, complaining about a couple of pieces of Rocana marketing collateral. Rocana responded publicly, and posted both the Splunk letter and Rocana’s lawyer response. The Rocana letter eviscerated Splunk’s lawyers on matters of law, clobbered them on the facts as well, exposed Splunk’s similar behavior in the past, and threw in a bit of snark at the end.

Now I’ll pile on too. In particular, I’ll note that, while Splunk wants to impose a duty of strict accuracy upon those it disagrees with, it has fewer compunctions about knowingly communicating falsehoods itself.

1. Splunk’s letter insinuates that Rocana might have paid me to say what I blogged about them. Those insinuations are of course false.

Splunk was my client for a lot longer, and at a higher level of annual retainer, than Rocana so far has been. Splunk never made similar claims about my posts about them. Indeed, Splunk complained that I did not write about them often or favorably enough, and on at least one occasion seemed to delay renewing my services for that reason.

2. Similarly, Splunk’s letter makes insinuations about quotes I gave Rocana. But I also gave at least one quote to Splunk when they were my client. As part of the process — and as is often needed — I had a frank and open discussion with them about my quote policies. So Splunk should know that their insinuations are incorrect.

3. Splunk’s letter actually included the sentences 

Splunk can store data in, and analyze data across, Splunk, SQL, NoSQL, and Hadoop data repositories. Accordingly, the implication that Splunk cannot scale like Hadoop is misleading and inaccurate.

I won’t waste the time of this blog’s readers by explaining how stupid that is, except to point out that I don’t think Splunk executes queries entirely in Hadoop. If you want to consider the matter further, you might consult my posts regarding Splunk HPAS and Splunk Hunk.

4. I and many other people have heard concerns about the cost of running Splunk for high volumes of data ingest. Splunk’s letter suggests we’re all making this up. This post suggests that Splunk’s lawyers can’t have been serious.


Categories: Other

Tech 15

Jonathan Lewis - Wed, 2015-11-25 07:00
Updated 27th Nov 2015:

Thanks for all the questions so far – there are plenty of useful topics coming out. At this rate I may have to pass some of these on to the round-table that Tony Hasler is running.

I’ve made a couple of comments in response, but generally I plan to avoid making any comments until after the panel.

Monday 7th Dec: 11:20 – 12:05

I’ve arranged a panel session on the Cost Based Optimizer for UKOUG Tech 2015, with Maria Colgan, Nigel Bayliss, and Chris Antognini joining me to answer (and maybe argue about) questions from the audience.

To keep things moving along, we aim to have a few questions available before the day and collect a few questions at the door as well as accepting questions from the floor. Martin Widlake and Neil Chandler will be acting as MCs, wielding the microphones, and collecting questions at the door.

So far we have a handful of questions – if you have a question you’d like to raise, add it to the comments below. Even if you can’t attend we may get to your question, and I may be able to write up a summary of questions and answers after the event.

Questions so far:

  • Why do queries that have been working fine for months suddenly change to a very slow plan ?
  • Are smaller, simpler queries less likely to have a bad plan or change from a good plan to a bad one ?
  • What is the down-side of using SQL Profiles to lock in plans ?
  • What’s the best way to collect stats on a partitioned table where you add a daily / weekly / monthly partition ?
  • Is there a good way to make sure you’ve collected stats for interval partitions when you don’t know they’ve been added ?
  • Why do the manuals talk about “prefixed” local indexes (on partitioned tables) as if they have some special effect ?

Oracle Midlands : Event #12 – Summary

Tim Hall - Tue, 2015-11-24 17:53

Tonight was Oracle Midlands Event #12 with Bryn Llewellyn and Neil Chandler.

Bryn very kindly stepped in for Tom Freyer who unfortunately couldn’t make it. Bryn’s presentation was called Why use PL/SQL?, which was based on his whitepaper of the same name. I’ve written about a number of the topics he raised, including a piece on My Utopian Development Environment, but his talk and his whitepaper put forward a much more consistent and compelling argument. Definitely worth checking out if you are a PL/SQL expert or newbie!

Along the way, he used a little piece of syntax I have never noticed before, which has been available for the whole 20+ years I’ve been using PL/SQL. I’ll write about that in a post tomorrow if I get a chance.

By the way, Bryn loves being called Brian, and he loves it when you say S.Q.L. and PL/S.Q.L. rather than Sequel and PL/Sequel. :)

Next up was Neil with a session called “Why has my plan changed?”. I’ve written about many of the issues discussed in this session, but Neil crystallised a whole array of points really well, and used live demos which I always like to see. Regardless of how well you think you know the subject, I would be surprised if you don’t walk away with a few new things, which you can later pretend you always knew. :)

Both guys will be presenting these sessions at UKOUG Tech15 if you want to catch up with them!

Thanks to the guys for coming to speak to us. Thanks to Bryn’s mom, for living in the UK, giving him an excuse to come to UKOUG Tech15 early and therefore be available to step in at short notice. Thanks as always to Red Stack Tech for sponsoring the event, allowing it to remain free, and to Mike for organising it!




What did you miss at Oracle OpenWorld regarding WebCenter, The Cloud, and Front-End Website and Portal Development? Find out in this webinar.

Fishbowl Solutions has been attending and sponsoring Oracle OpenWorld for many years, and every year we get questions from customers asking why they should attend, how Oracle OpenWorld differs from Collaborate, and, if they do attend, which sessions they should go to. Although there is no single answer to these questions, our general response is that OpenWorld provides so many continuous learning, networking, and fun events that going isn’t hard to justify. However, the old saying “you get out of it what you put into it” definitely applies to OpenWorld. The event is so big (60,000+ attendees), so busy (1,000+ sessions), and so much fun (Elton John concert) that first-time attendees and even alumni need to build out their schedules before the event. This helps ensure they go to the sessions that will teach them something new they can apply within their work, or that they learn about the road map for the product areas they manage so they can understand and plan for upgrades, new technology, or integrations.

Even with this proactive planning, attending OpenWorld just isn’t an option for all WebCenter customers, so my colleagues and I at Fishbowl will be hosting a webinar on December 10th that will give you a recap of what you missed as it applies to Oracle WebCenter 12c; The Cloud – Sites Cloud Service, Documents Cloud Service, Mobility Cloud Service, and Process Cloud Service; and new front-end development technologies that can be used for WebCenter-based websites and portals. The webinar will include a Q & A session, so even if we don’t cover a topic that you are interested in or have a question about, our WebCenter and Cloud experts will be available to fill you in. The webinar details are included below, and we hope you will be able to join us.

Date: December 10, 2015
Time: 1:00 – 2:00 PM EST, 12:00 – 1:00 PM CST

Register here.




Categories: Fusion Middleware, Other

Why Data Virtualization Is so Vital

Kubilay Çilkara - Tue, 2015-11-24 16:35
In today’s day and age, it probably seems like every week you hear about a new form of software you absolutely have to have. However, as you’re about to find out, data virtualization is actually worth the hype.
The Old Ways of Doing Things
Traditionally, data management has been a cumbersome process, to say the least. Usually, it means data replication, data management or using intermediary connectors and servers to pull off point-to-point integration. Of course, in some situations, it’s a combination of the three.

Like we just said, though, these methods were never really ideal. Instead, they were just the only options given the complete lack of alternatives available. That’s the main reason you’re seeing these methods less and less. The moment something better came along, companies jumped all over it. However, their diminishing utility can also be traced to three main factors. These would be:
  • High costs related to data movement
  • The astronomical growth in data (also referred to as Big Data)
  • Customers that expect real-time information
These three elements are probably fairly self-explanatory, but that last one is especially worth elaborating on. Customers these days really don’t understand why they can’t get all the information they want exactly when they want it. How could they possibly make sense of that when they can go online and get their hands on practically any data they could ever want thanks to the likes of Google? If you’re trying to explain to them that your company can’t do this, they’re most likely going to have a hard time believing you. Worse, they may believe you, but assume that the problem is specific to your company and that some competitor won’t have this issue.
Introducing Data Virtualization
It was only a matter of time before this problem was eventually addressed. Obviously, when so many companies are struggling with this kind of challenge, there’s quite the incentive for another one to solve it.

That’s where data virtualization comes into play. Companies that struggle with critical information spread out across their entire enterprise, in all kinds of formats and locations, no longer have to worry about the hardship of getting their hands on it. Instead, they can use virtualization platforms to search out what they need.
Flexible Searches for Better Results
It wouldn’t make much sense for this type of software to not have a certain amount of agility built in. After all, that’s sort of its main selling point. The whole reason companies invest in it is because it doesn’t get held back by issues with layout or formatting. Whatever you need, it can find.

Still, for best results, many platforms now offer a single interface that can be used to separate and extract aggregates of data in all kinds of ways. The end result is a flexible search which can be leveraged toward all kinds of ends. It’s no longer just about being able to find any type of information you need, but about finding it in the most efficient and productive way possible.
Keep Your Mainframe
One misconception that some companies have about data virtualization is that it will need certain adjustments to be made to your mainframe before it can truly be effective. This makes sense because, for many platforms, this is definitely the case; those tend to be earlier versions, though, or platforms that just aren’t of the highest quality.

With really good versions, though, you can basically transform your company’s mainframe into a virtualization platform. Such an approach isn’t just cost-effective. It also makes sure you aren’t wasting resources, including time, addressing the shortcomings of your current mainframe, something no company wants to do.

Don’t get turned off from taking a virtualization approach to your cache of data because you’re imagining a long list of chores that will be necessary for transforming your mainframe. Instead, just be sure you invest in a high-end version that will actually transform your current version into something much better.
A Better Approach to Your Current Mainframe
Let’s look at some further benefits that come from taking this approach. First, if the program you choose comes with the use of a high-performance server, you’ll immediately eliminate the redundancy of integrating from point-to-point. This will definitely give you better performance in terms of manageability. Plus, if you ever want to scale up, this will make it much easier to do so.

Proper data migration is key to a good virtualization process. If it is done right, the end user won’t have to worry about corrupted data, and communication between machines will be crystal clear. If you divert processing-intensive data mapping and transformation work away from the General Purpose Processor of your mainframe to the zIIP specialty engine, you’ll get to dramatically reduce your MIPS capacity usage and, therefore, also reduce your company’s TCO (Total Cost of Ownership).

Lastly, maybe you’d like to exploit every last piece of value you derive from your mainframe data. If so, good virtualization software will not only make this possible, but do so in a way that lets you turn all of your non-relational mainframe data into relational formats that any business analytics or intelligence application can use.
Key Features to Look for in Your Virtualization Platform
If you’re now sold on the idea of investing in a virtualization platform, the next step is getting smart about what to look for. As you can imagine, you won’t have trouble finding a program to buy, but you want to make sure it’s actually going to be worth every penny.

The first would be, simply, the number of data providers supported. You want to be able to address everything from big data to machine data to syslogs, distributed and mainframe. Obviously, this will depend a lot on your current needs, but think about the future too.

Then, there’s the same to think about in terms of data consumers. We’re talking about the cloud, analytics, business intelligence and, of course, the web. Making sure you will be able to stay current for some time is very important. Technology changes quickly, and the better your virtualization platform is, the longer you’ll have before having to upgrade. Look closely at the migration process, and at whether or not the provider can work with your IT team to increase workflow. This will help your company get back on track more quickly and with better results.

Finally, don’t forget to look at costs, especially where scalability is concerned. If you have plans of getting bigger in the future, you don’t want that to require a burdensome investment. As you can see, virtualization platforms definitely live up to the hype. You just have to be sure you spend your money on the right kind.
Mike Miranda writes about enterprise software and covers products offered by software companies like Rocket Software, on topics such as terminal emulation, enterprise mobility and more.
Categories: DBA Blogs

Advanced Compression Option Caveat in Oracle 12c

Pythian Group - Tue, 2015-11-24 12:18


Oracle 12c introduced a new capability to move a partition online, without any interruptions to DML happening at the same time. But, there’s a catch. So far we’ve been able to use basic table compression without having to worry about any extra licensing – it was just a plain EE feature.

If you are planning to use the online partition move functionality, carefully check whether you are using basic compression anywhere. For example:

create tablespace data datafile '+DATA' size 1g;

create user foo identified by bar
default tablespace data
quota unlimited on data;

grant create session, create table to foo;

connect foo/bar

create table test (x int, y varchar2(20))
partition by range (x)
(
partition p1 values less than (100) tablespace data compress,
partition p2 values less than (200) tablespace data,
partition p3 values less than (300) tablespace data
);

So we now have this, and our licensing is still as we know it:

select partition_name, compression, compress_for from user_tab_partitions;

PARTITION_NAME                 COMPRESS COMPRESS_FOR
------------------------------ -------- ------------------------------
P1                             ENABLED  BASIC
P2                             DISABLED
P3                             DISABLED

We can use the new feature on partition p3:

alter table test move partition p3 online;

Or, we can use the traditional means to compress the partition p2:

alter table test move partition p2 compress;

But as soon as we do this move “online”, we are required to purchase the Advanced Compression Option:

alter table test move partition p2 compress online;

And, even sneakier:
alter table test move partition p1 online;

Notice how partition p1 – which was previously compressed – also was online moved to a compressed format:

select partition_name, compression, compress_for from user_tab_partitions;

PARTITION_NAME                 COMPRESS COMPRESS_FOR
------------------------------ -------- ------------------------------
P1                             ENABLED  BASIC
P2                             ENABLED  BASIC
P3                             DISABLED


And that, therefore, required the Advanced Compression Option.

Also note that this usage is not caught by dba_feature_usage_statistics (tested on a 12c database):

select name, currently_used from dba_feature_usage_statistics where lower(name) like '%compress%';

NAME                                                             CURRE
---------------------------------------------------------------- -----
Oracle Advanced Network Compression Service FALSE
Backup ZLIB Compression FALSE
Backup BZIP2 Compression FALSE
Backup BASIC Compression FALSE
Backup LOW Compression FALSE
Backup MEDIUM Compression FALSE
Backup HIGH Compression FALSE
Segment Maintenance Online Compress FALSE
Compression Advisor FALSE
SecureFile Compression (user) FALSE
SecureFile Compression (system) FALSE
HeapCompression FALSE
Advanced Index Compression FALSE
Hybrid Columnar Compression FALSE
Hybrid Columnar Compression Row Level Locking FALSE

15 rows selected.

I also tried to bounce the database and the data wasn’t updated in my tests. I would’ve expected this to show up under “Segment Maintenance Online Compress”, but in my tests, it did not.
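
Feature usage data is only refreshed by a periodic background sample (roughly weekly), which may explain part of this. In tests, people sometimes force a refresh with the undocumented internal call sketched below; it is unsupported, so treat it as an assumption to verify on a test database, not a recommendation:

-- Undocumented, unsupported internal call; test databases only.
exec dbms_feature_usage_internal.exec_db_usage_sampling(sysdate)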

This feature restriction isn’t documented anywhere in the official product documentation – at least not that I could find. The only place where I could find this information was in this Oracle document.
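
If you want to scan an existing database for basic compression before relying on online partition moves, a minimal sketch along these lines (standard DBA_* dictionary views; run with DBA privileges) is a reasonable starting point:

-- Tables and partitions currently using basic table compression.
select owner, table_name, cast(null as varchar2(128)) partition_name, compress_for
from   dba_tables
where  compression = 'ENABLED'
and    compress_for = 'BASIC'
union all
select table_owner, table_name, partition_name, compress_for
from   dba_tab_partitions
where  compression = 'ENABLED'
and    compress_for = 'BASIC';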


Discover more about our experience in the world of Oracle.

Categories: DBA Blogs

MobaXterm 8.4

Tim Hall - Tue, 2015-11-24 10:04

While I was in India, MobaXterm 8.4 was released.

Downloads and changelog in the usual places.

This is a must for Windows users who use SSH and X Emulation!




AIOUG Sangam15 : It’s a Wrap!

Tim Hall - Tue, 2015-11-24 07:59

The whole Sangam15 experience is over for me now!

Here are the posts I put out during the trip.

I did my thank you messages in the Day 2 post, but once again I would like to say a big thank you to AIOUG for inviting me, to the Oracle ACE Program for getting me to the event and to all the attendees for making it a great event and for making my Facebook go nuts! :)

See you all soon!




AIOUG Sangam15 : The Journey Home

Tim Hall - Tue, 2015-11-24 07:40

The night before a morning flight is always a little tricky for me. I lie in bed thinking, “What if I oversleep?”, which winds me up and makes it really hard to relax and drop off. I dozed on and off, but eventually gave up, turned the TV on and watched some films for a few hours.

I grabbed a quick breakfast and got the taxi to the airport. The hotel staff advised leaving at 07:00 for a 10:15 flight. The traffic was very light and the queues for check-in and security very small, so I ended up sitting at the boarding gate two hours before the flight. Better to be early than late!

Hyderabad airport is very clean, modern and has some reclining seats so I was able to chill out for a while. :)

The flight from Hyderabad to Dubai took about 3.5 hours. It was relatively uneventful, but rather uncomfortable. I had an aisle seat, but the guy in the centre decided he wanted to sit with his elbows digging into my ribs and his leg pressed against mine. I wouldn’t mind so much if he was a big guy and couldn’t help it, but he was a skinny little thing, so taking all of his space and part of mine seemed unnecessary. :)

As soon as we landed in Dubai airport, I grabbed some food and headed off for the boarding gate. Very soon we were on the plane and heading to Birmingham. I enquired about an upgrade, but it was going to cost $2,500 USD, so that wasn’t going to happen. :)

I got to experience wifi on the plane for the first time. It’s been “available” on other flights, but I’ve never got it to work before. I got it working on the flight between Dubai and Birmingham. It was ultra-slow, but better than nothing and only $1. :) It made the 7 hour flight go a lot quicker.

I arrived back in Birmingham safe and sound. After a quick taxi ride home I put my washing on, scrubbed the smell of aeroplane off me and went to bed!




Sangam 15

Amardeep Sidhu - Tue, 2015-11-24 07:34

This was my 6th year at Sangam and, as always, it was good fun. We were a group of 4 people travelling from Delhi, and we reached Hyderabad on Friday morning. We wanted to keep a day for visiting Ramoji Film City, and also wanted to avoid the rush that morning travel on the conference’s starting day brings. So after dropping the luggage at the hotel we hired a taxi and headed to Ramoji Film City. It is a huge place, and it is tiring to move around checking everything, but fortunately the weather that day was very pleasant, so moving around was good fun. We took a ride they call the Space Walk and watched a few sets where some movies were shot. They also have a pretty good bird sanctuary with a good number of beautiful birds. Spending time there was nice and fun.

By 7 PM or so we were done with everything and started back to the hotel. As it was dinner time already, we headed directly to Paradise and had some awesome Biryani.

Saturday was the first day of the conference. We reached the venue by 8:30 AM and the registration was pretty quick. Before the technical sessions started at 10 AM, we had plenty of time to move around and meet folks, especially those we know online but had never met in person. For me it was my chance to meet Tim Hall in person for the first time. Simply put, Tim is brilliant. His website is an inspiration for many bloggers. It was great meeting Tim in person and striking up a few conversations about various technologies.

Also met Kamran for the first time in person. Been connected to him on social media for quite some time now. It was great catching up with you mate.

I had last met Francisco at Sangam 10, and this year I got a chance to meet him again. The second question (first was how is job

Categories: BI & Warehousing

Table Expansion

Jonathan Lewis - Tue, 2015-11-24 06:13

I’ve often found that while I’m investigating one Oracle feature I get waylaid by noticing anomalies in other parts of the code. I was caught by one of these events a little while ago while experimenting with the new Inmemory Columnar Store. After reading a posting by Martin Bach I asked the question:

“If you have a partitioned table with a local index and one of the table partitions has been declared INMEMORY, would a query that could use that index be able to apply table expansion to produce a plan that did a tablescan on the in-memory partition and an indexed access path on the partitions that weren’t in-memory?”

The answer was yes, with one important caveat – the first test I built to investigate the question did something very silly and produced the wrong results. In fact the error turned out to have nothing to do with the inmemory columnar store; it also appeared when I switched the inmemory feature off and nudged the optimizer into table expansion by making one of the index partitions unusable. Here’s the demo, followed by a few notes, if you want to test it for yourselves:

create table t1 (
        id,
        n1,
        padding
)
partition by range (id) interval (250000) (
        partition p_start values less than (250000)
        segment creation immediate
        inmemory priority critical
)
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum - 1              id,
        mod(rownum - 1,1250)    n1,             -- 200 rows per segment
        rpad('x',400,'x')       padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

create index t1_i1 on t1(n1) local nologging;

I’ve created an interval-partitioned table holding a million rows with an interval of 250,000, and I’ve defined the first partition with an upper bound of 250,000 and the inmemory attribute, leaving Oracle to add the other three partitions which won’t have the inmemory attribute set.

I’ve created a local index on the n1 column, with 200 rows (1,250 distinct values) per partition. Because of the way I’ve defined n1 I can easily recreate the table to adjust the relative cost of the indexed access path and the tablescan path by adjusting the value I use in the mod() function. The use of the mod() function also means that every partition holds the same volume of data (with exactly the same degree of scattering) for any specific value of n1.
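
As a quick sanity check of that distribution (just a sketch, using partition-extended table names against the script above):

select count(*) total_rows, count(distinct n1) distinct_n1
from   t1 partition (p_start);          -- expect 250,000 rows, 1,250 distinct values

select count(*) rows_for_one_value
from   t1 partition (p_start)
where  n1 = 1;                          -- expect 200 rows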

To test the option for table expansion I’ve then checked the execution plan for (and run) four queries:

select
        /*+ full(t1) */
        id
from    t1
where   n1 = 1
and     id < 250000
;

select
        /*+ index(t1) */
        id
from    t1
where   n1 = 1
and     id < 250000
;

select
        /*+ full(t1) */
        id
from    t1
where   n1 = 1
and     id >= 250000
and     id <  500000
;

select
        id
from    t1
where   n1 = 1
;
The first two queries give me the cost of accessing the inmemory partition by tablescan compared to the cost of accessing it by index. The third query is there to demonstrate that the non-inmemory tablescan is a LOT more expensive than the inmemory one. The final query accesses all four partitions to see if the inmemory partition is treated differently from the other three. Here are the four plans:

Default tablescan when limited to the in-memory partition
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
|   0 | SELECT STATEMENT            |      |   200 |  1800 |   184   (9)| 00:00:01 |       |       |
|   1 |  PARTITION RANGE SINGLE     |      |   200 |  1800 |   184   (9)| 00:00:01 |     1 |     1 |
|*  2 |   TABLE ACCESS INMEMORY FULL| T1   |   200 |  1800 |   184   (9)| 00:00:01 |     1 |     1 |

Predicate Information (identified by operation id):
   2 - inmemory("N1"=1 AND "ID"<250000)
       filter("N1"=1 AND "ID"<250000)

Index access is more expensive than the inmemory tablescan
| Id  | Operation                                  | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
|   0 | SELECT STATEMENT                           |       |   200 |  1800 |   201   (0)| 00:00:01 |       |       |
|   1 |  PARTITION RANGE SINGLE                    |       |   200 |  1800 |   201   (0)| 00:00:01 |     1 |     1 |
|*  2 |   TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1    |   200 |  1800 |   201   (0)| 00:00:01 |     1 |     1 |
|*  3 |    INDEX RANGE SCAN                        | T1_I1 |   200 |       |     1   (0)| 00:00:01 |     1 |     1 |

Predicate Information (identified by operation id):
   2 - filter("ID"<250000)
   3 - access("N1"=1)

Tablescan on a non-inmemory partition is much higher than inmemory (and indexed access)
| Id  | Operation             | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
|   0 | SELECT STATEMENT      |       |   200 |  1800 |   1891  (2)| 00:00:01 |       |       |
|   1 | PARTITION RANGE SINGLE|       |   200 |  1800 |   1891  (2)| 00:00:01 |     2 |     2 |
|*  2 | TABLE ACCESS FULL     | T1    |   200 |  1800 |   1891  (2)| 00:00:01 |     2 |     2 |

Predicate Information (identified by operation id):
   2 - filter("N1"=1 AND "ID">=250000 AND "ID"<500000)

Access all four partitions - no table expansion. Cost = 4 x cost of single partition indexed access path.
| Id  | Operation                                  | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
|   0 | SELECT STATEMENT                           |       |   800 |  7200 |   807   (1)| 00:00:01 |       |       |
|   1 |  PARTITION RANGE ALL                       |       |   800 |  7200 |   807   (1)| 00:00:01 |     1 |1048575|
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1    |   800 |  7200 |   807   (1)| 00:00:01 |     1 |1048575|
|*  3 |    INDEX RANGE SCAN                        | T1_I1 |   800 |       |     6   (0)| 00:00:01 |     1 |1048575|

Predicate Information (identified by operation id):
   3 - access("N1"=1)

So we can see that the inmemory tablescan of a single segment is cheaper than the indexed access path, and much cheaper than the tablescan of a single non-inmemory segment. When we execute the query that accesses all four segments, though, the optimizer chooses to use the same access path for all four partitions rather than splitting the query into one tablescan and three indexed accesses.

This is just a starting point for testing though – there are transformations where the optimizer will only use a particular transformation if the transformed query has a cost that is “X%” lower than it would be without the transformation (and some of these transformations have associated parameters – usually hidden – that can be set to adjust the percentage). Perhaps if I adjusted the data so that the relative benefit of a single inmemory scan was larger; or if I created a table with more partitions and had two inmemory and three not inmemory, or 3 inmemory and 4 not inmemory; or if I had more inmemory partitions than not then maybe the optimizer would spontaneously do table expansion.
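
As an aside, the table expansion transformation itself can be switched on and off with a hidden parameter – undocumented and version-specific, so the usual caveats apply and this is only a sketch for test systems:

alter session set "_optimizer_table_expansion" = false;

The same effect is available per statement with the (equally undocumented) no_expand_table(t1) hint, the negative form of the expand_table(t1) hint used below.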

Rather than fiddle about with the data my next step was to hint the final query with /*+ expand_table(t1) */. Here’s the resulting execution plan:

| Id  | Operation                                    | Name    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
|   0 | SELECT STATEMENT                             |         |       |       |  1797 (100)|          |       |       |
|   1 |  VIEW                                        | VW_TE_1 |  1600 | 20800 |  1797   (1)| 00:00:01 |       |       |
|   2 |   UNION-ALL                                  |         |       |       |            |          |       |       |
|   3 |    PARTITION RANGE SINGLE                    |         |   200 |  1800 |   184   (9)| 00:00:01 |     1 |     1 |
|*  4 |     TABLE ACCESS INMEMORY FULL               | T1      |   200 |  1800 |   184   (9)| 00:00:01 |     1 |     1 |
|   5 |    PARTITION RANGE ITERATOR                  |         |   600 |  5400 |   806   (1)| 00:00:01 |     2 |     4 |
|*  6 |     TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1      |   600 |  5400 |   806   (1)| 00:00:01 |     2 |     4 |
|*  7 |      INDEX RANGE SCAN                        | T1_I1   |   800 |       |     5   (0)| 00:00:01 |     2 |     4 |
|   8 |    PARTITION RANGE INLIST                    |         |   800 | 16800 |   807   (1)| 00:00:01 |       |       |
|   9 |     TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1      |   800 | 16800 |   807   (1)| 00:00:01 |       |       |
|* 10 |      INDEX RANGE SCAN                        | T1_I1   |   800 |       |     6   (0)| 00:00:01 |       |       |

Predicate Information (identified by operation id):
   4 - inmemory(("N1"=1 AND "T1"."ID"<250000))
       filter(("N1"=1 AND "T1"."ID"<250000)) 6 - filter(("T1"."ID">=250000 AND "T1"."ID"<1000000))
   7 - access("N1"=1)
  10 - access("N1"=1)

Points to notice:
Table expansion has occurred – we did an inmemory full tablescan on the inmemory partition at a cost of 184, and index range scans on the other three partitions at a cost of 805 (which looks more like the cost of visiting 4 partitions), expecting 800 rowids from the local index (a failure of partition elimination) and 600 rows from the table. The partition start/stop columns do show that the optimizer expects to visit just the three correct partitions, although some of the cost and cardinality numbers seem to be appropriate to 4 partitions.

Worse still we see a third branch to the UNION ALL / table expansion – operations 8 to 10 – which don’t report a partition start and stop. What’s that doing? Sometimes, of course, we see sections of plan that don’t actually run – usually preceded by a FILTER operation that can evaluate to FALSE – so maybe that’s what’s happening here. Re-running the query with rowsource execution stats enabled showed that the PARTITION RANGE INLIST started once, and operations 9 and 10 didn’t operate at all – so that’s okay.
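
(For reference, the usual way to capture those rowsource execution statistics is the gather_plan_statistics hint followed by dbms_xplan.display_cursor:)

select /*+ expand_table(t1) gather_plan_statistics */ id from t1 where n1 = 1;

select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));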

But what happens if I execute a query that should only access the first two partitions ? Here’s the run-time plan to answer that question:

SQL_ID  5dgp982ffsph8, child number 0
select  /*+ expand_table(t1) gather_plan_statistics */         id from
       t1 where  n1 = 1 and id < 500000

Plan hash value: 2876620067

| Id  | Operation                                    | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
|   0 | SELECT STATEMENT                             |         |      1 |        |    800 |00:00:00.05 |     649 |
|   1 |  VIEW                                        | VW_TE_1 |      1 |   1200 |    800 |00:00:00.05 |     649 |
|   2 |   UNION-ALL                                  |         |      1 |        |    800 |00:00:00.01 |     649 |
|   3 |    PARTITION RANGE SINGLE                    |         |      1 |    200 |    200 |00:00:00.01 |       3 |
|*  4 |     TABLE ACCESS INMEMORY FULL               | T1      |      1 |    200 |    200 |00:00:00.01 |       3 |
|   5 |    PARTITION RANGE SINGLE                    |         |      1 |    200 |    200 |00:00:00.01 |     215 |
|*  6 |     TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1      |      1 |    200 |    200 |00:00:00.01 |     215 |
|*  7 |      INDEX RANGE SCAN                        | T1_I1   |      1 |    200 |    200 |00:00:00.01 |      15 |
|   8 |    PARTITION RANGE INLIST                    |         |      1 |    800 |    400 |00:00:00.01 |     431 |
|   9 |     TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1      |      2 |    800 |    400 |00:00:00.01 |     431 |
|* 10 |      INDEX RANGE SCAN                        | T1_I1   |      2 |    800 |    400 |00:00:00.01 |      31 |

Predicate Information (identified by operation id):
   4 - inmemory(("N1"=1 AND "T1"."ID"<250000))
       filter(("N1"=1 AND "T1"."ID"<250000))
   6 - filter(("T1"."ID"<500000 AND "T1"."ID">=250000))
   7 - access("N1"=1)
  10 - access("N1"=1)

My query should have returned 400 rows, accessing only the first two partitions. It returned 800 rows, accessing all 4 partitions. (Notice how there are no predicates on ID for operations 9 and 10).

Since I had started my investigation with the inmemory option I assumed at this point that the problem was somehow related to that feature; which shows how easy it is to get too focused and jump to conclusions. After raising the problem with Oracle I got a reply that the problem wasn’t about the inmemory columnar store – and here’s the next little change to test to demonstrate that point:

alter index t1_i1 indexing partial;

alter table t1 modify partition p_start indexing off;
alter table t1 modify partition p_start no inmemory;

select  partition_name, indexing, inmemory 
from    user_tab_partitions 
where   table_name = 'T1';

select  partition_name, segment_created    
from    user_ind_partitions 
where   index_name = 'T1_I1';

Enable partial indexing for the index, switch off indexing on the first partition, then disable the inmemory option for that partition. This is the plan I got from re-running the two-partition query:

| Id  | Operation                                    | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
|   0 | SELECT STATEMENT                             |         |      1 |        |    800 |00:00:00.08 |   15370 |  14706 |
|   1 |  VIEW                                        | VW_TE_1 |      1 |   1200 |    800 |00:00:00.08 |   15370 |  14706 |
|   2 |   UNION-ALL                                  |         |      1 |        |    800 |00:00:00.07 |   15370 |  14706 |
|   3 |    PARTITION RANGE SINGLE                    |         |      1 |    200 |    200 |00:00:00.01 |     216 |      0 |
|*  4 |     TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1      |      1 |    200 |    200 |00:00:00.01 |     216 |      0 |
|*  5 |      INDEX RANGE SCAN                        | T1_I1   |      1 |    200 |    200 |00:00:00.01 |      16 |      0 |
|   6 |    PARTITION RANGE SINGLE                    |         |      1 |    200 |    200 |00:00:00.04 |   14723 |  14706 |
|*  7 |     TABLE ACCESS FULL                        | T1      |      1 |    200 |    200 |00:00:00.04 |   14723 |  14706 |
|   8 |    PARTITION RANGE INLIST                    |         |      1 |    800 |    400 |00:00:00.01 |     431 |      0 |
|   9 |     TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1      |      2 |    800 |    400 |00:00:00.01 |     431 |      0 |
|* 10 |      INDEX RANGE SCAN                        | T1_I1   |      2 |    800 |    400 |00:00:00.01 |      31 |      0 |

Predicate Information (identified by operation id):
   4 - filter(("T1"."ID"<500000 AND "T1"."ID">=250000))
   5 - access("N1"=1)
   7 - filter(("N1"=1 AND "T1"."ID"<250000))
  10 - access("N1"=1)

Again I accessed all 4 partitions and returned 800 rows. As an odd little detail the plan reversed the order of access of partitions 1 and 2. The problem isn’t about the inmemory option, it’s a problem with generic table expansion.

Continuing Tests
I won’t go into all the details of what I did next – once you get started it’s hard to stop, and easy to lose track of what you’ve done. So here’s a quick summary.

Given that the problem appeared without the inmemory columnar store enabled, I switched back to 11g (where table expansion is also possible) and emulated the problem by setting the first index partition unusable (since partial indexing is a 12c feature). Table expansion did not occur even when hinted: the plan was a full tablescan on both partitions.

So I switched the table from being interval partitioned to simple range partitioned, creating all 4 partitions as I created the data. In 11g I got table expansion and the correct answer without the extra branch to the union all; so I went back to 12c and did the same – pure range partitioning, partial indexing, and got table expansion with the correct result and no spare branches to the plan.
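
For reference, here is a sketch of the range-partitioned definition – my reconstruction of the change, not the original script; the data generation is unchanged from the interval-partitioned version above:

create table t1 (
        id,
        n1,
        padding
)
partition by range (id) (
        partition p1 values less than ( 250000),
        partition p2 values less than ( 500000),
        partition p3 values less than ( 750000),
        partition p4 values less than (1000000)
)
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum - 1              id,
        mod(rownum - 1,1250)    n1,
        rpad('x',400,'x')       padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
;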

Tentative Conclusion

Clearly my testing is by no means exhaustive – but 12c seems to allow table expansion for interval partitioning in cases where 11g does not; unfortunately it is possible for the table expansion code to go wrong for interval partitioning in cases where simple range partitioning does not. In my case this led to wrong results.


Reference Script: in_memory_table_expansion_*.sql