Feed aggregator

Compression -- 6 : Advanced Index Compression

Hemant K Chitale - Tue, 2016-05-03 08:23
Earlier, I had covered Index (Key) Compression which is included in the Enterprise Edition.

In Key Compression, the DBA must specify the Prefix (i.e. the number of leading columns of a composite index) to be used as the compression key. Advanced Index Compression does not require the DBA to identify the prefix key length manually; it auto(magically) identifies the optimal prefix columns.
(Note : Advanced Index Compression requires the Advanced Compression licence option and is available only in 12.1.0.2 and higher)

SQL> create table target_data as select * from source_data where 1=2;

Table created.

SQL> insert /*+ APPEND */ into target_data select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ APPEND */ into target_data select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ APPEND */ into target_data select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.


Creating a Key Compression Index by specifying the Prefix size (the leading 2 columns) :

SQL> create index target_data_ndx_1_comp on
2 target_data (owner, object_type, object_name) compress 2;

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_1_COMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_1_COMP'
4 /



Note how I specified "2" as the Prefix size as I want to compress on repeated values of (OWNER, OBJECT_TYPE).

Using Advanced Index Compression by specifying "COMPRESS ADVANCED LOW" and letting Oracle decide on the compression strategy in each leaf block :

SQL> drop index target_data_ndx_1_comp;

Index dropped.

SQL> create index target_data_ndx_2_advcomp on
2 target_data (owner, object_type, object_name)
3 compress advanced low;

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_2_ADVCOMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_2_ADVCOMP'
4 /



Wow, that's significantly smaller.  What's more, I did not have to spend time analyzing the data and the index definition to identify the "correct" Prefix size.

However, I could also specify the entire composite key as the Prefix, although that is not what I would have done in earlier versions, since identifying the right Prefix size requires analyzing the data.

SQL> create index target_data_ndx_1_comp on
2 target_data (owner, object_type, object_name) compress 3
3 /

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_1_COMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_1_COMP'
4 /



So, the Advanced option allows me to let Oracle automatically decide the appropriate mechanism to compress the Index keys.

UPDATE : Also see the subsequent test with a reversal of the columns in the composite index.

Just for comparison, here is a regular index :

SQL> drop index target_data_ndx_2_advcomp;

Index dropped.

SQL> create index target_data_ndx_3_nocomp on
2 target_data (owner, object_type, object_name)
3 /

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_3_NOCOMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_3_NOCOMP'
4 /



That is a much larger regular index!

Categories: DBA Blogs

Thanks Oracle, for making my life easier!

Jeff Moss - Tue, 2016-05-03 05:11

So, I’m really pleased to see that Oracle has acquired Opower. Why? Well, for the last few months I’ve occasionally had to do an extract for Opower….and another for Oracle on the same stuff…..hopefully I’ll only need to do the one extract from now on…yay!

Oracle’s New Data Visualization Desktop

Rittman Mead Consulting - Tue, 2016-05-03 04:00

A recent addition to the Oracle lineup of visualization tools is the Oracle Data Visualization Desktop. Described by Oracle as a “single user desktop application that provides Oracle Data Visualization functionality to business users,” DVD is an easy-to-install data visualization tool for Windows 7, 8 or 10 that packs some very powerful features.

I recently had a chance to sit down and explore DVD and wanted to share some of my first impressions.

At its core, DVD is a stand-alone version of Oracle's Data Visualization Cloud Service (DVCS). If you are at all familiar with Visual Analyzer, you will feel right at home.


Installation was a breeze on my Windows 10 VM; it took only about 5 minutes and required no additional software or plugins for the standard VA functionality.

After installation, launching DVD is as easy as clicking on the desktop icon like any other stand-alone application.


After DVD started up, I was presented with a home screen which contains a search field for finding projects, a list of user folders and a main window to select individual visualizations that have been created.


Clicking on the hamburger icon in the top left corner brings up a menu where I can choose to start Visual Analyzer with the last data source selected, select new data sources or create a new VA project.


I chose to create a new VA project and selected the sample data from Oracle (the sample data is an optional install chosen during the DVD install process). Creating a dashboard was a fairly straightforward process. Following Visual Analyzer’s functionality of dragging and dropping columns, I was able to put together a simple sales and profit dashboard in a few minutes.


While creating my dashboard, I noticed that Oracle has included some new visualization types. You can now choose Scatter (Cat.), Stacked Scatter (Cat.), Donut or Sunburst visualizations.


One other feature that Oracle added to DVD is the ability to insert images onto the dashboards. You can choose to upload your own image or link to a URL to pull images from the web.


I uploaded an image and changed the canvas layout to freeform, which allowed me to move the image anywhere on the dashboard. By adjusting the transparency, it is possible to have the image underlay the entire dashboard and still be able to see the visualizations. This example is pretty extreme; in a real-world scenario, caution should be used so as not to obstruct the visualizations.


Next I decided to try to connect to my Oracle 12c sample database to pull in some new data to work with. Selecting “Create New Datasource” from the menu prompted me with three options: create from a file, from an existing app or from a database.


Clicking on the “From Database” option, I was presented with a connection screen.


On this screen I discovered one of the most impressive things about DVD. Clicking on "Database Type" reveals a dropdown menu from which you can choose a variety of database types, including Spark, Hive and MongoDB, among others.

That’s awesome.

Because I already had 12c DB installed, I selected the Oracle Database Type and entered all my connection information.

Once a connection to the database is made, it shows up in the available connections list. Clicking on my sample database brought up a list of available schemas to choose from. In this case, I chose the sample HR schema which then brings up a list of tables available to add as data sources for visualizations.


I chose to add EMPLOYEES, JOBS and LOCATIONS and then started a new VA project. The HR tables now show up in the list of available data sources.


I selected EMPLOYEES and JOBS and, within seconds, was able to create a simple table showing a list of employee names, their job titles, salaries and commission percentages.


As you can see, adding new data sources is quick and easy and allows users to explore their data and create meaningful visualizations from that data in a very short amount of time.

Another feature is the Advanced Analytics portion of Oracle Data Visualization Desktop. This feature, which uses R, gives users the ability to do things like highlight outliers or show trend lines with a click of a button.


This feature does require an optional install located within the DVD application folder. The install process proved once again to be very quick and easy and completed in about 5 minutes.


After the installation was complete, I created a new VA project. Choosing the sample data provided by Oracle for DVD, I created a quick scatter chart and then, by right clicking anywhere on the visualization, clicked “Add Outliers.”


As you can see, outliers and non-outliers are easily distinguishable by the color key that DVD assigned automatically.

Next, I wanted to see if I could change some of the colors in my visualization. DVD allows you to do this under the visualization menu.


As with OBIEE, DVD supports entering specific hex values as well as selecting from pre-made color palettes.

Using the same right-click functionality that I used for adding outliers, I was able to add a polynomial trend line to show gains and losses.


Next, I decided to see if I could export this data and import it into Excel. Choosing export from the visualization menu, I was able to easily export the data as a .CSV and upload it into Excel.


Overall, Oracle Data Visualization Desktop is a very impressive new addition to the DVCS lineup. The ability to collect data from multiple sources, its native adaptors for a variety of popular databases, and the ability to manipulate visualizations to convey the data in creative ways make it a strong contender against Tableau and Wave. It requires no remote server infrastructure and is a solid business solution for users who want Oracle Data Visualization functionality in a small and easily accessible package.

I feel as though I have just scratched the surface of everything this tool can do. Check back for future blogs and articles as we at Rittman Mead continue to explore the possibilities of DVD. The future of data visualization may be closer than we think.

If you would like more information about Visual Analyzer or the Oracle Cloud Service, see this blog post by Mark Rittman.

If you would like to watch the official Tech Demo of DVD, you can find it here.

Rittman Mead also offers in depth professional training courses for OBIEE 12c and Visual Analyzer.

The post Oracle’s New Data Visualization Desktop appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing


Jonathan Lewis - Tue, 2016-05-03 01:11

The OTN database forum supplied a little puzzle a few days ago – starting with the old, old, question: “Why is the plan with the higher cost taking less time to run?”

The standard (usually correct) answer to this question is that the optimizer doesn’t know all it needs to know to predict what’s going to happen, and even if it had perfect information about your data the model used isn’t perfect anyway. This was the correct answer in this case, but with a little twist in the tail that made it a little more entertaining. Here’s the query, with the two execution plans and the execution statistics from autotrace:

        client_account d, 
        client         e, 
        account        a
        A.acct_nb = '00000000000000722616216'


Plan (A) with a full tablescan of client_account – cost 808, runtime 1.38 seconds, buffer gets 17,955

| Id | Operation                      | Name           | Rows  | Bytes  | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT               |                |     1 |    59  |   808 (14) | 00:00:10 |
|  1 |  NESTED LOOPS                  |                |     1 |    59  |   808 (14) | 00:00:10 |
|  2 |   NESTED LOOPS                 |                |     1 |    59  |   808 (14) | 00:00:10 |
|* 3 |    HASH JOIN                   |                |     1 |    42  |   806 (14) | 00:00:10 |
|  4 |     TABLE ACCESS BY INDEX ROWID| ACCOUNT        |     1 |    30  |     5  (0) | 00:00:01 |
|* 5 |      INDEX RANGE SCAN          | XAK1ACCOUNT    |     1 |        |     4  (0) | 00:00:01 |
|  6 |     TABLE ACCESS FULL          | CLIENT_ACCOUNT |  9479K|   108M |   763 (10) | 00:00:09 |
|* 7 |    INDEX UNIQUE SCAN           | XPKCLIENT      |     1 |        |     1  (0) | 00:00:01 |
|  8 |   TABLE ACCESS BY INDEX ROWID  | CLIENT         |     1 |    17  |     2  (0) | 00:00:01 |

     0  recursive calls
     0  db block gets
 17955  consistent gets
     0  physical reads
     0  redo size
   623  bytes sent via SQL*Net to client
   524  bytes received via SQL*Net from client
     2  SQL*Net roundtrips to/from client
     0  sorts (memory)
     0  sorts (disk)
     1  rows processed

Plan (B) with an index fast full scan on a client_account index – cost 1,190, runtime 0.86 seconds, buffer gets 28,696

| Id | Operation                      | Name              | Rows  | Bytes  | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT               |                   |     1 |    59  |  1190  (8) | 00:00:14 |
|  1 |  NESTED LOOPS                  |                   |     1 |    59  |  1190  (8) | 00:00:14 |
|  2 |   NESTED LOOPS                 |                   |     1 |    59  |  1190  (8) | 00:00:14 |
|* 3 |    HASH JOIN                   |                   |     1 |    42  |  1188  (8) | 00:00:14 |
|  4 |     TABLE ACCESS BY INDEX ROWID| ACCOUNT           |     1 |    30  |     5  (0) | 00:00:01 |
|* 5 |      INDEX RANGE SCAN          | XAK1ACCOUNT       |     1 |        |     4  (0) | 00:00:01 |
|  6 |     INDEX FAST FULL SCAN       | XPKCLIENT_ACCOUNT | 9479K |   108M |  1145  (5) | 00:00:13 |
|* 7 |    INDEX UNIQUE SCAN           | XPKCLIENT         |     1 |        |     1  (0) | 00:00:01 |
|  8 |   TABLE ACCESS BY INDEX ROWID  | CLIENT            |     1 |    17  |     2  (0) | 00:00:01 |

     0  recursive calls
     0  db block gets
 28696  consistent gets
     0  physical reads
     0  redo size
   623  bytes sent via SQL*Net to client
   524  bytes received via SQL*Net from client
     2  SQL*Net roundtrips to/from client
     0  sorts (memory)
     0  sorts (disk)
     1  rows processed

Note, particularly, that the two plans are the same apart from operation 6 where a full tablescan changes to an index fast full scan, predicting the same number of rows but with an increase of 50% in the cost; the increase in cost is matched by an increase in the reported workload – a 60% increase in the number of consistent reads and no disk reads or recursive SQL in either case. Yet the execution time (on multiple repeated executions) dropped by nearly 40%.

So what’s interesting and informative about the plan?

The cost of a tablescan or an index fast full scan is easy to calculate; broadly speaking it’s “size of object” / “multiblock read count” * k, where k is some constant relating to the hardware capability. The costs in these plans and the autotrace statistics seem to be telling us that the index is bigger than the table, while the actual run times seem to be telling us that the index has to be smaller than the table.

It’s easy for an index to be bigger than its underlying table, of course; for example, if this table consisted of nothing but two short columns the index could easily be bigger (even after a rebuild) because it would be two short columns plus a rowid. If that were the case here, though, we would expect the time to fast full scan the index to be higher than the time to scan the table.

So two thoughts crossed my mind as I looked at operation 6:

  • Mixing block sizes in a database really messes up the optimizer costing, particularly for tablescans and index fast full scans. Maybe the table had been built in a tablespace using 32KB  blocks while the index had been built in a tablespace using the more common 8KB blocksize – I didn’t want to start working out the arithmetic but that might be just enough to produce the contradiction.
  • Maybe the table was both bigger AND smaller than the index – bigger because it held more data, smaller because it had been compressed. If so then the difference in run-time would be the overhead of decompressing the rows before projecting and comparing the data.

Conveniently the OP has included an extract from the 10053 trace:

Table Stats::
  Table: CLIENT_ACCOUNT  Alias:  D
    #Rows: 9479811  #Blks:  18110  AvgRowLen:  71.00  ChainCnt:  0.00
  Column (#1): CLNT_ID(
    AvgLen: 6 NDV: 1261035 Nulls: 0 Density: 0.000001 Min: 0 Max: 4244786
    Histogram: HtBal  #Bkts: 254  UncompBkts: 254  EndPtVals: 239
  Column (#2): ACCT_ID(
    AvgLen: 6 NDV: 9479811 Nulls: 0 Density: 0.000000 Min: 1 Max: 22028568
    Histogram: HtBal  #Bkts: 254  UncompBkts: 254  EndPtVals: 255

Index Stats::
  Index: XPKCLIENT_ACCOUNT  Col#: 1 2
    LVLS: 2  #LB: 28543  #DK: 9479811  LB/K: 1.00  DB/K: 1.00  CLUF: 1809449.00

Note that the index is called xpkclient_account – which suggests "primary key" – and the number of distinct keys in the index (#DK) matches the number of rows in the table (#Rows). The index and table stats seem to be consistent so we’re not looking at a problem of bad statistics.

Now to do some simple (ballpark) arithmetic. For the table, we can check whether "rows * average row length / 8K = blocks". We can read the numbers directly from the trace file: 9,500,000 * 71 / 8,000 = 84,000. That’s wrong by a factor of about 4 (so maybe it’s a 32K block, and maybe I could rule out that possibility by including more detail in the arithmetic – like allowing properly for the block header, row overheads, pctfree etc).

For the index – we believe it’s the primary key, so we know the number of rows in the index – it’s the same as the number of distinct keys. As for the length of an index entry, we have the index definition (col#: 1 2) and we happen to have the column stats for those columns, so we know their average length. Allowing for the rowid and length bytes we can say that the average index entry is (6 + 1) + (6 + 1) + 6 = 20 bytes. So the number of leaf blocks should be roughly 9,500,000 * 20 / 8,000 = 23,750. That’s close enough given the reported 28,543 and the fact that I haven’t bothered to worry about row overheads, block overheads and pctfree.
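The ballpark figures above can be reproduced with integer arithmetic in the shell (the 8,000-byte block size and 20-byte entry length are the same rough approximations used in the text):

```shell
# Table: rows * average row length / block size (ignoring all overheads)
echo $(( 9500000 * 71 / 8000 ))   # ~84312 blocks, vs the reported 18110
# Index: rows * estimated entry length / block size
echo $(( 9500000 * 20 / 8000 ))   # 23750 leaf blocks, vs the reported 28543
```

The table estimate overshoots by a factor of about 4.6 while the index estimate is close, which is what points to compression on the table.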

The arithmetic provides an obvious guess – which turned out to be correct: the table is compressed, the index isn’t. The optimizer hasn’t allowed for the CPU cost of decompressing the compressed rows, so the time required to decompress 9.5M rows doesn’t appear in the execution plan.


Looking at the column stats, it looks like there are roughly 8 acct_ids for each clnt_id, so it would probably be sensible to compress the primary key index (clnt_id, acct_id) on the first column as this would probably reduce the size of the index by about 20%.

Better still – the client_account table has very short rows – it looks like a typical intersection table with a little extra data carried. Perhaps this is a table that should be an index-organized table with no overflow. It looks like there should also be an index (acct_id, clnt_id) on this table to optimise the path from account to client, and this would become a secondary index – interestingly being one of those rare cases where the secondary index on an IOT might actually be a tiny bit smaller than the equivalent index on a heap table because (in recent versions of Oracle) primary key columns that are included in the secondary key are not repeated in the index structure. (It’s a little strange that this index doesn’t seem to exist already – you might have expected it to be there given the OP’s query, and given that it’s an "obvious" requirement as an index to protect the foreign key.)

The only argument against the IOT strategy is that the table clearly compresses very well as a heap table, so a compressed heap table plus two B-tree indexes might be more cost-effective than an IOT with a single secondary index.


Links for 2016-05-02

Categories: DBA Blogs

SQL vs. Excel – Subgroup medians

RDBMS Insight - Mon, 2016-05-02 18:34

Recently I ran across this post on how to do subgroup medians in Excel 2010. First you need to create a pivot table, then “do some copying and pasting and use a formula to make it happen”. In SQL you can do this with one command.

Suppose that you have the same table as the Excel article, something like this:

CREATE TABLE sampletab
(arrest_day_of_week varchar2(10), 
arrest_ts TIMESTAMP, 
fingerprint_ts TIMESTAMP, 
days_between NUMBER);

and you want to get the median value of days_between for each day of the week.

The steps in Excel apparently go like this:

  1. Create pivot table to get the means
  2. Copy and paste the column values from the pivot table
  3. For Sunday, create an IF formula to include a cell’s days_between in the median calculation only if the arrest_day_of_week for that row is Sunday
  4. Repeat for other six days


In SQL, it’s one simple statement:

SELECT arrest_day_of_week, median(days_between) AS median_days_between FROM sampletab
GROUP BY arrest_day_of_week;

Conclusion – if you’re into data analysis, SQL can be a big time-saver!

Categories: DBA Blogs

Oracle Fusion Middleware : WebLogic is Key, Learn Domain, Install/Configure, Start/Stop : Part II

Online Apps DBA - Mon, 2016-05-02 16:53

This is Part II of my journey of learning Oracle Fusion Middleware with me (Atul Kumar). In Part I here I covered why I started learning Oracle Fusion Middleware almost 10 years ago, how it transformed my career (I now charge a premium rate for my consulting services), and mentioned that if I can do it […]

The post Oracle Fusion Middleware : WebLogic is Key, Learn Domain, Install/Configure, Start/Stop : Part II appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Reserved words usage in MySQL

Pythian Group - Mon, 2016-05-02 14:07

It is not uncommon to come across MySQL databases where reserved words are in use as identifiers for all kinds of database objects.

Perhaps when the application schema was implemented, the words were not reserved yet, and they became reserved later on a subsequent MySQL release.

It is a good practice to check reserved words usage prior to doing any database upgrades, as any newly reserved keywords will cause syntax errors on the new version.

This is usually not a problem if proper quoting is used for referencing the objects, as described on the official manual page.

The actual steps to do this depend on the environment; for example, the following can be configured to tell Hibernate to escape identifiers:

<property name="hibernate.globally_quoted_identifiers" value="true"/>

This does not appear to be documented properly (there is an open bug unresolved at the time of this writing).

However, we cannot assume that all application code is properly escaped to deal with these kinds of issues.

So what are the symptoms?

Error 1064 will be reported while trying to use a reserved word:

mysql> CREATE TABLE interval (begin INT, end INT);
ERROR 1064 (42000): You have an error in your SQL syntax ...
near 'interval (begin INT, end INT)'
How can we check for reserved words?

The following procedure can help you find out if any particular MySQL version’s reserved words are in use:

  1. Using the list on the corresponding manual page, create a text file with one reserved word on each line
  2. Load data into a temporary table
     USE test;
    CREATE TABLE reserved_words (word VARCHAR(50));
    LOAD DATA INFILE 'reserved_words.txt' INTO TABLE test.reserved_words;
  3. Check for any column names using reserved keywords
    SELECT table_schema, table_name, column_name, ordinal_position 
    FROM information_schema.columns
    WHERE table_schema NOT IN ( 'mysql', 'information_schema', 'performance_schema' ) 
    AND column_name = ANY ( SELECT * FROM test.reserved_words ) 
    ORDER BY 1,2,4;
  4. Check for any table names using reserved keywords
    SELECT table_schema, table_name
    FROM information_schema.tables
    WHERE table_schema NOT IN ( 'mysql', 'information_schema', 'performance_schema' ) 
    AND table_name = ANY ( SELECT * FROM test.reserved_words );
  5. Check for any procedures or functions
    SELECT routine_schema, routine_name, routine_type
    FROM information_schema.routines
    WHERE routine_schema NOT IN ( 'mysql', 'information_schema', 'performance_schema' )
    AND routine_name = ANY ( SELECT * FROM test.reserved_words );
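Step 1 above can be scripted rather than done by hand; here is a minimal sketch (the file names are placeholders, and it assumes the keyword list was pasted as comma- or space-separated text):

```shell
# Split a pasted keyword list into one upper-case word per line,
# ready for LOAD DATA INFILE into test.reserved_words.
tr ', ' '\n\n' < keywords_pasted.txt | sed '/^$/d' | tr '[:lower:]' '[:upper:]' > reserved_words.txt
```

Remember that LOAD DATA INFILE reads the file on the server, so reserved_words.txt must be placed in a directory the server can read (subject to secure_file_priv).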

I hope this post helps you avoid one of the many issues you may encounter during the challenging task of database upgrades.

Categories: DBA Blogs

storage cloud appliance in the cloud

Pat Shuff - Mon, 2016-05-02 09:46
Last week we focused on getting infrastructure as a service up and running. I wanted to move up the stack and talk about platform as a service but, unfortunately, I got distracted with yet another infrastructure problem. We were able to install the storage cloud appliance software in a virtual machine, but how do you install it in a compute cloud instance? This brings up two issues. First, how do you run a Linux 7 - 3.10 kernel in the Oracle Compute Cloud Service? Second, how do you connect to and manage this service, both from an admin perspective and as a client from another compute engine in the cloud service?

Let's tackle the first problem. How do you spin up a Linux 7 - 3.10 kernel in the Oracle Compute Cloud Service? If we look at the compute instance creation we can see which images we can boot from.

There is no Linux 7 - 3.10 kernel in the list, so we need to download and import an image that we can boot from. Fortunately, Oracle has a good tutorial on importing a bootable image. Following those steps, we first download a CentOS 7 bootable image; the one we use is CentOS-7-x86_64-OracleCloud.raw.tar.gz. We download this to a local directory and then upload it to the compute cloud image area. This is done by going to the compute console and clicking on the "Images" tab at the top of the screen.

We then upload the tar.gz file, which is a bootable image. This allows us to create a new storage instance that we can boot from. The upload takes a few minutes, and once it is complete we need to associate it with a bootable instance. This is done by clicking on the "Associate Image" button, where we basically enter a name to use for the operating system as well as a description.

Note that the OS size is 9 GB, which is really small. We don't have a compute instance at this point; we either need to create a bootable storage element or a compute instance based on this image. We will go through the storage creation first since this is the easiest way of getting started. We first have to change from the Image tab to the Storage tab. We click on Create Storage Volume and go through selecting the image, storage name, and size. We went with the image's storage size rather than resizing the storage we are creating.

At this point we should be able to create a compute instance based on this boot disk. We can clone the disk, boot from it, or mount it on another instance. We will boot from this instance once it is created. We do this by going to the Instance tab and clicking on Create Instance. It does take 5-10 minutes to create the storage instance, and we need to wait until it is complete before creating a compute instance. An example of the creation looks like this:

We select the default network, the CentOS7 storage that we previously created, the 2016 ssh keys that we uploaded, and review and launch the instance.

After about 15 minutes, we have a compute instance based on our CentOS 7 image. Up to this point, all we have done is create a bootable Linux 7 - 3.10 kernel. Once we have the kernel available we can focus on connecting and installing the cloud storage appliance software. This follows the making backup better blog post. There are a couple of things that are different. First, we connect as the user centos rather than oracle or opc. This is a function of the image that we downloaded and not a function of the compute cloud. Second, we need to create a second user that allows us to login. When we use the centos user and install the script, we can't login with our ssh keys for some reason. If we create a new user then whatever stops us from logging in as the centos user does not stop us from logging in as oracle, for example. The third thing that we need to focus on is creating a tunnel from our local desktop to the cloud instance. This is done with ssh or putty. What we are looking for is routing the management port for the storage appliance. It is easier to create a tunnel rather than change the management port and opening up the port through the cloud firewall.

From this we execute the commands we described in the making backup better blog. We won't go through the screen shots on this since we have done this already. One thing is missing from the screenshots: you need to disable SELinux by editing /etc/sysconfig/selinux, setting SELINUX=disabled, and rebooting. Make sure that you add a second user before rebooting, otherwise you will get locked out because the ssh keys won't work once this change is made.

The additional steps that we need to do are: create a user, copy the authorized_keys from an existing user into the .ssh directory, change the ownership, assign a password to the new user, and add the user to /etc/sudoers.

useradd oracle
mkdir ~oracle/.ssh
cp ~centos/.ssh/authorized_keys ~oracle/.ssh
chown -R oracle ~oracle
passwd oracle
vi /etc/sudoers
The second major step is to create an ssh tunnel to allow you to connect from your localhost to the cloud compute service. When you create the oscsa instance it starts up a management console on port 32769. To tunnel this port we use putty to connect.

At this point we should be able to spin up other compute instances and mount this file system internally using the command

 mount -t nfs -o vers=4,port=32770 e53479.compute-metcsgse00028.oraclecloud.internal:/ /local_mount_point
We might want to use the internal IP address rather than the external DNS name; in our example this would be the private IP address of the instance. We should be able to mount this file system and clone other instances to leverage the object storage in the cloud.

In summary, we did two things in this blog. First, we uploaded a new operating system that was not part of the list of operating systems presented by default. We selected a CentOS instance that conforms to the requirements of the cloud storage appliance. Second, we configured the cloud storage appliance software on a newly created Linux 7 - 3.10 kernel and created a putty tunnel so that we can manage the directories that we create to share. This gives us the ability to share the object storage as an nfs mount internal to all of our compute servers. It allows for things like spinning up web servers or other static servers all sharing the same home directory or static pages. We can use these same processes and procedures to pull data from the Marketplace and configure more complex installations like JD Edwards, PeopleSoft, or E-Business Suite. We can import a pre-defined image, spin up a compute instance based on that image, and provision higher level functionality onto infrastructure as a service. Up next, platform as a service explained.

Video : Flashback Table

Tim Hall - Mon, 2016-05-02 06:51

Today’s video gives a quick run through of flashback table.
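For anyone who wants a syntax reminder alongside the video, a minimal sketch (the table name is illustrative; the table needs row movement enabled and the required undo data must still be available):

```sql
-- Return the table to its state 15 minutes ago (illustrative example)
ALTER TABLE emp ENABLE ROW MOVEMENT;
FLASHBACK TABLE emp TO TIMESTAMP SYSTIMESTAMP - INTERVAL '15' MINUTE;
```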

If you prefer to read articles, rather than watch videos, you might be interested in these.

The clip in today’s video comes courtesy of Deiby Gómez, an Oracle ACE Director, OCM 11g & 12c and consultant at Pythian.



Video : Flashback Table was first posted on May 2, 2016 at 1:51 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

#GoldenGate #Cloud Service (#GGCS) … what to expect?

DBASolved - Mon, 2016-05-02 06:30

As I sit here working on performing some GoldenGate migrations to AWS for a client, I’ve been thinking about the glimpse of GoldenGate Cloud Service (GGCS) that was provided to me earlier this week. That glimpse has helped me define what and how GGCS is going to work within the Oracle Cloud space. Ever since this service was announced back at Oracle Open World 2015, I’ve been wanting to get my hands on this cloud product from Oracle to just better understand it. Hopefully, what I’m about to share with you will provide some insight into what to expect.

First, you will need a cloud account. If you do not have a cloud account, visit the Oracle Cloud sign-up page and create one. This will typically be the same account you use to login to My Oracle Support (MOS).

Once you have an account and are in the cloud interface, subscribe to some services. You will need a Database Cloud Service or a Compute Cloud Service. These services will be the end points for GGCS to point to. As part of setting up the compute node, you will need to set up SSH access with a public/private key. Once you create the GGCS instance, the same public/private key should be used to keep everything simple.

Once GGCS is made available for trial (currently it is only available through the sales team), many of us will have the opportunity to play with it. The following screen captures and comments were taken from the interface I had access to while discussing GGCS with Oracle Product Management.

Like any of the other cloud services from Oracle, once you have access to GGCS it will appear in your dashboard as available cloud services. In the figure below, GGCS is listed at the top of the services that I had access to. You will notice over on the right, there is a link called “service console”.

When you click on the service console link, you are taken to the console that is specific to GGCS. On the left hand side of the console, you will see three primary areas. The “overview” area is the important one; it provides you with all the information needed about your GGCS environment. You will see the host and port number, what version of GGCS you are running and the status of your environment.

With the environment up and running, you will want to create a new GGCS instance. This instance is created under your cloud service console. On this screen you are given information that tells you how many instances you have running, with the number of OCPUs, memory and storage for the configuration along with the public IP address. Notice the button to the right, just below Public IPs; this is the button that allows you to create a new GGCS instance. In the figure below, the instance has already been created.

Drilling down into the instance, you are taken to a page that illustrates your application nodes for GGCS. Notice that the GGCS instance actually created a compute node VM to run GoldenGate from.

With everything configured from the Oracle Cloud interface, you can now access the cloud server using the details provided (I do not have current screen shots of this). Once you access the cloud server, you will find that Oracle GoldenGate has been configured for you along with a TNS entry that points to a "target" location. These items are standard template items for you to build your GoldenGate environment from. The interesting thing about this configuration is that Oracle is providing a single virtual machine (compute node) that will handle all the apply processes to a database (compute node).

With the GGCS service running, you are then ready to build out your GoldenGate environment.

Like many other GoldenGate architectures, you build out the source side as usual: you install the GoldenGate software, then build an extract, trail files and a data pump. The data pump process is then pointed to the GoldenGate Cloud Service (GGCS) instance instead of the target instance. The local trail files will be shipped to the GGCS machine. Once on the GGCS instance, the replicat needs to be configured. Part of the replicat configuration at this point is updating the tnsnames.ora file to point to the correct "target" compute node/database instance. The picture below illustrates this concept.
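A source-side data pump parameter file pointed at the GGCS compute node might look like the following sketch (the process name, placeholder address, trail name and schema here are hypothetical, not taken from the actual service):

```
EXTRACT pmpggcs
-- Ship the local trail to the GGCS compute node (example address/port)
RMTHOST <ggcs-public-ip>, MGRPORT 7809
RMTTRAIL ./dirdat/rt
-- Pass records through untouched; transformation happens downstream
PASSTHRU
TABLE hr.*;
```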

You will notice that GGCS is set up to be an intermediary point in the cloud. This allows you to be flexible with your GoldenGate architecture in the cloud. From a single GGCS service you can run multiple replicats that point to multiple different cloud compute nodes, turning your GGCS into a hub that can send data to multiple cloud resources.

In talking with the Oracle Product team about GGCS, the only downside to GGCS right now is that it cannot be used for a bi-directional setup or for pulling data from the cloud. In essence, this is a uni-directional setup that can help you move from on-premise to cloud with minimal configuration needed.

Well, this is my take on GGCS as of right now. Once GGCS trials are available, I'll try to update this post or add more posts on this topic. Until then, I hope you have gained a bit of information on this topic and are looking forward to using GGCS.


Filed under: Cloud, Golden Gate
Categories: DBA Blogs

FBDA -- 7 : Maintaining Partitioned Source Table

Hemant K Chitale - Mon, 2016-05-02 01:46
Taking up the TEST_FBDA_PARTITIONED table,  let's look at a couple of Partition Maintenance operations.

SQL> select partition_name, high_value, num_rows
2 from user_tab_partitions
3 where table_name = 'TEST_FBDA_PARTITIONED'
4 order by partition_position
5 /

PARTITION_NAME   HIGH_VALUE                 NUM_ROWS
---------------- ------------------------- ----------
P_100 101 100
P_200 201 100
P_300 301 100
P_400 401 100



SQL> alter table test_fbda_partitioned truncate partition p_100;

Table truncated.


So, that's supported.


SQL> alter table test_fbda_partitioned       
2 split partition p_max at (501)
3 into (partition p_500, partition p_max)
4 /
alter table test_fbda_partitioned
ERROR at line 1:
ORA-55610: Invalid DDL statement on history-tracked table


So, a SPLIT PARTITION fails.  We need to DISASSOCIATE the Flashback Archive.

SQL> execute dbms_flashback_archive.disassociate_fba('HEMANT','TEST_FBDA_PARTITIONED');

PL/SQL procedure successfully completed.

SQL> select table_name, flashback_archive_name, archive_table_name, status
2 from user_flashback_archive_tables
3 where table_name = 'TEST_FBDA_PARTITIONED'
4 /

----------------------------------------------------- -------------

SQL> alter table test_fbda_partitioned
2 split partition p_max at (501)
3 into (partition p_500, partition p_max)
4 /

Table altered.

SQL> execute dbms_flashback_archive.reassociate_fba('HEMANT','TEST_FBDA_PARTITIONED');

PL/SQL procedure successfully completed.

SQL> select table_name, flashback_archive_name, archive_table_name, status
2 from user_flashback_archive_tables
3 where table_name = 'TEST_FBDA_PARTITIONED'
4 /

----------------------------------------------------- -------------


While a table is disassociated from its Flashback Archive, DDL that would not normally be permitted may be done, under strict control, to ensure that there is no data divergence.

Categories: DBA Blogs

Partition Storage -- 7 : Revisiting HWM - 2 (again)

Hemant K Chitale - Mon, 2016-05-02 01:19
Revisiting the previous test case, but with a larger AVG_ROW_LEN

SQL> create table part_table_large
(id_column number(6), data_col_1 varchar2(100), data_col_2 varchar2(100))
partition by range (id_column)
(partition p_100 values less than (101),
partition p_200 values less than (201),
partition p_300 values less than (301),
partition p_400 values less than (401),
partition p_max values less than (maxvalue))
/

Table created.

SQL> insert into part_table_large values
(51,rpad('String',75,'X'), rpad('Another',60,'Y'))
SQL> /

1 row created.

SQL> commit;

Commit complete.

SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 100000 loop
insert into part_table_large
values (25, rpad('String',75,'X'), rpad('Another',60,'Y'));
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001 loop
insert into part_table_large
values (45, rpad('String',75,'X'), rpad('Another',60,'Y'));
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001 loop
insert into part_table_large
values (55, rpad('String',75,'X'), rpad('Another',60,'Y'));
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_LARGE',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_LARGE'
and partition_name = 'P_100'
/

AVG_ROW_LEN   NUM_ROWS     BLOCKS
----------- ---------- ----------
        140    1100003      22349

SQL> alter table part_table_large move partition p_100 ;

Table altered.

SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_LARGE',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_LARGE'
and partition_name = 'P_100'
/

AVG_ROW_LEN   NUM_ROWS     BLOCKS
----------- ---------- ----------
        140    1100003      22626

SQL> select extent_id, blocks
from dba_extents
where segment_name = 'PART_TABLE_LARGE'
and segment_type = 'TABLE PARTITION'
and partition_name = 'P_100'
and owner = 'HEMANT'
order by 1
/

 EXTENT_ID     BLOCKS
---------- ----------
0 1024
1 1024
2 1024
3 1024
4 1024
5 1024
6 1024
7 1024
8 1024
9 1024
10 1024
11 512
12 1024
13 1024
14 1024
15 1024
16 1024
17 1024
18 1024
19 1024
20 1024
21 1024
22 1024

23 rows selected.


Aha! Unlike the previous case (where, with an AVG_ROW_LEN of 11, a MOVE reduced the HWM from 3,022 blocks to 2,484), with this larger row size the HWM has moved from 22,349 to 22,626.

So, space consumption is a function of both the AVG_ROW_LEN and the manner in which the rows are inserted / relocated.

SQL> l
1 select avg_row_len*num_rows*1.2/8192 Expected_Blocks, Blocks
2 from user_tab_partitions
3 where table_name = 'PART_TABLE_LARGE'
4* and partition_name = 'P_100'
SQL> /

EXPECTED_BLOCKS     BLOCKS
--------------- ----------
     22558.6553      22626


Also, see how the "Expected Blocks" count seems more accurate than earlier.
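The estimate in that last query is plain arithmetic; a quick sketch using the figures from the listings above:

```python
# Rough block estimate used above: average row length x row count,
# plus ~20% overhead, divided by the 8KB block size.
avg_row_len = 140
num_rows = 1100003
block_size = 8192

expected_blocks = avg_row_len * num_rows * 1.2 / block_size
print(round(expected_blocks, 4))  # matches the 22558.6553 reported by the query
```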

Categories: DBA Blogs

LittleCodingKata: Hardware Excuse Generator with gRPC

Paul Gallagher - Mon, 2016-05-02 01:10
gRPC is a very interesting lightweight middleware framework for language-neutral, cross-platform RPC.

When I heard about Natalie Silvanovich's Hardware Excuse Generator, I immediately recognised a better "Hello World" for testing out gRPC.

Introducing "The Explainer": my programming exercise to learn basic cross-language request/reply with gRPC.

I haven't completed the whole matrix of client-server possibilities yet, but here's a sampling...

Start up a server (e.g. the Ruby version):
$ ./explainer.rb 
ShiFu is waiting to explain all of your problems...

And then ask it questions. Pick a language!

# Ruby client
$ ./explain.rb "Your phone is crashing because of REASON"
Your phone is crashing because of the
PCB not being manufactured to specification

# C# client
$ mono bin/Release/explain.exe "Your phone is crashing because of REASON"
Your phone is probably crashing because of stray harmonics

# C++ client
$ ./explain "Your phone is crashing because of REASON"
Your phone is crashing because of impedance in the coil

# node.js client
$ node ./explain.js "Your phone is crashing because of REASON"
Your phone is crashing because of a lack of shielding against
alpha radiation (cosmic rays) in antenna

# Python client
$ python "Your phone is crashing because of REASON"
Your phone is crashing because of residual capacitance
caused by the USB connector
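Under the hood the request/reply is simple: the client sends a sentence containing a REASON placeholder and the server substitutes a randomly chosen excuse. A rough sketch of that server-side substitution in plain Python (the excuse list and function name are my own illustrations, not from the project):

```python
import random

# Sample excuse pool -- the real generator draws on Natalie Silvanovich's
# phrase lists; these three are just placeholder values.
EXCUSES = [
    "stray harmonics",
    "impedance in the coil",
    "residual capacitance caused by the USB connector",
]

def explain(question: str) -> str:
    """Answer a question by swapping the REASON placeholder for an excuse."""
    return question.replace("REASON", random.choice(EXCUSES))

print(explain("Your phone is crashing because of REASON"))
```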

All my notes and code are available in the LittleCodingKata GitHub repository.

Getting a C++11 compiler for Node 4, 5 and 6 on Oracle Linux 6

Christopher Jones - Sun, 2016-05-01 22:36

A newer compiler is needed on Oracle Linux 6 when you want to use add-ons like node-oracledb with Node 4 or later. This is because add-ons for those versions need to be built with a C++11-compatible compiler. The default compiler on OL 6 doesn't have this support. OL 7 does have such a compiler, so these instructions are not needed for that version.

For OL 6 the easiest way to get a newer compiler is from the Software Collection Library (SCL). You enable the software collection yum channel, run a yum install command, and then the compiler is immediately available to use. Detailed SCL installation instructions are in the manual.

The steps below show how to install node-oracledb on Oracle Linux 6 for Node.js 4 or later.

Enabling the Software Collection Library

If you are using the Oracle public yum server, edit /etc/yum.repos.d/public-yum-ol6.repo and enable the ol6_software_collections channel:

  name=Software Collection Library release 1.2 packages for Oracle Linux 6 (x86_64)

If necessary, you can get the latest channel list from the public yum server and merge any updates from it into your existing /etc/yum.repos.d/public-yum-ol6.repo file.

Alternatively, if your machine has a ULN support subscription, you can subscribe to the Oracle Software Collections 1.2 for Oracle Linux 6 channel in the Manage Subscription page on ULN.

Installing the Updated Compiler

Once the channel is enabled, install the updated compiler with:

  yum install scl-utils devtoolset-3

This will install a number of packages that comprise the complete, updated tool set.

Installing node-oracledb

Installing node-oracledb on Node 4 (or later) follows the standard installation instructions, but using the new compiler. The Oracle Linux manual chapter Using the Software Collection Version of a Command shows various ways to enable the dev toolset.

In summary, to install node-oracledb on Node 4 or later using Oracle Linux 6, first install an Oracle client such as Instant Client. If you have anything except the Instant Client RPM packages, tell the installer where the libraries and header files are located, for example:

  export OCI_LIB_DIR=$HOME/instantclient
  export OCI_INC_DIR=$HOME/instantclient/sdk/include

If you are behind a firewall, set your proxy:

  export http_proxy=

In my development environments I often find some cleanup helps:

  which npm && rm -rf $(npm root)/oracledb $(npm root)/nan $HOME/.node-gyp $HOME/.npm \
        && npm cache clean

Now node-oracledb can be installed using the newer compiler:

  scl enable devtoolset-3 -- npm install oracledb
Using Node

Now you can use Node:

  $ node version.js 
  node.js version: v4.4.3
  node-oracledb version: 10900
  node-oracledb text format: 1.9.0
  oracle client library version: 1201000200
  oracle client library text format:
  oracle database version: 1201000200
  oracle database text format:

  $ cat /etc/oracle-release 
  oracle linux server release 6.7

Modern CX 2016

Oracle AppsLab - Sun, 2016-05-01 19:31

Last week, we were back in Las Vegas again for Oracle Modern Customer Experience Conference!  Instead of talking to customers and partners, we had the honor of chatting with UNLV Lee graduate students (@lbsunlv) and getting their feedback on how we envision the future of work, customers experience, marketing and data security.

We started off with Noel (@noelportugal) showing the portable Smart Office demo, including the Smart Badge, that we debuted at OpenWorld in October, followed by a breakout session for the graduates to experience Glance and Virtual Reality at their leisure.

The event was a hit! The two-hour session flew by quickly. The same group of graduates who came in for the demos at the start of our session left at the very last minute when we had to close down.

Experiencing these demos led to some exciting discussions the following day between the UNLV Lee Business School panelists and Rebecca Wettemann (@rebeccawettemann) from Nucleus Research (@NucleusResearch) on the future of work:

  • How will sales, marketing, customer service, and commerce change for the next generation?
  • What does the next generation expect from their employers?
  • Are current employers truly modern and using the latest technology solutions?

Fantastic panel with UNLV students @NucleusResearch @theappslab great UX = more prod work.

— Erin Killian Evers (@keversca) April 28, 2016

Great thoughts from @unlv milleanials on #smartoffice opportunities in the workforce with @theappslab #SalesX16

— Gozel Aamoth (@gozelaamoth) April 28, 2016

While all of this was going on, a few of the developers and I were at the Samsung Developers Conference in SF discussing how we could build a more connected future. More on that in coming posts!

DISTRIBUTED mode deprecated

Anthony Shorten - Sun, 2016-05-01 17:50

Based upon feedback from partners and customers, the DISTRIBUTED mode used in the batch architecture has been deprecated in Oracle Utilities Application Framework V4.3.x and above. The DISTRIBUTED mode was originally introduced to the batch cluster architecture back in Oracle Utilities Application Framework V2.x and was popular but suffered from a number of restrictions. Given that the flexibility of the batch architecture was expanded in newer releases, it was decided to deprecate the DISTRIBUTED mode to encourage more effective use of the architecture.

It is recommended that customers using this mode migrate to CLUSTERED mode using a few techniques:

  • For customers on non-production environments, it is recommended to use CLUSTERED mode using the single server (ss) template used by the Batch Edit facility. This is a simple cluster that uses CLUSTERED mode without the advanced configurations in a clustered environment. It is restricted to single host servers so it is not typically recommended for production or clustered environments that use more than one host server.
  • For customers on production environments, it is recommended to use CLUSTERED mode with the unicast (wka) template used by the Batch Edit facility. This will allow flexible configuration without the use of multi-cast which can be an issue on some implementations using CLUSTERED mode. The advantage of Batch Edit is that it has a simple interface to allow you to define this configuration without too much fuss. 

The advantage of Batch Edit when building your new batch configurations is that it is simple to use and generates an optimized set of configuration files that can be used directly by the batch architecture. Execution of the jobs would have to remove the DISTRIBUTED tags from the command lines or configuration files to use the new architecture.

Customers should read the Batch Best Practices (Doc Id: 836362.1) and the Server Administration Guide shipped with your product for advice on Batch Edit as well as the templates mentioned in this article.

Fall 2014 IPEDS Data: Interactive table ranking DE programs by enrollment

Michael Feldstein - Sun, 2016-05-01 17:33

By Phil Hill

Last week I shared a static view of the US institutions with the 30 highest enrollments of students taking at least one online (distance ed, or DE) course. But we can do better than that, thanks to some help from Justin Menard at LISTedTECH and his Tableau guidance.

The following interactive chart allows you to see the full rankings based on undergraduate, graduate and combined enrollments. And it has two views – one for students taking at least one online course and one for exclusive online students. Note the following:

Tableau hints

  • (1) shows how you can change views by selecting the appropriate tab.
  • (2) shows how you can sort on any of the three measures (hover over the column header).
  • (3) shows the sector for each institution by the institution name.

Have at it! You can also go directly to Tableau, allowing a wider view of the table.

The post Fall 2014 IPEDS Data: Interactive table ranking DE programs by enrollment appeared first on e-Literate.

Why I am a Dostoevskyan Humanist

Greg Pavlik - Sun, 2016-05-01 16:11
An explanation in 5 parts, by reference to the works of those who were not.*
'Lo! I show you the Last Man.

"What is love? What is creation? What is longing? What is a star?" -- so asks the Last Man, and blinks.

The earth has become small, and on it hops the Last Man, who makes everything small. His species is ineradicable as the flea; the Last Man lives longest.

"We have discovered happiness" -- say the Last Men, and they blink.

They have left the regions where it is hard to live; for they need warmth. One still loves one's neighbor and rubs against him; for one needs warmth.

Turning ill and being distrustful, they consider sinful: they walk warily. He is a fool who still stumbles over stones or men!

A little poison now and then: that makes for pleasant dreams. And much poison at the end for a pleasant death.

One still works, for work is a pastime. But one is careful lest the pastime should hurt one.

One no longer becomes poor or rich; both are too burdensome. Who still wants to rule? Who still wants to obey? Both are too burdensome.

No shepherd, and one herd! Everyone wants the same; everyone is the same: he who feels differently goes voluntarily into the madhouse.

"Formerly all the world was insane," -- say the subtlest of them, and they blink.

They are clever and know all that has happened: so there is no end to their derision. People still quarrel, but are soon reconciled -- otherwise it upsets their stomachs.

They have their little pleasures for the day, and their little pleasures for the night, but they have a regard for health.

"We have discovered happiness," -- say the Last Men, and they blink.'Friedrich Nietzsche: Thus Spoke Zarathustra

The Body of the Dead Christ in the Tomb, Hans Holbein

Now, did He really break the seal
And rise again?
We dare not say….
Meanwhile, a silence on the cross
As dead as we shall ever be,
Speaks of some total gain or loss,
And you and I are free
Auden, Friday’s Child
“Wherever an altar is found, there is civilization.”

Joseph de Maistre
“All actual life is encounter.”
Martin Buber, I and Thou
* model for composition stolen gratuitously from an online challenge.

Accessing Fusion Data from BI Reports using Java

Angelo Santagata - Sat, 2016-04-30 03:57

In a recent article on A-Team Chronicles, Richard Williams explained how you can execute a BI Publisher report from a SOAP service and retrieve the report, as XML, as part of the response of the SOAP call. This blog article serves as a follow-on, providing a tutorial-style walk-through of how to implement that procedure in Java.

This article assumes you have already followed the steps in Richard's blog article and created your report in BI Publisher, exposed it as a SOAP Service and tested this using SOAPUI, or another SOAP testing tool.

Following Richard's guidance we know that the correct SOAP call could look like this:

<soap:Envelope xmlns:soap="" xmlns:pub="">
            <pub:reportAbsolutePath>/ report.xdo</pub:reportAbsolutePath>
            <pub:reportRawData xsi:nil="true" >true</pub:reportRawData>

Tip: One easy way to determine the report's location is to run the report and then examine the URL in the browser.


Implementing the SOAP call using JDeveloper 11g

We now need to implement the Java SOAP client to call our SOAP service. For this blog we will use JDeveloper 11g, the IDE recommended for extending Oracle Fusion; however you are free to use your IDE of choice, e.g. NetBeans, Eclipse, vi, Notepad etc., though the steps will obviously be different.

Creating the project

Within JDeveloper 11g start by creating a new Application, and within this application create two generic projects. Call one project “BISOAPServiceProxy” and the other “FusionReportsIntegration”. The “BISOAPServiceProxy” project will contain the SOAP proxy we are going to generate from JDeveloper 11g, and the “FusionReportsIntegration” project will contain our custom client code. It is good practice to keep the SOAP proxy in its own separate project; this allows us to regenerate the proxy from scratch without affecting any other code.

Generating the SOAP Proxy

For this example we will be using the SOAP Proxy wizard as part of JDeveloper. This functionality generates a static proxy for us, which in turn makes it easier to generate the required SOAP call later.

  1. With the BISOAPServiceProxy project selected, start the JDeveloper SOAP Proxy wizard: File -> New -> Business Tier -> Web Services -> Web Service Proxy
  2. Click Next.
  3. Skipping the first welcome screen, in step 2 select JAX-WS Style as the type of SOAP proxy you wish to generate, and in step 3 enter the WSDL of your Fusion Applications BI Publisher web service. It's best to check that this URL returns a WSDL document in your web browser before entering it here. The WSDL location will normally be something like: http://<your Fusion Applications server>/xmlpserver/services/ExternalReportWSSService?wsdl
  4. It's recommended that you leave the "copy WSDL into project" check-box selected.
  5. Give a package name; unless you need to, it's recommended to leave the Root Package for generated types blank.
  6. Now hit Finish.
Fixing the project dependencies

We now need to make sure that the “FusionReportsIntegration” project is able to see the classes generated by the “BISOAPServiceProxy” proxy. To resolve this in JDeveloper we simply need to set up a dependency between the two projects.

  1. With the FusionReportsIntegration project selected, right-mouse click on the project and select "Project Properties".
  2. In the properties panel select Dependencies.
  3. Select the little pencil icon and in the resulting dialog select "Build Output". This selection tells JDeveloper that this project depends on the successful build output of the other project.
  4. Save the dialog.
  5. Close [OK] the Project Properties dialog.
  6. Now is a good time to hit compile and make sure the SOAP proxy compiles without any errors; given we haven't written any code yet, it should compile just fine.
Writing the code to execute the SOAP call

With the SOAP proxy generated and the project dependency set up, we're now ready to write the code which will call the BI server using the generated SOAP proxy.

  1. With the FusionReportsIntegration project selected, right-mouse click -> New -> Java -> Java Class.
  2. Enter a name, and a Java package name, for your class.
  3. Ensure that "Main Method" is selected. This is so we can execute the code from the command line; you will want to change this depending on where you execute your code from, e.g. a library, a servlet, etc.
  4. Within the main method you will need to enter the following code snippet. Once this code snippet is pasted you will need to correct and resolve the imports for your project.
    1.	ExternalReportWSSService_Service externalReportWSSService_Service;
    2.	// Initialise the SOAP Proxy generated by JDeveloper based on the following WSDL xmlpserver/services/ExternalReportWSSService?wsdl
    3.	externalReportWSSService_Service = new ExternalReportWSSService_Service();
    4.	// Set security Policies to reflect your fusion applications
    5.	SecurityPoliciesFeature securityFeatures = new SecurityPoliciesFeature(new String[]
    6.	{ "oracle/wss_username_token_over_ssl_client_policy" });
    7.	// Initialise the SOAP Endpoint
    8.	ExternalReportWSSService externalReportWSSService = externalReportWSSService_Service.getExternalReportWSSService(securityFeatures);
    9.	// Create a new binding, this example hardcodes the username/password, 
    10.	// the recommended approach is to store the username/password in a CSF keystore
    11.	WSBindingProvider wsbp = (WSBindingProvider)externalReportWSSService;
    12.	Map<String, Object> requestContext = wsbp.getRequestContext();
    13.	//Map to appropriate Fusion user ID, no need to provide password with SAML authentication
    14.	requestContext.put(WSBindingProvider.USERNAME_PROPERTY, "username");
    15.	requestContext.put(WSBindingProvider.PASSWORD_PROPERTY, "password");
    16.	requestContext.put(WSBindingProvider.ENDPOINT_ADDRESS_PROPERTY, "https://yourERPServer:443/xmlpserver/services/ExternalReportWSSService");
    17.	// Create a new ReportRequest object using the generated ObjectFactory
    18.	ObjectFactory of = new ObjectFactory();
    19.	ReportRequest reportRequest = of.createReportRequest();
    20.	// reportAbsolutePath contains the path+name of your report
    21.	reportRequest.setReportAbsolutePath("/ report.xdo");
    22.	// We want raw data
    23.	reportRequest.setReportRawData("");
    24.	// Get all the data
    25.	reportRequest.setSizeOfDataChunkDownload(-1); 
    26.	// Flatten the XML response
    27.	reportRequest.setFlattenXML(true);
    28.	// ByPass the cache to ensure we get the latest data
    29.	reportRequest.setByPassCache(true);
    30.	// Run the report
    31.	ReportResponse reportResponse = externalReportWSSService.runReport(reportRequest, "");
    32.	// Display the output, note the response is an array of bytes, you can convert this to a String
    33.	// or you can use a DocumentBuilder to put the values into a XLM Document object for further processing
    34.	System.out.println("Content Type="+reportResponse.getReportContentType());
    35.	System.out.println("Data ");
    36.	System.out.println("-------------------------------");
    37.	String data=new String (reportResponse.getReportBytes());
    38.	System.out.println(data);
    39.	System.out.println("-------------------------------");
Going through the code
Line(s)  What it does

1-3      Instantiates the class containing the web service proxy object that was generated for us earlier.
5        Initialises a new security policy object with the correct security policy for your Oracle Fusion server. The most common security policy is "oracle/wss_username_token_over_ssl_client_policy", however your server may be set up differently.
8        Calls the factory method to initialise a SOAP endpoint with the correct security features set.
9-16     Sets up the SOAP binding so that it knows which endpoint to execute (i.e. the hostname+URI of your web service, which is not necessarily the endpoint where the SOAP proxy was generated), the username and the password. In this example we are hard coding the details because we are going to be running this example on the command line. If this code is to be executed on a JEE server, e.g. WebLogic, then we recommend this data is stored in the credential store as CSF keys.
17-19    Creates a reportRequest object and populates it with the appropriate parameters for the SOAP call. Although not mandatory, it's recommended that you use the ObjectFactory generated by the SOAP proxy wizard in JDeveloper.
21       Sets the reportAbsolutePath parameter, including the path to the report.
23       Ensures we get the raw data without decoration, layouts etc.
25       By default BI Publisher publishes data on a range basis, e.g. 50 rows at a time; for this use case we want all the rows, and setting this to -1 will ensure that.
27       Tells the web service to flatten out the XML which is produced.
29       An optional flag which instructs the BI server to bypass the cache and go direct to the database.
30       Executes the SOAP call, passing the reportRequest object we previously populated as a parameter. The return value is a reportResponse object.
34-39    Prints out the results from the BI server. Of note, the XML document is returned as a byte array. In this sample we simply print the results to the output, but you would normally pass the resulting XML into Java routines to generate an XML Document object for further processing.
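As the comments on lines 32-33 suggest, the byte array can be fed to a DocumentBuilder. Here is a minimal, self-contained sketch of that step (the ReportParser class name and the sample ROWSET payload are illustrative, not part of the original article; a real Fusion report returns its own row-set structure):

```java
import java.io.ByteArrayInputStream;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;

public class ReportParser {

    // Turn the byte[] returned by reportResponse.getReportBytes()
    // into a DOM Document for further processing.
    public static Document parse(byte[] reportBytes) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        DocumentBuilder builder = factory.newDocumentBuilder();
        return builder.parse(new ByteArrayInputStream(reportBytes));
    }

    public static void main(String[] args) throws Exception {
        // Illustrative payload standing in for a real report response
        byte[] sample = "<ROWSET><ROW><NAME>demo</NAME></ROW></ROWSET>"
                .getBytes("UTF-8");
        Document doc = parse(sample);
        System.out.println(doc.getDocumentElement().getNodeName()); // prints ROWSET
    }
}
```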



Because we are running this code from the command line as a Java client, we need to import the Fusion Applications certificate into the Java key store. If you run the code from within JDeveloper then the Java key store used is <JDeveloperHome>\wlserver_10.3\server\lib\DemoTrust.jks.

Importing certificates


  1. Download the Fusion Applications SSL certificate: using a browser such as Internet Explorer, navigate to the SOAP WSDL URL.
  2. Click on the security icon, which will bring you to the certificate details.
  3. View the certificate.
  4. Export the certificate as a CER file.
  5. From the command line, import the certificate into the DemoTrust.jks file using the following command:

     keytool -import -alias fusionKey -file fusioncert.cer -keystore DemoTrust.jks


Now ready to run the code!

With the file selected, press the "Run" button. If all goes well the code will execute and you should see the XML result of the BI report displayed on the console.