
Feed aggregator

FBI Bug reprise

Jonathan Lewis - Mon, 2015-01-12 06:52

I’ve just had cause to resurrect a blog note I wrote three years ago. The note says that an anomaly I discovered in 9.2.0.8 was fixed in 10.2.0.3 – and this is true for the simple example in the posting; but a recent question on the OTN database forum has shown that the bug still appears in more complex cases.  Here’s some code to create a table and two indexes:


create table t1
nologging
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		level <= 1e4
)
select
	rownum			id,
	cast(dbms_random.string('U',2) as char(2))	c1,
	cast(dbms_random.string('U',2) as char(2))	c2,
	cast(dbms_random.string('U',2) as char(2))	c3,
	cast(dbms_random.string('U',2) as char(2))	c4,
	rpad('x',100)		padding
from
	generator	v1,
	generator	v2
where
	rownum <= 1e5
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1',
		method_opt	 => 'for all columns size 1'
	);
end;
/

create index t1_iasc  on t1(c1, c2,      c3, c4) nologging;
create index t1_idesc on t1(c1, c2 desc, c3, c4) nologging;

I’ve designed the table to model the way a problem was presented on OTN; it’s possible that the anomaly would appear in simpler circumstances. Note that I have two indexes on the same four columns, but the second column of one of the indexes is declared as descending. To identify the indexes easily in execution plans the latter index has the text “desc” in its name. So here’s a query, with execution plan, that should use one of these indexes:


select
        *
from t1
where
        (C1 = 'DE' and C2 >  'AB')
or      (C1 = 'DE' and C2 >= 'AB' and C3 > 'AA' )
or      (C1 = 'DE' and C2 >= 'AB' and C3 >= 'AA' and C4 >= 'BB')
order by
        C1, C2, C3, C4
;

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |       |       |     4 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1      |    21 |  2478 |     4  (25)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_IASC |    21 |       |     3  (34)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("C1"='DE')
       filter(((SYS_OP_DESCEND("C2")<SYS_OP_DESCEND('AB') AND "C2">'AB') OR
              (SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB') AND "C3">'AA' AND "C2">='AB') OR
              (SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB') AND "C4">='BB' AND "C2">='AB' AND
              "C3">='AA')))

Sure enough – the query has used the t1_iasc index – but why has the optimizer introduced all those predicates with the sys_op_descend() function calls in them when we’re not using an index with a descending column ? Somewhere in the code path the optimizer has picked up the other index, and decided to use some information from it that is totally redundant. One side effect of this is that the cardinality prediction is 21 – if I drop the index t1_idesc the sys_op_descend() calls disappear and the cardinality changes to 148.
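
If you want to reproduce that side effect for yourself, here's a quick sketch (using the table and query from the script above): drop the descending index and regenerate the plan – the sys_op_descend() predicates should disappear and the cardinality estimate should change from 21 to 148.

drop index t1_idesc;

explain plan for
select
        *
from t1
where
        (C1 = 'DE' and C2 >  'AB')
or      (C1 = 'DE' and C2 >= 'AB' and C3 > 'AA' )
or      (C1 = 'DE' and C2 >= 'AB' and C3 >= 'AA' and C4 >= 'BB')
order by
        C1, C2, C3, C4
;

select * from table(dbms_xplan.display);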

Oracle 12c behaves differently – it uses concatenation to turn the query into three separate index probes unless I add the hint /*+ first_rows */ (which I shouldn’t be using, but it’s the hint supplied by the OP on OTN). With the hint in place we get an example of the actual execution plan differing from the prediction made through explain plan:


12c execution plan unhinted (concatenation, and lots of sys_op_descend()):

--------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |          |       |       |    12 (100)|          |
|   1 |  SORT ORDER BY                        |          |   149 | 17582 |    12   (9)| 00:00:01 |
|   2 |   CONCATENATION                       |          |       |       |            |          |
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1       |   142 | 16756 |     4   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN                  | T1_IDESC |     1 |       |     2   (0)| 00:00:01 |
|   5 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1       |     6 |   708 |     4   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN                  | T1_IDESC |     1 |       |     2   (0)| 00:00:01 |
|   7 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1       |     1 |   118 |     3   (0)| 00:00:01 |
|*  8 |     INDEX RANGE SCAN                  | T1_IASC  |     1 |       |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("C1"='DE' AND "C3">='AA' AND "C4">='BB' AND
              SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB'))
       filter("C4">='BB' AND SYS_OP_UNDESCEND(SYS_OP_DESCEND("C2"))>='AB' AND "C3">='AA')
   6 - access("C1"='DE' AND "C3">'AA' AND SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB'))
       filter("C3">'AA' AND SYS_OP_UNDESCEND(SYS_OP_DESCEND("C2"))>='AB' AND
              (LNNVL("C4">='BB') OR LNNVL(SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB')) OR
              LNNVL(SYS_OP_UNDESCEND(SYS_OP_DESCEND("C2"))>='AB') OR LNNVL("C3">='AA')))
   8 - access("C1"='DE' AND "C2">'AB' AND "C2" IS NOT NULL)
       filter((LNNVL("C3">'AA') OR LNNVL(SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB')) OR
              LNNVL("C2">='AB')) AND (LNNVL("C4">='BB') OR
              LNNVL(SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB')) OR LNNVL("C2">='AB') OR
              LNNVL("C3">='AA')))

12c Execution plan with first_rows hint (and the sys_op_descend() calls have gone)
---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |       |       |   150 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1      |   148 | 17464 |   150   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_IASC |   148 |       |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("C1"='DE')
       filter(("C2">'AB' OR ("C3">'AA' AND "C2">='AB') OR ("C4">='BB' AND
              "C2">='AB' AND "C3">='AA')))

12c Execution plan with first_rows according to Explain Plan (and they're back again)
---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |   148 | 17464 |   150   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1      |   148 | 17464 |   150   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_IASC |   148 |       |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("C1"='DE')
       filter("C2">'AB' AND SYS_OP_DESCEND("C2")<SYS_OP_DESCEND('AB') OR
              "C3">'AA' AND SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB') AND "C2">='AB' OR
              "C4">='BB' AND SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB') AND "C2">='AB' AND
              "C3">='AA')

On the plus side the upgrade to 12c has removed some of the sys_op_descend() appearances and seems to have fixed the cardinality anomaly when the sys_op_descend() calls do appear – but things are still going wrong in odd ways. (And this looks like another case where you could drop an index that you’re not using and see an execution plan change.)

Footnote:

When playing around with 12c, I had to be very careful to avoid confusing the issue as SQL Plan Directives were appearing from time to time, and giving me unexpected results as I repeated some of the experiments.

Footnote 2:

I haven’t actually addressed the question originally posed on OTN, but that can wait for another day.


Video Tutorial: XPLAN_ASH Active Session History - Introduction

Randolf Geist - Sun, 2015-01-11 16:38
I finally got around to preparing another part of the XPLAN_ASH video tutorial.

This part is about the main functionality of XPLAN_ASH: SQL statement execution analysis using Active Session History and Real-Time SQL Monitoring.

In this video tutorial I'll explain what the output of XPLAN_ASH is supposed to mean when using the Active Session History functionality of the script. Before diving into the details of the script output using sample reports, this part provides an overview and introduction that hopefully makes it easier to understand how the output is organized and what it is supposed to mean.

This is the initial, general introduction part. More parts to follow.

VC Investment Data visualization and analytics

Nilesh Jethwa - Sun, 2015-01-11 14:52

Using data from pwcmoneytree.com and easy-to-use dashboard software, we perform analytics on a huge dataset that spans 20 years of venture capital investment data from 1995 onward. Having data that goes this far back should give us enough to extract the necessary analytical juice out of it.

 

VC investment by industry

Change in investment pattern between 2000 and 2014

The year 2000 was definitely the peak of VC investment craziness. A whopping $105 billion was pumped into startups to bring them quickly to IPO. Ever since the crash of 2000… Continue reading the original article

Representing many-to-many tables in a Report

Dimitri Gielis - Sun, 2015-01-11 14:08
Consider my case: I have a customer table and a products table. Customers can select multiple products they want to have. The diagram looks like this: Customers on the left, Products on the right and a table in between that says which customers like which products. Customers might have multiple products and products might be linked to multiple customers - so a many-to-many relationship.


There are many ways to represent this data in a report. Here's a SQL statement that brings the three tables together:
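
As a minimal sketch of the kind of statement meant here (the table and column names - customers, products, customer_products - are assumptions, not the ones from the original example):

select c.name  as customer,
       p.name  as product
from   customers         c
join   customer_products cp on cp.customer_id = c.id
join   products          p  on p.id = cp.product_id
order by c.name, p.name;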

A default Report created on top of this SQL statement looks like this:


The same SQL in an Interactive Report gives us a few more possibilities; you could break by Customer and show the products underneath, for example:

But another technique I use frequently as well is the LISTAGG (analytic) function.
This function was introduced in Oracle DB 11gR2. Here's a great article that compares LISTAGG with the previous possibilities you had in the Oracle Database and also talks about performance.

Here's my SQL statement:
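
As a sketch, reusing the hypothetical table names from above, the statement is essentially:

select c.name as customer,
       listagg(p.name, ', ') within group (order by p.name) as products
from   customers         c
join   customer_products cp on cp.customer_id = c.id
join   products          p  on p.id = cp.product_id
group by c.name
order by c.name;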


This shows, for every customer, which products they like as a comma-separated list:

You could do the reverse as well: for a certain product, get a list of customers. I've found myself using the LISTAGG function a lot over the years, so hopefully you'll also find it useful if you didn't know about it yet...

You find the above examples online here.

Next to learning about the LISTAGG Oracle function, I also wanted to give this example to show that although APEX can do a lot of reporting out of the box, the more SQL you know, the more options you have to show your data. Long live SQL!!!

Categories: Development

Resilience

Floyd Teter - Sun, 2015-01-11 11:49
re·sil·ience /rəˈzilyəns/ noun; plural noun: resiliences
  1. the ability of a substance or object to spring back into shape; elasticity. "nylon is excellent in wearability and resilience"
  2. the capacity to recover quickly from difficulties; toughness. "the often remarkable resilience of so many British institutions"

My parents were children of the Great Depression here in the U.S.  The experience influenced how they approached their lives - stability over all else.  Even though they never quite realized the goal of stability, it is still one of the characteristics passed down by their generation to those that followed.  That's why society tends to idolize successful risk takers; they broke the self-imposed limitations that come from the focus on stability.

My own experience in working with NASA taught me that stability is not the key to preserving the viability of any system.  The key is resilience.  With all the hooey happening recently both in the IT world and in real life, I thought the idea of resilience might be worth a brief post here.

Let's consider a very narrowly-focused, basic statistic.  The U.S. Navy is the target of, on average, 30 cyber-attacks every second - every minute, every hour, every day over the course of a year.  That's close to a billion attacks in a 12-month period.  Common sense alone says they can't stop them all.  Some attacks succeed, some data is lost, some damage is done.  Stability can't be preserved.  So, for the U.S. Navy, the focus is on system resilience.

Resilience in IT systems essentially embraces the following idea:  bad things will eventually happen to your system, and you can't prevent it.  Make every effort to defend against hackers.  Build earthquake-ready systems to house your data center.  Keep your patches and maintenance up to date.  In my little corner of NASA, we used a floating iceberg analogy and referred to this as the "above-the-waterline" stuff...things we could see or foresee.  

But it's the things below the waterline that hold the most risk: a new hacking approach, a natural disaster of massive proportions, a unique anomaly, etc.  To address these risks, you design system architectures that can bounce back quickly from attacks and damage.  

So it's not a matter of preventing all the bad things from happening (you can't); it's a matter of how quickly your system can adapt and bounce back from the bad things that happen.

The following is a direct quote from the Rockefeller Foundation's work on resilience:

Resilient systems, organizations, or individuals possess five characteristics in good times and in times of stress. They are:
  • Aware. Awareness means knowing what your strengths and assets are, what liabilities and vulnerabilities you have, and what threats and risks you face. Being aware is not a static condition; it’s the ability to constantly assess, take in new information, reassess and adjust your understanding of the most critical and relevant strengths and weaknesses and other factors on the fly. This requires methods of sensing and information-gathering, including robust feedback loops, such as community meetings or monitoring systems for a global telecommunications network.
  • Diverse. Diversity implies that a person or system has a surplus of capacity such that it can successfully operate under a diverse set of circumstances, beyond what is needed for every-day functioning or relying on only one element for a given purpose. Diversity includes redundancy, alternatives, and back-ups, so it can call up reserves during a disruption or switch over to an alternative functioning mode. Being diverse also means that the system possesses or can draw upon a range of capabilities, information sources, technical elements, people or groups. 
  • Self-Regulating. This means elements within a system behave and interact in such a way as to continue functioning to the system’s purpose, which means it can deal with anomalous situations and interferences without extreme malfunction, catastrophic collapse, or cascading disruptions. This is sometimes called “islanding” or “de-networking”—a kind of failing safely that ensures failure is discrete and contained. A self-regulating system is more likely to withstand a disruption, less likely to exacerbate the effects of a crisis if it fails, and is more likely to return to function (or be replaced) more quickly once the crisis has passed.
  • Integrated. Being integrated means that individuals, groups, organizations and other entities have the ability to bring together disparate thoughts and elements into cohesive solutions and actions. Integration involves the sharing of information across entities, the collaborative development of ideas and solutions, and transparent communication with people and entities that are involved or affected. It also refers to the coordination of people groups and activities. Again, this requires the presence of feedback loops.
  • Adaptive. The final defining characteristic of resilience is being adaptive: the capacity to adjust to changing circumstances during a disruption by developing new plans, taking new actions, or modifying behaviors so that you are better able to withstand and recover from a disruption, particularly when it is not possible or wise to go back to the way things were before. Adaptability also suggests flexibility, the ability to apply existing resources to new purposes or for one thing to take on multiple roles.
Resilience is all about making systems and the components of those systems stronger:  hardware, software, people, communities, etc.

From an IT perspective, the next time you design a solution or a system, stop and think about how your solution or system could be designed for greater resilience.  You'd be amazed how simple and inexpensive it can be once you invest a little brain power.

And what I just wrote about the IT perspective?  It applies to real life situations too.  How's that for a pearl of wisdom?

Your thoughts?  Love to hear 'em!  Comments...

CREATE USER and ALTER USER – changing passwords and a New Year’s Resolution

The Anti-Kyte - Sun, 2015-01-11 08:39

Monday morning. The first day back at work in the New Year.
Still groggy, having been awoken from my slumber at the insistence of my Darth Vader Lego Alarm Clock, I stagger downstairs in search of coffee.
The clock was a Christmas present from Deb. Whilst clinking around the kitchen, I wonder whether it was intended as a subtle reminder of how she likes her coffee with only a little milk. Yes, she prefers it on the Dark Side.
Then I remember. I usually avoid New Year’s resolutions. I find them to be not unlike a cheap plastic toy at Christmas – there in the morning and then broken by the first afternoon.
This year however, is an exception.

In a recent post about APEX Authentication Schemes, I went to great lengths to ensure that a dynamic SQL statement to re-set a user's password was safe from the possibility of injection.
Fortunately, Jeff Kemp took the time to point out a couple of issues with my approach.
As a result, this year, my resolution is to :
READ THE MANUAL.

What follows is the result of keeping this resolution (so far, at least)…

Setting a Password with CREATE USER and ALTER USER

To start with, we’re going to forget about any PL/SQL context for this exercise and simply focus on the ALTER USER command when used to change a user’s password.

First of all, we need a user…

create user marvin identified by 42!
/

create user marvin identified by 42!
                                   *
ERROR at line 1:
ORA-00922: missing or invalid option


That’s a bit surprising. There’s no password verify function on the DEFAULT profile in the database :

select resource_name, limit
from dba_profiles
where resource_type = 'PASSWORD'
and resource_name = 'PASSWORD_VERIFY_FUNCTION'
and profile = 'DEFAULT'
/

RESOURCE_NAME                    LIMIT
-------------------------------- ----------------------------------------
PASSWORD_VERIFY_FUNCTION         NULL

…and even if there were – ORA-00922 ?

At this point, much head-scratching can be saved if we follow Jeff’s recommended link to the relevant documentation.

From this we can see that the password has the same rules applied to it as a database object. So…

  • it must start with a letter
  • …and can’t contain spaces…
  • …or characters other than alphanumeric, $, # or _

…unless…

create user marvin identified by "42!"
/

User created.

Yes, double-quotes around the password will allow other characters to be used, just as with database objects.
So, does the user need to provide the double-quotes when they connect ?

grant create session to marvin
/

…now….

connect marvin/42!@mydb

…works fine. No need to include the double-quotes when connecting.
If the password contains spaces however, then the double-quotes are required :

alter user marvin identified by "Feeling very depressed"
/

conn marvin/Feeling very depressed@XE
/
SP2-0306: Invalid option.
Usage: CONN[ECT] [{logon|/|proxy} [AS {SYSDBA|SYSOPER|SYSASM}] [edition=value]]
where <logon> ::= <username>[/<password>][@<connect_identifier>]
      <proxy> ::= <proxyuser>[<username>][/<password>][@<connect_identifier>]

conn marvin/"Feeling very depressed"@XE
/
Connected.
A Sensible Change Password Procedure

We now come back to the question of how to provide the functionality for a user to change their password by means of a PL/SQL procedure. For the sake of simplicity, we’re going to assume that there is no password verify function in place and therefore the old password does not need to be provided :

create or replace procedure change_pwd_pr
(
    i_new_pwd in varchar2
)
    authid current_user
is
begin
    execute immediate 'alter user '||user||' identified by '||CHR(34)||i_new_pwd||CHR(34);
end;
/

Now, we are dropping user input directly into a dynamic SQL statement, something that rings alarm bells in terms of the potential for injection.
There would appear to be two techniques that are common to injection of such code.

Here, however, we’re only concatenating a password. This is not a database object per se. Once it is set, it will be referenced only as the password hash by Oracle’s internal authentication process and will therefore not be useful as a blind injection string.
Additionally, by enclosing the password in double-quotes, it is simply treated as a string within the dynamic SQL statement itself.

Let’s put this to the test.

First, we need to make the procedure available to MARVIN :

grant execute on change_pwd_pr to marvin;

Now let’s run some tests. Connect as MARVIN and …

set serveroutput on size unlimited
declare
    
    function test_pwd_fn( i_test_no in pls_integer, i_pwd in varchar2)
        return varchar2
    is
    begin
        mike.change_pwd_pr( i_new_pwd => i_pwd);
        return ('Test '||i_test_no||' - Password Changed.');
    exception
        when others then
            return('Test '||i_test_no||'- Password Change Failed : '||substr(sqlerrm,1,100));
    end test_pwd_fn;
begin
    dbms_output.put_line(test_pwd_fn( 1, '42!'));
    dbms_output.put_line(test_pwd_fn( 2, 'silly'||'; grant dba to marvin'));
    dbms_output.put_line(test_pwd_fn( 3, q'[I want you to know that I'm feeling very depressed]'));
    dbms_output.put_line(test_pwd_fn( 4, 'How awful!'));
end;
/    

Run this as Marvin and we get :

SQL> @tests.sql
Test 1 - Password Changed.
Test 2 - Password Changed.
Test 3- Password Change Failed : ORA-00972: identifier is too long
Test 4 - Password Changed.

PL/SQL procedure successfully completed.

SQL> 

If we now check, we can confirm that our attempt at injection in test 2 was not successful :

select granted_role 
from user_role_privs
/

no rows selected

SQL> 
Conclusion

Thanks to Jeff, I will henceforth be moderating my paranoia when dealing with programmatic password resets.
Oh yes, and I’ll try to remember to read the manual.


Filed under: Oracle, PL/SQL, SQL Tagged: alter user, alter user identified by, create user, create user identified by, dba_profiles, escape new password with double quotes, identified by, ORA-00922, ORA-00972, oracle password rules, password verify function

Using Oracle Big Data SQL to Add Dimensions and Attributes to Hadoop Reporting

Rittman Mead Consulting - Sun, 2015-01-11 06:18

In a previous post I looked at using Oracle’s new Big Data SQL product with ODI12c, where I used Big Data SQL to expose two Hive tables as Oracle external tables, and then join them using the BETWEEN operator, something that’s not possible with regular HiveQL. In this post I’m going to look at using Oracle Big Data SQL with OBIEE11g, to enable reporting against Hive tables without the need to use Hive ODBC drivers and to bring in reference data without having to stage it in Hive tables in the Hadoop cluster.

In this example I’ve got some webserver log activity from the Rittman Mead Blog stored as a Hive table in Hadoop, which in its raw form only has a limited amount of descriptive data and wouldn’t be all that useful to users reporting against it using OBIEE. Here’s the contents of the Hive table as displayed via SQL*Developer:


When I bring this table into OBIEE, I really want to add details of the country that each user is visiting from, and also details of the category that each post referenced in the webserver logs belongs to. Tables for these reference data items can be found in an accompanying Oracle database, like this:


The idea then is to create an ORACLE_HIVE external table over the Hive table containing the log activity, and then import all of these tables into the OBIEE RPD as regular Oracle tables. Back in SQL*Developer, connected to the database that has the link setup to the Hadoop cluster via Big Data SQL, I create the external table using the new ORACLE_HIVE external table access driver:
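
As a minimal sketch, the DDL is along the following lines - the column list, the Hive table name (default.access_per_post) and the DEFAULT_DIR directory are assumptions, and the exact access-parameter syntax is worth checking against the Big Data SQL documentation:

-- sketch only: substitute your own columns, directory and Hive table name
create table bda_output.access_per_post_exttab (
  host          varchar2(100),
  request_date  varchar2(40),
  post_id       varchar2(20),
  status        varchar2(10)
)
organization external (
  type oracle_hive
  default directory default_dir
  access parameters (
    com.oracle.bigdata.tablename=default.access_per_post
  )
)
reject limit unlimited;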


And now with the Hive table exposed as the Oracle external table BDA_OUTPUT.ACCESS_PER_POST_EXTTAB, I can import all four tables into the OBIEE repository.


I can now create joins across the two Oracle schemas and four tables:


and then create a business model and presentation model to define a simple star schema against the combined dataset:


Once the RPD is saved and made available to the Presentation layer, I can now go and create some simple reports against the Hive and Oracle tables, with the Big Data SQL feature retrieving the Hive data using SmartScan technology running directly on the Hadoop cluster – bypassing MapReduce and filtering, projecting and just returning the results dataset back to the Exadata server running the Oracle SQL query.


In the previous ODI12c and Big Data SQL posting, I used the Big Data SQL feature to enable a join between the Hive table and a table containing IP address range lookups using the BETWEEN operator, so that I could return the country name for each visitor to the website. I can do a similar thing with OBIEE, by first recreating the main incoming fact table source as a view over the ORACLE_HIVE external table and adding an IP integer calculation that I can then use for the join to the IP range lookup table (and also take the opportunity to convert the log-format date string into a proper Oracle DATE datatype):
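
A sketch of that view might look like the following - the column names, the dotted-quad format of the host column and the log timestamp layout are assumptions, so adjust the expressions to match the real data:

create or replace view bda_output.access_per_post_v as
select
    post_id,
    host,
    -- dotted-quad IP address converted to an integer, for a BETWEEN join
    -- to the IP range lookup table
      to_number(regexp_substr(host, '[^.]+', 1, 1)) * 16777216
    + to_number(regexp_substr(host, '[^.]+', 1, 2)) * 65536
    + to_number(regexp_substr(host, '[^.]+', 1, 3)) * 256
    + to_number(regexp_substr(host, '[^.]+', 1, 4))                  as ip_integer,
    -- common-log timestamp (e.g. 10/Jan/2015:06:18:02) converted to a DATE;
    -- assumes English month abbreviations
    to_date(substr(request_date, 1, 20), 'DD/Mon/YYYY:HH24:MI:SS')   as request_dt
from
    bda_output.access_per_post_exttab;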


and then using that to join to a new table I’ve imported from the BLOG_REFDATA Oracle schema that contains the IP range lookups:


Now I can add country as a dimension, and create reports that break down site visits by country of access.


Similarly, I can break the date column in the view over the Hive external table out into its own logical dimension table, and then create some reports to show site access over time.


and with the final RPD looking like this:


If you’re interested in reading more about Oracle Big Data SQL I also covered it earlier on the blog around the launch date, with this post introducing the feature and another looking at how it extends Oracle security over your Hadoop cluster.

Categories: BI & Warehousing

EBS 12.2 Blocking a Concurrent Program while Online Patching

Senthil Rajendran - Sat, 2015-01-10 22:36
If you need to block a concurrent program while online patching is in progress in EBS 12.2, here are the steps.

  • Use the Concurrent Program window or page to edit your concurrent program definition.
  • Select the Incompatibilities button to open the Incompatible Programs window.
  • Add a new global incompatibility rule for your program with the following program:
    • Application Name: Applications DBA
    • Program Name: Online Patching In Progress (internal name: ADZDPATCH) concurrent program

    How To Start a Case in Oracle Adaptive Case Management 12c

    Andrejus Baranovski - Sat, 2015-01-10 03:43
    A blog reader asked me to describe how to start a new case in Oracle ACM 12c. You can read my previous blog post on the ACM 12c topic - Adaptive Case Management 12c and ADF Human Tasks. There are multiple ways to start a case, depending on whether you just want to test the case or really use it. I would recommend using SoapUI to test the case. In a real scenario, the case will most likely be started from a third-party web service or through the ACM 12c Java API. Here I would like to describe how you could use SoapUI to test an ACM process during development.

    Once the ACM 12c application is deployed, you can open it in EM. There is an option to test the deployed composite and invoke it through the EM Web Service tester. In the example below you can see the startCase operation selected and the payload structure displayed. Unfortunately, testing an ACM process through EM doesn't work well: the payload structure is very confusing and it usually fails with 'mandatory attributes missing' errors:


    Instead I would recommend using SoapUI to test the ACM 12c composite. You can download and use it free of charge. Start the tool and choose to create a New SOAP Project:


    We need the WSDL URL to define a SOAP project in SoapUI. You can copy the WSDL URL from the EM test screen:


    Paste it into the SoapUI dialog (Initial WSDL field) and the Project Name will be filled in automatically (keep the 'Create sample requests for all operations?' option selected):


    SoapUI will fetch all operations from the service and a new SoapUI project will be generated:


    In order to start a new case, expand the startCase operation and open Request 1. The request will be pre-filled with the default payload structure (similar to what we saw in EM):


    Save the SoapUI project before making any changes to the request structure. I would suggest saving it under the ACM JDEV application structure - the project is saved into a single XML file:


    Change the request to include the custom data payload (you can find this XML code inside the sample application available for download) and invoke the service. If the request is successful, you should see the case number returned in the response:


    A case with the same number should be available in BPM Workspace:


    Here you can download the sample application with the SoapUI project (XML file) included - HotelBookingProcessing_v1.zip.

    Migration

    DBMS2 - Sat, 2015-01-10 00:45

    There is much confusion about migration, by which I mean applications or investment being moved from one “platform” technology — hardware, operating system, DBMS, Hadoop, appliance, cluster, cloud, etc. — to another. Let’s sort some of that out. For starters:

    • There are several fundamentally different kinds of “migration”.
      • You can re-host an existing application.
      • You can replace an existing application with another one that does similar (and hopefully also new) things. This new application may be on a different platform than the old one.
      • You can build or buy a wholly new application.
      • There’s also the in-between case in which you extend an old application with significant new capabilities — which may not be well-suited for the existing platform.
    • Motives for migration generally fall into a few buckets. The main ones are:
      • You want to use a new app, and it only runs on certain platforms.
      • The new platform may be cheaper to buy, rent or lease.
      • The new platform may have lower operating costs in other ways, such as administration.
      • Your employees may like the new platform’s “cool” aspect. (If the employee is sufficiently high-ranking, substitute “strategic” for “cool”.)
    • Different apps may be much easier or harder to re-host. At two extremes:
      • It can be forbiddingly difficult to re-host an OLTP (OnLine Transaction Processing) app that is heavily tuned, tightly integrated with your other apps, and built using your DBMS vendor’s proprietary stored-procedure language.
      • It might be trivial to migrate a few long-running SQL queries to a new engine, and pretty easy to handle the data connectivity part of the move as well.
    • Certain organizations, usually packaged software companies, design portability into their products from the get-go, with at least partial success.

    I mixed together true migration and new-app platforms in a post last year about DBMS architecture choices, when I wrote:

    • Sometimes something isn’t broken, and doesn’t need fixing.
    • Sometimes something is broken, and still doesn’t need fixing. Legacy decisions that you now regret may not be worth the trouble to change.
    • Sometimes — especially but not only at smaller enterprises — choices are made for you. If you operate on SaaS, plus perhaps some generic web hosting technology, the whole DBMS discussion may be moot.

    In particular, migration away from legacy DBMS raises many issues:

    • Feature incompatibility (especially in stored-procedure languages and/or other vendor-specific SQL).
    • Your staff’s programming and administrative skill-sets.
    • Your investment in DBMS-related tools.
    • Your supply of hockey tickets from the vendor’s salesman.

    Except for the first, those concerns can apply to new applications as well. So if you’re going to use something other than your enterprise-standard RDBMS, you need a good reason.

    I then argued that such reasons are likely to exist for NoSQL DBMS, but less commonly for NewSQL. My views on that haven’t changed in the interim.

    More generally, my pro-con thoughts on migration start:

    • Pure application re-hosting is rarely worthwhile. Migration risks and costs outweigh the benefits, except in a few cases, one of which is the migration of ELT (Extract/Load/Transform) from expensive analytic RDBMS to Hadoop.
    • Moving from in-house to co-located data centers can offer straightforward cost savings, because it’s not accompanied by much in the way of programming costs, risks, or delays. Hence Rackspace’s refocus on colo at the expense of cloud. (But it can be hard on your data center employees.)
    • Moving to an in-house cluster can be straightforward, and is common. VMware is the most famous such example. Exadata consolidation is another.
    • Much of new application/new functionality development is in areas where application lifespans are short — e.g. analytics, or customer-facing internet. Platform changes are then more practical as well.
    • New apps or app functionality often should and do go where the data already is. This is especially true in the case of cloud/colo/on-premises decisions. Whether it’s important in a single location may depend upon the challenges of data integration.

    I’m also often asked for predictions about migration. In light of the above, I’d say:

    • Successful DBMS aren’t going away.
      • OLTP workloads can usually be lost only so fast as applications are replaced, and that tends to be a slow process. Claims to the contrary are rarely persuasive.
      • Analytic DBMS can lose workloads more easily — but their remaining workloads often grow quickly, creating an offset.
    • A large fraction of new apps are up for grabs. Analytic applications go well on new data platforms. So do internet apps of many kinds. The underlying data for these apps often starts out in the cloud. SaaS (Software as a Service) is coming on strong. Etc.
    • I stand by my previous view that most computing will wind up on appliances, clusters or clouds.
    • New relational DBMS will be slow to capture old workloads, even if they are slathered with in-memory fairy dust.

    And for a final prediction — discussion of migration isn’t going to go away either. :)

    Categories: Other

    APEX and Font Awesome integration

    Dimitri Gielis - Fri, 2015-01-09 17:30
    In the previous post I talked about scalable vector images. In this post we will integrate Font Awesome in our APEX app, but the steps are the same if you pick another library.

    Step 1: we need to make the library available in our APEX application.

    The easiest way to make the library available in your entire application is to add the following line in your Page Template:


    How do I know it's that string, you might ask? If you go to the Getting Started section of the library you chose, it will tell you:

    You're actually done now with the integration... You'll find some examples on the site to see the properties you can use (size etc.). The list of icons is also available here.

    Here are some examples I use the icon fonts for:

    An image in a region

    Sometimes I just want an image in a region, so I included the paypal icon (fa-paypal) and gave it a bigger size (fa-5x).


    You see the result here:

    Edit icon in a report

    During the creation of a report region you can select the image you want; those are either images or text. Change the Link Icon to the following:


    Replacing the image will immediately give your report a fresher and more modern look




    Icons in the Navigation List

    For my navigation list I mainly use the "Vertical List with Subtext and Icon" list template.
    By default it looks like this:

    We want to specify for every list item which icon we want to use. To do that, edit the list template and change, in the Template Definition:

    #LIST_LABEL#

    to:





    Note: I changed it in both the Current and Noncurrent List Template.

    Now we can define the icon we want to use in our List at Attribute 02:


    The final step to make the list look nice is to make the icon float left, so it's next to the text, and to make it white on hover. As we use an icon font, we can just change the color with some CSS (you can add that on your page or in your page template in the Inline CSS position).


    This is the result:

    You find the online example at https://www.apexrnd.be/ords/f?p=DGIELIS_BLOG:FONTAWESOME
    Categories: Development

    Flashback logging overhead: 'db file sequential read' on UNDO tablespace

    Yann Neuhaus - Fri, 2015-01-09 13:09

    In my previous post I reproduced an issue where a bulk insert into a GTT was generating too much undo. But the real case I encountered was worse: the insert was reading a lot of UNDO. And a lot of this undo was read from disk.

    Insert into GTT: bulk with APPEND_VALUES

    Yann Neuhaus - Fri, 2015-01-09 13:04

    This post is about the case I had where a GTT was generating too much undo. The effects were amplified by the fact that the database was in flashback logging mode (see Jonathan Lewis's answer to my question on the OTN forum about that), but that's for the next post.

    Toad world's Oracle Pro for January 2015 - truly honored

    Syed Jaffar - Fri, 2015-01-09 12:44
    Thank you Toad for honoring me with 'Toad World's Oracle Pro for January 2015'. I am indeed truly touched by the honor.


    http://www.toadworld.com/platforms/oracle/default.aspx?Redirected=true

    I thank my family, all friends, colleagues, Oracle community, my readers, Chris and Steve Hilker from Toad for your support and encouragement.

    2015 AT&T Developer Summit & Hackathon

    Oracle AppsLab - Fri, 2015-01-09 12:16

    Editor’s Note: Noel did it! After competing in 2013 and 2014, he broke through and won a prize at the annual AT&T Developer Summit Hackathon (@attdeveloper). Congrats to the whole team.


    The whole team minus Anthony who was too sick to enjoy the moment.

     

    This year, Anthony (@anthonyslai), Raymond, Osvaldo (@vaini11a), Luis (@lsgaleana), Tony and I (@noelportugal) participated in the AT&T Developer Summit & Hackathon.

    From the beginning we realized we had too much brain power for just one project, so we decided to split the group. The first group would attempt to go for the first overall prize, and the second group would focus on just one accelerator prize from a sponsor.

    “Your Voice”  – First Overall Prize Entry:

    We knew we only had three minutes to present, and we had to leave an impression with the judges. So, we opted to build our solution around a specific use case with only one persona in mind. The use case was to use our voice to control AT&T Digital Life, AT&T WebRTC and the AT&T Speech APIs. The persona was an older gentleman going about his daily life around the house. We opted to use our latest toy, the Amazon Echo, as a way to interface with AT&T services. We went to work, found a couple of limitations, but in the end we overcame them and felt pretty confident in our solution.

    Here is our use case:

    Tony is an 89-year-old man who lives alone. He is pretty self-sufficient, but his daughter (Cindy) worries about his well-being. So she bought AT&T Digital Life to make sure her dad was safe and sound. Tony doesn't want to be bothered to learn all the new mumbo-jumbo that comes with new technology, like a mobile app, a fancy remote, etc. Instead he prefers to use "Your Voice" to make sure all doors are locked, the garage door is closed, lights are off/on, etc. "Your Voice" also works as a personal assistant that can take care of reminding Tony of important things, read email, initiate video calls (WebRTC), etc.

    So that’s it! We pre-programmed sequences to identify actions. When Tony said “Alexa, I’m tired. I’m going to bed,” the system started a series of actions, not just one. When Tony said “Alexa, call my grandson,” the system automatically started the projector and did a video conference.

    And finally we created a video introduction for our presentation:

     “Sensus” – Accelerator Entry:

    Raymond and Anthony decided to enter the “MediaTek Labs IoT and Wearables Challenge.” MediaTek (@MediaTekLabs) has a very nice multipurpose development board called LinkIt ONE that includes an array of connectivity options (BLE, Wifi, GSM, GPRS, etc.), plus access to a lot of plug-and-play sensors.

    They built a sensor station to monitor environmental safety metrics (temperature, fire hazard) and environmental health metrics (noise, dust, UV). They used Android Wear as the wearable platform to notify users when things happen, using an IFTTT model.

    Their solution was an end-to-end solution using only the MediaTek LinkIt ONE and their cloud platform. This gave them the edge, since it was a pure MediaTek solution, and its value became clear when the judges came to talk to them. Our room had A/C issues and constantly overheated, so we had to chase the maintenance guys quite often to get it fixed. Raymond talked to them about the opportunity to solve the issue by giving a wearable device to the head of maintenance, so he would know what's going on in the building just by "glancing."

    “Sensus” got the first prize for the accelerator entry. And as a team we could not be happier!

    Conclusion:

    Hackathons or developer challenges are a great way to work as a team, learn new technologies and push the limits of what can be accomplished in such a short time. As a team we have proven to be always ahead of the curve with our solutions, e.g. last year we built a Smart Holster for Law Enforcement, and if you have been following CES 2015, there are some companies doing similar implementations.

    There is no doubt that voice control will be huge this year. The technology is maturing at a very fast rate and we are bound to see a lot more great implementations.

    Finally, winning is not everything in these events. The journey is what matters - what we learned along the way. I find it very apt to have this competition in Las Vegas, since this place is full of chance, probability and, ultimately, pure luck.

    Inserting into a table with potentially long rows

    Hemant K Chitale - Fri, 2015-01-09 09:25
    Note :  This post builds on blog posts by Nikolay Savvinov and Jonathan Lewis.


    Nikolay Savvinov and Jonathan Lewis have identified that when you have a table with a potentially large row size -- where the theoretical maximum row length exceeds the block size -- redo overheads are significantly greater for multi-row inserts.

    First, I demonstrate with a table with a small potential row size: a table with 1 number column and 3 columns of a total max length of 30 characters.  The actual data inserted is also very little.  The first row has the values (1,'1','1','1'), the second row has the values (2,'2','2','2') and so on to (1000,'1000','1000','1000') for a total of 1,000 rows:

    SQL> drop table hkc_test_small_row_size purge;
    drop table hkc_test_small_row_size purge
    *
    ERROR at line 1:
    ORA-00942: table or view does not exist


    SQL>
    SQL> create table hkc_test_small_row_size
    2 (id_column number, data_col_1 varchar2(10), data_col_2 varchar2(10), data_col_3 varchar2(10));

    Table created.

    SQL> create unique index hkc_tlrs_undx on hkc_test_small_row_size(id_column);

    Index created.

    SQL>
    SQL>
    SQL>
    SQL> select n.name, s.value At_Beginning
    2 from v$statname n, v$sesstat s
    3 where n.statistic#=s.statistic#
    4 and n.name in ('redo entries', 'redo size')
    5 and s.sid = (select distinct sid from v$mystat)
    6 order by 1
    7 /

    NAME AT_BEGINNING
    ---------------------------------------------------------------- ------------
    redo entries 102
    redo size 23896

    SQL>
    SQL> insert into hkc_test_small_row_size
    2 select rownum+4000, to_char(rownum), to_char(rownum), to_char(rownum)
    3 from dual
    4 connect by level < 1001
    5 /

    1000 rows created.

    SQL> commit;

    Commit complete.

    SQL>
    SQL> select n.name, s.value Normal_Insert
    2 from v$statname n, v$sesstat s
    3 where n.statistic#=s.statistic#
    4 and n.name in ('redo entries', 'redo size')
    5 and s.sid = (select distinct sid from v$mystat)
    6 order by 1
    7 /

    NAME NORMAL_INSERT
    ---------------------------------------------------------------- -------------
    redo entries 154
    redo size 92488

    SQL>
    SQL>
    SQL>
    SQL> insert /*+ APPEND */ into hkc_test_small_row_size
    2 select rownum, to_char(rownum), to_char(rownum), to_char(rownum)
    3 from dual
    4 connect by level < 1001
    5 /

    1000 rows created.

    SQL>
    SQL> commit;

    Commit complete.

    SQL>
    SQL> select n.name, s.value APPEND_Insert
    2 from v$statname n, v$sesstat s
    3 where n.statistic#=s.statistic#
    4 and n.name in ('redo entries', 'redo size')
    5 and s.sid = (select distinct sid from v$mystat)
    6 order by 1
    7 /

    NAME APPEND_INSERT
    ---------------------------------------------------------------- -------------
    redo entries 252
    redo size 193396

    SQL>
    SQL>
    SQL> drop table hkc_test_small_row_size purge;

    Table dropped.

    SQL>

    Thus, we can see that, for the "SMALL_ROW_SIZE" table, the redo generated for 1,000 of these small rows is :
    Simple Insert of 1,000 rows :   52 redo entries and 68,592 bytes.
    Direct Path (APPEND) of 1,000 rows : 98 redo entries and 100,908 bytes.


    Next, I demonstrate with a table with a large potential row size -- exceeding the database block size.  Exactly the same data set is inserted -- the first row has the values (1,'1','1','1'), the second row has the values (2,'2','2','2') and so on to (1000,'1000','1000','1000') for a total of 1,000 rows.

    SQL> drop table hkc_test_large_row_size purge;
    drop table hkc_test_large_row_size purge
    *
    ERROR at line 1:
    ORA-00942: table or view does not exist


    SQL>
    SQL> create table hkc_test_large_row_size
    2 (id_column number, data_col_1 varchar2(4000), data_col_2 varchar2(4000), data_col_3 varchar2(4000));

    Table created.

    SQL> create unique index hkc_tlrs_undx on hkc_test_large_row_size(id_column);

    Index created.

    SQL>
    SQL>
    SQL>
    SQL> select n.name, s.value At_Beginning
    2 from v$statname n, v$sesstat s
    3 where n.statistic#=s.statistic#
    4 and n.name in ('redo entries', 'redo size')
    5 and s.sid = (select distinct sid from v$mystat)
    6 order by 1
    7 /

    NAME AT_BEGINNING
    ---------------------------------------------------------------- ------------
    redo entries 102
    redo size 23900

    SQL>
    SQL>
    SQL> insert into hkc_test_large_row_size
    2 select rownum+4000, to_char(rownum), to_char(rownum), to_char(rownum)
    3 from dual
    4 connect by level < 1001
    5 /

    1000 rows created.

    SQL> commit;

    Commit complete.

    SQL>
    SQL> select n.name, s.value Normal_Insert
    2 from v$statname n, v$sesstat s
    3 where n.statistic#=s.statistic#
    4 and n.name in ('redo entries', 'redo size')
    5 and s.sid = (select distinct sid from v$mystat)
    6 order by 1
    7 /

    NAME NORMAL_INSERT
    ---------------------------------------------------------------- -------------
    redo entries 2145
    redo size 526320

    SQL>
    SQL>
    SQL> insert /*+ APPEND */ into hkc_test_large_row_size
    2 select rownum, to_char(rownum), to_char(rownum), to_char(rownum)
    3 from dual
    4 connect by level < 1001
    5 /

    1000 rows created.

    SQL>
    SQL> commit;

    Commit complete.

    SQL>
    SQL> select n.name, s.value APPEND_Insert
    2 from v$statname n, v$sesstat s
    3 where n.statistic#=s.statistic#
    4 and n.name in ('redo entries', 'redo size')
    5 and s.sid = (select distinct sid from v$mystat)
    6 order by 1
    7 /

    NAME APPEND_INSERT
    ---------------------------------------------------------------- -------------
    redo entries 2243
    redo size 627228

    SQL>
    SQL>
    SQL> drop table hkc_test_large_row_size purge;

    Table dropped.

    SQL>

    Thus, we can see that, for the "LARGE_ROW_SIZE" table, the redo generated for 1,000 of these actually small rows is :
    Simple Insert of 1,000 rows :   2,043 redo entries and 502,420 bytes.
    Direct Path (APPEND) of 1,000 rows : 98 redo entries and 100,908 bytes.

    Therefore, a simple (regular) Insert into such a table where the *potential* row size (not necessarily the actual row size) exceeds the block size is very expensive.  Apparently, the optimization to insert multiple rows into a single block with minimal redo is not invoked when Oracle thinks that the row may exceed the block size.  It switches to a row-by-row insert.  This is evident in the LARGE_ROW_SIZE case where 2,043 redo entries is more than 1000 rows + 1000 index entries.  The SMALL_ROW_SIZE had only 52 redo entries.
    Remember : This happens when
    (a) The potential row size exceeds the block size (irrespective of the actual row size which may be very few bytes)
    AND
    (b) a regular (non-Direct) Insert is used.
    In such cases, a Direct Path Insert works out better.  Although there are obvious downsides to doing Direct Path Inserts -- the HighWaterMark keeps extending, irrespective of Deletes that may have been issued.
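
    If you want to check whether one of your own tables is exposed to this behaviour, a rough sketch of a check (assuming the default 8KB block size -- compare the result against your actual db_block_size) is to add up the declared column lengths:

    select
        table_name,
        sum(data_length)  potential_row_size_bytes   -- declared maximum, not actual
    from user_tab_columns
    where table_name in ('HKC_TEST_SMALL_ROW_SIZE','HKC_TEST_LARGE_ROW_SIZE')
    group by table_name
    order by table_name;
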
    .
    .
    .


    Categories: DBA Blogs

    Securing Big Data - Part 4 - Not crying Wolf.

    Steve Jones - Fri, 2015-01-09 09:00
    In the first three parts of this I talked about how Securing Big Data is about layers, then about how you need to use the power of Big Data to secure Big Data, then how maths and machine learning help to identify what is reasonable and what is anomalous. The Target Credit Card hack highlights this problem.  Alerts were made, lights did flash.  The problem was that so many lights flashed and
    Categories: Fusion Middleware

    count(*) – again !

    Jonathan Lewis - Fri, 2015-01-09 06:56

    Because you can never have enough of a good thing.

    Here’s a thought – The optimizer doesn’t treat all constants equally.  No explanations, just read the code – execution plans at the end:

    
    SQL> drop table t1 purge;
    SQL> create table t1 nologging as select * from all_objects;
    SQL> create bitmap index t1_b1 on t1(owner);
    
    SQL> alter session set statistics_level = all;
    
    SQL> set serveroutput off
    SQL> select count(*) from t1;
    SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
    
    SQL> select count(1) from t1;
    SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
    
    SQL> select count(-1) from t1;
    SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
    
    SQL> alter session set cursor_sharing = force;
    SQL> alter system flush shared_pool;
    
    SQL> select count(1) from t1;
    SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
    
    

    So, are you expecting to see the same results and performance from every single one of those queries ?

    
    select count(*) from t1
    ----------------------------------------------------------------------------------------------------------
    | Id  | Operation                     | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
    ----------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT              |       |      1 |        |      1 |00:00:00.01 |       9 |      5 |
    |   1 |  SORT AGGREGATE               |       |      1 |      1 |      1 |00:00:00.01 |       9 |      5 |
    |   2 |   BITMAP CONVERSION COUNT     |       |      1 |  84499 |     31 |00:00:00.01 |       9 |      5 |
    |   3 |    BITMAP INDEX FAST FULL SCAN| T1_B1 |      1 |        |     31 |00:00:00.01 |       9 |      5 |
    ----------------------------------------------------------------------------------------------------------
    
    select count(1) from t1
    -------------------------------------------------------------------------------------------------
    | Id  | Operation                     | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    -------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT              |       |      1 |        |      1 |00:00:00.01 |       9 |
    |   1 |  SORT AGGREGATE               |       |      1 |      1 |      1 |00:00:00.01 |       9 |
    |   2 |   BITMAP CONVERSION COUNT     |       |      1 |  84499 |     31 |00:00:00.01 |       9 |
    |   3 |    BITMAP INDEX FAST FULL SCAN| T1_B1 |      1 |        |     31 |00:00:00.01 |       9 |
    -------------------------------------------------------------------------------------------------
    
    select count(-1) from t1
    -------------------------------------------------------------------------------------------------
    | Id  | Operation                     | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    -------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT              |       |      1 |        |      1 |00:00:00.43 |       9 |
    |   1 |  SORT AGGREGATE               |       |      1 |      1 |      1 |00:00:00.43 |       9 |
    |   2 |   BITMAP CONVERSION TO ROWIDS |       |      1 |  84499 |  84499 |00:00:00.22 |       9 |
    |   3 |    BITMAP INDEX FAST FULL SCAN| T1_B1 |      1 |        |     31 |00:00:00.01 |       9 |
    -------------------------------------------------------------------------------------------------
    
    SQL> alter session set cursor_sharing = force;
    SQL> alter system flush shared_pool;
    
    select count(1) from t1
    select count(:"SYS_B_0") from t1    -- effect of cursor-sharing
    -------------------------------------------------------------------------------------------------
    | Id  | Operation                     | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    -------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT              |       |      1 |        |      1 |00:00:00.46 |       9 |
    |   1 |  SORT AGGREGATE               |       |      1 |      1 |      1 |00:00:00.46 |       9 |
    |   2 |   BITMAP CONVERSION TO ROWIDS |       |      1 |  84499 |  84499 |00:00:00.23 |       9 |
    |   3 |    BITMAP INDEX FAST FULL SCAN| T1_B1 |      1 |        |     31 |00:00:00.01 |       9 |
    -------------------------------------------------------------------------------------------------
    
    

    Check operation 2 in each plan – with the bitmap index in place there are two possible ways to count the rows referenced in the index – and one of them converts to rowids and does a lot more work.

    The only “real” threat in this set of examples, of course, is the bind variable one – there are times when count(*) WILL be faster than count(1). Having said that, there is a case where a redundant “conversion to rowids” IS a threat – and I’ll write that up some time in the near future.

    Trick question: when is 1+1 != 2 ?
    Silly answer: compare the plan for "select count(2) from t1" with the plan for "select count(1+1) from t1"
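
    If you want to try that comparison yourself, here's the obvious sketch, following the same pattern as the script above:

    select count(2) from t1;
    select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

    select count(1+1) from t1;
    select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));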

    Note: All tests above run on 12.1.0.2


    Oracle Audit Vault - Custom Reports and BI Publisher

    Custom reports can be created in Oracle Audit Vault using Oracle BI Publisher.  The BI Publisher desktop add-on for Microsoft Word can be used to modify existing reports or create new ones.

    For example, to modify a report to meet specific corporate or internal audit needs, download a standard Oracle Audit Vault report that is similar (Auditor -> Reports -> Custom Reports -> Uploaded Reports).  Click on the icon to download both the template and the report definition, and load both files into BI Publisher.

    Once complete, upload the report definition to the same location (Auditor -> Reports -> Custom Reports -> Uploaded Reports).

    If you have questions, please contact us at info@integrigy.com.


    Tags: Auditing, Oracle Audit Vault
    Categories: APPS Blogs, Security Blogs

    Je suis Charlie

    Frank van Bortel - Fri, 2015-01-09 03:34
    Je suis Charlie. Of course, I too am Charlie. - Frank