
David Kurtz

This blog contains things about PeopleSoft that DBAs might find interesting.
Or then again they might not!
Non-PeopleSoft Oracle stuff is at blog.go-faster.co.uk.

Minimising Parse Time in Application Engine with ReUseStatement

Fri, 2014-10-24 09:15
This article explains how to determine the overhead of using literal values rather than bind variables in SQL submitted by PeopleSoft Application Engine programs. Using a combination of PeopleSoft Application Engine Batch Timings and Oracle Active Session History (ASH), it is possible to determine where it is most effective to alter this behaviour by setting the ReUse attribute.
ReUse Statement Flag
I originally wrote about the Performance Benefits of ReUseStatement in Application Engine over 5 years ago, and the problem is no less significant today than it was then.  There are still many places in the delivered PeopleSoft application that would benefit from it.  However, it is not possible simply to set the attribute across all Application Engine programs, because it will cause errors in steps where SQL is dynamically generated, so each case must be considered.  Where the ReUse attribute is changed, it must be tested and migrated like any other customisation.

Recently, I have been encountering problems in a number of places on a Financials system which were resolved by enabling ReUseStatement.  So I started by calculating how much time was lost by not setting it.
Application Engine Batch Timings
If an Application Engine step is not reused, then it is compiled prior to every execution, during which the Application Engine bind variables are resolved to literals. The number and duration of compilations and executions are reported in the Batch Timings, the production of which is controlled with the TraceAE flag in the Process Scheduler configuration file (psprcs.cfg).

;-------------------------------------------------------------------------
; AE Tracing Bitfield
;
; Bit Type of tracing
; --- ---------------
...
; 128 - Timings Report to AET file
...
; 1024 - Timings Report to tables
...
TraceAE=1152
;------------------------------------------------------------------------

Whatever other TraceAE flags you set, I would always recommend that you set 128 and 1024 so that batch timings are always emitted to both file and database.  The overhead of enabling them is negligible because they are managed in memory at run time and physically written once when the Application Engine program ends.

NB: The settings in the configuration file can be overridden by command-line options.  If you specify the -TRACE parameter in the Process Scheduler definitions, remember to also set these flags.
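For illustration, this is roughly how the flags would be passed when Application Engine is invoked directly from the command line (a sketch only; the database, operator and program names here are hypothetical):

psae -CT ORACLE -CD FSPRD -CO PS -CP password -R MYRUNCTL -AI MY_AE_PROG -I 1234 -TRACE 1152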

Batch timings are written to the AE Trace file at end of an Application Engine program, and to PS_BAT_TIMINGS PeopleTools tables at the successful end of an Application Engine program. 
It can be useful to have batch timings in the trace file of an Application Engine program that failed, because they are not written to the database.  As your system runs, you will build up batch timings for all of your successful Application Engine processes (which will be most of them).  This is a useful source of performance metrics.
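As a quick sanity check (a sketch, querying the same PeopleTools table used in the profile queries below), you can confirm that batch timings are reaching the database:

SELECT COUNT(*) processes, MAX(enddttm) last_end
FROM ps_bat_timings_log
WHERE enddttm > SYSDATE-7
/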
Compilations, Execution and ReUse
In this example, the number of compilations is equal to the number of executions because ReUseStatement is not set.

                          PeopleSoft Application Engine Timings
(All timings in seconds)

                               C o m p i l e    E x e c u t e    F e t c h        Total
SQL Statement                    Count     Time   Count     Time   Count     Time     Time
------------------------------ ------- -------- ------- -------- ------- -------- --------
99XxxXxx.Step02.S                 8453      2.8    8453    685.6       0      0.0    688.4
...

With ReUseStatement enabled, there is now only a single compilation, and most of the time is saved in execution, not compilation.

                               C o m p i l e    E x e c u t e    F e t c h        Total
SQL Statement                    Count     Time   Count     Time   Count     Time     Time
------------------------------ ------- -------- ------- -------- ------- -------- --------
99XxxXxx.Step02.S                    1      0.0    8453    342.3       0      0.0    342.3
...

So we can use the batch timings to identify steps where ReUseStatement is not set because they have as many compilations as executions, and then we can profile the top statements.
Profile Compilations
This query produces a simple profile of batch timings for statements.
  • In sub-query x it extracts batch timings for statements with more than one compilation in processes that ended in the last 7 days.
  • There is a long-standing bug in batch timings where negative timings can be returned when the clock that returns milliseconds recycles back to zero every 2^16 milliseconds (roughly every 65.5 seconds).  Sub-query y calculates an adjustment that is applied in sub-query z if the timing is negative.
  • Arbitrarily, I am only looking at statements with more than a total of 10000 compilations.

REM ReUseCand.sql
REM (c)Go-Faster Consultancy Ltd. 2014
COLUMN detail_id FORMAT a32
COLUMN step_time FORMAT 999990 HEADING 'AE|Step|Secs'
COLUMN compile_count HEADING 'AE|Compile|Count'
COLUMN execute_count HEADING 'AE|Execute|Count'
COLUMN processes HEADING 'Num|Process|Instances'
COLUMN process_name HEADING 'Process|Name'
SPOOL ReUseCand
WITH x AS (
SELECT l.process_instance, l.process_name
, l.time_elapsed/1000 time_elapsed
, l.enddttm-l.begindttm diffdttm
, d.bat_program_name||'.'||d.detail_id detail_id
, d.compile_count, d.compile_time/1000 compile_time
, d.execute_time/1000 execute_time
FROM ps_bat_Timings_dtl d
, ps_bat_timings_log l
WHERE d.process_instance = l.process_instance
AND d.compile_count = d.execute_count
AND d.compile_count > 1
AND l.enddttm > SYSDATE-7
), y as (
SELECT x.*
, GREATEST(0,60*(60*(24*EXTRACT(day FROM diffdttm)
+EXTRACT(hour FROM diffdttm))
+EXTRACT(minute FROM diffdttm))
+EXTRACT(second FROM diffdttm)-x.time_elapsed) delta
FROM x)
, z as (
SELECT process_instance, process_name, detail_id
, CASE WHEN time_elapsed < 0 THEN time_elapsed+delta
ELSE time_elapsed END time_elapsed
, compile_count
, CASE WHEN compile_time < 0 THEN compile_time+delta
ELSE compile_time END AS compile_time
, CASE WHEN execute_time < 0 THEN execute_time+delta
ELSE execute_time END AS execute_time
FROM y
), a as (
SELECT process_name, detail_id
, SUM(compile_time+execute_time) step_time
, SUM(compile_count) compile_count
, COUNT(DISTINCT process_instance) processes
FROM z
GROUP BY process_name, detail_id)
SELECT * FROM a
WHERE compile_count >= 10000
ORDER BY step_time DESC
/
SPOOL OFF

So now I have a list of steps with lots of compilations.  I know how long they took, but I don't know how much time I might save by enabling ReUseStatement. That will save some time in Application Engine, but it will also cut down database parse time.  So now I need to determine the parse time from the ASH data.

Process                                         Step    Compile    Process
Name         DETAIL_ID                          Secs      Count  Instances
------------ -------------------------------- ------ ---------- ----------
AP_PSTVCHR   AP_PSTCOMMON.ACCT_UPD.Step02.S    12001      40704         10
AP_VCHRBLD   APVEDTMOVE.UPDQTY03.Step03.S       4313      49536         28
FS_VATUPDFS  FS_VATUPDFS.Seq0-b.DJ300-2.S       4057     203704          3
AP_VCHRBLD   APVEDTMOVE.UPDQTY03.Step02.S       2799      49536         28
PC_BI_TO_PC  PC_BI_TO_PC.UPD_PRBI.UPDATE.S      1974      23132         10
FS_VATUPDFS  FS_VATUPDFS.Seq0-a.X0001.D         1960      37368          3
GL_JEDIT_0   FS_CEDT_ECFS.iTSELog.iTSELog.S     1628      13104        519
AP_APY2015   AP_APY2015.V_CREATE.Step14.H       1192      11318         19
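Before bringing ASH into the profile, a minimal sketch illustrates the principle: the IN_HARD_PARSE flag in the ASH repository marks samples taken during hard parse, and each sample in DBA_HIST_ACTIVE_SESS_HISTORY represents approximately 10 seconds of DB time:

SELECT 10*COUNT(*) hard_parse_secs
FROM dba_hist_active_sess_history
WHERE in_hard_parse = 'Y'
AND sample_time >= SYSDATE-7
/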

This query is based on the previous one, but includes scalar queries on the ASH data for each step.
  • WARNING: This query can run for a long period because it has to scan ASH data for each entry in BAT_TIMINGS_DTL.
  • This time, in sub-query x, I am looking at a rolling 7-day period up to the last whole hour, because by then the ASH data will have been flushed to the ASH repository.
  • In sub-query y there are two scalar queries that retrieve the DB time spent on hard parse, and all DB time for each batch timing entry.  These queries count 10 seconds for each distinct sample ID to estimate elapsed time rather than total DB time.
  • The query does not group by process name because one step can be called from many Application Engine programs and I want to aggregate the total time across all of them.

REM ReUseCandASH.sql
REM (c)Go-Faster Consultancy Ltd. 2014
COLUMN processes HEADING 'Num|Process|Instances'
COLUMN process_name HEADING 'Process|Name'
COLUMN detail_id FORMAT a32
COLUMN step_time HEADING 'AE|Step|Secs' FORMAT 999990
COLUMN compile_count HEADING 'AE|Compile|Count'
COLUMN execute_count HEADING 'AE|Execute|Count'
COLUMN hard_parse_secs HEADING 'Hard|Parse|Secs' FORMAT 99990
COLUMN ash_secs HEADING 'DB|Time' FORMAT 99990
SPOOL ReUseCandASH
WITH x AS (
SELECT l.process_instance, l.process_name
, l.time_elapsed/1000 time_elapsed
, l.begindttm, l.enddttm
, l.enddttm-l.begindttm diffdttm
, d.bat_program_name||'.'||d.detail_id detail_id
, d.compile_count, d.compile_time/1000 compile_time
, d.execute_time/1000 execute_time
FROM ps_bat_timings_dtl d
, ps_bat_timings_log l
WHERE d.process_instance = l.process_instance
AND d.compile_count = d.execute_count
AND d.compile_count > 1
AND l.enddttm >= TRUNC(SYSDATE-7,'HH24')
AND l.enddttm < TRUNC(SYSDATE,'HH24')
), y as (
SELECT x.*
, GREATEST(0,60*(60*(24*EXTRACT(day FROM diffdttm)
+EXTRACT(hour FROM diffdttm))
+EXTRACT(minute FROM diffdttm))
+EXTRACT(second FROM diffdttm)-x.time_Elapsed) delta
FROM x)
, z as (
SELECT process_instance, process_name, detail_id
, CASE WHEN time_elapsed < 0 THEN time_elapsed+delta
ELSE time_elapsed END AS time_elapsed
, compile_count
, CASE WHEN compile_time < 0 THEN compile_time+delta
ELSE compile_time END AS compile_time
, CASE WHEN execute_time < 0 THEN execute_time+delta
ELSE execute_time END AS execute_time
, (
SELECT 10*COUNT(DISTINCT h.sample_id)
FROM psprcsque q
, dba_hist_snapshot x
, dba_hist_active_sess_history h
WHERE q.prcsinstance = y.process_instance
AND x.begin_interval_time <= y.enddttm
AND X.END_INTERVAL_TIME >= y.begindttm
AND h.sample_time between y.begindttm and y.enddttm
AND h.SNAP_id = x.SNAP_id
AND h.dbid = x.dbid
AND h.instance_number = x.instance_number
AND h.module = 'PSAE.'|| y.process_name||'.'||q.sessionidnum
AND h.action = y.detail_id
AND h.in_hard_parse = 'Y'
) hard_parse_secs
, (
SELECT 10*COUNT(DISTINCT h.sample_id)
FROM psprcsque q
, dba_hist_snapshot x
, dba_hist_active_sess_history h
WHERE q.prcsinstance = y.process_instance
AND x.begin_interval_time <= y.enddttm
AND X.END_INTERVAL_TIME >= y.begindttm
AND h.sample_time between y.begindttm and y.enddttm
AND h.SNAP_id = X.SNAP_id
AND h.dbid = x.dbid
AND h.instance_number = x.instance_number
AND h.module = 'PSAE.'|| y.process_name||'.'||q.sessionidnum
AND h.action = y.detail_id
) ash_secs
FROM y
), a AS (
SELECT /*process_name ,*/ detail_id
, SUM(compile_time+execute_time) step_time
, SUM(compile_count) compile_count
, COUNT(DISTINCT process_instance) processes
, SUM(hard_parse_secs) hard_parse_secs
, SUM(ash_secs) ash_secs
FROM z
GROUP BY /*process_name,*/ detail_id)
SELECT a.*
FROM a
WHERE compile_count >= 10000
ORDER BY step_time DESC
/
spool off

Now we can also see how much time the database is spending on hard parse on each step, and it is this time that is likely to be saved by enabling ReUseStatement.
However, before we can enable the ReUseStatement attribute we must first manually inspect each step in Application Designer and determine whether doing so would break anything.  The Comment column in this profile was added manually as I did that.  Some statements I can change, some I have to accept the overhead.

                                   Step    Compile    Process      Parse         DB
DETAIL_ID                          Secs      Count  Instances       Secs       Time Comment
-------------------------------- ------ ---------- ---------- ---------- ---------- -------------------------------------
AP_PSTCOMMON.ACCT_UPD.Step02.S    12001      40704         10      11820      11920 Set ReUseStatement
FS_CEDT_ECMB.4EditCDT.uValCDT.S    5531      10289        679        620       5870 Dynamic SQL, can't set ReUseStatement
APVEDTMOVE.UPDQTY03.Step03.S       4306      49471         27       4020       4100 Set ReUseStatement
FS_VATUPDFS.Seq0-b.DJ300-2.S       4057     203704          3       3150       3860 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECFS.iTSELog.iTSELog.S     3332      19073        716       2130       3520 Dynamic SQL, can't set ReUseStatement
APVEDTMOVE.UPDQTY03.Step02.S       2796      49471         27       2730       2820 Set ReUseStatement
PC_BI_TO_PC.UPD_PRBI.UPDATE.S      1974      23132         10        230       1920 Set ReUseStatement
FS_VATUPDFS.Seq0-a.X0001.D         1960      37368          3          0          0 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECMB.4uAnchCT.uAnchCDT.S   1319      10289        679        510       1290 Dynamic SQL, can't set ReUseStatement
AP_APY2015.V_CREATE.Step14.H       1169      11094         19          0          0 Set ReUseStatement
GL_JETSE.GA100.CHKEDT.S            1121      15776        569        860        930 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECMB.iTSELog.iTSELog.S      988      10289        679        450        990 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECMB.uMarkVal.uMarkVal.S    711      10289        679         50        670 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECMB.uMarkInv.uMarkInv.S    668      10289        679         40        790 Dynamic SQL, can't set ReUseStatement
  • Due to a bug in the instrumentation of Application Engine, the session's action attribute is not set for Do Select (type D) and Do When (type H) steps.  ASH data cannot therefore be matched for them.
  • More DB Time is reported for FS_CEDT_ECMB.uMarkInv.uMarkInv.S than is reported by batch timings.  This is a consequence of ASH sampling, where we count 10 seconds for each sample.
Conclusion
Setting ReUseStatement is very simple because it doesn't involve changing SQL, but there are lots of places where it could be set.  This technique picks out the relatively few places where doing so could potentially have a significant effect.
    ©David Kurtz, Go-Faster Consultancy Ltd.

    Who is using this index?

    Wed, 2014-09-03 04:30
    Or, to put it another way, I want to change or drop this index, who and what will I impact?
The Challenge
The problem that I am going to outline is certainly not exclusive to PeopleSoft, but I am going to illustrate it with examples from PeopleSoft. I often find tables with far more indexes than are good for them.
    • The Application Designer tool makes it very easy for developers to add indexes to tables. Sometimes, too easy!
    • Sometimes, DBAs are too quick to unquestioningly follow the advice of the Oracle tuning advisor to add indexes.
    Recently, I have been working on 3 different PeopleSoft Financials systems where I have found major tables with a host of indexes.

    There are several concerns:
    • Indexes are maintained during data modification. The more indexes you have, the greater the overhead. 
    • The more indexes you have, particularly if they lead on the same columns, the more likely Oracle is to use the wrong one, resulting in poorer performance.
    • There is of course also a space overhead for each index, but this is often of less concern. 
If you can get rid of an index, Oracle no longer has to store, maintain or use it.
In some cases, I have wanted to remove unnecessary indexes, and in others to adjust indexes. However, this immediately raises the question of where these indexes are used, and who will be impacted by the change. Naturally, I turn to the Active Session History (ASH) to help me find the answers.

Index Maintenance Overhead during DML
ASH reports the object number, file number, block number and (from 11g) row number within the block being accessed by physical file operations. However, the values reported in v$active_session_history (and the corresponding DBA_HIST view) are not reliable for other events, because they are merely left over from the previous file event that reported them. So, we can profile the amount of time spent on physical I/O on different tables and indexes, but not other forms of DB time, such as CPU time spent accessing blocks in the buffer cache.
    Let me take an extreme example from PeopleSoft Global Payroll. The table PS_GP_RSLT_ACUM is one of the principal result tables. It has only a single unique index (with the same name). The table is populated with the simplest of insert statements.
    INSERT /*GPPRDMGR_I_ACUM*/ INTO PS_GP_RSLT_ACUM
    (CAL_RUN_ID ,EMPLID ,EMPL_RCD ,GP_PAYGROUP ,CAL_ID ,RSLT_SEG_NUM ,PIN_NUM ,EMPL_RCD_ACUM ,ACM_FROM_DT ,ACM_THRU_DT ,USER_KEY1 ,USER_KEY2 ,USER_KEY3 ,USER_KEY4 ,USER_KEY5 ,USER_KEY6 ,SLICE_BGN_DT ,SLICE_END_DT ,COUNTRY ,ACM_TYPE ,ACM_PRD_OPTN ,CALC_RSLT_VAL ,CALC_VAL ,USER_ADJ_VAL ,PIN_PARENT_NUM ,CORR_RTO_IND ,ORIG_CAL_RUN_ID ,SEQ_NUM8 ,VALID_IN_SEG_IND ,CALLED_IN_SEG_IND )
    VALUES
    (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30)
I can profile the ASH data for just this statement over the last week on a production system. Note that DBA_OBJECTS and DBA_DATA_FILES are outer-joined to the ASH data and only matched for events like 'db file%'.
    SELECT o.object_type, o.object_name
    , f.tablespace_name, NVL(h.event,'CPU+CPU Wait') event
    , SUM(10) ash_Secs
    FROM dba_hist_Active_sess_history h
    LEFT OUTER JOIN dba_objects o
    ON o.object_id = h.current_obj#
    AND h.event like 'db file%'
    LEFT OUTER JOIN dba_data_files f
    ON f.file_id = h.current_file#
    AND h.event like 'db file%'
    WHERE h.sql_id = '4ru0618dswz3y'
    AND h.sample_time >= sysdate-7
    GROUP BY o.object_type, o.object_name, h.event, f.tablespace_name
    ORDER BY ash_secs DESC
    /
    A full payroll calculation inserts over 3 million rows on this particular system. The calculation is run incrementally several times per week during which old rows are deleted and newly recalculated rows inserted.  Looking at just this insert statement:
    • 30% of the time is spent on CPU operations, which we cannot profile further with ASH.
    • 38% of the time is spent reading from the table and index, yet this is a simple INSERT … VALUES statement.
    OBJECT_TYPE         OBJECT_NAME        TABLESPACE_NAME EVENT                      ASH_SECS
    ------------------- ------------------ --------------- ------------------------ ----------
                                                        CPU+CPU Wait                   1040
                                        UNDOTBS1        db file sequential read         900
INDEX SUBPARTITION  PS_GP_RSLT_ACUM     GP201408IDX     db file sequential read         750
TABLE SUBPARTITION  PS_GP_RSLT_ACUM     GP201408TAB     db file sequential read         550
                                                        gc current grant 2-way           70
                                                        cursor: pin S wait on X          60
                                                        db file sequential read          10
                                                        buffer exterminate               10
                                                        row cache lock                   10
                                                                                 ----------
                                                                                       3400
    More time is spent reading the index than the table.  That is not a surprise.  When you insert a row into a table, you also insert it into the index. Rows in index leaf blocks are ordered by the key columns, and the new entry must go into the right place, so you have to read down the index from the root block, through the branch blocks, to find the correct leaf block for the new entry.
[Digression: Counter-intuitively, index compression can improve DML performance. It does for this index.  The overhead of the compression processing can be outweighed by the saving in physical I/O.  It depends.]
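For reference, key compression is enabled by rebuilding the index with a compression prefix. A minimal sketch, assuming the first two key columns are worth compressing (note that a partitioned index cannot be rebuilt in a single statement; each partition or subpartition must be rebuilt individually):

REM Sketch only: rebuild a non-partitioned index with compression on the
REM first 2 key columns.
ALTER INDEX ps_gp_rslt_acum REBUILD COMPRESS 2
/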
Profile Physical I/O by Object
I can twist this query around and profile DB time by object for 'db file%' events.
    SELECT o.object_type, o.object_name, sum(10) ash_secs
    FROM dba_hist_active_sess_history h
    , dba_objects o
    WHERE o.object_id = h.current_obj#
    AND h.event LIKE 'db file%'
    AND h.sample_time > sysdate-7
    GROUP BY o.object_type, o.object_name
ORDER BY ash_secs DESC
/
    Now I can see upon which objects the most time is spent on physical I/O.
    OBJECT_TYP OBJECT_NAME          ASH_SECS
    ---------- ------------------ ----------
TABLE      PS_ITEM                101130
INDEX      PS_WS_ITEM              98750
TABLE      PS_PROJ_RESOURCE        97410
TABLE      PS_BI_LINE              85040
INDEX      PSAPSAPMSGSUBCON        75070
TABLE      PS_BI_HDR               37230
TABLE      PS_RS_ASSIGNMENT        29460
INDEX      PS_PSAPMSGPUBHDR        23230
INDEX      PS_BI_ACCT_ENTRY        21490
TABLE      PS_VOUCHER              21330
TABLE      PS_VCHR_ACCTG_LINE      21250
TABLE      PS_BI_ACCT_ENTRY        18860

                              ----------
sum                              1382680
This is a worthwhile exercise; it shows the sources of physical I/O in an application.

However, if you want to find where an index is used, this profile includes time where the index is either used in a query or maintained by DML. If I am interested in looking for places where changing or deleting an index could have an impact, then I am only interested in SQL query activity; ASH samples that relate to index maintenance are false positives. Yet I cannot simply eliminate ASH samples where the SQL_OPNAME is not SELECT, because the index may be used in a query within the DML statement.
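To illustrate the point, from 11.2 the ASH repository also records the statement type in SQL_OPNAME, so you can at least see the mix of statement types spending physical I/O time on an object (a sketch; the index name here is just an example):

SELECT h.sql_opname, SUM(10) ash_secs
FROM dba_hist_active_sess_history h
, dba_objects o
WHERE o.object_id = h.current_obj#
AND o.object_name = 'PS_WS_ITEM'
AND h.event LIKE 'db file%'
AND h.sample_time > SYSDATE-7
GROUP BY h.sql_opname
ORDER BY ash_secs DESC
/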

Another problem with this method is that it matches ASH data to objects by object ID. If someone has rebuilt an index, then its object number will have changed.

    A different approach is required.
Index Use from SQL Plans Captured by AWR
During an AWR snapshot, the top-n SQL statements by each SQL criterion in the AWR report (Elapsed Time, CPU Time, Parse Calls, Shareable Memory, Version Count) are captured (see dbms_workload_repository).  The SQL plans are exposed by the view DBA_HIST_SQL_PLAN.

On PeopleSoft systems, I generally recommend decreasing the snapshot interval from the default of 60 minutes to 15. The main reason is that SQL gets aged out of the library cache very quickly in PeopleSoft systems. They generate lots of dynamic code, often with literal values rather than bind variables. Cursor sharing is not recommended for PeopleSoft, so different literal values result in different SQL_IDs. The dynamic code also results in different SQL_IDs even with cursor sharing. Therefore, increasing the snapshot frequency means that more SQL statements will be captured. This will increase the total volume of the AWR repository simply because there are more snapshots. However, the overall volume of ASH data captured does not change; it just gets copied to the repository earlier.
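The snapshot interval is changed with the delivered AWR package; a minimal sketch (the interval parameter is specified in minutes):

EXECUTE dbms_workload_repository.modify_snapshot_settings(interval => 15)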

    On DBA_HIST_SQL_PLAN the object ID, owner, type and name are recorded, so I can find the plans which referenced a particular object. I am going to carry on with the example from a PeopleSoft Financials system, and look at indexes on the PS_PROJ_RESOURCE table.

    These are some of the indexes on PS_PROJ_RESOURCE. We have 4 indexes that all lead on PROCESS_INSTANCE. I suspect that not all are essential, but I need to work out what is using them, and which one I should retain.
                          Col
INDEX_NAME                Pos COLUMN_NAME          COLUMN_EXPRESSION
------------------ ---------- -------------------- ----------------------------------
PSJPROJ_RESOURCE            1 PROCESS_INSTANCE
                            2 BUSINESS_UNIT_GL
                            3 BUSINESS_UNIT
                            4 PROJECT_ID
                            5 ACTIVITY_ID
                            6 CUST_ID

PSLPROJ_RESOURCE            1 PROCESS_INSTANCE
                            2 EMPLID
                            3 EMPL_RCD
                            4 TRANS_DT

PSMPROJ_RESOURCE            1 PROCESS_INSTANCE
                            2 BUSINESS_UNIT
                            3 PROJECT_ID
                            4 ACTIVITY_ID
                            5 RESOURCE_ID

PSNPROJ_RESOURCE            1 PROCESS_INSTANCE
                            2 BUSINESS_UNIT
                            3 TIME_RPTG_CD

I find it easier to extract the ASH data to my own working storage table. For each index on PS_PROJ_RESOURCE, I am going to extract a distinct list of plan hash values. I will then extract all ASH data for those plans. Note that I have not joined on SQL_ID to DBA_HIST_SQL_PLAN, because different SQL_IDs can produce the same execution plan. The plan is equally valid for all SQL_IDs that produce it, not just the one where the SQL_ID also matches.
    DROP TABLE my_ash purge
    /
    CREATE TABLE my_ash COMPRESS AS
    WITH p AS (
    SELECT DISTINCT p.plan_hash_value, p.object#, p.object_owner, p.object_type, p.object_name
    FROM dba_hist_sql_plan p
    WHERE p.object_name like 'PS_PROJ_RESOURCE'
    AND p.object_type LIKE 'INDEX%'
    AND p.object_owner = 'SYSADM'
    )
    SELECT p.object# object_id, p.object_owner, p.object_type, p.object_name
    , h.*
    FROM dba_hist_active_sess_history h
    , p
    WHERE h.sql_plan_hash_value = p.plan_hash_value
    /
I am fortunate that PeopleSoft is a well instrumented application. Module and Action are set to fairly sensible values that tell me whereabouts in the application each ASH sample originated. In the following query, I have omitted any ASH data generated by SQL*Plus, Toad, or SQL Developer, and also anything generated by Oracle processes, to prevent statistics-collection jobs being included.
    Set pages 999 lines 150 trimspool on
    break on object_name skip 1
    compute sum of ash_secs on object_name
    column ash_secs heading 'ASH|Secs' format 9999999
    column module format a20
    column action format a32
    column object_name format a18
    column max_sample_time format a19 heading 'Last|Sample'
    column sql_plans heading 'SQL|Plans' format 9999
    column sql_execs heading 'SQL|Execs' format 99999
    WITH h AS (
    SELECT object_name
    , CASE WHEN h.module IS NULL THEN REGEXP_SUBSTR(h.program,'[^.@]+',1,1)
    WHEN h.module LIKE 'PSAE.%' THEN REGEXP_SUBSTR(h.module,'[^.]+',1,2)
    ELSE REGEXP_SUBSTR(h.program,'[^.@]+',1,1)
    END as module
    , CASE WHEN h.action LIKE 'PI=%' THEN NULL
    ELSE h.action
    END as action
    , CAST(sample_time AS DATE) sample_time
    , sql_id, sql_plan_hash_value, sql_exec_id
    FROM my_ash h
    )
    SELECT object_name, module, action
    , sum(10) ash_secs
    , COUNT(DISTINCT sql_plan_hash_value) sql_plans
    , COUNT(DISTINCT sql_id||sql_plan_hash_value||sql_exec_id) sql_execs
    , MAX(sample_time) max_sample_time
    FROM h
    WHERE NOT LOWER(module) IN('oracle','toad','sqlplus','sqlplusw')
    AND NOT LOWER(module) LIKE 'sql%'
    GROUP BY object_name, module, action
    ORDER BY SUBSTR(object_name,4), object_name, ash_Secs desc
    /
    Spool off
    I now have a profile of how much each index is used. In this particular case I found something using every index.  It is possible that you will not find anything that uses some indexes.
                                                                            ASH   SQL    SQL Last
OBJECT_NAME        MODULE               ACTION                             Secs Plans  Execs Sample
------------------ -------------------- -------------------------------- ------- ----- ------ -------------------
PSJPROJ_RESOURCE   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Step24.S        7300     1     66 06:32:57 27/08/2014
                   PC_PRICING           GFCPBINT_AE.CallmeA.Step24.S          40     1      2 08:38:57 22/08/2014
******************                                                       -------
sum                                                                         7340

PSLPROJ_RESOURCE   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Step28.S        1220     1     53 06:33:17 27/08/2014
******************                                                       -------
sum                                                                         1220

PSMPROJ_RESOURCE   PC_TL_TO_PC          GFCPBINT_AE.XxBiEDM.Step07.S          60     2      6 18:35:18 20/08/2014
******************                                                       -------
sum                                                                           60

PSNPROJ_RESOURCE   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Step26.S        6720     1     49 18:53:58 26/08/2014
                   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Step30.S        3460     1     60 06:33:27 27/08/2014
                   GFCOA_CMSN           GFCOA_CMSN.01INIT.Step01.S          2660     1     47 19:19:40 26/08/2014
                   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Step06.S        1800     1     52 18:53:28 26/08/2014
                   PC_TL_TO_PC          GFCPBINT_AE.CallmeG.Step01.S        1740     1     61 06:34:17 27/08/2014
                   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Step02.S        1680     1     24 18:53:18 26/08/2014
                   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Step10.S        1460     1     33 17:26:26 22/08/2014
                   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Step08.S         920     1     26 17:26:16 22/08/2014
                   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Step36.S         460     1     18 18:26:38 20/08/2014
                   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Step09.S         420     1     16 06:33:07 27/08/2014
                   PC_PRICING           GFCPBINT_AE.CallmeG.Step01.S         200     1     10 08:09:55 22/08/2014
                   PC_AP_TO_PC          GFCPBINT_AE.CallmeH.Step00A.S        170     1     17 21:53:26 21/08/2014
                   PC_PRICING           GFCPBINT_AE.CallmeA.Step36.S          20     1      1 08:02:46 05/08/2014
                   PC_PRICING           GFCPBINT_AE.CallmeA.Step30.S          20     1      1 13:42:48 04/08/2014
                   PC_PRICING           GFCPBINT_AE.CallmeA.Step06.S          20     1      1 15:58:35 28/07/2014
                   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Pseudo.S          20     1      1 19:45:11 06/08/2014
******************                                                       -------
sum                                                                        21770

The next stage is to look at individual SQL statements. This query looks for the SQL statements that use a particular index on PS_PROJ_RESOURCE. If I can't find the SQL that cost the most time, then I just choose another SQL with the same plan.
    • I have found that sometimes a plan is captured by AWR, but the SQL statement is not. Personally, I think that is a bug. Working around it has made the following query quite complicated.
    Break on object_name skip 1 
    column ash_secs heading 'ASH|Secs' format 9999999
    Set long 50000
    Column cmd Format a200
    Spool dmk

    WITH h AS (
    SELECT h.object_name
    , CASE WHEN h.module IS NULL THEN REGEXP_SUBSTR(h.program,'[^.@]+',1,1)
    WHEN h.module LIKE 'PSAE.%' THEN REGEXP_SUBSTR(h.module,'[^.]+',1,2)
    ELSE REGEXP_SUBSTR(h.program,'[^.@]+',1,1)
    END as module
    , CASE WHEN h.action LIKE 'PI=%' THEN NULL
    ELSE h.action
    END as action
    , h.sql_id, h.sql_plan_hash_value
, t.command_type --not null if plan and statement captured
    FROM my_ash h
    LEFT OUTER JOIN (
    SELECT t1.*
    FROM dba_hist_sqltext t1
    , dba_hist_sql_plan p1
    WHERE t1.sql_id = p1.sql_id
    AND p1.id = 1
    ) t
    ON t.sql_id = h.sql_id
    AND t.dbid = h.dbid
    WHERE h.object_name IN('PSMPROJ_RESOURCE')
    AND h.object_Type = 'INDEX'
    AND h.object_owner = 'SYSADM'
    And NOT LOWER(module) IN('oracle','toad','sqlplus','sqlplusw')
    AND NOT LOWER(module) LIKE 'sql%'
    ), x AS ( --aggregate DB time by object and statement
    SELECT object_name, sql_id, sql_plan_hash_value
    , sum(10) ash_secs
    , 10*COUNT(command_type) sql_secs --DB time for captured statements only
    FROM h
    WHERE NOT LOWER(module) IN('oracle','toad','sqlplus','sqlplusw')
    AND NOT LOWER(module) LIKE 'sql%'
    GROUP BY object_name, sql_id, sql_plan_hash_value
    ), y AS ( --rank DB time per object and plan
    SELECT object_name, sql_id, sql_plan_hash_value
    , ash_secs
    , SUM(ash_secs) OVER (PARTITION BY object_name, sql_plan_hash_value) plan_ash_secs
    , row_number() OVER (PARTITION BY object_name, sql_plan_hash_value ORDER BY sql_Secs DESC) ranking
    FROM x
    ), z AS (
    SELECT object_name
    , CASE WHEN t.sql_text IS NOT NULL THEN y.sql_id
    ELSE (SELECT t1.sql_id
    FROM dba_hist_sqltext t1
    , dba_hist_sql_plan p1
    WHERE t1.sql_id = p1.sql_id
    AND p1.plan_hash_value = y.sql_plan_hash_value
    AND rownum = 1) --if still cannot find statement just pick any one
    END AS sql_id
    , y.sql_plan_hash_value, y.plan_ash_secs
    , CASE WHEN t.sql_text IS NOT NULL THEN t.sql_text
    ELSE (SELECT t1.sql_Text
    FROM dba_hist_sqltext t1
    , dba_hist_sql_plan p1
    WHERE t1.sql_id = p1.sql_id
    AND p1.plan_hash_value = y.sql_plan_hash_value
    AND rownum = 1) --if still cannot find statement just pick any one
    END AS sql_text
    from y
    left outer join dba_hist_sqltext t
    on t.sql_id = y.sql_id
    WHERE ranking = 1 --captured statement with most time
    )
    SELECT *
    --'SELECT * FROM table(dbms_xplan.display_awr('''||sql_id||''','||sql_plan_hash_value||',NULL,''ADVANCED''))/*'||object_name||':'||plan_ash_Secs||'*/;' cmd
    FROM z
    ORDER BY object_name, plan_ash_secs DESC
    /
    Spool off
    So now I can see the individual SQL statements.
PSJPROJ_RESOURCE   f02k23bqj0xc4          3393167302          7340 UPDATE PS_PROJ_RESOURCE C SET (C.Operating_Unit, C.CHARTFIELD1, C.PRODUCT, C.CLA
                                                                   SS_FLD, C.CHARTFIELD2, C.VENDOR_ID, C.contract_num, C.contract_line_num, …

PSLPROJ_RESOURCE   2fz0gcb2774y0           821236869          1220 UPDATE ps_proj_resource p SET p.deptid = NVL (( SELECT j.deptid FROM ps_job j WH
                                                                   ERE j.emplid = p.emplid AND j.empl_rcd = p.empl_rcd AND j.effdt = ( SELECT MAX (…

PSMPROJ_RESOURCE   96cdkb7jyq863           338292674            50 UPDATE PS_GFCBI_EDM_TA04 a SET a.GFCni_amount = ( SELECT x.resource_amount FROM
                                                                   PS_PROJ_RESOURCE x WHERE x.process_instance = …

                   1kq9rfy8sb8d4          4135884683            10 UPDATE PS_GFCBI_EDM_TA04 a SET a.GFCni_amount = ( SELECT x.resource_amount FROM
                                                                   PS_PROJ_RESOURCE x WHERE x.process_instance = …

PSNPROJ_RESOURCE   ga2x2u4jw9p0x          2282068749          6760 UPDATE PS_PROJ_RESOURCE P SET (P.RESOURCE_TYPE, P.RESOURCE_SUB_CAT) = …

                   9z5qsq6wrr7zp          3665912247          3500 UPDATE PS_PROJ_RESOURCE P SET P.TIME_SHEET_ID = …
    If I replace the last select clause with the commented line, then I can generate the commands to extract the statement and plan from the AWR repository.
    SELECT * FROM table(dbms_xplan.display_awr('45ggt0yfrh5qp',3393167302,NULL,'ADVANCED'))/*PSJPROJ_RESOURCE:7340*/;

    SELECT * FROM table(dbms_xplan.display_awr('8ntxj3694r6kg',821236869,NULL,'ADVANCED'))/*PSLPROJ_RESOURCE:1220*/;

    SELECT * FROM table(dbms_xplan.display_awr('96cdkb7jyq863',338292674,NULL,'ADVANCED'))/*PSMPROJ_RESOURCE:50*/;

    SELECT * FROM table(dbms_xplan.display_awr('1kq9rfy8sb8d4',4135884683,NULL,'ADVANCED'))/*PSMPROJ_RESOURCE:10*/;

    SELECT * FROM table(dbms_xplan.display_awr('ga2x2u4jw9p0x',2282068749,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:6760*/;
    SELECT * FROM table(dbms_xplan.display_awr('9z5qsq6wrr7zp',3665912247,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:3500*/;
    SELECT * FROM table(dbms_xplan.display_awr('b28btd6k3x8jt',1288409804,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:3060*/;
    SELECT * FROM table(dbms_xplan.display_awr('avs70c19khxmw',2276811880,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:2660*/;
    SELECT * FROM table(dbms_xplan.display_awr('b78qhsch85g4a',1019599680,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:1960*/;
    SELECT * FROM table(dbms_xplan.display_awr('65kq2v1ubybps',3138703971,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:1820*/;
    SELECT * FROM table(dbms_xplan.display_awr('1dj17ra70c801',1175874548,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:1460*/;
    SELECT * FROM table(dbms_xplan.display_awr('3w71v896s7m5d',3207074729,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:500*/;
    SELECT * FROM table(dbms_xplan.display_awr('35mz5bw7p5ubw',2447377432,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:170*/;
    Ultimately, I have needed to look through the SQL plans that use an index to decide whether I need to keep that index or decide whether the statement would perform adequately using another index. In this case, on this particular system, I think the index PSMPROJ_RESOURCE would be adequate for this statement, and I would consider dropping PSLPROJ_RESOURCE.
SELECT * FROM table(dbms_xplan.display_awr('8ntxj3694r6kg',821236869,NULL,'ADVANCED'))/*PSLPROJ_RESOURCE:1220*/;
    --------------------
    UPDATE ps_proj_resource p SET p.deptid = NVL (( SELECT j.deptid FROM
    ps_job j WHERE j.emplid = p.emplid AND j.empl_rcd = p.empl_rcd AND
    j.effdt = ( SELECT MAX (j1.effdt) FROM ps_job j1 WHERE j1.emplid =
    j.emplid AND j1.empl_rcd = j.empl_rcd AND j1.effdt <= p.trans_dt) AND
    j.effseq = ( SELECT MAX (j2.effseq) FROM ps_job j2 WHERE j2.emplid =
    j.emplid AND j2.empl_rcd = j.empl_rcd AND j2.effdt = j.effdt)),
    p.deptid )
    WHERE p.process_instance = …
    AND EXISTS ( SELECT
    j.deptid FROM ps_job j WHERE j.emplid = p.emplid AND j.empl_rcd =
    p.empl_rcd AND j.effdt = ( SELECT MAX (j1.effdt) FROM ps_job j1 WHERE
    j1.emplid = j.emplid AND j1.empl_rcd = j.empl_rcd AND j1.effdt <=
    p.trans_dt) AND j.effseq = ( SELECT MAX (j2.effseq) FROM ps_job j2
    WHERE j2.emplid = j.emplid AND j2.empl_rcd = j.empl_rcd AND j2.effdt =
    j.effdt))

    Plan hash value: 821236869

-----------------------------------------------------------------------------------------
| Id | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|  0 | UPDATE STATEMENT   |                  |       |       | 63104 (100)|          |
|  1 | UPDATE             | PS_PROJ_RESOURCE |       |       |            |          |
|  2 | INDEX RANGE SCAN   | PSLPROJ_RESOURCE |   365 | 11315 |    22   (0)| 00:00:01 |
|  3 | INDEX RANGE SCAN   | PSAJOB           |     1 |    23 |     3   (0)| 00:00:01 |
|  4 | SORT AGGREGATE     |                  |     1 |    20 |            |          |
|  5 | INDEX RANGE SCAN   | PSAJOB           |     1 |    20 |     3   (0)| 00:00:01 |
|  6 | SORT AGGREGATE     |                  |     1 |    23 |            |          |
|  7 | INDEX RANGE SCAN   | PSAJOB           |     1 |    23 |     3   (0)| 00:00:01 |
|  8 | INDEX RANGE SCAN   | PSAJOB           |     1 |    29 |     3   (0)| 00:00:01 |
|  9 | SORT AGGREGATE     |                  |     1 |    20 |            |          |
| 10 | INDEX RANGE SCAN   | PSAJOB           |     1 |    20 |     3   (0)| 00:00:01 |
| 11 | SORT AGGREGATE     |                  |     1 |    23 |            |          |
| 12 | INDEX RANGE SCAN   | PSAJOB           |     1 |    23 |     3   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

I carried on with the examination of SQL statements and execution plans to determine whether each index is really needed, or whether another index (or even no index at all) would do as well.  This decision also requires some background knowledge about the application. Eventually, I decided that I want to drop the J, L and N indexes on PROJ_RESOURCE and just keep M.
Limitations of Method
  • AWR does not capture all SQL statements, nor all SQL plans. First the SQL has to be in the library cache, and then it must be one of the top-n. A SQL statement that is efficient because it uses an appropriate index may not be captured, and so will not be detected by this approach. This might lead you erroneously to believe that the index could be dropped.
  • ASH data is purged after a period of time, by default 31 days. If an index is only used by a process that has not run within the retention period, then it will not be detected by this approach. This is another reason to retain ASH and AWR in a repository for a longer period. I have heard 400 days suggested, so that you have ASH for a year and a month (see the sketch after this list).
  • However, this also causes the SYSAUX tablespace to become very large, so I would suggest regularly moving the data to a separate database. I know one customer who has built a central AWR repository for all their production and test databases and automated the regular transfer of data. That repository has been of immense diagnostic value.
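For reference, AWR retention is adjusted with the same delivered package used for the snapshot interval; a minimal sketch of a 400-day retention (the parameter is specified in minutes):

EXECUTE dbms_workload_repository.modify_snapshot_settings(retention => 400*24*60)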
[Update] This analysis will not detect index use in support of constraint validation (PeopleSoft doesn't use database referential integrity constraints).  As Mark Farnham points out below, that may be a reason for retaining a particular index.

Getting Rid of Indexes
Obviously any index changes need to be tested carefully in all the places that reference the index, but on the other hand it is not viable to do a full regression test every time you want to change an index.
            Therefore, if all the testing is successful and you decide to go ahead and drop the index in production, you might prefer to make it invisible first for a while before actually dropping it. It is likely that the indexes you choose to examine are large and will take time to rebuild. An invisible index will not be used by the optimizer, but it will continue to be maintained during DML. If there are any unfortunate consequences, you can immediately make the index visible without having to rebuild it.
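A minimal sketch of that approach, using one of the indexes discussed above:

REM Make the index invisible to the optimizer while it continues to be maintained.
ALTER INDEX pslproj_resource INVISIBLE
/
REM If there are unfortunate consequences, reverse it immediately without a rebuild.
ALTER INDEX pslproj_resource VISIBLE
/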
              ©David Kurtz, Go-Faster Consultancy Ltd.

              To Hint or not to hint (Application Engine), that is the question

              Mon, 2014-08-25 12:36
Over the years Oracle has provided a number of plan stability technologies to control how SQL statements are executed.  At the risk of over-simplification, Outlines (deprecated in 11g), Profiles, Baselines and Patches work by injecting a set of hints into a SQL statement at parse time.  There is quite a lot of advice from Oracle to use these technologies to fix errant execution plans rather than hint the application.  I think it is generally good advice; however, there are times when this approach does not work well with PeopleSoft, and that is due to the behaviour and structure of PeopleSoft rather than the Oracle database.

It is possible to produce a SQL profile from a plan captured by AWR.  Part of the distribution for the SQLT Diagnostic Tool (Doc ID 215187.1) is a script called coe_xfr_sql_profile.sql, written by Carlos Sierra.
The only thing I would change in the delivered script (for use with PeopleSoft, and as suggested in a comment) is to create the profile with FORCE_MATCHING, so that similar statements with different literal values still match.
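For reference, the script is run in SQL*Plus with the SQL_ID and plan hash value of the plan to be reproduced, and it writes a second script that actually creates the profile (usage as documented in the script itself; the values here are taken from the example below):

SQL> START coe_xfr_sql_profile.sql 5gtxdqbx0d0c3 3552771247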
The Slings and Arrows of outrageous execution plans
Let's take an example of a statement (from the vanilla Financials product that has not been customised) that performed poorly because it didn't generate a good execution plan (although I have cut out most of the statement for readability).  Note that it references instance 5 of the PeopleTools temporary record CA_SUM_TAO.
              INSERT INTO PS_CA_SUM_RC_TAO5 (…) SELECT

              FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO5 B , PS_CA_SUM_IN_USE C WHERE

              B.PROCESS_INSTANCE = 51381955 AND C.IN_USE_FLAG = 'Y'

              Plan hash value: 2039212279
----------------------------------------------------------------------------------------------------
| Id | Operation                    | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|  0 | INSERT STATEMENT             |                  |       |       | 14424 (100)|          |
|  1 | LOAD TABLE CONVENTIONAL      |                  |       |       |            |          |
|  2 | NESTED LOOPS                 |                  |       |       |            |          |
|  3 | NESTED LOOPS                 |                  |     1 |   430 | 14424   (1)| 00:02:54 |
|  4 | NESTED LOOPS                 |                  |     1 |   318 | 14421   (1)| 00:02:54 |
|  5 | TABLE ACCESS FULL            | PS_CA_SUM_IN_USE |     1 |    85 | 14420   (1)| 00:02:54 |
|  6 | TABLE ACCESS BY INDEX ROWID  | PS_CA_SUM_TAO5   |     1 |   233 |     1   (0)| 00:00:01 |
|  7 | INDEX UNIQUE SCAN            | PS_CA_SUM_TAO5   |     1 |       |     0   (0)|          |
|  8 | INDEX RANGE SCAN             | PSACA_PR_SUMM    |     1 |       |     2   (0)| 00:00:01 |
|  9 | TABLE ACCESS BY INDEX ROWID  | PS_CA_PR_SUMM    |     1 |   112 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------
However, below is the plan we get on instance 4.  We get this plan because a profile was applied in the past, but now we are on a different non-shared instance of the temporary table, so the profile cannot match: we have the same problem again, on a different non-shared instance of the temporary record.  Different literal values, such as those for Process Instance, can be handled by FORCE_MATCHING, but not different tables.  This is a totally different SQL statement.
              SQL_ID 5gtxdqbx0d0c3
              --------------------
              INSERT INTO PS_CA_SUM_RC_TAO4 (…) SELECT

              FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO4 B , PS_CA_SUM_IN_USE C WHERE

              B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'

              Plan hash value: 3552771247

---------------------------------------------------------------------------------------------------
| Id | Operation                    | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|  0 | INSERT STATEMENT             |                  |       |       | 36361 (100)|          |
|  1 | LOAD TABLE CONVENTIONAL      |                  |       |       |            |          |
|  2 | HASH JOIN                    |                  |     1 |   430 | 36361   (3)| 00:07:17 |
|  3 | TABLE ACCESS FULL            | PS_CA_SUM_IN_USE |     1 |    85 | 14347   (1)| 00:02:53 |
|  4 | NESTED LOOPS                 |                  |       |       |            |          |
|  5 | NESTED LOOPS                 |                  |     1 |   345 | 22014   (3)| 00:04:25 |
|  6 | TABLE ACCESS FULL            | PS_CA_PR_SUMM    | 5268K |  562M | 21539   (1)| 00:04:19 |
|  7 | INDEX UNIQUE SCAN            | PS_CA_SUM_TAO4   |     1 |       |     0   (0)|          |
|  8 | TABLE ACCESS BY INDEX ROWID  | PS_CA_SUM_TAO4   |     1 |   233 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

              Note
              -----
              - SQL profile "coe_gn3n77gs6xj2a_3552771247" used for this statement
Of course, the statement on instance 4 had a profile because it was added as a short-term fix and then left in situ long-term.  It worked fine until a process errored, leaving that non-shared instance of the temporary record allocated to its process instance, so PeopleSoft allocated instance 5 on the next execution.
So we could just create another profile using the coe_xfr_sql_profile.sql script:
              SPO coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.log;
              SET ECHO ON TERM ON LIN 2000 TRIMS ON NUMF 99999999999999999999;
              REM
              REM $Header: 215187.1 coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.sql 11.4.1.4 2014/08/13 csierra $
              REM
              REM Copyright (c) 2000-2010, Oracle Corporation. All rights reserved.
              REM
              REM AUTHOR
              REM carlos.sierra@oracle.com
              REM
              REM SCRIPT
              REM coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.sql
              REM
              REM DESCRIPTION
              REM This script is generated by coe_xfr_sql_profile.sql
              REM It contains the SQL*Plus commands to create a custom
              REM SQL Profile for SQL_ID 5gtxdqbx0d0c3 based on plan hash
              REM value 3552771247.
              REM The custom SQL Profile to be created by this script
              REM will affect plans for SQL commands with signature
              REM matching the one for SQL Text below.
              REM Review SQL Text and adjust accordingly.
              REM
              REM PARAMETERS
              REM None.
              REM
              REM EXAMPLE
              REM SQL> START coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.sql;
              REM
              REM NOTES
              REM 1. Should be run as SYSTEM or SYSDBA.
              REM 2. User must have CREATE ANY SQL PROFILE privilege.
              REM 3. SOURCE and TARGET systems can be the same or similar.
              REM 4. To drop this custom SQL Profile after it has been created:
              REM EXEC DBMS_SQLTUNE.DROP_SQL_PROFILE('coe_5gtxdqbx0d0c3_3552771247');
              REM 5. Be aware that using DBMS_SQLTUNE requires a license
              REM for the Oracle Tuning Pack.
              REM
              WHENEVER SQLERROR EXIT SQL.SQLCODE;
              REM
              VAR signature NUMBER;
              REM
              DECLARE
              sql_txt CLOB;
              h SYS.SQLPROF_ATTR;
              BEGIN
              sql_txt := q'[
              INSERT INTO PS_CA_SUM_RC_TAO4 (PROCESS_INSTANCE, BUSINESS_UNIT, PROJECT_ID, ACTIVITY_ID, ANALYSIS_TYPE, RESOURCE_TYPE, RESOURCE_CATEGORY, RESOURCE_SUB_CAT, BI_DISTRIB_STATUS, GL_DISTRIB_STATUS, FOREIGN_CURRENCY, CONTRACT_CURRENCY, CONTRACT_NUM, CONTRACT_LINE_NUM, CA_FEE_STATUS, RESOURCE_QUANTITY, FOREIGN_AMOUNT_BSE, FOREIGN_AMOUNT_INC, FOREIGN_AMOUNT, CONTRACT_AMT_BSE, CONTRACT_AMT_INC, CONTRACT_AMT, MIN_TRANS_DT, MAX_TRANS_DT, CAND_MIN_TRANS_DT, CAND_MAX_TRANS_DT) SELECT B.PROCESS_INSTANCE, A.BUSINESS_UNIT, A.PROJECT_ID, A.ACTIVITY_ID, A.ANALYSIS_TYPE, A.RESOURCE_TYPE, A.RESOURCE_CATEGORY, A.RESOURCE_SUB_CAT, A.BI_DISTRIB_STATUS, A.GL_DISTRIB_STATUS,
              A.FOREIGN_CURRENCY, A.CONTRACT_CURRENCY, A.CONTRACT_NUM, A.CONTRACT_LINE_NUM, A.CA_FEE_STATUS, (A.RESOURCE_QUANTITY+B.RESOURCE_QUANTITY), A.FOREIGN_AMOUNT, B.FOREIGN_AMOUNT, (A.FOREIGN_AMOUNT+B.FOREIGN_AMOUNT), A.CONTRACT_AMT, B.CONTRACT_AMT, (A.CONTRACT_AMT+B.CONTRACT_AMT), A.MIN_TRANS_DT, A.MAX_TRANS_DT, B.CAND_MIN_TRANS_DT, B.CAND_MAX_TRANS_DT FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO4 B , PS_CA_SUM_IN_USE C WHERE B.BUSINESS_UNIT = C.BUSINESS_UNIT AND B.PROJECT_ID = C.PROJECT_ID AND B.ACTIVITY_ID = C.ACTIVITY_ID AND B.ANALYSIS_TYPE = C.ANALYSIS_TYPE AND B.RESOURCE_TYPE = C.RESOURCE_TYPE AND B.RESOURCE_CATEGORY = C.RESOURCE_CATEGORY AND B.RESOURCE_SUB_CAT =
              C.RESOURCE_SUB_CAT AND B.BI_DISTRIB_STATUS = C.BI_DISTRIB_STATUS AND B.GL_DISTRIB_STATUS = C.GL_DISTRIB_STATUS AND B.FOREIGN_CURRENCY = C.FOREIGN_CURRENCY AND B.CONTRACT_CURRENCY = C.CONTRACT_CURRENCY AND B.CONTRACT_NUM = C.CONTRACT_NUM AND B.CONTRACT_LINE_NUM = C.CONTRACT_LINE_NUM AND B.CA_FEE_STATUS = C.CA_FEE_STATUS AND A.BUSINESS_UNIT = B.BUSINESS_UNIT AND A.PROJECT_ID = B.PROJECT_ID AND A.ACTIVITY_ID = B.ACTIVITY_ID AND A.ANALYSIS_TYPE = B.ANALYSIS_TYPE AND A.RESOURCE_TYPE = B.RESOURCE_TYPE AND A.RESOURCE_CATEGORY = B.RESOURCE_CATEGORY AND A.RESOURCE_SUB_CAT = B.RESOURCE_SUB_CAT AND A.BI_DISTRIB_STATUS = B.BI_DISTRIB_STATUS AND A.GL_DISTRIB_STATUS =
              B.GL_DISTRIB_STATUS AND A.FOREIGN_CURRENCY = B.FOREIGN_CURRENCY AND A.CONTRACT_CURRENCY = B.CONTRACT_CURRENCY AND A.CONTRACT_NUM = B.CONTRACT_NUM AND A.CONTRACT_LINE_NUM = B.CONTRACT_LINE_NUM AND A.CA_FEE_STATUS = B.CA_FEE_STATUS AND B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'
              ]';
              h := SYS.SQLPROF_ATTR(
              q'[BEGIN_OUTLINE_DATA]',
              q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
              q'[OPTIMIZER_FEATURES_ENABLE('11.2.0.3')]',
              q'[DB_VERSION('11.2.0.3')]',
              q'[OPT_PARAM('_unnest_subquery' 'false')]',
              q'[OPT_PARAM('optimizer_dynamic_sampling' 4)]',
              q'[OPT_PARAM('_optimizer_cost_based_transformation' 'off')]',
              q'[ALL_ROWS]',
              q'[OUTLINE_LEAF(@"SEL$1")]',
              q'[OUTLINE_LEAF(@"INS$1")]',
              q'[FULL(@"INS$1" "PS_CA_SUM_RC_TAO4"@"INS$1")]',
              q'[FULL(@"SEL$1" "A"@"SEL$1")]',
              q'[INDEX(@"SEL$1" "B"@"SEL$1" ("PS_CA_SUM_TAO4"."PROCESS_INSTANCE" "PS_CA_SUM_TAO4"."BUSINESS_UNIT" "PS_CA_SUM_TAO4"."PROJECT_ID" "PS_CA_SUM_TAO4"."ACTIVITY_ID" "PS_CA_SUM_TAO4"."ANALYSIS_TYPE" "PS_CA_SUM_TAO4"."RESOURCE_TYPE" "PS_CA_SUM_TAO4"."RESOURCE_CATEGORY" "PS_CA_SUM_TAO4"."RESOURCE_SUB_CAT" "PS_CA_SUM_TAO4"."BI_DISTRIB_STATUS" "PS_CA_SUM_TAO4"."GL_DISTRIB_STATUS" "PS_CA_SUM_TAO4"."FOREIGN_CURRENCY" "PS_CA_SUM_TAO4"."CONTRACT_CURRENCY" "PS_CA_SUM_TAO4"."CONTRACT_NUM" ]',
              q'[ "PS_CA_SUM_TAO4"."CONTRACT_LINE_NUM" "PS_CA_SUM_TAO4"."CA_FEE_STATUS"))]',
              q'[FULL(@"SEL$1" "C"@"SEL$1")]',
              q'[LEADING(@"SEL$1" "A"@"SEL$1" "B"@"SEL$1" "C"@"SEL$1")]',
              q'[USE_NL(@"SEL$1" "B"@"SEL$1")]',
              q'[NLJ_BATCHING(@"SEL$1" "B"@"SEL$1")]',
              q'[USE_HASH(@"SEL$1" "C"@"SEL$1")]',
              q'[SWAP_JOIN_INPUTS(@"SEL$1" "C"@"SEL$1")]',
              q'[END_OUTLINE_DATA]');
              :signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt);
              DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
              sql_text => sql_txt,
              profile => h,
              name => 'coe_5gtxdqbx0d0c3_3552771247',
              description => 'coe 5gtxdqbx0d0c3 3552771247 '||:signature||'',
              category => 'DEFAULT',
              validate => TRUE,
              replace => TRUE,
              force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );
              END;
              /
              WHENEVER SQLERROR CONTINUE
              SET ECHO OFF;
              PRINT signature
              PRO
              PRO ... manual custom SQL Profile has been created
              PRO
              SET TERM ON ECHO OFF LIN 80 TRIMS OFF NUMF "";
              SPO OFF;
              PRO
              PRO COE_XFR_SQL_PROFILE_5gtxdqbx0d0c3_3552771247 completed
              But then we must manually change the table and index names from 4 to 5.
              DECLARE
              sql_txt CLOB;
              h SYS.SQLPROF_ATTR;
              BEGIN
              sql_txt := q'[
              INSERT INTO PS_CA_SUM_RC_TAO5 (PROCESS_INSTANCE, BUSINESS_UNIT, PROJECT_ID, ACTIVITY_ID, ANALYSIS_TYPE, RESOURCE_TYPE, RESOURCE_CATEGORY, RESOURCE_SUB_CAT, BI_DISTRIB_STATUS, GL_DISTRIB_STATUS, FOREIGN_CURRENCY, CONTRACT_CURRENCY, CONTRACT_NUM, CONTRACT_LINE_NUM, CA_FEE_STATUS, RESOURCE_QUANTITY, FOREIGN_AMOUNT_BSE, FOREIGN_AMOUNT_INC, FOREIGN_AMOUNT, CONTRACT_AMT_BSE, CONTRACT_AMT_INC, CONTRACT_AMT, MIN_TRANS_DT, MAX_TRANS_DT, CAND_MIN_TRANS_DT, CAND_MAX_TRANS_DT) SELECT B.PROCESS_INSTANCE, A.BUSINESS_UNIT, A.PROJECT_ID, A.ACTIVITY_ID, A.ANALYSIS_TYPE, A.RESOURCE_TYPE, A.RESOURCE_CATEGORY, A.RESOURCE_SUB_CAT, A.BI_DISTRIB_STATUS, A.GL_DISTRIB_STATUS,
              A.FOREIGN_CURRENCY, A.CONTRACT_CURRENCY, A.CONTRACT_NUM, A.CONTRACT_LINE_NUM, A.CA_FEE_STATUS, (A.RESOURCE_QUANTITY+B.RESOURCE_QUANTITY), A.FOREIGN_AMOUNT, B.FOREIGN_AMOUNT, (A.FOREIGN_AMOUNT+B.FOREIGN_AMOUNT), A.CONTRACT_AMT, B.CONTRACT_AMT, (A.CONTRACT_AMT+B.CONTRACT_AMT), A.MIN_TRANS_DT, A.MAX_TRANS_DT, B.CAND_MIN_TRANS_DT, B.CAND_MAX_TRANS_DT FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO5 B , PS_CA_SUM_IN_USE C WHERE B.BUSINESS_UNIT = C.BUSINESS_UNIT AND B.PROJECT_ID = C.PROJECT_ID AND B.ACTIVITY_ID = C.ACTIVITY_ID AND B.ANALYSIS_TYPE = C.ANALYSIS_TYPE AND B.RESOURCE_TYPE = C.RESOURCE_TYPE AND B.RESOURCE_CATEGORY = C.RESOURCE_CATEGORY AND B.RESOURCE_SUB_CAT =
              C.RESOURCE_SUB_CAT AND B.BI_DISTRIB_STATUS = C.BI_DISTRIB_STATUS AND B.GL_DISTRIB_STATUS = C.GL_DISTRIB_STATUS AND B.FOREIGN_CURRENCY = C.FOREIGN_CURRENCY AND B.CONTRACT_CURRENCY = C.CONTRACT_CURRENCY AND B.CONTRACT_NUM = C.CONTRACT_NUM AND B.CONTRACT_LINE_NUM = C.CONTRACT_LINE_NUM AND B.CA_FEE_STATUS = C.CA_FEE_STATUS AND A.BUSINESS_UNIT = B.BUSINESS_UNIT AND A.PROJECT_ID = B.PROJECT_ID AND A.ACTIVITY_ID = B.ACTIVITY_ID AND A.ANALYSIS_TYPE = B.ANALYSIS_TYPE AND A.RESOURCE_TYPE = B.RESOURCE_TYPE AND A.RESOURCE_CATEGORY = B.RESOURCE_CATEGORY AND A.RESOURCE_SUB_CAT = B.RESOURCE_SUB_CAT AND A.BI_DISTRIB_STATUS = B.BI_DISTRIB_STATUS AND A.GL_DISTRIB_STATUS =
              B.GL_DISTRIB_STATUS AND A.FOREIGN_CURRENCY = B.FOREIGN_CURRENCY AND A.CONTRACT_CURRENCY = B.CONTRACT_CURRENCY AND A.CONTRACT_NUM = B.CONTRACT_NUM AND A.CONTRACT_LINE_NUM = B.CONTRACT_LINE_NUM AND A.CA_FEE_STATUS = B.CA_FEE_STATUS AND B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'
              ]';
              h := SYS.SQLPROF_ATTR(
              q'[BEGIN_OUTLINE_DATA]',
              q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
              q'[OPTIMIZER_FEATURES_ENABLE('11.2.0.3')]',
              q'[DB_VERSION('11.2.0.3')]',
              q'[OPT_PARAM('_unnest_subquery' 'false')]',
              q'[OPT_PARAM('optimizer_dynamic_sampling' 4)]',
              q'[OPT_PARAM('_optimizer_cost_based_transformation' 'off')]',
              q'[ALL_ROWS]',
              q'[OUTLINE_LEAF(@"SEL$1")]',
              q'[OUTLINE_LEAF(@"INS$1")]',
              q'[FULL(@"INS$1" "PS_CA_SUM_RC_TAO5"@"INS$1")]',
              q'[FULL(@"SEL$1" "A"@"SEL$1")]',
              q'[INDEX(@"SEL$1" "B"@"SEL$1" ("PS_CA_SUM_TAO5"."PROCESS_INSTANCE" "PS_CA_SUM_TAO5"."BUSINESS_UNIT" "PS_CA_SUM_TAO5"."PROJECT_ID" "PS_CA_SUM_TAO5"."ACTIVITY_ID" "PS_CA_SUM_TAO5"."ANALYSIS_TYPE" "PS_CA_SUM_TAO5"."RESOURCE_TYPE" "PS_CA_SUM_TAO5"."RESOURCE_CATEGORY" "PS_CA_SUM_TAO5"."RESOURCE_SUB_CAT" "PS_CA_SUM_TAO5"."BI_DISTRIB_STATUS" "PS_CA_SUM_TAO5"."GL_DISTRIB_STATUS" "PS_CA_SUM_TAO5"."FOREIGN_CURRENCY" "PS_CA_SUM_TAO5"."CONTRACT_CURRENCY" "PS_CA_SUM_TAO5"."CONTRACT_NUM" ]',
              q'[ "PS_CA_SUM_TAO5"."CONTRACT_LINE_NUM" "PS_CA_SUM_TAO5"."CA_FEE_STATUS"))]',
              q'[FULL(@"SEL$1" "C"@"SEL$1")]',
              q'[LEADING(@"SEL$1" "A"@"SEL$1" "B"@"SEL$1" "C"@"SEL$1")]',
              q'[USE_NL(@"SEL$1" "B"@"SEL$1")]',
              q'[NLJ_BATCHING(@"SEL$1" "B"@"SEL$1")]',
              q'[USE_HASH(@"SEL$1" "C"@"SEL$1")]',
              q'[SWAP_JOIN_INPUTS(@"SEL$1" "C"@"SEL$1")]',
              q'[END_OUTLINE_DATA]');
              :signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt);
              DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
              sql_text => sql_txt,
              profile => h,
              name => 'coe_5gtxdqbx0d0c3_3552771247',
              description => 'coe 5gtxdqbx0d0c3 3552771247 '||:signature||'',
              category => 'DEFAULT',
              validate => TRUE,
              replace => TRUE,
              force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );
              END;
              /
Or to take Arms against a Sea of statements
The profile has the advantage that it can be applied quickly without a code change.  It is the perfect tool for the DBA with a production performance problem. However, there are some other considerations.
              • If applying to statement that references a PS temp record then we need to apply the profile to all instances of the record (both non-shared instances and the shared instance).
              • We were lucky that we referenced instance 5 of two temporary records. However, you could get a situation where a statement references different instances of different temporary records.  So perhaps instance 5 of one table and instance 6 of another.  In which case, you might also get instance 6 of the first table and instance 5 of the other.  A SQL profile could be needed for each permutation.
• Bear in mind also that some areas of PeopleSoft use dynamically generated SQL, so you get similar SQL statements that are sufficiently different for the profile not to match.
              • Any changes to the expansion of Application Engine and PeopleCode MetaSQL on upgrading PeopleTools, or potentially even patching, will also prevent matching.
              • There is also the challenge of dealing with code changes as the system is upgraded, patched and customised.  A small code change, perhaps just an extra field in the select clause, can result in a performance regression because the profile stops matching. Of course, this challenge is not limited to PeopleSoft systems! 
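The following is only a minimal sketch of how the per-instance profiles might be scripted: the statement text and hint list are abbreviated stand-ins for the full text above, the instance range of 1 to 6 is an assumption that depends upon your temporary table instance settings, and the profile name is invented. The point is simply that both the SQL text and the outline hints name the tables, so both must have the instance number substituted.

DECLARE
  -- Stand-in for the full statement shown above, with the instance suffix
  -- replaced by the token <n>.  The shared instance would use the unsuffixed
  -- table names.
  base_sql CLOB := q'[INSERT INTO PS_CA_SUM_RC_TAO<n> (...) SELECT ... FROM PS_CA_SUM_TAO<n> B ...]';
  l_sql    CLOB;
  h        SYS.SQLPROF_ATTR;
BEGIN
  FOR i IN 1 .. 6 LOOP  -- assumed range of temporary table instances
    l_sql := REPLACE(base_sql, '<n>', TO_CHAR(i));
    -- The outline hints reference the tables too, so they need the same
    -- substitution (only one hint shown; the full outline goes here).
    h := SYS.SQLPROF_ATTR(
      q'[BEGIN_OUTLINE_DATA]',
      REPLACE(q'[FULL(@"INS$1" "PS_CA_SUM_RC_TAO<n>"@"INS$1")]', '<n>', TO_CHAR(i)),
      q'[END_OUTLINE_DATA]');
    DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
      sql_text    => l_sql,
      profile     => h,
      name        => 'coe_ca_sum_rc_tao'||i,  -- invented name, one profile per instance
      category    => 'DEFAULT',
      validate    => TRUE,
      replace     => TRUE,
      force_match => TRUE);
  END LOOP;
END;
/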
Profiles are likely to be effective if there are no PeopleSoft temporary records present.  So you can generally use them in COBOL and SQR processes and in the on-line application (other than in on-line Application Engine processes).
Aye, there's the rub
I would use a profile (or a set of profiles) as a short-term temporary fix that is easier to introduce into production, and then add hints to the source code, thereby fixing all instances of the code, not just the ones that have been profiled. Of course, that does entail a code change, and everything that goes with that.  One strong argument against making a code change is that you have to change the code again to remove or alter the hint if it becomes unnecessary at some time in the future after a significant change, such as an Oracle upgrade.  However, on balance, I think that is better than the scenario where the profile stops working one day without warning.
The rest is silence
Unless you add a comment.
©David Kurtz, Go-Faster Consultancy Ltd.

              Introducing the Analytic Keep Clause for Effective-Dated/Sequence Queries in PeopleSoft

              Wed, 2014-07-09 12:46
Those of us who work with PeopleSoft, and especially the HCM product, are all too familiar with the concept of effective-dated data, and the need to find the data that was effective at a particular date.  PeopleSoft products have always made extensive use of correlated sub-queries to determine the required rows from an effective-dated record.

The JOB record is at the heart of HCM. It is both effective-dated and effective-sequenced, and I will use it for the demonstrations in this article. I am going to suggest an alternative, although Oracle-specific, SQL construction.

Let's start by looking at the job data for an employee in the demo database. Employee KF0018 has 17 rows of job data across two concurrent jobs.  The question I am going to ask is "What was the annual salary for this employee on 11 February 1995?".  Therefore, I am interested in the rows marked below with asterisks.
              column annual_rt format 999,999
              SELECT emplid, empl_rcd, effdt, effseq, action, deptid, currency_cd, annual_rt
              FROM ps_job j
              WHERE j.emplid = 'KF0018'
              ORDER BY 1,2,3,4
              /

EMPLID        EMPL_RCD EFFDT         EFFSEQ ACT DEPTID     CUR ANNUAL_RT
----------- ---------- --------- ---------- --- ---------- --- ---------
KF0018               0 12-JUN-83          0 HIR 13000      FRF   120,000
KF0018               0 01-JAN-84          0 PAY 13000      FRF   123,600
KF0018               0 01-JAN-85          0 PAY 13000      FRF   127,308
KF0018               0 01-JAN-86          0 PAY 13000      FRF   131,764
KF0018               0 01-JAN-87          0 PAY 13000      FRF   136,376
KF0018               0 01-JAN-88          0 PAY 13000      FRF   140,467
KF0018               0 01-JAN-89          0 PAY 13000      FRF   147,490
KF0018               0 22-JAN-95          0 PRO 13000      FRF   147,490
KF0018               0 22-JAN-95          1 PAY 13000      FRF   294,239 *
KF0018               0 22-JAN-96          0 PAY 13000      FRF   318,575
KF0018               0 01-JAN-98          0 PAY 13000      FRF   346,156
KF0018               0 01-JAN-00          0 DTA 13000      FRF   346,156
KF0018               0 01-JAN-02          0 PAY 13000      EUR    52,771
KF0018               1 01-NOV-89          0 ASG 21300      GBP    22,440
KF0018               1 31-DEC-93          0 ASC 21300      GBP    22,440
KF0018               1 01-JAN-94          0 ASG 12000      GBP    22,440 *
KF0018               1 31-DEC-95          0 ASC 10000      GBP    22,440

              I will set statistics level to ALL so I can obtain detailed information about how the SQL statements execute:
              ALTER SESSION SET statistics_level = ALL;

I extracted the execution plans and execution statistics with the following command:
select * from table(dbms_xplan.display_cursor(null,null,'IOSTATS'));
Typical PeopleSoft Platform-Agnostic Construction
This is the usual way to construct the query in PeopleSoft. It is also valid on all database platforms supported by PeopleSoft, not just Oracle.
              SELECT  emplid, empl_rcd, effdt, effseq, action, deptid, currency_cd, annual_rt
              FROM ps_job j
              WHERE j.effdt = (
              SELECT MAX (j1.effdt) FROM ps_job j1
              WHERE j1.emplid = j.emplid
              AND j1.empl_rcd = j.empl_rcd
              AND j1.effdt <= TO_DATE('19950211','YYYYMMDD'))
              AND j.effseq = (
              SELECT MAX (j2.effseq) FROM ps_job j2
              WHERE j2.emplid = j.emplid
              AND j2.empl_rcd = j.empl_rcd
              AND j2.effdt = j.effdt)
              AND j.emplid = 'KF0018'
              ORDER BY 1,2,3,4
              /

EMPLID        EMPL_RCD EFFDT         EFFSEQ ACT DEPTID     CUR ANNUAL_RT
----------- ---------- --------- ---------- --- ---------- --- ---------
KF0018               0 22-JAN-95          1 PAY 13000      FRF   294,239
KF0018               1 01-JAN-94          0 ASG 12000      GBP    22,440

This required three accesses of indexes on the PS_JOB table, and two accesses of the table itself, using 26 consistent reads.
Plan hash value: 2299825310
----------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |         |      1 |        |      2 |00:00:00.01 |      26 |     2 |
|   1 |  SORT ORDER BY                     |         |      1 |      1 |      2 |00:00:00.01 |      26 |     2 |
|   2 |   NESTED LOOPS                     |         |      1 |      1 |      2 |00:00:00.01 |      26 |     2 |
|   3 |    NESTED LOOPS                    |         |      1 |      1 |      3 |00:00:00.01 |      21 |     2 |
|   4 |     VIEW                           | VW_SQ_1 |      1 |      1 |      2 |00:00:00.01 |      14 |     2 |
|*  5 |      FILTER                        |         |      1 |        |      2 |00:00:00.01 |      14 |     2 |
|   6 |       HASH GROUP BY                |         |      1 |      1 |      2 |00:00:00.01 |      14 |     2 |
|   7 |        TABLE ACCESS BY INDEX ROWID | PS_JOB  |      1 |      1 |     12 |00:00:00.01 |      14 |     2 |
|*  8 |         INDEX RANGE SCAN           | PS_JOB  |      1 |      1 |     12 |00:00:00.01 |       2 |     2 |
|   9 |     TABLE ACCESS BY INDEX ROWID    | PS_JOB  |      2 |      1 |      3 |00:00:00.01 |       7 |     0 |
|* 10 |      INDEX RANGE SCAN              | PSAJOB  |      2 |      1 |      3 |00:00:00.01 |       4 |     0 |
|* 11 |    VIEW PUSHED PREDICATE           | VW_SQ_2 |      3 |      1 |      2 |00:00:00.01 |       5 |     0 |
|* 12 |     FILTER                         |         |      3 |        |      3 |00:00:00.01 |       5 |     0 |
|  13 |      SORT AGGREGATE                |         |      3 |      1 |      3 |00:00:00.01 |       5 |     0 |
|* 14 |       FILTER                       |         |      3 |        |      5 |00:00:00.01 |       5 |     0 |
|* 15 |        INDEX RANGE SCAN            | PSAJOB  |      3 |      1 |      5 |00:00:00.01 |       5 |     0 |
----------------------------------------------------------------------------------------------------------------

              Predicate Information (identified by operation id):
              ---------------------------------------------------

              5 - filter("J1"."EMPLID"='KF0018')
              8 - access("J1"."EMPLID"='KF0018' AND "J1"."SYS_NC00164$">=HEXTORAW('883CFDF4FEF8FEFAFF') )
              filter(SYS_OP_UNDESCEND("J1"."SYS_NC00164$")<=TO_DATE(' 1995-02-11 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))
              10 - access("J"."EMPLID"='KF0018' AND "ITEM_2"="J"."EMPL_RCD" AND
              "J"."SYS_NC00164$"=SYS_OP_DESCEND("MAX(J1.EFFDT)"))
              filter(SYS_OP_UNDESCEND("J"."SYS_NC00164$")="MAX(J1.EFFDT)")
              11 - filter(SYS_OP_UNDESCEND("J"."SYS_NC00165$")="MAX(J2.EFFSEQ)")
              12 - filter(COUNT(*)>0)
              14 - filter('KF0018'="J"."EMPLID")
              15 - access("J2"."EMPLID"='KF0018' AND "J2"."EMPL_RCD"="J"."EMPL_RCD" AND
              "J2"."SYS_NC00164$"=SYS_OP_DESCEND(SYS_OP_UNDESCEND("J"."SYS_NC00164$")))
              filter(SYS_OP_UNDESCEND("J2"."SYS_NC00164$")=SYS_OP_UNDESCEND("J"."SYS_NC00164$"))

This construction is also the reason you are required to set
_UNNEST_SUBQUERY=FALSE
on all PeopleSoft systems.
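For the record, that is typically done with an initialisation parameter change, something like the following sketch. Note that hidden underscore parameters must be double-quoted, and should only be set on the advice of Oracle or PeopleSoft.

ALTER SYSTEM SET "_unnest_subquery" = FALSE SCOPE=BOTH;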
Analytic Function and In-line View/Sub-query Factoring
I have seen people use a combination of analytic functions and in-line views to avoid having to use the correlated sub-query construction. This has been possible since Oracle 9i.
              WITH X AS (
              SELECT emplid, empl_rcd, effdt, effseq, action, deptid, currency_cd, annual_rt
              , ROW_NUMBER() OVER (PARTITION BY emplid, empl_rcd 
              ORDER BY effdt DESC, effseq DESC) myrowseq
              FROM ps_job j
              WHERE j.effdt <= TO_DATE('19950211','YYYYMMDD')
              AND j.emplid = 'KF0018'
              )
              SELECT emplid, empl_rcd, effdt, effseq, action, deptid, currency_cd, annual_rt
              FROM x
              WHERE myrowseq = 1
              ORDER BY 1,2,3,4
              /

EMPLID        EMPL_RCD EFFDT         EFFSEQ ACT DEPTID     CUR ANNUAL_RT
----------- ---------- --------- ---------- --- ---------- --- ---------
KF0018               0 22-JAN-95          1 PAY 13000      FRF   294,239
KF0018               1 01-JAN-94          0 ASG 12000      GBP    22,440

We get the same result, but now the index is scanned just once and we need only 14 consistent reads, so it produces a significant improvement. However, it still includes a sort operation in addition to the window function, and we have to create a sequence number column in the in-line view and filter by it in the final query.
Plan hash value: 1316906785
----------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |        |      1 |        |      2 |00:00:00.01 |      14 |
|   1 |  SORT ORDER BY                  |        |      1 |      1 |      2 |00:00:00.01 |      14 |
|*  2 |   VIEW                          |        |      1 |      1 |      2 |00:00:00.01 |      14 |
|*  3 |    WINDOW NOSORT                |        |      1 |      1 |     12 |00:00:00.01 |      14 |
|   4 |     TABLE ACCESS BY INDEX ROWID | PS_JOB |      1 |      1 |     12 |00:00:00.01 |      14 |
|*  5 |      INDEX RANGE SCAN           | PSAJOB |      1 |      1 |     12 |00:00:00.01 |       2 |
----------------------------------------------------------------------------------------------------

              Predicate Information (identified by operation id):
              ---------------------------------------------------

              2 - filter("MYROWSEQ"=1)
              3 - filter(ROW_NUMBER() OVER ( PARTITION BY "EMPLID","EMPL_RCD" ORDER BY
              "J"."SYS_NC00164$","J"."SYS_NC00165$")<=1)
              5 - access("J"."EMPLID"='KF0018' AND "J"."SYS_NC00164$">=HEXTORAW('883CFDF4FEF8FEFAFF')
              )
              filter(SYS_OP_UNDESCEND("J"."SYS_NC00164$")<=TO_DATE(' 1995-02-11 00:00:00',
              'syyyy-mm-dd hh24:mi:ss'))

Analytic Function Keep Clause
This form of the analytic functions is documented for the first time in Oracle 12c, but it has been available since 10g (my thanks to Tony Hasler for introducing me to it). It works by effectively keeping a running maximum value of the columns in the ORDER BY clause within each group.
              SELECT emplid, empl_rcd
              , MAX(effdt) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS effdt
              , MAX(effseq) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS effseq
              , MAX(action) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS action
              , MAX(deptid) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS deptid
              , MAX(currency_cd) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS currency_cd
              , MAX(annual_rt) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS annual_rt
              FROM ps_job j
              WHERE j.effdt <= TO_DATE('19950211','YYYYMMDD')
              AND j.emplid = 'KF0018'
              GROUP BY emplid, empl_rcd
              /

EMPLID        EMPL_RCD EFFDT         EFFSEQ ACT DEPTID     CUR ANNUAL_RT
----------- ---------- --------- ---------- --- ---------- --- ---------
KF0018               0 22-JAN-95          1 PAY 13000      FRF   294,239
KF0018               1 01-JAN-94          0 ASG 12000      GBP    22,440

Although this construction uses one additional consistent read (15 rather than 14), it has the advantage of using neither an in-line view nor a window function, and it does not sort the data.
Plan hash value: 1550496807
--------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |        |      1 |        |      2 |00:00:00.01 |      15 |
|   1 |  SORT GROUP BY NOSORT         |        |      1 |      1 |      2 |00:00:00.01 |      15 |
|   2 |   TABLE ACCESS BY INDEX ROWID | PS_JOB |      1 |      1 |     12 |00:00:00.01 |      15 |
|*  3 |    INDEX RANGE SCAN           | PS_JOB |      1 |      1 |     12 |00:00:00.01 |       3 |
--------------------------------------------------------------------------------------------------

              Predicate Information (identified by operation id):
              ---------------------------------------------------

              3 - access("J"."EMPLID"='KF0018' AND "J"."SYS_NC00164$">=HEXTORAW('883CFDF3FEF8FEFAFF'
              ) )
              filter(SYS_OP_UNDESCEND("J"."SYS_NC00164$")<=TO_DATE(' 1995-02-12 00:00:00',
              'syyyy-mm-dd hh24:mi:ss'))

I think this construction could be useful in PeopleSoft.  At first glance the SQL appears more complicated, but in this example it removed two correlated sub-queries.
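For example, the same technique might be wrapped into a view that returns the current job row per assignment. This is only a sketch: the view name is invented, most JOB columns are omitted, and any row-level security is left out.

CREATE OR REPLACE VIEW ps_xx_job_curr AS  -- hypothetical custom view name
SELECT emplid, empl_rcd
,      MAX(effdt)       KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS effdt
,      MAX(effseq)      KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS effseq
,      MAX(deptid)      KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS deptid
,      MAX(currency_cd) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS currency_cd
,      MAX(annual_rt)   KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS annual_rt
FROM   ps_job
WHERE  effdt <= TRUNC(SYSDATE)  -- rows effective as of today
GROUP BY emplid, empl_rcd;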
Using Analytic Functions in PS/Query
Of course, you can code this construction anywhere you can simply enter SQL as text.  However, it also has the advantage over the other analytic function construction that it can be coded in the PS/Query tool.  The analytic functions in the select clause should be created in PS/Query expressions with the Aggregate Expression checkbox ticked.
Analytic 'Keep' function in an aggregate expression in the Windows client version of PS/Query
The analytic functions can be selected in PS/Query, and their lengths and titles can be tidied up.
PS/Query with analytic 'Keep' functions
This is the resulting SQL, which is the same as before (with row-level security added by PS/Query) and produces the same results.
              SELECT A.EMPLID, A.EMPL_RCD, MAX(effdt) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq)
              , MAX(effseq) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq), MAX(action) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq), MAX(deptid) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq), MAX(currency_cd) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq), MAX(annual_rt) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq)
              FROM PS_JOB A, PS_EMPLMT_SRCH_QRY A1
              WHERE ( A.EMPLID = A1.EMPLID
              AND A.EMPL_RCD = A1.EMPL_RCD
              AND A1.OPRID = 'PS'
              AND ( A.EFFDT <= TO_DATE('1995-02-11','YYYY-MM-DD')
              AND A.EMPLID = 'KF0018' ) )
              GROUP BY A.EMPLID, A.EMPL_RCD
              ©David Kurtz, Go-Faster Consultancy Ltd.