
David Kurtz

This blog contains things about PeopleSoft that DBAs might find interesting.
Or then again they might not!
Non-PeopleSoft Oracle stuff is at blog.go-faster.co.uk.

Filtering PeopleTools SQL from Performance Monitor Traces

Mon, 2014-11-03 15:01

I have been doing some on-line performance tuning on a PeopleSoft Financials system using PeopleSoft Performance Monitor (PPM). End-users have collected verbose PPM traces. Usually, when I use PPM in a production system, all the components are fully cached by the normal activity of the users (except when the application server caches have recently been cleared). However, when working in a user test environment it is common to find that the components are not fully cached. This presents two problems.
  • The application servers spend quite a lot of time executing queries on the PeopleTools tables to load the components, pages and PeopleCode into their caches. We can see in the screenshot of the component trace (Figure 1) that there is a warning message that component objects are not fully cached, and that these cache misses skew timings.
  • In verbose mode, the PPM traces collect a lot of additional transactions capturing executions and fetches against PeopleTools tables. The PPM analytic components cannot always manage the resultant volume of transactions.
    Figure 1. Component trace as collected by PPM
    If I go further down the same page and look in the SQL Summary, I can see SQL operations against PeopleTools tables (they are easily identifiable in that they generally do not have an underscore in the third character). Not only are 5 of the top 8 SQL operations related to PeopleTools tables, they also account for over 13000 executions, which means there are at least 13000 rows of additional data to be read from PSPMTRANSHIST.
    Figure 2. SQL Summary of PPM trace with PeopleTools SQL
    When I open the longest running server round trip (also referred to as a Performance Monitoring Unit, or PMU), I can only load 1001 rows before I get a message warning that the maximum row limit has been reached. The duration summary and the numbers of executions and fetches cannot be calculated, and hence 0 is displayed.
    Figure 3: Details of longest PMU with PeopleTools SQL
    Another consequence of the PeopleTools data is that it can take a long time to open the PMU tree. There is no screenshot of the PMU tree here because in this case I had so much data that I couldn't open it before the transaction timed out!
    Solution
    My solution to this problem is to delete the transactions that relate to PeopleTools SQL, and to correct the durations and the numbers of executions and fetches held in summary transactions. The rationale is that these transactions would not normally occur in significant quantities in a real production system, and there is not much I can do about them when they do.
    The first step is to clone the trace. I could work on the trace directly, but I want to preserve the original data.
    PPM transactions are held in the table PSPMTRANSHIST. They have a unique identifier, PM_INSTANCE_ID. A single server round trip, also called a Performance Monitoring Unit (PMU), will consist of many transactions. They can be shown as a tree, and each transaction has another field, PM_PARENT_INST_ID, which holds the instance number of its parent. This links the data together, and we can use hierarchical queries in Oracle SQL to walk the tree. Another field, PM_TOP_INST_ID, identifies the root transaction in the tree.
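    For example, a minimal sketch of such a hierarchical query, walking one PMU tree downwards from its root transaction (the instance number here is hypothetical):

    SELECT LEVEL, h.pm_instance_id, h.pm_parent_inst_id
    ,      h.pm_trans_defn_id, h.pm_trans_duration
    FROM   pspmtranshist h
    START WITH h.pm_instance_id = 1234567890123456 /*hypothetical root: pm_instance_id = pm_top_inst_id*/
    CONNECT BY PRIOR h.pm_instance_id = h.pm_parent_inst_id
    ORDER SIBLINGS BY h.pm_instance_id;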
    Cloning a PPM trace is simply a matter of inserting data into PSPMTRANSHIST. However, when I clone a PPM trace I have to make sure that the instance numbers are distinct but still link correctly. In my system I can take a very simple approach. All the instance numbers actually collected by PPM are greater than 10^16. So, I will simply use the modulus function to consistently alter the instances to be different. This approach may break down in future, but it will do for now.
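    For example, a one-line illustration of the modulus trick (the instance number is hypothetical):

    SELECT MOD(10000000000001234, 1E16) new_instance_id
    FROM   dual; /*returns 1234 - the low-order digits survive, so parent/child links are preserved*/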
    On an Oracle database, PL/SQL is a simple and effective way to write small procedural processes. I have written two anonymous blocks of code.
    Note that the cloned trace will be purged from PPM like any other data by the delivered PPM archive process.

    REM xPT.sql
    BEGIN --duplicate PPM traces
      FOR i IN (
        SELECT h.*
        FROM   pspmtranshist h
        WHERE  pm_perf_trace != ' ' /*rows must have a trace name*/
        --AND  pm_perf_trace = '9b. XXXXXXXXXX' /*I could specify a specific trace by name*/
        AND    pm_instance_id > 1E16 /*only look at instance > 1E16 so I do not clone cloned traces*/
      ) LOOP
        INSERT INTO pspmtranshist
        (PM_INSTANCE_ID, PM_TRANS_DEFN_SET, PM_TRANS_DEFN_ID, PM_AGENTID, PM_TRANS_STATUS,
         OPRID, PM_PERF_TRACE, PM_CONTEXT_VALUE1, PM_CONTEXT_VALUE2, PM_CONTEXT_VALUE3,
         PM_CONTEXTID_1, PM_CONTEXTID_2, PM_CONTEXTID_3, PM_PROCESS_ID, PM_AGENT_STRT_DTTM,
         PM_MON_STRT_DTTM, PM_TRANS_DURATION, PM_PARENT_INST_ID, PM_TOP_INST_ID, PM_METRIC_VALUE1,
         PM_METRIC_VALUE2, PM_METRIC_VALUE3, PM_METRIC_VALUE4, PM_METRIC_VALUE5, PM_METRIC_VALUE6,
         PM_METRIC_VALUE7, PM_ADDTNL_DESCR)
        VALUES
        (MOD(i.PM_INSTANCE_ID,1E16) /*apply modulus to instance number*/
        ,i.PM_TRANS_DEFN_SET, i.PM_TRANS_DEFN_ID, i.PM_AGENTID, i.PM_TRANS_STATUS,
         i.OPRID,
         SUBSTR('xPT'||i.PM_PERF_TRACE,1,30) /*adjust trace name*/,
         i.PM_CONTEXT_VALUE1, i.PM_CONTEXT_VALUE2, i.PM_CONTEXT_VALUE3,
         i.PM_CONTEXTID_1, i.PM_CONTEXTID_2, i.PM_CONTEXTID_3, i.PM_PROCESS_ID, i.PM_AGENT_STRT_DTTM,
         i.PM_MON_STRT_DTTM, i.PM_TRANS_DURATION,
         MOD(i.PM_PARENT_INST_ID,1E16), MOD(i.PM_TOP_INST_ID,1E16), /*apply modulus to parent and top instance numbers*/
         i.PM_METRIC_VALUE1, i.PM_METRIC_VALUE2, i.PM_METRIC_VALUE3, i.PM_METRIC_VALUE4, i.PM_METRIC_VALUE5,
         i.PM_METRIC_VALUE6, i.PM_METRIC_VALUE7, i.PM_ADDTNL_DESCR);
      END LOOP;
      COMMIT;
    END;
    /
    Now I will work on the cloned trace. I want to remove certain transactions:
    • PeopleTools SQL. Metric value 7 reports the SQL operation and SQL table name. So if the first word is SELECT and the second word is a PeopleTools table name then it is a PeopleTools SQL operation. A list of PeopleTools tables can be obtained from the object security table PSOBJGROUP.
    • Implicit Commit transactions. This is easy - it is just transaction type 425. 
    Having deleted the PeopleTools transactions, I must also:
    • Correct the transaction durations for any parents of a deleted transaction. I work up the hierarchy of transactions and deduct the duration of the transaction that I am deleting from all of its parents.
    • Transaction types 400, 427 and 428 all record PeopleTools SQL time (metric 66). When I come to such a transaction, I also deduct the duration of the deleted transaction from the PeopleTools SQL time metric in any parent transaction.
    • Delete any children of the transactions that I delete.
    • Count each PeopleTools SQL Execution transaction (type 408) and each PeopleTools SQL Fetch transaction (type 414) that I delete. These counts are also deducted from the summaries on the parent transaction 400.
    The summaries in transaction 400 are used on the 'Round Trip Details' components, and if they are not adjusted you can get misleading results. Without the adjustments, I have encountered PMUs where more than 100% of the total duration is spent in SQL - which is obviously impossible.
    Although this technique of first cloning the whole trace and then deleting the PeopleTools operations can be quite slow, it is not something that you are going to do very often. 
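    Before running the deletion script below, it can be worth doing a dry run that merely counts what would be removed. This sketch reuses the identification logic of the script that follows, although it only counts directly matching transactions, not their children:

    WITH x AS ( /*PeopleTools tables as defined in object security*/
      SELECT o.entname recname
      FROM   psobjgroup o
      WHERE  o.objgroupid = 'PEOPLETOOLS'
      AND    o.enttype = 'R'
    )
    SELECT h.pm_trans_defn_id, COUNT(*) num_trans
    FROM   pspmtranshist h
      LEFT OUTER JOIN x
        ON  h.pm_metric_value7 LIKE 'SELECT '||x.recname||'%'
        AND x.recname = UPPER(REGEXP_SUBSTR(h.pm_metric_value7,'[^ ,]+',8,1))
    WHERE  h.pm_perf_trace LIKE 'xPT%' /*cloned traces only*/
    AND    h.pm_instance_id < 1E16
    AND   (x.recname IS NOT NULL OR h.pm_trans_defn_id IN(425 /*Implicit Commit*/))
    GROUP BY h.pm_trans_defn_id
    ORDER BY num_trans DESC;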
    REM xPT.sql
    REM (c)Go-Faster Consultancy Ltd. 2014
    set serveroutput on echo on
    DECLARE
      l_pm_instance_id_m4 INTEGER;
      l_fetch_count INTEGER;
      l_exec_count INTEGER;
    BEGIN /*now remove PeopleTools SQL transactions and any children, and adjust transaction durations*/
      FOR i IN (
        WITH x AS ( /*returns PeopleTools tables as defined in object security*/
          SELECT o.entname recname
          FROM   psobjgroup o
          WHERE  o.objgroupid = 'PEOPLETOOLS'
          AND    o.enttype = 'R'
        )
        SELECT h.pm_instance_id, h.pm_parent_inst_id, h.pm_trans_duration, h.pm_trans_defn_id
        FROM   pspmtranshist h
          LEFT OUTER JOIN x
            ON  h.pm_metric_value7 LIKE 'SELECT '||x.recname||'%'
            AND x.recname = UPPER(REGEXP_SUBSTR(pm_metric_value7,'[^ ,]+',8,1)) /*first word after SELECT*/
        WHERE  pm_perf_trace LIKE 'xPT%' /*restrict to cloned traces*/
        --AND  pm_perf_trace = 'xPT9b. XXXXXXXXXX' /*work on a specific trace*/
        AND    pm_instance_id < 1E16 /*restrict to cloned traces*/
        AND   (x.recname IS NOT NULL
           OR  h.pm_trans_defn_id IN(425 /*Implicit Commit*/))
        ORDER BY pm_instance_id DESC
      ) LOOP
        l_pm_instance_id_m4 := TO_NUMBER(NULL);

        IF i.pm_parent_inst_id>0 AND i.pm_trans_duration>0 THEN
          FOR j IN(
            SELECT h.pm_instance_id, h.pm_parent_inst_id, h.pm_top_inst_id, h.pm_trans_defn_id
            ,      d.pm_metricid_3, d.pm_metricid_4
            FROM   pspmtranshist h
              INNER JOIN pspmtransdefn d
                ON  d.pm_trans_defn_set = h.pm_trans_defn_set
                AND d.pm_trans_defn_id = h.pm_trans_defn_id
            START WITH h.pm_instance_id = i.pm_parent_inst_id
            CONNECT BY PRIOR h.pm_parent_inst_id = h.pm_instance_id
          ) LOOP
            /*decrement parent transaction times*/
            IF j.pm_metricid_4 = 66 /*PeopleTools SQL Time (ms)*/ THEN --decrement metric 4 on transaction 400
              --dbms_output.put_line('ID:'||i.pm_instance_id||' Type:'||i.pm_trans_defn_id||' decrement metric_value4 by '||i.pm_trans_duration);
              UPDATE pspmtranshist
              SET    pm_metric_value4 = pm_metric_value4 - i.pm_trans_duration
              WHERE  pm_instance_id = j.pm_instance_id
              AND    pm_trans_defn_id = j.pm_trans_defn_id
              AND    pm_metric_value4 >= i.pm_trans_duration
              RETURNING pm_instance_id INTO l_pm_instance_id_m4;
            ELSIF j.pm_metricid_3 = 66 /*PeopleTools SQL Time (ms)*/ THEN --SQL time on serialisation
              --dbms_output.put_line('ID:'||i.pm_instance_id||' Type:'||i.pm_trans_defn_id||' decrement metric_value3 by '||i.pm_trans_duration);
              UPDATE pspmtranshist
              SET    pm_metric_value3 = pm_metric_value3 - i.pm_trans_duration
              WHERE  pm_instance_id = j.pm_instance_id
              AND    pm_trans_defn_id = j.pm_trans_defn_id
              AND    pm_metric_value3 >= i.pm_trans_duration;
            END IF;

            UPDATE pspmtranshist
            SET    pm_trans_duration = pm_trans_duration - i.pm_trans_duration
            WHERE  pm_instance_id = j.pm_instance_id
            AND    pm_trans_duration >= i.pm_trans_duration;
          END LOOP;
        END IF;

        l_fetch_count := 0;
        l_exec_count := 0;
        FOR j IN( /*identify transaction to be deleted and any children*/
          SELECT pm_instance_id, pm_parent_inst_id, pm_top_inst_id, pm_trans_defn_id, pm_metric_value3
          FROM   pspmtranshist
          START WITH pm_instance_id = i.pm_instance_id
          CONNECT BY PRIOR pm_instance_id = pm_parent_inst_id
        ) LOOP
          IF j.pm_trans_defn_id = 408 THEN /*PeopleTools SQL execution*/
            l_exec_count := l_exec_count + 1;
          ELSIF j.pm_trans_defn_id = 414 THEN /*PeopleTools SQL fetch*/
            l_fetch_count := l_fetch_count + j.pm_metric_value3;
          END IF;
          DELETE FROM pspmtranshist h /*delete tools transaction*/
          WHERE  h.pm_instance_id = j.pm_instance_id;
        END LOOP;

        IF l_pm_instance_id_m4 > 0 THEN
          --dbms_output.put_line('ID:'||l_pm_instance_id_m4||' Decrement '||l_exec_count||' executions, '||l_fetch_count||' fetches');
          UPDATE pspmtranshist
          SET    pm_metric_value5 = pm_metric_value5 - l_exec_count
          ,      pm_metric_value6 = pm_metric_value6 - l_fetch_count
          WHERE  pm_instance_id = l_pm_instance_id_m4;
          l_fetch_count := 0;
          l_exec_count := 0;
        END IF;

      END LOOP;
    END;
    /
    Now, I have a second PPM trace that I can open in the analytic component.
    Figure 4: Original and Cloned PPM traces

    When I open the cloned trace, both timings in the duration summary have reduced, as have the numbers of executions and fetches. The durations of the individual server round trips have also reduced.
    Figure 5: Component Trace without PeopleTools transactions
    All of the PeopleTools SQL operations have disappeared from the SQL summary.
    Figure 6: SQL Summary of PPM trace after removing PeopleTools SQL transactions
    The SQL summary now only has 125 rows of data.
    Figure 7: SQL Summary of PMU without PeopleTools SQL
    Now, the PPM tree component opens quickly and without error.
    Figure 8: PMU Tree after removing PeopleTools SQL
    There may still be more transactions in a PMU than I can show in a screenshot, but I can now find the statement that took the most time quite quickly.

    Figure 9: Long SQL transaction further down same PMU tree
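    The same statement can also be found with a simple query against the cloned trace, rather than by browsing the tree; a minimal sketch (the trace name is hypothetical):

    SELECT *
    FROM  (SELECT h.pm_instance_id, h.pm_trans_defn_id
           ,      h.pm_trans_duration, h.pm_metric_value7
           FROM   pspmtranshist h
           WHERE  h.pm_perf_trace = 'xPT9b. XXXXXXXXXX' /*hypothetical cloned trace*/
           ORDER BY h.pm_trans_duration DESC)
    WHERE  ROWNUM <= 10;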
    Conclusions
    I think that it is reasonable and useful to remove PeopleTools SQL operations from a PPM trace. In normal production operation, components will mostly be cached, and this approach renders traces collected in non-production environments both usable in the PPM analytic components and more realistic for performance tuning. However, when deleting some transactions from a PMU, it is essential that the summary data held in other transactions in the same PMU is also corrected so that the metrics remain consistent.
    ©David Kurtz, Go-Faster Consultancy Ltd.

    Minimising Parse Time in Application Engine with ReUseStatement

    Fri, 2014-10-24 09:15
    This article explains how to determine the overhead of using literal values rather than bind variables in SQL submitted by PeopleSoft Application Engine programs. Using a combination of PeopleSoft Application Engine Batch Timings and Oracle Active Session History (ASH), it is possible to determine where it is most effective to alter this behaviour by setting the ReUseStatement attribute.
    ReUse Statement Flag
    I originally wrote about the Performance Benefits of ReUseStatement in Application Engine over 5 years ago, and the problem is no less significant today than it was then. There are still many places in the delivered PeopleSoft application that would benefit from it. However, it is not possible to simply set the attribute across all Application Engine programs, because it will cause errors in steps where SQL is dynamically generated, so each case must be considered. Where the ReUseStatement attribute is changed, it must be tested and migrated like any other customisation.

    Recently, I have been encountering problems in a number of places on a Financials system which were resolved by enabling ReUseStatement.  So I started by calculating how much time was lost by not setting it.
    Application Engine Batch Timings
    If an Application Engine step is not reused, then it is compiled prior to every execution, during which the Application Engine bind variables are resolved to literals. The number and duration of compilations and executions are reported in the Batch Timings, the production of which is controlled with the TraceAE flag in the Process Scheduler configuration file (psprcs.cfg).

    ;-------------------------------------------------------------------------
    ; AE Tracing Bitfield
    ;
    ; Bit Type of tracing
    ; --- ---------------
    ...
    ; 128 - Timings Report to AET file
    ...
    ; 1024 - Timings Report to tables
    ...
    TraceAE=1152
    ;------------------------------------------------------------------------

    Whatever other TraceAE flags you set, I would always recommend that you set 128 and 1024 so that batch timings are always emitted to both file and database.  The overhead of enabling them is negligible because they are managed in memory at run time and physically written once when the Application Engine program ends.

    NB: The settings in the configuration file can be overridden by command line options. If you specify the -TRACE parameter in the Process Scheduler definitions, remember to also set these flags.

    Batch timings are written to the AE Trace file at the end of an Application Engine program, and to the PS_BAT_TIMINGS PeopleTools tables at the successful end of an Application Engine program.
    It can be useful to have batch timings in the trace file of an Application Engine program that failed, because they are not written to the database. As your system runs, you will build up batch timings for all of your successful Application Engine processes (which will be most of them). This is a useful source of performance metrics.
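    For example, a quick query against PS_BAT_TIMINGS_LOG (only the delivered table, as described above) shows how much batch timing data has accumulated over the last week:

    SELECT l.process_name
    ,      COUNT(*) processes
    ,      MIN(l.begindttm) earliest
    ,      MAX(l.enddttm) latest
    FROM   ps_bat_timings_log l
    WHERE  l.enddttm > SYSDATE-7
    GROUP BY l.process_name
    ORDER BY processes DESC;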
    Compilations, Execution and ReUse
    In this example, the number of compilations is equal to the number of executions because ReUseStatement is not set.

                              PeopleSoft Application Engine Timings
                                  (All timings in seconds)

                                   C o m p i l e    E x e c u t e    F e t c h        Total
    SQL Statement                    Count     Time   Count     Time   Count     Time     Time
    ------------------------------ ------- -------- ------- -------- ------- -------- --------
    99XxxXxx.Step02.S                 8453      2.8    8453    685.6       0      0.0    688.4
    ...

    With ReUseStatement enabled, there is now only a single compilation, and most of the time is saved in execution rather than compilation.

                                   C o m p i l e    E x e c u t e    F e t c h        Total
    SQL Statement                    Count     Time   Count     Time   Count     Time     Time
    ------------------------------ ------- -------- ------- -------- ------- -------- --------
    99XxxXxx.Step02.S                    1      0.0    8453    342.3       0      0.0    342.3
    ...

    So we can use the batch timings to identify steps where ReUseStatement is not set because they have as many compilations as executions, and then we can profile the top statements.
    Profile Compilations
    This query produces a simple profile of batch timings for statements.
    • In sub-query x it extracts batch timings for statements with more than one compilation, in processes that ended in the last 7 days.
    • There is a long-standing bug in batch timings where negative timings can be returned when the clock that returns milliseconds recycles back to zero every 2^16 milliseconds. Sub-query y calculates an adjustment that is applied in sub-query z if the timing is negative.
    • Arbitrarily, I am only looking at statements with a total of more than 10000 compilations.
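    For example (hypothetical numbers): suppose a process ran for 120 wall-clock seconds, but its recorded time_elapsed is -45.5 seconds because the millisecond clock wrapped. Sub-query y calculates delta = GREATEST(0, 120-(-45.5)) = 165.5, and sub-query z corrects the timing to -45.5+165.5 = 120 seconds. The same delta is applied to any negative compile and execute times from the same process.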

    REM ReUseCand.sql
    REM (c)Go-Faster Consultancy Ltd. 2014
    COLUMN detail_id FORMAT a32
    COLUMN step_time FORMAT 999990 HEADING 'AE|Step|Secs'
    COLUMN compile_count HEADING 'AE|Compile|Count'
    COLUMN execute_count HEADING 'AE|Execute|Count'
    COLUMN processes HEADING 'Num|Process|Instances'
    COLUMN process_name HEADING 'Process|Name'
    SPOOL ReUseCand
    WITH x AS (
      SELECT l.process_instance, l.process_name
      ,      l.time_elapsed/1000 time_elapsed
      ,      l.enddttm-l.begindttm diffdttm
      ,      d.bat_program_name||'.'||d.detail_id detail_id
      ,      d.compile_count, d.compile_time/1000 compile_time
      ,      d.execute_time/1000 execute_time
      FROM   ps_bat_timings_dtl d
      ,      ps_bat_timings_log l
      WHERE  d.process_instance = l.process_instance
      AND    d.compile_count = d.execute_count
      AND    d.compile_count > 1
      AND    l.enddttm > SYSDATE-7
    ), y AS (
      SELECT x.*
      ,      GREATEST(0,60*(60*(24*EXTRACT(day FROM diffdttm)
                               +EXTRACT(hour FROM diffdttm))
                               +EXTRACT(minute FROM diffdttm))
                               +EXTRACT(second FROM diffdttm)-x.time_elapsed) delta
      FROM   x
    ), z AS (
      SELECT process_instance, process_name, detail_id
      ,      CASE WHEN time_elapsed < 0 THEN time_elapsed+delta
                  ELSE time_elapsed END time_elapsed
      ,      compile_count
      ,      CASE WHEN compile_time < 0 THEN compile_time+delta
                  ELSE compile_time END AS compile_time
      ,      CASE WHEN execute_time < 0 THEN execute_time+delta
                  ELSE execute_time END AS execute_time
      FROM   y
    ), a AS (
      SELECT process_name, detail_id
      ,      SUM(compile_time+execute_time) step_time
      ,      SUM(compile_count) compile_count
      ,      COUNT(DISTINCT process_instance) processes
      FROM   z
      GROUP BY process_name, detail_id
    )
    SELECT *
    FROM   a
    WHERE  compile_count >= 10000
    ORDER BY step_time DESC
    /
    SPOOL OFF

    So now I have a list of steps with lots of compilations. I know how long they took, but I don't know how much time I might save by enabling ReUseStatement. That will save some time in Application Engine, but it will also cut down database parse time. So now I need to determine the parse time from ASH data.

                                                          AE         AE        Num
    Process                                             Step    Compile    Process
    Name         DETAIL_ID                              Secs      Count  Instances
    ------------ -------------------------------- ------ ---------- ----------
    AP_PSTVCHR   AP_PSTCOMMON.ACCT_UPD.Step02.S    12001      40704         10
    AP_VCHRBLD   APVEDTMOVE.UPDQTY03.Step03.S       4313      49536         28
    FS_VATUPDFS  FS_VATUPDFS.Seq0-b.DJ300-2.S       4057     203704          3
    AP_VCHRBLD   APVEDTMOVE.UPDQTY03.Step02.S       2799      49536         28
    PC_BI_TO_PC  PC_BI_TO_PC.UPD_PRBI.UPDATE.S      1974      23132         10
    FS_VATUPDFS  FS_VATUPDFS.Seq0-a.X0001.D         1960      37368          3
    GL_JEDIT_0   FS_CEDT_ECFS.iTSELog.iTSELog.S     1628      13104        519
    AP_APY2015   AP_APY2015.V_CREATE.Step14.H       1192      11318         19
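    The essence of the ASH calculation, before it is embedded in the profile below, is simply to count samples that were in hard parse for Application Engine sessions, weighting each sample as 10 seconds. A minimal sketch (the '%.S' filter on step type is my assumption; the full query below additionally joins to the AWR snapshots and the process request table to bound each process instance):

    SELECT h.action detail_id
    ,      10*COUNT(DISTINCT h.sample_id) hard_parse_secs
    FROM   dba_hist_active_sess_history h
    WHERE  h.module LIKE 'PSAE.%' /*Application Engine sessions*/
    AND    h.action LIKE '%.S'    /*assumed: restrict to SQL steps*/
    AND    h.in_hard_parse = 'Y'
    AND    h.sample_time > SYSDATE-7
    GROUP BY h.action
    ORDER BY hard_parse_secs DESC;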

    This query is based on the previous one, but includes scalar queries on the ASH data for each step.
    • WARNING: This query can run for a long time because it has to scan ASH data for each entry in PS_BAT_TIMINGS_DTL.
    • This time, in sub-query x, I am looking at a rolling 7-day period up to the last hour, because more recent ASH data may not yet have been flushed to the repository.
    • In sub-query y there are two scalar queries that retrieve the DB time spent on hard parse, and all DB time, for each batch timing entry. These queries count 10 seconds for each distinct sample ID to estimate elapsed time rather than total DB time.
    • The query does not group by process name, because one step can be called from many Application Engine programs and I want to aggregate the total time across all of them.

    REM ReUseCandASH.sql
    REM (c)Go-Faster Consultancy Ltd. 2014
    COLUMN processes HEADING 'Num|Process|Instances'
    COLUMN process_name HEADING 'Process|Name'
    COLUMN detail_id FORMAT a32
    COLUMN step_time HEADING 'AE|Step|Secs' FORMAT 999990
    COLUMN compile_count HEADING 'AE|Compile|Count'
    COLUMN execute_count HEADING 'AE|Execute|Count'
    COLUMN hard_parse_secs HEADING 'Hard|Parse|Secs' FORMAT 99990
    COLUMN ash_secs HEADING 'DB|Time' FORMAT 99990
    SPOOL ReUseCandASH
    WITH x AS (
      SELECT l.process_instance, l.process_name
      ,      l.time_elapsed/1000 time_elapsed
      ,      l.begindttm, l.enddttm
      ,      l.enddttm-l.begindttm diffdttm
      ,      d.bat_program_name||'.'||d.detail_id detail_id
      ,      d.compile_count, d.compile_time/1000 compile_time
      ,      d.execute_time/1000 execute_time
      FROM   ps_bat_timings_dtl d
      ,      ps_bat_timings_log l
      WHERE  d.process_instance = l.process_instance
      AND    d.compile_count = d.execute_count
      AND    d.compile_count > 1
      AND    l.enddttm >= TRUNC(SYSDATE-7,'HH24')
      AND    l.enddttm <  TRUNC(SYSDATE,'HH24')
    ), y AS (
      SELECT x.*
      ,      GREATEST(0,60*(60*(24*EXTRACT(day FROM diffdttm)
                               +EXTRACT(hour FROM diffdttm))
                               +EXTRACT(minute FROM diffdttm))
                               +EXTRACT(second FROM diffdttm)-x.time_elapsed) delta
      FROM   x
    ), z AS (
      SELECT process_instance, process_name, detail_id
      ,      CASE WHEN time_elapsed < 0 THEN time_elapsed+delta
                  ELSE time_elapsed END AS time_elapsed
      ,      compile_count
      ,      CASE WHEN compile_time < 0 THEN compile_time+delta
                  ELSE compile_time END AS compile_time
      ,      CASE WHEN execute_time < 0 THEN execute_time+delta
                  ELSE execute_time END AS execute_time
      ,      (SELECT 10*COUNT(DISTINCT h.sample_id)
              FROM   psprcsque q
              ,      dba_hist_snapshot x
              ,      dba_hist_active_sess_history h
              WHERE  q.prcsinstance = y.process_instance
              AND    x.begin_interval_time <= y.enddttm
              AND    x.end_interval_time >= y.begindttm
              AND    h.sample_time BETWEEN y.begindttm AND y.enddttm
              AND    h.snap_id = x.snap_id
              AND    h.dbid = x.dbid
              AND    h.instance_number = x.instance_number
              AND    h.module = 'PSAE.'||y.process_name||'.'||q.sessionidnum
              AND    h.action = y.detail_id
              AND    h.in_hard_parse = 'Y'
             ) hard_parse_secs
      ,      (SELECT 10*COUNT(DISTINCT h.sample_id)
              FROM   psprcsque q
              ,      dba_hist_snapshot x
              ,      dba_hist_active_sess_history h
              WHERE  q.prcsinstance = y.process_instance
              AND    x.begin_interval_time <= y.enddttm
              AND    x.end_interval_time >= y.begindttm
              AND    h.sample_time BETWEEN y.begindttm AND y.enddttm
              AND    h.snap_id = x.snap_id
              AND    h.dbid = x.dbid
              AND    h.instance_number = x.instance_number
              AND    h.module = 'PSAE.'||y.process_name||'.'||q.sessionidnum
              AND    h.action = y.detail_id
             ) ash_secs
      FROM   y
    ), a AS (
      SELECT /*process_name ,*/ detail_id
      ,      SUM(compile_time+execute_time) step_time
      ,      SUM(compile_count) compile_count
      ,      COUNT(DISTINCT process_instance) processes
      ,      SUM(hard_parse_secs) hard_parse_secs
      ,      SUM(ash_secs) ash_secs
      FROM   z
      GROUP BY /*process_name,*/ detail_id
    )
    SELECT a.*
    FROM   a
    WHERE  compile_count >= 10000
    ORDER BY step_time DESC
    /
    SPOOL OFF

    Now we can also see how much time the database is spending on hard parse on each step, and it is this time that is likely to be saved by enabling ReUseStatement.
    However, before I can enable the ReUseStatement attribute, I must first manually inspect each step in Application Designer and determine whether doing so would break anything. The Comment column in this profile was added manually as I did that. Some statements I can change; for others, I have to accept the overhead.

                                         AE         AE        Num  Hard
                                       Step    Compile    Process Parse    DB
    DETAIL_ID                          Secs      Count  Instances  Secs  Time Comment
    -------------------------------- ------ ---------- ---------- ----- ----- -------------------------------------
    AP_PSTCOMMON.ACCT_UPD.Step02.S    12001      40704         10 11820 11920 Set ReUseStatement
    FS_CEDT_ECMB.4EditCDT.uValCDT.S    5531      10289        679   620  5870 Dynamic SQL, can't set ReUseStatement
    APVEDTMOVE.UPDQTY03.Step03.S       4306      49471         27  4020  4100 Set ReUseStatement
    FS_VATUPDFS.Seq0-b.DJ300-2.S       4057     203704          3  3150  3860 Dynamic SQL, can't set ReUseStatement
    FS_CEDT_ECFS.iTSELog.iTSELog.S     3332      19073        716  2130  3520 Dynamic SQL, can't set ReUseStatement
    APVEDTMOVE.UPDQTY03.Step02.S       2796      49471         27  2730  2820 Set ReUseStatement
    PC_BI_TO_PC.UPD_PRBI.UPDATE.S      1974      23132         10   230  1920 Set ReUseStatement
    FS_VATUPDFS.Seq0-a.X0001.D         1960      37368          3     0     0 Dynamic SQL, can't set ReUseStatement
    FS_CEDT_ECMB.4uAnchCT.uAnchCDT.S   1319      10289        679   510  1290 Dynamic SQL, can't set ReUseStatement
    AP_APY2015.V_CREATE.Step14.H       1169      11094         19     0     0 Set ReUseStatement
    GL_JETSE.GA100.CHKEDT.S            1121      15776        569   860   930 Dynamic SQL, can't set ReUseStatement
    FS_CEDT_ECMB.iTSELog.iTSELog.S      988      10289        679   450   990 Dynamic SQL, can't set ReUseStatement
    FS_CEDT_ECMB.uMarkVal.uMarkVal.S    711      10289        679    50   670 Dynamic SQL, can't set ReUseStatement
    FS_CEDT_ECMB.uMarkInv.uMarkInv.S    668      10289        679    40   790 Dynamic SQL, can't set ReUseStatement
    • Due to a bug in the instrumentation of Application Engine, the session's action attribute is not set for Do Select (type D) and Do When (type H) steps.  ASH data cannot therefore be matched for them.
    • More DB Time is reported for FS_CEDT_ECMB.uMarkInv.uMarkInv.S than is reported by batch timings.  This is a consequence of ASH sampling, where we count 10 seconds for each sample.
    Conclusion
    Setting ReUseStatement is very simple because it doesn't involve changing SQL, but there are lots of places where it could be set. This technique picks out the relatively few places where doing so could potentially have a significant effect.
      ©David Kurtz, Go-Faster Consultancy Ltd.