Feed aggregator
temporary tablespace
SUCCESSFUL_ROW_COUNT and ERROR_ROW_COUNT in APEX_AUTOMATION_LOG
Timestamp appends "000" when formatting.
Quarantined SQL Plans for PS/Queries
This follows on from my previous post, Management of Long Running PS/Queries Cancelled by Resource Manager.
From 19c, on Engineered Systems only (such as Exadata and Exadata Cloud Service), 'Oracle Database automatically quarantines the plans for SQL statements terminated by … the Resource Manager for exceeding resource limits.
The Resource Manager can set a maximum estimated execution time for a SQL statement, for example, 20 minutes. If a statement execution exceeds this limit, then the Resource Manager terminates the statement. However, the statement may run repeatedly before being terminated, wasting 20 minutes of resources each time it is executed.
Starting in Oracle Database 19c, if a statement exceeds the specified resource limit, then the Resource Manager terminates the execution and “quarantines” the plan. To quarantine the plan means to put it on a blacklist of plans that the database will not execute. Note that the plan is quarantined, not the statement itself.'
[Oracle SQL Tuning Guide: 4.7 About Quarantined SQL Plans]
- See also Tim Hall's article: SQL Quarantine in Oracle Database 19c
When an attempt is made to execute a quarantined execution plan, an error is raised: ORA-56955: quarantined plan used.
Oracle does not log timed-out or quarantined queries. On V$SQL and V$SQLSTAT, AVOIDED_EXECUTIONS records the number of times a SQL query has been prevented from running. However, this statistic will not stay in the library cache long on a PeopleSoft system, due to the continual parsing of dynamically generated SQL statements. As of Oracle 19.20, it is not recorded in AWR on DBA_HIST_SQLSTAT.
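While a statement is still in the library cache, a quick check is possible. This is only a sketch (not taken from the original post):

select sql_id, plan_hash_value, avoided_executions, sql_text
from   v$sqlstat
where  avoided_executions > 0
order by avoided_executions desc;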
If an error condition occurs during a PSQUERY process run on the process scheduler, the process terminates with an error. The SQL statement and the error message are recorded in the Application Engine message log. As demonstrated in the previous blog, we can detect such failures by inspecting the message log of the PSQUERY process that did not finish successfully (i.e. it has an end time, but the run status does not indicate success).
Matching Quarantine Directives to Cancelled Queries
Quarantine directives are visible via DBA_SQL_QUARANTINE, including SQL text and execution plan hash value.
It would be useful to know which quarantine directive relates to which query. However, it is not easy to match the SQL in the PeopleSoft message log entries with that in the quarantine entries. The SQL text in the message log can contain multiple consecutive spaces; these are stripped out of the normalised SQL statement visible in the DBA_SQL_QUARANTINE view.
The timestamps of creation and last execution of the quarantine directive are stored on it, but matching these to when the query was running can produce false positives.
Also, you cannot tell which quarantine directive was created by which consumer group. The maximum CPU timeout is recorded on DBA_SQL_QUARANTINE. In my example, it is only possible to distinguish the originating consumer group because the two consumer groups happen to have different timeouts.
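For reference, the directives themselves can be listed directly from the view. This is a minimal sketch using the standard columns (name, signature, plan hash value, CPU time limit, timestamps and status):

select name, signature, plan_hash_value, cpu_time, created, last_executed, enabled
from   dba_sql_quarantine
order by created desc;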
A method that matches exactly, but only works for some of the queries, is to:
- Obtain ASH data for queries terminated by the resource manager. It can be matched by timestamp, MODULE, and ACTION (provided that EnableAEMonitoring is enabled).
- Profile the ASH data to find the statement that took the longest during each PSQUERY process; that is almost certain to be the SQL query, thus obtaining the SQL_ID, SQL plan hash value, and consumer group ID. It is also possible to determine the total database time for the query and the database time spent on CPU.
- The consumer group name can then be obtained from DBA_HIST_RSRC_CONSUMER_GROUP.
- Obtain the SQL text for the long-running query. It must also have been captured by an AWR snapshot. This often happens because it was a long-running SQL statement, but it is not guaranteed.
- The signature for the SQL statement (not the force-matching signature) can be derived using the SQLTEXT_TO_SIGNATURE function in DBMS_SQLTUNE. This can be matched to the signature recorded in DBA_SQL_QUARANTINE (see the sketch after this list).
- You can have multiple quarantine directives for the same signature (i.e. the same SQL statement), each with a different plan hash value.
- NB: The plan hash value on DBA_SQL_QUARANTINE is the adaptive plan hash value (with all of its possible plan alternatives), and therefore it matches SQL_FULL_PLAN_HASH_VALUE in the ASH data, and not SQL_PLAN_HASH_VALUE (the plan that actually executed).
- When a query executes until it is timed out, producing ORA-00040, you can usually find the SQL statement in the AWR repository and so generate the signature to match the quarantine record exactly.
- When an attempt is made to run a quarantined statement and execution plan, you usually cannot find the SQL statement because it hasn't run for long enough to produce an ASH sample. Even when it has, you also have to rely on the statement having been captured previously by AWR. Those conditions only come together occasionally.
- We can see the quarantine directives that were created when the resource manager cancelled a query, raising error ORA-00040: active time limit exceeded - call aborted.
- However, where quarantine directives have prevented SQL from executing, raising error ORA-56955: quarantined plan used, the ASH data from the event that originally created the directive has since been purged, so we cannot use it to match directives.
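The sketch below illustrates the signature match referred to in the list above. It assumes the SQL text of the cancelled query was captured by AWR; the SQL_ID is taken from the sample report that follows and is only an example.

select q.name, q.plan_hash_value, q.signature, q.created, q.last_executed
from   dba_sql_quarantine q
where  q.signature = (select dbms_sqltune.sqltext_to_signature(t.sql_text)
                      from   dba_hist_sqltext t
                      where  t.sql_id = '5qrbrf775whky'
                      and    rownum <= 1);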
PS/Queries terminated by Resource Manager/quarantined Execution Plan
Public/ ASH
Private Ru Oracle Exec ASH CPU Message Log
P.I. DBNAME OPRID RUNCNTLID Query QRYNAME St Err. # Secs Secs Secs Date/Time Stamp SQL_ID
--------- ------- -------- ------------------------------ ------- ------------------------------ -- --------- ------ ------ ------ ---------------------------- -------------
SQL Plan Full Plan CPU
Hash Value Hash Value Consumer Group Name SIGNATURE Quarantine Name Time Quarantine Created Quarantine Last Executed
----------- ----------- ------------------------- --------------------- ------------------------------------ ----- ---------------------------- ----------------------------
…
31452465 FSPROD USR0001 GBR_JRNL_LN_DETAIL_ACCT Public GBR_JRNL_LN_DETAIL_ACCT 10 ORA-56955 36 10 10 20-FEB-25 06.28.03.578218 PM 0wm9g6xkys12h
4009529842 653370716 PSQUERY_BATCH_GROUP 5584654620166156419
31451318 FSPROD USR0002 GBR_JRNL_LN_DETAIL_ACCT Public GBR_JRNL_LN_DETAIL_ACCT 10 ORA-56955 36 20-FEB-25 02.36.38.590841 PM
31451292 FSPROD USR0002 GBR_JRNL_LN_DETAIL_ACCT Public GBR_JRNL_LN_DETAIL_ACCT 10 ORA-56955 36 20-FEB-25 02.30.51.777351 PM
31438602 FSPROD USR0003 1 Private DK_GBR_GL_DETAIL_NEW 10 ORA-00040 28316 28275 14203 18-FEB-25 11.39.19.502412 PM 5qrbrf775whky
3446094907 3491308607 PSQUERY_BATCH_GROUP 16266909742923016361 SQL_QUARANTINE_f3gxc76u48u59d019243f 14400 18-FEB-25 11.49.33.091081 PM
31437925 FSPROD USR0004 16 Private TB_TEST2 10 ORA-00040 17684 17654 17541 18-FEB-25 06.09.14.060615 PM 06xqrgj18wp05
4256462904 2062199471 PSQUERY_BATCH_GROUP 6341132966559464532 SQL_QUARANTINE_5h01uuscnrg2n7aeaaaaf 14400 18-FEB-25 06.17.20.679769 PM
31437907 FSPROD USR0004 16 Private TB_TEST2 10 ORA-00040 17694 17695 17592 18-FEB-25 06.04.05.942470 PM 4yurn75y2p0t2
3232504707 121066138 PSQUERY_BATCH_GROUP 4966087806133732884 SQL_QUARANTINE_49usqjjc001hn0737529a 14400 18-FEB-25 06.17.24.869185 PM
…
Virtual Nuisance
Here’s a note that’s been sitting as a draft for the last 7 years – finally dusted down, retested, and published. Following my new publishing paradigm, I’m going to tell you the conclusion at the start, then show the working that demonstrates the point.
Summary
When cloning a table, or a subset of a table, the simplest method in the “good old days” was to execute a call to: “create table cloned_table as select * from source_table where …” (CTAS), then create any indexes that you needed on the clone. Unfortunately, if you’ve created any (visible) virtual columns, the CTAS will generate real columns holding the calculated values.
There are various workarounds to this, of course, though not all viable for all systems.
- The “obvious and sensible” option is (used to be) to identify just the real columns in the select list. But then you’d have to remember to check and add all the virtual bits afterwards (including the invisible bits).
- A “lazy” option that I’ve seen suggested a few times is to drop the virtual columns, clone the table, then recreate the virtual columns – a slightly better bet might be to make them invisible before cloning, then make them visible (on the source) afterwards. Either way, though, this isn’t a viable strategy for a production/shared developer table, and there are cases where it simply can’t be used and, again, you have to recreate the virtual columns on the clone.
A fairly nice strategy appeared in the 12.2 feature “create table XX for exchange …”, which doesn’t have to be aimed at a partitioned table. This will create the virtual columns, including the invisible ones. You still have to insert the data, however, which means you still have to create a select list that covers all the “real” columns in the right order; and you still have to recreate the indexes and some constraints.
Discussion and demonstration
Quoting from the 23c SQL Language Reference manual for the Create Table statement, you will find under the subtitle “for exchange with table” the statement:
This clause lets you create a table that matches the structure of an existing partitioned table.
This is true – but not the whole truth. It will do what it says, but it can also be used to create a table that matches the structure of an existing non-partitioned table. This means that when you want to clone a table (or subset of a table) that includes virtual columns and hidden columns, it’s a little easier to get it right. You still have to get the data into the table, though, and recreate indexes and some constraints.
The following script shows two attempts at cloning – first using a simple “create as select * ..”, the second using a “create for exchange” followed by steps to populate the copy – first the wrong way then the right way.
We start by creating a source table (a subset of the view all_objects) then add a couple of indexes, a primary key (which also means not null) constraint, and a couple of virtual columns (one invisible).
rem
rem Script: virtual_nuisance.sql
rem Author: Jonathan Lewis
rem Dated: Feb 2018
rem Purpose:
rem
rem Last tested
rem 23.4.0.0 (23ai)
rem 19.11.0.0
rem 12.2.0.1
rem 11.2.0.4
prompt ====================
prompt Basic Table Creation
prompt ====================
create table source_table nologging
as
select *
from sys.all_objects
where rownum <= 10000
;
-- alter table source_table modify object_id not null;
alter table source_table add constraint st_pk primary key(object_id);
create unique index st_u1 on source_table(object_type, object_id);
create index st_i1 on source_table(object_name);
prompt ===================
prompt Add virtual columns
prompt ===================
alter table source_table
add (
new_name_visible
generated always as ( initcap(object_name) ) virtual,
new_name_invisible invisible
generated always as ( lower(object_name) ) virtual
)
;
Now we create two copies of the table: one with a simple “create as select *”, the other with a “create for exchange”, which means we then have to populate it. We try this in two ways – first using “insert select *”, which fails, leading us to insert a list of named columns:
prompt ======================================================
prompt Clone table - which instantiates the visible virtual column
prompt ======================================================
create table clone_table as select * from source_table;
prompt ====================================
prompt Create for exchange (12.2+)
prompt ====================================
create table exchange_table for exchange with table source_table;
insert into exchange_table select * from source_table;
-- *
-- ERROR at line 1:
-- ORA-54013: INSERT operation disallowed on virtual columns
-- Help: https://docs.oracle.com/error-help/db/ora-54013/
--
insert into exchange_table (
owner, object_name, subobject_name, object_id, data_object_id, object_type,
created, last_ddl_time, timestamp, status, temporary, generated,
secondary, namespace, edition_name, sharing, editionable, oracle_maintained,
application, default_collation, duplicated, sharded, created_appid,
created_vsnid, modified_appid, modified_vsnid
)
select
owner, object_name, subobject_name, object_id, data_object_id, object_type,
created, last_ddl_time, timestamp, status, temporary, generated,
secondary, namespace, edition_name, sharing, editionable, oracle_maintained,
application, default_collation, duplicated, sharded, created_appid,
created_vsnid, modified_appid, modified_vsnid
from
source_table
/
-- 10000 rows created.
As a check we can select a few rows in the exchange table to see that the virtual columns exist and hold the expected data. (Remember that the invisible columns can be selected by name, even though they are invisible in a describe or a “select *”.)
column object_name format a24
column new_name_visible format a24
column new_name_invisible format a24
select
object_name, object_type, new_name_visible, new_name_invisible
from
exchange_table
where
rownum <= 5
/
OBJECT_NAME OBJECT_TYPE NEW_NAME_VISIBLE NEW_NAME_INVISIBLE
------------------------ ----------------------- ------------------------ ------------------------
ALL_OBJECT_TABLES VIEW All_Object_Tables all_object_tables
ALL_OBJECT_TABLES SYNONYM All_Object_Tables all_object_tables
ALL_ALL_TABLES VIEW All_All_Tables all_all_tables
ALL_ALL_TABLES SYNONYM All_All_Tables all_all_tables
DBA_TABLES VIEW Dba_Tables dba_tables
5 rows selected.
Now check which of the three interesting columns exist in the three tables, and what their state is:
break on table_name skip 1
select
table_name, column_name,
virtual_column "Virtual",
hidden_column "Hidden"
from user_tab_cols
where table_name in ( 'SOURCE_TABLE', 'CLONE_TABLE', 'EXCHANGE_TABLE')
and column_name in ('OBJECT_NAME', 'NEW_NAME_VISIBLE','NEW_NAME_INVISIBLE')
order by
table_name desc, column_name desc
/
TABLE_NAME COLUMN_NAME Vir Hid
------------------------- -------------------- --- ---
SOURCE_TABLE OBJECT_NAME NO NO
NEW_NAME_VISIBLE YES NO
NEW_NAME_INVISIBLE YES YES
EXCHANGE_TABLE OBJECT_NAME NO NO
NEW_NAME_VISIBLE YES NO
NEW_NAME_INVISIBLE YES YES
CLONE_TABLE OBJECT_NAME NO NO
NEW_NAME_VISIBLE NO NO
8 rows selected.
Critically, the clone (CTAS) table doesn’t have the invisible virtual column, and the visible virtual column is no longer virtual. Both the virtual columns have appeared in the exchange table, both virtual, and the invisible column is still invisible (hidden).
But what of the indexes (including the implicit primary key index):
select
table_name, index_name, uniqueness
from
user_indexes
where table_name in ( 'SOURCE_TABLE', 'CLONE_TABLE', 'EXCHANGE_TABLE')
order by
table_name desc, index_name
;
TABLE_NAME INDEX_NAME UNIQUENES
------------------------- -------------------- ---------
SOURCE_TABLE ST_I1 NONUNIQUE
ST_PK UNIQUE
ST_U1 UNIQUE
3 rows selected.
Neither of the copies has any indexes – which is not a surprise for the clone (CTAS) table but is a little disappointing for a table we have (nominally) created in anticipation of an exchange with partition. But that is exactly what the manual would have told us would happen – if we’d looked.
There is one other detail to be careful of when cloning tables – will columns that report as NOT NULL in the source table be declared NOT NULL in the clone/exchange table? The answer depends on why they show as NOT NULL.
In my example the object_id in the source table shows up as NOT NULL when described, but doesn’t in either of the copies because it is a side-effect of the primary key constraint and that constraint has not been recreated in the copying process.
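For the record, a quick dictionary check (a sketch that is not part of the original script) shows whether the NOT NULL declaration has survived in each copy:

select table_name, column_name, nullable
from   user_tab_columns
where  column_name = 'OBJECT_ID'
and    table_name in ('SOURCE_TABLE', 'CLONE_TABLE', 'EXCHANGE_TABLE')
order by table_name desc
/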
Find the Dependencies
ORA-06502 exception_dump tracing does not work??
Multitenant Architecture and Application Container: Drop failed install
DDL Event Trigger and alter package compile
Cannot create index on virtual column of type RAW or RAW(30)
Backing Up your Linux Desktop to Google Drive using Deja Dup on Ubuntu
Let’s face it, ensuring that you have regular backups of your home computer to a remote location can seem like a bit of a chore…right up to the time when you’ve managed to lose your data and need a fast and reliable way to get it back.
If you’re using Ubuntu, this task is made much easier by Deja Dup Backups.
Additionally, the cloud storage that comes free with mail accounts from most of the major tech companies can offer a reliable remote home for your backups until you need them. Handy when you don’t have a spare large-capacity thumb drive kicking around.
Using Ubuntu 24.04.02 LTS, what we’re going to look at here is :
- Making your Google Drive accessible from the Ubuntu Gnome Desktop
- Configuring Deja Dup to backup files to the Google Drive
- Testing the backup
- Scheduling backups to run regularly
For this purpose we’ll be using Ubuntu’s standard backup tool – Deja Dup ( listed as “Backups” if you need to search for it in the Gnome Application Menu).
We’ll also be making use of a Gmail account other than that used as the main account on an Android phone.
You may be relieved to know that we’ll be able to accomplish all of the above without opening a Terminal window ( well, not much anyway).
I’ve chosen to use a gmail account in this example because :
- Google offers 15GB of free space compared to the 5GB you get with Microsoft or Apple
You don’t have to use the Google account you use for your Android phone, so you don’t have to share the space with the phone data backups.
First you need to open Settings – either from the Gnome Application Menu, or from the Panel at the top of the screen – and select Online Accounts :
Now sign in with your Google account…


At this point you’ll be asked to specify what Gnome is allowed to access. In this case I’ve enabled everything :


If you now open Files on your computer, you should see your Google Drive listed :

In this case, I can also see that I have the full 15GB available :

Now that we have a remote location to back up to, it’s time to figure out what to back up and how often…
Choosing what to Backup
OK, we’ve got some space, but it’s not unlimited so we want to take some care to ensure we backup only files we really need.
By default, the Backup tool will include everything under the current user’s $HOME apart from the Rubbish Bin and the Downloads folder :

Lurking under your $HOME are a number of hidden folders. In the main, these contain useful stuff such as application configuration ( e.g. web browser bookmarks etc). However, there may also be stuff that you really don’t need to keep hold of.
You remember what I said about not having to open a Terminal window ? Well, if you really want to know how big each of these directories is, you can just right-click each one in Files and then look at its properties. Alternatively, if you’re happy to open a Terminal Window, you can run :
du --max-depth=1 --human-readable $HOME |sort -hr
This will output the total size of $HOME followed by the total size of each immediate child directory in descending order :

In this case, I can see that the whole of my home is a mere 80M so I’m not going to worry about excluding anything else from the backup for the moment.
One final point to consider. By default, Deja Dup encrypts backup files using GPG (Gnu Privacy Guard), which will compress files as well as encrypting them. This means that the actual space required for the backup may be considerably less than the current size on disk.
Before we start, we’re going to create a test file so that we can make sure that our backup is working once we’ve configured it.
I’ve created a file in $HOME/Documents...

…which contains…

Now look for “Backups” in the Gnome Application Menu :

Open the tool and click on Create Your First Backup :

Next, we need to select which folders to backup. As mentioned above, Deja Dup uses a reasonably sensible default :

You can add folders to both the Folders to Back Up and Folders to Ignore lists by clicking on the appropriate “+” button and entering either an absolute path, or a relative path from your $HOME.
Next, we need to specify the destination for the backup.
Looks like Deja Dup has been clever enough to detect our mapped Google Drive :

The destination folder name defaults to the name of the current machine.
When the backup runs, the folder will be created automatically if it doesn’t already exist.
When we click the Forward button, Deja Dup may well ask for the installation of additional packages :

…before asking for access to your Google account :

The next screen asks for an Encryption password for your backup. A couple of important points to note here are :
- Password protecting the backup is an outstandingly good idea. Whilst this screen does allow you to turn this off, I’d recommend against it – especially if the backup is being stored on a remote server.
- Whilst you can ask Deja Dup to remember the password so that you won’t need it if you need to restore files from a backup on the current system, the password is essential if you want to restore the backed up files on another system…or the current system after some major disaster renders your existing settings unavailable. MAKE SURE YOU REMEMBER IT.
When we click Forward, we should now be running our first backup :

Once the backup completes, we’ll have the option to schedule them at regular intervals. First though, let’s take a look at the files that have been created :

Let’s make sure that we can restore files from this backup.
Click on the Restore tab in Deja Dup…

… and you should be able to see all of the files in our backup, including the test file we created earlier :

If we navigate to the Documents folder we can select files to restore :

Click on Restore and select the location to restore to ( in this case, we’ll put it directly into $HOME ) :

Again, Click Restore and you’ll get a confirmation that your file is back :

…which we can confirm in Files :


Deja Dup can perform backups automatically at a set interval.
In the Overview tab, select Back Up Automatically.
If you then open the “hamburger” menu and select Preferences, you get additional options, namely :
- Automated Backup Frequency
- Keep Backups

Once you’ve configured them, these backups will take place automatically on the schedule you’ve specified.
Guide on Taking Social Security: 62 vs. 67 vs. 70
https://www.schwab.com/learn/story/guide-on-taking-social-security
https://www.schwab.com/learn/story/can-you-afford-to-retire-early
House is paid off and property tax is $2,100.
Please perform research for your situation as there is no one size fits all.
Building a Real-Time Weather App with Streamlit and Open-Meteo
I recently embarked on a project to build a real-time weather application, and I wanted to share my experience using Streamlit and Open-Meteo. The goal was to create a dynamic web app that provides users with up-to-date weather information and 10-day weather forecasts, all while leveraging the convenience of cloud-based development.
Streamlit: Rapid Web App Development in the Browser
One of the most compelling aspects of Streamlit is its ability to facilitate rapid web application development directly within the browser. For this project, I utilized GitHub Codespaces, which provided a seamless development environment. This eliminated the need for complex local setups and allowed me to focus solely on coding.
Key Advantages of Using GitHub Codespaces:
- Browser-Based Workflow: All development activities were performed within a web browser, streamlining the process.
- Dependency Management: Installing necessary Python packages was straightforward using pip install.
- Version Control: Integrating with Git enabled efficient version control with git commit and push commands.
Data Acquisition with Open-Meteo
To obtain accurate and current weather data, I employed the Open-Meteo API. This API offers a comprehensive set of weather parameters, allowing for detailed data visualization.
Visualizing Weather Data with Streamlit's Graph Capabilities
Streamlit's built-in graph visualization tools proved to be highly effective. Creating dynamic charts to represent weather data was quick and efficient. The clarity and responsiveness of these visualizations significantly enhanced the user experience.
Technical Implementation:
The application was developed using Python, leveraging Streamlit for the front-end and the requests library to interact with the Open-Meteo API. The workflow involved the following steps (a minimal sketch follows the list):
- Fetching weather data from the Open-Meteo API.
- Processing the data to extract relevant information.
- Utilizing Streamlit's charting functions to create graphical representations.
- Deploying the application via the streamlit sharing platform.
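The sketch below shows the shape of that workflow. It is not the deployed app's actual code; the coordinates, chart choice and variable names are illustrative.

import requests
import pandas as pd
import streamlit as st

st.title("Real-Time Weather")

# Fetch a 10-day forecast from the Open-Meteo API (no API key required).
# The latitude/longitude are illustrative (London).
resp = requests.get(
    "https://api.open-meteo.com/v1/forecast",
    params={
        "latitude": 51.5,
        "longitude": -0.13,
        "current_weather": "true",
        "daily": "temperature_2m_max,temperature_2m_min",
        "forecast_days": 10,
        "timezone": "auto",
    },
    timeout=10,
)
data = resp.json()

# Show current conditions, then chart the daily forecast.
st.metric("Current temperature (°C)", data["current_weather"]["temperature"])
daily = pd.DataFrame(data["daily"]).set_index("time")
st.line_chart(daily[["temperature_2m_max", "temperature_2m_min"]])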
Observations:
- Streamlit's simplicity and ease of use allowed for rapid prototyping and development.
- GitHub Codespaces provided a consistent and reliable development environment.
- Open-Meteo API provided accurate data.
- Streamlit sharing made deployment very easy.
Conclusion:
This project demonstrated the power of Streamlit and Open-Meteo for building data-driven web applications. The ability to develop entirely within a browser, combined with powerful visualisation tools, made the development process efficient and enjoyable.
You can view the final app here: https://data-exchange.streamlit.app/
Oracle DDL: Duplicate not null constraints
How to use Union of two select queries for a database export?
Delphix Data Virtualization and Masking
After successfully completing several customer projects involving Delphix products, Data Virtualization and Data Masking, I decided to set up a complete infrastructure internally to test the combination of both products.
Delphix Data Virtualization creates only a virtual copy of your database blocks instead of a full physical copy. For example, this allows a new, large database to be made available in just a few seconds.
Delphix Data Masking protects sensitive data by replacing it with realistic but non-sensitive values, ensuring compliance with privacy regulations while maintaining data usability for development, testing, and analytics. For reference, dbi services has developed Swiss masking rules that customers can apply to their specific needs.
Setup
As always, setting up the prerequisites took some time, but once the environment was built, it was truly rewarding to demonstrate the combined power of these tools to my colleagues. It’s a real pleasure, like a game, building database environments in a flash, where the data is directly masked.
This allows you to rapidly provision databases to your internal or external developers, without taking any risk of a data phishing attack!
Watch this video! It’s only a minute long, and you’ll see how easy it is to create a new masked Oracle pluggable database, VPDB2_masked, on another server. (These tasks can also be fully automated in a pipeline using an API.)
Relevant outcomes
The Delphix pricing model is based on the size of your databases, so you can apply the same Virtualization and Masking capabilities, at least for PostgreSQL, SQL Server, and Sybase, at no additional cost. Are you interested in making your database development environments fully secure? Don’t wait: dbi services has your solution!
For those who are interested in more details
1. Table data from the source database PDB1

2. Table data in the newly created and masked virtual database VPDB2_Masked

3. Output of the masking job

Blog that presents Data Masking in detail
Customer success story
Webinar that presents Data Virtualization
The article Delphix Data Virtualization and Masking first appeared on the dbi Blog.
Management of Long Running PS/Queries Cancelled by Resource Manager CPU Limit
I have written previously about using the Oracle database resource manager to prioritise the allocation of CPU to different processes in a PeopleSoft system. I proposed a sample resource plan that can be used as a starting point to build a resource plan that meets a system's specific objectives and requirements.
- see PSFT_PLAN: A Sample Oracle Database Resource Manager Plan for PeopleSoft
- The script to implement the sample PSFT_PLAN resource plan is available from GitHub.
This post looks at:
- How to configure the resource manager to cancel long-running queries,
- What happens when it does, and what PeopleSoft users experience,
- How system administrators can monitor such queries,
- What action they could take.
- PSQUERY_BATCH_GROUP
- Applies to scheduled PSQUERY Application Engine Programs
- Limited to 4 hours on CPU (or estimated at >= 4 hours)
- PSQUERY_ONLINE
- Applies to queries run online via the PeopleSoft Internet Architecture (PIA).
- There is no resource manager limit for this consumer group.
- The PIA has a session timeout (default 20 minutes).
- The ICQuery Tuxedo service that runs the queries also has a timeout (default 20 minutes).
- When the resource manager cancels a SQL call, it simply raises an Oracle error that appears in a message box in the PIA without any further explanation. I think it is better to let the PIA timeouts handle online queries in a more controlled fashion.
- Users should generally be encouraged to schedule queries that will take more than a few minutes to run on the process scheduler.
- Resist the temptation to increase either the PIA or ICQuery service timeout from the delivered setting of 20 minutes.
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
'PSFT_PLAN', 'PSQUERY_BATCH_GROUP'
,mgmt_p6 => 1
,switch_group => 'CANCEL_SQL'
,switch_time => 14400
,switch_estimate => TRUE
,switch_for_call => TRUE
);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
'PSFT_PLAN', 'PSQUERY_ONLINE_GROUP'
,mgmt_p6 => 90
);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
'PSFT_PLAN', 'LOW_LIMITED_GROUP'
,mgmt_p8 => 1
,switch_group => 'CANCEL_SQL'
,switch_time => 7200
,switch_elapsed_time => 7200
,switch_estimate => TRUE
,switch_for_call => TRUE
);
- SWITCH_GROUP specifies the consumer group to which the session is switched when a switch condition is met. If the switch group is CANCEL_SQL, the current call is cancelled, raising error ORA-00040.
- SWITCH_TIME specifies the number of seconds on CPU (not elapsed time).
- If SWITCH_ESTIMATE is true, the resource manager also switches group if the estimated run time is greater than the switch time.
- SWITCH_FOR_CALL is set to true so that if the consumer group is switched, it is then restored to the original consumer group at the end of the top call. Thus a persistent session is not permanently switched. This is important if switching an application engine server (PSAESRV) session.
- The query may be cancelled immediately because the estimated execution time is greater than the limit.
- Otherwise, it is quite likely to run for an elapsed time that is greater than the CPU time limit. Some time will be consumed in the client process, during which the database will be idle, waiting for the next fetch request from the client.
- Some of the database time may not be on CPU because the session may be doing something else, such as physical IO (in which case it will report being on an event other than NULL).
- The database session may be held back by the resource manager allocating CPU to higher-priority processes, in which case the session will again not be on CPU, and will report being on event resmgr: cpu quantum (a sketch of how this appears in ASH follows this list).
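This throttling can be seen in ASH. The following is only a sketch; the module filter assumes that the PeopleSoft instrumentation (EnableAEMonitoring) sets MODULE to the Application Engine program name.

-- Approximate time recently spent by PSQUERY sessions waiting for the
-- resource manager (each ASH sample is roughly one second).
select consumer_group_id, count(*) ash_secs
from   v$active_session_history
where  event = 'resmgr: cpu quantum'
and    module like 'PSQUERY%'
and    sample_time > sysdate - 1/24
group by consumer_group_id;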
For this analysis, I have made some alterations to the message log query (see message_log_checker-psquery.sql).
- This query is restricted to messages generated by PSQUERY processes that did not run to success (not run status 9).
- PeopleSoft messages are typically defined with up to 9 substitution variables, but long SQL statements can have many more entries in PS_MESSAGE_LOGPARM. So the PL/SQL function in this query simply appends any additional log parameter rows beyond the 9 substitution variables to the end of the generated string.
- Once the message has been generated, we can look for one of the error messages associated with the resource manager terminating a query:
- ORA-00040: active time limit exceeded - call aborted
- ORA-56955: quarantined plan used
It is necessary to filter by message number because even in PS/Query users can write invalid SQL that produces other error messages. However, all this text processing for each row retrieved makes the query quite slow.
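A simplified sketch of the idea is shown below. It is not the full message_log_checker-psquery.sql script (whose output follows), which joins to the process request table and reconstructs the message text; this version simply searches the raw log parameters for either error string, and the PROGRAM_NAME filter is an assumption.

select l.process_instance, l.dttm_stamp_sec, l.message_set_nbr, l.message_nbr
from   ps_message_log l
where  l.program_name = 'PSQUERY'
and    exists (select 'x'
               from   ps_message_logparm p
               where  p.process_instance = l.process_instance
               and    p.message_seq      = l.message_seq
               and   (p.message_parm like '%ORA-00040%' or
                      p.message_parm like '%ORA-56955%'))
order by l.dttm_stamp_sec;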
Public/
Private Ru Exec Msg Msg Msg
P.I. DBNAME OPRID RUNCNTLID Query QRYNAME St Secs DTTM_STAMP_SEC Seq Set Nbr
--------- ------- -------- ------------------------------ ------- ------------------------------ -- ------ ---------------------------- --- --- ----
MSG
----------------------------------------------------------------------------------------------------------------------------------------------------
12395311 FSPROD USR001 42 Private MY_TEST2 10 20772 10-FEB-25 04.41.47.384694 PM 1 65 30
File: C:\PT860P13B_2403250500-retail\peopletools\src\pssys\qpm.cppSQL error. Stmt #: 8526 Error Position: 189 Return: 40 - ORA-00040: active time
limit exceeded - call abortedFailed SQL stmt: SELECT A.LEDGER, A.FISCAL_YEAR, A.BUSINESS_UNIT, …
10-FEB-25 04.41.47.421800 PM 2 50 380
Error in running query because of SQL Error, Code=40, Message=ORA-00040: active time limit exceeded - call aborted
…
 12319513 FSPROD USR002 Transactions Public GBR_JRNL_LINE_DTL_ACCT 10 25 13-FEB-25 11.13.35.746644 PM 1 65 30
File: C:\PT860P13B_2403250500-retail\peopletools\src\pssys\qpm.cppSQL error. Stmt #: 8526 Error Position: 2783 Return: -8581 - ORA-56955: quarantined plan usedFailed SQL stmt: SELECT A.FISCAL_YEAR, A.ACCOUNTING_PERIOD, A.JRNL_HDR_STATUS, B.LED …
13-FEB-25 11.13.35.814112 PM 2 50 380
Error in running query because of SQL Error, Code=-8581, Message=ORA-56955: quarantined plan used
The final stage is to close the feedback loop and go back to the users who are producing the queries, find out what they are trying to do, and why the queries are running for such a long time.