Feed aggregator
Logging Run Controls and Bind Variables for Scheduled PS/Queries
This blog proposes additional logging for scheduled PS/Queries so that long-running queries can be reconstructed and analysed.
Previous blog posts have discussed limiting PS/Query runtime with the resource manager (see Management of Long Running PS/Queries Cancelled by Resource Manager CPU Limit). From 19c, on Engineered Systems only, the 'Oracle Database automatically quarantines the plans for SQL statements terminated by … the Resource Manager for exceeding resource limits'. SQL Quarantine is enabled by default in Oracle 19c on Exadata (unless patch 30104721 is applied that backports the new 23c parameters, see Oracle Doc ID 2635030.1: 19c New Feature SQL Quarantine - How To Stop Automatic SQL Quarantine).
What is the Problem?
SQL Quarantine prevents a query from executing. Therefore, AWR will not capture the execution plan. AWR will also purge execution plans where an execution has not been captured within the AWR retention period. The original long-running query execution that was quarantined, if captured by AWR, will be aged out because it will not execute again.
If we want to investigate PS/Queries that produced execution plans that exceeded the runtime limit and were then quarantined, we need to reproduce the execution plan, either with the EXPLAIN PLAN FOR command or by executing the query in a session where the limited resource manager consumer group does not apply.
However, PS/Queries with bind variables present a challenge. A PS/Query run with different bind variables can produce different execution plans. One execution plan might be quarantined and so never complete, while another may complete within an acceptable time.
In AWR, a plan is only captured once for each statement. Therefore, it is possible to find one set of bind variables for each plan, although there may be many sets of bind variables that all produce the same execution plan. However, we cannot obtain Oracle bind variables for quarantined execution plans that did not execute. To regenerate their execution plans, we need another way to obtain their bind variables.
This problem also occurs more generally where the Diagnostics Pack is not available: without additional logging or tracing, it is not possible to reconstruct long-running queries.
Solution
- PS_QUERY_RUN_CNTRL: Scheduled Query Run Control. This record identifies the query executed. Rows in this table will be copied to PS_QRYRUN_CTL_HST.
- PS_QUERY_RUN_PARM: Scheduled Query Run Parameters. This record holds the bind variables and the values passed to the query. The table contains a row for each bind variable for each execution. Rows in this table will be copied to PS_QRYRUN_PARM_HST.
Two database triggers manage the history tables:
- A database trigger that fires when the run status of the request is updated to '7' (processing). It copies rows for the current run control into two corresponding history tables. Thus, we will have a log of every bind variable for every scheduled query.
- A second database trigger will fire when a PSQUERY request record is deleted. It deletes the corresponding rows from these history tables.
When a PS/Query produces a quarantined execution plan, the PSQUERY process terminates with error ORA-56955: quarantined plan used (see Quarantined SQL Plans for PS/Queries). Now we can obtain the bind variables that resulted in attempts to execute a quarantined query execution plan.
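Once the triggers (created below) are in place, the captured bind variables for errored PSQUERY requests can be listed with a query along these lines. This is a sketch only: the history table and column names are those created by the script below, but the join to PSPRCSRQST and the run status value '3' (error) are assumptions about the process scheduler request table.

```sql
-- Sketch: list captured bind variables for PSQUERY requests that ended
-- in error (e.g. ORA-56955).  Assumes runstatus '3' = Error.
SELECT h.prcsinstance, h.oprid, h.qryname, p.bndnum, p.bndname, p.bndvalue
FROM   ps_qryrun_ctl_hst h
  JOIN ps_qryrun_parm_hst p ON p.prcsinstance = h.prcsinstance
  JOIN psprcsrqst r ON r.prcsinstance = h.prcsinstance
WHERE  r.runstatus = '3'
ORDER BY h.prcsinstance, p.bndnum;
```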
The following script (ps_query_run_cntrl_hist_trigger.sql) creates the tables and triggers.
- Application Designer record definitions should be created for the two history tables by importing the project QRYRUN_HST (download QRYRUN_HST.zip from GitHub).
REM ps_query_run_cntrl_hist_trigger.sql
REM 21.4.2025 - triggers and history tables to capture scheduled PS/Query run controls and bind variables
set echo on serveroutput on timi on
clear screen
spool ps_query_run_cntrl_hist_trigger
rollback;
CREATE TABLE SYSADM.PS_QRYRUN_CTL_HST
(PRCSINSTANCE INTEGER DEFAULT 0 NOT NULL,
OPRID VARCHAR2(30) DEFAULT ' ' NOT NULL,
RUN_CNTL_ID VARCHAR2(30) DEFAULT ' ' NOT NULL,
DESCR VARCHAR2(30) DEFAULT ' ' NOT NULL,
QRYTYPE SMALLINT DEFAULT 1 NOT NULL,
PRIVATE_QUERY_FLAG VARCHAR2(1) DEFAULT 'N' NOT NULL,
QRYNAME VARCHAR2(30) DEFAULT ' ' NOT NULL,
URL VARCHAR2(254) DEFAULT ' ' NOT NULL,
ASIAN_FONT_SETTING VARCHAR2(3) DEFAULT ' ' NOT NULL,
PTFP_FEED_ID VARCHAR2(30) DEFAULT ' ' NOT NULL) TABLESPACE PTTBL
/
CREATE UNIQUE INDEX SYSADM.PS_QRYRUN_CTL_HST
ON SYSADM.PS_QRYRUN_CTL_HST (PRCSINSTANCE) TABLESPACE PSINDEX PARALLEL NOLOGGING
/
ALTER INDEX SYSADM.PS_QRYRUN_CTL_HST NOPARALLEL LOGGING
/
CREATE TABLE SYSADM.PS_QRYRUN_PARM_HST
(PRCSINSTANCE INTEGER DEFAULT 0 NOT NULL,
OPRID VARCHAR2(30) DEFAULT ' ' NOT NULL,
RUN_CNTL_ID VARCHAR2(30) DEFAULT ' ' NOT NULL,
BNDNUM SMALLINT DEFAULT 0 NOT NULL,
FIELDNAME VARCHAR2(18) DEFAULT ' ' NOT NULL,
BNDNAME VARCHAR2(30) DEFAULT ' ' NOT NULL,
BNDVALUE CLOB) TABLESPACE PSIMAGE2
/
CREATE UNIQUE INDEX SYSADM.PS_QRYRUN_PARM_HST
ON SYSADM.PS_QRYRUN_PARM_HST (PRCSINSTANCE, BNDNUM) TABLESPACE PSINDEX PARALLEL NOLOGGING
/
ALTER INDEX SYSADM.PS_QRYRUN_PARM_HST NOPARALLEL LOGGING
/
- PSQUERY is not a restartable Application Engine program. Therefore, there is no risk of duplicate inserts into the history tables.
- The exception handlers in the triggers deliberately suppress any error, in case an unhandled exception should cause the process scheduler to crash.
CREATE OR REPLACE TRIGGER sysadm.query_run_cntrl_hist_ins
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus ='7' AND old.runstatus != '7' AND new.prcsname = 'PSQUERY' AND new.prcstype = 'Application Engine')
BEGIN
INSERT INTO PS_QRYRUN_CTL_HST
(PRCSINSTANCE, OPRID, RUN_CNTL_ID, DESCR ,QRYTYPE, PRIVATE_QUERY_FLAG, QRYNAME, URL, ASIAN_FONT_SETTING, PTFP_FEED_ID)
SELECT :new.prcsinstance, OPRID, RUN_CNTL_ID, DESCR ,QRYTYPE, PRIVATE_QUERY_FLAG, QRYNAME, URL, ASIAN_FONT_SETTING, PTFP_FEED_ID
FROM ps_query_run_cntrl WHERE oprid = :new.oprid AND run_cntl_id = :new.runcntlid;
INSERT INTO PS_QRYRUN_PARM_HST
(PRCSINSTANCE, OPRID, RUN_CNTL_ID, BNDNUM, FIELDNAME, BNDNAME, BNDVALUE)
SELECT :new.prcsinstance prcsinstance, OPRID, RUN_CNTL_ID, BNDNUM, FIELDNAME, BNDNAME, BNDVALUE
FROM ps_query_run_parm WHERE oprid = :new.oprid AND run_cntl_id = :new.runcntlid;
EXCEPTION WHEN OTHERS THEN NULL; --exception deliberately coded to suppress all exceptions
END;
/
CREATE OR REPLACE TRIGGER sysadm.query_run_cntrl_hist_del
BEFORE DELETE ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (old.prcsname = 'PSQUERY' AND old.prcstype = 'Application Engine')
BEGIN
DELETE FROM PS_QRYRUN_CTL_HST WHERE prcsinstance = :old.prcsinstance;
DELETE FROM PS_QRYRUN_PARM_HST WHERE prcsinstance = :old.prcsinstance;
EXCEPTION WHEN OTHERS THEN NULL; --exception deliberately coded to suppress all exceptions
END;
/
show errors
spool off
Example
When a query is scheduled to run on the process scheduler, the bind variables are specified through this generic dialogue.
Once the PSQUERY process has started (it immediately commits its update to RUNSTATUS), these values are written to the new history tables.
select * from ps_qryrun_ctl_hst;

PRCSINSTANCE OPRID    RUN_CNTL_ID  DESCR                          QRYTYPE P QRYNAME                URL                                                ASI PTFP_FEED_ID
------------ -------- ------------ ------------------------------ ------- - ---------------------- -------------------------------------------------- --- ------------
    12345678 ABCDEF   042225       Journal Line Detail - Account        1 N XXX_JRNL_LINE_DTL_ACCT https://xxxxxxx.yyyyy.com/psp/XXXXXXX/EMPLOYEE/ERP

select * from ps_qryrun_parm_hst;

PRCSINSTANCE OPRID    RUN_CNTL_ID  BNDNUM FIELDNAME          BNDNAME              BNDVALUE
------------ -------- ------------ ------ ------------------ -------------------- ------------------------------
    12345678 ABCDEF   042225            1 bind1              BUSINESS_UNIT        354XX
    12345678 ABCDEF   042225            2 bind2              BUSINESS_UNIT        354XX
    12345678 ABCDEF   042225            3 FISCAL_YEAR        FISCAL_YEAR          2025
    12345678 ABCDEF   042225            4 ACCOUNTING_PD_FROM ACCOUNTING_PD_FROM   2
    12345678 ABCDEF   042225            5 ACCOUNTING_PD_TO   ACCOUNTING_PD_TO     2
    12345678 ABCDEF   042225            6 bind6              ACCOUNT              23882XXXXX
    12345678 ABCDEF   042225            7 bind7              ACCOUNT              23882XXXXX
    12345678 ABCDEF   042225            8 bind8              ALTACCOUNT           23882XXXXX
    12345678 ABCDEF   042225            9 bind9              ALTACCOUNT           23882XXXXX

Conclusion
If the query is quarantined, PSQUERY will terminate with error ORA-56955: quarantined plan used. The SQL statement can be extracted from the message log, and the execution plan can be generated with the EXPLAIN PLAN FOR command, using the bind variable values captured in the history tables.
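For instance, the plan might be regenerated as follows. This is a sketch only: the SELECT statement stands in for the SQL text extracted from the message log, with the captured bind values substituted as literals (the table and column names are illustrative).

```sql
-- Illustrative only: this statement stands in for the PS/Query SQL text
-- from the message log, with captured bind values substituted as literals.
EXPLAIN PLAN FOR
SELECT business_unit, journal_id, SUM(monetary_amount)
FROM   ps_jrnl_ln
WHERE  business_unit = '354XX'
AND    account = '23882XXXXX'
GROUP BY business_unit, journal_id;

SELECT * FROM TABLE(dbms_xplan.display);
```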
Note: The signature of the SQL Quarantine directive is the exact matching signature of the SQL text (it can be generated from the SQL text with dbms_sqltune.sqltext_to_signature). There can be multiple PLAN_HASH_VALUEs for the same signature (because there can be multiple execution plans for the same SQL). Verify that the FULL_PLAN_HASH_VALUE of the execution plan generated with the captured bind variables corresponds to the PLAN_HASH_VALUE of a SQL Quarantine directive.
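That check can be sketched as follows, assuming DBA_SQL_QUARANTINE is the view of quarantine directives. The force_match parameter of dbms_sqltune.sqltext_to_signature defaults to FALSE, which gives the exact matching signature.

```sql
-- Sketch: compute the exact matching signature of the reproduced SQL text
-- and list any matching quarantine directives.
SET SERVEROUTPUT ON
DECLARE
  l_sig NUMBER;
BEGIN
  l_sig := dbms_sqltune.sqltext_to_signature(sql_text => '<SQL text from message log>');
  FOR r IN (SELECT name, plan_hash_value
            FROM   dba_sql_quarantine
            WHERE  signature = l_sig) LOOP
    dbms_output.put_line(r.name||' '||r.plan_hash_value);
  END LOOP;
END;
/
```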
Setting up a Local Only SMTP server in Ubuntu
If you’re using bash scripts to automate tasks on your Ubuntu desktop but don’t want to have to wade through logs to find out what’s happening, you can set up a Local Only SMTP server and have the scripts mail you their output.
Being Linux, there are several ways to do this.
What follows is how I managed to set this up on my Ubuntu 24.04 desktop (minus all the mistakes and swearing).
Specifically we’ll be looking at :
- installing Dovecot
- installing and configuring Postfix
- installing mailx
- configuring Thunderbird to handle local emails
As AI appears to be the “blockchain du jour” I thought I should make some attempt to appear up-to-date and relevant. Therefore, various AI entities from Iain M. Banks’ Culture will be making an appearance in what follows…
Software Versions
It’s probably useful to know the versions I’m using for this particular exercise :
- Ubuntu 24.04.1 LTS
- Thunderbird 128.10.0esr (64-bit)
- Postfix 3.8.6
- Dovecot 2.3.21
- mailx 8.1.2
If you happen to have any of these installed already, you can check the version of Postfix by running…
postconf mail_version
…Dovecot by running..
dovecot --version
…and mailx by running :
mail -v
The Ubuntu version can be found under Settings/System/About
The Thunderbird version can be found under Help/About.
Before we get to installing or configuring anything, we need to…
Specify a domain for localhost
We need a domain for Postfix. As we’re only flinging traffic around the current machine, it doesn’t have to be known to the wider world, but it does need to point to localhost.
So, in the Terminal :
sudo nano /etc/hosts
…and edit the line for 127.0.0.1 so it includes your chosen domain name.
Conventionally this is something like localhost.com, but “It’s My Party and I’ll Sing If I Want To”…
127.0.0.1 localhost culture.org
Once we’ve saved the file, we can test the new entry by running :
ping -c1 culture.org
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.062 ms
--- localhost ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
Install Dovecot
Simply install the package by running :
sudo apt-get install dovecot-imapd
The installation will complete automatically after which, in this instance, we don’t need to do any configuration at all.
Install and configure Postfix
You can install postfix by running :
sudo apt-get install postfix
As part of the installation you will be presented with a configuration menu screen :

Using the arrow keys, select Local Only then hit Return
In the next screen set the domain name to the one you setup in /etc/hosts ( in my case that’s culture.org ).
Hit Return and the package installation will complete.
Note that if you need to re-run this configuration you can do so by running :
sudo dpkg-reconfigure postfix
Next we need to create a virtual alias map, which we do by running :
sudo nano /etc/postfix/virtual
and populating this new file with two lines that look like :
@localhost <your-os-user>
@domain.name <your-os-user>
In my case the lines in the file are :
@localhost mike
@culture.org mike
Now we need to tell Postfix to read this file so :
sudo nano /etc/postfix/main.cf
…and add this line at the bottom of the file :
virtual_alias_maps = hash:/etc/postfix/virtual
To activate the mapping :
sudo postmap /etc/postfix/virtual
…and restart the postfix service…
sudo systemctl restart postfix
Once that’s done, we can confirm that the postfix service is running :
sudo systemctl status postfix

Install mailx
As with Dovecot, we don’t need to do any more than install the package :
sudo apt-get install bsd-mailx
Now we have mailx, we can test our configuration :
echo "While you were sleeping, I ran your backup for you. Your welcome" | mail -r sleeper.service@culture.org -s "Good Morning" mike@culture.org
To check if we’ve received the email, run :
cat /var/mail/mike

Actually having to cat the inbox seems a lot of effort.
If I’m going to be on the receiving end of condescending sarcasm from my own laptop I should at least be able to read it from the comfort of an email client.
Currently, Thunderbird is the default mail client in Ubuntu and comes pre-installed.
If this is the first time you’ve run Thunderbird, you’ll be prompted to setup an account. If not then you can add an account by going to the “hamburger” menu in Thunderbird and selecting Account Settings. On the following screen click the Account Actions drop-down near the bottom of the screen and select Add Mail Account :

Fill in the details (the password is that of your os account) :

…and click Continue.
Thunderbird will now try and pick an appropriate configuration. After thinking about it for a bit it should come up with something like :

…which needs a bit of tweaking, so click the Configure Manually link and make the following changes :
Incoming Server
- Hostname : smtp.localhost
- Port : 143
- Connection Security : STARTTLS
- Authentication Method : Normal password
- Username : your os user (e.g. mike)

Outgoing Server
- Hostname : smtp.localhost
- Port : 25
- Connection Security : STARTTLS
- Authentication Method : Normal password
- Username : your os user (e.g. mike)

If you now click the Re-test button, you should get this reassuring message :

If so, then click Done.
You will be prompted to add a security exception for the incoming mail server

Click Confirm Security Exception
NOTE – you may then get two pop-ups, one after the other, prompting you to sign in via google.com. Just dismiss them. This shouldn’t happen when you access your inbox via Thunderbird after this initial configuration session.
You should now see the inbox you’ve setup, complete with the message sent earlier :

You should also be able to read any mail sent to mike@localhost. To test this :

Incidentally, the first time you send a mail, you’ll get prompted to add a security exception for the Outgoing Mail server. Once again, just hit Confirm Security Exception.
Once you do this, you’ll get a message saying that sending the mail failed. Dismiss it and resend and it will work.
Once again, this is a first-time only issue.
After a few seconds, you’ll see the mail appear in the inbox :

As you’d expect, you can’t send mail to an external email address from this account with this configuration :

The whole point of this exercise was so I could get emails from a shell script. To test this, I’ve created the following file – called funny_it_worked_last_time.sh
#!/bin/bash
body=$'Disk Usage : \n'
space=$(df -h)
body+="$space"
body+=$'\nTime to tidy up, you messy organic entity!'
echo "$body" | mail -r me.im.counting@culture.org -s "Status Update" mike@culture.org
If I make this script executable and then run it :
chmod u+x funny_it_worked_last_time.sh
. ./funny_it_worked_last_time.sh
…something should be waiting for me in the inbox…

Part of the reason for writing this was that I couldn’t find one place where the instructions were still applicable to the latest versions of the software I used here.
The links I found most useful were :
- This askUbuntu question
- This very useful GitHub gist by Rael Gugelmin Cunha
Finally, for those of a geeky disposition, here’s a list of Culture space craft.
Backup and Restore a Standby Database
I have seen some I.T. managers who decide to back up only the Primary Database and not the Standby. The logic is "if the Storage or Server for the Standby go down, we can rebuild the database from the Primary", OR "we haven't allocated storage / tape drive space at the Standby site", OR "our third-party backup tool does not know how to backup a Standby database and handle the warning about Archive Logs that is generated when it issues "PLUS ARCHIVELOG"" (see the warning below when I run the backup command).
Do they factor in the time that is required to run Backup, Copy and Restore commands, OR to run the Duplicate command, to rebuild the Standby? All that while their Critical database is running without a Standby -- without a D.R. site.
Given a moderately large Database, it can be faster to restore from a "local" Backup at the Standby than to copy / duplicate across the network. Also, this method does NOT require rebuilding the DataGuard Broker configuration.
Firstly, you CAN backup a Standby even while Recovery (i.e. Redo Apply) is running. The only catch is that the "PLUS ARCHIVELOG" clause in "BACKUP ... DATABASE PLUS ARCHIVELOG" returns a minor error, because a Standby cannot issue "ALTER SYSTEM ARCHIVE LOG CURRENT" (or "ALTER SYSTEM SWITCH LOGFILE").
Here's my Backup command at the Standby (while Redo Apply -- i.e. Media Recovery -- is running) without issuing a CANCEL RECOVERY.
RMAN> backup as compressed backupset
2> database
3> format '/tmp/STDBY_Backup/DB_DataFiles_%U.bak'
4> plus archivelog
5> format '/tmp/STDBY_Backup/DB_ArchLogs_%U.bak';
....
....
RMAN> backup current controlfile
2> format '/tmp/STDBY_Backup/standby_controlfile.bak';
So, when I run the Backup, it starts off with and also ends with :
RMAN-06820: warning: failed to archive current log at primary database
cannot connect to remote database
....
....
....
RMAN-06820: warning: failed to archive current log at primary database
cannot connect to remote database
using channel ORA_DISK_1
specification does not match any archived log in the repository
backup cancelled because there are no files to backup
because the Standby cannot issue "ALTER SYSTEM ARCHIVE LOG CURRENT" -- which can only be done at a Primary. These warnings do not trouble me.
RMAN> list backup;

using target database control file instead of recovery catalog

List of Backup Sets
===================

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
311     44.10M     DISK        00:00:02     01-MAY-25
        BP Key: 311   Status: AVAILABLE  Compressed: YES  Tag: TAG20250501T074832
        Piece Name: /tmp/STDBY_Backup/DB_ArchLogs_ad3objeg_333_1_1.bak

  List of Archived Logs in backup set 311
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    393     11126161   01-MAY-25 11126287   01-MAY-25
  1    394     11126287   01-MAY-25 11127601   01-MAY-25
  ...
  ...
  2    338     11126158   01-MAY-25 11126290   01-MAY-25
  2    339     11126290   01-MAY-25 11127596   01-MAY-25

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
312     Full    1.07G      DISK        00:00:57     01-MAY-25
        BP Key: 312   Status: AVAILABLE  Compressed: YES  Tag: TAG20250501T074835
        Piece Name: /tmp/STDBY_Backup/DB_DataFiles_ae3objej_334_1_1.bak

  List of Datafiles in backup set 312
  File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------- ----------- ------ ----
  ....
  ....

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
313     Full    831.00M    DISK        00:00:43     01-MAY-25
        BP Key: 313   Status: AVAILABLE  Compressed: YES  Tag: TAG20250501T074835
        Piece Name: /tmp/STDBY_Backup/DB_DataFiles_af3objgk_335_1_1.bak

  List of Datafiles in backup set 313
  Container ID: 3, PDB Name: PDB1
  File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------- ----------- ------ ----
  ....
  ....

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
314     Full    807.77M    DISK        00:00:42     01-MAY-25
        BP Key: 314   Status: AVAILABLE  Compressed: YES  Tag: TAG20250501T074835
        Piece Name: /tmp/STDBY_Backup/DB_DataFiles_ag3obji1_336_1_1.bak

  List of Datafiles in backup set 314
  Container ID: 5, PDB Name: TSTPDB
  File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------- ----------- ------ ----
  ....
  ....

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
315     Full    807.75M    DISK        00:00:43     01-MAY-25
        BP Key: 315   Status: AVAILABLE  Compressed: YES  Tag: TAG20250501T074835
        Piece Name: /tmp/STDBY_Backup/DB_DataFiles_ah3objje_337_1_1.bak

  List of Datafiles in backup set 315
  Container ID: 2, PDB Name: PDB$SEED
  File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------- ----------- ------ ----
  ....
  ....

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
317     Full    19.58M     DISK        00:00:01     01-MAY-25
        BP Key: 317   Status: AVAILABLE  Compressed: NO  Tag: TAG20250501T075522
        Piece Name: /tmp/STDBY_Backup/standby_controlfile.bak
  Standby Control File Included: Ckp SCN: 11128626     Ckp time: 01-MAY-25

RMAN>
So I can confirm that I have *local* backups (including ArchiveLogs present at the Standby and backed up before the Datafile backup begins). The last ArchiveLog backed up at the Standby is SEQ#394 for Thread#1 and SEQ#339 for Thread#2. Next, I stop Redo Transport from the Primary via the Broker :
DGMGRL> connect sys
Password:
Connected to "RACDB"
Connected as SYSDBA.
DGMGRL> EDIT DATABASE 'RACDB' SET STATE='TRANSPORT-OFF';
Succeeded.
DGMGRL>
Next I Restore the *standby* controlfile at my Standby server (note that I connect to "target" and specify "standby controlfile"). Note : If my SPFILE or PFILE is not available at the Standby, I have to restore that as well before I STARTUP NOMOUNT.
[oracle@stdby ~]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Thu May 1 08:27:23 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup nomount;

Oracle instance started

Total System Global Area    2147480256 bytes

Fixed Size                     9179840 bytes
Variable Size                486539264 bytes
Database Buffers            1644167168 bytes
Redo Buffers                   7593984 bytes

RMAN> restore standby controlfile from '/tmp/STDBY_Backup/standby_controlfile.bak';

Starting restore at 01-MAY-25
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=310 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/Standby_DB/oradata/control01.ctl
output file name=/Standby_DB/FRA/control02.ctl
Finished restore at 01-MAY-25

RMAN>

I am now ready to CATALOG the Backups and RESTORE the Database.
RMAN> alter database mount;

released channel: ORA_DISK_1
Statement processed

RMAN> catalog start with '/tmp/STDBY_Backup';

Starting implicit crosscheck backup at 01-MAY-25
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=11 device type=DISK
Crosschecked 13 objects
Finished implicit crosscheck backup at 01-MAY-25

Starting implicit crosscheck copy at 01-MAY-25
using channel ORA_DISK_1
Finished implicit crosscheck copy at 01-MAY-25

searching for all files in the recovery area
cataloging files...
no files cataloged

searching for all files that match the pattern /tmp/STDBY_Backup

List of Files Unknown to the Database
=====================================
File Name: /tmp/STDBY_Backup/standby_controlfile.bak

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /tmp/STDBY_Backup/standby_controlfile.bak

RMAN>
In this case, the Standby Controlfile backup was taken *after* the Datafile and ArchiveLog backups, so this Controlfile is already "aware" of the backups (they are already recorded in the controlfile). Nevertheless, I can do some verification (I have excluded the listing of each ArchiveLog / Datafile from the output here) :
RMAN> list backup;

List of Backup Sets
===================

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
311     44.10M     DISK        00:00:02     01-MAY-25
        BP Key: 311   Status: AVAILABLE  Compressed: YES  Tag: TAG20250501T074832
        Piece Name: /tmp/STDBY_Backup/DB_ArchLogs_ad3objeg_333_1_1.bak

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
312     Full    1.07G      DISK        00:00:57     01-MAY-25

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
313     Full    831.00M    DISK        00:00:43     01-MAY-25

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
314     Full    807.77M    DISK        00:00:42     01-MAY-25

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
315     Full    807.75M    DISK        00:00:43     01-MAY-25

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
316     Full    19.61M     DISK        00:00:01     01-MAY-25

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
317     Full    19.58M     DISK        00:00:01     01-MAY-25
        BP Key: 317   Status: AVAILABLE  Compressed: NO  Tag: TAG20250501T075522
        Piece Name: /tmp/STDBY_Backup/standby_controlfile.bak
  Standby Control File Included: Ckp SCN: 11128626     Ckp time: 01-MAY-25

RMAN>
For good measure, I can also verify that this "database" (only the controlfile is presently restored) is a *Standby Database*. Whether a database is a Primary or a Standby is information held in the *Controlfile*, not in the Datafiles.
RMAN> exit

RMAN Client Diagnostic Trace file : /u01/app/oracle/diag/clients/user_oracle/host_4144547424_110/trace/ora_4560_140406053321216.trc

Recovery Manager complete.
[oracle@stdby ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Thu May 1 08:37:54 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.25.0.0.0

SQL> select open_mode, database_role from v$database;

OPEN_MODE            DATABASE_ROLE
-------------------- ----------------
MOUNTED              PHYSICAL STANDBY

SQL>
I can return to RMAN and RESTORE the Database (I still invoke RMAN to connect to "target", not "auxiliary").
SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.25.0.0.0
[oracle@stdby ~]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Thu May 1 08:40:32 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: RACDB (DBID=1162136313, not open)

RMAN> restore database;

Starting restore at 01-MAY-25
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=2 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /Standby_DB/oradata/STDBY/datafile/o1_mf_system_m33j9fqn_.dbf
...
...
...
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00005 to /Standby_DB/oradata/STDBY/14769E258FBB5FD8E0635A38A8C09D43/datafile/o1_mf_system_m33jb79n_.dbf
channel ORA_DISK_1: restoring datafile 00006 to /Standby_DB/oradata/STDBY/14769E258FBB5FD8E0635A38A8C09D43/datafile/o1_mf_sysaux_m33jbbgz_.dbf
channel ORA_DISK_1: restoring datafile 00008 to /Standby_DB/oradata/STDBY/14769E258FBB5FD8E0635A38A8C09D43/datafile/o1_mf_undotbs1_m33jbgrs_.dbf
channel ORA_DISK_1: reading from backup piece /tmp/STDBY_Backup/DB_DataFiles_ah3objje_337_1_1.bak
channel ORA_DISK_1: piece handle=/tmp/STDBY_Backup/DB_DataFiles_ah3objje_337_1_1.bak tag=TAG20250501T074835
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:45
Finished restore at 01-MAY-25

RMAN>
Next, I restore the ArchiveLogs that I have in the local backup, instead of having to wait for them to be shipped from the Primary during the Recover phase.
RMAN> restore archivelog from time "trunc(sysdate)";

Starting restore at 01-MAY-25
using channel ORA_DISK_1

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=391
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=392
channel ORA_DISK_1: restoring archived log
archived log thread=2 sequence=336
channel ORA_DISK_1: restoring archived log
archived log thread=2 sequence=337
channel ORA_DISK_1: restoring archived log
archived log thread=2 sequence=338
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=393
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=394
channel ORA_DISK_1: restoring archived log
archived log thread=2 sequence=339
channel ORA_DISK_1: reading from backup piece /tmp/STDBY_Backup/DB_ArchLogs_ad3objeg_333_1_1.bak
channel ORA_DISK_1: piece handle=/tmp/STDBY_Backup/DB_ArchLogs_ad3objeg_333_1_1.bak tag=TAG20250501T074832
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
Finished restore at 01-MAY-25

RMAN> list archivelog all completed after "trunc(sysdate)";

List of Archived Log Copies for database with db_unique_name STDBY
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - ---------
675     1    391     A 27-APR-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_391_n16fcchs_.arc

667     1    391     A 27-APR-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_391_n169cng5_.arc

682     1    392     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_392_n16fcbh7_.arc

670     1    392     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_392_n169fh7s_.arc

678     1    393     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_393_n16fcckd_.arc

671     1    393     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_393_n169g77l_.arc

677     1    394     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_394_n16fccjb_.arc

674     1    394     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_394_n169my72_.arc

680     2    336     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_336_n16fccnv_.arc

668     2    336     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_336_n169d0fm_.arc

681     2    337     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_337_n16fcbhy_.arc

669     2    337     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_337_n169fh6j_.arc

679     2    338     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_338_n16fccm6_.arc

672     2    338     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_338_n169g790_.arc

676     2    339     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_339_n16fcchw_.arc

673     2    339     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_339_n169mxfp_.arc

RMAN>

(The output shows duplicate entries where either the ArchiveLogs were already present at the Standby OR the Restore was executed twice.)
So, I also have the ArchiveLogs now.
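As a quick sanity check (my own addition, not part of the captured session), the highest restored sequence per thread can be confirmed from V$ARCHIVED_LOG:

```sql
-- Hedged sketch: confirm the highest registered sequence per thread on the
-- Standby after the restore; the WHERE clause matches the "trunc(sysdate)"
-- window used in the RMAN commands above.
select thread#, max(sequence#) as max_seq, count(*) as log_count
from   v$archived_log
where  completion_time >= trunc(sysdate)
group  by thread#
order  by thread#;
```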
RMAN> exit

RMAN Client Diagnostic Trace file : /u01/app/oracle/diag/clients/user_oracle/host_4144547424_110/trace/ora_5380_139777366395392.trc

Recovery Manager complete.
[oracle@stdby ~]$ sqlplus / as sysdba

SQL> select thread#, group# from v$standby_log;

   THREAD#     GROUP#
---------- ----------
         1          5
         2          6
         0          7
         1          8
         1          9
         1         10
         2         11
         2         12

8 rows selected.

SQL> alter database drop standby logfile group 5;

Database altered.

SQL> alter database drop standby logfile group 6;

Database altered.

SQL> alter database drop standby logfile group 7;

Database altered.

SQL> alter database drop standby logfile group 8;
alter database drop standby logfile group 8
*
ERROR at line 1:
ORA-00313: open failed for members of log group 8 of thread 1
ORA-00312: online log 8 thread 1:
'/Standby_DB/FRA/STDBY/onlinelog/o1_mf_8_mb6rdbos_.log'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
ORA-00312: online log 8 thread 1:
'/Standby_DB/oradata/STDBY/onlinelog/o1_mf_8_mb6rd9h8_.log'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7

SQL> alter database drop standby logfile group 9;

Database altered.

SQL> alter database drop standby logfile group 10;

Database altered.

SQL> alter database drop standby logfile group 11;
alter database drop standby logfile group 11
*
ERROR at line 1:
ORA-00313: open failed for members of log group 11 of thread 2
ORA-00312: online log 11 thread 2:
'/Standby_DB/FRA/STDBY/onlinelog/o1_mf_11_mb6rf8ob_.log'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
ORA-00312: online log 11 thread 2:
'/Standby_DB/oradata/STDBY/onlinelog/o1_mf_11_mb6rf7hs_.log'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7

SQL> alter database drop standby logfile group 12;

Database altered.
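Before dropping further groups, it helps to know which member files each group points at. A hedged sketch (my own check, not part of the captured session) that maps each standby log group to its members:

```sql
-- Hedged sketch: list each standby log group with its member files, so that
-- groups whose members were lost with the storage can be identified before
-- attempting to drop them.
select sl.thread#, sl.group#, lf.member, sl.status
from   v$standby_log sl
join   v$logfile    lf on lf.group# = sl.group#
order  by sl.thread#, sl.group#;
```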
SQL> select thread#, group# from v$standby_log;

   THREAD#     GROUP#
---------- ----------
         1          8
         2         11

SQL> alter database clear logfile group 8;

Database altered.

SQL> alter database clear logfile group 11;

Database altered.

SQL> alter database drop standby logfile group 8;

Database altered.

SQL> alter database drop standby logfile group 11;

Database altered.

SQL> select thread#, group# from v$standby_log;

no rows selected

SQL> alter database add standby logfile thread 1 size 512M;

Database altered.

SQL> alter database add standby logfile thread 1 size 512M;

Database altered.

SQL> alter database add standby logfile thread 2 size 512M;

Database altered.

SQL> alter database add standby logfile thread 2 size 512M;

Database altered.

SQL> alter database add standby logfile thread 2 size 512M;

Database altered.

SQL> select thread#, group# from v$standby_log order by 1,2;

   THREAD#     GROUP#
---------- ----------
         1          5
         1          6
         1          7
         2          8
         2          9
         2         10

6 rows selected.

SQL>

I have to clear, then drop and re-create the one Standby Log of each thread that was in use just before all the files were lost: the controlfile still expected Group 8 and Group 11 to be present. These were the alert log entries for the last set of recovery operations before the storage was lost:
2025-05-01T08:10:41.554409+00:00
Recovery of Online Redo Log: Thread 2 Group 11 Seq 343 Reading mem 0
  Mem# 0: /Standby_DB/oradata/STDBY/onlinelog/o1_mf_11_mb6rf7hs_.log
  Mem# 1: /Standby_DB/FRA/STDBY/onlinelog/o1_mf_11_mb6rf8ob_.log
2025-05-01T08:10:41.557828+00:00
ARC1 (PID:1813): Archived Log entry 680 added for B-1164519547.T-1.S-397 LOS:0x0000000000a9f8a3 NXS:0x0000000000a9f8d2 NAB:12 ID 0x46c5be03 LAD:1
2025-05-01T08:10:41.563027+00:00
rfs (PID:1825): Selected LNO:8 for T-1.S-398 dbid 1162136313 branch 1164519547
2025-05-01T08:10:41.642227+00:00
PR00 (PID:1863): Media Recovery Waiting for T-1.S-398 (in transit)
2025-05-01T08:10:41.642508+00:00
Recovery of Online Redo Log: Thread 1 Group 8 Seq 398 Reading mem 0
  Mem# 0: /Standby_DB/oradata/STDBY/onlinelog/o1_mf_8_mb6rd9h8_.log
  Mem# 1: /Standby_DB/FRA/STDBY/onlinelog/o1_mf_8_mb6rdbos_.log
2025-05-01T08:14:02.648081+00:00
Shutting down ORACLE instance (abort) (OS id: 3584)
Now I can begin Recovery of the Standby
SQL> alter database recover managed standby database using current logfile disconnect from session;

Database altered.

SQL> shutdown immediate;
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 2147480256 bytes
Fixed Size                  9179840 bytes
Variable Size             486539264 bytes
Database Buffers         1644167168 bytes
Redo Buffers                7593984 bytes
Database mounted.
SQL> alter database recover managed standby database using current logfile disconnect from session;

Database altered.

SQL> exit
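To confirm that Managed Recovery is actually running after the restart, a hedged sketch (my own check, not part of the captured session):

```sql
-- Hedged sketch: verify that the Managed Recovery Process (MRP0) is up
-- and see which thread/sequence it is currently applying.
select process, status, thread#, sequence#
from   v$managed_standby
where  process like 'MRP%';
```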
I resume Redo Shipping from the Primary
[oracle@srv1 ~]$ dgmgrl
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Thu May 1 09:11:56 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys
Password:
Connected to "RACDB"
Connected as SYSDBA.
DGMGRL> EDIT DATABASE 'RACDB' SET STATE='TRANSPORT-ON';
Succeeded.
DGMGRL> show configuration;

Configuration - racdb_dg

  Protection Mode: MaxPerformance
  Members:
  racdb - Primary database
    stdby - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 34 seconds ago)

DGMGRL> show configuration lag;

Configuration - racdb_dg

  Protection Mode: MaxPerformance
  Members:
  racdb - Primary database
    stdby - Physical standby database
            Transport Lag:      0 seconds (computed 1 second ago)
            Apply Lag:          0 seconds (computed 1 second ago)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 37 seconds ago)

DGMGRL>

Note: I have to wait from a few seconds to a few minutes for the SHOW CONFIGURATION and SHOW CONFIGURATION LAG commands to return the correct information. Initially, they may report errors, but once the Primary and Standby are "talking to each other" these errors clear.

The Standby alert log now shows Redo Apply progressing:
2025-05-01T09:19:30.530588+00:00
Recovery of Online Redo Log: Thread 2 Group 8 Seq 344 Reading mem 0
  Mem# 0: /Standby_DB/oradata/STDBY/onlinelog/o1_mf_8_n16gb9wm_.log
  Mem# 1: /Standby_DB/FRA/STDBY/onlinelog/o1_mf_8_n16gbb4m_.log
2025-05-01T09:19:30.611573+00:00
rfs (PID:7642): krsr_rfs_atc: Identified database type as 'PHYSICAL': Client is ASYNC (PID:11557)
2025-05-01T09:19:30.623795+00:00
rfs (PID:7642): Selected LNO:5 for T-1.S-399 dbid 1162136313 branch 1164519547
2025-05-01T09:19:30.631133+00:00
PR00 (PID:7486): Media Recovery Waiting for T-1.S-399 (in transit)
2025-05-01T09:19:30.631475+00:00
Recovery of Online Redo Log: Thread 1 Group 5 Seq 399 Reading mem 0
  Mem# 0: /Standby_DB/oradata/STDBY/onlinelog/o1_mf_5_n16g90qv_.log
  Mem# 1: /Standby_DB/FRA/STDBY/onlinelog/o1_mf_5_n16g910n_.log
2025-05-01T09:20:51.263052+00:00
ARC2 (PID:7470): Archived Log entry 691 added for B-1164519547.T-2.S-344 LOS:0x0000000000aa394b NXS:0x0000000000aa3b02 NAB:102 ID 0x46c5be03 LAD:1
2025-05-01T09:20:51.274060+00:00
rfs (PID:7640): Selected LNO:8 for T-2.S-345 dbid 1162136313 branch 1164519547
2025-05-01T09:20:51.285312+00:00
PR00 (PID:7486): Media Recovery Log /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_344_n16h7m85_.arc
PR00 (PID:7486): Media Recovery Waiting for T-2.S-345 (in transit)
2025-05-01T09:20:51.387005+00:00
Recovery of Online Redo Log: Thread 2 Group 8 Seq 345 Reading mem 0
  Mem# 0: /Standby_DB/oradata/STDBY/onlinelog/o1_mf_8_n16gb9wm_.log
  Mem# 1: /Standby_DB/FRA/STDBY/onlinelog/o1_mf_8_n16gbb4m_.log
2025-05-01T09:20:51.433894+00:00
ARC0 (PID:7462): Archived Log entry 692 added for B-1164519547.T-1.S-399 LOS:0x0000000000aa394e NXS:0x0000000000aa3b06 NAB:265 ID 0x46c5be03 LAD:1
2025-05-01T09:20:51.445431+00:00
rfs (PID:7642): Selected LNO:5 for T-1.S-400 dbid 1162136313 branch 1164519547
2025-05-01T09:20:51.514317+00:00
PR00 (PID:7486): Media Recovery Waiting for T-1.S-400 (in transit)
2025-05-01T09:20:51.514664+00:00
Recovery of Online Redo Log: Thread 1 Group 5 Seq 400 Reading mem 0
  Mem# 0: /Standby_DB/oradata/STDBY/onlinelog/o1_mf_5_n16g90qv_.log
  Mem# 1: /Standby_DB/FRA/STDBY/onlinelog/o1_mf_5_n16g910n_.log

I see that the SEQ# values have already advanced to 399 and 345 for Threads 1 and 2 respectively.
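The same lag figures that DGMGRL reports can also be read directly on the Standby; a hedged sketch (my own check, not part of the original session):

```sql
-- Hedged sketch: transport and apply lag as computed on the Standby itself.
select name, value, time_computed
from   v$dataguard_stats
where  name in ('transport lag', 'apply lag');
```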
