Feed aggregator

SQL ASSERTIONS vs triggers, materialized views with constraints, etc

Tom Kyte - Fri, 2025-05-09 19:06
I attended a Hotsos session by Toon Koppelaars yesterday morning on Semantic Query Optimization. Among other interesting topics, Toon lamented the lack of any DBMS implementing SQL "assertions", by which he meant a database-enforced constraint that encompasses more than a single column or a single tuple (record). The example he gave was "a manager cannot manage more than 2 departments." One should be able to issue a DDL statement such as "CREATE ASSERTION max_manager_departments AS CHECK ..." containing an appropriate SQL condition. But of course no DBMS, including Oracle, allows this.

It seemed to me that these are the sorts of constraints that are usually implemented by the database designer in the form of triggers or materialized views with constraints. (Admittedly, as implemented, most trigger-based constraints fail to account for Oracle's locking mechanisms, but that's an implementation issue.) Here's an example of yours: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:7144179386439

As a practical matter, are there any assertions that cannot be implemented via triggers or constrained materialized views? Or are there, ahem, "rules of thumb" or guidelines as to when one approach is better than another? It would seem to me that a discussion of "what we can implement (and this way is best)" and "what we can't implement" would be helpful. Thanks!
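
One common way to enforce such a rule without triggers is a fast-refresh-on-commit materialized view carrying a check constraint. A minimal sketch follows; the dept table, its mgr_id column, and the object names are illustrative, not from the question:

create materialized view log on dept
  with sequence, rowid (mgr_id) including new values;

create materialized view mgr_dept_counts
  refresh fast on commit
  as select mgr_id, count(*) dept_count
     from   dept
     group  by mgr_id;

-- the "assertion": a commit that leaves any manager with more than 2 departments fails
alter table mgr_dept_counts
  add constraint max_manager_departments check (dept_count <= 2);

Because the materialized view refreshes at commit, the check is effectively evaluated at transaction end, which is when a multi-row rule like this can be tested consistently.
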
Categories: DBA Blogs

TM lock on not-modified table

Tom Kyte - Fri, 2025-05-09 19:06
Hi Tom,

Oracle puts a TM lock on a table even if no rows are affected by a DML statement:

TX_A>create table t (x int);
Table created.

-- just to show there are no locks on the table:
TX_A>alter table t move;
Table altered.

-- in another session (marked as TX_B) we now issue a statement which affects no rows:
TX_B>delete from t;
0 rows deleted.

-- now there's a TM lock:
TX_A>alter table t move;
alter table t move
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified

TX_B>select type, lmode,
  2  decode(lmode,2,'row-share',3,'row-exclusive'),
  3  decode(type,'TM',(select object_name from dba_objects where object_id=id1)) name
  4  from v$lock
  5  where sid = (select sid from v$mystat where rownum=1);

TYPE        LMODE DECODE(LMODE,2,'ROW-SHARE',3,'ROW-EXCLU NAME
------ ---------- --------------------------------------- --------------------
TM              3 row-exclusive                           T

TX_B>commit;
Commit complete.

TX_A>alter table t move;
Table altered.

It seems to me a bit counter-intuitive (which doesn't mean that it's "bad", of course) to retain an unnecessary lock (even a "weak" row-exclusive one) on a not-modified table ... so there's probably a reason that I can't see. Do you happen to know a strong reason for this behaviour - or shall I classify it as "just a quirk"?

For sure it's something to remember while coding - i.e. think about code such as this:

delete from t where ...
if (sql%rowcount > 0) then
   log deletions in logging table
   commit;
end if;

This would leave the TM lock on the table, preventing DDL possibly forever - the commit has to be moved after the if block, most definitely.

Thanks
Alberto
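
For illustration, a minimal sketch of the corrected pattern (the delete_log table and its columns are hypothetical): committing unconditionally, outside the IF, releases the row-exclusive TM lock even when the DELETE touched no rows.

begin
  delete from t where x = :some_value;
  if sql%rowcount > 0 then
    insert into delete_log (tab_name, rows_deleted, logged_at)
    values ('T', sql%rowcount, systimestamp);
  end if;
  commit;  -- outside the IF, so the TM lock is not held indefinitely after a 0-row delete
end;
/
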
Categories: DBA Blogs

Explain plan using DBMS_XPLAN

Tom Kyte - Fri, 2025-05-09 19:06
Hi Tom, I am trying to view the explain plan for a select statement in LiveSQL, built on Oracle version 19c. I am able to view the explain plan result using select * from table(dbms_xplan.display_cursor); but I see "Error: cannot fetch last explain plan from PLAN_TABLE" while using select * from table(dbms_xplan.display); Could you please help me understand?

explain plan for select * from customers where id = 1;
select * from TABLE(DBMS_XPLAN.DISPLAY);

PLAN_TABLE_OUTPUT
Error: cannot fetch last explain plan from PLAN_TABLE

select * from customers where id = 1;
select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
SQL_ID 92qqjqaghv843, child number 1
-------------------------------------
SELECT O.OBJECT_NAME, O.OBJECT_TYPE, O.STATUS, E.TEXT, E.LINE, E.POSITION FROM SYS.DBA_OBJECTS O, SYS.DBA_ERRORS E WHERE O.OBJECT_NAME
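
As context, a minimal sketch of the two usage patterns (the customers table comes from the question; the <sql_id> below is a placeholder). DBMS_XPLAN.DISPLAY reads the plan that EXPLAIN PLAN wrote to PLAN_TABLE in the same session, while DBMS_XPLAN.DISPLAY_CURSOR reads the cursor cache for a statement that has actually been executed - and, in tools that run their own statements in between, the default "last cursor" may not be yours, so naming the SQL_ID explicitly is safer:

-- Pattern 1: EXPLAIN PLAN populates PLAN_TABLE, then DISPLAY reads it back
explain plan for select * from customers where id = 1;
select * from table(dbms_xplan.display);

-- Pattern 2: run the statement, then read its plan from the cursor cache,
-- passing the statement's own SQL_ID rather than relying on "last cursor executed"
select * from customers where id = 1;
select * from table(dbms_xplan.display_cursor(sql_id => '<sql_id>'));
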
Categories: DBA Blogs

Interactive Grid

Tom Kyte - Fri, 2025-05-09 19:06
The error occurs when creating an Interactive Grid in Oracle APEX version 24.2.0; in previous versions it worked without problems: ORA-01400: cannot insert NULL into ("APEX_240200"."WWV_FLOW_IG_REPORTS"."ID")
Categories: DBA Blogs

ASMFD not working when we do an in-place upgrade or migrate to RHEL 7

Tom Kyte - Fri, 2025-05-09 19:06
We are currently in the process of upgrading our Oracle database servers from RHEL 7 to RHEL 8. We are considering two possible approaches:
  • Migrating to a new server built with RHEL 8.
  • Performing an in-place upgrade on the existing server.
In both approaches, we intend to use ASMFD (ASM Filter Driver). However, Oracle Support has advised us to discontinue using ASMFD on RHEL 8, as it is not enabled by default. Instead, they recommend switching to ASMLIB. That said, we see several advantages in continuing to use ASMFD, particularly in terms of I/O filtering and performance benefits. We're curious to know if the claim about ASMFD not being supported on RHEL 8 or RHEL 9 is accurate. If you've successfully implemented ASMFD on RHEL 8 or RHEL 9, we would love to hear about your experience. How did you make it work? Any insights or steps you followed would be greatly appreciated.
Categories: DBA Blogs

Logging Run Controls and Bind Variables for Scheduled PS/Queries

David Kurtz - Thu, 2025-05-08 03:06

This blog proposes additional logging for scheduled PS/Queries so that long-running queries can be reconstructed and analysed.

Previous blog posts have discussed limiting PS/Query runtime with the resource manager (see Management of Long Running PS/Queries Cancelled by Resource Manager CPU Limit).  From 19c, on Engineered Systems only, the 'Oracle Database automatically quarantines the plans for SQL statements terminated by … the Resource Manager for exceeding resource limits'.  SQL Quarantine is enabled by default in Oracle 19c on Exadata (unless patch 30104721 is applied that backports the new 23c parameters, see Oracle Doc ID 2635030.1: 19c New Feature SQL Quarantine - How To Stop Automatic SQL Quarantine).
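
As an aside, a hedged sketch of switching automatic quarantine capture and use off once those backported 23c parameters are available (the parameter names are as given in the referenced note; verify them on your release before relying on this):

-- assumes patch 30104721 (or 23ai) has made these parameters available
alter system set optimizer_capture_sql_quarantine = false scope=both;
alter system set optimizer_use_sql_quarantine     = false scope=both;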

What is the Problem?

SQL Quarantine prevents a query from executing.  Therefore, AWR will not capture the execution plan.  AWR will also purge execution plans where an execution has not been captured within the AWR retention period.  The original long-running query execution that was quarantined, if captured by AWR, will be aged out because it will not execute again.

If we want to investigate PS/Queries that produced execution plans that exceeded the runtime limit and were then quarantined, we need to reproduce the execution plan, either with the EXPLAIN PLAN FOR command or by executing the query in a session where the limited resource manager consumer group does not apply.

However, PS/Queries with bind variables present a challenge.  A PS/Query run with different bind variables can produce different execution plans.  One execution plan might be quarantined and so never complete, while another may complete within an acceptable time.  

In AWR, a plan is only captured once for each statement.  Therefore, it is possible to find one set of bind variables for each plan, although there may be many sets of bind variables that all produce the same execution plan.  However, we cannot obtain Oracle bind variables for quarantined execution plans that did not execute.  To regenerate their execution plans, we need another way to obtain their bind variables.

This problem also occurs more generally where the Diagnostics Pack is not available: without it, long-running queries cannot be reconstructed without additional logging or tracing.

Solution
Scheduled PS/Queries are executed by the PSQUERY application engine.  The name of the query and the bind variables are passed via two run control records.  Users typically reuse an existing run control but provide different bind variable values.  I propose to introduce two tables to hold a copy of the data in these tables for each process instance.
  • PS_QUERY_RUN_CNTRL: Scheduled Query Run Control.  This record identifies the query executed.  Rows in this table will be copied to PS_QRYRUN_CTL_HST.
  • PS_QUERY_RUN_PARM: Scheduled Query Run Parameters.  This record holds the bind variables and the values passed to the query.  The table contains a row for each bind variable for each execution.  Rows in this table will be copied to PS_QRYRUN_PARM_HST.

Two database triggers manage the history tables:

  • A database trigger that fires when the run status of the request is updated to '7' (processing).  It copies rows for the current run control into two corresponding history tables.  Thus, we will have a log of every bind variable for every scheduled query.
  • A second database trigger will fire when a PSQUERY request record is deleted.  It deletes the corresponding rows from these history tables.

When a PS/Query produces a quarantined execution plan, the PSQUERY process terminates with error ORA-56955: quarantined plan used (see Quarantined SQL Plans for PS/Queries).  Now we can obtain the bind variables that resulted in attempts to execute a quarantined query execution plan.

Implementation

The following script (ps_query_run_cntrl_hist_trigger.sql) creates the tables and triggers.  

REM ps_query_run_cntrl_hist_trigger.sql
REM 21.4.2025 - trigger and history tables to capture run controls and bind variables for scheduled PS/Queries
set echo on serveroutput on timi on
clear screen
spool ps_query_run_cntrl_hist_trigger
rollback;

CREATE TABLE SYSADM.PS_QRYRUN_CTL_HST 
  (PRCSINSTANCE INTEGER  DEFAULT 0 NOT NULL,
   OPRID VARCHAR2(30)  DEFAULT ' ' NOT NULL,
   RUN_CNTL_ID VARCHAR2(30)  DEFAULT ' ' NOT NULL,
   DESCR VARCHAR2(30)  DEFAULT ' ' NOT NULL,
   QRYTYPE SMALLINT  DEFAULT 1 NOT NULL,
   PRIVATE_QUERY_FLAG VARCHAR2(1)  DEFAULT 'N' NOT NULL,
   QRYNAME VARCHAR2(30)  DEFAULT ' ' NOT NULL,
   URL VARCHAR2(254)  DEFAULT ' ' NOT NULL,
   ASIAN_FONT_SETTING VARCHAR2(3)  DEFAULT ' ' NOT NULL,
   PTFP_FEED_ID VARCHAR2(30)  DEFAULT ' ' NOT NULL) TABLESPACE PTTBL
/
CREATE UNIQUE INDEX SYSADM.PS_QRYRUN_CTL_HST 
ON SYSADM.PS_QRYRUN_CTL_HST (PRCSINSTANCE) TABLESPACE PSINDEX PARALLEL NOLOGGING
/
ALTER INDEX SYSADM.PS_QRYRUN_CTL_HST NOPARALLEL LOGGING
/
CREATE TABLE SYSADM.PS_QRYRUN_PARM_HST 
  (PRCSINSTANCE INTEGER  DEFAULT 0 NOT NULL,
   OPRID VARCHAR2(30)  DEFAULT ' ' NOT NULL,
   RUN_CNTL_ID VARCHAR2(30)  DEFAULT ' ' NOT NULL,
   BNDNUM SMALLINT  DEFAULT 0 NOT NULL,
   FIELDNAME VARCHAR2(18)  DEFAULT ' ' NOT NULL,
   BNDNAME VARCHAR2(30)  DEFAULT ' ' NOT NULL,
   BNDVALUE CLOB) TABLESPACE PSIMAGE2 
/
CREATE UNIQUE INDEX SYSADM.PS_QRYRUN_PARM_HST 
ON SYSADM.PS_QRYRUN_PARM_HST (PRCSINSTANCE, BNDNUM) TABLESPACE PSINDEX PARALLEL NOLOGGING
/
ALTER INDEX SYSADM.PS_QRYRUN_PARM_HST NOPARALLEL LOGGING
/
  • PSQUERY is not a restartable Application Engine program.  Therefore, there is no risk of duplicate inserts into the history tables.
  • The exception handlers in the triggers deliberately suppress any error, in case an error would otherwise cause the process scheduler request to fail.
CREATE OR REPLACE TRIGGER sysadm.query_run_cntrl_hist_ins
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus ='7' AND old.runstatus != '7' AND new.prcsname = 'PSQUERY' AND new.prcstype = 'Application Engine')
BEGIN
  INSERT INTO PS_QRYRUN_CTL_HST 
  (PRCSINSTANCE, OPRID, RUN_CNTL_ID, DESCR ,QRYTYPE, PRIVATE_QUERY_FLAG, QRYNAME, URL, ASIAN_FONT_SETTING, PTFP_FEED_ID)
  SELECT :new.prcsinstance, OPRID, RUN_CNTL_ID, DESCR ,QRYTYPE, PRIVATE_QUERY_FLAG, QRYNAME, URL, ASIAN_FONT_SETTING, PTFP_FEED_ID 
  FROM ps_query_run_cntrl WHERE oprid = :new.oprid AND run_cntl_id = :new.runcntlid;
  
  INSERT INTO PS_QRYRUN_PARM_HST
  (PRCSINSTANCE, OPRID, RUN_CNTL_ID, BNDNUM, FIELDNAME, BNDNAME, BNDVALUE) 
  SELECT :new.prcsinstance prcsinstance, OPRID, RUN_CNTL_ID, BNDNUM, FIELDNAME, BNDNAME, BNDVALUE
  FROM ps_query_run_parm WHERE oprid = :new.oprid AND run_cntl_id = :new.runcntlid;

  EXCEPTION WHEN OTHERS THEN NULL; --exception deliberately coded to suppress all exceptions 
END;
/

CREATE OR REPLACE TRIGGER sysadm.query_run_cntrl_hist_del
BEFORE DELETE ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (old.prcsname = 'PSQUERY' AND old.prcstype = 'Application Engine')
BEGIN
  DELETE FROM PS_QRYRUN_CTL_HST WHERE prcsinstance = :old.prcsinstance;
  DELETE FROM PS_QRYRUN_PARM_HST WHERE prcsinstance = :old.prcsinstance;

  EXCEPTION WHEN OTHERS THEN NULL; --exception deliberately coded to suppress all exceptions
END;
/ 
show errors

spool off
Example

When a query is scheduled to run on the process scheduler, the bind variables are specified through this generic dialogue.

Scheduled Query Bind Variable Dialogue

Once the PSQUERY process has started (it immediately commits its update to RUNSTATUS), these values are written to the new history tables.

select * from ps_qryrun_ctl_hst;

PRCSINSTANCE OPRID    RUN_CNTL_ID  DESCR                             QRYTYPE P QRYNAME                        URL                                                ASI PTFP_FEED_ID                  
------------ -------- ------------ ------------------------------ ---------- - ------------------------------ -------------------------------------------------- --- ------------------------------
    12345678 ABCDEF   042225       Journal Line Detail - Account           1 N XXX_JRNL_LINE_DTL_ACCT         https://xxxxxxx.yyyyy.com/psp/XXXXXXX/EMPLOYEE/ERP                                   

select * from ps_qryrun_parm_hst;

PRCSINSTANCE OPRID    RUN_CNTL_ID  BNDNUM FIELDNAME          BNDNAME              BNDVALUE                      
------------ -------- ------------ ------ ------------------ -------------------- ------------------------------
    12345678 ABCDEF   042225            1 bind1              BUSINESS_UNIT        354XX 
    12345678 ABCDEF   042225            2 bind2              BUSINESS_UNIT        354XX 
    12345678 ABCDEF   042225            3 FISCAL_YEAR        FISCAL_YEAR          2025 
    12345678 ABCDEF   042225            4 ACCOUNTING_PD_FROM ACCOUNTING_PD_FROM   2 
    12345678 ABCDEF   042225            5 ACCOUNTING_PD_TO   ACCOUNTING_PD_TO     2 
    12345678 ABCDEF   042225            6 bind6              ACCOUNT              23882XXXXX 
    12345678 ABCDEF   042225            7 bind7              ACCOUNT              23882XXXXX 
    12345678 ABCDEF   042225            8 bind8              ALTACCOUNT           23882XXXXX 
    12345678 ABCDEF   042225            9 bind9              ALTACCOUNT           23882XXXXX

Conclusion

If the query is quarantined, PSQUERY will terminate with error ORA-56955: quarantined plan used. The SQL statement can be extracted from the message log, and the execution plan can be generated with the EXPLAIN PLAN FOR command, using the bind variable values captured in the history tables.
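
For example, a minimal query against the history tables created above to pull back the captured bind variables for a given process instance, ready to substitute into the EXPLAIN PLAN FOR run:

select c.qryname, p.bndnum, p.fieldname, p.bndname, p.bndvalue
from   sysadm.ps_qryrun_ctl_hst c
       join sysadm.ps_qryrun_parm_hst p on p.prcsinstance = c.prcsinstance
where  c.prcsinstance = :prcsinstance
order  by p.bndnum;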

Note: The signature of the SQL Quarantine directive is the exact matching signature of the SQL text (it can be generated from the SQL text with dbms_sqltune.sqltext_to_signature).  There can be multiple PLAN_HASH_VALUEs for the same signature (because there can be multiple execution plans for the same SQL). Verify that the FULL_PLAN_HASH_VALUE of the execution plan generated with the captured bind variables corresponds to the PLAN_HASH_VALUE of a SQL Quarantine directive.
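
A hedged sketch of that verification (assuming DBA_SQL_QUARANTINE exposes SIGNATURE and PLAN_HASH_VALUE columns on your release): compute the signature of the regenerated statement and list any quarantine directives recorded against it.

variable sig number
begin
  -- <regenerated SQL text> is a placeholder for the statement rebuilt from the message log
  :sig := dbms_sqltune.sqltext_to_signature(q'[<regenerated SQL text>]');
end;
/
select name, plan_hash_value, enabled, created
from   dba_sql_quarantine
where  signature = :sig;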

Setting up a Local Only SMTP server in Ubuntu

The Anti-Kyte - Tue, 2025-05-06 01:30

If you’re using bash scripts to automate tasks on your Ubuntu desktop but don’t want to have to wade through logs to find out what’s happening, you can setup a Local Only SMTP server and have the scripts mail you their output.
Being Linux, there are several ways to do this.
What follows is how I managed to set this up on my Ubuntu 24.04 desktop (minus all the mistakes and swearing).
Specifically we’ll be looking at :

  • installing Dovecot
  • installing and configuring Postfix
  • installing mailx
  • configuring Thunderbird to handle local emails

As AI appears to be the “blockchain du jour” I thought I should make some attempt to appear up-to-date and relevant. Therefore, various AI entities from Iain M. Banks’ Culture will be making an appearance in what follows…

Software Versions

It’s probably useful to know the versions I’m using for this particular exercise :

  • Ubuntu 24.04.1 LTS
  • Thunderbird 128.10.0esr (64-bit)
  • Postfix 3.8.6
  • Dovecot 2.3.21
  • mailx 8.1.2

If you happen to have any of these installed already, you can check the version of Postfix by running…

postconf mail_version

…Dovecot by running..

dovecot --version

…and mailx by running :

mail -v

The Ubuntu version can be found under Settings/System/About

The Thunderbird version can be found under Help/About.

Before we get to installing or configuring anything, we need to…

Specify a domain for localhost

We need a domain for Postfix. As we’re only flinging traffic around the current machine, it doesn’t have to be known to the wider world, but it does need to point to localhost.

So, in the Terminal :

sudo nano /etc/hosts

…and edit the line for 127.0.0.1 so it includes your chosen domain name.

Conventionally this is something like localhost.com, but “It’s My Party and I’ll Sing If I Want To”…

127.0.0.1 localhost culture.org

Once we’ve saved the file, we can test the new entry by running :

ping -c1 culture.org

PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.062 ms

--- localhost ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
Install Dovecot

Simply install the package by running :

sudo apt-get install dovecot-imapd

The installation will complete automatically after which, in this instance, we don’t need to do any configuration at all.

Install and configure Postfix

You can install postfix by running :

sudo apt-get install postfix

As part of the installation you will be presented with a configuration menu screen :

Using the arrow keys, select Local Only then hit Return

In the next screen set the domain name to the one you setup in /etc/hosts ( in my case that’s culture.org ).

Hit Return and the package installation will complete.

Note that if you need to re-run this configuration you can do so by running :

sudo dpkg-reconfigure postfix

Next we need to create a virtual alias map, which we do by running :

sudo nano /etc/postfix/virtual

and populating this new file with two lines that look like :

@localhost <your-os-user>
@domain.name <your-os-user>

In my case the lines in the file are :

@localhost mike
@culture.org mike

Now we need to tell Postfix to read this file so :

sudo nano /etc/postfix/main.cf

…and add this line at the bottom of the file :

virtual_alias_maps = hash:/etc/postfix/virtual

To activate the mapping :

sudo postmap /etc/postfix/virtual

…and restart the postfix service…

sudo systemctl restart postfix

Once that’s done, we can confirm that the postfix service is running :

sudo systemctl status postfix
Installing mailx

As with dovecot, we don’t need to do any more than install the package :

sudo apt-get install bsd-mailx
Testing the Configuration

Now we have mailx, we can test our configuration :

echo "While you were sleeping, I ran your backup for you. Your welcome" | mail -r sleeper.service@culture.org -s "Good Morning" mike@culture.org

To check if we’ve received the email, run :

cat /var/mail/mike

Actually having to cat the inbox seems a lot of effort.
If I’m going to be on the receiving end of condescending sarcasm from my own laptop I should at least be able to read it from the comfort of an email client.

Thunderbird

Currently, Thunderbird is the default mail client in Ubuntu and comes pre-installed.

If this is the first time you’ve run Thunderbird, you’ll be prompted to setup an account. If not then you can add an account by going to the “hamburger” menu in Thunderbird and selecting Account Settings. On the following screen click the Account Actions drop-down near the bottom of the screen and select Add Mail Account :

Fill in the details (the password is that of your os account) :

…and click Continue.

Thunderbird will now try and pick an appropriate configuration. After thinking about it for a bit it should come up with something like :

…which needs a bit of tweaking, so click the Configure Manually link and make the following changes :

Incoming Server
  Hostname: smtp.localhost
  Port: 143
  Connection Security: STARTTLS
  Authentication Method: Normal password
  Username: your os user (e.g. mike)

Outgoing Server
  Hostname: smtp.localhost
  Port: 25
  Connection Security: STARTTLS
  Authentication Method: Normal password
  Username: your os user (e.g. mike)

If you now click the Re-test button, you should get this reassuring message :

If so, then click Done.

You will be prompted to add a security exception for the Incoming Mail server

Click Confirm Security Exception

NOTE – you may then get two pop-ups, one after the other, prompting you to sign in via google.com. Just dismiss them. This shouldn’t happen when you access your inbox via Thunderbird after this initial configuration session.

You should now see the inbox you’ve setup, complete with the message sent earlier :

You should also be able to read any mail sent to mike@localhost. To test this :

Incidentally, the first time you send a mail, you’ll get prompted to add a security exception for the Outgoing Mail server. Once again, just hit Confirm Security Exception.
Once you do this, you’ll get a message saying that sending the mail failed. Dismiss it and resend and it will work.
Once again, this is a first-time only issue.

After a few seconds, you’ll see the mail appear in the inbox :

As you’d expect, you can’t send mail to an external email address from this account with this configuration :

Sending an email from a Shell Script

The whole point of this exercise was so I could get emails from a shell script. To test this, I’ve created the following file – called funny_it_worked_last_time.sh

#!/bin/bash  
body=$'Disk Usage : \n'
space=$(df -h)
body+="$space"
body+=$'\nTime to tidy up, you messy organic entity!'
echo "$body" | mail -r me.im.counting@culture.org -s "Status Update" mike@culture.org

If I make this script executable and then run it :

chmod u+x funny_it_worked_last_time.sh

. ./funny_it_worked_last_time.sh

…something should be waiting for me in the inbox…

Further Reading

Part of the reason for writing this was that I couldn’t find one place where the instructions were still applicable to the latest versions of the software I used here.
The links I found most useful were :

Finally, for those of a geeky disposition, here’s a list of Culture space craft.

Backup and Restore a Standby Database

Hemant K Chitale - Thu, 2025-05-01 04:44

 I have seen some I.T. managers decide to back up only the Primary Database and not the Standby.  The logic is "if the Storage or Server for the Standby go down, we can rebuild the database from the Primary",  OR "we haven't allocated storage  /  tape drive space at the Standby site",  OR  "our third-party backup tool does not know how to back up a Standby database and handle the warning about Archive Logs that is generated when it issues a PLUS ARCHIVELOG"  (see the warning below when I run the backup command).

Do they factor in the time that is required to run the Backup, Copy and Restore commands,  OR to run the Duplicate command, to rebuild the Standby ?  All that while their Critical database is running without a Standby -- without a D.R. site.

Given a moderately large Database, it can be faster to restore from a "local" Backup at the Standby than to copy / duplicate across the network.  Also, this method does NOT require rebuilding the DataGuard Broker configuration.

Firstly, you CAN back up a Standby even while Recovery (i.e. Redo Apply) is running.  The only catch is that the "PLUS ARCHIVELOG" clause in "BACKUP ... DATABASE PLUS ARCHIVELOG" returns a minor error, because a Standby cannot issue "ALTER SYSTEM ARCHIVE LOG CURRENT" (or "ALTER SYSTEM SWITCH LOGFILE").

Here's my Backup command at the Standby (while Redo Apply -- i.e. Media Recovery -- is running) without issuing a CANCEL RECOVERY.


RMAN> backup as compressed backupset
2> database
3> format '/tmp/STDBY_Backup/DB_DataFiles_%U.bak'
4> plus archivelog
5> format '/tmp/STDBY_Backup/DB_ArchLogs_%U.bak';
....
....
RMAN> backup current controlfile
2> format '/tmp/STDBY_Backup/standby_controlfile.bak';

So, when I run the Backup, it starts off with, and also ends with :
RMAN-06820: warning: failed to archive current log at primary database
cannot connect to remote database
....
....
....
RMAN-06820: warning: failed to archive current log at primary database
cannot connect to remote database
using channel ORA_DISK_1
specification does not match any archived log in the repository
backup cancelled because there are no files to backup


because it cannot issue an "ALTER SYSTEM ARCHIVE LOG CURRENT" -- which can only be done at a Primary.  These warnings do not trouble me.


A LIST BACKUP command at the Standby *does* show Backups created locally (it will not show Backups at the Primary unless I connect to the same Recovery Catalog that is being used by the Primary).  (I have excluded listing each ArchiveLog / Datafile from the output here.)
RMAN> list backup;
list backup;
using target database control file instead of recovery catalog

List of Backup Sets
===================

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
311     44.10M     DISK        00:00:02     01-MAY-25
        BP Key: 311   Status: AVAILABLE  Compressed: YES  Tag: TAG20250501T074832
        Piece Name: /tmp/STDBY_Backup/DB_ArchLogs_ad3objeg_333_1_1.bak

  List of Archived Logs in backup set 311
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    393     11126161   01-MAY-25 11126287   01-MAY-25
  1    394     11126287   01-MAY-25 11127601   01-MAY-25
...
...
  2    338     11126158   01-MAY-25 11126290   01-MAY-25
  2    339     11126290   01-MAY-25 11127596   01-MAY-25

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
312     Full    1.07G      DISK        00:00:57     01-MAY-25
        BP Key: 312   Status: AVAILABLE  Compressed: YES  Tag: TAG20250501T074835
        Piece Name: /tmp/STDBY_Backup/DB_DataFiles_ae3objej_334_1_1.bak
  List of Datafiles in backup set 312
  File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------- ----------- ------ ----
....
....

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
313     Full    831.00M    DISK        00:00:43     01-MAY-25
        BP Key: 313   Status: AVAILABLE  Compressed: YES  Tag: TAG20250501T074835
        Piece Name: /tmp/STDBY_Backup/DB_DataFiles_af3objgk_335_1_1.bak
  List of Datafiles in backup set 313
  Container ID: 3, PDB Name: PDB1
  File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------- ----------- ------ ----
....
....

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
314     Full    807.77M    DISK        00:00:42     01-MAY-25
        BP Key: 314   Status: AVAILABLE  Compressed: YES  Tag: TAG20250501T074835
        Piece Name: /tmp/STDBY_Backup/DB_DataFiles_ag3obji1_336_1_1.bak
  List of Datafiles in backup set 314
  Container ID: 5, PDB Name: TSTPDB
  File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------- ----------- ------ ----
....
....

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
315     Full    807.75M    DISK        00:00:43     01-MAY-25
        BP Key: 315   Status: AVAILABLE  Compressed: YES  Tag: TAG20250501T074835
        Piece Name: /tmp/STDBY_Backup/DB_DataFiles_ah3objje_337_1_1.bak
  List of Datafiles in backup set 315
  Container ID: 2, PDB Name: PDB$SEED
  File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------- ----------- ------ ----
....
....

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
317     Full    19.58M     DISK        00:00:01     01-MAY-25
        BP Key: 317   Status: AVAILABLE  Compressed: NO  Tag: TAG20250501T075522
        Piece Name: /tmp/STDBY_Backup/standby_controlfile.bak
  Standby Control File Included: Ckp SCN: 11128626     Ckp time: 01-MAY-25

RMAN>

So I can confirm that I have *local* backups (including ArchiveLogs present at the Standby and backed up before the Datafile backup begins).  The last ArchiveLog backed up at the Standby is SEQ#394 for Thread#1 and SEQ#339 for Thread#2.
Meanwhile, the Standby has already applied subsequent ArchiveLogs as Recovery had not been cancelled.
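
One way to see how far the Standby had applied (a standard query, shown here as a sketch) is to look at the highest applied sequence per thread:

select thread#, max(sequence#) last_applied_seq
from   v$archived_log
where  applied = 'YES'
group  by thread#
order  by thread#;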

Now I simulate loss / corruption of the filesystem holding the Standby datafiles and controlfiles and attempt a Restore (if the Standby Redo Logs are also lost, I must add them again later before I resume recovery).

I do a SHUTDOWN ABORT at the Standby if the instance seems to be running.  
(not shown here)

First I stop Redo Shipping from the Primary
DGMGRL> connect sys
Password:
Connected to "RACDB"
Connected as SYSDBA.
DGMGRL> EDIT DATABASE 'RACDB' SET STATE='TRANSPORT-OFF';
Succeeded.
DGMGRL>

Next I Restore the *standby* controlfile at my Standby server (note that I connect to "target" and specify "standby controlfile").  Note : If my SPFILE or PFILE is not available at the Standby, I have to restore that as well before I  STARTUP NOMOUNT.
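
As an aside, a hedged sketch of covering that case, assuming the SPFILE is also backed up to the same local destination during the backup run (the piece name below is a placeholder).  At restore time, RMAN can start the instance on a dummy parameter file, restore the SPFILE to its default location, and then be restarted on it :

RMAN> backup spfile format '/tmp/STDBY_Backup/DB_SPFile_%U.bak';
....
RMAN> startup nomount;
RMAN> restore spfile from '/tmp/STDBY_Backup/DB_SPFile_<piece>.bak';
RMAN> startup force nomount;
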
[oracle@stdby ~]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Thu May 1 08:27:23 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup nomount;
startup nomount;
Oracle instance started

Total System Global Area    2147480256 bytes

Fixed Size                     9179840 bytes
Variable Size                486539264 bytes
Database Buffers            1644167168 bytes
Redo Buffers                   7593984 bytes


RMAN> restore standby controlfile from '/tmp/STDBY_Backup/standby_controlfile.bak';
restore standby controlfile from '/tmp/STDBY_Backup/standby_controlfile.bak';
Starting restore at 01-MAY-25
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=310 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/Standby_DB/oradata/control01.ctl
output file name=/Standby_DB/FRA/control02.ctl
Finished restore at 01-MAY-25


RMAN>
I am now ready to CATALOG the Backups and RESTORE the Database
RMAN> alter database mount;
alter database mount;
released channel: ORA_DISK_1
Statement processed


RMAN> catalog start with '/tmp/STDBY_Backup';
catalog start with '/tmp/STDBY_Backup';
Starting implicit crosscheck backup at 01-MAY-25
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=11 device type=DISK
Crosschecked 13 objects
Finished implicit crosscheck backup at 01-MAY-25

Starting implicit crosscheck copy at 01-MAY-25
using channel ORA_DISK_1
Finished implicit crosscheck copy at 01-MAY-25

searching for all files in the recovery area
cataloging files...
no files cataloged

searching for all files that match the pattern /tmp/STDBY_Backup

List of Files Unknown to the Database
=====================================
File Name: /tmp/STDBY_Backup/standby_controlfile.bak

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /tmp/STDBY_Backup/standby_controlfile.bak


RMAN>

In this case, the Standby Controlfile backup was taken *after* the Datafile and ArchiveLog backups, so this Controlfile is already "aware" of the backups (they are already included in the controlfile).  Nevertheless, I can do some verification : (I have excluded listing each ArchiveLog / Datafile from the output here)
RMAN> list backup ;
list backup ;

List of Backup Sets
===================


BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
311     44.10M     DISK        00:00:02     01-MAY-25
        BP Key: 311   Status: AVAILABLE  Compressed: YES  Tag: TAG20250501T074832
        Piece Name: /tmp/STDBY_Backup/DB_ArchLogs_ad3objeg_333_1_1.bak

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
312     Full    1.07G      DISK        00:00:57     01-MAY-25

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
313     Full    831.00M    DISK        00:00:43     01-MAY-25

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
314     Full    807.77M    DISK        00:00:42     01-MAY-25

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
315     Full    807.75M    DISK        00:00:43     01-MAY-25

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
316     Full    19.61M     DISK        00:00:01     01-MAY-25

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
317     Full    19.58M     DISK        00:00:01     01-MAY-25
        BP Key: 317   Status: AVAILABLE  Compressed: NO  Tag: TAG20250501T075522
        Piece Name: /tmp/STDBY_Backup/standby_controlfile.bak
  Standby Control File Included: Ckp SCN: 11128626     Ckp time: 01-MAY-25


RMAN>

For good measure, I can also verify that this "database" (only the controlfile is presently restored) is a *Standby Database*  (whether a database is a Primary or a Standby is information in the *Controlfile*, not in the Datafiles)
RMAN> exit
exit

RMAN Client Diagnostic Trace file : /u01/app/oracle/diag/clients/user_oracle/host_4144547424_110/trace/ora_4560_140406053321216.trc

Recovery Manager complete.
[oracle@stdby ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Thu May 1 08:37:54 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.25.0.0.0

SQL> select open_mode, database_role from v$database;

OPEN_MODE            DATABASE_ROLE
-------------------- ----------------
MOUNTED              PHYSICAL STANDBY

SQL>

I can return to RMAN and RESTORE the Database. (I still invoke RMAN to connect to "target", not "auxiliary")
SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.25.0.0.0
[oracle@stdby ~]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Thu May 1 08:40:32 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: RACDB (DBID=1162136313, not open)

RMAN> 
RMAN> restore database;
restore database;
Starting restore at 01-MAY-25
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=2 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /Standby_DB/oradata/STDBY/datafile/o1_mf_system_m33j9fqn_.dbf
...
...
...
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00005 to /Standby_DB/oradata/STDBY/14769E258FBB5FD8E0635A38A8C09D43/datafile/o1_mf_system_m33jb79n_.dbf
channel ORA_DISK_1: restoring datafile 00006 to /Standby_DB/oradata/STDBY/14769E258FBB5FD8E0635A38A8C09D43/datafile/o1_mf_sysaux_m33jbbgz_.dbf
channel ORA_DISK_1: restoring datafile 00008 to /Standby_DB/oradata/STDBY/14769E258FBB5FD8E0635A38A8C09D43/datafile/o1_mf_undotbs1_m33jbgrs_.dbf
channel ORA_DISK_1: reading from backup piece /tmp/STDBY_Backup/DB_DataFiles_ah3objje_337_1_1.bak
channel ORA_DISK_1: piece handle=/tmp/STDBY_Backup/DB_DataFiles_ah3objje_337_1_1.bak tag=TAG20250501T074835
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:45
Finished restore at 01-MAY-25


RMAN>

Next, I restore the ArchiveLogs that I have in the local backup, instead of having to wait for them to be shipped from the Primary during the Recover phase.
RMAN> restore archivelog from time "trunc(sysdate)";
restore archivelog from time "trunc(sysdate)";
Starting restore at 01-MAY-25
using channel ORA_DISK_1

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=391
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=392
channel ORA_DISK_1: restoring archived log
archived log thread=2 sequence=336
channel ORA_DISK_1: restoring archived log
archived log thread=2 sequence=337
channel ORA_DISK_1: restoring archived log
archived log thread=2 sequence=338
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=393
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=394
channel ORA_DISK_1: restoring archived log
archived log thread=2 sequence=339
channel ORA_DISK_1: reading from backup piece /tmp/STDBY_Backup/DB_ArchLogs_ad3objeg_333_1_1.bak
channel ORA_DISK_1: piece handle=/tmp/STDBY_Backup/DB_ArchLogs_ad3objeg_333_1_1.bak tag=TAG20250501T074832
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
Finished restore at 01-MAY-25


RMAN>
RMAN>  list archivelog all completed after "trunc(sysdate)";
 list archivelog all completed after "trunc(sysdate)";
List of Archived Log Copies for database with db_unique_name STDBY
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - ---------
675     1    391     A 27-APR-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_391_n16fcchs_.arc

667     1    391     A 27-APR-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_391_n169cng5_.arc

682     1    392     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_392_n16fcbh7_.arc

670     1    392     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_392_n169fh7s_.arc

678     1    393     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_393_n16fcckd_.arc

671     1    393     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_393_n169g77l_.arc

677     1    394     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_394_n16fccjb_.arc

674     1    394     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_394_n169my72_.arc

680     2    336     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_336_n16fccnv_.arc

668     2    336     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_336_n169d0fm_.arc

681     2    337     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_337_n16fcbhy_.arc

669     2    337     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_337_n169fh6j_.arc

679     2    338     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_338_n16fccm6_.arc

672     2    338     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_338_n169g790_.arc

676     2    339     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_339_n16fcchw_.arc

673     2    339     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_339_n169mxfp_.arc



RMAN>
(the output shows duplicate entries if either the ArchiveLogs were already present at the Standby OR the Restore was executed twice)

So, I also have the ArchiveLogs now. 

Next, I recreate the Standby Logs (one set for each Thread in the Primary, and at the same size as the Online Logs at the Primary); as shown below, I first have to drop the old entries.
RMAN> exit
exit

RMAN Client Diagnostic Trace file : /u01/app/oracle/diag/clients/user_oracle/host_4144547424_110/trace/ora_5380_139777366395392.trc

Recovery Manager complete.
[oracle@stdby ~]$ sqlplus / as sysdba

SQL> select thread#, group# from v$standby_log;

   THREAD#     GROUP#
---------- ----------
         1          5
         2          6
         0          7
         1          8
         1          9
         1         10
         2         11
         2         12

8 rows selected.

SQL> alter database drop standby logfile group 5;

Database altered.

SQL> alter database drop standby logfile group 6;

Database altered.

SQL> alter database drop standby logfile group 7;

Database altered.

SQL> alter database drop standby logfile group 8;
alter database drop standby logfile group 8
*
ERROR at line 1:
ORA-00313: open failed for members of log group 8 of thread 1
ORA-00312: online log 8 thread 1: '/Standby_DB/FRA/STDBY/onlinelog/o1_mf_8_mb6rdbos_.log'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
ORA-00312: online log 8 thread 1: '/Standby_DB/oradata/STDBY/onlinelog/o1_mf_8_mb6rd9h8_.log'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7


SQL> alter database drop standby logfile group 9;

Database altered.

SQL> alter database drop standby logfile group 10;

Database altered.

SQL> alter database drop standby logfile group 11;
alter database drop standby logfile group 11
*
ERROR at line 1:
ORA-00313: open failed for members of log group 11 of thread 2
ORA-00312: online log 11 thread 2: '/Standby_DB/FRA/STDBY/onlinelog/o1_mf_11_mb6rf8ob_.log'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
ORA-00312: online log 11 thread 2: '/Standby_DB/oradata/STDBY/onlinelog/o1_mf_11_mb6rf7hs_.log'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7


SQL> alter database drop standby logfile group 12;

Database altered.

SQL>
SQL> select thread#, group# from v$standby_log;

   THREAD#     GROUP#
---------- ----------
         1          8
         2         11

SQL>
SQL> alter database clear logfile group 8;

Database altered.

SQL> alter database clear logfile group 11;

Database altered.

SQL>
SQL> alter database drop standby logfile group 8;

Database altered.

SQL>  alter database drop standby logfile group 11;

Database altered.

SQL> select thread#, group# from v$standby_log;

no rows selected

SQL>
SQL> alter database add standby logfile thread 1 size 512M;

Database altered.

SQL> alter database add standby logfile thread 1 size 512M;

Database altered.

SQL> alter database add standby logfile thread 2 size 512M;

Database altered.

SQL> alter database add standby logfile thread 2 size 512M;

Database altered.

SQL> alter database add standby logfile thread 2 size 512M;

Database altered.

SQL>
SQL> select thread#, group# from v$standby_log order by 1,2;

   THREAD#     GROUP#
---------- ----------
         1          5
         1          6
         1          7
         2          8
         2          9
         2         10

6 rows selected.

SQL>
I have to clear and then drop (and later recreate) one Standby Log of each Thread -- the ones that were in use just before all the files were lost -- because the controlfile expected Group 8 and Group 11 to be present. These were the entries in the alert log for the last set of Recover commands before the storage was lost :
2025-05-01T08:10:41.554409+00:00
Recovery of Online Redo Log: Thread 2 Group 11 Seq 343 Reading mem 0
  Mem# 0: /Standby_DB/oradata/STDBY/onlinelog/o1_mf_11_mb6rf7hs_.log
  Mem# 1: /Standby_DB/FRA/STDBY/onlinelog/o1_mf_11_mb6rf8ob_.log
2025-05-01T08:10:41.557828+00:00
ARC1 (PID:1813): Archived Log entry 680 added for B-1164519547.T-1.S-397 LOS:0x0000000000a9f8a3 NXS:0x0000000000a9f8d2 NAB:12 ID 0x46c5be03 LAD:1
2025-05-01T08:10:41.563027+00:00
 rfs (PID:1825): Selected LNO:8 for T-1.S-398 dbid 1162136313 branch 1164519547
2025-05-01T08:10:41.642227+00:00
PR00 (PID:1863): Media Recovery Waiting for T-1.S-398 (in transit)
2025-05-01T08:10:41.642508+00:00
Recovery of Online Redo Log: Thread 1 Group 8 Seq 398 Reading mem 0
  Mem# 0: /Standby_DB/oradata/STDBY/onlinelog/o1_mf_8_mb6rd9h8_.log
  Mem# 1: /Standby_DB/FRA/STDBY/onlinelog/o1_mf_8_mb6rdbos_.log
2025-05-01T08:14:02.648081+00:00
Shutting down ORACLE instance (abort) (OS id: 3584)

Now I can begin Recovery of the Standby
SQL> alter database recover managed standby database using current logfile disconnect from session;

Database altered.

SQL>
SQL> shutdown immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> 
SQL> startup mount;
ORACLE instance started.

Total System Global Area 2147480256 bytes
Fixed Size                  9179840 bytes
Variable Size             486539264 bytes
Database Buffers         1644167168 bytes
Redo Buffers                7593984 bytes
Database mounted.
SQL> alter database recover managed standby database using current logfile disconnect from session;

Database altered.

SQL> exit



I resume Redo Shipping from the Primary
[oracle@srv1 ~]$ dgmgrl
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Thu May 1 09:11:56 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys
Password:
Connected to "RACDB"
Connected as SYSDBA.
DGMGRL> EDIT DATABASE 'RACDB' SET STATE='TRANSPORT-ON';
Succeeded.
DGMGRL>
DGMGRL> show configuration;

Configuration - racdb_dg

  Protection Mode: MaxPerformance
  Members:
  racdb - Primary database
    stdby - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 34 seconds ago)

DGMGRL> show configuration lag;

Configuration - racdb_dg

  Protection Mode: MaxPerformance
  Members:
  racdb - Primary database
    stdby - Physical standby database
            Transport Lag:      0 seconds (computed 1 second ago)
            Apply Lag:          0 seconds (computed 1 second ago)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 37 seconds ago)

DGMGRL>
Note : I have to wait from a few seconds to a few minutes for the SHOW CONFIGURATION and SHOW CONFIGURATION LAG commands to return the correct information.  Initially, they may show errors, but once the Primary and Standby are "talking to each other", these errors clear.

Now my Standby is syncing with the Primary
2025-05-01T09:19:30.530588+00:00
Recovery of Online Redo Log: Thread 2 Group 8 Seq 344 Reading mem 0
  Mem# 0: /Standby_DB/oradata/STDBY/onlinelog/o1_mf_8_n16gb9wm_.log
  Mem# 1: /Standby_DB/FRA/STDBY/onlinelog/o1_mf_8_n16gbb4m_.log
2025-05-01T09:19:30.611573+00:00
 rfs (PID:7642): krsr_rfs_atc: Identified database type as 'PHYSICAL': Client is ASYNC (PID:11557)
2025-05-01T09:19:30.623795+00:00
 rfs (PID:7642): Selected LNO:5 for T-1.S-399 dbid 1162136313 branch 1164519547
2025-05-01T09:19:30.631133+00:00
PR00 (PID:7486): Media Recovery Waiting for T-1.S-399 (in transit)
2025-05-01T09:19:30.631475+00:00
Recovery of Online Redo Log: Thread 1 Group 5 Seq 399 Reading mem 0
  Mem# 0: /Standby_DB/oradata/STDBY/onlinelog/o1_mf_5_n16g90qv_.log
  Mem# 1: /Standby_DB/FRA/STDBY/onlinelog/o1_mf_5_n16g910n_.log
2025-05-01T09:20:51.263052+00:00
ARC2 (PID:7470): Archived Log entry 691 added for B-1164519547.T-2.S-344 LOS:0x0000000000aa394b NXS:0x0000000000aa3b02 NAB:102 ID 0x46c5be03 LAD:1
2025-05-01T09:20:51.274060+00:00
 rfs (PID:7640): Selected LNO:8 for T-2.S-345 dbid 1162136313 branch 1164519547
2025-05-01T09:20:51.285312+00:00
PR00 (PID:7486): Media Recovery Log /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_344_n16h7m85_.arc
PR00 (PID:7486): Media Recovery Waiting for T-2.S-345 (in transit)
2025-05-01T09:20:51.387005+00:00
Recovery of Online Redo Log: Thread 2 Group 8 Seq 345 Reading mem 0
  Mem# 0: /Standby_DB/oradata/STDBY/onlinelog/o1_mf_8_n16gb9wm_.log
  Mem# 1: /Standby_DB/FRA/STDBY/onlinelog/o1_mf_8_n16gbb4m_.log
2025-05-01T09:20:51.433894+00:00
ARC0 (PID:7462): Archived Log entry 692 added for B-1164519547.T-1.S-399 LOS:0x0000000000aa394e NXS:0x0000000000aa3b06 NAB:265 ID 0x46c5be03 LAD:1
2025-05-01T09:20:51.445431+00:00
 rfs (PID:7642): Selected LNO:5 for T-1.S-400 dbid 1162136313 branch 1164519547
2025-05-01T09:20:51.514317+00:00
PR00 (PID:7486): Media Recovery Waiting for T-1.S-400 (in transit)
2025-05-01T09:20:51.514664+00:00
Recovery of Online Redo Log: Thread 1 Group 5 Seq 400 Reading mem 0
  Mem# 0: /Standby_DB/oradata/STDBY/onlinelog/o1_mf_5_n16g90qv_.log
  Mem# 1: /Standby_DB/FRA/STDBY/onlinelog/o1_mf_5_n16g910n_.log

 
I see that the SEQ# have already advanced to 399 and 345 for Thread 1 and 2 respectively.


Categories: DBA Blogs

400 Bad Request Request Header Or Cookie Too Large

Tom Kyte - Thu, 2025-05-01 03:31
After accessing a few questions on Ask TOM, I'm getting an error: "400 Bad Request - Request Header Or Cookie Too Large". This "solves itself" if I try later - I don't know exactly how much later - it typically works the next day again, but it doesn't work immediately if I refresh the page. I can also manually delete the cookies for the site, which works, but only until I've read a few questions/answers again. I'd like to find a way to solve this permanently.
Categories: DBA Blogs
