DBA Blogs

Backup and Restore a Standby Database

Hemant K Chitale - Thu, 2025-05-01 04:44

 I have seen some I.T. managers who decide to backup only the Primary Database and not the Standby. The logic is: "if the Storage or Server for the Standby goes down, we can rebuild the database from the Primary", OR "we haven't allocated storage / tape drive space at the Standby site", OR "our third-party backup tool does not know how to backup a Standby database and handle the warning about Archive Logs that is generated when it issues a PLUS ARCHIVELOG clause" (see the warning below when I run the backup command).

Do they factor in the time required to run the Backup, Copy and Restore commands, OR to run the Duplicate command, to rebuild the Standby?  All the while, their critical database is running without a Standby -- without a D.R. site.

Given a moderately large Database, it can be faster to restore from a "local" Backup at the Standby than to copy / duplicate across the network.  Also, this method does NOT require rebuilding the DataGuard Broker configuration.

Firstly, you CAN backup a Standby even while Recovery (i.e. Redo Apply) is running.  The only catch is that the "PLUS ARCHIVELOG" clause in "BACKUP ... DATABASE PLUS ARCHIVELOG" returns a minor warning, because a Standby cannot issue "ALTER SYSTEM ARCHIVE LOG CURRENT" (or "ALTER SYSTEM SWITCH LOGFILE").

Here's my Backup command at the Standby (while Redo Apply -- i.e. Media Recovery -- is running) without issuing a CANCEL RECOVERY.


RMAN> backup as compressed backupset
2> database
3> format '/tmp/STDBY_Backup/DB_DataFiles_%U.bak'
4> plus archivelog
5> format '/tmp/STDBY_Backup/DB_ArchLogs_%U.bak';
....
....
RMAN> backup current controlfile
2> format '/tmp/STDBY_Backup/standby_controlfile.bak';

So, when I run the Backup, it starts off with, and also ends with:
RMAN-06820: warning: failed to archive current log at primary database
cannot connect to remote database
....
....
....
RMAN-06820: warning: failed to archive current log at primary database
cannot connect to remote database
using channel ORA_DISK_1
specification does not match any archived log in the repository
backup cancelled because there are no files to backup


because the Standby cannot issue an "ALTER SYSTEM ARCHIVE LOG CURRENT" command -- that can only be done at a Primary.  These warnings do not trouble me.


A LIST BACKUP command at the Standby *does* show Backups created locally (it will not show Backups at the Primary, unless I connect to the same Recovery Catalog that is being used by the Primary).  (I have excluded the listing of each ArchiveLog / Datafile from the output here.)
RMAN> list backup;
list backup;
using target database control file instead of recovery catalog

List of Backup Sets
===================

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
311     44.10M     DISK        00:00:02     01-MAY-25
        BP Key: 311   Status: AVAILABLE  Compressed: YES  Tag: TAG20250501T074832
        Piece Name: /tmp/STDBY_Backup/DB_ArchLogs_ad3objeg_333_1_1.bak

  List of Archived Logs in backup set 311
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    393     11126161   01-MAY-25 11126287   01-MAY-25
  1    394     11126287   01-MAY-25 11127601   01-MAY-25
...
...
  2    338     11126158   01-MAY-25 11126290   01-MAY-25
  2    339     11126290   01-MAY-25 11127596   01-MAY-25

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
312     Full    1.07G      DISK        00:00:57     01-MAY-25
        BP Key: 312   Status: AVAILABLE  Compressed: YES  Tag: TAG20250501T074835
        Piece Name: /tmp/STDBY_Backup/DB_DataFiles_ae3objej_334_1_1.bak
  List of Datafiles in backup set 312
  File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------- ----------- ------ ----
....
....

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
313     Full    831.00M    DISK        00:00:43     01-MAY-25
        BP Key: 313   Status: AVAILABLE  Compressed: YES  Tag: TAG20250501T074835
        Piece Name: /tmp/STDBY_Backup/DB_DataFiles_af3objgk_335_1_1.bak
  List of Datafiles in backup set 313
  Container ID: 3, PDB Name: PDB1
  File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------- ----------- ------ ----
....
....

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
314     Full    807.77M    DISK        00:00:42     01-MAY-25
        BP Key: 314   Status: AVAILABLE  Compressed: YES  Tag: TAG20250501T074835
        Piece Name: /tmp/STDBY_Backup/DB_DataFiles_ag3obji1_336_1_1.bak
  List of Datafiles in backup set 314
  Container ID: 5, PDB Name: TSTPDB
  File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------- ----------- ------ ----
....
....

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
315     Full    807.75M    DISK        00:00:43     01-MAY-25
        BP Key: 315   Status: AVAILABLE  Compressed: YES  Tag: TAG20250501T074835
        Piece Name: /tmp/STDBY_Backup/DB_DataFiles_ah3objje_337_1_1.bak
  List of Datafiles in backup set 315
  Container ID: 2, PDB Name: PDB$SEED
  File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------- ----------- ------ ----
....
....

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
317     Full    19.58M     DISK        00:00:01     01-MAY-25
        BP Key: 317   Status: AVAILABLE  Compressed: NO  Tag: TAG20250501T075522
        Piece Name: /tmp/STDBY_Backup/standby_controlfile.bak
  Standby Control File Included: Ckp SCN: 11128626     Ckp time: 01-MAY-25

RMAN>

So I can confirm that I have *local* backups (including the ArchiveLogs present at the Standby, backed up before the Datafile backup begins).  The last ArchiveLog backed up at the Standby is SEQ#394 for Thread#1 and SEQ#339 for Thread#2.
Meanwhile, the Standby has already applied subsequent ArchiveLogs, as Recovery had not been cancelled.

Now I simulate loss / corruption of the filesystem holding the Standby datafiles and controlfiles and attempt a Restore (if the Standby Redo Logs are also lost, I must add them again later before I resume recovery).

I do a SHUTDOWN ABORT at the Standby if the instance seems to be running
(not shown here)
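That step is not part of the captured session; as a minimal sketch, it is simply:

```sql
-- At the Standby: abort the instance; a clean shutdown is not possible
-- (and not useful) once the datafiles and controlfiles are gone.
SQL> SHUTDOWN ABORT;
```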

First I stop Redo Shipping from the Primary
DGMGRL> connect sys
Password:
Connected to "RACDB"
Connected as SYSDBA.
DGMGRL> EDIT DATABASE 'RACDB' SET STATE='TRANSPORT-OFF';
Succeeded.
DGMGRL>

Next, I Restore the *standby* controlfile at my Standby server (note that I connect to "target" and specify "standby controlfile").  Note: if my SPFILE or PFILE is not available at the Standby, I have to restore that as well, before I STARTUP NOMOUNT.
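If the SPFILE did have to be restored first, a minimal sketch might look like the following (no SPFILE backup is shown in this walkthrough, so the backup piece name below is purely illustrative; the DBID is the one reported for this database later in the session):

```sql
-- Hypothetical sketch: only needed if the SPFILE is also lost.
-- '/tmp/STDBY_Backup/spfile_backup.bak' is an illustrative name, not a real piece.
RMAN> SET DBID 1162136313;
RMAN> STARTUP NOMOUNT FORCE;    -- starts the instance with a dummy parameter file
RMAN> RESTORE SPFILE FROM '/tmp/STDBY_Backup/spfile_backup.bak';
RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP NOMOUNT;          -- now running on the restored SPFILE
```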
[oracle@stdby ~]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Thu May 1 08:27:23 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup nomount;
startup nomount;
Oracle instance started

Total System Global Area    2147480256 bytes

Fixed Size                     9179840 bytes
Variable Size                486539264 bytes
Database Buffers            1644167168 bytes
Redo Buffers                   7593984 bytes


RMAN> restore standby controlfile from '/tmp/STDBY_Backup/standby_controlfile.bak';
restore standby controlfile from '/tmp/STDBY_Backup/standby_controlfile.bak';
Starting restore at 01-MAY-25
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=310 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/Standby_DB/oradata/control01.ctl
output file name=/Standby_DB/FRA/control02.ctl
Finished restore at 01-MAY-25


RMAN>
I am now ready to CATALOG the Backups and RESTORE the Database
RMAN> alter database mount;
alter database mount;
released channel: ORA_DISK_1
Statement processed


RMAN> catalog start with '/tmp/STDBY_Backup';
catalog start with '/tmp/STDBY_Backup';
Starting implicit crosscheck backup at 01-MAY-25
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=11 device type=DISK
Crosschecked 13 objects
Finished implicit crosscheck backup at 01-MAY-25

Starting implicit crosscheck copy at 01-MAY-25
using channel ORA_DISK_1
Finished implicit crosscheck copy at 01-MAY-25

searching for all files in the recovery area
cataloging files...
no files cataloged

searching for all files that match the pattern /tmp/STDBY_Backup

List of Files Unknown to the Database
=====================================
File Name: /tmp/STDBY_Backup/standby_controlfile.bak

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /tmp/STDBY_Backup/standby_controlfile.bak


RMAN>

In this case, the Standby Controlfile backup was taken *after* the Datafile and ArchiveLog backups, so this Controlfile is already "aware" of those backups (they are already recorded in the controlfile).  Nevertheless, I can do some verification.  (I have excluded the listing of each ArchiveLog / Datafile from the output here.)
RMAN> list backup ;
list backup ;

List of Backup Sets
===================


BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
311     44.10M     DISK        00:00:02     01-MAY-25
        BP Key: 311   Status: AVAILABLE  Compressed: YES  Tag: TAG20250501T074832
        Piece Name: /tmp/STDBY_Backup/DB_ArchLogs_ad3objeg_333_1_1.bak

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
312     Full    1.07G      DISK        00:00:57     01-MAY-25

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
313     Full    831.00M    DISK        00:00:43     01-MAY-25

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
314     Full    807.77M    DISK        00:00:42     01-MAY-25

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
315     Full    807.75M    DISK        00:00:43     01-MAY-25

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
316     Full    19.61M     DISK        00:00:01     01-MAY-25

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
317     Full    19.58M     DISK        00:00:01     01-MAY-25
        BP Key: 317   Status: AVAILABLE  Compressed: NO  Tag: TAG20250501T075522
        Piece Name: /tmp/STDBY_Backup/standby_controlfile.bak
  Standby Control File Included: Ckp SCN: 11128626     Ckp time: 01-MAY-25


RMAN>

For good measure, I can also verify that this "database" (only the controlfile is presently restored) is a *Standby Database*.  (Whether a database is a Primary or a Standby is information held in the *Controlfile*, not in the Datafiles.)
RMAN> exit
exit

RMAN Client Diagnostic Trace file : /u01/app/oracle/diag/clients/user_oracle/host_4144547424_110/trace/ora_4560_140406053321216.trc

Recovery Manager complete.
[oracle@stdby ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Thu May 1 08:37:54 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.25.0.0.0

SQL> select open_mode, database_role from v$database;

OPEN_MODE            DATABASE_ROLE
-------------------- ----------------
MOUNTED              PHYSICAL STANDBY

SQL>

I can return to RMAN and RESTORE the Database.  (I still invoke RMAN to connect to "target", not "auxiliary".)
SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.25.0.0.0
[oracle@stdby ~]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Thu May 1 08:40:32 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: RACDB (DBID=1162136313, not open)

RMAN> 
RMAN> restore database;
restore database;
Starting restore at 01-MAY-25
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=2 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /Standby_DB/oradata/STDBY/datafile/o1_mf_system_m33j9fqn_.dbf
...
...
...
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00005 to /Standby_DB/oradata/STDBY/14769E258FBB5FD8E0635A38A8C09D43/datafile/o1_mf_system_m33jb79n_.dbf
channel ORA_DISK_1: restoring datafile 00006 to /Standby_DB/oradata/STDBY/14769E258FBB5FD8E0635A38A8C09D43/datafile/o1_mf_sysaux_m33jbbgz_.dbf
channel ORA_DISK_1: restoring datafile 00008 to /Standby_DB/oradata/STDBY/14769E258FBB5FD8E0635A38A8C09D43/datafile/o1_mf_undotbs1_m33jbgrs_.dbf
channel ORA_DISK_1: reading from backup piece /tmp/STDBY_Backup/DB_DataFiles_ah3objje_337_1_1.bak
channel ORA_DISK_1: piece handle=/tmp/STDBY_Backup/DB_DataFiles_ah3objje_337_1_1.bak tag=TAG20250501T074835
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:45
Finished restore at 01-MAY-25


RMAN>

Next, I restore the ArchiveLogs that I have in the local backup, instead of having to wait for them to be shipped from the Primary during the Recover phase.
RMAN> restore archivelog from time "trunc(sysdate)";
restore archivelog from time "trunc(sysdate)";
Starting restore at 01-MAY-25
using channel ORA_DISK_1

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=391
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=392
channel ORA_DISK_1: restoring archived log
archived log thread=2 sequence=336
channel ORA_DISK_1: restoring archived log
archived log thread=2 sequence=337
channel ORA_DISK_1: restoring archived log
archived log thread=2 sequence=338
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=393
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=394
channel ORA_DISK_1: restoring archived log
archived log thread=2 sequence=339
channel ORA_DISK_1: reading from backup piece /tmp/STDBY_Backup/DB_ArchLogs_ad3objeg_333_1_1.bak
channel ORA_DISK_1: piece handle=/tmp/STDBY_Backup/DB_ArchLogs_ad3objeg_333_1_1.bak tag=TAG20250501T074832
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
Finished restore at 01-MAY-25


RMAN>
RMAN>  list archivelog all completed after "trunc(sysdate)";
 list archivelog all completed after "trunc(sysdate)";
List of Archived Log Copies for database with db_unique_name STDBY
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - ---------
675     1    391     A 27-APR-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_391_n16fcchs_.arc

667     1    391     A 27-APR-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_391_n169cng5_.arc

682     1    392     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_392_n16fcbh7_.arc

670     1    392     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_392_n169fh7s_.arc

678     1    393     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_393_n16fcckd_.arc

671     1    393     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_393_n169g77l_.arc

677     1    394     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_394_n16fccjb_.arc

674     1    394     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_1_394_n169my72_.arc

680     2    336     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_336_n16fccnv_.arc

668     2    336     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_336_n169d0fm_.arc

681     2    337     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_337_n16fcbhy_.arc

669     2    337     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_337_n169fh6j_.arc

679     2    338     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_338_n16fccm6_.arc

672     2    338     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_338_n169g790_.arc

676     2    339     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_339_n16fcchw_.arc

673     2    339     A 01-MAY-25
        Name: /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_339_n169mxfp_.arc



RMAN>
(The output shows duplicate entries because either the ArchiveLogs were already present at the Standby, OR the Restore was executed twice.)

So, I also have the ArchiveLogs now. 

I re-add the Standby Logs first (one set for each Thread in the Primary, at the same size as the Online Logs at the Primary).
RMAN> exit
exit

RMAN Client Diagnostic Trace file : /u01/app/oracle/diag/clients/user_oracle/host_4144547424_110/trace/ora_5380_139777366395392.trc

Recovery Manager complete.
[oracle@stdby ~]$ sqlplus / as sysdba

SQL> select thread#, group# from v$standby_log;

   THREAD#     GROUP#
---------- ----------
         1          5
         2          6
         0          7
         1          8
         1          9
         1         10
         2         11
         2         12

8 rows selected.

SQL> alter database drop standby logfile group 5;

Database altered.

SQL> alter database drop standby logfile group 6;

Database altered.

SQL> alter database drop standby logfile group 7;

Database altered.

SQL> alter database drop standby logfile group 8;
alter database drop standby logfile group 8
*
ERROR at line 1:
ORA-00313: open failed for members of log group 8 of thread 1
ORA-00312: online log 8 thread 1: '/Standby_DB/FRA/STDBY/onlinelog/o1_mf_8_mb6rdbos_.log'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
ORA-00312: online log 8 thread 1: '/Standby_DB/oradata/STDBY/onlinelog/o1_mf_8_mb6rd9h8_.log'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7


SQL> alter database drop standby logfile group 9;

Database altered.

SQL> alter database drop standby logfile group 10;

Database altered.

SQL> alter database drop standby logfile group 11;
alter database drop standby logfile group 11
*
ERROR at line 1:
ORA-00313: open failed for members of log group 11 of thread 2
ORA-00312: online log 11 thread 2: '/Standby_DB/FRA/STDBY/onlinelog/o1_mf_11_mb6rf8ob_.log'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
ORA-00312: online log 11 thread 2: '/Standby_DB/oradata/STDBY/onlinelog/o1_mf_11_mb6rf7hs_.log'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7


SQL> alter database drop standby logfile group 12;

Database altered.

SQL>
SQL> select thread#, group# from v$standby_log;

   THREAD#     GROUP#
---------- ----------
         1          8
         2         11

SQL>
SQL> alter database clear logfile group 8;

Database altered.

SQL> alter database clear logfile group 11;

Database altered.

SQL>
SQL> alter database drop standby logfile group 8;

Database altered.

SQL>  alter database drop standby logfile group 11;

Database altered.

SQL> select thread#, group# from v$standby_log;

no rows selected

SQL>
SQL> alter database add standby logfile thread 1 size 512M;

Database altered.

SQL> alter database add standby logfile thread 1 size 512M;

Database altered.

SQL> alter database add standby logfile thread 2 size 512M;

Database altered.

SQL> alter database add standby logfile thread 2 size 512M;

Database altered.

SQL> alter database add standby logfile thread 2 size 512M;

Database altered.

SQL>
SQL> select thread#, group# from v$standby_log order by 1,2;

   THREAD#     GROUP#
---------- ----------
         1          5
         1          6
         1          7
         2          8
         2          9
         2         10

6 rows selected.

SQL>
I have to CLEAR, and then DROP and recreate, the one Standby Log of each Thread that was in use just before all the files were lost -- the controlfile expected Group 8 and Group 11 to be present.  These were the entries in the alert log for the last set of Recover operations before the storage was lost:
2025-05-01T08:10:41.554409+00:00
Recovery of Online Redo Log: Thread 2 Group 11 Seq 343 Reading mem 0
  Mem# 0: /Standby_DB/oradata/STDBY/onlinelog/o1_mf_11_mb6rf7hs_.log
  Mem# 1: /Standby_DB/FRA/STDBY/onlinelog/o1_mf_11_mb6rf8ob_.log
2025-05-01T08:10:41.557828+00:00
ARC1 (PID:1813): Archived Log entry 680 added for B-1164519547.T-1.S-397 LOS:0x0000000000a9f8a3 NXS:0x0000000000a9f8d2 NAB:12 ID 0x46c5be03 LAD:1
2025-05-01T08:10:41.563027+00:00
 rfs (PID:1825): Selected LNO:8 for T-1.S-398 dbid 1162136313 branch 1164519547
2025-05-01T08:10:41.642227+00:00
PR00 (PID:1863): Media Recovery Waiting for T-1.S-398 (in transit)
2025-05-01T08:10:41.642508+00:00
Recovery of Online Redo Log: Thread 1 Group 8 Seq 398 Reading mem 0
  Mem# 0: /Standby_DB/oradata/STDBY/onlinelog/o1_mf_8_mb6rd9h8_.log
  Mem# 1: /Standby_DB/FRA/STDBY/onlinelog/o1_mf_8_mb6rdbos_.log
2025-05-01T08:14:02.648081+00:00
Shutting down ORACLE instance (abort) (OS id: 3584)

Now I can begin Recovery of the Standby
SQL> alter database recover managed standby database using current logfile disconnect from session;

Database altered.

SQL>
SQL> shutdown immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> 
SQL> startup mount;
ORACLE instance started.

Total System Global Area 2147480256 bytes
Fixed Size                  9179840 bytes
Variable Size             486539264 bytes
Database Buffers         1644167168 bytes
Redo Buffers                7593984 bytes
Database mounted.
SQL> alter database recover managed standby database using current logfile disconnect from session;

Database altered.

SQL> exit



I resume Redo Shipping from the Primary
[oracle@srv1 ~]$ dgmgrl
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Thu May 1 09:11:56 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys
Password:
Connected to "RACDB"
Connected as SYSDBA.
DGMGRL> EDIT DATABASE 'RACDB' SET STATE='TRANSPORT-ON';
Succeeded.
DGMGRL>
DGMGRL> show configuration;

Configuration - racdb_dg

  Protection Mode: MaxPerformance
  Members:
  racdb - Primary database
    stdby - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 34 seconds ago)

DGMGRL> show configuration lag;

Configuration - racdb_dg

  Protection Mode: MaxPerformance
  Members:
  racdb - Primary database
    stdby - Physical standby database
            Transport Lag:      0 seconds (computed 1 second ago)
            Apply Lag:          0 seconds (computed 1 second ago)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 37 seconds ago)

DGMGRL>
Note: I have to wait from a few seconds to a few minutes for the SHOW CONFIGURATION and SHOW CONFIGURATION LAG commands to return the correct information.  Initially, they may show errors, but once the Primary and Standby are "talking to each other" again, these errors clear.
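Besides SHOW CONFIGURATION, another check available at this point (not part of the original session) is DGMGRL's VALIDATE DATABASE, which summarises a member's readiness in one command:

```sql
DGMGRL> VALIDATE DATABASE 'stdby';
```

It reports, among other things, the member's role and intended state, whether redo transport and apply are working, and whether the Standby Redo Log configuration matches the Online Redo Logs at the Primary -- a useful sanity check after recreating the Standby Logs.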

Now my Standby is syncing with the Primary
2025-05-01T09:19:30.530588+00:00
Recovery of Online Redo Log: Thread 2 Group 8 Seq 344 Reading mem 0
  Mem# 0: /Standby_DB/oradata/STDBY/onlinelog/o1_mf_8_n16gb9wm_.log
  Mem# 1: /Standby_DB/FRA/STDBY/onlinelog/o1_mf_8_n16gbb4m_.log
2025-05-01T09:19:30.611573+00:00
 rfs (PID:7642): krsr_rfs_atc: Identified database type as 'PHYSICAL': Client is ASYNC (PID:11557)
2025-05-01T09:19:30.623795+00:00
 rfs (PID:7642): Selected LNO:5 for T-1.S-399 dbid 1162136313 branch 1164519547
2025-05-01T09:19:30.631133+00:00
PR00 (PID:7486): Media Recovery Waiting for T-1.S-399 (in transit)
2025-05-01T09:19:30.631475+00:00
Recovery of Online Redo Log: Thread 1 Group 5 Seq 399 Reading mem 0
  Mem# 0: /Standby_DB/oradata/STDBY/onlinelog/o1_mf_5_n16g90qv_.log
  Mem# 1: /Standby_DB/FRA/STDBY/onlinelog/o1_mf_5_n16g910n_.log
2025-05-01T09:20:51.263052+00:00
ARC2 (PID:7470): Archived Log entry 691 added for B-1164519547.T-2.S-344 LOS:0x0000000000aa394b NXS:0x0000000000aa3b02 NAB:102 ID 0x46c5be03 LAD:1
2025-05-01T09:20:51.274060+00:00
 rfs (PID:7640): Selected LNO:8 for T-2.S-345 dbid 1162136313 branch 1164519547
2025-05-01T09:20:51.285312+00:00
PR00 (PID:7486): Media Recovery Log /Standby_DB/FRA/STDBY/archivelog/2025_05_01/o1_mf_2_344_n16h7m85_.arc
PR00 (PID:7486): Media Recovery Waiting for T-2.S-345 (in transit)
2025-05-01T09:20:51.387005+00:00
Recovery of Online Redo Log: Thread 2 Group 8 Seq 345 Reading mem 0
  Mem# 0: /Standby_DB/oradata/STDBY/onlinelog/o1_mf_8_n16gb9wm_.log
  Mem# 1: /Standby_DB/FRA/STDBY/onlinelog/o1_mf_8_n16gbb4m_.log
2025-05-01T09:20:51.433894+00:00
ARC0 (PID:7462): Archived Log entry 692 added for B-1164519547.T-1.S-399 LOS:0x0000000000aa394e NXS:0x0000000000aa3b06 NAB:265 ID 0x46c5be03 LAD:1
2025-05-01T09:20:51.445431+00:00
 rfs (PID:7642): Selected LNO:5 for T-1.S-400 dbid 1162136313 branch 1164519547
2025-05-01T09:20:51.514317+00:00
PR00 (PID:7486): Media Recovery Waiting for T-1.S-400 (in transit)
2025-05-01T09:20:51.514664+00:00
Recovery of Online Redo Log: Thread 1 Group 5 Seq 400 Reading mem 0
  Mem# 0: /Standby_DB/oradata/STDBY/onlinelog/o1_mf_5_n16g90qv_.log
  Mem# 1: /Standby_DB/FRA/STDBY/onlinelog/o1_mf_5_n16g910n_.log

 
I see that the SEQ# has already advanced to 399 and 345 for Threads 1 and 2 respectively.


Categories: DBA Blogs

400 Bad Request Request Header Or Cookie Too Large

Tom Kyte - Thu, 2025-05-01 03:31
After accessing a few questions on Ask TOM, I'm getting an error: "400 Bad Request: Request Header Or Cookie Too Large". This "solves itself" if I try later - I don't know exactly how much later; it typically works again the next day, but it doesn't work immediately if I refresh the page. I can also manually delete the cookies for the site, which works, but only until I've read a few questions/answers again. I'd like to find a way to solve this permanently.

Weird Cost-Based-Optimizer behavior

Tom Kyte - Thu, 2025-05-01 03:31
Hello! We encountered strange (and logically inconsistent) CBO behavior. To provide some context: we are working with an EAV (entity-attribute-value) schema; all objects are stored in a single table, let's call it "my_objects", and another table, "object_ancestry", stores all ancestor/descendant relations. Below is the script that creates all tables in question, populates them with some data, and builds indexes:

Environment
-----------
- Oracle 19.22.0.0 on AIX
- optimizer_mode = ALL_ROWS
- cursor_sharing = EXACT
- Full stats gathered on all objects (see script)

--- script start
declare
  procedure drop_table(p_table_name in varchar2) is
    e_no_table exception;
    pragma exception_init(e_no_table, -942);
  begin
    execute immediate 'drop table ' || p_table_name;
  exception
    when e_no_table then null; -- ignore
  end;
begin
  drop_table('object_ancestry');
  drop_table('my_objects');
end;
/

--- OEV (object-entity-value) schema:
create table my_objects(object_type_id number, object_id number, object_key varchar2(100));
create table object_ancestry(parent_type_id number, child_type_id number, parent_id number, child_id number);

--- adding some data
----- objects
insert /*+ append*/ into my_objects(object_type_id, object_id, object_key)
select 1 as object_type, level as object_id, 'location_' || level as object_key
from dual connect by level <= 1e+06;
commit;

insert /*+ append*/ into my_objects(object_type_id, object_id, object_key)
select 2 as object_type, 1e+06 + level as object_id, 'subregion_' || level as object_key
from dual connect by level <= 10000;
commit;

insert /*+ append*/ into my_objects(object_type_id, object_id, object_key)
select 3 as object_type, 2e+06 + level as object_id, 'region_' || level as object_key
from dual connect by level <= 100;
commit;
----- EOF objects

----- object ancestry
insert into object_ancestry(parent_type_id, child_type_id, parent_id, child_id)
select subregion.object_type_id as parent_type_id, location.object_type_id as child_type_id,
       subregion.object_id as parent_id, location.object_id as child_id
from my_objects location
join my_objects subregion
  on mod(location.object_id, 10000) = (subregion.object_id - 1e+06 - 1)
where location.object_type_id = 1
  and subregion.object_type_id = 2;
commit;

insert into object_ancestry(parent_type_id, child_type_id, parent_id, child_id)
select region.object_type_id as parent_type_id, subregion.object_type_id as child_type_id,
       region.object_id as parent_id, subregion.object_id as child_id
from my_objects subregion
join my_objects region
  on mod(subregion.object_id - 1e+06, 100) = (region.object_id - 2e+06 - 1)
where subregion.object_type_id = 2
  and region.object_type_id = 3;
commit;

insert into object_ancestry(parent_type_id, child_type_id, parent_id, child_id)
select 3 /*region*/ as parent_type_id, 1 /*location*/ as child_type_id, region_id as parent_...

How to configure proxy server between Golden Gate source and destination servers?

Tom Kyte - Mon, 2025-04-28 17:20
Environment Information
Database: Oracle Database 19c (Multi-Tenant Architecture) - Version 19.25.0.0.0
GoldenGate: Oracle GoldenGate Microservices 19c
Replication Setup: data replication will be configured between:
- Source: on-premises database server (Server A)
- Target: Azure VM cloud database server (Server B)
Operating System: Linux x86_64 on-prem; RHEL8 on cloud

Security Concern
The security team has flagged the direct connection between the on-premises database server (Server A) and the cloud-based database server (Server B). They have mandated the use of a reverse proxy server to transfer GoldenGate trail files between the source and destination environments. This is to ensure secure data transfer and minimize exposure of sensitive database connections.

Challenge
The current GoldenGate configuration uses the WebUI for managing database connections and replication processes for both the source and destination. Trail files generated by the Extract process on the source (on-premises) need to be transferred securely to the target (cloud) via the reverse proxy server. The question is: how can we integrate a reverse proxy server into the GoldenGate replication setup to securely transfer trail files between the source and target environments?

Use of dbms_scheduler

Tom Kyte - Mon, 2025-04-28 17:20
After using dbms_scheduler since its introduction, I was surprised to learn that it is not the way application development should go. It was explained to me that having scheduling outside of the database is better than having it inside the database, and that the replacements are crontab, a complicated self-made scheduler using Perl, Python or another favorite language, a preferred third-party application, or refreshable materialized views. I did not get an answer to my question of why using application stored code or other Oracle-supplied packages inside the database is good practice while using dbms_scheduler is bad. What is your view on that issue?
Categories: DBA Blogs

Make non-reproducible baselines reproducible again

Tom Kyte - Thu, 2025-04-24 22:06
Hello, I have multiple baselines, and after migrating the application many of them became non-reproducible. The reason was the renaming of some indexes during the migration. I have renamed those indexes back to their original names. My question is how to get Oracle to set those baselines back to REPRODUCED = YES. None of the DBMS_SPM routines is able to do that. Thanks
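A hedged sketch of how one might check and retry the plans: the REPRODUCED flag is maintained by the optimizer, so re-executing the statements (or running an evolve task so the plans are verified again) is one avenue to test. The SQL handle below is a hypothetical placeholder:

```sql
-- Which baselines are currently marked non-reproducible?
SELECT sql_handle, plan_name, accepted, reproduced
FROM   dba_sql_plan_baselines
WHERE  reproduced = 'NO';

-- One approach to try: have an evolve task re-verify a plan
DECLARE
  tname VARCHAR2(128);
BEGIN
  tname := DBMS_SPM.CREATE_EVOLVE_TASK(sql_handle => 'SQL_abc123');  -- hypothetical handle
  DBMS_OUTPUT.PUT_LINE(DBMS_SPM.EXECUTE_EVOLVE_TASK(task_name => tname));
END;
/
```

Whether this flips the flag without a fresh hard parse of the original statements is exactly the asker's open question.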
Categories: DBA Blogs

Solving a logical problem using analytical functions

Tom Kyte - Thu, 2025-04-24 22:06
(https://livesql.oracle.com/ords/livesql/s/da34i74lkmxt2mqrtp0k4xsk6) (I was able to format better with screenshots on the Oracle forums - https://forums.oracle.com/ords/apexds/post/help-solving-a-logical-problem-using-analytical-query-3778)

Hi, I am trying to solve a problem using analytical functions but I am stuck.

1. I have a list of coupons that I can use. The usage sequence is in alphabetical order of the coupon name.

+---------+-------+
| Coupons | Value |
+---------+-------+
| A       |   100 |
| B       |    40 |
| C       |   120 |
| D       |    10 |
| E       |   200 |
+---------+-------+

2. There is a limit (cap) on the total value across all coupons that can be used in a day.

+----------+-----------+
| Cap Name | Cap Limit |
+----------+-----------+
| Cap 1    |       150 |
| Cap 2    |        70 |
+----------+-----------+

3. Each coupon is subject to 1 or 2 caps. If it is subject to 2 caps, there is a specified sequence in which to apply the caps.

+--------+--------------+----------+
| Coupon | Cap Sequence | Cap Name |
+--------+--------------+----------+
| A      |            1 | Cap 1    |
| A      |            2 | Cap 2    |
| B      |            1 | Cap 2    |
| C      |            1 | Cap 2    |
| C      |            2 | Cap 1    |
| D      |            1 | Cap 1    |
| E      |            1 | Cap 1    |
| E      |            2 | Cap 2    |
+--------+--------------+----------+

4. I now have to find how much coupon value could be utilized before my daily cap was reached, i.e. find "Coupon Usage" and "Cap Remaining" below.
So, if I join the tables above:

+---+--------+-------+----------+--------------+-----------+--------------+---------------+
| # | Coupon | Value | Cap Name | Cap Sequence | Cap Limit | Coupon Usage | Cap Remaining |
+---+--------+-------+----------+--------------+-----------+--------------+---------------+
| 1 | A      |   100 | Cap 1    |            1 |       150 |          100 |            50 |
| 2 | A      |   100 | Cap 2    |            2 |        70 |            0 |            70 |
| 3 | B      |    40 | Cap 2    |            1 |        70 |           40 |            30 |
| 4 | C      |   120 | Cap 2    |            1 |        70 |           30 |             0 |
+---+--------+-------+----------+--------------+--...
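One observation on the problem shape: because each row's usage depends on the cap amounts left over from all previous rows, this is a state-carrying computation, which is easier to express with a recursive WITH clause than with plain analytic functions. A sketch handling only each coupon's first cap (the spill into the second cap extends the same CASE pattern); the table and column names coupons, coupon_caps, and caps are assumptions:

```sql
WITH ordered AS (
  SELECT c.coupon, c.value, cc.cap_name,
         ROW_NUMBER() OVER (ORDER BY c.coupon) AS rn
  FROM   coupons c
  JOIN   coupon_caps cc
  ON     cc.coupon = c.coupon AND cc.cap_sequence = 1
),
walk (rn, coupon, usage, rem1, rem2) AS (
  -- anchor row: nothing used yet, both caps at their limits
  SELECT 0, CAST(NULL AS VARCHAR2(10)), 0,
         (SELECT cap_limit FROM caps WHERE cap_name = 'Cap 1'),
         (SELECT cap_limit FROM caps WHERE cap_name = 'Cap 2')
  FROM   dual
  UNION ALL
  -- each step consumes from whichever cap the coupon uses first
  SELECT o.rn, o.coupon,
         LEAST(o.value, CASE o.cap_name WHEN 'Cap 1' THEN w.rem1 ELSE w.rem2 END),
         w.rem1 - CASE o.cap_name WHEN 'Cap 1' THEN LEAST(o.value, w.rem1) ELSE 0 END,
         w.rem2 - CASE o.cap_name WHEN 'Cap 2' THEN LEAST(o.value, w.rem2) ELSE 0 END
  FROM   walk w
  JOIN   ordered o ON o.rn = w.rn + 1
)
SELECT coupon, usage, rem1, rem2
FROM   walk
WHERE  rn > 0
ORDER  BY rn;
```

Against the sample data this yields usage 100, 40, 30, 10, 40 for A through E, matching the "Coupon Usage" column for the rows shown.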
Categories: DBA Blogs

Bitmap indexes and BITMAP CONVERSION TO ROWIDS

Tom Kyte - Thu, 2025-04-24 04:03
Hello Chris, hello Connor, I'm struggling with a performance issue. We have a single big table (167 million rows, 118 columns). For ad hoc queries we have created a simple APEX application with the 15 most important columns as filters. To avoid long-running queries we start with a simple count(*) query before running the main query. Depending on the result of the count(*) we warn, or even force, the end user to filter better. select count(*) from big_table where filter_column_1 = filter_value_1 and filter_column_2 = filter_value_2 and filter_column_3 between filter_value_3 and filter_value_4; To support the count(*) query I have created a bitmap index on each filter column. It works fine! Oracle combines the relevant bitmap indexes with a BITMAP AND or BITMAP MERGE. The response time is excellent: at most a few seconds, often under 1 second. Works as expected and I'm happy! But sometimes the optimizer decides to quit the path of bitmap processing, converts all bitmap indexes with BITMAP CONVERSION TO ROWIDS, and then uses HASH JOINs. In this case the response time of the count(*) query is much worse, sometimes even minutes! My questions: (1) Why does the optimizer use BITMAP CONVERSION TO ROWIDS? My feeling is that it is more likely when the where clause is complex (many filters in use) or the count(*) returns a big number. (2) Are there any SQL hints to avoid BITMAP CONVERSION TO ROWIDS? (3) Are there any tips for my count(*) query (WITH clauses, subqueries, ...) to avoid BITMAP CONVERSION TO ROWIDS? The DB version is 19.24.0.0.0; the big table and all indexes have fresh statistics. Thank you in advance! Christian
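One documented lever worth testing here is the INDEX_COMBINE hint, which asks the optimizer to favour a bitmap combination of indexes on the table. A sketch using the question's placeholder names:

```sql
SELECT /*+ INDEX_COMBINE(t) */ COUNT(*)
FROM   big_table t
WHERE  filter_column_1 = :v1
AND    filter_column_2 = :v2
AND    filter_column_3 BETWEEN :v3 AND :v4;
```

After running it, checking the actual plan with DBMS_XPLAN.DISPLAY_CURSOR confirms whether BITMAP AND was used rather than BITMAP CONVERSION TO ROWIDS; whether the hint is honoured for every filter combination is something only testing can show.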
Categories: DBA Blogs

Looking for SQL Query to get below output

Tom Kyte - Wed, 2025-04-23 10:01
Can someone please provide an Oracle SQL query to get the output below?

Deptno  Dept Name  No. of emp
10      a          10
        b          20
        c          30
20      d          40
        e          50
        f          60
30      g          70
        h          80
        i          90
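Reading the desired output as "print Deptno only on the first row of each group", one sketch follows; the table name dept_emp and its columns (deptno, dname, num_emp) are assumptions about the source data:

```sql
SELECT CASE WHEN ROW_NUMBER() OVER (PARTITION BY deptno ORDER BY dname) = 1
            THEN TO_CHAR(deptno)
       END      AS dept_display,   -- blank except on the group's first row
       dname    AS dept_name,
       num_emp
FROM   dept_emp
ORDER  BY deptno, dname;
```

In SQL*Plus, `BREAK ON deptno` achieves the same display effect without changing the query at all.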
Categories: DBA Blogs

Recommendation for Oracle feature to rollback an app deployment

Tom Kyte - Wed, 2025-04-23 10:01
Hello Ask TOM team, recently our client has come up with a requirement to be able to roll back the database part of any application deployment. In our case, an application deployment can mean any or all of the following from a database perspective: a) PL/SQL code (functions, packages, stored procedures) add/change/delete b) Tables add/change/drop c) Indexes add/change/drop d) Table data insert/update/delete e) Possibly other application schema objects add/change/delete, like views, triggers etc. Being a DBA, my first thought was Flashback Database, but we are brainstorming whether other Oracle features like OWM/EBR would be a better/easier choice. The idea is that the new version of the application objects co-exists with the previous one; if testing indicates any issue, we roll back to the previous version. I am not conversant at all with OWM/EBR, so I cannot compare them with flashback. Any advice will be appreciated. Thanks
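For the code/view/trigger part of the list, edition-based redefinition (EBR) is designed for exactly this co-existence; tables and data changes need other techniques alongside it. A minimal sketch of the moving parts (edition, user, and procedure names are hypothetical):

```sql
-- One-time setup
CREATE EDITION rel_2 AS CHILD OF ora$base;
ALTER USER app_owner ENABLE EDITIONS;
GRANT USE ON EDITION rel_2 TO app_owner;

-- Deploy the new code into the new edition only
ALTER SESSION SET EDITION = rel_2;
CREATE OR REPLACE PROCEDURE app_owner.calc_bonus AS
BEGIN
  NULL;  -- new version of the logic
END;
/

-- "Rollback" is simply pointing sessions back at the old edition
ALTER SESSION SET EDITION = ora$base;
```

Flashback Database, by contrast, rewinds the entire database including data, so the two features solve different halves of the requirement.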
Categories: DBA Blogs

Oracle Database Architecture

Tom Kyte - Wed, 2025-04-23 10:01
How do I know which tablespaces have which data files and their path on disk?
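The data dictionary answers this directly; permanent and temporary tablespaces keep their files in separate views:

```sql
SELECT tablespace_name, file_id, file_name,
       bytes / 1024 / 1024 AS size_mb
FROM   dba_data_files
ORDER  BY tablespace_name, file_id;

-- TEMP tablespaces are listed separately
SELECT tablespace_name, file_name
FROM   dba_temp_files
ORDER  BY tablespace_name;
```

A non-DBA account can use the USER_/ALL_ equivalents where available, or V$DATAFILE joined to V$TABLESPACE.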
Categories: DBA Blogs

oracle is doing full table scans on a table with raw column and the column is primary key

Tom Kyte - Tue, 2025-04-22 15:58
Hi Tom, thanks for your continuous support. We have a table with a BLOB column. We are using the SecureFiles option, as we are on 11g, and we keep the BLOB column in a separate tablespace. In the main table we have a column called streamId which is of RAW datatype, and this column is the primary key. So when we try to retrieve the BLOB data using streamId, it always does a full table scan. But when I use HEXTORAW, it does an index scan. These queries are generated by the application using SQLGEN and Hibernate. We use bind variables too. I don't know whether HEXTORAW is the proper solution and, if yes, how to implement it in the application.

*) SQL query: SELECT streamId, originalOffset, originalSize, chunkData FROM PackageStore where streamId= :"SYS_B_0";

Explain plan:

-----------------------------------------------------------------------------
| Id | Operation         | Name                    | Rows | Bytes | Cost |
-----------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |                         |    1 |   131 | 1749 |
|  1 |  TABLE ACCESS FULL| PACKAGESTORESTREAMCHUNK |    1 |   131 | 1749 |
-----------------------------------------------------------------------------

*) Using the HEXTORAW function:

SQL> explain plan for SELECT streamId, originalOffset, chunkData FROM PackageStore where streamId=hextoraw(:"SYS_B_0");

Explained.

--------------------------------------------------------------------------------------
| Id | Operation                   | Name                       | Rows | Bytes | Cost |
--------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                            |    1 |   131 |    3 |
|  1 |  TABLE ACCESS BY INDEX ROWID| PACKAGESTORESTREAMCHUNK    |    1 |   131 |    3 |
|  2 |   INDEX UNIQUE SCAN         | PK_PACKAGESTORESTREAMCHUNK |    1 |       |    2 |
--------------------------------------------------------------------------------------

SQL> select count(*) from packagestore;

  COUNT(*)
----------
    584768
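The pattern in the two plans suggests the usual cause: comparing a RAW column with a character bind forces an implicit conversion on the column side, which disables the index, whereas converting the bind leaves the column untouched. A sketch of the contrast (bind name is illustrative):

```sql
-- Slow: RAW column compared with a character bind;
-- the implicit conversion lands on the indexed column
SELECT chunkData FROM PackageStore WHERE streamId = :char_bind;

-- Fast: the explicit conversion is applied to the bind instead,
-- so the primary-key index on streamId can be used
SELECT chunkData FROM PackageStore WHERE streamId = HEXTORAW(:char_bind);
```

On the application side, having Hibernate bind the value as a binary type (so the driver sends RAW rather than a string) would avoid editing the generated SQL at all; that mapping change would need testing in this specific stack.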
Categories: DBA Blogs

Picking Right Plan for SQL Profile

Bobby Durrett's DBA Blog - Mon, 2025-04-21 14:39

On Sunday, four batch jobs that normally run in an hour had been stuck for three hours and had not completed the first unit of work out of many. Earlier in the day the on-call DBA had applied a SQL Profile, cancelled the jobs, and rerun them, but it did not help. We picked a different SQL Profile, killed the jobs, and the jobs ran normally. How did we figure out the right plan to use for the SQL Profile?

The main clue came from the output of my sqlstat.sql script:

The good plan seemed to be 2367956558. EXECUTIONS_DELTA = 1 means that the SQL finished in that hour. Elapsed Average ms of 45177.105 was 45 seconds. 19790.066 was 19 seconds.

PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms
--------------- ------------------------- ---------------- ------------------
     2367956558 20-APR-25 01.00.23.638 AM                1          45177.105
     2367956558 20-APR-25 03.00.29.921 AM                1          19790.066

The other plans 2166514251 and 3151484146 seemed to have executions that spanned multiple hours. For example, the first 2166514251 line had EXECUTIONS_DELTA = 0 which means it didn’t finish in that hour. Plus, the elapsed time of 8995202.39 ms = 8995 seconds = 2.5 hours suggests that it was running in parallel, probably for the entire hour.

PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms
--------------- ------------------------- ---------------- ------------------
     2166514251 20-APR-25 01.00.23.638 AM                0         8995202.39

So, it seems clear that 2367956558 is the best plan.

I could say a lot more about this incident, but I wanted to focus on the sqlstat.sql output. The values of EXECUTIONS_DELTA and Elapsed Average ms are keys to identifying plans with the best behavior.
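The core of a sqlstat-style query can be approximated against the AWR views like this (a sketch; the real sqlstat.sql has more columns, and :sql_id is a placeholder for the statement being investigated):

```sql
SELECT ss.plan_hash_value,
       sn.end_interval_time,
       ss.executions_delta,
       -- elapsed_time_delta is in microseconds; per-execution average in ms
       ss.elapsed_time_delta / 1000 /
         NULLIF(ss.executions_delta, 0) AS elapsed_avg_ms
FROM   dba_hist_sqlstat  ss
JOIN   dba_hist_snapshot sn
ON     sn.snap_id         = ss.snap_id
AND    sn.dbid            = ss.dbid
AND    sn.instance_number = ss.instance_number
WHERE  ss.sql_id = :sql_id
ORDER  BY sn.end_interval_time, ss.plan_hash_value;
```

Note that when EXECUTIONS_DELTA is 0 the per-execution average is undefined for that hour, which is itself the signal that the statement ran across the snapshot boundary.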

Bobby

Categories: DBA Blogs

Best way to add the 60 CLOB columns to the oracle table with around 17TB in size.

Tom Kyte - Thu, 2025-04-17 09:35
Hi Team, this is the first time I am asking a question on AskTom; I am very thankful for the solutions you have provided and for how you keep helping people. We want to add 60 CLOB columns to a table which is around 17TB in size. This is a data warehouse DB, and data loading to this table is completely stopped now, as the source table in MS SQL Server has 60 additional columns. Kindly suggest the best and most efficient way to add the columns to the table. The table is partitioned. Regards, Ojas
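One relevant point: adding a nullable CLOB column with no default is essentially a metadata operation, so the 17TB of existing rows are not rewritten at ADD time; the cost arrives when rows are later populated. A sketch for one column (table, column, and tablespace names are hypothetical; COMPRESS requires the Advanced Compression option):

```sql
ALTER TABLE big_dw_table
  ADD (extra_col_01 CLOB)
  LOB (extra_col_01) STORE AS SECUREFILE (
    TABLESPACE lob_ts
    ENABLE STORAGE IN ROW   -- small values stay in the row piece
    COMPRESS MEDIUM);
```

Deciding the LOB storage attributes (tablespace, in-row storage, compression, deduplication) up front matters far more here than the ADD itself, since changing them later on a 17TB table would mean moving data.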
Categories: DBA Blogs

DBlink not connecting to MS SQL database

Tom Kyte - Thu, 2025-04-17 09:35
Dear Tom, I have Oracle EBS 12.8 with Oracle 12c. I upgraded my database to 19c with the latest patches, and everything is working fine except my DB link to an MS SQL Server database. Note that it was working before the upgrade. I dropped the DB link and re-created it, but I have no luck connecting. The Oracle gateway installed on my SQL Server is 11g. When I connect, I get the following error:

<code>ORA-28511: lost RPC connection to heterogeneous remote agent using SID=(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=wapps24)(PORT=1521))(CONNECT_DATA=(SID=VestioOracleFinIntegration)))
ORA-02063: preceding line from VESTIOORACLEFININTEGRATION
Process ID: 42099</code>

My DB link setup is:

<code>OWNER          PUBLIC
DB_LINK        VESTIOORACLEFININTEGRATION.WAFRA.LOCAL
USERNAME       OracleFin
HOST           VESTIOORACLEFININTEGRATION
CREATED        16-APR-25
HIDDEN         NO
SHARD_INTERNAL NO
VALID          YES
INTRA_CDB      NO</code>
Categories: DBA Blogs

Question MEMOPTIMIZE - Fast ingest

Tom Kyte - Wed, 2025-04-16 15:33
Hi, I see the Oracle documentation shows an example INSERT statement for fast ingest: INSERT /*+ MEMOPTIMIZE_WRITE */ INTO test_fast_ingest VALUES (1,'test'); Can fast ingest using memoptimize be used for a set-based INSERT, that is, INSERT as select: INSERT /*+ MEMOPTIMIZE_WRITE */ INTO test_fast_ingest select col1, col2, col3 from another_table; Thanks, Girish
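For completeness, fast ingest has a prerequisite beyond the hint: the table must be enabled for it. A sketch of the documented single-row form (table name taken from the question); whether the hint also takes effect for a set-based INSERT ... SELECT is exactly what the asker needs confirmed, so it is deliberately not asserted here:

```sql
-- Table must first be enabled for fast ingest
ALTER TABLE test_fast_ingest MEMOPTIMIZE FOR WRITE;

-- Documented single-row form: rows are buffered and written
-- asynchronously, so durability semantics differ from a normal INSERT
INSERT /*+ MEMOPTIMIZE_WRITE */ INTO test_fast_ingest VALUES (1, 'test');
```

Because writes are deferred and not immediately durable, any test of the INSERT ... SELECT variant should also verify the rows actually land via DBMS_MEMOPTIMIZE-style flushing or a later query.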
Categories: DBA Blogs

Parallel index creation on different table on same tablespace

Tom Kyte - Wed, 2025-04-16 15:33
I'm having trouble with the parallel execution of Oracle SQL queries for creating indexes on <b>separate tables (assume each table has a single index)</b>. The performance boost isn't even close to what I was hoping for. I therefore need your help in determining what to check on the Oracle DB host in order to troubleshoot the issue. I found all SQL queries having WAIT_CLASS 'USER I/O'. I also checked IOPS through the NetData UI in the parallel index creation environment; the maximum observed was near ~24%. Just to note, we are already aware that the PARALLEL hint may help here, but we are looking for further improvement on top of that. Please let me know whether any tuning on the Oracle DB side is required, or whether we should assume there won't be any benefit from executing index creation in parallel (even on different tables) and we can only specify the PARALLEL degree to enhance performance.
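As a baseline worth ruling out before deeper I/O analysis, each build can carry its own degree of parallelism and skip redo, with the attributes reset afterwards so queries don't inherit them. A sketch with hypothetical object names:

```sql
-- Build with intra-statement parallelism and minimal redo
CREATE INDEX app.ix_orders_custid
  ON app.orders (customer_id)
  PARALLEL 8 NOLOGGING;

-- Reset so later queries don't silently run parallel,
-- and redo logging returns to normal
ALTER INDEX app.ix_orders_custid NOPARALLEL LOGGING;
```

If sessions still wait on USER I/O while storage utilisation stays around 24%, the next things to examine are per-read latency and PGA/TEMP pressure from the concurrent sorts, rather than raw IOPS headroom.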
Categories: DBA Blogs

Problems EntityFramework with Oracle.ManagedDataAccess

Tom Kyte - Tue, 2025-04-15 03:28
I created a new project in Visual Studio 2019 with Entity Framework v6.5.1, Oracle.ManagedDataAccess v23.8, and Oracle.ManagedDataAccess.EntityFramework v23.8. When I add an ADO.NET Entity Data Model, I select EF Designer from the database. It then prompts me for the connection string. I add it, test it, and everything works. When I click Next, a window appears asking me to select the EntityFramework version. The wizard automatically exits and returns me to the first screen to reselect the ADO.NET element. Therefore, it doesn't create the model or the EDMX file. This doesn't happen when I use Oracle.ManagedDataAccess v19.27 and Oracle.ManagedDataAccess.EntityFramework v19.27. I can't update to the latest version? Please help.
Categories: DBA Blogs

Failed to set wallet path to system:

Tom Kyte - Mon, 2025-04-14 09:25
Hi, while listening to the video "Developer Coaching - Oracle Database 23ai - Zero Slides, Zero Marketing, 100% Live Demo" it was mentioned that we can set the Oracle wallet path to system: to access the Windows-level certificates, but when I tried the same using the PL/SQL code below I couldn't succeed; I'm getting the error "ORA-29248: an unrecognized WRL was used to open a wallet". Kindly let me know where I went wrong with the code, and how to use system: as mentioned.

Oracle Database: 19.3.0.0.0
Error: ORA-29248: an unrecognized WRL was used to open a wallet
Command used: utl_http.set_wallet('system:', null);

The full code is given below:

<code>declare
  p_url varchar2(200) := 'https://www.oracle.com';
  l_http_request  utl_http.req;
  l_http_response utl_http.resp;
  l_text varchar2(32767);
begin
  utl_http.set_wallet('system:', null);
  l_http_request  := utl_http.begin_request(p_url);
  l_http_response := utl_http.get_response(l_http_request);
  utl_http.read_text(l_http_response, l_text, 32766);
  dbms_output.put_line(substr(l_text, 1, 100));
exception
  when utl_http.end_of_body then
    utl_http.end_response(l_http_response);
end;</code>

Thanks for your support in advance.
Categories: DBA Blogs

gather Stale Statistics ASAP

Tom Kyte - Mon, 2025-04-14 09:25
Hello, I have a table FCM partitioned by range on a (Q) column with INTERVAL(NUMTODSINTERVAL(7, 'DAY')). The issue is that the "current/active" partition becomes stale several times a day. The vast majority of DML on the table is INSERTs (basic inserts with VALUES, something like insert into table values (:1, :2)). While we do have a maintenance window, it is scheduled at night, so during the day we often have stale statistics for the current/active partition. Here is the question: 1. Is there any optimal way (something like a trigger) to gather statistics on the partition AS SOON as it becomes stale? While it is possible to schedule a manual job to check for stale partitions, that still leaves a window between jobs in which the partition can become stale. I don't really like the idea of a job running every 20 minutes, but as a last resort it can be done.
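The polling approach mentioned above can be kept cheap by gathering only the partitions the dictionary flags as stale. A sketch, where the owner 'APP' is a hypothetical placeholder:

```sql
BEGIN
  FOR p IN (SELECT partition_name
            FROM   dba_tab_statistics
            WHERE  owner          = 'APP'   -- hypothetical owner
            AND    table_name     = 'FCM'
            AND    stale_stats    = 'YES'
            AND    partition_name IS NOT NULL)
  LOOP
    DBMS_STATS.GATHER_TABLE_STATS(
      ownname     => 'APP',
      tabname     => 'FCM',
      partname    => p.partition_name,
      granularity => 'PARTITION');   -- touch only the stale partition
  END LOOP;
END;
/
```

Adjusting the staleness threshold with DBMS_STATS.SET_TABLE_PREFS (the STALE_PERCENT preference) changes when a partition counts as stale, but the polling interval still bounds how long a stale partition can go unnoticed.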
Categories: DBA Blogs
