DBA Blogs

RAG with Vector Index in 26ai

Hemant K Chitale - 14 hours 51 min ago

Updating my previous demo, which ran in 23ai, to run in Oracle AI Database 26ai, with two enhancements:

vector_memory_size set to 512MB (yes, this is a very small on-premises Free 26ai image)

an INMEMORY NEIGHBOR GRAPH index using Hierarchical Navigable Small World (HNSW)

[oracle@localhost ~]$ sqlplus vector_demo/vector_demo

SQL*Plus: Release 23.26.0.0.0 - Production on Sun Nov 16 09:37:39 2025
Version 23.26.0.0.0

Copyright (c) 1982, 2025, Oracle.  All rights reserved.

Last Successful login time: Sun Nov 16 2025 09:32:43 +00:00

Connected to:
Oracle AI Database 26ai Free Release 23.26.0.0.0 - Develop, Learn, and Run for Free
Version 23.26.0.0.0

SQL> set echo on
SQL> !ls *sql
Create_Vector_Index.sql  Query_Vectors.sql

SQL> @Create_Vector_Index.sql
SQL> CREATE VECTOR INDEX my_data_vectors_ndx ON my_data_vectors (sentence_vector)
  2    ORGANIZATION INMEMORY NEIGHBOR GRAPH
  3    DISTANCE COSINE
  4    WITH TARGET ACCURACY 95
  5  /

Index created.

SQL> show parameter vector_memory

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
vector_memory_size                   big integer 512M
SQL> @Query_Vectors.sql
SQL> set pages600
SQL> set linesize 156
SQL> col my_sentence format a148 wrap
SQL>
SQL> ACCEPT text_input CHAR PROMPT 'Enter your query : '
Enter your query : image processing
SQL> VARIABLE text_variable VARCHAR2(1000)
SQL> VARIABLE query_vector VECTOR
SQL> BEGIN
  2    :text_variable := '&text_input';
  3    SELECT vector_embedding(ALL_MINILM_L12_V2_AUGMENTED USING  :text_variable as data) into :query_vector;
  4  END;
  5  /
old   2:   :text_variable := '&text_input';
new   2:   :text_variable := 'image processing';

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> SELECT my_sentence, vector_distance(sentence_vector , :query_vector, COSINE) as Calc_Vector_Distance
  2  FROM my_data_vectors
  3  ORDER BY 2
  4  FETCH FIRST 3 ROWS ONLY;

MY_SENTENCE
----------------------------------------------------------------------------------------------------------------------------------------------------
CALC_VECTOR_DISTANCE
--------------------
      VanceAI.com   image enhancement
           5.16E-001

      Stable Diffusion: An open source model that generates high quality images from text or other images, offering customization and control
           5.51E-001

      Hotpot.ai   AI image editing
          6.109E-001


SQL>
SQL> SELECT my_sentence, vector_distance(sentence_vector , :query_vector, COSINE) as Calc_Vector_Distance
  2  FROM my_data_vectors
  3  ORDER BY 2
  4  FETCH APPROX FIRST 3 ROWS ONLY;

MY_SENTENCE
----------------------------------------------------------------------------------------------------------------------------------------------------
CALC_VECTOR_DISTANCE
--------------------
      VanceAI.com   image enhancement
           5.16E-001

      Stable Diffusion: An open source model that generates high quality images from text or other images, offering customization and control
           5.51E-001

      Hotpot.ai   AI image editing
          6.109E-001


SQL>

SQL> select * from dbms_xplan.display_cursor();

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  1z2ujsrc9xsb0, child number 0
-------------------------------------
SELECT my_sentence, vector_distance(sentence_vector , :query_vector,
COSINE) as Calc_Vector_Distance FROM my_data_vectors ORDER BY 2 FETCH
APPROX FIRST 3 ROWS ONLY

Plan hash value: 3894957757

------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                     |       |       |     2 (100)|          |
|*  1 |  COUNT STOPKEY                 |                     |       |       |            |          |
|   2 |   VIEW                         |                     |     3 |  6024 |     2  (50)| 00:00:01 |
|*  3 |    SORT ORDER BY STOPKEY       |                     |     3 |  4938 |     2  (50)| 00:00:01 |
|   4 |     TABLE ACCESS BY INDEX ROWID| MY_DATA_VECTORS     |     3 |  4938 |     1   (0)| 00:00:01 |
|   5 |      VECTOR INDEX HNSW SCAN    | MY_DATA_VECTORS_NDX |     3 |  4938 |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=3)
   3 - filter(ROWNUM<=3)


25 rows selected.

SQL>





Here I demonstrate querying the same set of 130 sentences about AI as in the previous demo, but now with a Vector Index configured as an In-Memory Neighbor Graph index with a Target Accuracy of 95%, based on COSINE distance.

My next run will be with a much larger data set (instead of just 130 sentences).
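For a much larger data set, one alternative worth noting (a sketch only, reusing the table and column names from the demo above; I have not run this): a Neighbor Partitions (IVF) vector index. Unlike the HNSW graph, the documented IVF organization is stored on disk rather than in the vector memory pool, so it is the usual choice when the data set is too large for an in-memory graph.

```sql
-- Hedged sketch: an IVF (Inverted File Flat) vector index as an alternative
-- to the in-memory HNSW graph for much larger data sets.
-- ORGANIZATION NEIGHBOR PARTITIONS is the documented syntax for IVF indexes.
CREATE VECTOR INDEX my_data_vectors_ivf_ndx ON my_data_vectors (sentence_vector)
  ORGANIZATION NEIGHBOR PARTITIONS
  DISTANCE COSINE
  WITH TARGET ACCURACY 95
/
```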



Categories: DBA Blogs

Problem loading data from XML: decimal point with German NLS settings

Tom Kyte - Fri, 2025-11-07 17:20
<code>-- Settings
select banner from v$version;
-- Oracle Database 23c Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free

select * from v$nls_parameters
where parameter in ('NLS_NUMERIC_CHARACTERS','NLS_TERRITORY','NLS_LANGUAGE');
-- NLS_LANGUAGE            GERMAN    3
-- NLS_TERRITORY           GERMANY   3
-- NLS_NUMERIC_CHARACTERS  ,.        3

create table asktom_gpx_tab
(i       number
,xmldata xmltype
);

INSERT INTO asktom_gpx_tab -- (i,xmldata)
VALUES (1
,xmltype(
'<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<gpx>
<trkpt lat="49.773210" lon="8.930600"><ele>340.8</ele></trkpt>
<trkpt lat="49.772980" lon="8.930280"><ele>342.0</ele></trkpt>
<trkpt lat="49.772740" lon="8.929900"><ele>341.5</ele></trkpt>
</gpx>')
);
commit;

select elevation
from asktom_gpx_tab t,
     xmltable('/gpx/trkpt' passing t.xmldata
              columns elevation varchar2(20) path 'ele') x1;
-- 340,8 -- converted because NLS sets decimal sign to comma
-- 342.0 -- why is this value not converted to comma???
-- 341,5 -- converted because NLS sets decimal sign to comma

-- I found a question here with a supposed solution, but it doesn't work right
select to_number(elevation,'FM9999D99','nls_numeric_characters = '',.''')
from asktom_gpx_tab t,
     xmltable('/gpx/trkpt' passing t.xmldata
              columns elevation varchar2(20) path 'ele') x1;
-- 340,8 -- ok
-- 3420  -- This is definitely a wrong value
-- 341,5 -- ok</code>

Shouldn't all values be treated the same way?
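A likely explanation, plus two sketches that may be worth testing (these are editorial additions, not a verified AskTom answer): NLS_NUMERIC_CHARACTERS is read as "decimal character, then group separator", so the override ',.' in the quoted workaround makes '.' a group separator, and '342.0' therefore parses as 3420. Reversing the override, or letting XMLTABLE project the value directly as a NUMBER (XML numeric text always uses '.' as the decimal point, independent of session NLS), should give consistent results:

```sql
-- Sketch 1: project the element as NUMBER inside XMLTABLE, so no
-- NLS-dependent string-to-number conversion happens at all.
select elevation
from asktom_gpx_tab t,
     xmltable('/gpx/trkpt' passing t.xmldata
              columns elevation number path 'ele') x1;

-- Sketch 2: keep VARCHAR2 but tell TO_NUMBER that '.' is the decimal
-- character ('.,' = decimal '.', group ','), the reverse of the German default.
select to_number(elevation, 'FM9999D99', 'nls_numeric_characters = ''.,''')
from asktom_gpx_tab t,
     xmltable('/gpx/trkpt' passing t.xmldata
              columns elevation varchar2(20) path 'ele') x1;
```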
Categories: DBA Blogs

What are the recommended BIG-IP F5 settings for routing HTTP traffic to ORDS 24.4 without SSL in an Oracle EBS-integrated environment?

Tom Kyte - Fri, 2025-11-07 17:20
What are the recommended BIG-IP F5 settings for routing HTTP traffic to ORDS 24.4 without SSL in an Oracle EBS-integrated environment?
Categories: DBA Blogs

impdp commit size parameter

Tom Kyte - Fri, 2025-11-07 17:20
Will there be a commit-size parameter for the Data Pump import utility in the future? Currently there is none. I see that direct path insert and parallel processing are available; if a commit-size parameter were added, would it help make data loading faster?
Categories: DBA Blogs

Performance Analysis of SQL over DB Link

Tom Kyte - Fri, 2025-11-07 17:20
Hello and thank you for your response. Sorry I could not provide a live example SQL.

I have an application running in JBoss that connects to a database, Reporting_DB, and executes SQLs there. The SQLs access views in Reporting_DB; the views in turn access tables in a remote database, say ReadOnlyStandby_DB, over a DB link. Here is a simple schematic:

Application --Network--> Reporting_DB ---DB Link--> ReadOnlyStandby_DB
ORM sql ---> View --> Table

SQL signature. The application uses an ORM (Hibernate), so the SQLs as seen in Reporting_DB are always changing, e.g.:

select this_.Column1 as Column1_485_0_, this_.Column2 as Column2_485_0_, this_.Column3 as AUDIT3_485_0 etc.

The SQL seen in ReadOnlyStandby_DB looks like this:

SELECT "A1"."col1" ... FROM "TABLE_1" where CONDITION ... UNION ALL ...

The users are complaining that performance is slow, and the developers are sharing application logs showing SQLs timing out. The final database where the SQL is executed is a Data Guard read-only standby. I have OEM on Reporting_DB and ReadOnlyStandby_DB, but the app server (JBoss) is outside my jurisdiction and I have no access or insight there. How can I get the following details?

1) Connect the dots for sessions: from the app's DB session on Reporting_DB to the corresponding DB session on ReadOnlyStandby_DB.
2) Session trace: how to trace a session coming in over the DB link on ReadOnlyStandby_DB.
3) SQL analysis: the SQL on Reporting_DB is not the same as on ReadOnlyStandby_DB; it seems to change. How to connect SQL "a" on Reporting_DB with its related SQL "b" on ReadOnlyStandby_DB?
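A possible starting point for questions (1) and (2), as a sketch (the host name below is hypothetical; adjust to what V$SESSION actually shows on your standby): sessions arriving over a DB link usually appear on the remote database with a PROGRAM of the form 'oracle@<source-host>', so they can be located in V$SESSION and then traced with DBMS_MONITOR.

```sql
-- On ReadOnlyStandby_DB: find sessions that arrived over the DB link.
-- 'reporting-db-host' is a hypothetical machine name; replace it with the
-- actual host of Reporting_DB as reported in V$SESSION.MACHINE.
select sid, serial#, machine, program, module, sql_id
from   v$session
where  program like 'oracle@%'      -- connections opened by another Oracle instance
and    machine = 'reporting-db-host';

-- Then enable a SQL trace (10046) for that session:
begin
  dbms_monitor.session_trace_enable(
    session_id => :sid,
    serial_num => :serial,
    waits      => true,
    binds      => true);
end;
/
```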
Categories: DBA Blogs

Materialized view based on View expanded to underlying objects

Tom Kyte - Fri, 2025-11-07 17:20
Hello All

I have created a Materialized View using the following code:

<code>CREATE MATERIALIZED VIEW "APPS"."XXBST_UNPACK_PENDING_MV"
("CUST_ACCOUNT_ID", "CUSTOMER_NAME", "SUPPLIER_ID", "SUPPLIER_NAME", "SHIPMENT_ID", "SHIPMENT_NUMBER", "UNPACK_DATE", "DAYS_IN_UNPACK_PENDING_STATUS")
SEGMENT CREATION IMMEDIATE
ORGANIZATION HEAP
PCTFREE 10 PCTUSED 0 INITRANS 2 MAXTRANS 255
NOCOMPRESS NOLOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
        PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
        BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "APPS_TS_TX_DATA"
BUILD DEFERRED
USING INDEX
REFRESH COMPLETE ON DEMAND
  START WITH sysdate+0 NEXT sysdate + (1/24/60)*30
USING DEFAULT LOCAL ROLLBACK SEGMENT
USING TRUSTED CONSTRAINTS
EVALUATE USING CURRENT EDITION
DISABLE ON QUERY COMPUTATION
DISABLE QUERY REWRITE
AS SELECT * FROM xxbst_unpack_pending_v;</code>

This is in a non-prod environment which has just been restored from a backup, because the code in prod had been changed to this lowest-level expanded code:

<code>CREATE MATERIALIZED VIEW "APPS"."XXBST_UNPACK_PENDING_MV"
("CUST_ACCOUNT_ID", "CUSTOMER_NAME", "SUPPLIER_ID", "SUPPLIER_NAME", "SHIPMENT_ID", "SHIPMENT_NUMBER", "UNPACK_DATE", "DAYS_IN_UNPACK_PENDING_STATUS")
SEGMENT CREATION IMMEDIATE
ON PREBUILT TABLE WITHOUT REDUCED PRECISION
USING INDEX
REFRESH FORCE ON DEMAND
USING DEFAULT LOCAL ROLLBACK SEGMENT
USING ENFORCED CONSTRAINTS
EVALUATE USING CURRENT EDITION
DISABLE ON QUERY COMPUTATION
DISABLE QUERY REWRITE
AS SELECT "A1"."CUST_ACCOUNT_ID" "CUST_ACCOUNT_ID","A1"."CUSTOMER_NAME" "CUSTOMER_NAME","A1"."SUPPLIER_ID" "SUPPLIER_ID","A1"."SUPPLIER_NAME" "SUPPLIER_NAME","A1"."SHIPMENT_ID" "SHIPMENT_ID","A1"."SHIPMENT_NUMBER" "SHIPMENT_NUMBER","A1"."UNPACK_DATE" "UNPACK_DATE","A1"."DAYS_IN_UNPACK_PENDING_STATUS" "DAYS_IN_UNPACK_PENDING_STATUS"
FROM (SELECT "A7"."CUST_ACCOUNT_ID" "CUST_ACCOUNT_ID",
             "APPS"."XXBST_DEV_UTILS_PKG"."GET_CUST_SHORT_NAME"("A7"."CUST_ACCOUNT_ID") "CUSTOMER_NAME",
             "A3"."VENDOR_ID" "SUPPLIER_ID","A3"."VENDOR_NAME" "SUPPLIER_NAME",
             "A10"."SHIPMENT_ID" "SHIPMENT_ID","A10"."SHIPMENT_NUMBER" "SHIPMENT_NUMBER",
             MAX("A7"."CREATION_DATE") "UNPACK_DATE",
             TRUNC(SYSDATE)-TRUNC(MAX("A7"."CREATION_DATE")) "DAYS_IN_UNPACK_PENDING_STATUS"
      FROM "XXBST"."XXBST_TNT_SHPMNT" "A10",
           "XXBST"."XXBST_TNT_MV_PLAN" "A9",
           "XXBST"."XXBST_TNT_MV" "A8",
           "XXICE"."XXICE_SM_RCV_HEADERS" "A7",
           "XXBST"."XXBST_TNT_HEADERS_ALL" "A6",
           "XXBST"."XXBST_TNT_LINES_ALL" "A5",
           "XXBST"."XXBST_BOM_ORDER_HEADERS_ALL" "A4",
           (SELECT "A12"."VENDOR_ID" "VENDOR_ID","A12"."VENDOR_NAME" "VENDOR_NAME"
            FROM "AP"."AP_SUPPLIERS" "A12","AR"."HZ_PARTIES" "A11"
            WHERE "A12"."PARTY_ID"="A11"."PARTY_ID") "A3",
           "XXBST"."XXBST_TNT_MV_PLAN" "A2"
      WHERE "A10"."SHIPMENT_ID"="A7"."SHIPMENT_ID"
        AND "A10"."SHIPMENT_ID"="A9"."SHIPMENT_ID"
        AND "A9"."PLAN_ID"="A8"."PLAN_ID"
        AND "A8"."MV_TYPE"='DELIVERY'
        AND "A8"."MV_STATUS"='UNPACK_PENDING'
        AND "A6"."SHIPM...</code>
Categories: DBA Blogs

Export Oracle Label Security info from one DB to be imported into another DB

Tom Kyte - Fri, 2025-11-07 17:20
Hi Sir, I'm trying to copy the Oracle Label Security information from one database and apply it to another. I tried using expdp/impdp and dbms_metadata.get_ddl, but without success. Is there a special package or tool to get this done? Thanks in advance. Valerio Almeida
Categories: DBA Blogs

Oracle GoldenGate Response Files: Understanding the Evolution from 12c to 23ai

DBASolved - Fri, 2025-11-07 09:48

Your Oracle GoldenGate 21c response file won't work for 23ai deployments—and that's by design. Oracle fundamentally restructured GoldenGate configuration with 23ai, introducing deployment-level administrator separation, granular directory control, Configuration Service options, and enhanced security with TLS 1.3 support. This detailed comparison reveals the critical differences between response file schemas, explains why backward compatibility breaks, and provides practical migration strategies for maintaining automation across GoldenGate versions from 12c through 23ai. Learn how to leverage new capabilities like remote metrics and centralized configuration management while keeping your existing deployments running smoothly.

The post Oracle GoldenGate Response Files: Understanding the Evolution from 12c to 23ai appeared first on DBASolved.

Categories: DBA Blogs

Incremental Backups of a Standby Database

Hemant K Chitale - Fri, 2025-11-07 00:16

In response to a comment on a previous blog post, RMAN Backup of a Standby Database, I show Incremental Backups below.


RMAN> backup as compressed backupset incremental level 0 database;
backup as compressed backupset incremental level 0 database;
Starting backup at 07-NOV-25
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=219 device type=DISK
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/Standby_DB/oradata/STDBY/datafile/o1_mf_system_n16dxho8_.dbf
input datafile file number=00003 name=/Standby_DB/oradata/STDBY/datafile/o1_mf_sysaux_n16dxhob_.dbf
input datafile file number=00004 name=/Standby_DB/oradata/STDBY/datafile/o1_mf_undotbs1_n16dxhom_.dbf
input datafile file number=00009 name=/Standby_DB/oradata/STDBY/datafile/o1_mf_undotbs2_n16dxhon_.dbf
input datafile file number=00007 name=/Standby_DB/oradata/STDBY/datafile/o1_mf_users_n16dxhoo_.dbf
channel ORA_DISK_1: starting piece 1 at 07-NOV-25
channel ORA_DISK_1: finished piece 1 at 07-NOV-25
piece handle=/Standby_DB/FRA/STDBY/backupset/2025_11_07/o1_mf_nnnd0_TAG20251107T055853_njv2nxdx_.bkp tag=TAG20251107T055853 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:15
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00010 name=/Standby_DB/oradata/STDBY/1476C653214704CFE0635A38A8C08494/datafile/o1_mf_system_n16dz6sq_.dbf
input datafile file number=00011 name=/Standby_DB/oradata/STDBY/1476C653214704CFE0635A38A8C08494/datafile/o1_mf_sysaux_n16dz6sr_.dbf
input datafile file number=00012 name=/Standby_DB/oradata/STDBY/1476C653214704CFE0635A38A8C08494/datafile/o1_mf_undotbs1_n16dz6t1_.dbf
input datafile file number=00013 name=/Standby_DB/oradata/STDBY/1476C653214704CFE0635A38A8C08494/datafile/o1_mf_undo_2_n16dz6t2_.dbf
input datafile file number=00014 name=/Standby_DB/oradata/STDBY/1476C653214704CFE0635A38A8C08494/datafile/o1_mf_users_n16dz6tc_.dbf
channel ORA_DISK_1: starting piece 1 at 07-NOV-25
channel ORA_DISK_1: finished piece 1 at 07-NOV-25
piece handle=/Standby_DB/FRA/STDBY/1476C653214704CFE0635A38A8C08494/backupset/2025_11_07/o1_mf_nnnd0_TAG20251107T055853_njv2q8kr_.bkp tag=TAG20251107T055853 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:55
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00060 name=/Standby_DB/oradata/STDBY/PRDAPPTST/data_D-RACDB_TS-SYSTEM_FNO-60.dbf
input datafile file number=00061 name=/Standby_DB/oradata/STDBY/PRDAPPTST/data_D-RACDB_TS-SYSAUX_FNO-61.dbf
input datafile file number=00062 name=/Standby_DB/oradata/STDBY/PRDAPPTST/data_D-RACDB_TS-UNDOTBS1_FNO-62.dbf
input datafile file number=00063 name=/Standby_DB/oradata/STDBY/PRDAPPTST/data_D-RACDB_TS-UNDO_2_FNO-63.dbf
input datafile file number=00064 name=/Standby_DB/oradata/STDBY/PRDAPPTST/data_D-RACDB_TS-USERS_FNO-64.dbf
channel ORA_DISK_1: starting piece 1 at 07-NOV-25
channel ORA_DISK_1: finished piece 1 at 07-NOV-25
piece handle=/Standby_DB/FRA/STDBY/38AE431466FE1FDBE0635A38A8C085D8/backupset/2025_11_07/o1_mf_nnnd0_TAG20251107T055853_njv2rzo7_.bkp tag=TAG20251107T055853 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:55
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/Standby_DB/oradata/STDBY/14769E258FBB5FD8E0635A38A8C09D43/datafile/o1_mf_system_n16f20wr_.dbf
input datafile file number=00006 name=/Standby_DB/oradata/STDBY/14769E258FBB5FD8E0635A38A8C09D43/datafile/o1_mf_sysaux_n16f20x3_.dbf
input datafile file number=00008 name=/Standby_DB/oradata/STDBY/14769E258FBB5FD8E0635A38A8C09D43/datafile/o1_mf_undotbs1_n16f20xg_.dbf
channel ORA_DISK_1: starting piece 1 at 07-NOV-25
channel ORA_DISK_1: finished piece 1 at 07-NOV-25
piece handle=/Standby_DB/FRA/STDBY/14769E258FBB5FD8E0635A38A8C09D43/backupset/2025_11_07/o1_mf_nnnd0_TAG20251107T055853_njv2tprh_.bkp tag=TAG20251107T055853 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:55
Finished backup at 07-NOV-25

Starting Control File and SPFILE Autobackup at 07-NOV-25
piece handle=/Standby_DB/FRA/STDBY/autobackup/2025_11_07/o1_mf_s_1216533563_njv2wg2k_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 07-NOV-25


RMAN>
RMAN> exit
exit

RMAN Client Diagnostic Trace file : /u01/app/oracle/diag/clients/user_oracle/host_4144547424_110/trace/ora_2133_140607082957312.trc

Recovery Manager complete.
[oracle@stdby trace]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Nov 7 06:04:10 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.25.0.0.0

SQL> set pages600 linesize 132
SQL> select database_role, open_mode from v$database
  2  /

DATABASE_ROLE    OPEN_MODE
---------------- --------------------
PHYSICAL STANDBY MOUNTED

SQL>
SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.25.0.0.0
[oracle@stdby trace]$ 

[oracle@stdby trace]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Fri Nov 7 06:04:55 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: RACDB (DBID=1162136313, not open)

RMAN> list backup of datafile 1;
list backup of datafile 1;
using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
320     Incr 0  1.08G      DISK        00:01:10     07-NOV-25
        BP Key: 320   Status: AVAILABLE  Compressed: YES  Tag: TAG20251107T055853
        Piece Name: /Standby_DB/FRA/STDBY/backupset/2025_11_07/o1_mf_nnnd0_TAG20251107T055853_njv2nxdx_.bkp
  List of Datafiles in backup set 320
  File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------- ----------- ------ ----
  1    0  Incr 13423959   07-NOV-25              NO    /Standby_DB/oradata/STDBY/datafile/o1_mf_system_n16dxho8_.dbf


RMAN>
RMAN> select current_scn from v$database;
select current_scn from v$database;
CURRENT_SCN
-----------
   13442704


RMAN>

[oracle@stdby trace]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Nov 7 06:07:11 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.25.0.0.0

SQL> select current_scn, database_role, open_mode from v$database;

CURRENT_SCN DATABASE_ROLE    OPEN_MODE
----------- ---------------- --------------------
   13442704 PHYSICAL STANDBY MOUNTED

SQL> /

CURRENT_SCN DATABASE_ROLE    OPEN_MODE
----------- ---------------- --------------------
   13443585 PHYSICAL STANDBY MOUNTED

SQL>


SQL> SQL> l
  1* select current_scn, database_role, open_mode from v$database
SQL> /

CURRENT_SCN DATABASE_ROLE    OPEN_MODE
----------- ---------------- --------------------
   13443585 PHYSICAL STANDBY MOUNTED

SQL> /

CURRENT_SCN DATABASE_ROLE    OPEN_MODE
----------- ---------------- --------------------
   13449819 PHYSICAL STANDBY MOUNTED

SQL> /

CURRENT_SCN DATABASE_ROLE    OPEN_MODE
----------- ---------------- --------------------
   13449819 PHYSICAL STANDBY MOUNTED

SQL>


[oracle@stdby trace]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Fri Nov 7 06:10:58 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: RACDB (DBID=1162136313, not open)

RMAN> backup as compressed backupset incremental level 1 database;
backup as compressed backupset incremental level 1 database;
Starting backup at 07-NOV-25
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=219 device type=DISK
channel ORA_DISK_1: starting compressed incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/Standby_DB/oradata/STDBY/datafile/o1_mf_system_n16dxho8_.dbf
input datafile file number=00003 name=/Standby_DB/oradata/STDBY/datafile/o1_mf_sysaux_n16dxhob_.dbf
input datafile file number=00004 name=/Standby_DB/oradata/STDBY/datafile/o1_mf_undotbs1_n16dxhom_.dbf
input datafile file number=00009 name=/Standby_DB/oradata/STDBY/datafile/o1_mf_undotbs2_n16dxhon_.dbf
input datafile file number=00007 name=/Standby_DB/oradata/STDBY/datafile/o1_mf_users_n16dxhoo_.dbf
channel ORA_DISK_1: starting piece 1 at 07-NOV-25
channel ORA_DISK_1: finished piece 1 at 07-NOV-25
piece handle=/Standby_DB/FRA/STDBY/backupset/2025_11_07/o1_mf_nnnd1_TAG20251107T061125_njv3dg1n_.bkp tag=TAG20251107T061125 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting compressed incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00010 name=/Standby_DB/oradata/STDBY/1476C653214704CFE0635A38A8C08494/datafile/o1_mf_system_n16dz6sq_.dbf
input datafile file number=00011 name=/Standby_DB/oradata/STDBY/1476C653214704CFE0635A38A8C08494/datafile/o1_mf_sysaux_n16dz6sr_.dbf
input datafile file number=00012 name=/Standby_DB/oradata/STDBY/1476C653214704CFE0635A38A8C08494/datafile/o1_mf_undotbs1_n16dz6t1_.dbf
input datafile file number=00013 name=/Standby_DB/oradata/STDBY/1476C653214704CFE0635A38A8C08494/datafile/o1_mf_undo_2_n16dz6t2_.dbf
input datafile file number=00014 name=/Standby_DB/oradata/STDBY/1476C653214704CFE0635A38A8C08494/datafile/o1_mf_users_n16dz6tc_.dbf
channel ORA_DISK_1: starting piece 1 at 07-NOV-25
channel ORA_DISK_1: finished piece 1 at 07-NOV-25
piece handle=/Standby_DB/FRA/STDBY/1476C653214704CFE0635A38A8C08494/backupset/2025_11_07/o1_mf_nnnd1_TAG20251107T061125_njv3do5w_.bkp tag=TAG20251107T061125 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting compressed incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00060 name=/Standby_DB/oradata/STDBY/PRDAPPTST/data_D-RACDB_TS-SYSTEM_FNO-60.dbf
input datafile file number=00061 name=/Standby_DB/oradata/STDBY/PRDAPPTST/data_D-RACDB_TS-SYSAUX_FNO-61.dbf
input datafile file number=00062 name=/Standby_DB/oradata/STDBY/PRDAPPTST/data_D-RACDB_TS-UNDOTBS1_FNO-62.dbf
input datafile file number=00063 name=/Standby_DB/oradata/STDBY/PRDAPPTST/data_D-RACDB_TS-UNDO_2_FNO-63.dbf
input datafile file number=00064 name=/Standby_DB/oradata/STDBY/PRDAPPTST/data_D-RACDB_TS-USERS_FNO-64.dbf
channel ORA_DISK_1: starting piece 1 at 07-NOV-25
channel ORA_DISK_1: finished piece 1 at 07-NOV-25
piece handle=/Standby_DB/FRA/STDBY/38AE431466FE1FDBE0635A38A8C085D8/backupset/2025_11_07/o1_mf_nnnd1_TAG20251107T061125_njv3dw8p_.bkp tag=TAG20251107T061125 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/Standby_DB/oradata/STDBY/14769E258FBB5FD8E0635A38A8C09D43/datafile/o1_mf_system_n16f20wr_.dbf
skipping datafile 00005 because it has not changed
input datafile file number=00006 name=/Standby_DB/oradata/STDBY/14769E258FBB5FD8E0635A38A8C09D43/datafile/o1_mf_sysaux_n16f20x3_.dbf
skipping datafile 00006 because it has not changed
input datafile file number=00008 name=/Standby_DB/oradata/STDBY/14769E258FBB5FD8E0635A38A8C09D43/datafile/o1_mf_undotbs1_n16f20xg_.dbf
skipping datafile 00008 because it has not changed
channel ORA_DISK_1: backup cancelled because all files were skipped
Finished backup at 07-NOV-25

Starting Control File and SPFILE Autobackup at 07-NOV-25
piece handle=/Standby_DB/FRA/STDBY/autobackup/2025_11_07/o1_mf_s_1216534110_njv3dzjt_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 07-NOV-25


RMAN>

RMAN> list backup of datafile 1;
list backup of datafile 1;

List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
320     Incr 0  1.08G      DISK        00:01:10     07-NOV-25
        BP Key: 320   Status: AVAILABLE  Compressed: YES  Tag: TAG20251107T055853
        Piece Name: /Standby_DB/FRA/STDBY/backupset/2025_11_07/o1_mf_nnnd0_TAG20251107T055853_njv2nxdx_.bkp
  List of Datafiles in backup set 320
  File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------- ----------- ------ ----
  1    0  Incr 13423959   07-NOV-25              NO    /Standby_DB/oradata/STDBY/datafile/o1_mf_system_n16dxho8_.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
325     Incr 1  2.93M      DISK        00:00:05     07-NOV-25
        BP Key: 325   Status: AVAILABLE  Compressed: YES  Tag: TAG20251107T061125
        Piece Name: /Standby_DB/FRA/STDBY/backupset/2025_11_07/o1_mf_nnnd1_TAG20251107T061125_njv3dg1n_.bkp
  List of Datafiles in backup set 325
  File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------- ----------- ------ ----
  1    1  Incr 13449820   07-NOV-25              NO    /Standby_DB/oradata/STDBY/datafile/o1_mf_system_n16dxho8_.dbf


RMAN>

RMAN> exit
exit


Recovery Manager complete.
[oracle@stdby trace]$ export NLS_DATE_FORMAT=DD_MON_RR_HH24_MI
[oracle@stdby trace]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Fri Nov 7 06:15:24 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: RACDB (DBID=1162136313, not open)

RMAN> list backup of datafile 1;
list backup of datafile 1;
using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
320     Incr 0  1.08G      DISK        00:01:10     07_NOV_25_06_00
        BP Key: 320   Status: AVAILABLE  Compressed: YES  Tag: TAG20251107T055853
        Piece Name: /Standby_DB/FRA/STDBY/backupset/2025_11_07/o1_mf_nnnd0_TAG20251107T055853_njv2nxdx_.bkp
  List of Datafiles in backup set 320
  File LV Type Ckp SCN    Ckp Time        Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------------- ----------- ------ ----
  1    0  Incr 13423959   07_NOV_25_05_56              NO    /Standby_DB/oradata/STDBY/datafile/o1_mf_system_n16dxho8_.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
325     Incr 1  2.93M      DISK        00:00:05     07_NOV_25_06_11
        BP Key: 325   Status: AVAILABLE  Compressed: YES  Tag: TAG20251107T061125
        Piece Name: /Standby_DB/FRA/STDBY/backupset/2025_11_07/o1_mf_nnnd1_TAG20251107T061125_njv3dg1n_.bkp
  List of Datafiles in backup set 325
  File LV Type Ckp SCN    Ckp Time        Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------------- ----------- ------ ----
  1    1  Incr 13449820   07_NOV_25_06_08              NO    /Standby_DB/oradata/STDBY/datafile/o1_mf_system_n16dxho8_.dbf


RMAN>



After the Level-0 Backup, I waited for some time to confirm that the SCN at the Standby had advanced. Then I ran a Level-1 Backup.

The final listing shows two backups of datafile 1: the first a Level-0 backup at SCN 13423959 at 05:56 on 07-Nov, and the second a Level-1 backup at SCN 13449820 at 06:08 on 07-Nov.

Of course, I *must* also back up ArchiveLogs at the Standby.
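A minimal sketch of that ArchiveLog backup (standard RMAN syntax; the DELETE INPUT clause is optional and assumes the archived logs are no longer needed on disk at the Standby after being backed up):

```sql
RMAN> backup as compressed backupset archivelog all delete input;
```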

Note: Oracle recommends using an RMAN Catalog when running database backups.
Categories: DBA Blogs

APEX Error 572 autopsy

Flavio Casetta - Mon, 2025-11-03 07:35

A couple of days ago I exported an application from APEX 24.1.5 and imported it into APEX 24.2.9 on my OCI free tier instance.

Whilst the application itself was working fine, I had a problem when I tried to edit page 4 in the App Builder: the spinner kept spinning for an unusually long time and eventually stopped, leaving an empty page, and in the error notification area APEX showed "Error: 572".

 

Now, according to other users this Error 572 was a symptom of an overloaded database, which seemed strange to me because the page is not particularly complex and I could open other pages in the editor without problems.

After various unsuccessful attempts to gather meaningful information from the APEX Builder debug logs (see this interesting blog post by Steve Muench on how to do that), I decided to take a closer look at the differences between the previous version of this page and the current, problematic one.

Here are some relevant facts:

  • The current version of the page can be edited without problems in APEX 24.1.5.
  • If I export this single page from 24.1.5 and import it into 24.2.9, the problem persists; so whatever the problem is, it's not caused by importing the whole application.
  • The "new" version of the page works correctly when I run the application, so the APEX metadata must be good.
  • The only difference between the old version and the new version is in the source of the query I am using to populate an Interactive Report.
  • If I import the old version of the page, the App Builder editor resumes working properly.

The difference between the old and the new query is the following CASE expression; the old version did not contain the second WHEN/THEN pair (highlighted in yellow in the original post):

case 
  when json_value(e.value,'$.email[0]') is null
  then apex_lang.message('NO_RECIPIENT')
  when json_value(e.value,'$.email[1]') is null
  then apex_lang.message('SINGLE_RECIPIENT')
  else apex_lang.message('OTHER_RECIPIENTS')
end as RECIPIENTS

If I comment out those two lines in the page source, import the page, and try to edit it, everything is back to normal.
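The counting trick in the CASE above relies on JSON_VALUE returning NULL when an array index does not exist. A minimal, self-contained sketch of the same pattern (the JSON document here is hypothetical):

```sql
-- Sketch: JSON_VALUE returns NULL for a missing array element,
-- which is what the CASE above counts on to classify recipients.
with t as (
  select '{"email":["a@example.com"]}' as doc from dual
)
select case
         when json_value(t.doc, '$.email[0]') is null then 'NO_RECIPIENT'
         when json_value(t.doc, '$.email[1]') is null then 'SINGLE_RECIPIENT'
         else 'OTHER_RECIPIENTS'
       end as recipients
from t;
-- With a one-element array, this should classify the row as SINGLE_RECIPIENT.
```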

The tricky part with this type of problem is that if you receive the error upon entering the editor, you cannot delete the offending page, because Error 572 also breaks the delete-page functionality; your only option is to replace the page with an import of a working version.

If you receive this error while you are still inside the editor of the page, you may have a chance of reverting the page to its previous state.

In my case it was easy because I had an older working version of the page; if you don't, then you must work around the problem in some other fashion, in the worst case, I presume, by editing the import file and getting rid of some components.

In conclusion, my wild guess is that something goes wrong at parse time. In particular, I strongly suspect that the absence of meaningful values for the bind variables used in the conditions of my query breaks the logic of the JSON_VALUE function, which relies on the content of the JSON array "email" (empty at parse time, of course), resulting in this weird error.

Moreover, the IR query handling must have changed in some way between the two APEX releases.

Now, it could be that Error 572 is also returned when there is a "database overload", but in this specific case I can't see any correlation.

Or maybe there is something else going on that I can't fathom right now.

Categories: DBA Blogs

Tip of the day: hide (or show) multiple buttons or other page components with a single dynamic action

Flavio Casetta - Wed, 2025-10-29 10:52

Need to hide or show multiple buttons or page items based on a common client-side condition?

Instead of creating a separate dynamic action for each button, define a common CSS class (e.g. mybuttons) on each of the buttons and then use a single DA with a jQuery selector like ".mybuttons" for the Hide action and another for the Show action, thus reducing the page complexity.

And you can also extend this to other components like page items or regions, if they follow the same logic.

Categories: DBA Blogs

Design decision on database tables oracle DB

Tom Kyte - Sat, 2025-10-25 00:00
Hello experts, I have a UNIQUE constraint on (col1, col2) for a table. But, due to new functionality, I need a conditional unique constraint based on col3. So something like this: if col3 = 'Val1', a UNIQUE constraint on (col1, col2); else, a UNIQUE constraint on (col4, col5). I'm on Oracle DB 19c and found that creating a unique index with a CASE expression helps. Can you guide me on alternative options and the pros and cons of this design? I do not want to create any new table and want the best way to achieve conditional unique constraints. Thanks
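One common way to get the conditional uniqueness described above is a pair of function-based unique indexes: Oracle does not index entries whose key columns are all NULL, so a CASE expression that maps non-matching rows to NULL restricts enforcement to the matching rows. A sketch, using the hypothetical table and column names from the question:

```sql
-- Sketch: rows where every CASE expression yields NULL are not indexed,
-- so uniqueness is enforced only for the rows that match the condition.
create unique index t_uq_val1 on t (
  case when col3 = 'Val1' then col1 end,
  case when col3 = 'Val1' then col2 end
);

create unique index t_uq_other on t (
  case when col3 <> 'Val1' then col4 end,
  case when col3 <> 'Val1' then col5 end
);
```

Note that rows with a NULL col3 match neither index in this sketch; whether that is acceptable depends on the business rule.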
Categories: DBA Blogs

SQL developer hanging

Tom Kyte - Sat, 2025-10-25 00:00
I am using version 23.1.1, and when I start up the application, it just hangs. I have no idea what to look at to even begin diagnosing the problem.
Categories: DBA Blogs

Social Sign-In with Azure

Tom Kyte - Sat, 2025-10-25 00:00
Dear Experts, Social Sign-In with Microsoft Azure/Office 365 in Oracle APEX applications is working well. I used this how-to: https://tm-apex.hashnode.dev/implementing-social-sign-in-with-microsoft-azureoffice-365-in-apex When I use the substitution variable in APEX (&APP_USER.) I get the correct name. So far so good! But we need the samAccountName for checking authorization. I tried #samAccountName#, #sam_account_name# and #sam#. None of them work. :( In its help, APEX references the site https://openid.net/specs/openid-connect-basic-1_0.html#Scopes where I found other keys ("claims"). Not all of them work; for instance #sub# and #family_name# work, but #preferred_username# does not. With the help of Google I found other keys like #upn# (https://promatis.com/ch/en/build-a-secure-oracle-apex-app-with-microsoft-azure-ad-login-and-delegated-calendar-access/) which works fine and is not mentioned on the above website. But my question to you is: how do I get the samAccountName from Azure? What is the correct name/key/claim? Do I perhaps have to configure something other than "profile,email" in the scope text field?
Categories: DBA Blogs

Oracle returns default value for column on rows inserted before the column was added

Tom Kyte - Sat, 2025-10-25 00:00
<code>create table add_column_default ( id number )
/
insert into add_column_default ( id ) values ( 0 )
/
alter table add_column_default add col1 number default 5
/
insert into add_column_default ( id, col1 ) values ( 11, null )
/
select * from add_column_default order by id
/

        ID       COL1
---------- ----------
         0          5
        11

2 rows selected.

drop table add_column_default
/</code>

<b>Assumptions:</b> My understanding is that the data block is not modified during the ALTER. That is, the row with id = 0 is not updated to add a col1 value of 5. https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/ALTER-TABLE.html#:~:text=If%20you%20specify%20the%20DEFAULT,subject%20to%20the%20following%20restrictions%3A <i>"If you specify the DEFAULT clause for a column, then the default value is stored as metadata but the column itself is not populated with data. However, subsequent queries that specify the new column are rewritten so that the default value is returned in the result set."</i>

Note: this was not the case for nullable columns in 11.2. https://docs.oracle.com/cd/E18283_01/server.112/e17118/statements_3001.htm#CJAHHIBI:~:text=When%20you%20add%20a%20column%2C%20the,a%20default%20value%20or%20NULL. <i>"When you add a column, the initial value of each row for the new column is null. ... If you specify the DEFAULT clause for a nullable column, then the default value is added to existing rows as part of this ALTER TABLE statement, and any update triggers defined on the table are fired. This behavior also results if you change a NOT NULL column with a default value to be nullable."</i>

My understanding is also that the data block does not store any information (not even the length byte) for col1 when the value is null and col1 is the last column in the table.

<b>Therefore:</b> Rows inserted before the ALTER have no col1 information, and rows inserted after the ALTER may also have no col1 information (when NULL is inserted into the last column of the table).
<b>Confusion/Question:</b> If both rows look the same (with respect to col1 information) in the data block, then how does Oracle know to return a col1 value of 5 for the row with id = 0 and return a col1 value of null for the row with id = 11?
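One place to look for the answer (my assumption, hedged: in 12c and later, adding a DEFAULT to a nullable column without updating existing rows is implemented with a hidden guard column that records whether a row predates the ALTER) is the data dictionary, which exposes hidden columns. A sketch against the demo table above:

```sql
-- Sketch: look for a hidden SYS_NC...$ guard column that Oracle may add
-- when a DEFAULT is added to a nullable column without rewriting rows.
-- Rows lacking guard-column data predate the ALTER and get the
-- metadata default; rows inserted afterwards carry their own state.
select column_name, hidden_column, default_on_null
from   user_tab_cols
where  table_name = 'ADD_COLUMN_DEFAULT'
order  by internal_column_id;
```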
Categories: DBA Blogs

XMLTYPE returning unknown special character

Tom Kyte - Sat, 2025-10-25 00:00
Hello Sir, we're using Oracle Database 19c Enterprise Edition. We receive an XML tag in VARCHAR2 format and then use XMLTYPE to convert it to XML. However, when there is a special character in the XML tag, it produces an escaped character in the output. What can we do to get the original text back? Sample SQL: <code>select XMLTYPE('<tag1> a''bc </tag1>') from dual;</code> Output: <code><tag1> a&apos;bc </tag1></code> Expected output: <code><tag1> a'bc </tag1></code>
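The <code>&apos;</code> here is standard XML entity escaping in the serialized output, not corruption. One way (a sketch, not necessarily the only approach) to get the unescaped text back is to cast the element's value to VARCHAR2 instead of serializing the whole document:

```sql
-- The serialized XML legitimately escapes the apostrophe as &apos;.
-- Casting the XML value back to VARCHAR2 should yield the unescaped text.
select xmlcast(
         xmltype('<tag1> a''bc </tag1>') as varchar2(100)
       ) as tag_text
from dual;
```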
Categories: DBA Blogs

Sql Plan Baseline

Tom Kyte - Sat, 2025-10-25 00:00
Hi dear experts, I want to create a SQL plan baseline for one of my system queries on a two-node RAC 19.21. As user SYS, inside a PDB, I run the following commands: <code>var v_num number; exec :v_num:=dbms_spm.load_plans_from_cursor_cache(sql_id => '0b3...............',plan_hash_value => 2170529430 ,fixed=>'YES');</code> It completes successfully, but the return value of the above call is 0. And when I check the plan baselines with the following command, I see that nothing new has been created: <code>select * from dba_sql_plan_baselines</code> I check with the following command and see that plan_hash_value 2170529430 exists in memory: <code>select sql_id, plan_hash_value, parsing_schema_name, sql_text from v$sql where sql_id = '0b3...............';</code> What is the problem?
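The return value of DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE is the number of plans loaded, so a result of 0 means no baseline was created, often because the sql_id/plan_hash_value pair was no longer in the cursor cache at that moment. A sketch that surfaces the count (the truncated sql_id from the question is kept as-is):

```sql
-- Sketch: print how many plans were actually loaded.
-- 0 means nothing was loaded; re-check v$sql on the instance where
-- you run this, since cursor caches are per-instance in RAC.
set serveroutput on
declare
  n pls_integer;
begin
  n := dbms_spm.load_plans_from_cursor_cache(
         sql_id          => '0b3...............',
         plan_hash_value => 2170529430,
         fixed           => 'YES');
  dbms_output.put_line('plans loaded: ' || n);
end;
/
```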
Categories: DBA Blogs

Deleting duplicate records without using rowid and rownum

Tom Kyte - Sat, 2025-10-25 00:00
Hi Tom, if there are duplicate records in a table, we know how to eliminate the duplicate rows using ROWID. But is there any way to delete the duplicate records in a table without using ROWID or ROWNUM? My friend, who has worked with Oracle for around 4 years, says it is not possible, but I think there should be some way to do this. Please give me a suggestion, Tom; I would be thankful if you could illustrate with examples. Thanks in advance, Prakash
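One way to do it without ROWID or ROWNUM (a sketch with a hypothetical table <code>t</code>; it rewrites the table rather than deleting rows in place, so it needs an outage window and enough space for the copy):

```sql
-- Sketch: keep exactly one copy of each distinct row, no ROWID needed.
create table t_dedup as select distinct * from t;

truncate table t;

insert into t select * from t_dedup;

drop table t_dedup;
```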
Categories: DBA Blogs

Table Design

Tom Kyte - Sat, 2025-10-25 00:00
Hello, I work as a DBA and get requests from developers to create tables, which we often review and sometimes change for better design and/or implementation. One of the developers recently sent a request to create a table like the following:

table_name: t1_relation
columns: c1_master_id_pk (foreign key to the t1_master table), c1_attribute, c1_value
primary key: all 3 columns

They explained that the data in all these columns is non-nullable and that all three are needed to uniquely identify a record. Having all the columns of a table in the primary key didn't look right to me, so I suggested we create a surrogate key and make the 3 columns unique with NOT NULL constraints on all of them. They initially said yes, then came back and said to change it back to what they requested originally. I'm writing to ask whether this is proper database design and what the ideal way to implement it would be.
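The two designs under discussion, sketched as DDL (column names are taken from the request; datatypes are hypothetical, and t1_master is assumed to have a primary key):

```sql
-- Option 1: natural composite primary key, as the developers requested.
-- (Only one of these two variants of t1_relation would exist.)
create table t1_relation (
  c1_master_id_pk number        not null references t1_master,
  c1_attribute    varchar2(100) not null,
  c1_value        varchar2(100) not null,
  primary key (c1_master_id_pk, c1_attribute, c1_value)
);

-- Option 2: surrogate key plus a unique constraint on the natural key,
-- as the DBA suggested.
create table t1_relation (
  relation_id     number generated always as identity primary key,
  c1_master_id_pk number        not null references t1_master,
  c1_attribute    varchar2(100) not null,
  c1_value        varchar2(100) not null,
  unique (c1_master_id_pk, c1_attribute, c1_value)
);
```

Either variant enforces the same uniqueness; the trade-off is mainly whether other tables need to reference t1_relation (favoring the compact surrogate key) versus avoiding an extra column and index.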
Categories: DBA Blogs

ORA-04063 Errors During Datapatch Rollback of Oracle Patch 30763851

Bobby Durrett's DBA Blog - Fri, 2025-10-24 13:50

I am working on an Oracle 19c database running on a RHEL 7 VM. I have been trying to roll back this patch:

Patch 30763851 IMPDP 11.2 TO 18C OR HIGHER HITS ORA-904 WHEN TABLES HAVE EXTENDED STATISTICS

I have it sitting on top of 19.5

Database Release Update: 19.5.0.0.191015 (30125133)

The Opatch rollback command ran fine, but Datapatch threw these errors:

[2025-10-22 18:50:15]   -> Error at line 11329: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_OPQTYPE_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11331: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_OPQTYPE_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11333: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_OPQTYPE_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11335: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_OPQTYPE_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11337: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_OPQTYPE_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11339: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_OPQTYPE_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11341: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_OPQTYPE_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11343: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_OPQTYPE_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11355: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_P2TPARTCOL_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11357: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_P2TPARTCOL_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11363: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_SP2TPARTCOL_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11365: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_SP2TPARTCOL_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11381: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_COLUMN_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11383: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_COLUMN_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11385: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_COLUMN_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11387: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_COLUMN_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11389: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_PCOLUMN_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11391: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_PCOLUMN_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11393: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_PCOLUMN_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11395: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_PCOLUMN_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11397: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_P2TCOLUMN_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11399: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_P2TCOLUMN_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11401: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_SP2TCOLUMN_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11403: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_SP2TCOLUMN_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11405: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_COLUMN_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11407: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_COLUMN_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11409: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_PCOLUMN_VIEW" has errors"
[2025-10-22 18:50:15]   -> Error at line 11411: script rdbms/admin/dpload.sql
[2025-10-22 18:50:15]       - ORA-04063: view "SYS.KU$_PCOLUMN_VIEW" has errors"

I checked DBA_OBJECTS, and all the SYS objects are VALID. I tried querying one of the views and it worked fine. So I went to My Oracle Support, Oracle's database support site, and searched for ORA-04063 and one of the view names, and found nothing. A Google search also came up empty. I tried just ignoring the errors, but that didn't work: my whole goal was to apply the October 2025 patches that just came out this week, and because the SQL patch registry indicated that patch 30763851 rolled back with errors, every time I applied a new patch it would try to roll 30763851 back first and error again. Here is what DBA_REGISTRY_SQLPATCH looked like after two failed rollback attempts:

INSTALL_ID   PATCH_ID PATCH_TYPE ACTION          STATUS        
---------- ---------- ---------- --------------- --------------
         1   30125133 RU         APPLY           SUCCESS       
         2   30763851 INTERIM    APPLY           SUCCESS       
         3   30763851 INTERIM    ROLLBACK        WITH ERRORS   
         3   30763851 INTERIM    ROLLBACK        WITH ERRORS   
         4   30763851 INTERIM    ROLLBACK        WITH ERRORS   
         4   30763851 INTERIM    ROLLBACK        WITH ERRORS   

Each rollback attempt ran the rollback twice, so the two attempts show up as four failure rows.

I opened a case with Oracle support just in case this was a known issue that wasn’t available for me to find on my own. Sometimes that happens. But while waiting on Oracle I kept trying to fix it myself.

The errors refer to $ORACLE_HOME/rdbms/admin/dpload.sql, which I think reloads Data Pump after some change. It runs catmetviews.sql and catmetviews_mig.sql, which contain the CREATE VIEW statements for the views getting errors, like SYS.KU$_OPQTYPE_VIEW. But the code in catmetviews_mig.sql wasn't straightforward. I imagined running some sort of trace to see why the script was throwing the ORA-04063 errors, but I never had to take it that far.

At first all this stressed me out. I thought, “I can’t back out this patch. I will never be able to patch this database to a current patch level.” Then I chilled out and realized that if it was a problem with Oracle’s code, they had to help me back out 30763851. But it might take some time to work through an SR with Oracle.

But what if it wasn't an issue with Oracle's code but something weird in our environment? There were some odd messages coming out that I didn't think indicated a real problem, because I was used to seeing them. They came from triggers installed by an auditing tool called DB Protect and looked like this:

[SYS.SENSOR_DDL_TRIGGER_A] Caught a standard exception: aliasId=100327, error=-29260, message="ORA-29260: network error: TNS:no listener"

We are used to seeing these errors when we do DDL, but prior to this they never caused any actual problems. We had already decommissioned the DB Protect tool but had not cleaned up its triggers. Dropping SYS.SENSOR_DDL_TRIGGER_A eliminated the ORA-04063 errors.
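If you want to check whether leftover triggers like these exist before dropping anything, a quick dictionary query works (a sketch; the SENSOR name pattern is assumed from the trigger name in the message above):

```sql
-- Sketch: list any leftover DB Protect sensor triggers owned by SYS.
select owner, trigger_name, status
from   dba_triggers
where  owner = 'SYS'
and    trigger_name like 'SENSOR%';
```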

Probably no one else will ever encounter this same issue, but I thought I would document it. If you have the same symptoms and you are not using DB Protect any more, run these commands:

DROP TRIGGER SYS.SENSOR_DDL_TRIGGER_A;
DROP TRIGGER SYS.SENSOR_DDL_TRIGGER_B;

I think the A trigger was the problem, but we don’t need either one.

Anyway, this post is just so someone who searches for ORA-04063 and one of the views will find this information and drop the triggers if they have them. It’s a long shot but might as well document it for posterity and for me.

Bobby

Categories: DBA Blogs
