Hemant K Chitale
Querying the Alert log in RAC
Although v$diag_alert_ext works for a single instance, there is no corresponding gv$ view for RAC.
However, the GV$() table function can be used to execute a cursor across all instances. Here I query the alert logs of the two nodes, srv1 and srv2, and order the results by timestamp to capture messages from the last 15 minutes. Note all the activity that happens when I CLOSE and later OPEN a PDB on one node, resulting in the other node also responding to maintain Global Resources.
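For readability, here is the same query laid out across multiple lines (the full SQL*Plus session, with output, follows below). The message filter can be tightened, e.g. to '%ORA-%', to return only error messages :

select *
  from table(gv$(cursor
         (select originating_timestamp, host_id, con_id, message_text
            from v$diag_alert_ext
           where originating_timestamp > sysdate - 0.25/24  -- time filter : last 15 minutes
             and message_text like '%'                      -- message filter (e.g. '%ORA-%')
         )
       ))
 order by 1, 2, 3
/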
SQL> l 1 select * from table(gv$(cursor 2 (select originating_timestamp, host_id, con_id, message_text 3 from v$diag_alert_ext 4 where originating_timestamp > sysdate - 0.25/24 -- time filter 5 and message_text like '%' -- message filter 6 ) 7 ) 8 ) 9* order by 1,2,3 SQL> / ORIGINATING_TIMESTAMP --------------------------------------------------------------------------- HOST_ID CON_ID ------------------------------------------------------------------- ---------- MESSAGE_TEXT ------------------------------------------------------------------------------------------------------------------------------------ 07-JAN-25 02.23.37.941000000 PM +00:00 srv2 1 Thread 2 cannot allocate new log, sequence 278 07-JAN-25 02.23.37.941000000 PM +00:00 srv2 1 Checkpoint not complete 07-JAN-25 02.23.37.941000000 PM +00:00 srv2 1 Current log# 3 seq# 277 mem# 0: +DATA/RACDB/ONLINELOG/group_3.270.1164520171 07-JAN-25 02.23.37.941000000 PM +00:00 srv2 1 Current log# 3 seq# 277 mem# 1: +FRA/RACDB/ONLINELOG/group_3.259.1164520171 07-JAN-25 02.23.38.255000000 PM +00:00 srv1 1 Thread 1 cannot allocate new log, sequence 322 07-JAN-25 02.23.38.255000000 PM +00:00 srv1 1 Current log# 1 seq# 321 mem# 1: +FRA/RACDB/ONLINELOG/group_1.257.1164519549 07-JAN-25 02.23.38.255000000 PM +00:00 srv1 1 Current log# 1 seq# 321 mem# 0: +DATA/RACDB/ONLINELOG/group_1.263.1164519549 07-JAN-25 02.23.38.255000000 PM +00:00 srv1 1 Checkpoint not complete 07-JAN-25 02.23.40.951000000 PM +00:00 srv2 1 Thread 2 advanced to log sequence 278 (LGWR switch), current SCN: 10841502 07-JAN-25 02.23.40.951000000 PM +00:00 srv2 1 Current log# 4 seq# 278 mem# 0: +DATA/RACDB/ONLINELOG/group_4.271.1164520171 07-JAN-25 02.23.40.951000000 PM +00:00 srv2 1 Current log# 4 seq# 278 mem# 1: +FRA/RACDB/ONLINELOG/group_4.260.1164520173 07-JAN-25 02.23.40.976000000 PM +00:00 srv2 1 Deleted Oracle managed file +FRA/RACDB/ARCHIVELOG/2024_11_02/thread_1_seq_240.500.1183975081 07-JAN-25 02.23.40.977000000 PM +00:00 srv2 1 ARC2 (PID:17940): Archived Log entry 1023 added for B-1164519547.T-2.S-277 LOS:0x0000000000a568cc NXS:0x0000000000a56d9e NAB:372 ID 0x46c5be03 LAD:1 07-JAN-25 02.23.41.048000000 PM +00:00 srv1 1 Thread 1 advanced to log sequence 322 (LGWR switch), current SCN: 10841505 07-JAN-25 02.23.41.048000000 PM +00:00 srv1 1 Current log# 2 seq# 322 mem# 1: +FRA/RACDB/ONLINELOG/group_2.258.1164519549 07-JAN-25 02.23.41.048000000 PM +00:00 srv1 1 Current log# 2 seq# 322 mem# 0: +DATA/RACDB/ONLINELOG/group_2.262.1164519549 07-JAN-25 02.23.41.117000000 PM +00:00 srv2 1 Deleted Oracle managed file +FRA/RACDB/ARCHIVELOG/2024_11_02/thread_2_seq_221.499.1183975181 07-JAN-25 02.23.41.138000000 PM +00:00 srv1 1 Deleted Oracle managed file +FRA/RACDB/ARCHIVELOG/2024_11_02/thread_2_seq_222.498.1183975205 07-JAN-25 02.23.41.139000000 PM +00:00 srv1 1 NET (PID:15641): Archived Log entry 1025 added for B-1164519547.T-1.S-321 LOS:0x0000000000a568c9 NXS:0x0000000000a56da1 NAB:369 ID 0x46c5be03 LAD:1 07-JAN-25 02.23.41.209000000 PM +00:00 srv1 1 Deleted Oracle managed file +FRA/RACDB/ARCHIVELOG/2024_11_02/thread_1_seq_241.497.1183975551 07-JAN-25 02.27.41.986000000 PM +00:00 srv1 1 alter pluggable database pdb1 close 07-JAN-25 02.27.41.997000000 PM +00:00 srv1 3 Pluggable database PDB1 closing 07-JAN-25 02.27.42.004000000 PM +00:00 srv1 3 Increasing priority of 2 RS 07-JAN-25 02.27.42.011000000 PM +00:00 srv1 3 JIT: pid 19878 requesting stop 07-JAN-25 02.27.42.044000000 PM +00:00 srv1 3 Stopped service mypdb 07-JAN-25 02.27.42.208000000 PM +00:00 srv1 3 Closing sequence 
subsystem (3300983922). 07-JAN-25 02.27.42.332000000 PM +00:00 srv1 3 Buffer Cache flush started: 3 07-JAN-25 02.27.42.345000000 PM +00:00 srv1 3 Buffer Cache flush finished: 3 07-JAN-25 02.27.42.347000000 PM +00:00 srv2 1 Increasing priority of 2 RS 07-JAN-25 02.27.42.350000000 PM +00:00 srv1 3 queued detach DA request 0x934eafc8 for pdb 3, ospid 19878 07-JAN-25 02.27.42.351000000 PM +00:00 srv1 1 Domain Action Reconfiguration started (domid 3, new da inc 3, cluster inc 8) 07-JAN-25 02.27.42.351000000 PM +00:00 srv1 1 Instance 1 is detaching from domain 3 (lazy abort? 0, recovery member? 0) 07-JAN-25 02.27.42.352000000 PM +00:00 srv1 1 Global Resource Directory partially frozen for domain action 07-JAN-25 02.27.42.352000000 PM +00:00 srv1 1 * domain detach - domain 3 valid ? 1 07-JAN-25 02.27.42.430000000 PM +00:00 srv2 1 Domain Action Reconfiguration started (domid 3, new da inc 3, cluster inc 8) 07-JAN-25 02.27.42.430000000 PM +00:00 srv2 1 Instance 1 is detaching from domain 3 (lazy abort? 0, recovery member? 0) 07-JAN-25 02.27.42.430000000 PM +00:00 srv2 1 Global Resource Directory partially frozen for domain action 07-JAN-25 02.27.42.430000000 PM +00:00 srv2 1 * domain detach - domain 3 valid ? 1 07-JAN-25 02.27.42.432000000 PM +00:00 srv2 1 Non-local Process blocks cleaned out 07-JAN-25 02.27.42.434000000 PM +00:00 srv2 1 Set master node info 07-JAN-25 02.27.42.435000000 PM +00:00 srv2 1 Dwn-cvts replayed, VALBLKs dubious 07-JAN-25 02.27.42.435000000 PM +00:00 srv2 1 All grantable enqueues granted 07-JAN-25 02.27.42.437000000 PM +00:00 srv1 1 Non-local Process blocks cleaned out 07-JAN-25 02.27.42.438000000 PM +00:00 srv1 1 Set master node info 07-JAN-25 02.27.42.439000000 PM +00:00 srv1 1 Dwn-cvts replayed, VALBLKs dubious 07-JAN-25 02.27.42.440000000 PM +00:00 srv1 1 All grantable enqueues granted 07-JAN-25 02.27.42.440000000 PM +00:00 srv2 1 Domain Action Reconfiguration complete (total time 0.0 secs) 07-JAN-25 02.27.42.440000000 PM +00:00 srv2 1 Decreasing priority of 2 RS 07-JAN-25 02.27.42.444000000 PM +00:00 srv1 1 freeing the fusion rht of pdb 3 07-JAN-25 02.27.42.445000000 PM +00:00 srv1 1 freeing the pdb enqueue rht 07-JAN-25 02.27.42.445000000 PM +00:00 srv1 1 Decreasing priority of 2 RS 07-JAN-25 02.27.42.445000000 PM +00:00 srv1 1 Domain Action Reconfiguration complete (total time 0.1 secs) 07-JAN-25 02.27.42.542000000 PM +00:00 srv1 1 Pluggable database PDB1 closed 07-JAN-25 02.27.42.571000000 PM +00:00 srv1 1 Completed: alter pluggable database pdb1 close 07-JAN-25 02.28.10.713000000 PM +00:00 srv1 1 alter pluggable database pdb1 open 07-JAN-25 02.28.10.715000000 PM +00:00 srv1 3 Pluggable database PDB1 opening in read write 07-JAN-25 02.28.10.892000000 PM +00:00 srv1 3 Increasing priority of 2 RS 07-JAN-25 02.28.10.892000000 PM +00:00 srv1 3 Autotune of undo retention is turned on. 
07-JAN-25 02.28.10.892000000 PM +00:00 srv1 3 SUPLOG: Initialize PDB SUPLOG SGA, old value 0x0, new value 0x18 07-JAN-25 02.28.10.929000000 PM +00:00 srv1 3 queued attach DA request 0x934eaf60 for pdb 3, ospid 19878 07-JAN-25 02.28.10.929000000 PM +00:00 srv2 1 Increasing priority of 2 RS 07-JAN-25 02.28.10.933000000 PM +00:00 srv1 1 Domain Action Reconfiguration started (domid 3, new da inc 4, cluster inc 8) 07-JAN-25 02.28.10.933000000 PM +00:00 srv1 1 Global Resource Directory partially frozen for domain action 07-JAN-25 02.28.10.933000000 PM +00:00 srv1 1 Instance 1 is attaching to domain 3 07-JAN-25 02.28.10.940000000 PM +00:00 srv2 1 Domain Action Reconfiguration started (domid 3, new da inc 4, cluster inc 8) 07-JAN-25 02.28.10.940000000 PM +00:00 srv2 1 Instance 1 is attaching to domain 3 07-JAN-25 02.28.10.941000000 PM +00:00 srv2 1 Global Resource Directory partially frozen for domain action 07-JAN-25 02.28.10.943000000 PM +00:00 srv2 1 Non-local Process blocks cleaned out 07-JAN-25 02.28.10.946000000 PM +00:00 srv2 1 Set master node info 07-JAN-25 02.28.10.947000000 PM +00:00 srv1 1 Non-local Process blocks cleaned out 07-JAN-25 02.28.10.947000000 PM +00:00 srv2 1 Dwn-cvts replayed, VALBLKs dubious 07-JAN-25 02.28.10.948000000 PM +00:00 srv1 1 Set master node info 07-JAN-25 02.28.10.949000000 PM +00:00 srv2 1 All grantable enqueues granted 07-JAN-25 02.28.10.952000000 PM +00:00 srv1 1 Dwn-cvts replayed, VALBLKs dubious 07-JAN-25 02.28.10.953000000 PM +00:00 srv1 1 All grantable enqueues granted 07-JAN-25 02.28.11.034000000 PM +00:00 srv2 1 Domain Action Reconfiguration complete (total time 0.1 secs) 07-JAN-25 02.28.11.034000000 PM +00:00 srv2 1 Decreasing priority of 2 RS 07-JAN-25 02.28.11.038000000 PM +00:00 srv1 1 Domain Action Reconfiguration complete (total time 0.1 secs) 07-JAN-25 02.28.11.038000000 PM +00:00 srv1 1 Decreasing priority of 2 RS 07-JAN-25 02.28.11.135000000 PM +00:00 srv1 3 Endian type of dictionary set to little 07-JAN-25 02.28.11.213000000 PM +00:00 srv1 3 Undo initialization recovery: err:0 start: 3329988 end: 3329989 diff: 1 ms (0.0 seconds) 07-JAN-25 02.28.11.352000000 PM +00:00 srv1 3 [19878] Successfully onlined Undo Tablespace 2. 07-JAN-25 02.28.11.352000000 PM +00:00 srv1 3 Undo initialization online undo segments: err:0 start: 3329989 end: 3330129 diff: 140 ms (0.1 seconds) 07-JAN-25 02.28.11.357000000 PM +00:00 srv1 3 Undo initialization finished serial:0 start:3329988 end:3330134 diff:146 ms (0.1 seconds) 07-JAN-25 02.28.11.361000000 PM +00:00 srv1 3 Database Characterset for PDB1 is AL32UTF8 07-JAN-25 02.28.11.627000000 PM +00:00 srv1 3 SUPLOG: Set PDB SUPLOG SGA at PDB OPEN, old 0x18, new 0x0 (no suplog) 07-JAN-25 02.28.11.891000000 PM +00:00 srv1 3 Started service mypdb/mypdb/mypdb 07-JAN-25 02.28.11.925000000 PM +00:00 srv1 3 Opening pdb with no Resource Manager plan active 07-JAN-25 02.28.12.097000000 PM +00:00 srv1 3 joxcsys_required_dirobj_exists: directory object exists with required path /u02/app/oracle/product/19.0.0/dbhome_1/javavm/admin/, pi d 19878 cid 3 07-JAN-25 02.28.12.118000000 PM +00:00 srv1 1 Pluggable database PDB1 opened read write 07-JAN-25 02.28.12.122000000 PM +00:00 srv1 1 Completed: alter pluggable database pdb1 open 07-JAN-25 02.33.26.501000000 PM +00:00 srv1 1 Control autobackup written to DISK device 07-JAN-25 02.33.26.502000000 PM +00:00 srv1 1 handle '+FRA/RACDB/AUTOBACKUP/2025_01_07/s_1189780406.497.1189780407' 93 rows selected. 
At 07-JAN-25 02.27.41.986000000 PM +00:00, I issued the command to close PDB1 (CON_ID=3) on host srv1.
You can also see the "Stopped service mypdb" message on srv1 at 07-JAN-25 02.27.42.
3 million PageViews, 303 thousand VideoViews
This blog, begun in December 2006, has now hit a cumulative count of 3 million PageViews.
This Chart shows counts from the year 2011 :
My YouTube Channel, begun in January 2014, has hit a cumulative count of 303 thousand views :
A Big Thank You to all the Viewers and Subscribers.
DataPump with CheckSum in 21c and above
Oracle introduced a CheckSum parameter for DataPump in 21c. Here is a demo in 23.6.
First I run the datapump export without the CheckSum:
[oracle@localhost ~]$ expdp hemant/hemant@freepdb1 schemas=HEMANT dumpfile=HEMANT Export: Release 23.0.0.0.0 - Limited Availability on Sat Dec 28 12:31:26 2024 Version 23.6.0.24.10 Copyright (c) 1982, 2024, Oracle and/or its affiliates. All rights reserved. Connected to: Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free Starting "HEMANT"."SYS_EXPORT_SCHEMA_01": hemant/********@freepdb1 schemas=HEMANT dumpfile=HEMANT Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS Processing object type SCHEMA_EXPORT/USER Processing object type SCHEMA_EXPORT/SYSTEM_GRANT Processing object type SCHEMA_EXPORT/ROLE_GRANT Processing object type SCHEMA_EXPORT/DEFAULT_ROLE Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA/LOGREP Processing object type SCHEMA_EXPORT/TABLE/TABLE Processing object type SCHEMA_EXPORT/TABLE/COMMENT Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX . . exported "HEMANT"."MY_LOBS" 7.9 MB 75929 rows Master table "HEMANT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded ****************************************************************************** Dump file set for HEMANT.SYS_EXPORT_SCHEMA_01 is: /opt/oracle/admin/FREE/dpdump/2917EBB8705B3129E0630100007F3D2B/HEMANT.dmp Job "HEMANT"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat Dec 28 12:32:06 2024 elapsed 0 00:00:38 [oracle@localhost ~]$
Now this is an export WITH the CheckSum (after verifying that COMPATIBLE is 20.0 or higher) by specifying CHECKSUM=YES :
[oracle@localhost ~]$ sqlplus hemant/hemant@freepdb1 SQL*Plus: Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems on Sat Dec 28 12:35:20 2024 Version 23.6.0.24.10 Copyright (c) 1982, 2024, Oracle. All rights reserved. Last Successful login time: Sat Dec 28 2024 12:31:26 +00:00 Connected to: Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free Version 23.6.0.24.10 SQL> show parameter compatible NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ compatible string 23.6.0 noncdb_compatible boolean FALSE SQL> quit Disconnected from Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free Version 23.6.0.24.10 [oracle@localhost ~]$ expdp hemant/hemant@freepdb1 schemas=HEMANT dumpfile=HEMANT_With_CheckSum checksum=YES Export: Release 23.0.0.0.0 - Limited Availability on Sat Dec 28 12:35:55 2024 Version 23.6.0.24.10 Copyright (c) 1982, 2024, Oracle and/or its affiliates. All rights reserved. Connected to: Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free Starting "HEMANT"."SYS_EXPORT_SCHEMA_01": hemant/********@freepdb1 schemas=HEMANT dumpfile=HEMANT_With_CheckSum checksum=YES Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS Processing object type SCHEMA_EXPORT/USER Processing object type SCHEMA_EXPORT/SYSTEM_GRANT Processing object type SCHEMA_EXPORT/ROLE_GRANT Processing object type SCHEMA_EXPORT/DEFAULT_ROLE Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA/LOGREP Processing object type SCHEMA_EXPORT/TABLE/TABLE Processing object type SCHEMA_EXPORT/TABLE/COMMENT Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX . . exported "HEMANT"."MY_LOBS" 7.9 MB 75929 rows Master table "HEMANT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded Generating checksums for dump file set ****************************************************************************** Dump file set for HEMANT.SYS_EXPORT_SCHEMA_01 is: /opt/oracle/admin/FREE/dpdump/2917EBB8705B3129E0630100007F3D2B/HEMANT_With_CheckSum.dmp Job "HEMANT"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat Dec 28 12:36:33 2024 elapsed 0 00:00:36 [oracle@localhost ~]$
After I transfer the dumpfile to another server, I verify the CheckSum with VERIFY_ONLY=YES:
[oracle@localhost ~]$ sqlplus / as sysdba SQL*Plus: Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems on Sat Dec 28 12:40:54 2024 Version 23.6.0.24.10 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free Version 23.6.0.24.10 SQL> create directory impdp_check as '/tmp'; Directory created. SQL> quit Disconnected from Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free Version 23.6.0.24.10 [oracle@localhost ~]$ impdp directory=impdp_check dumpfile=HEMANT_With_CheckSum.dmp verify_only=YES Import: Release 23.0.0.0.0 - Limited Availability on Sat Dec 28 12:42:28 2024 Version 23.6.0.24.10 Copyright (c) 1982, 2024, Oracle and/or its affiliates. All rights reserved. Username: / as sysdba Connected to: Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free Warning: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database. Verifying dump file checksums Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded dump file set is complete verified checksum for dump file "/tmp/HEMANT_With_CheckSum.dmp" dump file set is consistent Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at Sat Dec 28 12:42:39 2024 elapsed 0 00:00:05 [oracle@localhost ~]$ [oracle@localhost ~]$ cd /tmp [oracle@localhost tmp]$ ls -l import.log -rw-r--r--. 1 oracle oinstall 600 Dec 28 12:42 import.log [oracle@localhost tmp]$ cat import.log ;;; Import: Release 23.0.0.0.0 - Limited Availability on Sat Dec 28 12:42:28 2024 Version 23.6.0.24.10 Copyright (c) 1982, 2024, Oracle and/or its affiliates. All rights reserved. ;;; Connected to: Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free Verifying dump file checksums Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded dump file set is complete verified checksum for dump file "/tmp/HEMANT_With_CheckSum.dmp" dump file set is consistent Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at Sat Dec 28 12:42:39 2024 elapsed 0 00:00:05 [oracle@localhost tmp]$
The VERIFY_ONLY parameter verifies the dump file (including its checksums) without actually importing it.
The GROUP BY column_position enhancement in 23ai
Oracle 23ai allows specifying a Column Position (or Alias) in the GROUP BY clause.
For backward compatibility, the new "group_by_position_enabled" parameter defaults to FALSE; it must be set to TRUE to use this feature.
SQL> show parameter group_by_position_enabled NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ group_by_position_enabled boolean FALSE SQL> alter session set group_by_position_enabled=TRUE; Session altered. SQL> SQL> select p.prod_name, t.day_name, sum(s.amount_sold) 2 from sh.products p, sh.times t, sh.sales s 3 where p.prod_id=s.prod_id 4 and t.time_id=s.time_id 5 and t.calendar_month_name = 'January' 6 group by 1, 2 7 order by 1, 2 8 / PROD_NAME DAY_NAME SUM(S.AMOUNT_SOLD) -------------------------------------------------- --------- ------------------ 11" Youth Field Master Glove Friday 4635.73 11" Youth Field Master Glove Monday 2903.62 11" Youth Field Master Glove Saturday 3636.85 11" Youth Field Master Glove Sunday 6602.18 11" Youth Field Master Glove Thursday 5696.37 11" Youth Field Master Glove Tuesday 2843.81 11" Youth Field Master Glove Wednesday 6072.04 11.5" Youth Triple Stripe Series Glove Friday 6695.84 11.5" Youth Triple Stripe Series Glove Monday 5436.28 11.5" Youth Triple Stripe Series Glove Saturday 5653.8 11.5" Youth Triple Stripe Series Glove Sunday 10909.86 ... ... multiple rows returned ... ... PROD_NAME DAY_NAME SUM(S.AMOUNT_SOLD) -------------------------------------------------- --------- ------------------ Wicket Keeper Gloves Thursday 1550.94 Wicket Keeper Gloves Tuesday 3049.62 Wicket Keeper Gloves Wednesday 2583.16 Wide Brim Hat Friday 189.28 Wide Brim Hat Monday 1656.35 Wide Brim Hat Saturday 1689.48 Wide Brim Hat Sunday 560.7 Wide Brim Hat Thursday 1088.44 Wide Brim Hat Tuesday 2855.67 Wide Brim Hat Wednesday 250.19 461 rows selected. SQL>
This helps developers who already use the Column Position in the ORDER BY clause, as they can now be consistent when writing the GROUP BY clause.
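A column Alias can also be used in the GROUP BY. A minimal sketch, assuming the SH sample schema is installed (as in the demo above); as far as I can tell, the parameter is required only for positions, not for aliases, but you can keep it set in the session to be safe :

select p.prod_name as product, sum(s.amount_sold) as total_sold
  from sh.products p, sh.sales s
 where p.prod_id = s.prod_id
 group by product     -- grouping by the SELECT-list alias
 order by product
/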
Using Oracle's Autonomous Health Framework to get an "Insight" into a RAC Cluster
I've posted a video demonstration on using Oracle's Autonomous Health Framework ("AHF") to get an "Insight" into a RAC Cluster.
AHF (Oracle Support Doc ID 2550798.1) is Oracle's current diagnostic tool which includes Orachk and TFA (Trace File Analyzer). Explore it.
Video on DataGuard Snapshot Standby
I have posted a new video on creating a "Snapshot Standby" database in DataGuard.
Typically, such a Standby can be used for "destructive testing" such as D.R. simulation with dummy transactions, application upgrades, or schema changes.
If your concern is that the Physical Standby will be unavailable for too long, note :
(1) Redo Shipping to the Standby can continue as ArchiveLogs (of course, this means the Standby will be lagging the Primary by the typical interval between Archive Log switches; you can monitor this lag with the query shown after this list)
(2) You can configure 2 Standbys of which one continues to be a Physical Standby with real time redo shipping and the other is periodically used to test schema changes / application upgrades and even for frequent D.R. testing
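For point (1), you can check how far the Standby is lagging. A minimal sketch, run on the Standby :

select name, value, time_computed
  from v$dataguard_stats
 where name in ('transport lag', 'apply lag')
/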
I have previously posted a few Blog posts on scenarios like Multiple Standbys, Standbys with Lag, Destructive testing in Standbys. See the list of posts here.
Video on DataGuard Switchover -- with RAC and Single Instance
I've posted a demonstration of DataGuard Switchover, using RAC and Single Instance as the Primary/Standby pair.
Fundamentally, there is no difference if either or both of the databases are RAC or Single Instance.
A Switchover is a graceful, No-Data-Loss operation : the Primary sends the "End-Of-Redo Marker" to the Standby, at which point the Reversal of Roles happens. Therefore, you can execute a Switchover between the two databases (servers / data centres / sites) multiple times without loss of data.
A Failover, on the other hand, can involve data loss, and the erstwhile Primary does not revert to a Standby role but must be recreated / refreshed as a Standby from the new Primary.
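A quick way to confirm the Role of each database before and after a Switchover is to query V$DATABASE on each side. A minimal sketch :

select database_role, switchover_status
  from v$database
/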
A few dgmgrl (DataGuard Broker) commands
I haven't created a video with commentary, but here are a few dgmgrl commands.
In the output, "RACDB" (or "racdb") is a 2-node RAC database (managed by ClusterWare) and "STDBY" (or "stdby") is a Single-Node non-RAC database not using Oracle Restart. Both are 19c (19.23).
Initial Setup :
[oracle@srv1 ~]$ dgmgrl sys DGMGRL for Linux: Release 19.0.0.0.0 - Production on Sat Jul 27 02:31:53 2024 Version 19.23.0.0.0 Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved. Welcome to DGMGRL, type "help" for information. Password: Connected to "RACDB" Connected as SYSDBA. DGMGRL> remove configuration; Removed configuration DGMGRL> create configuration racdb_dg as primary database is RACDB connect identifier is RACDB; Configuration "racdb_dg" created with primary database "racdb" DGMGRL> add database STDBY as connect identifier is STDBY maintained as physical; Database "stdby" added DGMGRL> show configuration; Configuration - racdb_dg Protection Mode: MaxPerformance Members: racdb - Primary database stdby - Physical standby database Fast-Start Failover: Disabled Configuration Status: DISABLED DGMGRL> enable configuration; Enabled. DGMGRL> show configuration; Configuration - racdb_dg Protection Mode: MaxPerformance Members: racdb - Primary database Warning: ORA-16905: The member was not enabled yet. stdby - Physical standby database Fast-Start Failover: Disabled Configuration Status: WARNING (status updated 177 seconds ago) DGMGRL> show configuration ; Configuration - racdb_dg Protection Mode: MaxPerformance Members: racdb - Primary database Warning: ORA-16905: The member was not enabled yet. stdby - Physical standby database Fast-Start Failover: Disabled Configuration Status: WARNING (status updated 182 seconds ago) DGMGRL> DGMGRL> show configuration ; Configuration - racdb_dg Protection Mode: MaxPerformance Members: racdb - Primary database stdby - Physical standby database Fast-Start Failover: Disabled Configuration Status: SUCCESS (status updated 44 seconds ago) DGMGRL> DGMGRL> show configuration lag; Configuration - racdb_dg Protection Mode: MaxPerformance Members: racdb - Primary database stdby - Physical standby database Transport Lag: 0 seconds (computed 0 seconds ago) Apply Lag: 0 seconds (computed 0 seconds ago) Fast-Start Failover: Disabled Configuration Status: SUCCESS (status updated 23 seconds ago) DGMGRL>In the initial setup, it takes time to actually enable the databases so we see the "ORA-16095: the member was not enabled yet" warning messages.
GMGRL> validate database racdb; Database Role: Primary database Ready for Switchover: Yes Managed by Clusterware: racdb: YES DGMGRL> validate database stdby; Database Role: Physical standby database Primary Database: racdb Ready for Switchover: Yes Ready for Failover: Yes (Primary Running) Flashback Database Status: racdb: On stdby: Off Capacity Information: Database Instances Threads racdb 2 2 stdby 1 2 Warning: the target standby has fewer instances than the primary database, this may impact application performance Managed by Clusterware: racdb: YES stdby: NO Current Configuration Log File Sizes: Thread # Smallest Online Redo Smallest Standby Redo Log File Size Log File Size (racdb) (stdby) 1 512 MBytes 100 MBytes 2 512 MBytes 100 MBytes DGMGRL>Here we see some information :
DGMGRL> show configuration verbose Configuration - racdb_dg Protection Mode: MaxPerformance Members: racdb - Primary database stdby - Physical standby database Properties: FastStartFailoverThreshold = '30' OperationTimeout = '30' TraceLevel = 'USER' FastStartFailoverLagLimit = '30' CommunicationTimeout = '180' ObserverReconnect = '0' FastStartFailoverLagGraceTime = '0' FastStartFailoverAutoReinstate = 'TRUE' FastStartFailoverPmyShutdown = 'TRUE' BystandersFollowRoleChange = 'ALL' ObserverOverride = 'FALSE' ExternalDestination1 = '' ExternalDestination2 = '' PrimaryLostWriteAction = 'CONTINUE' ConfigurationWideServiceName = 'RACDB_CFG' FastStartFailoverLagType = 'APPLY' Fast-Start Failover: Disabled Configuration Status: SUCCESS DGMGRL> DGMGRL> show database verbose racdb; Database - racdb Role: PRIMARY Intended State: TRANSPORT-ON Instance(s): RACDB1 RACDB2 Properties: DGConnectIdentifier = 'racdb' ObserverConnectIdentifier = '' FastStartFailoverTarget = '' PreferredObserverHosts = '' LogShipping = 'ON' RedoRoutes = '' LogXptMode = 'ASYNC' DelayMins = '0' Binding = 'optional' MaxFailure = '0' ReopenSecs = '300' NetTimeout = '30' RedoCompression = 'DISABLE' PreferredApplyInstance = '' ApplyInstanceTimeout = '0' ApplyLagThreshold = '30' TransportLagThreshold = '30' TransportDisconnectedThreshold = '30' ApplyParallel = 'AUTO' ApplyInstances = '0' StandbyFileManagement = '' ArchiveLagTarget = '0' LogArchiveMaxProcesses = '0' LogArchiveMinSucceedDest = '0' DataGuardSyncLatency = '0' LogArchiveTrace = '0' LogArchiveFormat = '' DbFileNameConvert = '' LogFileNameConvert = '' ArchiveLocation = '' AlternateLocation = '' StandbyArchiveLocation = '' StandbyAlternateLocation = '' InconsistentProperties = '(monitor)' InconsistentLogXptProps = '(monitor)' LogXptStatus = '(monitor)' SendQEntries = '(monitor)' RecvQEntries = '(monitor)' HostName(*) StaticConnectIdentifier(*) TopWaitEvents(*) SidName(*) (*) - Please check specific instance for the property value Log file locations(*): (*) - Check specific instance for log file locations. Database Status: SUCCESS DGMGRL> DGMGRL> show instance verbose "RACDB1" Instance 'RACDB1' of database 'racdb' PFILE: Properties: HostName = 'srv1' StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.91)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=RACDB_DGMGRL)(INSTANCE_NAME=RACDB1)(SERVER=DEDICATED)))' TopWaitEvents = '(monitor)' SidName = '(monitor)' Log file locations: Alert log : /u01/app/oracle/diag/rdbms/racdb/RACDB1/trace/alert_RACDB1.log Data Guard Broker log : /u01/app/oracle/diag/rdbms/racdb/RACDB1/trace/drcRACDB1.log Instance Status: SUCCESS DGMGRL>Here we see some more information :
DGMGRL> validate static connect identifier for racdb;
Oracle Clusterware on database "racdb" is available for database restart.

DGMGRL> validate static connect identifier for stdby;
Oracle Clusterware is not configured on database "stdby". Connecting to database "stdby" using static connect identifier "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=stdby)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=STDBY_DGMGRL)(INSTANCE_NAME=STDBY)(SERVER=DEDICATED)(STATIC_SERVICE=TRUE)))" ... Succeeded.
The static connect identifier allows for a connection to database "stdby".

For "racdb", being managed by ClusterWare, I had specified "RACDB" {which is the SCAN address} as the Connect Identifier (with "primary database is RACDB connect identifier is RACDB" in the first configuration command). For "stdby", not being managed by ClusterWare or Oracle Restart, a DGMGRL entry for the Listener is required in this manner :
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = STDBY)
      (SID_NAME = STDBY)
      (ORACLE_HOME = /u01/app/oracle/product/19.0.0/dbhome_1)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = STDBY_DGMGRL)
      (SID_NAME = STDBY)
      (ORACLE_HOME = /u01/app/oracle/product/19.0.0/dbhome_1)
      (ENVS="TNS_ADMIN=/u01/app/oracle/product/19.0.0/dbhome_1/network/admin")
    )
  )

So, if either your Primary or your Standby (or both) is not managed by ClusterWare or Oracle Restart, you need to have a DGMGRL entry for the Listener!
Video on OCR and OLR commands in RAC GI/ClusterWare
Last week I published a new video on OCR and OLR commands.
ocrcheck : Lists the locations of the OCR and checks for corruption (run as root to check for logical corruption as well)
ocrconfig -add DG Name (e.g. ocrconfig -add +DATA) : Adds a new copy of the OCR in the stated ASM DG
ocrconfig -delete DG Name : Deletes a copy of the OCR from the ASM DG
cat /etc/oracle/olr.loc : Shows the location of the OLR
ocrcheck -local : Checks the OLR
ocrconfig -showbackup : Shows the default location of OCR backups
ocrconfig -manualbackup : Create a manual backup of the OCR
(use asmcmd to copy the backup out from ASM to Filesystem)
ocrconfig -local -showbackuploc : Shows the location of OLR backups
ocrconfig -local -manualbackup : Create a manual backup of the OLR
ocrconfig -local -export : Create an Export backup of the OLR
Video on the Maximum Availability Protection Mode in Oracle DataGuard (with RAC)
I've published a new video demonstrating Maximum Availability Protection Mode in Oracle DataGuard.
Testing updated ORA-942 Error Message in 23ai
Oracle 23ai now has a long-requested fix to the ORA-942 "table or view does not exist" error message. Previously, the error message would not print out *which* table or view was missing -- this was particularly troublesome with very long SQL statements referencing multiple tables and views.
Here is a demo of the fixed behaviour in 23ai (followed later by the same code in 19c, 19.22) :
SQL> select banner from v$version; BANNER --------------------------------------------------------------------------------------------------------------------------------- Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free SQL> SQL> -- create the two tables SQL> drop table employees; drop table employees * ERROR at line 1: ORA-00942: table or view "HEMANT"."EMPLOYEES" does not exist Help: https://docs.oracle.com/error-help/db/ora-00942/ SQL> drop table departments; Table dropped. SQL> create table departments (dept_id number(12) primary key, dept_name varchar2(30)); Table created. SQL> create table employees (emp_id number(12) primary key, emp_name varchar2(54), dept_id number(12)); Table created. SQL> SQL> -- insert data SQL> insert into departments values (1,'Human Resources'); 1 row created. SQL> insert into employees values (1,'Somebody',1); 1 row created. SQL> commit; Commit complete. SQL> SQL> -- test query SQL> select d.dept_id, d.dept_name, e.emp_name 2 from departments d, employees e 3 where e.dept_id = d.dept_id 4 order by d.dept_id,e.emp_name 5 / DEPT_ID DEPT_NAME EMP_NAME ---------- ------------------------------ ------------------------------------------------------ 1 Human Resources Somebody SQL> SQL> -- drop a table SQL> drop table employees; Table dropped. SQL> SQL> -- RETEST query SQL> select d.dept_id, d.dept_name, e.emp_name 2 from departments d, employees e 3 where e.dept_id = d.dept_id 4 order by d.dept_id,e.emp_name 5 / from departments d, employees e * ERROR at line 2: ORA-00942: table or view "HEMANT"."EMPLOYEES" does not exist Help: https://docs.oracle.com/error-help/db/ora-00942/ SQL>
So, now the error message text for ORA-942 includes the name of the missing <schema>.<table_or_view>.
Here is the same code in 19c (19.22)
SQL> --select banner from v$version; SQL> -- use BANNER_FULL in 19c to get RU level -- 19.22 SQL> select banner_full from v$version; BANNER_FULL ------------------------------------------------------------------------------------------------------------------------------------ Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.22.0.0.0 SQL> SQL> -- create the two tables SQL> drop table employees; drop table employees * ERROR at line 1: ORA-00942: table or view does not exist SQL> drop table departments; Table dropped. SQL> create table departments (dept_id number(12) primary key, dept_name varchar2(30)); Table created. SQL> create table employees (emp_id number(12) primary key, emp_name varchar2(54), dept_id number(12)); Table created. SQL> SQL> -- insert data SQL> insert into departments values (1,'Human Resources'); 1 row created. SQL> insert into employees values (1,'Somebody',1); 1 row created. SQL> commit; Commit complete. SQL> SQL> -- test query SQL> select d.dept_id, d.dept_name, e.emp_name 2 from departments d, employees e 3 where e.dept_id = d.dept_id 4 order by d.dept_id,e.emp_name 5 / DEPT_ID DEPT_NAME EMP_NAME ---------- ------------------------------ ------------------------------------------------------ 1 Human Resources Somebody SQL> SQL> -- drop a table SQL> drop table employees; Table dropped. SQL> SQL> -- RETEST query SQL> select d.dept_id, d.dept_name, e.emp_name 2 from departments d, employees e 3 where e.dept_id = d.dept_id 4 order by d.dept_id,e.emp_name 5 / from departments d, employees e * ERROR at line 2: ORA-00942: table or view does not exist SQL>
So, up to 19c, ORA-942 would simply say "table or view does not exist". In my example, using sqlplus and a very short FROM list, the "*" does indicate where the error is -- but with other tools / clients, and with long FROM lists or sub-queries having their own FROM clauses, it may well be difficult to identify the missing table or view unless you trace the execution or run a test query against each table or view.
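One workaround on pre-23ai releases is to check each referenced name against the data dictionary. A minimal sketch, using the object names from my demo :

-- which of the referenced names are actually visible to the current user ?
select owner, object_name, object_type
  from all_objects
 where object_name in ('DEPARTMENTS', 'EMPLOYEES')
   and object_type in ('TABLE', 'VIEW', 'SYNONYM')
/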
Testing Open a PDB as a Hybrid Read Only PDB in 23ai
Oracle 23ai now allows the DBA to open a PDB in Hybrid Read Only mode. This mode allows Common Users (e.g. SYS or SYSTEM or others defined as Common Users from the Root CDB) to access the PDB in Read-Write mode while local users (i.e. non-Common Users) can access the PDB only in Read-Only mode.
This facilitates live maintenance (e.g. patching or changes to the database / schema) being executed by a DBA or Common User while "normal" local users (e.g. Application Accounts) can still query the database.
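In summary, the statements involved are (a full demo follows) :

alter pluggable database freepdb1 close;
alter pluggable database freepdb1 open hybrid read only;
-- ... maintenance by Common Users while local users have Read-Only access ...
alter pluggable database freepdb1 close;
alter pluggable database freepdb1 open;   -- back to the default READ WRITE mode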
This is a quick demo :
SQL> -- open the PDB as "normal" Read Write SQL> connect / as sysdba Connected. SQL> -- Version 23ai Free Edition SQL> select banner from v$version; BANNER -------------------------------------------------------------------------------- Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free SQL> SQL> alter pluggable database freepdb1 close; Pluggable database altered. SQL> alter pluggable database freepdb1 open; Pluggable database altered. SQL> SQL> -- first demo a normal user in the PDB SQL> connect hemant/hemant@freepdb1 Connected. SQL> create table x_test (id number , data varchar2(15)); Table created. SQL> insert into x_test values (1,'First'); 1 row created. SQL> commit; Commit complete. SQL> select * from hemant.x_test order by 1; ID DATA ---------- --------------- 1 First SQL> SQL> SQL> -- now close and open the PDB in Hybrid Read Only mode SQL> connect / as sysdba Connected. SQL> alter pluggable database freepdb1 close; Pluggable database altered. SQL> alter pluggable database freepdb1 open hybrid read only ; Pluggable database altered. SQL> -- test that SYSTEM (a Common User) can manipulate data -- e.g. INSERT SQL> connect system/manager@freepdb1 Connected. SQL> insert into hemant.x_test values(2,'System'); 1 row created. SQL> commit; Commit complete. SQL> select * from hemant.x_test order by 1; ID DATA ---------- --------------- 1 First 2 System SQL> -- even grant DBA to hemant SQL> grant dba to hemant; Grant succeeded. SQL> SQL> -- test hemant a non-common user SQL> -- see if the user can execute INSERT and SELECT SQL> connect hemant/hemant@freepdb1 Connected. SQL> insert into hemant.x_test values(3,'Third'); insert into hemant.x_test values(3,'Third') * ERROR at line 1: ORA-16000: Attempting to modify database or pluggable database that is open for read-only access. Help: https://docs.oracle.com/error-help/db/ora-16000/ SQL> commit; Commit complete. SQL> select * from hemant.x_test order by 1; ID DATA ---------- --------------- 1 First 2 System SQL> -- so SELECT works, but not INSERT SQL> SQL> SQL> -- reopen PDB as normal "Read Write" SQL> connect / as sysdba Connected. SQL> alter pluggable database freepdb1 close; Pluggable database altered. SQL> alter pluggable database freepdb1 open; Pluggable database altered. SQL> connect hemant/hemant@freepdb1 Connected. SQL> insert into hemant.x_test values(4,'Fourth'); 1 row created. SQL> commit; Commit complete. SQL> select * from hemant.x_test order by 1; ID DATA ---------- --------------- 1 First 2 System 4 Fourth SQL>
Thus, when the PDB was first opened in "normal" mode (i.e. the default OPEN mode, OPEN READ WRITE), the local user "HEMANT" could execute DDL and DML (create the table and Insert). When the PDB was reopened in Hybrid Read Only mode, the Common User "SYSTEM" could still INSERT (and even GRANT), while "HEMANT" could only SELECT. Once the PDB was reopened in normal READ WRITE mode, "HEMANT" could INSERT again.
Testing RENAME LOB (Segment) in 23ai
SQL> -- Version 23ai Free Edition SQL> select banner from v$version; BANNER --------------------------------------------------------------------------------------------------------------------------------- Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free SQL> SQL> SQL> DROP TABLE my_lob_objects purge; Table dropped. SQL> SQL> -- create the table with a LOB, column name "c", lob segment name also "c" SQL> CREATE TABLE my_lob_objects (object_id NUMBER primary key, c CLOB) 2 lob (c) STORE AS SECUREFILE c 3 ( TABLESPACE users 4 DISABLE STORAGE IN ROW 5 NOCACHE LOGGING 6 RETENTION AUTO 7 COMPRESS 8 ); Table created. SQL> SQL> -- query the data dictionary SQL> select table_name, column_name, segment_name, tablespace_name from user_lobs; TABLE_NAME COLUMN_NAME SEGMENT_NAME TABLESPACE_NAME ---------------- ---------------- -------------------- ---------------- MY_LOB_OBJECTS C C USERS SQL> SQL> -- insert three rows SQL> insert into my_lob_objects values (1, dbms_random.string('X',100)); 1 row created. SQL> insert into my_lob_objects values (2, dbms_random.string('X',100)); 1 row created. SQL> insert into my_lob_objects values (3, dbms_random.string('X',100)); 1 row created. SQL> SQL> -- verify the column name when querying the table SQL> select * from my_lob_objects; OBJECT_ID C ---------- -------------------------------------------------------------------------------- 1 IBGOGKA9QKK56O746IJL3C56ZK9LEO0G1W4LWBN11T8EWCFTTLUW9TPIVQAU8BPSGPQ2ZV57BS0ZPK0S 2 7K04DVVYDQB1URIQ1OQ2458M8ZOURHWW50XIZDMVGAZH6XVN2KKN4PIGKPY5CSVIQ9KU45LHZPJB33AA 3 2G5194Z7TSR3XG0K698G587AOZOJ8VN6KFCTCH3074TNCOWCSMOPRJLRGTLIZMDD73XAY4KDD14IW4MG SQL> SQL> -- now rename the column SQL> alter table my_lob_objects rename column c to clob_col; Table altered. SQL> SQL> -- query the data dictionary SQL> select table_name, column_name, segment_name, tablespace_name from user_lobs; TABLE_NAME COLUMN_NAME SEGMENT_NAME TABLESPACE_NAME ---------------- ---------------- -------------------- ---------------- MY_LOB_OBJECTS CLOB_COL C USERS SQL> SQL> -- now rename the lob segment SQL> alter table my_lob_objects rename lob(clob_col) c to my_lob_objects_clob; Table altered. SQL> SQL> -- query the data dictionary SQL> select table_name, column_name, segment_name, tablespace_name from user_lobs; TABLE_NAME COLUMN_NAME SEGMENT_NAME TABLESPACE_NAME ---------------- ---------------- -------------------- ---------------- MY_LOB_OBJECTS CLOB_COL MY_LOB_OBJECTS_CLOB USERS SQL> SQL> -- verify the column name when querying the table SQL> select * from my_lob_objects; OBJECT_ID CLOB_COL ---------- -------------------------------------------------------------------------------- 1 IBGOGKA9QKK56O746IJL3C56ZK9LEO0G1W4LWBN11T8EWCFTTLUW9TPIVQAU8BPSGPQ2ZV57BS0ZPK0S 2 7K04DVVYDQB1URIQ1OQ2458M8ZOURHWW50XIZDMVGAZH6XVN2KKN4PIGKPY5CSVIQ9KU45LHZPJB33AA 3 2G5194Z7TSR3XG0K698G587AOZOJ8VN6KFCTCH3074TNCOWCSMOPRJLRGTLIZMDD73XAY4KDD14IW4MG SQL> SQL> -- identify the segment SQL> select tablespace_name, segment_name, segment_type, bytes/1024 Size_KB 2 from user_segments 3 where segment_name = 'MY_LOB_OBJECTS_CLOB' 4 / TABLESPACE_NAME SEGMENT_NAME SEGMENT_TYPE SIZE_KB ---------------- -------------------- ------------------ --------- USERS MY_LOB_OBJECTS_CLOB LOBSEGMENT 2304 SQL>
First I create a Table where the Column and the LOB (Segment) are both called "C". (In recent versions, SECUREFILE is the default and is recommended for LOBs, e.g. for the COMPRESS, DEDUPLICATION and ENCRYPTION advantages.) I then rename the Column, and after that rename the LOB Segment itself, verifying each step against USER_LOBS and finally identifying the renamed segment in USER_SEGMENTS.
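For quick reference, the two RENAME statements from the demo above are :

alter table my_lob_objects rename column c to clob_col;                     -- rename the column
alter table my_lob_objects rename lob(clob_col) c to my_lob_objects_clob;   -- rename the LOB segment (the 23ai enhancement)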
Testing DEFAULT ON NULL FOR UPDATE in 23ai
Testing a new feature in 23ai that allows you to define a DEFAULT value for a column if a user/front-end/application sends a NULL :
[oracle@oel9 ~]$ sqlplus hemant/hemant@freepdb1 SQL*Plus: Release 23.0.0.0.0 - Production on Thu May 9 14:54:25 2024 Version 23.4.0.24.05 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free Version 23.4.0.24.05 SQL> create table my_new_employees( 2 employee_id number(12) primary key, 3 employee_name varchar2(48), 4 department_id number(12) 5 ) 6 / Table created. SQL> SQL> insert into my_new_employees 2 values (1,'Hemant',NULL) 3 / 1 row created. SQL> SQL> select * from my_new_employees; EMPLOYEE_ID EMPLOYEE_NAME DEPARTMENT_ID ----------- ------------------------------------------------ ------------- 1 Hemant SQL> SQL> update my_new_employees 2 set department_id=100 -- setting a non-NULL value 3 where employee_id=1 4 / 1 row updated. SQL> select * from my_new_employees; EMPLOYEE_ID EMPLOYEE_NAME DEPARTMENT_ID ----------- ------------------------------------------------ ------------- 1 Hemant 100 SQL> SQL> alter table my_new_employees 2 modify (department_id default on null for insert and update 512); Table altered. SQL> insert into my_new_employees 2 values (2,'Larry'); -- I am not specifying a value for DEPARTMENT_ID insert into my_new_employees * ERROR at line 1: ORA-00947: not enough values Help: https://docs.oracle.com/error-help/db/ora-00947/ SQL> insert into my_new_employees 2 values(2,'Larry', NULL); -- I explicitly specify NULL for DEPARTMENT_ID 1 row created. SQL> select * from my_new_employees order by 1; EMPLOYEE_ID EMPLOYEE_NAME DEPARTMENT_ID ----------- ------------------------------------------------ ------------- 1 Hemant 100 2 Larry 512 -- it got set to 512 ON INSERT SQL> SQL> update my_new_employees 2 set employee_name = 'Hemant Chitale', department_id=NULL -- I explicitly specify NULL for DEPARMTENT_ID 3 where employee_id=1 4 / 1 row updated. SQL> select * from my_new_employees order by 1; EMPLOYEE_ID EMPLOYEE_NAME DEPARTMENT_ID ----------- ------------------------------------------------ ------------- 1 Hemant Chitale 512 -- it got set to 512 ON UPDATE 2 Larry 512 SQL> SQL> commit; Commit complete. SQL> select * from my_new_employees order by 1; EMPLOYEE_ID EMPLOYEE_NAME DEPARTMENT_ID ----------- ------------------------------------------------ ------------- 1 Hemant Chitale 512 2 Larry 512 SQL>
So, now the column gets a proper DEFAULT value even when an UPDATE statement sends a NULL. This allows the developer to avoid having to write a Front-End Value Check or a Database Table Trigger to convert an incoming NULL to a DEFAULT value.
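You can also confirm what has been recorded in the data dictionary. A minimal sketch (DATA_DEFAULT and DEFAULT_ON_NULL have existed in USER_TAB_COLUMNS since 12c; 23ai adds a further flag for the UPDATE case, which I have not listed here) :

select column_name, data_default, default_on_null
  from user_tab_columns
 where table_name = 'MY_NEW_EMPLOYEES'
 order by column_id
/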
Testing DB_FLASHBACK_LOG_DEST in 23ai
Pre-23ai, Database Flashback Logs were always created in the DB_RECOVERY_FILE_DEST location (i.e. the well-known "Fast Recovery Area", also known as the "FRA").
However, these would share space with ArchiveLogs (if you've configured ArchiveLogs to the same location) and RMAN Backups (if you aren't specifically writing RMAN Backups to a different location).
23ai introduced the DB_FLASHBACK_LOG_DEST parameter (and the corresponding DB_FLASHBACK_LOG_DEST_SIZE) to allow you to separate your Database Flashback Logs into a dedicated location.
The 23ai New Features documentation has this to say :
In previous releases, you could store flashback database logs only in the fast recovery area. Now you can optionally designate a separate location for flashback logging. For example, if you have write-intensive database workloads, then flashback database logging can slow down the database if the fast recovery area is not fast enough. In this scenario, you can now choose to write the flashback logs to faster disks. Using a separate destination also eliminates the manual administration to manage the free space in the fast recovery area.
Managing flashback database logs outside the fast recovery area lowers the operational costs related to space management and guarantees the best performance for workloads that are typically impacted by flashback logging on traditional storage.
And it provides a link to the documentation on the parameter.
You might think that DB_FLASHBACK_LOG_DEST is now completely independent of DB_RECOVERY_FILE_DEST.
Here is my test run where I configured DB_FLASHBACK_LOG_DEST without configuring DB_RECOVERY_FILE_DEST :
h-4.4$ cd /opt/oracle sh-4.4$ mkdir FBL sh-4.4$ mkdir FRA sh-4.4$ sqlplus / as sysdba SQL*Plus: Release 23.0.0.0.0 - Production on Sun May 5 10:26:26 2024 Version 23.4.0.24.05 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free Version 23.4.0.24.05 SQL> shutdown immediate; Database closed. Database dismounted. ORACLE instance shut down. SQL> startup mount ORACLE instance started. Total System Global Area 1603726344 bytes Fixed Size 5360648 bytes Variable Size 402653184 bytes Database Buffers 1191182336 bytes Redo Buffers 4530176 bytes Database mounted. SQL> show parameter spfile; NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ spfile string /opt/oracle/product/23ai/dbhom eFree/dbs/spfileFREE.ora SQL> SQL> alter database archivelog; Database altered. SQL> alter database open; Database altered. SQL> show parameter db_flashback NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ db_flashback_log_dest string db_flashback_log_dest_size big integer 0 SQL> alter system set db_flashback_log_dest_size=10G; System altered. SQL> alter system set db_flashback_log_dest='/opt/oracle/FBL'; System altered. SQL> create restore point MY_FIRST_RP ; Restore point created. SQL> alter system archive log current; System altered. SQL> SQL> create table x as select * from cdb_objects; Table created. SQL> insert into x select * from x; 141420 rows created. SQL> delete x; 282840 rows deleted. SQL> commit; Commit complete. SQL> alter system archive log current; System altered. SQL> SQL> alter database flashback on; -- only here I enable Flashback Database altered. ============================================== alert log messages : 2024-05-05T10:38:35.262274+00:00 alter database flashback on 2024-05-05T10:38:35.423698+00:00 Allocating 8388608 bytes in shared pool for flashback generation buffer. Allocated 8388608 bytes in shared pool for flashback generation buffer Flashback Database Enabled at SCN 3124894 =============================================== SQL> create restore point MY_FIRST_RP; -- testing if I can create another RP with the same name create restore point MY_FIRST_RP * ERROR at line 1: ORA-38778: Restore point 'MY_FIRST_RP' already exists. Help: https://docs.oracle.com/error-help/db/ora-38778/ SQL> drop restore point MY_FIRST_RP; Restore point dropped. SQL> create restore point MY_FIRST_RP; Restore point created. SQL> drop table x; Table dropped. SQL> create table x as select * from cdb_objects; Table created. SQL> SQL> alter system archive log current; System altered. SQL> delete x; 141420 rows deleted. SQL> insert into x select * from cdb_objects; 141421 rows created. SQL> commit; Commit complete. SQL> alter system archive log current; System altered. 
SQL> SQL> select substr(name,1,32), scn, time from v$restore_point; -- identify the RP that has been created SUBSTR(NAME,1,32) -------------------------------------------------------------------------------------------------------------------------------- SCN TIME ---------- --------------------------------------------------------------------------- MY_FIRST_RP 3124955 05-MAY-24 10.39.30.000000000 AM SQL> select * from v$flashback_database_log; -- identify the FBDB Logs Size OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE CON_ID -------------------- --------- ---------------- -------------- ------------------------ ---------- 3124893 05-MAY-24 1440 419430400 0 0 SQL> select * from v$flashback_log_dest; -- identify the FB Log Dest (why isn't the view named V$FLASHBACK_DATABASE_LOG_DEST ?) NAME ------------------------------------------------------------------------------------------------------------------------------------ SPACE_LIMIT SPACE_USED NUMBER_OF_FILES CON_ID ----------- ---------- --------------- ---------- /opt/oracle/FBL 1.0737E+10 419430400 2 0 SQL> SQL> !sh sh-4.4$ cd /opt/oracle/FBL sh-4.4$ du -sh * 401M FREE sh-4.4$ cd FREE sh-4.4$ ls flashback sh-4.4$ cd flashback sh-4.4$ ls -l total 409620 -rw-r----- 1 oracle oinstall 209723392 May 5 10:41 o1_mf_m3grfc8t_.flb -rw-r----- 1 oracle oinstall 209723392 May 5 10:38 o1_mf_m3grfg1v_.flb sh-4.4$ sh-4.4$ cd $ORACLE_HOME/dbs sh-4.4$ ls -l arch1* -rw-r----- 1 oracle oinstall 98164736 May 5 10:31 arch1_2_1167168121.dbf -rw-r----- 1 oracle oinstall 106480640 May 5 10:33 arch1_3_1167168121.dbf -rw-r----- 1 oracle oinstall 37506048 May 5 10:40 arch1_4_1167168121.dbf -rw-r----- 1 oracle oinstall 52515840 May 5 10:40 arch1_5_1167168121.dbf sh-4.4$ sh-4.4$ exit exit SQL> select count(*) from x; COUNT(*) ---------- 141421 SQL> shutdown immediate; Database closed. Database dismounted. ORACLE instance shut down. SQL> SQL> startup mount; ORACLE instance started. Total System Global Area 1603726344 bytes Fixed Size 5360648 bytes Variable Size 419430400 bytes Database Buffers 1174405120 bytes Redo Buffers 4530176 bytes Database mounted. SQL> flashback database to restore point MY_FIRST_RP; -- try to Flashback the Database flashback database to restore point MY_FIRST_RP * ERROR at line 1: ORA-38760: This database instance failed to turn on flashback database Help: https://docs.oracle.com/error-help/db/ora-38760/ ============================================ alert log messages : 2024-05-05T10:45:28.380285+00:00 Successful mount of redo thread 1, with mount id 1440201864 2024-05-05T10:45:28.380506+00:00 Allocating 8388608 bytes in shared pool for flashback generation buffer. Allocated 5807328 bytes in shared pool for flashback generation buffer RVWR could not begin generation of flashback log data because DB_RECOVERY_FILE_DEST is not set. 2024-05-05T10:45:28.392865+00:00 Errors in file /opt/oracle/diag/rdbms/free/FREE/trace/FREE_rvwr_2343.trc: ORA-38776: cannot begin flashback generation - recovery area is disabled 2024-05-05T10:45:28.392899+00:00 WARNING: Cannot open the flashback thread for this instance due to the above error. WARNING: Flashback thread open failed - to resolve this, either correct the reported error or turn off database flashbac k. 
2024-05-05T10:45:28.393060+00:00 Database mounted in Exclusive Mode Lost write protection mode set to "auto" Completed: ALTER DATABASE MOUNT 2024-05-05T10:46:04.458087+00:00 flashback database to restore point MY_FIRST_RP ORA-38760 signalled during: flashback database to restore point MY_FIRST_RP... 2024-05-05T10:50:43.887137+00:00 ============================================== Explanation of the Error : =========================== 38776, 00000, "cannot begin flashback generation - recovery area is disabled" // *Cause: During a database mount, the RVWR process discovered that the // recovery area was disabled. DB_RECOVERY_FILE_DEST must have // been set null or removed from the INIT.ORA file while the database // was unmounted. // *Action: Flashback database requires the recovery area to be enabled. // Either enable the recovery area by setting the // DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE initialization // parameters, or turn off flashback database with the // ALTER DATABASE FLASHBACK OFF command.
So, Oracle still requires DB_RECOVERY_FILE_DEST (the Fast Recovery Area) to be configured for Flashback Database to actually work : RVWR refuses to generate Flashback Logs when the recovery area is disabled, even though the Flashback Logs themselves are written to DB_FLASHBACK_LOG_DEST.
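If I wanted Flashback Database to actually work in this configuration, I would also have to configure the Recovery Area. A minimal sketch (the size here is only an illustrative value; the /opt/oracle/FRA directory was created at the start of the test run) :

alter system set db_recovery_file_dest_size = 20G;
alter system set db_recovery_file_dest = '/opt/oracle/FRA';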
Video on monitoring (a single instance) DataGuard Standby for a RAC (2-node) Cluster
I've just uploaded a new video on monitoring DataGuard where the Standby is a Single Instance environment for a 2-node RAC Cluster.
The scripts used in the demo are in this ZIP (script files with extension TXT).
Video on Client Side Failover in RAC
Video on Host Names, SCAN and IP Addresses
Grid Infrastructure --- OCR and Voting Disks Location at Installation
In Oracle Grid Infrastructure, the OCR (Oracle Cluster Registry) and Voting "Disks" must be on Shared Storage accessible by all the nodes of the Cluster. Typically, these are on ASM.
In ASM, a DiskGroup is created for the disks that hold the OCR.
Normally, an ASM DiskGroup may use External Redundancy (Mirroring or other protection against Physical Disk or LUN failure is provided by the underlying Storage) or Normal Redundancy (Two-Way Mirroring, i.e. two Disks or LUN devices) or High Redundancy (Three-Way Mirroring with three Disks).
However, for the OCR and Voting "Disks" (i.e. Voting Files), Normal Redundancy requires three Disks or LUN devices, on which three Voting Files and one OCR (a Primary and a Secondary copy) are created. High Redundancy requires five Disks or LUN devices, on which five Voting Files and one OCR (with one Primary and two Secondary copies) are created.
In Test or Lab environments, you might have created your OCR/Vote DiskGroup on ASM storage with External Redundancy so as to not have to provision 3 or 5 disks.
However, in the 19c Lab environment with 2 VirtualBox VMs that I recently built on my Home PC, I created 5 ASM Disks of 2GB each (using ASMLib instead of udev persistent naming) to hold the OCR + VOTE DiskGroup, and selected High Redundancy for the resulting DiskGroup.
This is the Installer Screen :
This is the subsequent output from running root.sh from the Grid ORACLE_HOME towards the end of the installation :
[datetime] CLSRSC-482: Running command: '/u01/app/grid/product/19.3/bin/ocrconfig -upgrade grid grid'
CRS-4256: Updating the profile
Successful addition of voting disk 6c3ea5fbf0254fd5bfd489fc5c674409.
Successful addition of voting disk ff3b9da031064fccbfab4b57933f12e1.
Successful addition of voting disk 44e50015bcf24f7cbfc1b9348fdbe568.
Successful addition of voting disk de64da366c164f5cbfba2761df5948d5.
Successful addition of voting disk 4485ff5940384f85bf524a81090c6bd8.
Successfully replaced voting disk group with +OCR_VOTE.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 6c3ea5fbf0254fd5bfd489fc5c674409 (/dev/oracleasm/disks/OCR_VOTE_DISK_1) [OCR_VOTE]
2. ONLINE ff3b9da031064fccbfab4b57933f12e1 (/dev/oracleasm/disks/OCR_VOTE_DISK_2) [OCR_VOTE]
3. ONLINE 44e50015bcf24f7cbfc1b9348fdbe568 (/dev/oracleasm/disks/OCR_VOTE_DISK_3) [OCR_VOTE]
4. ONLINE de64da366c164f5cbfba2761df5948d5 (/dev/oracleasm/disks/OCR_VOTE_DISK_4) [OCR_VOTE]
5. ONLINE 4485ff5940384f85bf524a81090c6bd8 (/dev/oracleasm/disks/OCR_VOTE_DISK_5) [OCR_VOTE]
Located 5 voting disk(s).
Thus it did create 5 Voting "Disks" (Voting Files).
After the installation was completed, I verified this again
from the first node "srv1":
and from the second node "srv2" :
Note : Whether I create the DiskGroup with Normal or High Redundancy, it will still show only 1 OCR, because there is only 1 Primary OCR location (Normal or High Redundancy will automatically create 1 or 2 Secondary OCR copies respectively).
It is possible to add another location for the OCR by running "ocrconfig -add +FRA" as root (here I add it to the FRA DiskGroup).
Furthermore, each node of the Cluster has its own Local Registry (the OLR, Oracle Local Registry) :
If you are worried about Failure Groups for the OCR_VOTE DiskGroup, you can see that the FailureGroups are automatically created for this High Redundancy DiskGroup :
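A minimal sketch of how to list the Disks and their Failure Groups, when connected to the ASM instance :

select group_number, name, failgroup, path
  from v$asm_disk
 order by group_number, failgroup, name
/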
Installing and Running DBSAT on 21c
DBSAT is Oracle's "Database Security Assessment Tool" that you can get from Oracle Support Document "Oracle Database Security Assessment Tool (DBSAT) (Doc ID 2138254.1)" .
This is the User Guide for the current release (3.1).
It runs checks against "proven Oracle Database Security best practices, CIS benchmark recommendations and STIG rules".
See Oracle Support Document "Does DBSAT Scan for all of the STIG And CIS Benchmark Controls? (Doc ID 2651827.1)" for a disclaimer.
Here I demonstrate a quick installation :
[oracle@node2 ~]$ cd /u01/app/oracle
[oracle@node2 oracle]$ cd DB*
[oracle@node2 DBSAT_Installer]$ pwd
/u01/app/oracle/DBSAT_Installer
[oracle@node2 DBSAT_Installer]$ ls -l
total 45180
-rwxr-x---. 1 oracle dba 46264143 Mar  5 22:15 DBSAT.zip
[oracle@node2 DBSAT_Installer]$ which unzip
/bin/unzip
[oracle@node2 DBSAT_Installer]$
[oracle@node2 DBSAT_Installer]$ unzip DBSAT.zip
Archive:  DBSAT.zip
  inflating: dbsat
  inflating: dbsat.bat
  inflating: sat_collector.sql
  inflating: sa.jar
  inflating: jython-standalone-2.7.3.jar
  inflating: xlsxwriter/app.py
  inflating: xlsxwriter/chart_area.py
  inflating: xlsxwriter/chart_bar.py
  inflating: xlsxwriter/chart_column.py
....
.... deleted a few lines of output of the unzip command
  inflating: Discover/conf/sensitive_en.ini
  inflating: Discover/conf/sensitive_es.ini
  inflating: Discover/conf/sensitive_de.ini
  inflating: Discover/conf/sensitive_pt.ini
  inflating: Discover/conf/sensitive_it.ini
  inflating: Discover/conf/sensitive_fr.ini
  inflating: Discover/conf/sensitive_nl.ini
  inflating: Discover/conf/sensitive_el.ini
[oracle@node2 DBSAT_Installer]$ --- create the default script from the User Guide
[oracle@node2 DBSAT_Installer]$ cat > DBSAT_User.sql
create user dbsat_user identified by dbsat_user;
--If Database Vault is enabled, connect as DV_ACCTMGR to run this command
grant create session to dbsat_user;
grant select_catalog_role to dbsat_user;
grant select on sys.registry$history to dbsat_user;
grant read on sys.dba_audit_mgmt_config_params to dbsat_user;
grant select on sys.dba_users_with_defpwd to dbsat_user;
grant read on sys.dba_credentials to dbsat_user;
grant execute on sys.dbms_sql to dbsat_user;
grant audit_viewer to dbsat_user; // 12c and later
grant capture_admin to dbsat_user;// 12c and later covers sys.dba_priv_captures, sys.priv_capture$, sys.capture_run_log$
[oracle@node2 DBSAT_Installer]$ --- verify the script
[oracle@node2 DBSAT_Installer]$ cat DBSAT_User.sql
create user dbsat_user identified by dbsat_user;
--If Database Vault is enabled, connect as DV_ACCTMGR to run this command
grant create session to dbsat_user;
grant select_catalog_role to dbsat_user;
grant select on sys.registry$history to dbsat_user;
grant read on sys.dba_audit_mgmt_config_params to dbsat_user;
grant select on sys.dba_users_with_defpwd to dbsat_user;
grant read on sys.dba_credentials to dbsat_user;
grant execute on sys.dbms_sql to dbsat_user;
grant audit_viewer to dbsat_user; // 12c and later
grant capture_admin to dbsat_user;// 12c and later covers sys.dba_priv_captures, sys.priv_capture$, sys.capture_run_log$
[oracle@node2 DBSAT_Installer]$ -- I then create this user in my custom PDB
[oracle@node2 DBSAT_Installer]$ sqlplus / as sysdba

SQL*Plus: Release 21.0.0.0.0 - Production on Tue Mar 5 22:30:29 2024
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

SQL> alter session set container=HEMANTPDB;

Session altered.

SQL> @DBSAT_User.sql

User created.

Grant succeeded.

Grant succeeded.

Grant succeeded.

Grant succeeded.

Grant succeeded.

Grant succeeded.

Grant succeeded.

SQL>
I am now ready to run the Collector. (My first attempt below omits the output file name, so dbsat simply prints its usage; the second attempt supplies "hemantpdb_DBSAT_Report" as the output file.)
[oracle@node2 DBSAT_Installer]$ ./dbsat collect dbsat_user/dbsat_user@hemantpdb

Database Security Assessment Tool version 3.1 (Jan 2024)

This tool is intended to assist you in securing your Oracle database system. You are solely responsible for your system and the effect and results of the execution of this tool (including, without limitation, any damage or data loss). Further, the output generated by this tool may include potentially sensitive system configuration data and information that could be used by a skilled attacker to penetrate your system. You are solely responsible for ensuring that the output of this tool, including any generated reports, is handled in accordance with your company's policies.

Usage: dbsat collect [ -n ] <database_connect_string> <output_file>
       dbsat report [ -a ] [ -n ] [ -g ] [ -x <section> ] [ -u <user> ] <input_file>
       dbsat discover [ -n ] -c <config_file> <output_file>

Options:
  -a  Report with all user accounts, including locked and schema-only, Oracle-supplied users
  -n  No encryption for output
  -g  Show all grants including Common Grants in a Pluggable Database
  -x  Specify sections to exclude from report (may be repeated for multiple sections)
  -u  Specify users to exclude from report
  -c  Configuration file for discoverer

[oracle@node2 DBSAT_Installer]$ ./dbsat collect dbsat_user/dbsat_user@hemantpdb hemantpdb_DBSAT_Report

Database Security Assessment Tool version 3.1 (Jan 2024)

This tool is intended to assist you in securing your Oracle database system. You are solely responsible for your system and the effect and results of the execution of this tool (including, without limitation, any damage or data loss). Further, the output generated by this tool may include potentially sensitive system configuration data and information that could be used by a skilled attacker to penetrate your system. You are solely responsible for ensuring that the output of this tool, including any generated reports, is handled in accordance with your company's policies.

Connecting to the target Oracle database...

SQL*Plus: Release 21.0.0.0.0 - Production on Tue Mar 5 22:34:39 2024
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

Setup complete.
SQL queries complete.
/bin/cat: /u01/app/oracle/product/21.3.0.0/dbhome_1/network/admin/sqlnet.ora: No such file or directory
Warning: Exit status 256 from OS rule: sqlnet.ora
/bin/ls: cannot access /u01/app/oracle/product/21.3.0.0/dbhome_1/network/admin/sqlnet.ora: No such file or directory
Warning: Exit status 512 from OS rule: ls_sqlnet.ora
/bin/cat: /u01/app/oracle/product/21.3.0.0/dbhome_1/network/admin/listener.ora: No such file or directory
Warning: Exit status 256 from OS rule: listener.ora
/bin/ls: cannot access /u01/app/oracle/product/21.3.0.0/dbhome_1/network/admin/listener.ora: No such file or directory
Warning: Exit status 512 from OS rule: ls_listener.ora
Warning: Exit status 256 from OS rule: dbcs_status
/bin/cat: /u01/app/oracle/product/21.3.0.0/dbhome_1/network/admin/cman.ora: No such file or directory
Warning: Exit status 256 from OS rule: cman.ora
/bin/cat: /u01/app/oracle/product/21.3.0.0/dbhome_1/ldap/admin/fips.ora: No such file or directory
Warning: Exit status 256 from OS rule: fips1.ora
/bin/cat: /fips.ora: No such file or directory
Warning: Exit status 256 from OS rule: fips2.ora
/bin/ls: cannot access /diag: No such file or directory
Warning: Exit status 512 from OS rule: diag_dest_base
/bin/ls: cannot access /u01/app/oracle/product/21.3.0.0/dbhome_1/rdbms/log/diag: No such file or directory
Warning: Exit status 512 from OS rule: diag_dest_home
OS commands complete.
Disconnected from Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

DBSAT Collector completed successfully.

Calling /u01/app/oracle/product/21.3.0.0/dbhome_1/bin/zip to encrypt hemantpdb_DBSAT_Report.json...

Enter password:
Verify password:
  adding: hemantpdb_DBSAT_Report.json (deflated 86%)
zip completed successfully.
[oracle@node2 DBSAT_Installer]$
So, apparently, the Collector's OS checks assume the old convention of the network configuration files being under ORACLE_HOME; the first run, made without ORACLE_BASE set in my shell, also could not resolve the diagnostic destination base.
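As an aside, in a Grid Infrastructure setup the listener configuration typically lives under the Grid home rather than the database ORACLE_HOME, which would explain why nothing is found there; a quick check along these lines (a sketch, assuming the Grid home /u01/app/21.3.0.0/grid that appears in the PATH setting further below):

ls -l /u01/app/21.3.0.0/grid/network/admin/*.ora 2>/dev/null    # listener.ora / sqlnet.ora, if any, under the Grid home
lsnrctl status | grep "Listener Parameter File"                 # confirms which listener.ora the running listener uses

Also, the absence of ORACLE_BASE is consistent with the "/diag" path tried by the diag_dest_base rule above; setting ORACLE_BASE before the next run (below) makes that particular warning go away.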
[oracle@node2 DBSAT_Installer]$ rm hemantpdb_DBSAT_Report.json
[oracle@node2 DBSAT_Installer]$ ORACLE_BASE=/u01/app/oracle;export ORACLE_BASE
[oracle@node2 DBSAT_Installer]$ ./dbsat collect dbsat_user/dbsat_user@hemantpdb hemantpdb_DBSAT_Report

Database Security Assessment Tool version 3.1 (Jan 2024)

This tool is intended to assist you in securing your Oracle database system. You are solely responsible for your system and the effect and results of the execution of this tool (including, without limitation, any damage or data loss). Further, the output generated by this tool may include potentially sensitive system configuration data and information that could be used by a skilled attacker to penetrate your system. You are solely responsible for ensuring that the output of this tool, including any generated reports, is handled in accordance with your company's policies.

Connecting to the target Oracle database...

SQL*Plus: Release 21.0.0.0.0 - Production on Tue Mar 5 23:00:54 2024
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Last Successful login time: Tue Mar 05 2024 23:00:00 +08:00

Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

Setup complete.
SQL queries complete.
/bin/cat: /u01/app/oracle/product/21.3.0.0/dbhome_1/network/admin/sqlnet.ora: No such file or directory
Warning: Exit status 256 from OS rule: sqlnet.ora
/bin/ls: cannot access /u01/app/oracle/product/21.3.0.0/dbhome_1/network/admin/sqlnet.ora: No such file or directory
Warning: Exit status 512 from OS rule: ls_sqlnet.ora
/bin/cat: /u01/app/oracle/product/21.3.0.0/dbhome_1/network/admin/listener.ora: No such file or directory
Warning: Exit status 256 from OS rule: listener.ora
/bin/ls: cannot access /u01/app/oracle/product/21.3.0.0/dbhome_1/network/admin/listener.ora: No such file or directory
Warning: Exit status 512 from OS rule: ls_listener.ora
Warning: Exit status 256 from OS rule: dbcs_status
/bin/cat: /u01/app/oracle/product/21.3.0.0/dbhome_1/network/admin/cman.ora: No such file or directory
Warning: Exit status 256 from OS rule: cman.ora
/bin/cat: /u01/app/oracle/product/21.3.0.0/dbhome_1/ldap/admin/fips.ora: No such file or directory
Warning: Exit status 256 from OS rule: fips1.ora
/bin/cat: /fips.ora: No such file or directory
Warning: Exit status 256 from OS rule: fips2.ora
/bin/ls: cannot access /u01/app/oracle/product/21.3.0.0/dbhome_1/rdbms/log/diag: No such file or directory
Warning: Exit status 512 from OS rule: diag_dest_home
OS commands complete.
Disconnected from Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

DBSAT Collector completed successfully.

Calling /u01/app/oracle/product/21.3.0.0/dbhome_1/bin/zip to encrypt hemantpdb_DBSAT_Report.json...

Enter password:
Verify password:
  adding: hemantpdb_DBSAT_Report.json (deflated 86%)
zip completed successfully.
[oracle@node2 DBSAT_Installer]$
I can afford to ignore the network/admin lookups under $ORACLE_HOME as they are not relevant in this environment. I might go back later and check the "diag_dest_home" rule (e.g. by reviewing "sat_collector.sql").
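A quick way to see how that rule is defined is to search the Collector files; a sketch (the rule may be defined in the dbsat driver script rather than in sat_collector.sql, so I would search both):

grep -n -i "diag_dest" sat_collector.sql dbsat     # locate the diag_dest_home / diag_dest_base OS rules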
[oracle@node2 DBSAT_Installer]$ PATH=/u01/app/21.3.0.0/grid/jdk/bin:$PATH;export PATH
[oracle@node2 DBSAT_Installer]$ java -version
java version "1.8.0_291"
Java(TM) SE Runtime Environment (build 1.8.0_291-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.291-b09, mixed mode)
[oracle@node2 DBSAT_Installer]$
[oracle@node2 DBSAT_Installer]$ JAVA_HOME=/u01/app/21.3.0.0/grid/jdk;export JAVA_HOME
[oracle@node2 DBSAT_Installer]$ ./dbsat report hemantpdb_DBSAT_Report

Database Security Assessment Tool version 3.1 (Jan 2024)

This tool is intended to assist you in securing your Oracle database system. You are solely responsible for your system and the effect and results of the execution of this tool (including, without limitation, any damage or data loss). Further, the output generated by this tool may include potentially sensitive system configuration data and information that could be used by a skilled attacker to penetrate your system. You are solely responsible for ensuring that the output of this tool, including any generated reports, is handled in accordance with your company's policies.

DBSAT Reporter ran successfully.

Calling /usr/bin/zip to encrypt the generated reports...

Enter password:
Verify password:
        zip warning: hemantpdb_DBSAT_Report_report.zip not found or empty
  adding: hemantpdb_DBSAT_Report_report.txt (deflated 76%)
  adding: hemantpdb_DBSAT_Report_report.html (deflated 83%)
  adding: hemantpdb_DBSAT_Report_report.xlsx (deflated 2%)
  adding: hemantpdb_DBSAT_Report_report.json (deflated 81%)
zip completed successfully.
[oracle@node2 DBSAT_Installer]$
[oracle@node2 DBSAT_Installer]$ unzip -l hemantpdb_DBSAT_Report_report.zip
Archive:  hemantpdb_DBSAT_Report_report.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
   161417  03-05-2024 23:09   hemantpdb_DBSAT_Report_report.txt
   261378  03-05-2024 23:09   hemantpdb_DBSAT_Report_report.html
    36732  03-05-2024 23:09   hemantpdb_DBSAT_Report_report.xlsx
   197620  03-05-2024 23:09   hemantpdb_DBSAT_Report_report.json
---------                     -------
   657147                     4 files
[oracle@node2 DBSAT_Installer]$
[oracle@node2 DBSAT_Installer]$ unzip hemantpdb_DBSAT_Report_report.zip hemantpdb_DBSAT_Report_report.txt
Archive:  hemantpdb_DBSAT_Report_report.zip
[hemantpdb_DBSAT_Report_report.zip] hemantpdb_DBSAT_Report_report.txt password:
  inflating: hemantpdb_DBSAT_Report_report.txt
[oracle@node2 DBSAT_Installer]$
[oracle@node2 DBSAT_Installer]$ more hemantpdb_DBSAT_Report_report.txt

### Oracle Database Security Assessment - Highly Sensitive ###

* Assessment Date & Time *

Date of Data Collection              Date of Report                       Reporter Version
----------------------------------   ----------------------------------   ---------------------
Tue Mar 05 2024 23:00:54 UTC+08:00   Tue Mar 05 2024 23:09:51 UTC+08:00   3.1 (Jan 2024) - b73a

* Database Identity *

Name       Container (Type:ID)   Platform           Database Role   Log Mode       Created
--------   -------------------   ----------------   -------------   ------------   ----------------------------------
DB21CRAC   HEMANTPDB (PDB:3)     Linux x86 64-bit   PRIMARY         NOARCHIVELOG   Fri Jan 19 2024 15:12:46 UTC+08:00

### Summary ###

Section                       Pass  Evaluate  Advisory  Low Risk  Medium Risk  High Risk  Total Findings
---------------------------   ----  --------  --------  --------  -----------  ---------  --------------
Basic Information                0         0         0         0            0          1               1
User Accounts                    7         8         1         3            2          1              22
Privileges and Roles             6        23         1         0            0          0              30
Authorization Control            0         3         1         0            0          0               4
Fine-Grained Access Control      0         0         5         0            0          0               5
Auditing                         0         2         9         3            0          0              14
Encryption                       0         3         1         0            0          0               4
Database Configuration           7         9         0         2            2          0              20
Network Configuration            0         0         0         0            0          1               1
Operating System                 4         3         0         1            1          0               9
Total                           24        51        18         9            5          3             110

### Basic Information ###

* Database Version *

Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0
The TXT format of the report is 161,417 bytes and the HTML format is 261,378 bytes. These are the three "High Risk" findings from the TXT report :
* Patch Check *

- The Oracle Database should be patched

Status: High Risk

Summary: Oracle Database version is supported but latest patch is missing. Latest comprehensive patch has not been applied.

Details: Latest patch not applied for a supported database version.

Remarks: Unsupported commercial and database systems should not be used because fixes to newly identified bugs will not be implemented by the vendor. The lack of support can result in potential vulnerabilities. Systems at unsupported servicing levels or releases will not receive security updates for new vulnerabilities, which leaves them subject to exploitation. When maintenance updates and patches are no longer available, the database software is no longer considered supported and should be upgraded or decommissioned. It is vital to keep the database software up-to-date with security fixes as they are released. Oracle issues comprehensive patches in the form of Release Updates on a regular quarterly schedule. These updates should be applied as soon as they are available.

References: Oracle Best Practice
CIS Benchmark: Recommendation 1.1
DISA STIG: V-237697, V-237748, V-251802

* Users with Default Passwords *

- User accounts should not have default passwords

Status: High Risk

Summary: Found 1 unlocked user account with default password.

Details: Users with default password: HR

Remarks: Default passwords for predefined Oracle accounts are well known and provide a trivial means of entry for attackers. Database or account administrators should also change well-known passwords for locked accounts. Having default passwords can lead to unauthorized data manipulation and theft of confidential information. Note that if a script creates the database and the SYS or SYSTEM user password remains unchanged, these users are considered to possess a default password. Your database may be at risk due to the password presence within the script. Change the password to improve security.

References: Oracle Best Practice
CIS Benchmark: Recommendation 4.1
DISA STIG: V-237698

* Network Encryption *

- Check configurations used for Native Network Encryption

Status: High Risk

Summary: Found unencrypted connections. Clients can connect to the database using unencrypted communication channels.

Details: Found 3 connections established over unencrypted channel.

Remarks: Network encryption is crucial for protecting the confidentiality and integrity of communication between a database server and its clients. Connections to a database instance must be established using the encrypted channels.

References: Oracle Best Practice
CIS Benchmark: Recommendation 2.3.1, 2.3.2
DISA STIG: V-219841, V-220263, V-220291, V-237699, V-237700, V-237723
Of course, the Report goes on to detail the 110 "findings".
Finally, once the report has been generated, I drop the dbsat_user account :

[oracle@node2 DBSAT_Installer]$ sqlplus / as sysdba

SQL*Plus: Release 21.0.0.0.0 - Production on Tue Mar 5 23:18:49 2024
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

SQL> alter session set container=HEMANTPDB;

Session altered.

SQL> drop user dbsat_user;

User dropped.

SQL> quit
Disconnected from Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0
[oracle@node2 DBSAT_Installer]$
So, DBSAT does reference CIS and STIG in its report.