Hemant K Chitale

I am an Oracle Database Specialist in Singapore.


Domains in 23ai Schema Development

Sun, 2025-02-02 03:12

I had heard of Data Use Case Domains in 23ai. However, the New Features and Database Concepts documentation didn't provide enough examples for me to build on.

Fortunately, this blog post by Ulrike Schwinn (shared by @thatjeffsmith on X) helped me explore Domains.

In this demo, I am using the Pre-Seeded Domains. However, you can see the examples posted by Ulrike Schwinn, or read the Database Concepts documentation, to help build your own custom Domains.

A Data Use Case Domain is like a custom DataType that permits only valid values. The Domain name can also be self-identifying (just as "DATE" or "NUMBER" identifies the type of data being stored).

Here is my demonstration (I also use the Annotations feature -- the Data Use Case Domains documentation links above also cover this feature):


SQL> set pages600 linesize 132
SQL> col contact_person format a32
SQL> col contact_email format a24
SQL>
SQL> drop table forex_rates_contacts;

Table dropped.

SQL>
SQL>
SQL> create table forex_rates_contacts
  2  (
  3   country_iso_code  varchar2(3) domain country_code_d,  -- preseeded SYS domain
  4   currency_code varchar2(3) domain currency_code_d, -- preseeded SYS domain
  5   contact_person varchar2(128),
  6   contact_email     varchar2(4000) domain email_d -- preseeded SYS domain
  7  )
  8  annotations (display 'Forex Contact Persons')
  9  /

Table created.

SQL>
SQL> desc forex_rates_contacts
 Name                                                                     Null?    Type
 ------------------------------------------------------------------------ -------- -------------------------------------------------
 COUNTRY_ISO_CODE                                                                  VARCHAR2(3) SYS.COUNTRY_CODE_D
 CURRENCY_CODE                                                                     VARCHAR2(3) SYS.CURRENCY_CODE_D
 CONTACT_PERSON                                                                    VARCHAR2(128)
 CONTACT_EMAIL                                                                     VARCHAR2(4000) SYS.EMAIL_D

SQL>
SQL>
SQL> set long 1000
SQL> set longc 1000
SQL> set serveroutput on
SQL>
SQL> rem  FROM clause is no longer required in 23ai
SQL> select dbms_metadata.get_ddl('TABLE','FOREX_RATES_CONTACTS','HEMANT');

DBMS_METADATA.GET_DDL('TABLE','FOREX_RATES_CONTACTS','HEMANT')
------------------------------------------------------------------------------------------------------------------------------------

  CREATE TABLE "HEMANT"."FOREX_RATES_CONTACTS"
   (    "COUNTRY_ISO_CODE" VARCHAR2(3) DOMAIN "SYS"."COUNTRY_CODE_D",
        "CURRENCY_CODE" VARCHAR2(3) DOMAIN "SYS"."CURRENCY_CODE_D",
        "CONTACT_PERSON" VARCHAR2(128),
        "CONTACT_EMAIL" VARCHAR2(4000) DOMAIN "SYS"."EMAIL_D"
   ) SEGMENT CREATION DEFERRED
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  TABLESPACE "USERS"
  ANNOTATIONS("DISPLAY" 'Forex Contact Persons')


SQL>
SQL>
SQL>
SQL> rem  MULTI-ROW Insert
SQL> insert into forex_rates_contacts
  2  values
  3  ('US','USD','Mr Unknown','unknown@nowhere.gov'),
  4  ('IN','INR','Someone at RBI','someone@rbi.gov.in')
  5  /

2 rows created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> select * from forex_rates_contacts
  2  order by country_iso_code
  3  /

COU CUR CONTACT_PERSON                   CONTACT_EMAIL
--- --- -------------------------------- ------------------------
IN  INR Someone at RBI                   someone@rbi.gov.in
US  USD Mr Unknown                       unknown@nowhere.gov

SQL>
SQL> -- Note that the country_code_d and currency_code_d do not check validity against real ISO codes
SQL> -- thus, they do not disallow "ZZ" and "ZZZ"
SQL> insert into forex_rates_contacts
  2  values
  3  ('ZZ','ZZZ','Mr Unknown','unknown@nowhere.zz')
  4  /

1 row created.

SQL>
SQL> select * from forex_rates_contacts
  2  order by country_iso_code
  3  /

COU CUR CONTACT_PERSON                   CONTACT_EMAIL
--- --- -------------------------------- ------------------------
IN  INR Someone at RBI                   someone@rbi.gov.in
US  USD Mr Unknown                       unknown@nowhere.gov
ZZ  ZZZ Mr Unknown                       unknown@nowhere.zz

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> -- But the rules for email validation are encoded
SQL> insert into forex_rates_contacts
  2  values
  3  ('UK','GBP','Mr Someone','someone@x')
  4  /
insert into forex_rates_contacts
*
ERROR at line 1:
ORA-11534: check constraint (HEMANT.SYS_C0013464) involving column CONTACT_EMAIL due to domain constraint SYS.SYS_DOMAIN_C0030 of
domain SYS.EMAIL_D violated
Help: https://docs.oracle.com/error-help/db/ora-11534/


SQL>
SQL> select * from forex_rates_contacts
  2  order by country_iso_code
  3  /

COU CUR CONTACT_PERSON                   CONTACT_EMAIL
--- --- -------------------------------- ------------------------
IN  INR Someone at RBI                   someone@rbi.gov.in
US  USD Mr Unknown                       unknown@nowhere.gov
ZZ  ZZZ Mr Unknown                       unknown@nowhere.zz

SQL>
SQL> spool off


I haven't added my own custom Domains but have used the Pre-Seeded Domains for Country, Currency and Email.  See "10.1.12 Built-In Use Case Domains" in the documentation.
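If you do want to build your own, here is a minimal sketch of a custom Domain (the names and the rule are hypothetical, following the single-column CREATE DOMAIN syntax in the Database Concepts documentation):


create domain credit_rating_d as varchar2(3)
  constraint credit_rating_chk check (credit_rating_d in ('AAA','AA','A','BBB'));

create table issuer_ratings
(
 issuer_name    varchar2(100),
 credit_rating  varchar2(3) domain credit_rating_d  -- only the four listed values would be accepted
)
/


An INSERT supplying any other value for CREDIT_RATING should then fail with a domain check-constraint violation, just as the EMAIL_D example above rejects a malformed email address.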



Categories: DBA Blogs

23ai New Feature : Partition HIGH_VALUE in JSON format

Sun, 2025-01-26 04:10

 A quick demonstration of the new HIGH_VALUE_JSON column in the USER_TAB_PARTITIONS view in 23ai :


[oracle@localhost Hemant]$ sqlplus hemant/hemant@freepdb1

SQL*Plus: Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems on Sun Jan 26 10:07:09 2025
Version 23.6.0.24.10

Copyright (c) 1982, 2024, Oracle.  All rights reserved.

Last Successful login time: Sun Jan 26 2025 10:05:18 +00:00

Connected to:
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.6.0.24.10

SQL> @demo_part_high_value.sql
SQL> set pages600 linesize 132
SQL> set long 10000
SQL>
SQL> spool demo_part_high_value
SQL>
SQL> -- 23ai has two new columns in the USER_TAB_PARTITIONS view
SQL> -- HIGH_VALUE_CLOB and HIGH_VALUE_JSON
SQL> -- unlike HIGH_VALUE which is a LONG, these two can be used programmatically
SQL> -- here I show HIGH_VALUE_JSON along with the HIGH_VALUE
SQL>
SQL> set pages600 linesize 132
SQL> set long 10000
SQL> col partition_name format a8 hea 'P_Name'
SQL> col high_value format a56 trunc hea 'High_Value_LONG' trunc
SQL> col high_value_json format a48 hea 'High_Value_JSON'
SQL>
SQL>
SQL> drop table hkc_test_intvl;

Table dropped.

SQL>
SQL> create table hkc_test_intvl
  2  (date_column date,
  3  data_column varchar2(50))
  4  partition by range (date_column)
  5  interval (numtoyminterval(1,'MONTH'))
  6  (partition P_1 values less than (to_date('01-FEB-2024','DD-MON-YYYY')))
  7  /

Table created.

SQL>
SQL>
SQL> insert into hkc_Test_intvl
  2  values (to_date('15-AUG-2024','DD-MON-YYYY'), 'August Row')
  3  /

1 row created.

SQL>
SQL>
SQL> insert into hkc_test_intvl
  2  values (to_date('15-OCT-2024','DD-MON-YYYY'),'October Row')
  3  /

1 row created.

SQL>
SQL> insert into hkc_test_intvl
  2  values (to_date('15-DEC-2024','DD-MON-YYYY'),'December Row')
  3  /

1 row created.

SQL>
SQL> select partition_name, high_value,  high_value_json
  2  from user_tab_partitions
  3  where table_name = 'HKC_TEST_INTVL'
  4  /

P_Name   High_Value_LONG                                          High_Value_JSON
-------- -------------------------------------------------------- ------------------------------------------------
P_1      TO_DATE(' 2024-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' {"high_value":"2024-02-01T00:00:00"}
SYS_P447 TO_DATE(' 2024-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' {"high_value":"2024-09-01T00:00:00"}
SYS_P448 TO_DATE(' 2024-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' {"high_value":"2024-11-01T00:00:00"}
SYS_P449 TO_DATE(' 2025-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' {"high_value":"2025-01-01T00:00:00"}

SQL>
SQL>
SQL> spool off


Earlier, HIGH_VALUE was presented as a LONG.
23ai adds two columns to the USER_TAB_PARTITIONS data dictionary view: HIGH_VALUE_CLOB and HIGH_VALUE_JSON.

These make it easier to query the data dictionary, for example, for Partition Life Cycle management.
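For example, a query along these lines (a sketch, not part of the demo above) extracts the upper boundary of each partition as a proper DATE -- something that is clumsy to do with the LONG HIGH_VALUE column:


select partition_name,
       to_date(json_value(high_value_json, '$.high_value'),
               'YYYY-MM-DD"T"HH24:MI:SS')  as high_value_date
from   user_tab_partitions
where  table_name = 'HKC_TEST_INTVL'
order  by 2
/


From there, it is straightforward to identify (and, say, drop or archive) Interval Partitions whose boundary is older than your retention period.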

Categories: DBA Blogs

Querying the Alert log in RAC

Tue, 2025-01-07 08:50

 Although v$diag_alert_ext works for a single instance, there is no corresponding gv$ view for RAC.

However, the gv$() function can be used to execute a cursor across all instances. Here I query the alert log on the two nodes, srv1 and srv2, and order the results by timestamp to capture messages from the last 15 minutes. Note all the activity that happens when I CLOSE and later OPEN a PDB on one node, resulting in the other node also responding to maintain Global Resources.


SQL> l
  1  select * from table(gv$(cursor
  2                         (select originating_timestamp, host_id, con_id, message_text
  3                          from v$diag_alert_ext
  4                          where originating_timestamp > sysdate - 0.25/24 -- time filter
  5                          and message_text like '%' -- message filter
  6                          )
  7                       )
  8             )
  9* order by 1,2,3
SQL> /

ORIGINATING_TIMESTAMP
---------------------------------------------------------------------------
HOST_ID                                                                 CON_ID
------------------------------------------------------------------- ----------
MESSAGE_TEXT
------------------------------------------------------------------------------------------------------------------------------------
07-JAN-25 02.23.37.941000000 PM +00:00
srv2                                                                         1
Thread 2 cannot allocate new log, sequence 278

07-JAN-25 02.23.37.941000000 PM +00:00
srv2                                                                         1
Checkpoint not complete

07-JAN-25 02.23.37.941000000 PM +00:00
srv2                                                                         1
  Current log# 3 seq# 277 mem# 0: +DATA/RACDB/ONLINELOG/group_3.270.1164520171

07-JAN-25 02.23.37.941000000 PM +00:00
srv2                                                                         1
  Current log# 3 seq# 277 mem# 1: +FRA/RACDB/ONLINELOG/group_3.259.1164520171

07-JAN-25 02.23.38.255000000 PM +00:00
srv1                                                                         1
Thread 1 cannot allocate new log, sequence 322

07-JAN-25 02.23.38.255000000 PM +00:00
srv1                                                                         1
  Current log# 1 seq# 321 mem# 1: +FRA/RACDB/ONLINELOG/group_1.257.1164519549

07-JAN-25 02.23.38.255000000 PM +00:00
srv1                                                                         1
  Current log# 1 seq# 321 mem# 0: +DATA/RACDB/ONLINELOG/group_1.263.1164519549

07-JAN-25 02.23.38.255000000 PM +00:00
srv1                                                                         1
Checkpoint not complete

07-JAN-25 02.23.40.951000000 PM +00:00
srv2                                                                         1
Thread 2 advanced to log sequence 278 (LGWR switch),  current SCN: 10841502

07-JAN-25 02.23.40.951000000 PM +00:00
srv2                                                                         1
  Current log# 4 seq# 278 mem# 0: +DATA/RACDB/ONLINELOG/group_4.271.1164520171

07-JAN-25 02.23.40.951000000 PM +00:00
srv2                                                                         1
  Current log# 4 seq# 278 mem# 1: +FRA/RACDB/ONLINELOG/group_4.260.1164520173

07-JAN-25 02.23.40.976000000 PM +00:00
srv2                                                                         1
Deleted Oracle managed file +FRA/RACDB/ARCHIVELOG/2024_11_02/thread_1_seq_240.500.1183975081

07-JAN-25 02.23.40.977000000 PM +00:00
srv2                                                                         1
ARC2 (PID:17940): Archived Log entry 1023 added for B-1164519547.T-2.S-277 LOS:0x0000000000a568cc NXS:0x0000000000a56d9e NAB:372 ID
0x46c5be03 LAD:1

07-JAN-25 02.23.41.048000000 PM +00:00
srv1                                                                         1
Thread 1 advanced to log sequence 322 (LGWR switch),  current SCN: 10841505

07-JAN-25 02.23.41.048000000 PM +00:00
srv1                                                                         1
  Current log# 2 seq# 322 mem# 1: +FRA/RACDB/ONLINELOG/group_2.258.1164519549

07-JAN-25 02.23.41.048000000 PM +00:00
srv1                                                                         1
  Current log# 2 seq# 322 mem# 0: +DATA/RACDB/ONLINELOG/group_2.262.1164519549

07-JAN-25 02.23.41.117000000 PM +00:00
srv2                                                                         1
Deleted Oracle managed file +FRA/RACDB/ARCHIVELOG/2024_11_02/thread_2_seq_221.499.1183975181

07-JAN-25 02.23.41.138000000 PM +00:00
srv1                                                                         1
Deleted Oracle managed file +FRA/RACDB/ARCHIVELOG/2024_11_02/thread_2_seq_222.498.1183975205

07-JAN-25 02.23.41.139000000 PM +00:00
srv1                                                                         1
NET  (PID:15641): Archived Log entry 1025 added for B-1164519547.T-1.S-321 LOS:0x0000000000a568c9 NXS:0x0000000000a56da1 NAB:369 ID
0x46c5be03 LAD:1

07-JAN-25 02.23.41.209000000 PM +00:00
srv1                                                                         1
Deleted Oracle managed file +FRA/RACDB/ARCHIVELOG/2024_11_02/thread_1_seq_241.497.1183975551

07-JAN-25 02.27.41.986000000 PM +00:00
srv1                                                                         1
alter pluggable database pdb1 close

07-JAN-25 02.27.41.997000000 PM +00:00
srv1                                                                         3
Pluggable database PDB1 closing

07-JAN-25 02.27.42.004000000 PM +00:00
srv1                                                                         3
Increasing priority of 2 RS

07-JAN-25 02.27.42.011000000 PM +00:00
srv1                                                                         3
JIT: pid 19878 requesting stop

07-JAN-25 02.27.42.044000000 PM +00:00
srv1                                                                         3
Stopped service mypdb

07-JAN-25 02.27.42.208000000 PM +00:00
srv1                                                                         3
Closing sequence subsystem (3300983922).

07-JAN-25 02.27.42.332000000 PM +00:00
srv1                                                                         3
Buffer Cache flush started: 3

07-JAN-25 02.27.42.345000000 PM +00:00
srv1                                                                         3
Buffer Cache flush finished: 3

07-JAN-25 02.27.42.347000000 PM +00:00
srv2                                                                         1
Increasing priority of 2 RS

07-JAN-25 02.27.42.350000000 PM +00:00
srv1                                                                         3
queued detach DA request 0x934eafc8 for pdb 3, ospid 19878

07-JAN-25 02.27.42.351000000 PM +00:00
srv1                                                                         1
Domain Action Reconfiguration started (domid 3, new da inc 3, cluster inc 8)

07-JAN-25 02.27.42.351000000 PM +00:00
srv1                                                                         1
Instance 1 is detaching from domain 3 (lazy abort? 0, recovery member? 0)

07-JAN-25 02.27.42.352000000 PM +00:00
srv1                                                                         1
 Global Resource Directory partially frozen for domain action

07-JAN-25 02.27.42.352000000 PM +00:00
srv1                                                                         1
* domain detach - domain 3 valid ? 1

07-JAN-25 02.27.42.430000000 PM +00:00
srv2                                                                         1
Domain Action Reconfiguration started (domid 3, new da inc 3, cluster inc 8)

07-JAN-25 02.27.42.430000000 PM +00:00
srv2                                                                         1
Instance 1 is detaching from domain 3 (lazy abort? 0, recovery member? 0)

07-JAN-25 02.27.42.430000000 PM +00:00
srv2                                                                         1
 Global Resource Directory partially frozen for domain action

07-JAN-25 02.27.42.430000000 PM +00:00
srv2                                                                         1
* domain detach - domain 3 valid ? 1

07-JAN-25 02.27.42.432000000 PM +00:00
srv2                                                                         1
 Non-local Process blocks cleaned out

07-JAN-25 02.27.42.434000000 PM +00:00
srv2                                                                         1
 Set master node info

07-JAN-25 02.27.42.435000000 PM +00:00
srv2                                                                         1
 Dwn-cvts replayed, VALBLKs dubious

07-JAN-25 02.27.42.435000000 PM +00:00
srv2                                                                         1
 All grantable enqueues granted

07-JAN-25 02.27.42.437000000 PM +00:00
srv1                                                                         1
 Non-local Process blocks cleaned out

07-JAN-25 02.27.42.438000000 PM +00:00
srv1                                                                         1
 Set master node info

07-JAN-25 02.27.42.439000000 PM +00:00
srv1                                                                         1
 Dwn-cvts replayed, VALBLKs dubious

07-JAN-25 02.27.42.440000000 PM +00:00
srv1                                                                         1
 All grantable enqueues granted

07-JAN-25 02.27.42.440000000 PM +00:00
srv2                                                                         1
Domain Action Reconfiguration complete (total time 0.0 secs)

07-JAN-25 02.27.42.440000000 PM +00:00
srv2                                                                         1
Decreasing priority of 2 RS

07-JAN-25 02.27.42.444000000 PM +00:00
srv1                                                                         1
freeing the fusion rht of pdb 3

07-JAN-25 02.27.42.445000000 PM +00:00
srv1                                                                         1
freeing the pdb enqueue rht

07-JAN-25 02.27.42.445000000 PM +00:00
srv1                                                                         1
Decreasing priority of 2 RS

07-JAN-25 02.27.42.445000000 PM +00:00
srv1                                                                         1
Domain Action Reconfiguration complete (total time 0.1 secs)

07-JAN-25 02.27.42.542000000 PM +00:00
srv1                                                                         1
Pluggable database PDB1 closed

07-JAN-25 02.27.42.571000000 PM +00:00
srv1                                                                         1
Completed: alter pluggable database pdb1 close

07-JAN-25 02.28.10.713000000 PM +00:00
srv1                                                                         1
alter pluggable database pdb1 open

07-JAN-25 02.28.10.715000000 PM +00:00
srv1                                                                         3
Pluggable database PDB1 opening in read write

07-JAN-25 02.28.10.892000000 PM +00:00
srv1                                                                         3
Increasing priority of 2 RS

07-JAN-25 02.28.10.892000000 PM +00:00
srv1                                                                         3
Autotune of undo retention is turned on.

07-JAN-25 02.28.10.892000000 PM +00:00
srv1                                                                         3
SUPLOG: Initialize PDB SUPLOG SGA, old value 0x0, new value 0x18

07-JAN-25 02.28.10.929000000 PM +00:00
srv1                                                                         3
queued attach DA request 0x934eaf60 for pdb 3, ospid 19878

07-JAN-25 02.28.10.929000000 PM +00:00
srv2                                                                         1
Increasing priority of 2 RS

07-JAN-25 02.28.10.933000000 PM +00:00
srv1                                                                         1
Domain Action Reconfiguration started (domid 3, new da inc 4, cluster inc 8)

07-JAN-25 02.28.10.933000000 PM +00:00
srv1                                                                         1
 Global Resource Directory partially frozen for domain action

07-JAN-25 02.28.10.933000000 PM +00:00
srv1                                                                         1
Instance 1 is attaching to domain 3

07-JAN-25 02.28.10.940000000 PM +00:00
srv2                                                                         1
Domain Action Reconfiguration started (domid 3, new da inc 4, cluster inc 8)

07-JAN-25 02.28.10.940000000 PM +00:00
srv2                                                                         1
Instance 1 is attaching to domain 3

07-JAN-25 02.28.10.941000000 PM +00:00
srv2                                                                         1
 Global Resource Directory partially frozen for domain action

07-JAN-25 02.28.10.943000000 PM +00:00
srv2                                                                         1
 Non-local Process blocks cleaned out

07-JAN-25 02.28.10.946000000 PM +00:00
srv2                                                                         1
 Set master node info

07-JAN-25 02.28.10.947000000 PM +00:00
srv1                                                                         1
 Non-local Process blocks cleaned out

07-JAN-25 02.28.10.947000000 PM +00:00
srv2                                                                         1
 Dwn-cvts replayed, VALBLKs dubious

07-JAN-25 02.28.10.948000000 PM +00:00
srv1                                                                         1
 Set master node info

07-JAN-25 02.28.10.949000000 PM +00:00
srv2                                                                         1
 All grantable enqueues granted

07-JAN-25 02.28.10.952000000 PM +00:00
srv1                                                                         1
 Dwn-cvts replayed, VALBLKs dubious

07-JAN-25 02.28.10.953000000 PM +00:00
srv1                                                                         1
 All grantable enqueues granted

07-JAN-25 02.28.11.034000000 PM +00:00
srv2                                                                         1
Domain Action Reconfiguration complete (total time 0.1 secs)

07-JAN-25 02.28.11.034000000 PM +00:00
srv2                                                                         1
Decreasing priority of 2 RS

07-JAN-25 02.28.11.038000000 PM +00:00
srv1                                                                         1
Domain Action Reconfiguration complete (total time 0.1 secs)

07-JAN-25 02.28.11.038000000 PM +00:00
srv1                                                                         1
Decreasing priority of 2 RS

07-JAN-25 02.28.11.135000000 PM +00:00
srv1                                                                         3
Endian type of dictionary set to little

07-JAN-25 02.28.11.213000000 PM +00:00
srv1                                                                         3
Undo initialization recovery: err:0 start: 3329988 end: 3329989 diff: 1 ms (0.0 seconds)

07-JAN-25 02.28.11.352000000 PM +00:00
srv1                                                                         3
[19878] Successfully onlined Undo Tablespace 2.

07-JAN-25 02.28.11.352000000 PM +00:00
srv1                                                                         3
Undo initialization online undo segments: err:0 start: 3329989 end: 3330129 diff: 140 ms (0.1 seconds)

07-JAN-25 02.28.11.357000000 PM +00:00
srv1                                                                         3
Undo initialization finished serial:0 start:3329988 end:3330134 diff:146 ms (0.1 seconds)

07-JAN-25 02.28.11.361000000 PM +00:00
srv1                                                                         3
Database Characterset for PDB1 is AL32UTF8

07-JAN-25 02.28.11.627000000 PM +00:00
srv1                                                                         3
SUPLOG: Set PDB SUPLOG SGA at PDB OPEN, old 0x18, new 0x0 (no suplog)

07-JAN-25 02.28.11.891000000 PM +00:00
srv1                                                                         3
Started service mypdb/mypdb/mypdb

07-JAN-25 02.28.11.925000000 PM +00:00
srv1                                                                         3
Opening pdb with no Resource Manager plan active

07-JAN-25 02.28.12.097000000 PM +00:00
srv1                                                                         3
joxcsys_required_dirobj_exists: directory object exists with required path /u02/app/oracle/product/19.0.0/dbhome_1/javavm/admin/, pi
d 19878 cid 3

07-JAN-25 02.28.12.118000000 PM +00:00
srv1                                                                         1
Pluggable database PDB1 opened read write

07-JAN-25 02.28.12.122000000 PM +00:00
srv1                                                                         1
Completed: alter pluggable database pdb1 open

07-JAN-25 02.33.26.501000000 PM +00:00
srv1                                                                         1
Control autobackup written to DISK device

07-JAN-25 02.33.26.502000000 PM +00:00
srv1                                                                         1
handle '+FRA/RACDB/AUTOBACKUP/2025_01_07/s_1189780406.497.1189780407'


93 rows selected.

SQL>
At 07-JAN-25 02.27.41.986000000 PM +00:00, I issued a command to close PDB1 (CON_ID=3) on host srv1.
You can also see the "Stopped service mypdb" message on srv1 at 07-JAN-25 02.27.42.

Host srv2 started writing its messages at 07-JAN-25 02.27.42.347000000 PM +00:00, from which you can see Domain Action Reconfiguration (07-JAN-25 02.27.42.351000000 PM +00:00) and Global Resource Directory partially frozen (07-JAN-25 02.27.42.430000000 PM +00:00).

Finally, srv1 logged "Completed: alter pluggable database pdb1 close"  at 07-JAN-25 02.27.42.571000000 PM +00:00 (under CON_ID=1).

When I issued the "alter pluggable database pdb1 open" on srv1 at 07-JAN-25 02.28.10.713000000 PM +00:00, Domain Actions and Global Resource Directory actions were executed again on srv2.

Finally, srv1 logged "Completed: alter pluggable database pdb1 open" at 07-JAN-25 02.28.12.122000000 PM +00:00.

So, it is obvious that closing and opening a PDB in RAC also results in a number of steps to maintain consistency (Resources etc) across the instances.


Categories: DBA Blogs

3million PageViews, 303thousand VideoViews

Wed, 2025-01-01 01:01

This blog, begun in December 2006, has now hit a cumulative count of 3 million PageViews.


This Chart shows counts from the year 2011 :



My YouTube Channel, begun in January 2014, has hit a cumulative count of 303 thousand views :



A Big Thank You to all the Viewers and Subscribers.


Categories: DBA Blogs

DataPump with CheckSum in 21c and above

Sat, 2024-12-28 06:54

 Oracle introduced a CheckSum parameter in 21c.   Here is a demo in 23.6.  


First I run the datapump export without the CheckSum:


[oracle@localhost ~]$ expdp hemant/hemant@freepdb1 schemas=HEMANT dumpfile=HEMANT

Export: Release 23.0.0.0.0 - Limited Availability on Sat Dec 28 12:31:26 2024
Version 23.6.0.24.10

Copyright (c) 1982, 2024, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Starting "HEMANT"."SYS_EXPORT_SCHEMA_01":  hemant/********@freepdb1 schemas=HEMANT dumpfile=HEMANT
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA/LOGREP
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
. . exported "HEMANT"."MY_LOBS"                            7.9 MB   75929 rows
Master table "HEMANT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for HEMANT.SYS_EXPORT_SCHEMA_01 is:
  /opt/oracle/admin/FREE/dpdump/2917EBB8705B3129E0630100007F3D2B/HEMANT.dmp
Job "HEMANT"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat Dec 28 12:32:06 2024 elapsed 0 00:00:38

[oracle@localhost ~]$


Now this is an export WITH the CheckSum (after verifying that COMPATIBLE is 20.0 or higher) by specifying CHECKSUM=YES :


[oracle@localhost ~]$ sqlplus hemant/hemant@freepdb1

SQL*Plus: Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems on Sat Dec 28 12:35:20 2024
Version 23.6.0.24.10

Copyright (c) 1982, 2024, Oracle.  All rights reserved.

Last Successful login time: Sat Dec 28 2024 12:31:26 +00:00

Connected to:
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.6.0.24.10

SQL> show parameter compatible

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      23.6.0
noncdb_compatible                    boolean     FALSE
SQL> quit
Disconnected from Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.6.0.24.10
[oracle@localhost ~]$ expdp hemant/hemant@freepdb1 schemas=HEMANT dumpfile=HEMANT_With_CheckSum checksum=YES

Export: Release 23.0.0.0.0 - Limited Availability on Sat Dec 28 12:35:55 2024
Version 23.6.0.24.10

Copyright (c) 1982, 2024, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Starting "HEMANT"."SYS_EXPORT_SCHEMA_01":  hemant/********@freepdb1 schemas=HEMANT dumpfile=HEMANT_With_CheckSum checksum=YES
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA/LOGREP
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
. . exported "HEMANT"."MY_LOBS"                            7.9 MB   75929 rows
Master table "HEMANT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Generating checksums for dump file set
******************************************************************************
Dump file set for HEMANT.SYS_EXPORT_SCHEMA_01 is:
  /opt/oracle/admin/FREE/dpdump/2917EBB8705B3129E0630100007F3D2B/HEMANT_With_CheckSum.dmp
Job "HEMANT"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat Dec 28 12:36:33 2024 elapsed 0 00:00:36

[oracle@localhost ~]$


After I transfer the dumpfile to another server, I verify the CheckSum with VERIFY_ONLY=YES:


[oracle@localhost ~]$ sqlplus / as sysdba

SQL*Plus: Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems on Sat Dec 28 12:40:54 2024
Version 23.6.0.24.10

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.6.0.24.10

SQL> create directory impdp_check as '/tmp';

Directory created.

SQL> quit
Disconnected from Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.6.0.24.10
[oracle@localhost ~]$ impdp directory=impdp_check dumpfile=HEMANT_With_CheckSum.dmp verify_only=YES

Import: Release 23.0.0.0.0 - Limited Availability on Sat Dec 28 12:42:28 2024
Version 23.6.0.24.10

Copyright (c) 1982, 2024, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free

Warning: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.

Verifying dump file checksums
Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
dump file set is complete
verified checksum for dump file "/tmp/HEMANT_With_CheckSum.dmp"
dump file set is consistent
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at Sat Dec 28 12:42:39 2024 elapsed 0 00:00:05

[oracle@localhost ~]$

[oracle@localhost ~]$ cd /tmp
[oracle@localhost tmp]$ ls -l import.log
-rw-r--r--. 1 oracle oinstall 600 Dec 28 12:42 import.log
[oracle@localhost tmp]$ cat import.log
;;;
Import: Release 23.0.0.0.0 - Limited Availability on Sat Dec 28 12:42:28 2024
Version 23.6.0.24.10

Copyright (c) 1982, 2024, Oracle and/or its affiliates.  All rights reserved.
;;;
Connected to: Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Verifying dump file checksums
Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
dump file set is complete
verified checksum for dump file "/tmp/HEMANT_With_CheckSum.dmp"
dump file set is consistent
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at Sat Dec 28 12:42:39 2024 elapsed 0 00:00:05
[oracle@localhost tmp]$



The VERIFY_ONLY parameter verifies the file without actually importing the dump file.

The default CheckSum algorithm is SHA256.
You can override this by setting CHECKSUM_ALGORITHM to one of CRC32, SHA256, SHA384 or SHA512.
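For example (not run as part of the demo above), an export requesting SHA512 checksums would be invoked along these lines:


expdp hemant/hemant@freepdb1 schemas=HEMANT dumpfile=HEMANT_SHA512 checksum=YES checksum_algorithm=SHA512


The same VERIFY_ONLY=YES import shown above can then be used to validate the dump file on the target server.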

Categories: DBA Blogs

The GROUP BY column_position enhancement in 23ai

Sat, 2024-12-28 02:11

 Oracle 23ai allows specifying a Column Position (or Alias) in the GROUP BY clause.
For backward compatibility, the new "group_by_position_enabled" parameter defaults to FALSE and must be set to TRUE before a Column Position can be used in GROUP BY.


SQL> show parameter group_by_position_enabled

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
group_by_position_enabled            boolean     FALSE
SQL> alter session set group_by_position_enabled=TRUE;

Session altered.

SQL> 
SQL> select p.prod_name, t.day_name, sum(s.amount_sold)
  2  from sh.products p, sh.times t, sh.sales s
  3  where p.prod_id=s.prod_id
  4  and t.time_id=s.time_id
  5  and t.calendar_month_name = 'January'
  6  group by 1, 2
  7  order by 1, 2
  8  /

PROD_NAME                                          DAY_NAME  SUM(S.AMOUNT_SOLD)
-------------------------------------------------- --------- ------------------
11" Youth Field Master Glove                       Friday               4635.73
11" Youth Field Master Glove                       Monday               2903.62
11" Youth Field Master Glove                       Saturday             3636.85
11" Youth Field Master Glove                       Sunday               6602.18
11" Youth Field Master Glove                       Thursday             5696.37
11" Youth Field Master Glove                       Tuesday              2843.81
11" Youth Field Master Glove                       Wednesday            6072.04
11.5" Youth Triple Stripe Series Glove             Friday               6695.84
11.5" Youth Triple Stripe Series Glove             Monday               5436.28
11.5" Youth Triple Stripe Series Glove             Saturday              5653.8
11.5" Youth Triple Stripe Series Glove             Sunday              10909.86
...
...
multiple rows returned 
...
...
PROD_NAME                                          DAY_NAME  SUM(S.AMOUNT_SOLD)
-------------------------------------------------- --------- ------------------
Wicket Keeper Gloves                               Thursday             1550.94
Wicket Keeper Gloves                               Tuesday              3049.62
Wicket Keeper Gloves                               Wednesday            2583.16
Wide Brim Hat                                      Friday                189.28
Wide Brim Hat                                      Monday               1656.35
Wide Brim Hat                                      Saturday             1689.48
Wide Brim Hat                                      Sunday                 560.7
Wide Brim Hat                                      Thursday             1088.44
Wide Brim Hat                                      Tuesday              2855.67
Wide Brim Hat                                      Wednesday             250.19

461 rows selected.

SQL>


This helps developers who already use Column Positions in the ORDER BY clause be consistent when writing the GROUP BY clause.
The HAVING clause also supports Column Aliases.
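For example, the earlier query could be rewritten (a sketch, not run above, in the same session with group_by_position_enabled still TRUE) using Column Aliases instead of positions, with the alias also referenced in the HAVING clause:


select p.prod_name product, t.day_name day_name, sum(s.amount_sold) total_sold
from sh.products p, sh.times t, sh.sales s
where p.prod_id=s.prod_id
and t.time_id=s.time_id
and t.calendar_month_name = 'January'
group by product, day_name
having total_sold > 10000
order by product, day_name
/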


Categories: DBA Blogs

Using Oracle's Autonomous Health Framework to get an "Insight" into a RAC Cluster

Sun, 2024-11-24 03:44

I've posted a video demonstration on using Oracle's Autonomous Health Framework ("AHF") to get an "Insight" into a RAC Cluster.

AHF (Oracle Support Doc ID 2550798.1) is Oracle's current diagnostic tool which includes Orachk and TFA (Trace File Analyzer).  Explore it.




Categories: DBA Blogs

Video on DataGuard Snapshot Standby

Thu, 2024-09-26 22:11

 I have posted a new video on creating a "Snapshot Standby" database in DataGuard.

Typically, such a Standby can be used for "destructive testing" such as D.R. simulation with dummy transactions, application upgrades and schema changes.

If your concern is that the Physical Standby will be unavailable for an extended period, note:

(1) Redo Shipping to the Standby can continue as ArchiveLogs (of course, this means the Standby will be lagging the Primary by the typical interval between Archive Log switches)

(2) You can configure two Standbys, of which one continues to be a Physical Standby with real-time redo shipping, while the other is periodically used to test schema changes / application upgrades and even for frequent D.R. testing.

I have previously posted a few Blog posts on scenarios like Multiple Standbys, Standbys with Lag, Destructive testing in Standbys.  See the list of posts here.
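For reference, the conversion itself is driven by two Broker commands (this is a sketch assuming a Broker-managed Physical Standby named "stdby", not a transcript from the video):


DGMGRL> convert database stdby to snapshot standby;

DGMGRL> convert database stdby to physical standby;


The first command opens the Standby Read-Write for testing (a guaranteed restore point is created behind the scenes); the second discards the test changes and resumes Redo Apply.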



Categories: DBA Blogs

Video on DataGuard Switchover -- with RAC and Single Instance

Sun, 2024-09-01 04:36

 I've posted a demonstration of DataGuard Switchover, using RAC and Single Instance as the Primary/Standby pair.


Fundamentally, there is no difference whether either or both of the databases are RAC or Single Instance.


A Switchover is a graceful, No-Data-Loss operation: the Primary sends the "End-Of-Redo Marker" to the Standby, at which point the Reversal of Roles happens. Therefore, you can execute a Switchover between the two databases (servers / data centres / sites) multiple times without loss of data.


A Failover, on the other hand, can involve data loss, and the erstwhile Primary does not revert to a Standby role but must be recreated / refreshed as a Standby from the new Primary.
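With the Broker, the Switchover itself reduces to very little (a sketch, assuming the "racdb" / "stdby" configuration I use in other posts):


DGMGRL> validate database stdby;

DGMGRL> switchover to stdby;


The VALIDATE step confirms "Ready for Switchover: Yes" before you commit to the role change; a later "switchover to racdb" reverses the roles again.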

Categories: DBA Blogs

A few dgmgrl (DataGuard Broker) commands

Sun, 2024-07-28 09:34

I haven't created a video with commentary, but here are a few dgmgrl commands.

In the output, "RACDB" (or "racdb") is a 2-node RAC database (managed by ClusterWare) and "STDBY" (or "stdby") is a Single-Node, non-RAC database not using Oracle Restart. Both are 19c (19.23).

Initial Setup :

[oracle@srv1 ~]$ dgmgrl sys
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Sat Jul 27 02:31:53 2024
Version 19.23.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Password:
Connected to "RACDB"
Connected as SYSDBA.
DGMGRL> remove configuration;
Removed configuration
DGMGRL> create configuration racdb_dg as primary database is RACDB connect identifier is RACDB;
Configuration "racdb_dg" created with primary database "racdb"
DGMGRL> add database STDBY as connect identifier is STDBY maintained as physical;
Database "stdby" added
DGMGRL> show configuration;

Configuration - racdb_dg

  Protection Mode: MaxPerformance
  Members:
  racdb - Primary database
    stdby - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
DISABLED
DGMGRL> enable configuration;
Enabled.
DGMGRL> show configuration;

Configuration - racdb_dg

  Protection Mode: MaxPerformance
  Members:
  racdb - Primary database
    Warning: ORA-16905: The member was not enabled yet.

    stdby - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
WARNING   (status updated 177 seconds ago)

DGMGRL> show configuration ;

Configuration - racdb_dg

  Protection Mode: MaxPerformance
  Members:
  racdb - Primary database
    Warning: ORA-16905: The member was not enabled yet.

    stdby - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
WARNING   (status updated 182 seconds ago)

DGMGRL>
DGMGRL> show configuration ;

Configuration - racdb_dg

  Protection Mode: MaxPerformance
  Members:
  racdb - Primary database
    stdby - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 44 seconds ago)

DGMGRL>
DGMGRL> show configuration lag;

Configuration - racdb_dg

  Protection Mode: MaxPerformance
  Members:
  racdb - Primary database
    stdby - Physical standby database
            Transport Lag:      0 seconds (computed 0 seconds ago)
            Apply Lag:          0 seconds (computed 0 seconds ago)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 23 seconds ago)

DGMGRL>
In the initial setup, it takes time to actually enable the databases, so we see the "ORA-16905: The member was not enabled yet" warning messages.
Similarly, it takes time for the Transport and Apply lag information to be communicated, so I show the lag only some seconds later.
Note that DataGuard is configured in "Maximum Performance" mode (not Maximum Availability, as I show in my previous video).



The VALIDATE commands :


DGMGRL> validate database racdb;

  Database Role:    Primary database

  Ready for Switchover:  Yes

  Managed by Clusterware:
    racdb:  YES

DGMGRL> validate database stdby;

  Database Role:     Physical standby database
  Primary Database:  racdb

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

  Flashback Database Status:
    racdb:  On
    stdby:  Off

  Capacity Information:
    Database  Instances        Threads
    racdb     2                2
    stdby     1                2
    Warning: the target standby has fewer instances than the
    primary database, this may impact application performance

  Managed by Clusterware:
    racdb:  YES
    stdby:  NO

  Current Configuration Log File Sizes:
    Thread #   Smallest Online Redo      Smallest Standby Redo
               Log File Size             Log File Size
               (racdb)                   (stdby)
    1          512 MBytes                100 MBytes
    2          512 MBytes                100 MBytes

DGMGRL>
Here we see some information :
(1) The configuration is Ready for Switchover (from "racdb" as Primary to "stdby")

(2) Flashback Database is configured on "racdb" but not on "stdby"

(3) Although "racdb" has two instances (two redo threads), "stdby" has only 1 instance (and redo thread). {Thus we know it is possible to have a Single Instance Standby Database for a multi-node RAC -- I won't go into a debate / discussion on such a configuration if comments appear in response to this blog post; suffice to say that, IMHO, it can make sense to do so}

(4) "racdb" is managed by ClusterWare but "stdby" is not {Thus we know that the Standby does not have to be in ClusterWare / Oracle Restart configuration}

(5) Although "racdb" has Redo Log files of 512MB, "stdby" is, unfortunately, configured with some smaller Redo Logs.



Some "Verbose" Configuration Information :


DGMGRL> show configuration verbose

Configuration - racdb_dg

  Protection Mode: MaxPerformance
  Members:
  racdb - Primary database
    stdby - Physical standby database

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverLagGraceTime   = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'
    ExternalDestination1            = ''
    ExternalDestination2            = ''
    PrimaryLostWriteAction          = 'CONTINUE'
    ConfigurationWideServiceName    = 'RACDB_CFG'
    FastStartFailoverLagType        = 'APPLY'

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS

DGMGRL>
DGMGRL> show database verbose racdb;

Database - racdb

  Role:               PRIMARY
  Intended State:     TRANSPORT-ON
  Instance(s):
    RACDB1
    RACDB2

  Properties:
    DGConnectIdentifier             = 'racdb'
    ObserverConnectIdentifier       = ''
    FastStartFailoverTarget         = ''
    PreferredObserverHosts          = ''
    LogShipping                     = 'ON'
    RedoRoutes                      = ''
    LogXptMode                      = 'ASYNC'
    DelayMins                       = '0'
    Binding                         = 'optional'
    MaxFailure                      = '0'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyLagThreshold               = '30'
    TransportLagThreshold           = '30'
    TransportDisconnectedThreshold  = '30'
    ApplyParallel                   = 'AUTO'
    ApplyInstances                  = '0'
    StandbyFileManagement           = ''
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '0'
    LogArchiveMinSucceedDest        = '0'
    DataGuardSyncLatency            = '0'
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = ''
    DbFileNameConvert               = ''
    LogFileNameConvert              = ''
    ArchiveLocation                 = ''
    AlternateLocation               = ''
    StandbyArchiveLocation          = ''
    StandbyAlternateLocation        = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    LogXptStatus                    = '(monitor)'
    SendQEntries                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    HostName(*)
    StaticConnectIdentifier(*)
    TopWaitEvents(*)
    SidName(*)
    (*) - Please check specific instance for the property value

  Log file locations(*):
    (*) - Check specific instance for log file locations.

Database Status:
SUCCESS

DGMGRL>
DGMGRL> show instance verbose "RACDB1"

Instance 'RACDB1' of database 'racdb'

  PFILE:
  Properties:
    HostName                        = 'srv1'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.91)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=RACDB_DGMGRL)(INSTANCE_NAME=RACDB1)(SERVER=DEDICATED)))'
    TopWaitEvents                   = '(monitor)'
    SidName                         = '(monitor)'

  Log file locations:
    Alert log               : /u01/app/oracle/diag/rdbms/racdb/RACDB1/trace/alert_RACDB1.log
    Data Guard Broker log   : /u01/app/oracle/diag/rdbms/racdb/RACDB1/trace/drcRACDB1.log

Instance Status:
SUCCESS

DGMGRL>
Here we see some more information :
(1) Timeouts (which I've left at defaults)

(2) Fast Start Failover and Observer information (which I am not using in this configuration)

(3) That Log Transport is Async, with 0 Delay and Compression Disabled.

(4) The Alert Log and DG Broker Log file locations (for "racdb", I extract the information for only the first instance, "RACDB1")



Connect Identifiers :

DGMGRL> validate static connect identifier for racdb;
Oracle Clusterware on database "racdb" is available for database restart.

DGMGRL> validate static connect identifier for stdby;
Oracle Clusterware is not configured on database "stdby".
Connecting to database "stdby" using static connect identifier "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=stdby)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=STDBY_DGMGRL)(INSTANCE_NAME=STDBY)(SERVER=DEDICATED)(STATIC_SERVICE=TRUE)))" ...
Succeeded.
The static connect identifier allows for a connection to database "stdby".
For "racdb", which is managed by ClusterWare, I had specified "RACDB" {which is the SCAN address} as the Connect Identifier (with "primary database is RACDB connect identifier is RACDB" in the first configuration command). For "stdby", which is not managed by ClusterWare or Oracle Restart, a DGMGRL entry for the Listener is required in this manner:
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = STDBY)
      (SID_NAME = STDBY)
      (ORACLE_HOME = /u01/app/oracle/product/19.0.0/dbhome_1)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = STDBY_DGMGRL)
      (SID_NAME = STDBY)
      (ORACLE_HOME = /u01/app/oracle/product/19.0.0/dbhome_1)
      (ENVS="TNS_ADMIN=/u01/app/oracle/product/19.0.0/dbhome_1/network/admin")
    )
  )
So, if either your Primary or Standby (or both) is not managed by ClusterWare or Oracle Restart, you need to have a DGMGRL entry for the Listener!
Categories: DBA Blogs

Video on OCR and OLR commands in RAC GI/ClusterWare

Fri, 2024-07-19 21:14

 Last week I published a new video on OCR and OLR commands.

ocrcheck  :  Lists the locations of the OCR and checks for corruption (run as root to check for logical corruption as well)

ocrconfig -add DG Name (e.g. ocrconfig -add +DATA)   :  Adds a new copy of the OCR in the stated ASM DG

ocrconfig -delete DG Name  : Deletes a copy of the OCR from the ASM DG 


cat /etc/oracle/olr.loc :  Shows the location of the OLR

ocrcheck -local : Checks the OLR


ocrconfig -showbackup  :  Shows the default location of OCR backups

ocrconfig -manualbackup  : Creates a manual backup of the OCR

(use asmcmd to copy the backup out from ASM to Filesystem)
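For example (the Disk Group, cluster name, backup file name and target directory here are all hypothetical -- use the actual path reported by ocrconfig -showbackup):


asmcmd cp +DATA/my-cluster/OCRBACKUP/backup_20240719_211400.ocr /u01/backup/


asmcmd cp can copy files in both directions between an ASM Disk Group and the local filesystem.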


ocrconfig -local -showbackuploc : Shows the location of OLR backups

ocrconfig -local -manualbackup :  Creates a manual backup of the OLR

ocrconfig -local -export  : Creates an Export backup of the OLR



Categories: DBA Blogs

Video on the Maximum Availability Protection Mode in Oracle DataGuard (with RAC)

Tue, 2024-06-25 09:17

 I've published a new video demonstrating Maximum Availability Protection Mode in Oracle DataGuard.


Categories: DBA Blogs

Testing updated ORA-942 Error Message in 23ai

Sun, 2024-06-02 04:09

Oracle 23ai now has a long-requested fix to the ORA-942 "table or view does not exist" error message. Previously, the error message would not identify *which* table or view was missing -- this was particularly troublesome with very long SQL statements referencing multiple tables and views.


A demo of the fixed behaviour in 23ai (followed later by the same code in 19c, 19.22):



SQL> select banner from v$version;

BANNER
---------------------------------------------------------------------------------------------------------------------------------
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free

SQL>
SQL> -- create the two tables
SQL> drop table employees;
drop table employees
           *
ERROR at line 1:
ORA-00942: table or view "HEMANT"."EMPLOYEES" does not exist
Help: https://docs.oracle.com/error-help/db/ora-00942/


SQL> drop table departments;

Table dropped.

SQL> create table departments (dept_id number(12) primary key, dept_name varchar2(30));

Table created.

SQL> create table employees (emp_id number(12) primary key, emp_name varchar2(54), dept_id number(12));

Table created.

SQL>
SQL> -- insert data
SQL> insert into departments values (1,'Human Resources');

1 row created.

SQL> insert into employees values (1,'Somebody',1);

1 row created.

SQL> commit;

Commit complete.

SQL>
SQL> -- test query
SQL> select d.dept_id, d.dept_name, e.emp_name
  2  from departments d, employees e
  3  where e.dept_id = d.dept_id
  4  order by d.dept_id,e.emp_name
  5  /

   DEPT_ID DEPT_NAME                      EMP_NAME
---------- ------------------------------ ------------------------------------------------------
         1 Human Resources                Somebody

SQL>
SQL> -- drop a table
SQL> drop table employees;

Table dropped.

SQL>
SQL> -- RETEST query
SQL> select d.dept_id, d.dept_name, e.emp_name
  2  from departments d, employees e
  3  where e.dept_id = d.dept_id
  4  order by d.dept_id,e.emp_name
  5  /
from departments d, employees e
                    *
ERROR at line 2:
ORA-00942: table or view "HEMANT"."EMPLOYEES" does not exist
Help: https://docs.oracle.com/error-help/db/ora-00942/


SQL>


So, now the error message text for ORA-942 includes the name of the missing <schema>.<table_or_view>.



Here is the same code in 19c (19.22) 


SQL> --select banner from v$version;
SQL> -- use BANNER_FULL in 19c to get RU level -- 19.22
SQL> select banner_full from v$version;

BANNER_FULL
------------------------------------------------------------------------------------------------------------------------------------
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.22.0.0.0


SQL>
SQL> -- create the two tables
SQL> drop table employees;
drop table employees
           *
ERROR at line 1:
ORA-00942: table or view does not exist


SQL> drop table departments;

Table dropped.

SQL> create table departments (dept_id number(12) primary key, dept_name varchar2(30));

Table created.

SQL> create table employees (emp_id number(12) primary key, emp_name varchar2(54), dept_id number(12));

Table created.

SQL>
SQL> -- insert data
SQL> insert into departments values (1,'Human Resources');

1 row created.

SQL> insert into employees values (1,'Somebody',1);

1 row created.

SQL> commit;

Commit complete.

SQL>
SQL> -- test query
SQL> select d.dept_id, d.dept_name, e.emp_name
  2  from departments d, employees e
  3  where e.dept_id = d.dept_id
  4  order by d.dept_id,e.emp_name
  5  /

   DEPT_ID DEPT_NAME                      EMP_NAME
---------- ------------------------------ ------------------------------------------------------
         1 Human Resources                Somebody

SQL>
SQL> -- drop a table
SQL> drop table employees;

Table dropped.

SQL>
SQL> -- RETEST query
SQL> select d.dept_id, d.dept_name, e.emp_name
  2  from departments d, employees e
  3  where e.dept_id = d.dept_id
  4  order by d.dept_id,e.emp_name
  5  /
from departments d, employees e
                    *
ERROR at line 2:
ORA-00942: table or view does not exist


SQL>


So, up to 19c, ORA-942 would simply say "table or view does not exist". In my example, using sqlplus and a very short FROM list, the "*" does indicate where the error is -- but with other tools / clients, and with long FROM clauses or sub-queries having their own FROM clauses, it may well be difficult to identify the missing table or view unless you trace the execution or run a test query against each table or view.
Categories: DBA Blogs

Testing Open a PDB as a Hybrid Read Only PDB in 23ai

Sun, 2024-05-26 09:55

Oracle 23ai now allows the DBA to open a PDB in Hybrid Read Only mode. This mode allows Common Users (e.g. SYS or SYSTEM or others defined as Common Users from the Root CDB) to access the PDB in Read-Write mode, while local users (i.e. non-Common Users) can access the PDB only in Read-Only mode.

This facilitates live maintenance (e.g. patching or changes to the database / schema) being executed by a DBA or Common User while "normal" local users (e.g. Application Accounts) can still query the database.

This is a quick demo :


SQL> -- open the PDB as "normal" Read Write
SQL> connect / as sysdba
Connected.
SQL> -- Version 23ai Free Edition
SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free

SQL>
SQL> alter pluggable database freepdb1 close;

Pluggable database altered.

SQL> alter pluggable database freepdb1 open;

Pluggable database altered.

SQL>
SQL> -- first demo a normal user in the PDB
SQL> connect hemant/hemant@freepdb1
Connected.
SQL> create table x_test (id number , data varchar2(15));

Table created.

SQL> insert into x_test values (1,'First');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from hemant.x_test order by 1;

        ID DATA
---------- ---------------
         1 First

SQL>
SQL>
SQL> -- now close and open the PDB in Hybrid Read Only mode
SQL> connect / as sysdba
Connected.
SQL> alter pluggable database freepdb1 close;

Pluggable database altered.

SQL> alter pluggable database freepdb1 open hybrid read only ;

Pluggable database altered.

SQL> -- test that SYSTEM (a Common User) can manipulate data -- e.g. INSERT
SQL> connect system/manager@freepdb1
Connected.
SQL> insert into hemant.x_test values(2,'System');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from hemant.x_test order by 1;

        ID DATA
---------- ---------------
         1 First
         2 System

SQL> -- even grant DBA to hemant
SQL> grant dba to hemant;

Grant succeeded.

SQL>
SQL> -- test hemant a non-common user
SQL> -- see if the user can execute INSERT and SELECT
SQL> connect hemant/hemant@freepdb1
Connected.
SQL> insert into hemant.x_test values(3,'Third');
insert into hemant.x_test values(3,'Third')
                   *
ERROR at line 1:
ORA-16000: Attempting to modify database or pluggable database that is open for
read-only access.
Help: https://docs.oracle.com/error-help/db/ora-16000/


SQL> commit;

Commit complete.

SQL> select * from hemant.x_test order by 1;

        ID DATA
---------- ---------------
         1 First
         2 System

SQL> -- so SELECT works, but not INSERT
SQL>
SQL>
SQL> -- reopen PDB as normal "Read Write"
SQL> connect / as sysdba
Connected.
SQL> alter pluggable database freepdb1 close;

Pluggable database altered.

SQL> alter pluggable database freepdb1 open;

Pluggable database altered.

SQL> connect hemant/hemant@freepdb1
Connected.
SQL> insert into hemant.x_test values(4,'Fourth');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from hemant.x_test order by 1;

        ID DATA
---------- ---------------
         1 First
         2 System
         4 Fourth

SQL>


Thus, when the PDB was first opened in "normal" mode (i.e. the default OPEN mode, OPEN READ WRITE), the local user "HEMANT" could execute DDL and DML (create the table and insert a row).

When it was reopened in Hybrid Read Only mode, that same user could not make changes (the insert of the row with ID=3 failed with ORA-16000) but could still query the data, even though the user had been granted "DBA".  However, the Common User "SYSTEM" was allowed to insert the row with ID=2, DATA='System'.

Finally, after reopening the PDB in "normal" OPEN READ WRITE mode, the user "HEMANT" could again insert a row (ID=4).
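
Putting the commands together, a maintenance window using this feature might look like the sketch below (based entirely on the commands in the demo above; FREEPDB1 and the account names are from my environment):

-- as a Common User with the privileges to alter the PDB
alter pluggable database freepdb1 close;
alter pluggable database freepdb1 open hybrid read only;

-- run the maintenance (DDL / DML) as a Common User, e.g. SYSTEM,
-- while local users continue to run their queries in Read-Only mode

-- when done, return the PDB to normal Read Write mode
alter pluggable database freepdb1 close;
alter pluggable database freepdb1 open;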
Categories: DBA Blogs

Testing RENAME LOB (Segment) in 23ai

Tue, 2024-05-21 09:46
Another new feature of 23ai is the ability to rename a LOB (Segment) in-place without having to use the MOVE clause.

A quick demo :


SQL> -- Version 23ai Free Edition
SQL> select banner from v$version;

BANNER
---------------------------------------------------------------------------------------------------------------------------------
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free

SQL>
SQL>
SQL> DROP TABLE my_lob_objects purge;

Table dropped.

SQL>
SQL> -- create the table with a LOB, column name "c",  lob segment name also "c"
SQL> CREATE TABLE my_lob_objects (object_id NUMBER primary key, c CLOB)
  2        lob (c) STORE AS SECUREFILE c
  3        ( TABLESPACE users
  4          DISABLE STORAGE IN ROW
  5          NOCACHE LOGGING
  6          RETENTION AUTO
  7          COMPRESS
  8        );

Table created.

SQL>
SQL> -- query the data dictionary
SQL> select table_name, column_name, segment_name, tablespace_name from user_lobs;

TABLE_NAME       COLUMN_NAME      SEGMENT_NAME         TABLESPACE_NAME
---------------- ---------------- -------------------- ----------------
MY_LOB_OBJECTS   C                C                    USERS

SQL>
SQL> -- insert three rows
SQL> insert into my_lob_objects values (1, dbms_random.string('X',100));

1 row created.

SQL> insert into my_lob_objects values (2, dbms_random.string('X',100));

1 row created.

SQL> insert into my_lob_objects values (3, dbms_random.string('X',100));

1 row created.

SQL>
SQL> -- verify the column name when querying the table
SQL> select * from my_lob_objects;

 OBJECT_ID C
---------- --------------------------------------------------------------------------------
         1 IBGOGKA9QKK56O746IJL3C56ZK9LEO0G1W4LWBN11T8EWCFTTLUW9TPIVQAU8BPSGPQ2ZV57BS0ZPK0S
         2 7K04DVVYDQB1URIQ1OQ2458M8ZOURHWW50XIZDMVGAZH6XVN2KKN4PIGKPY5CSVIQ9KU45LHZPJB33AA
         3 2G5194Z7TSR3XG0K698G587AOZOJ8VN6KFCTCH3074TNCOWCSMOPRJLRGTLIZMDD73XAY4KDD14IW4MG

SQL>
SQL> -- now rename the column
SQL> alter table my_lob_objects rename column c to clob_col;

Table altered.

SQL>
SQL> -- query the data dictionary
SQL> select table_name, column_name, segment_name, tablespace_name from user_lobs;

TABLE_NAME       COLUMN_NAME      SEGMENT_NAME         TABLESPACE_NAME
---------------- ---------------- -------------------- ----------------
MY_LOB_OBJECTS   CLOB_COL         C                    USERS

SQL>
SQL> -- now rename the lob segment
SQL> alter table my_lob_objects rename lob(clob_col) c to my_lob_objects_clob;

Table altered.

SQL>
SQL> -- query the data dictionary
SQL> select table_name, column_name, segment_name, tablespace_name from user_lobs;

TABLE_NAME       COLUMN_NAME      SEGMENT_NAME         TABLESPACE_NAME
---------------- ---------------- -------------------- ----------------
MY_LOB_OBJECTS   CLOB_COL         MY_LOB_OBJECTS_CLOB  USERS

SQL>
SQL> -- verify the column name when querying the table
SQL> select * from my_lob_objects;

 OBJECT_ID CLOB_COL
---------- --------------------------------------------------------------------------------
         1 IBGOGKA9QKK56O746IJL3C56ZK9LEO0G1W4LWBN11T8EWCFTTLUW9TPIVQAU8BPSGPQ2ZV57BS0ZPK0S
         2 7K04DVVYDQB1URIQ1OQ2458M8ZOURHWW50XIZDMVGAZH6XVN2KKN4PIGKPY5CSVIQ9KU45LHZPJB33AA
         3 2G5194Z7TSR3XG0K698G587AOZOJ8VN6KFCTCH3074TNCOWCSMOPRJLRGTLIZMDD73XAY4KDD14IW4MG

SQL>
SQL> -- identify the segment
SQL> select tablespace_name, segment_name, segment_type, bytes/1024 Size_KB
  2  from user_segments
  3  where segment_name = 'MY_LOB_OBJECTS_CLOB'
  4  /

TABLESPACE_NAME  SEGMENT_NAME         SEGMENT_TYPE         SIZE_KB
---------------- -------------------- ------------------ ---------
USERS            MY_LOB_OBJECTS_CLOB  LOBSEGMENT              2304

SQL>



First, I create a Table where the Column and the LOB (Segment) are both called "C".  In recent versions, SECUREFILE is the default and is the recommended storage for LOBs (e.g. for its COMPRESS, DEDUPLICATION and ENCRYPTION advantages).

Then I insert 3 rows.

I then rename the column "C" to "CLOB_COL".

Next, I rename the LOB (Segment) to "MY_LOB_OBJECTS_CLOB".  I include the Table Name in the new Segment Name because the LOB segment is an independent segment that I might later look up in USER_SEGMENTS, where the Table Name is not available.  This RENAME LOB clause is new in 23ai and does not require the use of MOVE LOB; for comparison, the older MOVE-based approach is sketched below.
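
Before 23ai, the LOB segment name could only be changed as a side-effect of moving the LOB -- something like the following sketch (my illustration, not part of this demo).  Note that a MOVE rebuilds the segment, so it is a much heavier operation than the new in-place RENAME:

alter table my_lob_objects
  move lob (clob_col)
  store as securefile my_lob_objects_clob (tablespace users)
/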


I then verify the new Segment Name for the LOB as well.

Yes, the 2,304KB "size" seems excessive for three short rows, but this initial allocation will make sense (along with the COMPRESS attribute) when the LOB grows much larger as new rows with long Character Strings are inserted.
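
As an aside (a sketch of mine, not in the original output), the SECUREFILE attributes of the renamed segment can also be confirmed from USER_LOBS:

select segment_name, securefile, compression, deduplication, encrypt
from user_lobs
where table_name = 'MY_LOB_OBJECTS'
/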




Categories: DBA Blogs

Testing DEFAULT ON NULL FOR UPDATE in 23ai

Thu, 2024-05-09 10:25

 Testing a new feature in 23ai that allows you to define a DEFAULT value for a column if a user / front-end / application sends a NULL:


[oracle@oel9 ~]$ sqlplus hemant/hemant@freepdb1

SQL*Plus: Release 23.0.0.0.0 - Production on Thu May 9 14:54:25 2024
Version 23.4.0.24.05

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.4.0.24.05

SQL> create table my_new_employees(
  2  employee_id number(12) primary key,
  3  employee_name varchar2(48),
  4  department_id number(12)
  5  )
  6  /

Table created.

SQL>
SQL> insert into my_new_employees
  2  values (1,'Hemant',NULL)
  3  /

1 row created.

SQL>
SQL> select * from my_new_employees;

EMPLOYEE_ID EMPLOYEE_NAME                                    DEPARTMENT_ID
----------- ------------------------------------------------ -------------
          1 Hemant

SQL>
SQL> update my_new_employees
  2  set department_id=100  -- setting a non-NULL value
  3  where employee_id=1
  4  /

1 row updated.

SQL> select * from my_new_employees;

EMPLOYEE_ID EMPLOYEE_NAME                                    DEPARTMENT_ID
----------- ------------------------------------------------ -------------
          1 Hemant                                                     100

SQL>
SQL> alter table my_new_employees
  2  modify (department_id default on null for insert and update 512);

Table altered.

SQL> insert into my_new_employees
  2  values (2,'Larry');    -- I am not specifying a value for DEPARTMENT_ID 
insert into my_new_employees
            *
ERROR at line 1:
ORA-00947: not enough values
Help: https://docs.oracle.com/error-help/db/ora-00947/


SQL> insert into my_new_employees
  2  values(2,'Larry', NULL); -- I explicitly specify NULL for DEPARTMENT_ID

1 row created.

SQL> select * from my_new_employees order by 1;

EMPLOYEE_ID EMPLOYEE_NAME                                    DEPARTMENT_ID
----------- ------------------------------------------------ -------------
          1 Hemant                                                     100
          2 Larry                                                      512  -- it got set to 512 ON INSERT

SQL>
SQL> update my_new_employees
  2  set employee_name = 'Hemant Chitale', department_id=NULL -- I explicitly specify NULL for DEPARMTENT_ID
  3  where employee_id=1
  4  /

1 row updated.

SQL> select * from my_new_employees order by 1;

EMPLOYEE_ID EMPLOYEE_NAME                                    DEPARTMENT_ID
----------- ------------------------------------------------ -------------
          1 Hemant Chitale                                             512  -- it got set to 512 ON UPDATE
          2 Larry                                                      512

SQL>
SQL> commit;

Commit complete.

SQL> select * from my_new_employees order by 1;

EMPLOYEE_ID EMPLOYEE_NAME                                    DEPARTMENT_ID
----------- ------------------------------------------------ -------------
          1 Hemant Chitale                                             512
          2 Larry                                                      512

SQL>


So, now the column has a proper DEFAULT value when an UPDATE statement sends a NULL.  This allows the developer to avoid having to write a Front-End Value Check or a Database Table Trigger to convert an incoming NULL to a DEFAULT value.
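
Two related points, as a hedged sketch of my own (not part of the test above): as I understand it, the same clause can also be declared at CREATE TABLE time, and an INSERT that simply omits the column (by using an explicit column list) should also pick up the default:

-- declaring the default at table creation (assumes the table does not already exist)
create table my_new_employees(
 employee_id   number(12) primary key,
 employee_name varchar2(48),
 department_id number(12) default on null for insert and update 512
)
/

-- an INSERT with a column list that omits DEPARTMENT_ID
insert into my_new_employees (employee_id, employee_name)
values (3, 'Scott')
/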

Categories: DBA Blogs

Testing DB_FLASHBACK_LOG_DEST in 23ai

Mon, 2024-05-06 09:38

 Pre-23ai, Database Flashback Logs were always created in the DB_RECOVERY_FILE_DEST location (i.e. the well-known "Fast Recovery Area", also known as the "FRA").

However, these would share space with ArchiveLogs (if you've configured ArchiveLogs to the same location) and RMAN Backups (if you aren't specifically writing RMAN Backups to a different location).

23ai introduced the DB_FLASHBACK_LOG_DEST (and corresponding DB_FLASHBACK_LOG_DEST_SIZE) parameter to allow you to separate your Database Flashback Logs to a dedicated location.

The 23ai New Features documentation has this to say:

In previous releases, you could store flashback database logs only in the fast recovery area. Now you can optionally designate a separate location for flashback logging. For example, if you have write-intensive database workloads, then flashback database logging can slow down the database if the fast recovery area is not fast enough. In this scenario, you can now choose to write the flashback logs to faster disks. Using a separate destination also eliminates the manual administration to manage the free space in the fast recovery area.

Managing flashback database logs outside the fast recovery area lowers the operational costs related to space management and guarantees the best performance for workloads that are typically impacted by flashback logging on traditional storage.


And it provides a link to the documentation on the parameter.


You might think that DB_FLASHBACK_LOG_DEST is now completely independent of DB_RECOVERY_FILE_DEST.  As the test below shows, it is not.

Here is my test run, where I configured DB_FLASHBACK_LOG_DEST without configuring DB_RECOVERY_FILE_DEST:




sh-4.4$ cd /opt/oracle
sh-4.4$ mkdir FBL
sh-4.4$ mkdir FRA
sh-4.4$ sqlplus / as sysdba

SQL*Plus: Release 23.0.0.0.0 - Production on Sun May 5 10:26:26 2024
Version 23.4.0.24.05

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.4.0.24.05

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 1603726344 bytes
Fixed Size                  5360648 bytes
Variable Size             402653184 bytes
Database Buffers         1191182336 bytes
Redo Buffers                4530176 bytes
Database mounted.
SQL> show parameter spfile;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      /opt/oracle/product/23ai/dbhom
                                                 eFree/dbs/spfileFREE.ora
SQL>
SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> show parameter db_flashback

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_flashback_log_dest                string
db_flashback_log_dest_size           big integer 0
SQL> alter system set db_flashback_log_dest_size=10G;

System altered.

SQL> alter system  set db_flashback_log_dest='/opt/oracle/FBL';

System altered.

SQL> create restore point MY_FIRST_RP ;

Restore point created.

SQL> alter system archive log current;

System altered.

SQL>
SQL> create table x as select * from cdb_objects;

Table created.

SQL> insert into x select * from x;

141420 rows created.

SQL> delete x;

282840 rows deleted.

SQL> commit;

Commit complete.

SQL> alter system archive log current;

System altered.

SQL>
SQL> alter database flashback on;  -- only here I enable Flashback

Database altered.

==============================================
alert log messages :
2024-05-05T10:38:35.262274+00:00
alter database flashback on
2024-05-05T10:38:35.423698+00:00
Allocating 8388608 bytes in shared pool for flashback generation buffer.
Allocated 8388608 bytes in shared pool for flashback generation buffer
Flashback Database Enabled at SCN 3124894
===============================================

SQL> create restore point MY_FIRST_RP;  -- testing if I can create another RP with the same name
create restore point MY_FIRST_RP
*
ERROR at line 1:
ORA-38778: Restore point 'MY_FIRST_RP' already exists.
Help: https://docs.oracle.com/error-help/db/ora-38778/


SQL> drop restore point MY_FIRST_RP;

Restore point dropped.

SQL> create restore point MY_FIRST_RP;

Restore point created.

SQL> drop table x;

Table dropped.

SQL> create table x as select * from cdb_objects;

Table created.

SQL>
SQL> alter system archive log current;

System altered.

SQL> delete x;

141420 rows deleted.

SQL> insert into x select * from cdb_objects;

141421 rows created.

SQL> commit;

Commit complete.

SQL> alter system archive log current;

System altered.

SQL>
SQL> select substr(name,1,32), scn, time from v$restore_point;  -- identify the RP that has been created

SUBSTR(NAME,1,32)
--------------------------------------------------------------------------------------------------------------------------------
       SCN TIME
---------- ---------------------------------------------------------------------------
MY_FIRST_RP
   3124955 05-MAY-24 10.39.30.000000000 AM


SQL> select * from v$flashback_database_log;  -- identify the FBDB Logs Size

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE     CON_ID
-------------------- --------- ---------------- -------------- ------------------------ ----------
             3124893 05-MAY-24             1440      419430400                        0          0

SQL> select * from v$flashback_log_dest; -- identify the FB Log Dest (why isn't the view named V$FLASHBACK_DATABASE_LOG_DEST ?)

NAME
------------------------------------------------------------------------------------------------------------------------------------
SPACE_LIMIT SPACE_USED NUMBER_OF_FILES     CON_ID
----------- ---------- --------------- ----------
/opt/oracle/FBL
 1.0737E+10  419430400               2          0


SQL>
SQL> !sh
sh-4.4$ cd /opt/oracle/FBL
sh-4.4$ du -sh *
401M    FREE
sh-4.4$ cd FREE
sh-4.4$ ls
flashback
sh-4.4$ cd flashback
sh-4.4$ ls -l
total 409620
-rw-r----- 1 oracle oinstall 209723392 May  5 10:41 o1_mf_m3grfc8t_.flb
-rw-r----- 1 oracle oinstall 209723392 May  5 10:38 o1_mf_m3grfg1v_.flb
sh-4.4$
sh-4.4$ cd $ORACLE_HOME/dbs
sh-4.4$ ls -l arch1*
-rw-r----- 1 oracle oinstall  98164736 May  5 10:31 arch1_2_1167168121.dbf
-rw-r----- 1 oracle oinstall 106480640 May  5 10:33 arch1_3_1167168121.dbf
-rw-r----- 1 oracle oinstall  37506048 May  5 10:40 arch1_4_1167168121.dbf
-rw-r----- 1 oracle oinstall  52515840 May  5 10:40 arch1_5_1167168121.dbf
sh-4.4$
sh-4.4$ exit
exit

SQL> select count(*) from x;

  COUNT(*)
----------
    141421

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1603726344 bytes
Fixed Size                  5360648 bytes
Variable Size             419430400 bytes
Database Buffers         1174405120 bytes
Redo Buffers                4530176 bytes
Database mounted.
SQL> flashback database to restore point MY_FIRST_RP;   -- try to Flashback the Database
flashback database to restore point MY_FIRST_RP
*
ERROR at line 1:
ORA-38760: This database instance failed to turn on flashback database
Help: https://docs.oracle.com/error-help/db/ora-38760/


============================================
alert log messages :
2024-05-05T10:45:28.380285+00:00
Successful mount of redo thread 1, with mount id 1440201864
2024-05-05T10:45:28.380506+00:00
Allocating 8388608 bytes in shared pool for flashback generation buffer.
Allocated 5807328 bytes in shared pool for flashback generation buffer
RVWR could not begin generation of flashback log data because
DB_RECOVERY_FILE_DEST is not set.
2024-05-05T10:45:28.392865+00:00
Errors in file /opt/oracle/diag/rdbms/free/FREE/trace/FREE_rvwr_2343.trc:
ORA-38776: cannot begin flashback generation - recovery area is disabled
2024-05-05T10:45:28.392899+00:00
WARNING: Cannot open the flashback thread for this instance due to the above error.
WARNING: Flashback thread open failed - to resolve this, either correct the reported error or turn off database flashbac
k.
2024-05-05T10:45:28.393060+00:00
Database mounted in Exclusive Mode
Lost write protection mode set to "auto"
Completed: ALTER DATABASE   MOUNT
2024-05-05T10:46:04.458087+00:00
flashback database to restore point MY_FIRST_RP
ORA-38760 signalled during: flashback database to restore point MY_FIRST_RP...
2024-05-05T10:50:43.887137+00:00
==============================================


Explanation of the Error :
===========================
38776, 00000, "cannot begin flashback generation - recovery area is disabled"
// *Cause: During a database mount, the RVWR process discovered that the
//         recovery area was disabled.  DB_RECOVERY_FILE_DEST must have
//         been set null or removed from the INIT.ORA file while the database
//         was unmounted.
// *Action: Flashback database requires the recovery area to be enabled.
//          Either enable the recovery area by setting the
//          DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE initialization
//          parameters, or turn off flashback database with the
//          ALTER DATABASE FLASHBACK OFF command.



So, Oracle 
1.  allows me to create a Restore Point
2.  generates Flashback Log
3.  confirms that they exist
BUT DOES NOT ALLOW ME TO FLASHBACK THE DATABASE.

DB_RECOVERY_FILE_DEST is still mandatory (even if you have ArchiveLogs being written elsewhere -- my ArchiveLogs were going to the default location under $ORACLE_HOME/dbs).

If you look at the documentation (column "Required" in Table 5-4), it does say that DB_RECOVERY_FILE_DEST is a required parameter -- although it doesn't explicitly say that it is required for the FLASHBACK LOGS.  Also, my database was happy to generate ArchiveLogs in another, default, location ($ORACLE_HOME/dbs).

The CREATE RESTORE POINT command didn't give me any warning that the Restore Point would not be usable -- I would treat this as a Bug.
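
The (hedged) workaround, which I have not re-run end-to-end here, would be to also configure the Recovery Area before relying on Flashback -- for example, using the /opt/oracle/FRA directory I had created at the start:

alter system set db_recovery_file_dest_size=20G;
alter system set db_recovery_file_dest='/opt/oracle/FRA';
-- with DB_FLASHBACK_LOG_DEST still set, the Flashback Logs themselves
-- should continue to be written to /opt/oracle/FBL;
-- the database can then be re-opened (or the FLASHBACK DATABASE re-attempted)
-- so that RVWR can start flashback generation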

Categories: DBA Blogs

Video on monitoring (a single instance) DataGuard Standby for a RAC (2-node) Cluster

Wed, 2024-05-01 04:03

 I've just uploaded a new video on monitoring DataGuard where the Standby is a Single Instance environment for a 2-node RAC Cluster.


The scripts used in the demo are in this ZIP (script files with extension TXT).
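
As a quick illustration of the sort of check covered (this is a generic query of mine, not necessarily one of the scripts in the ZIP):

-- run on the Standby to see transport and apply lag
select name, value, time_computed
from v$dataguard_stats
where name in ('transport lag','apply lag')
/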

Categories: DBA Blogs

Video on Client Side Failover in RAC

Thu, 2024-04-11 01:54

 I've posted a new video demonstrating Client Side Failover as defined by the tnsnames.ora file.
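
For context, a typical client-side failover entry in tnsnames.ora looks something like the sketch below (the host names and service name are placeholders, not taken from the video):

MYRACDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (FAILOVER = ON)
      (LOAD_BALANCE = OFF)
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = my_service)
    )
  )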





Categories: DBA Blogs

Video on Host Names, SCAN and IP Addresses

Sat, 2024-03-30 04:31

 I've posted a new Video on Host Names, SCAN and Virtual IPs in RAC (using a 2-node 19c Cluster).



Categories: DBA Blogs
