Feed aggregator
ETCH – A quick introduction to document templating
After my last post, I thought I’d give a quick intro to what ETCH is and what it allows you to do.
In a nutshell, ETCH is a service (i.e. a set of REST endpoints) that lets you take some JSON data and a template (e.g. Microsoft Word, Excel, PowerPoint, Markdown etc.) and processes the template, looking for any tags that it can replace with the JSON data.
Of course that’s a very (very!) simplified overview of what it is, since there’s much more functionality (e.g. logic like conditionals, formatting etc), but we’ll come to all that later.
A simple Microsoft Word (DOCX) Template Example
First of all, here's a simple JSON file we'll use for our data
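A minimal data file consistent with the tags used later in this post ({{customer.name}}, {{order.id}} and an items array with name and price) might look like this; the values themselves are just illustrative:
{
  "customer": { "name": "Acme Corp" },
  "order": { "id": 1001 },
  "items": [
    { "name": "Widget", "price": 19.99 },
    { "name": "Gadget", "price": 34.50 },
    { "name": "Sprocket", "price": 7.25 }
  ]
}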

Obviously in a real case you wouldn’t use a static data file (unless that data was dynamically updated by a batch job or something like that), but it could be data from a database (we have Oracle APEX Plugins to make that easy for you!), or you might reference JSON from a URL, or reference some other dynamic data in one of the many ways we support (more on that in a future post!).
OK, so now we need a simple Microsoft Word (DOCX) template where we can reference this data using the ETCH tag notation, so for example

We can use standard Word formatting for the static text. On the last line, we have used the notation {{customer.name}} to reference the nested JSON data that we want to insert.
Now we want to process this with ETCH. I’m going to use a curl command. It will send the JSON data and the template to the ETCH server. I’ve set a couple of environment variables for my ETCH API key and the URL for the ETCH server.
As mentioned before, this is just one simple way to use ETCH, there are Oracle APEX Plugins you can use if you use Oracle APEX (which I highly recommend) as well as a number of client libraries for different languages (again more on that later!).

If you're not familiar with curl this might look complicated, but really you could copy/paste this command yourself; the only things that really change are the name of the DOCX template (simple-template.docx) and the name of the JSON data file (data.json).
Remember, I'm just using curl here as a way to execute an HTTP POST to the ETCH server, sending the template and JSON data; there are many other ways to achieve this.
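For illustration, the request looks roughly like this. The variable names ETCH_URL and ETCH_API_KEY, the /process path and the template/data form-field names are placeholders for this sketch, not the documented ETCH API:
# sends the template and data to the ETCH server, writes the result to output.docx
# and prints the HTTP response code
curl -s -o output.docx -w "%{http_code}\n" \
  -X POST "$ETCH_URL/process" \
  -H "Authorization: Bearer $ETCH_API_KEY" \
  -F "template=@simple-template.docx" \
  -F "data=@data.json"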
You can see in the screenshot that we get an HTTP 200 response code, and if we open up the output.docx file that was created for us, you should see (no real surprise):

As expected, the {{customer.name}} tag was replaced with the corresponding value from the JSON data.
Ok, so let’s step it up slightly, modifying our template file to this –

So there are a few things worth pointing out here. Notice how we've applied formatting to {{customer.name}} and {{order.id}}: we've made them bold, a larger font size and blue.
Also, notice we have used some new syntax to output the order items. We've used {{#each items}} to start looping through the items, {{name}} and {{price}} to reference the name and price of each item as we loop, and {{/each}} to end the loop, which is a bit like a closing HTML tag. Since we have put the {{#each}} and {{/each}} on a single row of our table, ETCH will generate a row for every item in the items array.
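In the template, the table row carrying the loop might look something like this (the layout is illustrative; in Word the tags simply sit inside the table cells):
{{#each items}} {{name}}    {{price}} {{/each}}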
Running our curl command again will produce this output

Hopefully you'll agree that's pretty simple and intuitive. But notice one thing: since our JSON data didn't have a currency symbol for the price values, they are just output as-is. It would be nice to fix that, but without having to change our data.
With ETCH we put in a lot of helpers, formatters and control statements that help you to output exactly what you want, without having to change your data.
So for example, let’s change our template to this

We’ve made a number of changes here –
- Right aligned the price column in the table so that the numbers align better
- Changed {{price}} to {{price | currency}}; this is a currency formatter, so ETCH will format the value as a currency
- Used {{sum items.price}} to calculate the total price of the items in the array and also formatted it as a currency
- Used the {{#unless @last}} notation to determine whether we are on the last item in the array, since we want to output the item names comma-separated but obviously don't want a trailing comma after the last item (see the sketch below)
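Put together, the relevant tags in the template might read as follows; treating the total as {{sum items.price | currency}} is an assumption based on the description above:
Items ordered: {{#each items}}{{name}}{{#unless @last}}, {{/unless}}{{/each}}
Price column:  {{price | currency}}
Total:         {{sum items.price | currency}}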
Using formatters (such as currency) and logic controls such as {{#unless @last}}, we can start to produce extremely powerful (and dynamic) templating output. So now when we process the template with ETCH we should see something like

(Note that the {{price | currency}} formatter has extra options, such as {{price | currency:£}} or {{price | currency:EUR}}, if we want to specify the currency symbol; here it is picking up my current default of US Dollars.)
This is just scratching the surface of what ETCH is capable of. Let's amend our template again to show a chart of the price of the items ordered.
If we insert a chart into the template, we'll get a default chart like this

If we right-click on the chart in the template and choose 'Edit Data in Excel', we can use the same {{#each items}} loop we used previously, so the chart data is driven by our JSON data

and when we process it in ETCH the output should look like this

Now we get a chart in our output document based on the data in our JSON file. You can control what type of chart it is using the usual Word chart options, so you can format and label it however you like; you have full control.
We’ve barely scratched the surface of what ETCH can achieve. I look forward to showing you more in the next post.
Securing MCP Servers for Enterprise Use: Beyond HTTPS Protocol
Model Context Protocol (MCP) servers are transforming how enterprises integrate AI with their business applications, but this powerful connectivity creates new security challenges that go far beyond basic HTTPS encryption. As MCP servers grant AI systems access to sensitive business data across multiple applications—from CRM systems to databases—they essentially become gateways to your entire digital infrastructure. This comprehensive guide explores the multi-layered security approach needed for enterprise MCP deployments, covering authentication strategies, network architecture, data protection, and monitoring capabilities that ensure your AI integration remains both powerful and secure.
The post Securing MCP Servers for Enterprise Use: Beyond HTTPS Protocol appeared first on DBASolved.
Why I’m Building AI Agents for Oracle GoldenGate (And You Should Too)
After implementing Oracle GoldenGate in more environments than I can count, I’ve noticed something: every implementation follows the same pattern. […]
The post Why I’m Building AI Agents for Oracle GoldenGate (And You Should Too) appeared first on DBASolved.
ZDM 21.5 Physical Online Migration failing during TEMP tablespace encryption
ZDM 21.5 brings some new functions, like automatically encrypting the TEMP, SYSTEM, SYSAUX and UNDO tablespaces. There is no need to do it manually after the ZDM migration any more; ZDM takes care of it on its own. One little problem: this introduces a new bug, because the temporary database needs to be restarted in order to drop the previous TEMP tablespace. Let's see how to resolve the problem and move forward with the ZDM migration.
Problem description
After the switchover is run by ZDM, the TEMP, SYSTEM, SYSAUX and UNDO tablespaces are now encrypted by ZDM during the ZDM_POST_DATABASE_OPEN_TGT step. In the case described here, I was doing a dry run, not the final migration, so I manually ran a failover to the target environment to avoid any impact or downtime on the source database. Thus, after having paused the ZDM migration once the Data Guard configuration was set up (ZDM_CONFIGURE_DG_SRC step), I manually ran the failover and then moved forward with the ZDM migration using the -skip SWITCHOVER option. I have explained this in one of my previous ZDM blog posts if you want more details.
I'm running the same kind of migration I was doing with the 21.4 version: a non-CDB as source, including conversion to a PDB during the ZDM migration.
So, after the failover had been run manually, I resumed the ZDM migration and let it run to the end.
[exauser@zdm-host ~]$ zdmcli resume job -jobid 580 -skip SWITCHOVER
zdm-host.domain.com: Audit ID: 6963
As we can see, the ZDM job failed on the ZDM_POST_DATABASE_OPEN_TGT step.
[exauser@zdm-host ~]$ zdmcli query job -jobid 580 zdm-host.domain.com: Audit ID: 6966 Job ID: 580 User: exauser Client: zdm-host Job Type: "MIGRATE" Scheduled job command: "zdmcli migrate database -sourcesid SRCSID -rsp /home/exauser/migration/zdm_SRCSID_physical_online.rsp -sourcenode dbsource-host -srcauth dbuser -srcarg1 user:oracle -srcarg2 identity_file:/home/exauser/.ssh/id_rsa -targetnode exacc-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/exauser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo -tdekeystorepasswd -tgttdekeystorepasswd -pauseafter ZDM_COPYFILES" Scheduled job execution start time: 2025-06-11T17:09:40+02. Equivalent local time: 2025-06-11 17:09:40 Current status: FAILED Result file path: "/u01/app/oracle/chkbase/scheduled/job-580-2025-06-11-17:10:06.log" Metrics file path: "/u01/app/oracle/chkbase/scheduled/job-580-2025-06-11-17:10:06.json" Job execution start time: 2025-06-11 17:10:07 Job execution end time: 2025-06-12 10:04:40 Job execution elapsed time: 2 hours 52 minutes 48 seconds ZDM_GET_SRC_INFO .............. COMPLETED ZDM_GET_TGT_INFO .............. COMPLETED ZDM_PRECHECKS_SRC ............. COMPLETED ZDM_PRECHECKS_TGT ............. COMPLETED ZDM_SETUP_SRC ................. COMPLETED ZDM_SETUP_TGT ................. COMPLETED ZDM_PREUSERACTIONS ............ COMPLETED ZDM_PREUSERACTIONS_TGT ........ COMPLETED ZDM_VALIDATE_SRC .............. COMPLETED ZDM_VALIDATE_TGT .............. COMPLETED ZDM_DISCOVER_SRC .............. COMPLETED ZDM_COPYFILES ................. COMPLETED ZDM_PREPARE_TGT ............... COMPLETED ZDM_SETUP_TDE_TGT ............. COMPLETED ZDM_RESTORE_TGT ............... COMPLETED ZDM_RECOVER_TGT ............... COMPLETED ZDM_FINALIZE_TGT .............. COMPLETED ZDM_CONFIGURE_DG_SRC .......... COMPLETED ZDM_SWITCHOVER_SRC ............ COMPLETED ZDM_SWITCHOVER_TGT ............ COMPLETED ZDM_POST_DATABASE_OPEN_TGT .... FAILED ZDM_DATAPATCH_TGT ............. PENDING ZDM_NONCDBTOPDB_PRECHECK ...... PENDING ZDM_NONCDBTOPDB_CONVERSION .... PENDING ZDM_POST_MIGRATE_TGT .......... PENDING ZDM_POSTUSERACTIONS ........... PENDING ZDM_POSTUSERACTIONS_TGT ....... PENDING ZDM_CLEANUP_SRC ............... PENDING ZDM_CLEANUP_TGT ............... PENDING [exauser@zdm-host ~]$
The ZDM log file shows that the problem comes from the TEMP tablespace encryption.
[exauser@zdm-host ~]$ tail -f /u01/app/oracle/chkbase/scheduled/job-580-2025-06-11-17:10:06.log #################################################################### zdm-host: 2025-06-12T08:01:48.268Z : Resuming zero downtime migrate operation ... zdm-host: 2025-06-12T08:01:52.847Z : Skipping phase ZDM_PRECHECKS_SRC on resume zdm-host: 2025-06-12T08:01:53.255Z : Skipping phase ZDM_PRECHECKS_TGT on resume zdm-host: 2025-06-12T08:01:53.273Z : Executing phase ZDM_SETUP_SRC zdm-host: 2025-06-12T08:01:53.274Z : Setting up ZDM on the source node dbsource-host ... dbsource-host: 2025-06-12T08:02:23.476Z : TNS aliases successfully setup on the source node source-hostname... dbsource-host: 2025-06-12T08:02:34.101Z : successfully registered ZDM with the database SRCSID zdm-host: 2025-06-12T08:02:34.106Z : Execution of phase ZDM_SETUP_SRC completed zdm-host: 2025-06-12T08:02:34.123Z : Executing phase ZDM_SETUP_TGT zdm-host: 2025-06-12T08:02:34.123Z : Setting up ZDM on the target node exacc-cl01n1 ... exacc-cl01n1: 2025-06-12T08:03:00.739Z : TNS aliases successfully setup on the target node exacc-cl01n1... zdm-host: 2025-06-12T08:03:00.742Z : Execution of phase ZDM_SETUP_TGT completed zdm-host: 2025-06-12T08:03:00.754Z : Skipping phase ZDM_VALIDATE_SRC on resume zdm-host: 2025-06-12T08:03:00.765Z : Skipping phase ZDM_VALIDATE_TGT on resume zdm-host: 2025-06-12T08:03:00.779Z : Skipping phase ZDM_DISCOVER_SRC on resume zdm-host: 2025-06-12T08:03:00.789Z : Skipping phase ZDM_COPYFILES on resume zdm-host: 2025-06-12T08:03:00.802Z : Skipping phase ZDM_PREPARE_TGT on resume zdm-host: 2025-06-12T08:03:00.813Z : Skipping phase ZDM_SETUP_TDE_TGT on resume zdm-host: 2025-06-12T08:03:00.823Z : Skipping phase ZDM_RESTORE_TGT on resume zdm-host: 2025-06-12T08:03:00.833Z : Skipping phase ZDM_RECOVER_TGT on resume zdm-host: 2025-06-12T08:03:00.843Z : Skipping phase ZDM_FINALIZE_TGT on resume zdm-host: 2025-06-12T08:03:00.844Z : resuming job execution from phase "ZDM_CONFIGURE_DG_SRC" zdm-host: 2025-06-12T08:03:00.859Z : Skipping phase ZDM_CONFIGURE_DG_SRC on resume zdm-host: 2025-06-12T08:03:00.880Z : Executing phase ZDM_SWITCHOVER_SRC zdm-host: 2025-06-12T08:03:00.881Z : Switching database null on the source node dbsource-host to standby role ... dbsource-host: 2025-06-12T08:03:11.403Z : Switchover actions in the source environment executed successfully zdm-host: 2025-06-12T08:03:11.407Z : Execution of phase ZDM_SWITCHOVER_SRC completed #################################################################### zdm-host: 2025-06-12T08:03:11.445Z : Executing phase ZDM_SWITCHOVER_TGT zdm-host: 2025-06-12T08:03:11.445Z : Switching database SRCSID_CHZ1 on the target node exacc-cl01n1 to primary role ... exacc-cl01n1: 2025-06-12T08:03:23.137Z : Switchover actions in the target environment executed successfully zdm-host: 2025-06-12T08:03:23.241Z : Execution of phase ZDM_SWITCHOVER_TGT completed #################################################################### zdm-host: 2025-06-12T08:03:23.279Z : Executing phase ZDM_POST_DATABASE_OPEN_TGT zdm-host: 2025-06-12T08:03:23.280Z : Executing post database open actions at the target ... exacc-cl01n1: 2025-06-12T08:03:34.667Z : Restoring pluggable database state ... exacc-cl01n1: 2025-06-12T08:03:35.368Z : Creating SSO for keystore location /u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/TDE/tde/ ... exacc-cl01n1: 2025-06-12T08:03:35.569Z : Restoring RMAN configuration ... exacc-cl01n1: 2025-06-12T08:03:40.374Z : Encrypting unencrypted temp tablespaces... 
####################################################################
zdm-host: 2025-06-12T08:04:40.845Z : Oracle ZDM ONLINE PHYSICAL migration failed
zdm-host: 2025-06-12T08:04:40.845Z : Failed at phase: ZDM_POST_DATABASE_OPEN_TGT
zdm-host: 2025-06-12T08:04:40.846Z : Job duration: 2 minutes and 56 seconds
PRGZ-3727 : Re-creation of temporary tablespace "TEMP" as encrypted for database "PDB001P" failed.
Troubleshooting
Let’s investigate the issue.
The ZDM step log, named zdm_post_database_open_tgt, shows exactly what the issue is and the commands that were expected to run.
oracle@exacc-cl01n1:/u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/log/ [PDB001P1 (CDB$ROOT)] view zdm_post_database_open_tgt_7924.log ... ... ... [jobid-580][2025-06-12T08:03:41Z][mZDM_Queries.pm:655]:[DEBUG] Will be running following sql statements as user: oracle: export ORACLE_HOME=/u02/app/oracle/product/19.0.0.0/dbhome_1, export ORACLE_SID=PDB001P1, /u02/app/oracle/product/19.0.0.0/dbhome_1/bin/sqlplus -L / as sys ****** set pagesize 0 feedback off verify off heading off echo off tab off linesize 32767 trimspool on trimout on wrap off CREATE BIGFILE TEMPORARY TABLESPACE TEMP_ENCRYPTED TEMPFILE ENCRYPTION ENCRYPT ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP_ENCRYPTED DROP TABLESPACE TEMP INCLUDING CONTENTS ALTER TABLESPACE TEMP_ENCRYPTED RENAME TO TEMP [jobid-580][2025-06-12T08:03:41Z][mZDM_Utils.pm:3438]:[DEBUG] run_as_user2: Running /*******@/u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/out/sql/tmpsql_2025-06-12-08-03-41_7924.sql ' [jobid-580][2025-06-12T08:04:41Z][mZDM_Utils.pm:3460]:[DEBUG] Remove /tmp/xE6e9fcUzC [jobid-580][2025-06-12T08:04:41Z][mZDM_Utils.pm:3468]:[DEBUG] /bin/su successfully executed [jobid-580][2025-06-12T08:04:41Z][mZDM_Queries.pm:678]:[DEBUG] Output is : SQL*Plus: Release 19.0.0.0.0 - Production on Thu Jun 12 08:03:41 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 DROP TABLESPACE TEMP INCLUDING CONTENTS * ERROR at line 1: ORA-60100: dropping temporary tablespace with tablespace ID number (tsn) 3 is blocked due to sort segments ALTER TABLESPACE TEMP_ENCRYPTED RENAME TO TEMP * ERROR at line 1: ORA-02154: a tablespace with the name 'TEMP' is found Disconnected from Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 [jobid-580][2025-06-12T08:04:41Z][mZDM_Queries.pm:11791]:[ERROR] SQL Error recreating temp tablespace TEMP as encrypted for database 'PDB001P' [jobid-580][2025-06-12T08:04:41Z][mZDM_Utils.pm:4632]:[ERROR] Reporting error :
The TEMP tablespace cannot be dropped while sort segments are still in use (ORA-60100); the database needs to be restarted.
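Before restarting, one way to confirm that sessions are still holding sort segments in TEMP is to query V$TEMPSEG_USAGE (a quick check, not part of the ZDM workflow):
-- sessions currently holding temporary segments, grouped per tablespace and segment type
select tablespace, segtype, username, count(*) as segments
from   v$tempseg_usage
group  by tablespace, segtype, username;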
Resolution
Let's first create a pfile for the temporary non-CDB database that ZDM created for the migration. As a reminder, this temporary database has the name of the final PDB as its instance name and db_unique_name, and of course the same DB_NAME as our source database, which is mandatory when using primary and standby databases. I created a pfile just in case…
I created a pfile from memory:
oracle@exacc-cl01n1:/u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/log/ [PDB001P1 (CDB$ROOT)] sqh
SQL*Plus: Release 19.0.0.0.0 - Production on Thu Jun 12 10:14:20 2025
Version 19.26.0.0.0
Copyright (c) 1982, 2024, Oracle. All rights reserved.
Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0
SQL> create pfile='/u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/log/init_PDB001P1.ora.bck.before.restart' from memory;
File created.
And a pfile from spfile:
SQL> show parameter spfile
NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
spfile                               string                            /u02/app/oracle/product/19.0.0
                                                                       .0/dbhome_1/dbs/spfileAVQPZ_AP
                                                                       P_001P1.ora
SQL> create pfile='/u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/log/init_PDB001P1.ora.bck.before.restart.from.spfile' from spfile='/u02/app/oracle/product/19.0.0.0/dbhome_1/dbs/spfilePDB001P1.ora';
File created.
SQL>
I restarted the temporary non-CDB
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 8577884400 bytes
Fixed Size                 13693168 bytes
Variable Size            3439329280 bytes
Database Buffers         5100273664 bytes
Redo Buffers               24588288 bytes
Database mounted.
ORA-28374: typed master key not found in wallet
ORA-28374 failure. It would be the same if I started with the pfile, so some parameter was missing from the spfile.
Confirmed: there is no wallet_root instance parameter configured.
SQL> show parameter wallet
NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
ssl_wallet                           string
wallet_root                          string
And confirmed it was not in the pfile.
SQL> !grep -i wallet /u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/log/init_PDB001P1.ora.bck.before.restart
SQL>
We can easily find the directory where ZDM copied the wallet…
oracle@exacc-cl01n1:/u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/log/ [PDB001P1 (CDB$ROOT)] ls -ltrh /u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/TDE/tde total 92K -rw-r----- 1 oracle oinstall 2.5K Jun 27 2024 ewallet_2024062706422567.p12 -rw-r----- 1 oracle oinstall 0 Jun 27 2024 ewallet.p12.lck -rw-r----- 1 oracle oinstall 0 Jun 27 2024 cwallet.sso.lck -rw-r----- 1 oracle oinstall 4.0K Sep 19 2024 ewallet_2024091909561990_pre-refresh-AVQZZ_APP_001T.p12 -rw-r----- 1 oracle oinstall 5.2K Jan 16 10:54 ewallet_2025011609540081_pre-refresh-AVQZZ_APP_001T-20250116.p12 -rw-r----- 1 oracle oinstall 6.4K Feb 27 09:28 ewallet_2025022708284959_pre-refresh-AVQZZ_APP_001T-20250227.p12 -rw-r----- 1 oracle oinstall 7.7K Mar 31 14:10 ewallet_2025033112100549_pre-refresh-AVQZZ_APP_001T-20250331.p12 -rw-r----- 1 oracle oinstall 8.9K Jun 11 13:58 ewallet_2025061111581822_chg_password_to_prod.p12 -rw-r----- 1 oracle oinstall 11K Jun 11 14:00 ewallet.p12 -rw-r----- 1 oracle oinstall 8.9K Jun 11 14:00 ewallet_2025061112003957_pre-dryrun-SRCSID-to-prodAVAPCl.p12 -rw-r----- 1 oracle oinstall 11K Jun 11 14:00 cwallet.sso.back -rw------- 1 oracle asmdba 11K Jun 12 10:03 cwallet.sso oracle@exacc-cl01n1:/u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/log/ [PDB001P1 (CDB$ROOT)]
Let’s configure it!
oracle@exacc-cl01n1:/u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/log/ [PDB001P1 (CDB$ROOT)] sqh
SQL*Plus: Release 19.0.0.0.0 - Production on Thu Jun 12 12:34:45 2025
Version 19.26.0.0.0
Copyright (c) 1982, 2024, Oracle. All rights reserved.
Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0
SQL> alter system set WALLET_ROOT='/u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/TDE' scope=spfile;
System altered.
Let’s restart the database again.
SQL> shutdown immediate
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 8577884400 bytes
Fixed Size                 13693168 bytes
Variable Size            3439329280 bytes
Database Buffers         5100273664 bytes
Redo Buffers               24588288 bytes
Database mounted.
ORA-28365: wallet is not open
The wallet is now recognised but not open.
Confirmed:
SQL> set lines 300
SQL> col WRL_PARAMETER for a30
SQL> col WRL_TYPE for a30
SQL> col WALLET_TYPE for a30
SQL> col status for a40
SQL> select WRL_PARAMETER, WRL_TYPE,WALLET_TYPE, status from V$ENCRYPTION_WALLET;
WRL_PARAMETER                  WRL_TYPE                       WALLET_TYPE                    STATUS
------------------------------ ------------------------------ ------------------------------ ----------------------------------------
                               FILE                           UNKNOWN                        NOT_AVAILABLE
The tde_configuration parameter is not configured either.
SQL> show parameter tde_configuration
NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
tde_configuration                    string
Let’s configure it.
SQL> alter system set tde_configuration='keystore_configuration=FILE' scope=both;
System altered.
Checking again, we can see that the wallet is now open.
SQL> select WRL_PARAMETER, WRL_TYPE,WALLET_TYPE, status from V$ENCRYPTION_WALLET;
WRL_PARAMETER                    WRL_TYPE                       WALLET_TYPE                    STATUS
------------------------------   ------------------------------ ------------------------------ ----------------------------------------
/u02/app/oracle/zdm/zdm_SRCSID_C FILE                           AUTOLOGIN                      OPEN
HZ1_580/zdm/TDE/tde/
And we can open the database now.
SQL> alter database open;
Database altered.
SQL>
And we can complete the remaining commands for the TEMP tablespace encryption.
SQL> DROP TABLESPACE TEMP INCLUDING CONTENTS;
Tablespace dropped.
SQL> ALTER TABLESPACE TEMP_ENCRYPTED RENAME TO TEMP;
Tablespace altered.
SQL>
We can now resume the ZDM job. As the TEMP tablespace is now encrypted, ZDM will move forward with the next actions of the ZDM_POST_DATABASE_OPEN_TGT phase. This is the right approach, because updating the ZDM XML metadata file to mark the phase as successful would have meant skipping some steps and actions.
Resume ZDM job again:
[exauser@zdm-host ~]$ zdmcli resume job -jobid 580 -skip SWITCHOVER
zdm-host.domain.com: Audit ID: 6969
And we can see that the ZDM migration is now completed successfully.
[exauser@zdm-host ~]$ zdmcli query job -jobid 580 zdm-host.domain.com: Audit ID: 6970 Job ID: 580 User: exauser Client: zdm-host Job Type: "MIGRATE" Scheduled job command: "zdmcli migrate database -sourcesid SRCSID -rsp /home/exauser/migration/zdm_SRCSID_physical_online.rsp -sourcenode dbsource-host -srcauth dbuser -srcarg1 user:oracle -srcarg2 identity_file:/home/exauser/.ssh/id_rsa -targetnode exacc-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/exauser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo -tdekeystorepasswd -tgttdekeystorepasswd -pauseafter ZDM_COPYFILES" Scheduled job execution start time: 2025-06-11T17:09:40+02. Equivalent local time: 2025-06-11 17:09:40 Current status: SUCCEEDED Result file path: "/u01/app/oracle/chkbase/scheduled/job-580-2025-06-11-17:10:06.log" Metrics file path: "/u01/app/oracle/chkbase/scheduled/job-580-2025-06-11-17:10:06.json" Job execution start time: 2025-06-11 17:10:07 Job execution end time: 2025-06-12 16:02:54 Job execution elapsed time: 4 hours 37 minutes 44 seconds ZDM_GET_SRC_INFO .............. COMPLETED ZDM_GET_TGT_INFO .............. COMPLETED ZDM_PRECHECKS_SRC ............. COMPLETED ZDM_PRECHECKS_TGT ............. COMPLETED ZDM_SETUP_SRC ................. COMPLETED ZDM_SETUP_TGT ................. COMPLETED ZDM_PREUSERACTIONS ............ COMPLETED ZDM_PREUSERACTIONS_TGT ........ COMPLETED ZDM_VALIDATE_SRC .............. COMPLETED ZDM_VALIDATE_TGT .............. COMPLETED ZDM_DISCOVER_SRC .............. COMPLETED ZDM_COPYFILES ................. COMPLETED ZDM_PREPARE_TGT ............... COMPLETED ZDM_SETUP_TDE_TGT ............. COMPLETED ZDM_RESTORE_TGT ............... COMPLETED ZDM_RECOVER_TGT ............... COMPLETED ZDM_FINALIZE_TGT .............. COMPLETED ZDM_CONFIGURE_DG_SRC .......... COMPLETED ZDM_SWITCHOVER_SRC ............ COMPLETED ZDM_SWITCHOVER_TGT ............ COMPLETED ZDM_POST_DATABASE_OPEN_TGT .... COMPLETED ZDM_DATAPATCH_TGT ............. COMPLETED ZDM_NONCDBTOPDB_PRECHECK ...... COMPLETED ZDM_NONCDBTOPDB_CONVERSION .... COMPLETED ZDM_POST_MIGRATE_TGT .......... COMPLETED ZDM_POSTUSERACTIONS ........... COMPLETED ZDM_POSTUSERACTIONS_TGT ....... COMPLETED ZDM_CLEANUP_SRC ............... COMPLETED ZDM_CLEANUP_TGT ............... COMPLETED [exauser@zdm-host ~]$
In the migrated PDB we can see that all tablespaces are encrypted!
SQL> show pdbs
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 PDB001P                        READ WRITE NO
SQL> select count(*) from cdb_tablespaces a, (select x.con_id, y.ENCRYPTIONALG, x.name from V$TABLESPACE x, V$ENCRYPTED_TABLESPACES y
  2  where x.ts#=y.ts# and x.con_id=y.con_id) b where a.con_id=b.con_id(+) and a.tablespace_name=b.name(+) and b.ENCRYPTIONALG='AES128';
  COUNT(*)
----------
      2803
SQL> select count(*) from cdb_tablespaces a, (select x.con_id, y.ENCRYPTIONALG, x.name from V$TABLESPACE x, V$ENCRYPTED_TABLESPACES y
  2  where x.ts#=y.ts# and x.con_id=y.con_id) b where a.con_id=b.con_id(+) and a.tablespace_name=b.name(+) and b.ENCRYPTIONALG is null;
  COUNT(*)
----------
         0
SQL> select a.con_id, a.tablespace_name, nvl(b.ENCRYPTIONALG,'NOT ENCRYPTED') from cdb_tablespaces a, (select x.con_id, y.ENCRYPTIONALG, x.name from V$TABLESPACE x, V$ENCRYPTED_TABLESPACES y
  2  where x.ts#=y.ts# and x.con_id=y.con_id) b where a.con_id=b.con_id(+) and a.tablespace_name=b.name(+) and b.ENCRYPTIONALG is null;
no rows selected
SQL>
To wrap up…
As explained in my previous blog post, the ZDM 21.5 version brings some corrections and new functionalities, but also a new bug that I could easily mitigate.
The article ZDM 21.5 Physical Online Migration failing during TEMP tablespace encryption first appeared on the dbi Blog.
Oracle ZDM 21.5 new features
I have already written several articles about my experience migrating databases with Oracle Zero Downtime Migration. At that time, the version available was 21.4. The current version is now 21.5. Recently, I had the opportunity to patch our customer's ZDM environment to 21.5 and to run new migrations with this new version. In this blog post, I would like to share what new features the 21.5 version brings and also what I could observe when running the same kind of migration with the new version.
New 21.5 features for Physical Migration
Inflight Upgrades
We know that until 21.4, the physical migration workflow required the source and target databases to be on the same database major release, due of course to Data Guard constraints.
With 21.5, the Data Guard automation becomes part of a broader migration methodology: there is no database release constraint any more. There is of course no change in the way Data Guard works, but the ZDM Physical Migration workflow now integrates and takes into account in-flight upgrades. It is now possible to have a source database in version 11.2.0.4 or 12c migrated with the physical online workflow to 19c, or even a 19c source migrated to 23ai.
If the source is a multitenant database, the temporary target database will be created in the same version as the source database. After the switchover to this temporary database, ZDM will upgrade it using Oracle tools.
If the source is a non-multitenant database, ZDM will perform the migration using a non-CDB temporary database and run a switchover to it, as I explained in my previous articles. Then ZDM will use AutoUpgrade to upgrade the database to the desired version and convert it to a PDB in the target CDB.
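For reference, a physical online migration of this kind is still launched with a zdmcli migrate database command of the following shape; host names, SIDs and paths are placeholders taken from the kind of migration described in my other posts:
zdmcli migrate database -sourcesid SRCSID \
  -rsp /home/exauser/migration/zdm_SRCSID_physical_online.rsp \
  -sourcenode dbsource-host -srcauth dbuser \
  -srcarg1 user:oracle -srcarg2 identity_file:/home/exauser/.ssh/id_rsa \
  -targetnode exacc-cl01n1 -tgtauth zdmauth \
  -tgtarg1 user:opc -tgtarg2 identity_file:/home/exauser/.ssh/id_rsa \
  -tgtarg3 sudo_location:/usr/bin/sudo \
  -tdekeystorepasswd -tgttdekeystorepasswd -pauseafter ZDM_COPYFILES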
Cloud Native Disaster Recovery automation
Until 21.4, we had to manually create the Data Guard configuration in the cloud once the migration was performed. Version 21.5 now integrates a function for ZDM to create the Data Guard configuration in the target environment once the migration is done. Customers can then benefit from a Cloud Native DR architecture, with a standby database in the cloud.
New 21.5 features for Logical Migration
Oracle Autonomous Database as source
Oracle Autonomous Database is now supported as a source database for migration, so migrating from Autonomous to Autonomous is now possible. This allows moving Autonomous Databases across tiers, serverless and dedicated Exadata Infrastructure.
GoldenGate Enhancements
There are several enhancements for logical online migration, which now supports the following GoldenGate functions:
- Integrated and non-integrated replication mode
- Audit trail import
- Large Transaction split
- Pre-checks for ggadmin
- Feature groups, Constraint Handling, Concurrent Migrations
- User-specified GoldenGate Schema
There are also further enhancements integrating the following Data Pump functions:
- Dump file retention and reuse
- Advanced Queue Objects for post-import reload
There are some other new enhancements for the logical workflow, such as:
- OCI File Storage Service (FSS) as the data transfer medium, with Oracle Autonomous Database as target
- Automated refresh of Materialized Views if specified.
Version 21.5 brings a new migration method, the Offline Hybrid Migration using:
- RMAN transportable tablespaces for Data Migration
- Data Pump Export/Import for metadata
This method requires NFS as the backup location and allows cross-endian and cross-version migrations.
The supported targets are:
- Oracle Base Database Services (BaseDB)
- Oracle Exadata Database Service on Dedicated Infrastructure (ExaDB-D)
- Oracle Exadata Database Service on Dedicated Infrastructure (ExaDB-D) on Oracle Database@Azure
- Oracle Exadata Database Service on Cloud@Customer (ExaDB-C@C)
- Oracle Exadata On-premises
Running the same migration method, I could see the following enhancements from 21.4 to 21.5:
- Bug corrections, like the switchover issue (described in a previous blog post)
- TEMP, SYSTEM, SYSAUX and UNDO tablespaces are now automatically encrypted by ZDM. No need to do it manually after the migration.
On the other hand, I found a new bug during the TEMP tablespace encryption: the temporary database needs to be restarted to complete the deletion of the previous TEMP tablespace. I could easily resolve the problem manually and complete the ZDM migration. I will describe this workaround in a further blog post.
The article Oracle ZDM 21.5 new features first appeared on the dbi Blog.
Bind Peeking
Here's a simple detail about bind peeking (during optimisation) that makes a big difference to "fetch first/next" queries. The optimizer knows about your bind types, and can peek at the bind values – but that doesn't mean it will do something sensible with them. Here's a little model to demonstrate the problem – starting with a trivial table creation statement and a simple "old-fashioned" query:
rem
rem Script: bind_peeking_wasted.sql
rem Author: Jonathan Lewis
rem Dated: Aug 2025
rem Purpose:
rem
rem Last tested
rem 23.6.0.24.10
create table t1
as
select * from all_objects
;
alter table t1 add constraint t1_pk primary key(object_id);
set serveroutput off
variable b1 number
exec :b1 := 20
set feedback only
select *
from (
select object_id, object_name, object_type, owner
from t1
order by
object_id
)
where
rownum <= :b1
/
set feedback on
select * from table(dbms_xplan.display_cursor(format=>'peeked_binds'));
Since the optimizer is able to peek at the number variable :b1 it could choose to use an index full scan with stop key to fetch the first 20 ordered rows and, as you can see below, that’s exactly what it did:
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
SQL_ID 3vd5qn3smsw5s, child number 0
-------------------------------------
select * from ( select object_id, object_name, object_type, owner
from t1 order by object_id ) where rownum <= :b1
Plan hash value: 3766500789
---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 3 (100)| |
|* 1 | COUNT STOPKEY | | | | | |
| 2 | VIEW | | 20 | 3160 | 3 (0)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| T1 | 63684 | 3855K| 3 (0)| 00:00:01 |
| 4 | INDEX FULL SCAN | T1_PK | 20 | | 2 (0)| 00:00:01 |
---------------------------------------------------------------------------------------
Peeked Binds (identified by position):
--------------------------------------
1 - :B1 (NUMBER): 20
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(ROWNUM<=:B1)
As you can see the bind value 20 is echoed in the Rows prediction for the index full scan (even though the prediction for the table access doesn't follow the logic). Now change the code so that the rownum is dictated by summing two variables:
variable b2 number
variable b3 number
exec :b2 := 15; :b3 := 5
set feedback only
select *
from (
select object_id, object_name, object_type, owner
from t1
order by
object_id
)
where
rownum <= :b2 + :b3
/
set feedback on
select * from table(dbms_xplan.display_cursor(format=>'peeked_binds'));
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
SQL_ID 4sjjdyq59rwyw, child number 0
-------------------------------------
select * from ( select object_id, object_name, object_type, owner
from t1 order by object_id ) where rownum <= :b2 + :b3
Plan hash value: 270731910
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | | 1342 (100)| |
|* 1 | COUNT STOPKEY | | | | | | |
| 2 | VIEW | | 63684 | 9826K| | 1342 (1)| 00:00:01 |
|* 3 | SORT ORDER BY STOPKEY| | 63684 | 3855K| 4776K| 1342 (1)| 00:00:01 |
| 4 | TABLE ACCESS FULL | T1 | 63684 | 3855K| | 390 (1)| 00:00:01 |
----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(ROWNUM<=:B2+:B3)
3 - filter(ROWNUM<=:B2+:B3)
In this case the index scan doesn’t happen, and the arithmetic predicting 20 rows doesn’t appear anywhere in the plan. In fact we don’t see the peeked binds reported, even though I specifically requested them and even though they will be visible in the view v$sql_bind_capture.
Has the optimizer not looked at them, or has it forgotten them, or has it simply not reported them because it didn’t know how to use them? A quick check of the 10053 (CBO) trace file shows that the optimizer does actually take note of the values fairly early on in its processing:
*******************************************
Peeked values of the binds in SQL statement
*******************************************
----- Bind Info (kkscoacd) -----
PAYLOAD
----------------------------------------------------------------------------------------------------
Bind#0
oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1000000 frm=00 csi=00 siz=48 off=0
kxsbbbfp=7fce857384b8 bln=22 avl=02 flg=05
value=15
Bind#1
oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1000000 frm=00 csi=00 siz=0 off=24
kxsbbbfp=7fce857384d0 bln=22 avl=02 flg=01
value=5
After this, though, there are no further indications of the values being used.
The reason this omission is worth pursuing shows up in an article I wrote a couple of months ago about the “fetch next” syntax. If you use bind variables for the offset and fetch size then Oracle will rewrite your SQL to produce a predicate which effectively reduces to: row_number() <= :offset + :fetch_size and, as with the rownum clause above, will not work out that the sum of the two bind variables means that an indexed access path would be the best choice.
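For reference, this is the shape of statement that hits the problem; a minimal sketch, with :b_offset and :b_fetch as example bind names:
select object_id, object_name, object_type, owner
from   t1
order by object_id
offset :b_offset rows fetch next :b_fetch rows only
;
-- Oracle rewrites this internally to a filter that reduces to
-- row_number() <= :b_offset + :b_fetch, so the peeked values are lost
-- in the same way as in the rownum example above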
In fact, although I've chosen to demonstrate the problem with a sum of two bind variables, any arithmetic applied at this point is enough to disable the use of peeked values – ":b1 + 0" and ":b1 * 1" both result in a critical loss of information even though one feels that the optimizer ought to be able to transform the constants out of the unparsed query in these extreme cases.
You don’t have to wait for 23ai for these features!
I know it’s been a long time waiting for the on-prem release of Oracle database 23ai. I feel your pain! Added to that, the previous release was 21c, which was an innovation release, so very few people would have bothered to upgrade to it. Over time some of the functionality from 21c and 23ai has … Continue reading "You don’t have to wait for 23ai for these features!"
The post You don't have to wait for 23ai for these features! first appeared on The ORACLE-BASE Blog.
Customer case study – Partition switching and transactional replication
The client’s environment includes two SQL Server instances (I1 and I2). One of the databases, D1 (on I1), is replicated to I2 via transactional replication. Additionally, the tables in this database are partitioned. The largest table (which is the one of interest) is approximately 350 GB in size and contains around 800 million rows. This table represents transactions.
The client has submitted the following requirements:
- They want to archive data with minimal impact on data availability (during the archiving process)
- They want to be able to retain the archived data
- They want to be able to make the archived data accessible or make it visible again
The following elements also need to be taken into account:
- Transactional replication is enabled on this table
- This table is partitioned
The client’s environment consists of the following elements:
- A SQL Server 2019 Standard Edition instance (Server 1) – Publisher
- A SQL Server 2019 Standard Edition instance (Server 2) – Subscriber
- A database composed of several tables, including one table that represents transactions
- Transactional replication is used to replicate certain tables between the two SQL Server instances

The table we are interested in consists of several columns and is partitioned by quarter (based on the Transaction Date column):


One possible solution for archiving data (while keeping it easily accessible) is partition switching. It is indeed possible to “move” data from one partition to another. To do this, an identical table must be created. In our case, it must have the same number of partitions.
We will therefore need to:
- Create a “copy” or “archive” table that will contain the archived data from the transactions table
- Since we are using transactional replication, the ALTER SWITCH command must be accounted for and replicated in such a way that the subscriber also executes the ALTER SWITCH command
The command looks like this:
ALTER TABLE transactions SWITCH PARTITION 2 TO transactions_archive PARTITION 2;
There are many advantages:
- Data “movement” using the ALTER SWITCH command is almost instantaneous, even when millions of rows are involved
- The command is simple to implement
- A rollback is possible, meaning data can be moved from one partition to another, or from one table to another. The “movement” can occur in both directions:
- From the transactions table to the archive_transactions table
- From the archive_transactions table back to the transactions table
- Data remains easily accessible since it is only moved to another table
The data from the selected partition is “migrated” to the same partition in the destination table.

Problems that arise:
By default, it is not possible to perform partition switches when transactional replication is being used. You can verify this using the following command:
exec sp_helppublication @publication = N'Publication_MyTransactions';

It is therefore necessary to allow the publication to handle “ALTER SWITCH” commands so that the partition switch is replicated to the subscriber. This is done using the following command:
exec sp_changepublication
@publication = N'Publication_MyTransactions',
@property = N'allow_partition_switch',
@value = N'true'
go
exec sp_changepublication
@publication = N'Publication_MyTransactions',
@property = N'replicate_partition_switch',
@value = N'true'
go
In our client’s case, the indexes were not aligned. In other words, they were not partitioned. When this happens, it is not possible to perform a partition switch. More generally, the source and destination tables must have exactly the same structure.
From there, we have two issues:
- The unaligned indexes need to be rebuilt (and partitioned)
- The new table (transactions_archive) needs to be added to the publication
Once these two actions are completed, a new snapshot must be generated. When this occurs, all articles are taken into account, and the snapshot captures all of them. The issue is that our client’s publication includes several dozen tables with a total volume exceeding 1 TB.
To include only the modified articles (in our specific case), the transactions table must be removed from the publication, the necessary changes must be made, and then the table should be added back (along with the transactions_archive table). Some publication properties must also be modified. This is done using the commands below:
exec sp_helppublication @publication = N'Publication_MyTransactions'
go
exec sp_changepublication
@publication = N'Publication_MyTransactions',
@property = N'allow_anonymous',
@value = 'false'
go
exec sp_changepublication
@publication = N'Publication_MyTransactions',
@property = N'immediate_sync',
@value = 'false'
go
Here are the different steps:
- Enable partition switching since we are using transactional replication
- Generate a snapshot only for the modified articles
- Remove the transactions article from replication
- Rebuild the non-partitioned indexes on the transactions table
- Create the transactions_archive table with the same structure as the transactions table
- Add the transactions and transactions_archive articles to the publication
- When adding them, consider the clustered and nonclustered indexes as well as the partitioning

- Start the snapshot agent
- Execute a partition switch and verify the result
- Plan for a rollback of the allow_anonymous and immediate_sync properties
We previously mentioned that the transactions table already existed and was being replicated (via transactional replication).
We therefore have our database (MyTransactions) with several filegroups (to simplify, we are only creating a few filegroups that will contain the data for 2024):
use master
go
declare @NumPart int = 1, @NumPartStr varchar(2), @SQLCmd nvarchar(max) = ''
while (@NumPart <= 4)
begin
select @NumPart
set @NumPartStr = RIGHT('0'+CAST(@NumPart as varchar(2)), 2)
set @SQLCmd = @SQLCmd+'alter database MyTransactions add filegroup FG2024Q'+@NumPartStr+'; '
set @SQLCmd = @SQLCmd+'alter database MyTransactions add file (name = F2024Q'+@NumPartStr+', filename = ''D:\Data\F2024Q'+@NumPartStr+'.mdf'') to filegroup FG2024Q'+@NumPartStr+';'
set @NumPart = @NumPart+1
end
print @SQLCMD
exec(@SQLCMD)

We create our partition function:
use MyTransactions
go
create partition function FunctionPartition_Transactions(DATE)
as range right for values ('20240101','20240401','20240701','20241001')
go
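As a side note, a quick way to check which partition number a given date maps to (useful later when choosing the partition to switch) is the $PARTITION function applied to this partition function; the date below is just an example:
use MyTransactions
go
-- returns the partition number that a given transactions_date falls into
select $PARTITION.FunctionPartition_Transactions('20240215') as partition_number
go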
We create our partition scheme:
use MyTransactions
go
create partition scheme SchemePartition_Transactions as partition FunctionPartition_Transactions
to ([Primary], FG2024Q01, FG2024Q02, FG2024Q03, FG2024Q04)
go
We then create our transactions table:
use MyTransactions
go
create table transactions
(
transactions_id int identity(1,1) not null,
transactions_label nvarchar(50) not null,
transactions_amount int not null,
transactions_date date not null
) on SchemePartition_Transactions(transactions_date)
alter table transactions add constraint pk_transactions primary key clustered(transactions_id, transactions_date)
go
create nonclustered index ix_transactions_label_date on transactions(transactions_label) include (transactions_date) on [PRIMARY]
go
We notice that the clustered index is partitioned:

However, the nonclustered index ix_transactions_label_date is not partitioned. Indeed, it was not created using the clause: ON SchemePartition_Transactions(transactions_date).

The table looks something like this:

We also mentioned that this table was replicated:



We now need to create our archive table, which will be an identical copy of the source table (transactions):
create table transactions_archive
(
transactions_id int identity(1,1) not null,
transactions_label nvarchar(50) not null,
transactions_amount int not null,
transactions_date date not null
) on SchemePartition_Transactions(transactions_date)
alter table transactions_archive add constraint pk_transactions_archive primary key clustered(transactions_id, transactions_date)
go
create nonclustered index ix_transactions_label_date on transactions_archive(transactions_label) include (transactions_date) on SchemePartition_Transactions(transactions_date)
go
This table is partitioned, as are its indexes (clustered and nonclustered). However, this is not the case for the transactions table, and this will cause issues when executing ALTER SWITCH commands to perform partition switches.
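One simple way to verify whether every index on a table is aligned (i.e. created on the partition scheme rather than on a filegroup) is to look at the data space of each index; this is a generic check, not specific to replication:
select i.name as index_name, ds.name as data_space_name, ds.type_desc
from sys.indexes i
join sys.data_spaces ds on ds.data_space_id = i.data_space_id
where i.object_id = object_id('dbo.transactions')
-- type_desc = PARTITION_SCHEME means the index is aligned; ROWS_FILEGROUP means it is not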
We then check what the partitioning of our tables looks like:
select
OBJECT_NAME(p.object_id) as obj_name,
f.name,
p.partition_number,
p.rows
from sys.system_internals_allocation_units a
join sys.partitions p
on p.partition_id = a.container_id
join sys.filegroups f on a.filegroup_id = f.data_space_id
where p.object_id in (object_id('transactions'), object_id('transactions_archive'))
order by obj_name, p.index_id, p.partition_number

We now need to perform the following operations:
- Ensure that the nonclustered index ix_transactions_label_date on the transactions table is partitioned. To do this, it must be recreated. However, this table is replicated, and explicit index creation and deletion commands are not replicated. This is why we need to modify certain publication properties (allow_anonymous and immediate_sync), remove the article from the publication, partition the nonclustered index, and then add the article back to the publication.

We have the following steps:
- Remove the transactions article from the publication
- Modify the following properties: allow_anonymous and immediate_sync
- (Re)create and partition the nonclustered index
- Modify certain replication properties to allow SQL commands like ALTER SWITCH
- Add the transactions and transactions_archive articles to the publication
First, we remove the transactions article from the publication (note that in our client’s case, several dozen tables were replicated).
Second, we only want to generate a snapshot for the modified objects (transactions and transactions_archive).
exec sp_changepublication
@publication = N'Publication_MyTransactions',
@property = N'allow_anonymous',
@value = 'false'
go
exec sp_changepublication
@publication = N'Publication_MyTransactions',
@property = N'immediate_sync',
@value = 'false'
go
We (re)create the nonclustered index on the transactions table in order to partition it:
drop index ix_transactions_label_date on [dbo].[transactions]
go
create nonclustered index ix_transactions_label_date on transactions(transactions_label) on SchemePartition_Transactions(transactions_date)
go
Our index is now partitioned:


We then need to modify certain publication properties to allow ALTER SWITCH commands:
exec sp_helppublication @publication = N'Publication_MyTransactions'
go
exec sp_changepublication
@publication = N'Publication_MyTransactions',
@property = N'allow_partition_switch',
@value = N'true'
go
exec sp_changepublication
@publication = N'Publication_MyTransactions',
@property = N'replicate_partition_switch',
@value = N'true'
go
exec sp_helppublication @publication = N'Publication_MyTransactions'
go
The article can be added easily via (for example) the graphical interface:

We take partitioning into account in the following way:

It is very important to modify the properties of only these two articles and not those of already published articles (if any). Modifying the properties of already published articles would invalidate the existing snapshot and force SQL Server to generate a snapshot of all articles.
The above modification was made to the two articles (transactions and transactions_archive).
Once these steps are completed, we need to generate a snapshot via the corresponding job:

The corresponding objects are taken into account:

Once these operations are completed, we can see that our objects have been replicated and that the elements below have been taken into account:
- Partitioned nonclustered index
- Partition scheme
- Partition function

We now want to perform partition switches and verify that these commands are replicated.
Here is the corresponding code to run on the publisher (I am moving my data from partition 2 of my transactions table to partition 2 of my transactions_archive table):
alter table transactions switch partition 2 to transactions_archive partition 2
Before (from the publisher):

After (from the publisher):

From the subscriber:

Here is the related code:
:connect TR-Pub-1\INSTANCE2019
use MyTransactions
go
alter table transactions switch partition 2 to transactions_archive partition 2
go
select
OBJECT_NAME(p.object_id) as obj_name,
f.name,
p.partition_number,
p.rows
from sys.system_internals_allocation_units a
join sys.partitions p
on p.partition_id = a.container_id
join sys.filegroups f on a.filegroup_id = f.data_space_id
where p.object_id in (object_id('transactions'), object_id('transactions_archive'))
order by obj_name, p.index_id, p.partition_number
go
:connect TR-Sub-1\INSTANCE2019
use MyTransactions
go
select
OBJECT_NAME(p.object_id) as obj_name,
f.name,
p.partition_number,
p.rows
from sys.system_internals_allocation_units a
join sys.partitions p
on p.partition_id = a.container_id
join sys.filegroups f on a.filegroup_id = f.data_space_id
where p.object_id in (object_id('transactions'), object_id('transactions_archive'))
order by obj_name, p.index_id, p.partition_number
go
We then need to roll back the modification of the following properties: allow_anonymous and immediate_sync.
exec sp_changepublication
@publication = N'Publication_MyTransactions',
@property = N'immediate_sync',
@value = 'true'
go
exec sp_changepublication
@publication = N'Publication_MyTransactions',
@property = N'allow_anonymous',
@value = 'true'
go
exec sp_helppublication @publication = N'Publication_MyTransactions'
go
Thank you. Amine Haloui.
The article Customer case study – Partition switching and transactional replication first appeared on the dbi Blog.
Connecting to Oracle Database 23ai on Oracle@GCP with VS Code: A DBA’s Guide
Get connected to Oracle Database 23ai on Oracle@GCP in under 10 minutes using VS Code and SQL Developer extension. This step-by-step guide walks through wallet retrieval, connection configuration, and security best practices for DBAs managing cloud databases. No complexity, just straightforward instructions to establish secure connections and start working with your Autonomous Database instance.
The post Connecting to Oracle Database 23ai on Oracle@GCP with VS Code: A DBA’s Guide appeared first on DBASolved.
Oracle@GCP Setup: A Senior DBA’s Guide to Simple Enterprise Database Deployment
After decades of managing Oracle databases – from 8i in the Army to the latest 23ai deployments – I’ve seen […]
The post Oracle@GCP Setup: A Senior DBA’s Guide to Simple Enterprise Database Deployment appeared first on DBASolved.
Automation of Oracle ZDM installation and update
Let's have a look at how we can install and update the Oracle Zero Downtime Migration tool, and see how we can automate it with Ansible…
Check ZDM build
Let's first see how we can check the ZDM version using the ZDM CLI.
[zdm@zdmhost ~]$ zdmcli -v
RHP_PT.ZDM21_LINUX.X64_221207.30
[zdm@zdmhost ~]$ zdmcli -build
version: 21.0.0.0.0
full version: 21.4.0.0.0
patch version: 21.4.5.0.0
label date: 221207.30
ZDM kit build date: Mar 21 2024 22:07:12 UTC
CPAT build version: 24.6.0
[exauser@sva-oelexa501 ~]$
The installed version is 21.4; we will see how to update it to the latest version, which is 21.5.
Manual installation and update
Manual installation and update is quite easy, as both are done through the zdminstall.sh script. A single option tells the script whether this is a first installation or an update: for a first installation the option is setup, and for an update it is update.
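Run by hand, the call therefore looks like this; a sketch using the same paths and arguments as the playbook below, where zdm_home.zip is the archive shipped inside the downloaded kit:
# first installation
/u01/app/zdm_download_dir/zdm21.5/zdminstall.sh setup \
  oraclehome=/u01/app/oracle/product/zdm oraclebase=/u01/app/oracle \
  ziploc=/u01/app/zdm_download_dir/zdm21.5/zdm_home.zip -zdm
# in-place update of an existing installation
/u01/app/zdm_download_dir/zdm21.5/zdminstall.sh update \
  oraclehome=/u01/app/oracle/product/zdm oraclebase=/u01/app/oracle \
  ziploc=/u01/app/zdm_download_dir/zdm21.5/zdm_home.zip -zdm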
The playbook to automate the installation and the update, in my case named deploy_zdm.yml, is composed of the following tasks.
We start by defining the play and the variables that will be used in the tasks to install or update to the current latest ZDM version, which is 21.5. The zip file to unarchive is stored in the ../oracle_swfiles directory.
---
# Playbook for deploying ZDM host
# Marc Wagner - dbi
# Date : 06.10.2023
# 11.06.2025 : stop service + new 21.5 version
- name: Deploy ZDM
  hosts: "zdmhost"
  vars:
    sfw_folder: "../oracle_swfiles"
    zdm_base: "/u01/app/oracle"
    zdm_install_dir: "zdm21.5"
    zdm_download_dir: "/u01/app/zdm_download_dir"
    zdm_archive: "{{ sfw_folder }}/V1045330-01.zip"
    # ZDM update is an in-place process
    zdm_home: "/u01/app/oracle/product/zdm"
  environment:
    HTTP_PROXY: ""
    HTTPS_PROXY: ""
  tasks:
Then we will have all the tasks performing the installation.
The first task checks the deploy_option variable, which must be provided as an extra argument to the playbook, and asserts that it is defined.
    - name: Assert extra-var has been passed as argument to playbook
      ansible.builtin.assert:
        that:
          - deploy_option is defined
        quiet: false
        fail_msg: "Please provide --extra-var deploy_option="
        success_msg: "deploy_option={{ deploy_option }}"
The next task stops the ZDM service and only runs if the following file exists: /u01/app/oracle/product/zdm/bin/zdmservice. This check is done so the task only runs if the ZDM tool is already installed.
    - name: Stop ZDM service
      ansible.builtin.shell:
        cmd: |
          {{ zdm_home }}/bin/zdmservice stop
      args:
        executable: "/bin/bash"
        removes: "/u01/app/oracle/product/zdm/bin/zdmservice"
The next task installs some prerequisites if they are not already installed.
    - name: Install ZDM software prerequisites
      become: true
      become_user: root
      ansible.builtin.dnf:
        name: "{{ item }}"
      loop:
        - perl
        - expect
        - libaio
        - glibc-devel
        - unzip
        - libnsl
        - ncurses-compat-libs
        - oraclelinux-developer-release-el8
The next task creates all needed directories: the ZDM base, the ZDM home and the directory used to store the installation archive.
    - name: Create directories for ZDM tool
      become: true
      become_user: root
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
        owner: exauser
        group: exauser
        mode: '755'
      loop:
        - "{{ zdm_base }}"
        - "{{ zdm_home }}"
        - "{{ zdm_download_dir }}"
The next task unarchives the installation zip file.
    - name: Unarchive ZDM
      ansible.builtin.unarchive:
        src: "{{ zdm_archive }}"
        dest: "{{ zdm_download_dir }}"
And the next task finally installs or updates the ZDM tool according to the option given to the playbook.
- name: Install or update ZDM
  ansible.builtin.shell:
    cmd: |
      {{ zdm_download_dir }}/{{ zdm_install_dir }}/zdminstall.sh \
        {{ deploy_option }} oraclehome={{ zdm_home }} oraclebase={{ zdm_base }} \
        ziploc={{ zdm_download_dir }}/{{ zdm_install_dir }}/zdm_home.zip -zdm
  args:
    executable: "/bin/bash"
And with the last step we can finally start the ZDM service.
- name: Start ZDM service
  ansible.builtin.shell:
    cmd: |
      {{ zdm_home }}/bin/zdmservice start
  args:
    executable: "/bin/bash"

Run the playbook
As a prerequisite, let’s first check that the appropriate ZDM installation zip file is in the expected Ansible folder.
[myuser@domain.com@admin-host zdm_ansible]$ ls -ltrh ./oracle_swfiles/
total 874M
-rw-r--r--. 1 myuser@domain.com myuser@domain.com 871M Jun 11 10:12 V1045330-01.zip
[myuser@domain.com@admin-host zdm_ansible]$
We can check that the assert task, which ensures we pass the appropriate --extra-vars, works.
[myuser@domain.com@admin-host zdm_ansible]$ ansible-playbook ./playbooks/deploy_zdm.yml PLAY [Deploy ZDM] *********************************************************************************************************************************************************************************************************************************************************************** TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************************************************** [WARNING]: Platform linux on host zdmhost is using the discovered Python interpreter at /usr/bin/python3, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible- core/2.15/reference_appendices/interpreter_discovery.html for more information. ok: [zdmhost] TASK [Assert extra-var has been passed as argument to playbook] ************************************************************************************************************************************************************************************************************************* fatal: [zdmhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'deploy_option' is undefined. 'deploy_option' is undefined\n\nThe error appears to be in '/home/nfs/domain.com/myuser/ExaCCGit/zdm_ansible/playbooks/deploy_zdm.yml': line 25, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Assert extra-var has been passed as argument to playbook\n ^ here\n"} PLAY RECAP ****************************************************************************************************************************************************************************************************************************************************************************** zdmhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 [myuser@domain.com@admin-host zdm_ansible]$
Then we can run the playbook with the update option.
[myuser@domain.com@admin-host zdm_ansible]$ ansible-playbook ./playbooks/deploy_zdm.yml -e deploy_option="update" PLAY [Deploy ZDM] *********************************************************************************************************************************************************************************************************************************************************************** TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************************************************** [WARNING]: Platform linux on host zdmhost is using the discovered Python interpreter at /usr/bin/python3, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible- core/2.15/reference_appendices/interpreter_discovery.html for more information. ok: [zdmhost] TASK [Assert extra-var has been passed as argument to playbook] ************************************************************************************************************************************************************************************************************************* ok: [zdmhost] => { "changed": false, "msg": "deploy_option=update" } TASK [Stop ZDM service] ***************************************************************************************************************************************************************************************************************************************************************** changed: [zdmhost] TASK [Install ZDM software prerequisites] *********************************************************************************************************************************************************************************************************************************************** ok: [zdmhost] => (item=perl) ok: [zdmhost] => (item=expect) ok: [zdmhost] => (item=libaio) ok: [zdmhost] => (item=glibc-devel) ok: [zdmhost] => (item=unzip) ok: [zdmhost] => (item=libnsl) ok: [zdmhost] => (item=ncurses-compat-libs) ok: [zdmhost] => (item=oraclelinux-developer-release-el8) TASK [Create directories for ZDM tool] ************************************************************************************************************************************************************************************************************************************************** ok: [zdmhost] => (item=/u01/app/oracle) ok: [zdmhost] => (item=/u01/app/oracle/product/zdm) ok: [zdmhost] => (item=/u01/app/zdm_download_dir) TASK [Unarchive ZDM] ******************************************************************************************************************************************************************************************************************************************************************** ok: [zdmhost] TASK [Install or update ZDM] ************************************************************************************************************************************************************************************************************************************************************ changed: [zdmhost] TASK [Start ZDM service] **************************************************************************************************************************************************************************************************************************************************************** changed: [zdmhost] 
PLAY RECAP ******************************************************************************************************************************************************************************************************************************************************************************
zdmhost : ok=8 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Check new ZDM version
And we can check that the new ZDM tool version is 21.5.
[zdm@zdmhost ~]$ zdmcli -v
RHP_PT.ZDM21_LINUX.X64_240219.12
[zdm@zdmhost ~]$ zdmcli -build
version: 21.0.0.0.0
full version: 21.5.0.0.0
patch version: N/A
label date: 240219.12
ZDM kit build date: Sep 10 2024 21:59:18 UTC
CPAT build version: 24.6.0
[zdm@zdmhost ~]$

To wrap up…
Installing and updating the ZDM CLI is quite easy, and writing an Ansible playbook helps automate the installation and further updates.
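For a brand-new host, the very same playbook can be run with the setup option instead of update; a minimal sketch:

ansible-playbook ./playbooks/deploy_zdm.yml -e deploy_option="setup"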
The article Automatisation for Oracle ZDM installation and update appeared first on dbi Blog.
Oracle GoldenGate LOB Replication to Snowflake: Getting It Right
Recently I’ve been helping clients navigate one of the more challenging aspects of Oracle GoldenGate implementations – replicating Large Objects […]
The post Oracle GoldenGate LOB Replication to Snowflake: Getting It Right appeared first on DBASolved.
Patching SUSE Multi Linux Manager
In the last post about SUSE Multi Linux Manager we had a look at how you can schedule OpenSCAP reports using the API. In this post we’ll look into something very basic: how you can patch the server components of SUSE Multi Linux Manager. We speak about components because you need to patch both the host (which is SLE Micro in this case) and the container hosting the application.
Looking at the host operating system we can see this is a SLE Micro 5.5:
suma:~ $ cat /etc/os-release
NAME="SLE Micro"
VERSION="5.5"
VERSION_ID="5.5"
PRETTY_NAME="SUSE Linux Enterprise Micro 5.5"
ID="sle-micro"
ID_LIKE="suse"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sle-micro:5.5"
As this comes with a read-only root file system, we cannot directly use zypper to patch the system. The tool to use in this case is transactional-update. This still uses zypper in the background, but the updates are installed into a new Btrfs snapshot. With this approach the running system is not touched at all, and the updates only become active once the system is rebooted into the new snapshot (which happens automatically on the next reboot). If something is wrong with the new snapshot, the system can be booted from the old snapshot and is back to what it was before patching.
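As a minimal sketch of that rollback path (assuming, as in the snapper output below, that snapshot 6 was the default before patching), you could make the old snapshot the default again and reboot:

# make the previous snapshot (here number 6) the default again and boot into it
transactional-update rollback 6
reboot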
Before we patch the host system let’s have a look at the snapshots we currently have available:
suma:~ $ snapper list
# | Type | Pre # | Date | User | Used Space | Cleanup | Description | Userdata
---+--------+-------+----------------------------------+------+------------+---------+-----------------------+--------------
0 | single | | | root | | | current |
1 | single | | Fri 08 Mar 2024 10:45:41 AM CET | root | 1.30 GiB | number | first root filesystem | important=yes
2 | single | | Mon 07 Jul 2025 12:18:08 PM CEST | root | 1.51 MiB | number | Snapshot Update of #1 | important=yes
3 | single | | Mon 07 Jul 2025 12:30:01 PM CEST | root | 1.02 MiB | number | Snapshot Update of #2 | important=yes
4 | single | | Tue 08 Jul 2025 05:33:39 AM CEST | root | 39.78 MiB | number | Snapshot Update of #3 | important=yes
5 | single | | Wed 16 Jul 2025 09:25:23 AM CEST | root | 45.07 MiB | | Snapshot Update of #4 |
6* | single | | Wed 23 Jul 2025 04:13:09 PM CEST | root | 58.62 MiB | | Snapshot Update of #5 |
Let’s patch and compare what we’ll have afterwards:
suma:~ $ zypper ref
Warning: The gpg key signing file 'repomd.xml' has expired.
Repository: SLE-Micro-5.5-Updates
Key Fingerprint: FEAB 5025 39D8 46DB 2C09 61CA 70AF 9E81 39DB 7C82
Key Name: SuSE Package Signing Key <build@suse.de>
Key Algorithm: RSA 2048
Key Created: Mon 21 Sep 2020 10:21:47 AM CEST
Key Expires: Fri 20 Sep 2024 10:21:47 AM CEST (EXPIRED)
Rpm Name: gpg-pubkey-39db7c82-5f68629b
Retrieving repository 'SLE-Micro-5.5-Updates' metadata ..............................................................................................................[done]
Building repository 'SLE-Micro-5.5-Updates' cache ...................................................................................................................[done]
Warning: The gpg key signing file 'repomd.xml' has expired.
Repository: SUSE-Manager-Server-5.0-Updates
Key Fingerprint: FEAB 5025 39D8 46DB 2C09 61CA 70AF 9E81 39DB 7C82
Key Name: SuSE Package Signing Key <build@suse.de>
Key Algorithm: RSA 2048
Key Created: Mon 21 Sep 2020 10:21:47 AM CEST
Key Expires: Fri 20 Sep 2024 10:21:47 AM CEST (EXPIRED)
Rpm Name: gpg-pubkey-39db7c82-5f68629b
Retrieving repository 'SUSE-Manager-Server-5.0-Updates' metadata ....................................................................................................[done]
Building repository 'SUSE-Manager-Server-5.0-Updates' cache .........................................................................................................[done]
Repository 'SLE-Micro-5.5-Pool' is up to date.
Repository 'SUSE-Manager-Server-5.0-Pool' is up to date.
All repositories have been refreshed.
suma:~ $ transactional-update
Checking for newer version.
transactional-update 4.1.9 started
Options:
Separate /var detected.
2025-07-30 09:42:32 tukit 4.1.9 started
2025-07-30 09:42:32 Options: -c6 open
2025-07-30 09:42:33 Using snapshot 6 as base for new snapshot 7.
2025-07-30 09:42:33 /var/lib/overlay/6/etc
2025-07-30 09:42:33 Syncing /etc of previous snapshot 5 as base into new snapshot "/.snapshots/7/snapshot"
2025-07-30 09:42:33 SELinux is enabled.
ID: 7
2025-07-30 09:42:36 Transaction completed.
Calling zypper up
2025-07-30 09:42:38 tukit 4.1.9 started
2025-07-30 09:42:38 Options: callext 7 zypper -R {} up -y --auto-agree-with-product-licenses
2025-07-30 09:42:39 Executing `zypper -R /tmp/transactional-update-JsIr01 up -y --auto-agree-with-product-licenses`:
Refreshing service 'SUSE_Linux_Enterprise_Micro_5.5_x86_64'.
Refreshing service 'SUSE_Manager_Server_Extension_5.0_x86_64'.
Loading repository data...
Reading installed packages...
The following 21 packages are going to be upgraded:
boost-license1_66_0 libboost_system1_66_0 libboost_thread1_66_0 libpolkit-agent-1-0 libpolkit-gobject-1-0 mgradm mgradm-bash-completion mgrctl mgrctl-bash-completion polkit python3-pyparsing python3-pytz python3-PyYAML python3-requests python3-salt python3-simplejson python3-urllib3 salt salt-minion salt-transactional-update uyuni-storage-setup-server
21 packages to upgrade.
Package download size: 16.8 MiB
Package install size change:
| 71.4 MiB required by packages that will be installed
654.0 KiB | - 70.8 MiB released by packages that will be removed
Backend: classic_rpmtrans
Continue? [y/n/v/...? shows all options] (y): y
...
2025-07-30 09:44:40 New default snapshot is #7 (/.snapshots/7/snapshot).
2025-07-30 09:44:40 Transaction completed.
Please reboot your machine to activate the changes and avoid data loss.
New default snapshot is #7 (/.snapshots/7/snapshot).
transactional-update finished
As noted above we must reboot the system for the updates to become active. Before we do that, let’s again have a look at the snapshots:
suma:~ $ snapper list
# | Type | Pre # | Date | User | Used Space | Cleanup | Description | Userdata
---+--------+-------+----------------------------------+------+------------+---------+-----------------------+--------------
0 | single | | | root | | | current |
1 | single | | Fri 08 Mar 2024 10:45:41 AM CET | root | 1.30 GiB | number | first root filesystem | important=yes
2 | single | | Mon 07 Jul 2025 12:18:08 PM CEST | root | 1.51 MiB | number | Snapshot Update of #1 | important=yes
3 | single | | Mon 07 Jul 2025 12:30:01 PM CEST | root | 1.02 MiB | number | Snapshot Update of #2 | important=yes
4 | single | | Tue 08 Jul 2025 05:33:39 AM CEST | root | 39.78 MiB | number | Snapshot Update of #3 | important=yes
5 | single | | Wed 16 Jul 2025 09:25:23 AM CEST | root | 45.07 MiB | | Snapshot Update of #4 |
6- | single | | Wed 23 Jul 2025 04:13:09 PM CEST | root | 4.11 MiB | | Snapshot Update of #5 |
7+ | single | | Wed 30 Jul 2025 09:42:32 AM CEST | root | 88.39 MiB | | Snapshot Update of #6 |
We got a new snapshot (number 7) which is not yet active. Let’s reboot and check again:
suma:~ $ reboot
...
suma:~ $ snapper list
# | Type | Pre # | Date | User | Used Space | Cleanup | Description | Userdata
---+--------+-------+----------------------------------+------+------------+---------+-----------------------+--------------
0 | single | | | root | | | current |
1 | single | | Fri 08 Mar 2024 10:45:41 AM CET | root | 1.30 GiB | number | first root filesystem | important=yes
2 | single | | Mon 07 Jul 2025 12:18:08 PM CEST | root | 1.51 MiB | number | Snapshot Update of #1 | important=yes
3 | single | | Mon 07 Jul 2025 12:30:01 PM CEST | root | 1.02 MiB | number | Snapshot Update of #2 | important=yes
4 | single | | Tue 08 Jul 2025 05:33:39 AM CEST | root | 39.78 MiB | number | Snapshot Update of #3 | important=yes
5 | single | | Wed 16 Jul 2025 09:25:23 AM CEST | root | 45.07 MiB | | Snapshot Update of #4 |
6 | single | | Wed 23 Jul 2025 04:13:09 PM CEST | root | 4.11 MiB | | Snapshot Update of #5 |
7* | single | | Wed 30 Jul 2025 09:42:32 AM CEST | root | 88.39 MiB | | Snapshot Update of #6 |
The new snapshot became active and we’re fully patched on the host system.
Now that the host system is fully patched, we can proceed with patching the SUSE Multi Linux Manager application. Before we do that, let’s check what we currently have:
suma:~ $ mgradm inspect
10:40AM INF Welcome to mgradm
10:40AM INF Executing command: inspect
10:40AM INF Computed image name is registry.suse.com/suse/manager/5.0/x86_64/server:5.0.4.1
10:40AM INF Ensure image registry.suse.com/suse/manager/5.0/x86_64/server:5.0.4.1 is available
WARN[0002] Path "/etc/SUSEConnect" from "/etc/containers/mounts.conf" doesn't exist, skipping
10:40AM INF
{
"CurrentPgVersion": "16",
"ImagePgVersion": "16",
"DBUser": "spacewalk",
"DBPassword": "<REDACTED>",
"DBName": "susemanager",
"DBPort": 5432,
"UyuniRelease": "",
"SuseManagerRelease": "5.0.4.1",
"Fqdn": "suma.dwe.local"
}
The currently running version is “5.0.4.1”. Patching is quite simple as this just updates the container:
suma:~ $ mgradm upgrade podman
10:41AM INF Welcome to mgradm
10:41AM INF Use of this software implies acceptance of the End User License Agreement.
10:41AM INF Executing command: podman
...
10:41AM INF No changes requested for hub. Keep 0 replicas.
10:41AM INF Computed image name is registry.suse.com/suse/manager/5.0/x86_64/server-hub-xmlrpc-api:5.0.5
10:41AM INF Ensure image registry.suse.com/suse/manager/5.0/x86_64/server-hub-xmlrpc-api:5.0.5 is available
10:42AM INF Cannot find RPM image for registry.suse.com/suse/manager/5.0/x86_64/server-hub-xmlrpc-api:5.0.5
Checking the version again:
suma:~ $ mgradm inspect
10:36AM INF Welcome to mgradm
10:36AM INF Use of this software implies acceptance of the End User License Agreement.
10:36AM INF Executing command: inspect
10:36AM INF Computed image name is registry.suse.com/suse/manager/5.0/x86_64/server:5.0.5
10:36AM INF Ensure image registry.suse.com/suse/manager/5.0/x86_64/server:5.0.5 is available
10:36AM ??? time="2025-07-30T10:36:20+02:00" level=warning msg="Path \"/etc/SUSEConnect\" from \"/etc/containers/mounts.conf\" doesn't exist, skipping"
10:36AM INF
{
"CurrentPgVersion": "16",
"ImagePgVersion": "16",
"DBUser": "spacewalk",
"DBPassword": "<REDACTED>",
"DBName": "susemanager",
"DBPort": 5432,
"UyuniRelease": "",
"SuseManagerRelease": "5.0.5",
"Fqdn": "suma.dwe.local"
}
Now we are on version “5.0.5” and we’re done with patching the server part. Clients should also be upgraded, especially the Salt client, as SUSE Multi Linux Manager uses Salt to manage the clients. You can either do that manually, using the package manager of the distributions you’re managing (a sketch follows), or from the WebUI as shown in the screenshot below:
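For the manual route, a minimal sketch for zypper- and dnf-based clients could look like this (assuming the classic salt-minion package; depending on your setup the Salt bundle package venv-salt-minion may be in use instead, and on transactional systems you would wrap this in transactional-update):

# SUSE-based client
zypper refresh && zypper update salt-minion

# Enterprise Linux based client
dnf update salt-minion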

That’s it, not hard to do and an easy process to follow.
The article Patching SUSE Multi Linux Manager appeared first on dbi Blog.
What will happen to your sparse clones during a switchover on ExaCC?
As you might know, on ExaCC it is possible to create sparse clones. This is based on snapshot technology: there is no physical copy, the modified blocks are written to the sparse clone, and in our case these blocks are part of the ASM sparse disk group. And of course, the sparse clone then needs a read-only master. This article is not intended to explain the whole functioning of this technology. For more information on the technology, I would recommend reading the following blog article:
https://blogs.oracle.com/exadata/post/exadata-sparse-clones-and-monitor-sparse-disk-groups-size
The intention of this article is to show what will happen to your sparse clone during a switchover. In a multitenant environment, on the standby side the sparse clone PDB will be seen in MOUNTED status, but it is not accessible. The sparse clone is created locally, that is to say on the primary database side where it was created, and it is only accessible on this site. The sparse clone is storage dependent, and primary and standby do not share the same storage. So after a switchover, the sparse clone is not accessible until a switchover back to the initial primary is performed. Let’s see how that works!
Lab description
For the demonstration we will use:
- an ExaCC cluster named exacc-cl01 (currently the primary side) on which we will have a CDB named SWIT001T_CHZ2
- an ExaCC cluster named exacc-cl02 (currently the standby side) on which we will have the physical standby CDB named SWIT001T_CHZ3
- The PDB will be SWTET_APP_001T
- The Master Read Only PDB will be SWTET_TMR_001T
- The sparse clone PDB will be SWTET_APP_002T
The command used to create the Master Read Only on the primary CDB was:
SQL> CREATE PUBLIC DATABASE LINK SWIT001T_e9085be389a239c888e5c0baf4f50405@SWIT001T_CHZ2 CONNECT TO C##USER IDENTIFIED BY ".................." USING 'SWIT001T_CHZ2';
SQL> CREATE PLUGGABLE DATABASE SWTET_TMR_001T FROM SWTET_APP_001T@"SWIT001T_e9085be389a239c888e5c0baf4f50405@SWIT001T_CHZ2"
KEYSTORE IDENTIFIED BY "..." INCLUDING SHARED KEY
PARALLEL 8
SERVICE_NAME_CONVERT=('SWTET_APP_001T','SWTET_TMR_001T')
REFRESH MODE MANUAL
STANDBYS=NONE;
SQL> ALTER PLUGGABLE DATABASE SWTET_TMR_001T OPEN READ ONLY INSTANCES=ALL;
SQL> DROP PUBLIC DATABASE LINK SWIT001T_e9085be389a239c888e5c0baf4f50405@SWIT001T_CHZ2;
On the primary side, this looks like:
oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 14:46:17 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 SWTET_APP_001T READ WRITE NO 4 SWTET_TMR_001T READ ONLY NO SQL>
On the standby side, this looks like:
oracle@exacc-cl02n1:~/ [SWIT001T1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 14:46:48 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 SWTET_APP_001T READ ONLY NO 4 SWTET_TMR_001T MOUNTED SQL>
Which is correct as the Master Read Only SWTET_TMR_001T was created with STANDBYS=NONE option.
Sparse clone creation
The command used to create the sparse clone on the primary CDB was:
SQL> CREATE PLUGGABLE DATABASE SWTET_APP_002T FROM SWTET_TMR_001T
KEYSTORE IDENTIFIED BY "..." INCLUDING SHARED KEY
PARALLEL 8
CREATE_FILE_DEST='+SPRC2'
SERVICE_NAME_CONVERT=('SWTET_TMR_001T','SWTET_APP_002T','SWTET_APP_001T','SWTET_APP_002T')
SNAPSHOT COPY
STANDBYS=NONE;
SQL> ALTER PLUGGABLE DATABASE SWTET_APP_002T OPEN INSTANCES=ALL;
On the primary side, this looks like:
oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:03:18 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 SWTET_APP_001T READ WRITE NO 4 SWTET_TMR_001T READ ONLY NO 5 SWTET_APP_002T READ WRITE NO SQL>
The sparse clone PDB is opened as expected in READ/WRITE mode.
On the standby side, this looks like:
oracle@exacc-cl02n1:~/ [SWIT001T1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:37:11 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 SWTET_APP_001T READ ONLY NO 4 SWTET_TMR_001T MOUNTED 5 SWTET_APP_002T MOUNTED SQL>
Which is correct, as we used the STANDBYS=NONE option to create the sparse clone and, in any case, the snapshot only exists locally as it is storage dependent.
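If you want to verify this from the dictionary rather than from show pdbs, a query like the following sketch against v$pdbs should show the recovery status of each PDB on the standby (RECOVERY_STATUS is expected to be DISABLED for PDBs created with STANDBYS=NONE; run it in CDB$ROOT on the standby side):

# sketch: list open mode and recovery status of all PDBs on the standby
sqlplus -s "/ as sysdba" <<'EOF'
set lines 200 pages 100
col name format a20
select name, open_mode, recovery_status from v$pdbs order by con_id;
EOF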
Create some data in the sparse clone
Let’s create some data in the sparse clone.
oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:28:39 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 SWTET_APP_001T READ WRITE NO 4 SWTET_TMR_001T READ ONLY NO 5 SWTET_APP_002T READ WRITE NO SQL> alter session set container=SWTET_APP_002T; Session altered. SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS'; Session altered. SQL> set lines 300 SQL> create table TEST_SWITCH (id int, name varchar(100), run_exec date); Table created. SQL> insert into TEST_SWITCH values (0, 'Before switch', sysdate); 1 row created. SQL> commit; Commit complete. SQL> select * from TEST_SWITCH; ID NAME RUN_EXEC ---------- ---------------------------------------------------------------------------------------------------- ------------------- 0 Before switch 06/06/2025 15:33:32 SQL>
Switchover to CHZ3
Let’s switchover from CHZ2 to CHZ3.
oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] dgh DGMGRL for Linux: Release 19.0.0.0.0 - Production on Fri Jun 6 15:36:06 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved. Welcome to DGMGRL, type "help" for information. DGMGRL> connect sys@SWIT001T_CHZ2 Password: Connected to "SWIT001T_CHZ2" Connected as SYSDBA. DGMGRL> show configuration lag Configuration - swit001t_dgconf Protection Mode: MaxPerformance Members: SWIT001T_CHZ2 - Primary database SWIT001T_CHZ3 - Physical standby database Transport Lag: 0 seconds (computed 0 seconds ago) Apply Lag: 0 seconds (computed 0 seconds ago) Fast-Start Failover: Disabled Configuration Status: SUCCESS (status updated 46 seconds ago) DGMGRL> validate database SWIT001T_CHZ3 Database Role: Physical standby database Primary Database: SWIT001T_CHZ2 Ready for Switchover: Yes Ready for Failover: Yes (Primary Running) Managed by Clusterware: SWIT001T_CHZ2: YES SWIT001T_CHZ3: YES Temporary Tablespace File Information: SWIT001T_CHZ2 TEMP Files: 5 SWIT001T_CHZ3 TEMP Files: 3 DGMGRL> switchover to SWIT001T_CHZ3; Performing switchover NOW, please wait... Operation requires a connection to database "SWIT001T_CHZ3" Connecting ... Connected to "SWIT001T_CHZ3" Connected as SYSDBA. New primary database "SWIT001T_CHZ3" is opening... Oracle Clusterware is restarting database "SWIT001T_CHZ2" ... Connected to "SWIT001T_CHZ2" Switchover succeeded, new primary is "swit001t_chz3" DGMGRL> DGMGRL> show configuration lag Configuration - swit001t_dgconf Protection Mode: MaxPerformance Members: SWIT001T_CHZ3 - Primary database SWIT001T_CHZ2 - Physical standby database Transport Lag: 0 seconds (computed 1 second ago) Apply Lag: 0 seconds (computed 1 second ago) Fast-Start Failover: Disabled Configuration Status: SUCCESS (status updated 7 seconds ago)
Create table in source PDB in new primary cl02
To check how the sparse clone would react after the switchover, let’s add some data in the source PDB on the new primary (CHZ3).
oracle@exacc-cl02n1:~/ [SWIT001T1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:40:16 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 SWTET_APP_001T READ WRITE NO 4 SWTET_TMR_001T MOUNTED 5 SWTET_APP_002T MOUNTED SQL> alter session set container=SWTET_APP_001T; Session altered. SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS'; Session altered. SQL> set lines 300 SQL> create table TEST_ON_STDBY (id int, name varchar(100), run_exec date); Table created. SQL> insert into TEST_ON_STDBY values (0, 'Cl08 is primary', sysdate); 1 row created. SQL> commit; Commit complete. SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 3 SWTET_APP_001T READ WRITE NO SQL> select instance_name, host_name from v$instance; INSTANCE_NAME HOST_NAME ---------------- ---------------------------------------------------------------- SWIT001T1 exacc-cl02n1 SQL> select * from TEST_ON_STDBY; ID NAME RUN_EXEC ---------- ---------------------------------------------------------------------------------------------------- ------------------- 0 Cl08 is primary 06/06/2025 15:43:19 SQL>
Check table on new standby side cl01
We can check data on the standby side PDB on the cluster cl01.
oracle@exacc-cl01n1:~/ [grinf19] SWIT001T1 ******************************************** INSTANCE_NAME : SWIT001T1 DB_NAME : SWIT001T DB_UNIQUE_NAME : SWIT001T_CHZ2 STATUS : OPEN READ ONLY WITH APPLY LOG_MODE : ARCHIVELOG USERS/SESSIONS : 4/19 DATABASE_ROLE : PHYSICAL STANDBY FLASHBACK_ON : YES FORCE_LOGGING : YES VERSION : 19.26.0.0.0 CDB_ENABLED : YES PDBs : PDB$SEED SWTET_APP_001T SWTET_APP_002T SWTET_TMR_001T ******************************************** PDB color: pdbname=mount, pdbname=open read-write, pdbname=open read-only Statustime: 2025-06-06 15:45:25 oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:45:27 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 SWTET_APP_001T READ ONLY NO 4 SWTET_TMR_001T MOUNTED 5 SWTET_APP_002T READ ONLY NO
The Master Read Only PDB is in MOUNTED status, and opening it in READ ONLY mode is not possible. This makes sense, as it is the reference for the sparse clone and we created it with the STANDBYS=NONE option.
SQL> alter pluggable database SWTET_TMR_001T open read only instances=all; alter pluggable database SWTET_TMR_001T open read only instances=all * ERROR at line 1: ORA-01173: data dictionary indicates missing data file from system tablespace
Let’s check whether the source PDB has the newly created data on the new standby side cl01.
SQL> alter session set container=SWTET_APP_001T; Session altered. SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS'; Session altered. SQL> set lines 300 SQL> select instance_name, host_name from v$instance; INSTANCE_NAME HOST_NAME ---------------- ---------------------------------------------------------------- SWIT001T1 exacc-cl01n1 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 3 SWTET_APP_001T READ ONLY NO SQL> select * from TEST_ON_STDBY; ID NAME RUN_EXEC ---------- ---------------------------------------------------------------------------------------------------- ------------------- 0 Cl08 is primary 06/06/2025 15:43:19 SQL>
Yes, we have the new data.
Run switchover back
Let’s switch back to cl01.
DGMGRL> show configuration lag Configuration - swit001t_dgconf Protection Mode: MaxPerformance Members: SWIT001T_CHZ3 - Primary database SWIT001T_CHZ2 - Physical standby database Transport Lag: 0 seconds (computed 0 seconds ago) Apply Lag: 0 seconds (computed 0 seconds ago) Fast-Start Failover: Disabled Configuration Status: SUCCESS (status updated 11 seconds ago) DGMGRL> validate database SWIT001T_CHZ2; Database Role: Physical standby database Primary Database: SWIT001T_CHZ3 Ready for Switchover: Yes Ready for Failover: Yes (Primary Running) Managed by Clusterware: SWIT001T_CHZ3: YES SWIT001T_CHZ2: YES Temporary Tablespace File Information: SWIT001T_CHZ3 TEMP Files: 3 SWIT001T_CHZ2 TEMP Files: 5 DGMGRL> switchover to SWIT001T_CHZ2; Performing switchover NOW, please wait... Operation requires a connection to database "SWIT001T_CHZ2" Connecting ... Connected to "SWIT001T_CHZ2" Connected as SYSDBA. New primary database "SWIT001T_CHZ2" is opening... Oracle Clusterware is restarting database "SWIT001T_CHZ3" ... Connected to "SWIT001T_CHZ3" Switchover succeeded, new primary is "swit001t_chz2" DGMGRL> show configuration lag Configuration - swit001t_dgconf Protection Mode: MaxPerformance Members: SWIT001T_CHZ2 - Primary database SWIT001T_CHZ3 - Physical standby database Transport Lag: 0 seconds (computed 5 seconds ago) Apply Lag: 0 seconds (computed 5 seconds ago) Fast-Start Failover: Disabled Configuration Status: SUCCESS (status updated 20 seconds ago)
On the standby side, this would look like:
oracle@exacc-cl02n1:~/ [SWIT001T1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:51:33 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 SWTET_APP_001T READ ONLY NO 4 SWTET_TMR_001T MOUNTED 5 SWTET_APP_002T MOUNTED SQL>
On the primary side, this would look like:
oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:52:24 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 SWTET_APP_001T READ WRITE NO 4 SWTET_TMR_001T MOUNTED 5 SWTET_APP_002T READ WRITE NO
Reopening the Master Read Only would fail.
SQL> alter pluggable database SWTET_TMR_001T open read only instances=all; alter pluggable database SWTET_TMR_001T open read only instances=all * ERROR at line 1: ORA-01173: data dictionary indicates missing data file from system tablespace
Add data in source PDB
Adding new data in the source PDB is, of course, successful.
SQL> alter session set container=SWTET_APP_001T; Session altered. SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS'; Session altered. SQL> set lines 300 SQL> select instance_name, host_name from v$instance; INSTANCE_NAME HOST_NAME ---------------- ---------------------------------------------------------------- SWIT001T1 exacc-cl01n1 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 3 SWTET_APP_001T READ WRITE NO SQL> insert into TEST_ON_STDBY values (1,'primary back to cl07',sysdate); 1 row created. SQL> commit; Commit complete. SQL> select * from TEST_ON_STDBY; ID NAME RUN_EXEC ---------- ---------------------------------------------------------------------------------------------------- ------------------- 0 Cl08 is primary 06/06/2025 15:43:19 1 primary back to cl07 06/06/2025 15:54:54 SQL>
And the data is also available in the source PDB on the standby side.
oracle@exacc-cl02n1:~/ [SWIT001T1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:56:00 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 SWTET_APP_001T READ ONLY NO 4 SWTET_TMR_001T MOUNTED 5 SWTET_APP_002T MOUNTED SQL> alter session set container=SWTET_APP_001T; Session altered. SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS'; Session altered. SQL> set lines 300 SQL> select instance_name, host_name from v$instance; INSTANCE_NAME HOST_NAME ---------------- ---------------------------------------------------------------- SWIT001T1 exacc-cl02n1 SQL> select * from TEST_ON_STDBY; ID NAME RUN_EXEC ---------- ---------------------------------------------------------------------------------------------------- ------------------- 0 Cl08 is primary 06/06/2025 15:43:19 1 primary back to cl07 06/06/2025 15:54:54 SQL>
Add new data in the sparse clone
Let’s add new data in the sparse clone.
oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:57:00 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 SWTET_APP_001T READ WRITE NO 4 SWTET_TMR_001T MOUNTED 5 SWTET_APP_002T READ WRITE NO SQL> alter session set container=SWTET_APP_002T; Session altered. SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS'; Session altered. SQL> set lines 300 SQL> select instance_name, host_name from v$instance; INSTANCE_NAME HOST_NAME ---------------- ---------------------------------------------------------------- SWIT001T1 exacc-cl01n1 SQL> select * from TEST_SWITCH; ID NAME RUN_EXEC ---------- ---------------------------------------------------------------------------------------------------- ------------------- 0 Before switch 06/06/2025 15:33:32 SQL> insert into TEST_SWITCH values (1,'After switchover and back cl07', sysdate); 1 row created. SQL> commit; Commit complete. SQL> select * from TEST_SWITCH; ID NAME RUN_EXEC ---------- ---------------------------------------------------------------------------------------------------- ------------------- 0 Before switch 06/06/2025 15:33:32 1 After switchover and back cl07 06/06/2025 15:59:01 SQL>
Adding new data into the sparse clone is possible.
Drop the sparse clone
The command to drop the sparse clone would be:
SQL> ALTER PLUGGABLE DATABASE SWTET_APP_002T CLOSE IMMEDIATE INSTANCES=ALL;
SQL> DROP PLUGGABLE DATABASE SWTET_APP_002T INCLUDING DATAFILES;
On the primary side, this would now look like:
oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 17:25:32 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 SWTET_APP_001T READ WRITE NO 4 SWTET_TMR_001T MOUNTED SQL>
On the standby side, this would now look like:
oracle@exacc-cl02n1:~/ [SWIT001T1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 17:26:03 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 SWTET_APP_001T READ ONLY NO 4 SWTET_TMR_001T MOUNTED SQL>
Refresh Master Read Only
We can refresh the Master Read Only with the following commands.
SQL> drop pluggable database SWTET_TMR_001T including datafiles;
SQL> CREATE PUBLIC DATABASE LINK SWIT001T_e9085be389a239c888e5c0baf4f50405@SWIT001T_CHZ2 CONNECT TO C##USER IDENTIFIED BY ".................." USING 'SWIT001T_CHZ2';
SQL> CREATE PLUGGABLE DATABASE SWTET_TMR_001T FROM SWTET_APP_001T@"SWIT001T_e9085be389a239c888e5c0baf4f50405@SWIT001T_CHZ2"
KEYSTORE IDENTIFIED BY "..." INCLUDING SHARED KEY
PARALLEL 8
SERVICE_NAME_CONVERT=('SWTET_APP_001T','SWTET_TMR_001T')
REFRESH MODE MANUAL
STANDBYS=NONE;
SQL> ALTER PLUGGABLE DATABASE SWTET_TMR_001T OPEN READ ONLY INSTANCES=ALL;
SQL> DROP PUBLIC DATABASE LINK SWIT001T_e9085be389a239c888e5c0baf4f50405@SWIT001T_CHZ2;
Create the sparse clone again
And we can create the sparse clone again with the same command.
SQL> CREATE PLUGGABLE DATABASE SWTET_APP_002T FROM SWTET_TMR_001T
KEYSTORE IDENTIFIED BY "..." INCLUDING SHARED KEY
PARALLEL 8
CREATE_FILE_DEST='+SPRC2'
SERVICE_NAME_CONVERT=('SWTET_TMR_001T','SWTET_APP_002T','SWTET_APP_001T','SWTET_APP_002T')
SNAPSHOT COPY
STANDBYS=NONE;
SQL> ALTER PLUGGABLE DATABASE SWTET_APP_002T OPEN INSTANCES=ALL;
On the primary side, this looks like:
oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 17:38:12 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 SWTET_APP_001T READ WRITE NO 4 SWTET_APP_002T READ WRITE NO 5 SWTET_TMR_001T READ ONLY NO SQL>
And on the standby side:
oracle@exacc-cl02n1:~/ [SWIT001T1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 17:38:38 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 SWTET_APP_001T READ ONLY NO 4 SWTET_APP_002T MOUNTED 5 SWTET_TMR_001T MOUNTED SQL>
To wrap up…
Sparse clones work fine in a Data Guard environment. They are only available locally, and therefore only when the CDB that initially hosted the sparse clone PDB has the primary role. Switching to the other site is not a problem: the sparse clone becomes available again once you switch back. Finally, we can easily refresh the Master Read Only and create the sparse clone again.
The article What will happen to your sparse clones during a switchover on ExaCC? appeared first on dbi Blog.
dbaascli database move failing with ORA-01691 on ExaCC
I recently had to move a database from a 19.23 to a 19.26 dbhome on an ExaCC, which also means the database gets patched. The dbaascli database move command failed during the datapatch step with the following error:
DBD::Oracle::st execute failed: ORA-01691: unable to extend lob segment SYS.SYS_LOB0000023009C00008$$ by 1024 in tablespace SYSTEM
With this blog I would like to share my troubleshooting with you and how I resolved the problem.
Moving the database to the new dbhome
In order to move/patch the database, I used the dbaascli database move command.
[root@exacc-cl01n1 ~]# dbaascli database move --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_3 --dbname CDBTEST DBAAS CLI version 25.1.1.0.0 Executing command database move --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_3 --dbname CDBTEST Job id: 17f6fef5-2143-4bb7-8007-a27cf5232099 Session log: /var/opt/oracle/log/CDBTEST/database/move/dbaastools_2025-05-05_02-47-51-PM_237874.log Loading PILOT... Session ID of the current execution is: 15920 Log file location: /var/opt/oracle/log/CDBTEST/database/move/pilot_2025-05-05_02-47-55-PM_238225 ----------------- Running initialization job Completed initialization job ----------------- Running validate_user_input job Completed validate_user_input job ----------------- Running validate_database job [WARNING] [DBAAS-70643] Following pluggable databases '{CDBTEST=[PDB_001T]}' do not have services configured. ACTION: Make sure to configure the services of pluggable databases so that pluggable databases are started after the database bounce. Completed validate_database job ----------------- Running validate_creg_file_existence job Completed validate_creg_file_existence job ----------------- Running validate_source_home job Completed validate_source_home job ----------------- Running validate_major_version job Completed validate_major_version job ----------------- Running validate_oracle_home_type job Completed validate_oracle_home_type job ----------------- Running check_target_source_home_not_same job Completed check_target_source_home_not_same job ----------------- Running validate_home_existence job Completed validate_home_existence job ----------------- Running validate_home_consistency job Completed validate_home_consistency job ----------------- Running validate_home_options job Completed validate_home_options job ----------------- Running validate_disk_space job Completed validate_disk_space job ----------------- Acquiring write lock: cdbtest Acquiring read lock: _u02_app_oracle_product_19.0.0.0_dbhome_1 Acquiring read lock: _u02_app_oracle_product_19.0.0.0_dbhome_3 Running pre_patch_lock_manager job Completed pre_patch_lock_manager job ----------------- Running enable_wallet_root job Completed enable_wallet_root job ----------------- Running copy_config_files job Completed copy_config_files job ----------------- Running stop_database_instance-exacc-cl01n1 job Completed stop_database_instance-exacc-cl01n1 job ----------------- Running update_database_resource-exacc-cl01n1 job Completed update_database_resource-exacc-cl01n1 job ----------------- Running start_database_instance-exacc-cl01n1 job Completed start_database_instance-exacc-cl01n1 job ----------------- Running exacs_post_patch_node_updation job Completed exacs_post_patch_node_updation job ----------------- Running update_dba_directories job Completed update_dba_directories job ----------------- Running datapatch_and_recompile_invalid_objects job Datapatch execution on database 'CDBTEST' is in progress Execution of datapatch_and_recompile_invalid_objects failed [FATAL] [DBAAS-60022] Command '/u02/app/oracle/product/19.0.0.0/dbhome_3/OPatch/datapatch -verbose' has failed on nodes [exacc-cl01n1]. 
*MORE DETAILS* Result of node:exacc-cl01n1 [Interim patch 28318139 (ORA-31003 ERROR WHEN IMPORTING FULL DATABASE IN PARALLEL):, Binary registry: Not installed, PDB PDB_001T: Rolled back successfully on 06-MAR-22 10.57.36.095921 AM, PDB CDB$ROOT: Not installed, PDB PDB$SEED: Not installed, Interim patch 28555193 (DBMS_METADATA.GET_DDL CAPTURE INCORRECT STORAGE OPTIONS OF THE XML COLUMN ON GTT):, Binary registry: Not installed, PDB PDB_001T: Rolled back successfully on 06-MAR-22 10.57.37.111786 AM, PDB CDB$ROOT: Not installed, PDB PDB$SEED: Not installed, Interim patch 30944402 (SELECT FROM MASTER TABLE RUNS SLOW DURING TABLE_DATA EXPORT WHEN THERE ARE MANY SUBPARTITIONS):, Binary registry: Not installed, PDB PDB_001T: Rolled back successfully on 19-AUG-23 09.58.11.269998 AM, PDB CDB$ROOT: Not installed, PDB PDB$SEED: Not installed, Interim patch 32067171 (OJVM RELEASE UPDATE: 19.10.0.0.210119 (32067171)):, Binary registry: Not installed, PDB PDB_001T: Rolled back successfully on 06-MAR-22 10.57.35.074017 AM, PDB CDB$ROOT: Not installed, PDB PDB$SEED: Not installed, Interim patch 33192694 (OJVM RELEASE UPDATE: 19.13.0.0.211019 (33192694)):, Binary registry: Not installed, PDB PDB_001T: Rolled back successfully on 19-AUG-23 09.58.11.262033 AM, PDB CDB$ROOT: Not installed, PDB PDB$SEED: Not installed, Interim patch 33522539 (MERGE ON DATABASE RU 19.13.0.0.0 OF 33280027):, Binary registry: Not installed, PDB PDB_001T: Rolled back successfully on 19-AUG-23 09.58.11.271219 AM, PDB CDB$ROOT: Not installed, PDB PDB$SEED: Not installed, Interim patch 34786990 (OJVM RELEASE UPDATE: 19.18.0.0.230117 (34786990)):, Binary registry: Not installed, PDB PDB_001T: Rolled back successfully on 17-AUG-24 08.59.17.242116 AM, PDB CDB$ROOT: Not installed, PDB PDB$SEED: Not installed, Interim patch 34972375 (DATAPUMP BUNDLE PATCH 19.18.0.0.0):, Binary registry: Not installed, PDB PDB_001T: Rolled back successfully on 17-AUG-24 08.59.19.038902 AM, PDB CDB$ROOT: Not installed, PDB PDB$SEED: Not installed, Interim patch 36199232 (OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232)):, Binary registry: Not installed, PDB PDB_001T: Applied successfully on 17-AUG-24 08.59.17.362926 AM, PDB CDB$ROOT: Applied successfully on 18-APR-24 10.08.56.234640 AM, PDB PDB$SEED: Applied successfully on 18-APR-24 10.19.33.039940 AM, Interim patch 36420641 (DATAPUMP BUNDLE PATCH 19.23.0.0.0):, Binary registry: Not installed, PDB PDB_001T: Applied successfully on 17-AUG-24 09.00.21.159959 AM, PDB CDB$ROOT: Applied successfully on 02-OCT-24 11.47.34.300679 AM, PDB PDB$SEED: Applied successfully on 02-OCT-24 11.50.34.101031 AM, Interim patch 37102264 (OJVM RELEASE UPDATE: 19.26.0.0.250121 (37102264)):, Binary registry: Installed, PDB PDB_001T: Not installed, PDB CDB$ROOT: Not installed, PDB PDB$SEED: Not installed, Interim patch 37470729 (DATAPUMP BUNDLE PATCH 19.26.0.0.0):, Binary registry: Installed, PDB PDB_001T: Not installed, PDB CDB$ROOT: Not installed, PDB PDB$SEED: Not installed, Current state of release update SQL patches:, Binary registry:, 19.26.0.0.0 Release_Update 250118124854: Installed, PDB PDB_001T:, Applied 19.23.0.0.0 Release_Update 240406004238 successfully on 17-AUG-24 08.59.50.930598 AM, PDB CDB$ROOT:, Applied 19.23.0.0.0 Release_Update 240406004238 successfully on 18-APR-24 10.19.15.621942 AM, PDB PDB$SEED:, Applied 19.23.0.0.0 Release_Update 240406004238 successfully on 18-APR-24 10.28.06.019862 AM, Adding patches to installation queue and performing prereq checks...done, Installation queue:, For the following PDBs: 
CDB$ROOT PDB$SEED PDB_001T, The following interim patches will be rolled back:, 36199232 (OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232)), 26749785 (PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK), 27605010 (DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999), 36420641 (DATAPUMP BUNDLE PATCH 19.23.0.0.0), Patch 37260974 (Database Release Update : 19.26.0.0.250121 (37260974)):, Apply from 19.23.0.0.0 Release_Update 240406004238 to 19.26.0.0.0 Release_Update 250118124854, The following interim patches will be applied:, 37102264 (OJVM RELEASE UPDATE: 19.26.0.0.250121 (37102264)), 26749785 (PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK), 27605010 (DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999), 37470729 (DATAPUMP BUNDLE PATCH 19.26.0.0.0), DBD::Oracle::st execute failed: ORA-01691: unable to extend lob segment SYS.SYS_LOB0000023009C00008$$ by 1024 in tablespace SYSTEM, ORA-06512: at line 2 (DBD ERROR: OCIStmtExecute) [for Statement "BEGIN, INSERT INTO sys.dba_registry_sqlpatch_ru_info, (patch_id,, patch_uid,, patch_descriptor,, ru_version,, ru_build_description,, ru_build_timestamp,, patch_directory), VALUES, (:patch_id,, :patch_uid,, :patch_descriptor,, :ru_version,, :ru_build_description,, TO_TIMESTAMP(:ru_build_timestamp, 'YYMMDDHH24MISS'),, :patch_directory);, COMMIT;, END;" with ParamValues: :patch_descriptor=OCIXMLTypePtr=SCALAR(0x4b4dcb0), :patch_directory='PK.........y4Z�&(.�...........37260974_rollback.sql�.]s.ֺ.��+t��ng�.o����v.�?2�����F..�$ . [���.@.q���dQ.�Tqc��û��z���rv�8.�������.���..����(M�ϳ��´.]%�,u �z��.�8���.��.s�f�����,J�Wa.�a6�y�.zp�������,..�?�l���t���..�/fW�(j�}pp�8^��..~w�N.�?�..�.��.(Z���e��.������b���.�������K6E�+�?��A.ln._�.=.�qp����������.������ǫ.qﻋ��.���.�.β �\/x��.M�Q�_�q�ý7.-N�Tory�|���y�&E..y��i��..)�N.�.7��.��������oqxz|�|st�X...������G'��.N�_�.?..*�.�..../����.��?|�pWn.�*��;;;.}��.�.�+����..fJ�`., 7+��z�>.? �Y�.�.��...�.,...N��9��.��.Y�rp����O��8_���O��%�.E�F�.��t�|��.��.�Q|�L�����y.��[�'�|.���.�.y�ë.�:xy�..pS>��.|U.��r�/j�?=:�\...������.�������ry�����QY�J.�.}N��.�.S%������j�^j.=7T�Z�e.W�z.�?..�>��?�w�.{w9{������/Nj�������f])�.�����.O�*� ��϶!����?�.��&)���I�tzv�x��., �LU.�IV�..7.R7.y��&h{*�3.�mJm}n(, _�...��2�Lݰ�r*ç..�S7ղ@�V��.1>���'�2�.��., ...�9.m)s.�S.ȴ�T��K.�.$!'�..;�Z�.#T�.��(�f.��.�i.pBK��h�fX*ߩ...��i�, �.*d�.�e..6.Oy!*�.:�.MQ4s���.�...�SW��|��...F�OC�,e�..�"/��:5�...', :patch_id='37260974', :patch_uid='26040769', :ru_build_description="Release_Update", :ru_build_timestamp="250118124854", :ru_version="19.26.0.0.0"] at /u02/app/oracle/product/19.0.0.0/dbhome_3/sqlpatch/sqlpatch.pm line 5337., Please refer to MOS Note 1609718.1 and/or the invocation log, /u02/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_258023_2025_05_05_14_50_06/sqlpatch_invocation.log, for information on how to resolve the above errors., SQL Patching tool complete on Mon May 5 14:50:40 2025] Exit code of the operation:1 Releasing lock: cdbtest Releasing lock: _u02_app_oracle_product_19.0.0.0_dbhome_1 Releasing lock: _u02_app_oracle_product_19.0.0.0_dbhome_3 *** Executing jobs which need to be run always... *** ----------------- Running post_patch_lock_manager job Completed post_patch_lock_manager job ******** PLUGIN EXECUTION FAILED ******** To resume this failed session, run the following command: dbaascli database move --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_3 --dbname CDBTEST --sessionID 15920 --resume
As we can see, the command failed with the following error:
[FATAL] [DBAAS-60022] Command '/u02/app/oracle/product/19.0.0.0/dbhome_3/OPatch/datapatch -verbose' has failed on nodes [exacc-cl01n1].
37470729 (DATAPUMP BUNDLE PATCH 19.26.0.0.0), DBD::Oracle::st execute failed: ORA-01691: unable to extend lob segment SYS.SYS_LOB0000023009C00008$$ by 1024 in tablespace SYSTEM
Troubleshooting
So let’s troubleshoot!
The database has been moved to the new dbhome, and the dbaascli command failed at the datapatch step.
oracle@exacc-cl01n1:/u02/app/oracle/local/dmk/etc/ [rdbms1900] CDBTEST1 2025-05-05_14-56-12::DMK_ENV.pm::read_oratab ::INFO ==> changed ORACLE_HOME found in oratab: +ASM1 - old=/u01/app/19.0.0.0/grid, new=/u02/app/23.0.0.0/gridhome_1 ********************************** INSTANCE_NAME : CDBTEST1 DB_NAME : CDBTEST DB_UNIQUE_NAME : CDBTEST_CHZ2 STATUS : OPEN READ WRITE LOG_MODE : ARCHIVELOG USERS/SESSIONS : 2/7 DATABASE_ROLE : PRIMARY FLASHBACK_ON : YES FORCE_LOGGING : YES VERSION : 19.26.0.0.0 CDB_ENABLED : YES PDBs : PDB_001T PDB$SEED ********************************** PDB color: pdbname=mount, pdbname=open read-write, pdbname=open read-only Statustime: 2025-05-05 14:56:12 oracle@exacc-cl01n1:/u02/app/oracle/local/dmk/etc/ [CDBTEST1 (CDB$ROOT)] echo $ORACLE_HOME /u02/app/oracle/product/19.0.0.0/dbhome_3
And the PDB is open in read/write mode.
oracle@exacc-cl01n1:/u02/app/oracle/local/dmk/etc/ [CDBTEST1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Mon May 5 14:56:31 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 4 PDB_001T READ WRITE NO SQL>
Let’s try to run datapatch manually, expecting it to fail with the same error, but it might help me extract the datapatch output more easily.
oracle@exacc-cl01n1:/u02/app/oracle/product/19.0.0.0/dbhome_3/ [CDBTEST1 (CDB$ROOT)] cd OPatch/ oracle@exacc-cl01n1:/u02/app/oracle/product/19.0.0.0/dbhome_3/OPatch/ [CDBTEST1 (CDB$ROOT)] ./datapatch -verbose SQL Patching tool version 19.26.0.0.0 Production on Mon May 5 14:57:00 2025 Copyright (c) 2012, 2025, Oracle. All rights reserved. Log file for this invocation: /u02/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_309940_2025_05_05_14_57_00/sqlpatch_invocation.log Connecting to database...OK Gathering database info...done Note: Datapatch will only apply or rollback SQL fixes for PDBs that are in an open state, no patches will be applied to closed PDBs. Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation (Doc ID 1585822.1) Bootstrapping registry and package to current versions...done Determining current state...done Current state of interim SQL patches: Interim patch 26749785 (PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK): Binary registry: Not installed PDB PDB_001T: Rolled back successfully on 17-AUG-24 08.59.18.040319 AM PDB CDB$ROOT: Not installed PDB PDB$SEED: Not installed Interim patch 26749785 (PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK): Binary registry: Not installed PDB PDB_001T: Applied successfully on 17-AUG-24 08.59.56.646477 AM PDB CDB$ROOT: Applied successfully on 02-OCT-24 11.46.11.005940 AM PDB PDB$SEED: Applied successfully on 02-OCT-24 11.47.47.794119 AM ... ... ... DBD::Oracle::st execute failed: ORA-01691: unable to extend lob segment SYS.SYS_LOB0000023009C00008$$ by 1024 in tablespace SYSTEM ORA-06512: at line 2 (DBD ERROR: OCIStmtExecute) [for Statement "BEGIN INSERT INTO sys.dba_registry_sqlpatch_ru_info (patch_id, patch_uid, patch_descriptor, ru_version, ru_build_description, ru_build_timestamp, patch_directory) VALUES (:patch_id, :patch_uid, :patch_descriptor, :ru_version, :ru_build_description, TO_TIMESTAMP(:ru_build_timestamp, 'YYMMDDHH24MISS'), :patch_directory); COMMIT; END;" with ParamValues: :patch_descriptor=OCIXMLTypePtr=SCALAR(0x4611c98), :patch_directory='PK.........y4Z▒&(.▒...........37260974_rollback.sql▒.]s.ֺ.▒▒+t▒▒ng▒.o▒▒▒▒v.▒͌?2▒▒▒▒▒F..▒$ . [▒▒▒.@.q▒▒▒dQ.▒Tqc▒▒û▒▒z▒▒▒rv▒8.▒▒▒▒▒▒▒.▒▒▒..▒▒▒▒(M▒ϳ▒▒´.]%▒,u ▒z▒▒.▒8▒▒▒.▒▒.s▒f▒▒▒▒▒,J▒Wa.▒a6▒y▒.zp▒▒▒▒▒▒▒,..▒?▒l▒▒▒t▒▒▒..▒/fW▒(j▒}pp▒8^▒▒..~w▒N.▒?▒..▒.▒▒.(Z▒▒▒e▒▒.▒▒▒▒▒▒b▒▒▒.▒▒▒▒▒▒▒K6E▒+▒?▒▒A.ln._▒.=.▒ܸqp▒▒▒▒▒▒▒▒▒▒.▒▒▒▒▒▒ǫ.qﻋ▒▒.▒▒▒.▒.β ▒\/x▒.M▒Q▒_▒q▒ý7.-N▒Tory▒|▒▒▒y▒&E..y▒▒i▒▒..)▒N.▒.7▒▒.▒▒▒▒▒▒▒▒oqxz|▒|st▒X...▒▒▒▒▒▒G'▒▒.N▒_▒.?..*▒.▒..../▒▒▒▒.▒▒?|▒pWn.▒*▒▒;;;.}▒▒.▒.▒+▒▒▒▒..fJ▒`. 7+▒▒z▒>.? ▒Y▒.▒.▒▒...▒.,...N▒▒9▒.▒▒.Y▒rp▒▒ⷷ▒▒O▒8_▒▒▒O▒▒%▒.E▒F▒.▒▒t▒|▒▒.▒▒.▒Q|▒L▒▒▒▒y.▒▒[▒'▒|.▒▒▒.▒.y▒ë.▒:xy▒..pS>▒▒.|U.▒▒r▒/j▒?=:▒\...▒▒▒▒▒▒.▒▒▒▒▒▒▒ry▒▒▒▒▒QY▒J.▒.}N▒▒.▒.S%▒▒▒▒▒▒j▒^j.=7T▒Z▒e.W▒z.▒?..▒>▒▒?▒w▒.{w9{▒▒▒▒▒▒/Nj▒▒▒▒▒▒▒f])▒.▒▒▒▒.O▒*▒ ▒▒϶!▒▒▒▒?▒.▒▒&)▒▒▒I▒tzv▒x▒▒. ▒LU.▒IV▒..7.R7.y▒▒&h{*▒3.▒mJm}n( _▒...▒▒2▒Lݰ▒r*ç..▒S7ղ@▒V▒▒.1>▒▒▒'▒2▒.▒▒. ...▒9.m)s.▒S.ȴ▒T▒▒K.▒.$!'▒..;▒Z▒.#T▒.▒▒(▒f.▒▒.▒i.pBK▒▒h▒fX*ߩ...▒▒i▒ ▒.*d▒.▒e..6.Oy!*▒.:▒.MQ4s▒▒▒.▒...▒SW▒▒|▒▒...F▒OC▒,e▒..▒"/▒▒:5▒...', :patch_id="37260974", :patch_uid="26040769", :ru_build_description="Release_Update", :ru_build_timestamp="250118124854", :ru_version="19.26.0.0.0"] at /u02/app/oracle/product/19.0.0.0/dbhome_3/sqlpatch/sqlpatch.pm line 5337. Please refer to MOS Note 1609718.1 and/or the invocation log /u02/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_309940_2025_05_05_14_57_00/sqlpatch_invocation.log for information on how to resolve the above errors. SQL Patching tool complete on Mon May 5 14:57:34 2025
Let’s check SYSTEM tablespace usage in the CDB$ROOT.
oracle@exacc-cl01n1:~/ [CDBTEST1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Mon May 5 15:02:00 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> @qdbstbssize.sql PL/SQL procedure successfully completed. Nb Extent Segment Alloc. Space Max. Percent Block Name files Type Mgmnt Mgmnt Size (GB) Free (GB) Size (GB) used % size Log Encrypt Compress ------------------------- ----- ---- ------ ------- ---------- ---------- ---------- ------- ----- --- ------- -------- SYSAUX 1 DATA LM-SYS AUTO 5.95 4.47 32768.00 .00 8 KB YES YES NO SYSTEM 1 DATA LM-SYS MANUAL 1.95 .62 32768.00 .00 8 KB YES YES NO TEMP 1 TEMP LM-UNI MANUAL 1.00 2.31 512.00 -.26 8 KB NO YES NO UNDOTBS1 1 UNDO LM-SYS MANUAL 1.95 1.89 512.00 .01 8 KB YES YES NO USERS 1 DATA LM-SYS AUTO 1.00 .93 32768.00 .00 8 KB YES YES NO ----- ---------- ---------- ---------- TOTAL 5 11.85 10.22 99328.00 SQL>
We can see all is ok. Let’s check the same in the PDB.
SQL> alter session set container=PDB_001T; Session altered. SQL> @qdbstbssize.sql ... ... ... SYSAUX 1 DATA LM-SYS AUTO 82.62 74.52 32768.00 .02 8 KB YES YES NO SYSTEM 1 DATA LM-SYS MANUAL 24.71 .10 32768.00 .08 8 KB YES YES NO TEMP 1 TEMP LM-UNI MANUAL 114.26 342.75 32768.00 -.70 8 KB NO YES NO UNDO 1 UNDO LM-SYS MANUAL 684.57 673.58 32768.00 .03 8 KB YES YES NO USERS 1 DATA LM-SYS AUTO 20.00 19.93 20.00 .33 8 KB YES YES NO ----- ---------- ---------- ---------- TOTAL 2732 7101.04 1385.79 ########## 2732 rows selected. SQL>
It’s all good as well. And the pdb$seed?
SQL> alter session set container=PDB$SEED; Session altered. SQL> @qdbstbssize.sql PL/SQL procedure successfully completed. Nb Extent Segment Alloc. Space Max. Percent Block Name files Type Mgmnt Mgmnt Size (GB) Free (GB) Size (GB) used % size Log Encrypt Compress ------------------------- ----- ---- ------ ------- ---------- ---------- ---------- ------- ----- --- ------- -------- SYSAUX 1 DATA LM-SYS AUTO .59 .08 32768.00 .00 8 KB YES NO NO SYSTEM 1 DATA LM-SYS MANUAL .59 .00 32768.00 .00 8 KB YES NO NO TEMP 1 TEMP LM-UNI MANUAL .18 .00 32768.00 .00 8 KB NO NO NO UNDOTBS1 1 UNDO LM-SYS MANUAL .59 .33 512.00 .05 8 KB YES NO NO ----- ---------- ---------- ---------- TOTAL 4 1.95 .41 98816.00 SQL>
All is ok as well.
Let’s check where this SYS_LOB belongs to. cdb$root? PDB? pdb$seed?
SQL> alter session set container=PDB$SEED; Session altered. SQL> select tablespace_name, table_name from dba_lobs where segment_name='SYS_LOB0000023009C00008$$' and owner='SYS'; TABLESPACE_NAME TABLE_NAME ------------------------------ -------------------------------------------------------------------------------------------------------------------------------- SYSTEM REGISTRY$SQLPATCH_RU_INFO SQL> alter session set container=cdb$root; Session altered. SQL> select tablespace_name, table_name from dba_lobs where segment_name='SYS_LOB0000023009C00008$$' and owner='SYS'; no rows selected SQL> alter session set container=PDB_001T; Session altered. SQL> select tablespace_name, table_name from dba_lobs where segment_name='SYS_LOB0000023009C00008$$' and owner='SYS'; no rows selected SQL>
So the LOB belongs to the pdb$seed.
From the alert log I can see:
PDB$SEED(2):Pluggable database PDB$SEED opening in read write
PDB$SEED(2):Autotune of undo retention is turned on.
PDB$SEED(2):This instance was first to open pluggable database PDB$SEED (container=2)
PDB$SEED(2):queued attach DA request 0xb2325ed8 for pdb 2, ospid 3457
2025-05-05T15:11:01.061406+02:00
Domain Action Reconfiguration started (domid 2, new da inc 19, cluster inc 2)
Instance 1 is attaching to domain 2
Global Resource Directory partially frozen for domain action
Domain Action Reconfiguration complete (total time 0.0 secs)
2025-05-05T15:11:01.067485+02:00
PDB$SEED(2):Endian type of dictionary set to little
PDB$SEED(2):Undo initialization recovery: Parallel FPTR failed: start:1139322373 end:1139322379 diff:6 ms (0.0 seconds)
PDB$SEED(2):Undo initialization recovery: err:0 start: 1139322373 end: 1139322392 diff: 19 ms (0.0 seconds)
PDB$SEED(2):[3457] Successfully onlined Undo Tablespace 2.
PDB$SEED(2):Undo initialization online undo segments: err:0 start: 1139322392 end: 1139322638 diff: 246 ms (0.2 seconds)
PDB$SEED(2):Undo initialization finished serial:0 start:1139322373 end:1139322643 diff:270 ms (0.3 seconds)
PDB$SEED(2):Database Characterset for PDB$SEED is AL32UTF8
PDB$SEED(2):SUPLOG: Set PDB SUPLOG SGA at PDB OPEN, old 0x0, new 0x0 (no suplog)
PDB$SEED(2):Opening pdb with no Resource Manager plan active
2025-05-05T15:11:01.806446+02:00
PDB$SEED(2):joxcsys_required_dirobj_exists: directory object exists with required path /u02/app/oracle/product/19.0.0.0/dbhome_3/javavm/admin/, pid 3457 cid 2
Pluggable database PDB$SEED opened read write
2025-05-05T15:11:03.081311+02:00
QPI: opatch file present, opatch
QPI: qopiprep.bat file present
QPI: Cleaning and refreshing metadata..
2025-05-05T15:11:09.187099+02:00
QPI: Cleaning and refreshing metadata..
2025-05-05T15:11:23.765174+02:00
PDB$SEED(2):ORA-1691: unable to extend lobsegment SYS.SYS_LOB0000023009C00008$$ by 1024 in tablespace SYSTEM [PDB$SEED] (ospid 3221)
2025-05-05T15:11:24.482146+02:00
PDB$SEED(2):Pluggable database PDB$SEED closing
So we can see that once the pdb$seed is opened in read write mode, datapatch fails because it is not possible to extend the SYS.SYS_LOB0000023009C00008$$ lob segment.
Let’s look into the ASM to see if there is enough disk space.
ASMCMD> lsdg
State    Type  Rebal  Sector  Logical_Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH  N         512             512   4096  4194304   49072128      56708                0           18902              0  Y             DATAC4/
MOUNTED  HIGH  N         512             512   4096  4194304   16356864   13063776                0         4354592              0  N             RECOC4/
MOUNTED  HIGH  N         512             512   4096  4194304  163568640  163563936                0        54521312              0  N             SPRC4/
ASMCMD>
There is not a lot of usable space for the DATA Disk Group, certainly not enough, but still some space. Let's try to open the pdb$seed in read write mode and create a table.
SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO SQL> shutdown immediate Pluggable Database closed. SQL> alter pluggable database open read write; Pluggable database altered. SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ WRITE NO SQL> create table xxx (n number) tablespace system; Table created. SQL> insert into xxx values (10); 1 row created. SQL> commit; Commit complete. SQL> drop table xxx purge; Table dropped.
This is possible. Let’s try to resize system data file.
SQL> alter database datafile 2 resize 2048M;
alter database datafile 2 resize 2048M
*
ERROR at line 1:
ORA-01237: cannot extend datafile 2
ORA-01110: data file 2: '+DATAC4/CDBTEST_CHZ2/165C22D4CFFCA759E0638A534664B46C/DATAFILE/system.615.1181302705'
ORA-17505: ksfdrsz:1 Failed to resize file to size 262144 blocks
ORA-15041: diskgroup "DATAC4" space exhausted
This is not possible, which confirms that the DATA Disk Group does not have enough available space.
I will first reopen the pdb$seed in read only mode.
SQL> shutdown immediate Pluggable Database closed. SQL> alter pluggable database open read only; Pluggable database altered. SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO SQL>
The following MOS note explains the problem: one or more disks having 0 free MB.
Datapatch failed with ORA-01691: unable to extend lob segment on MGMTDB (Doc ID 2352895.1)
Let’s check, connecting to the +ASM instance.
SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
+ASM1

SQL> select group_number, name from v$asm_diskgroup;

GROUP_NUMBER NAME
------------ ------------------------------
           1 DATAC4
           2 RECOC4
           3 SPRC4

SQL> select DISK_NUMBER, FREE_MB from v$asm_disk where group_number=1 and FREE_MB=0;

DISK_NUMBER    FREE_MB
----------- ----------
         48          0

SQL> select count(*) from v$asm_disk where group_number=1 and FREE_MB <> 0;

  COUNT(*)
----------
        95

SQL>
Yes, this is exactly the problem! Disk Number 48 has got 0 free MB.
Resolution
Let's resolve the problem.
Through the OCI console, I added 1 TB to the ASM. The usable space of the disk groups then became as follows:
ASMCMD> lsdg
State    Type  Rebal  Sector  Logical_Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH  Y         512             512   4096  4194304   50959872    1932164                0          644054              0  Y             DATAC4/
MOUNTED  HIGH  N         512             512   4096  4194304   16356864   13058556                0         4352852              0  N             RECOC4/
MOUNTED  HIGH  N         512             512   4096  4194304  163568640  163563936                0        54521312              0  N             SPRC4/
ASMCMD>
I checked to ensure there are no more disks with 0 free MB.
[grid@exacc-cl01n1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems on Mon May 5 16:28:11 2025
Version 23.7.0.25.01

Copyright (c) 1982, 2024, Oracle. All rights reserved.

Connected to:
Oracle Database 23ai Enterprise Edition Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems
Version 23.7.0.25.01

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
+ASM1

SQL> select DISK_NUMBER, FREE_MB from v$asm_disk where group_number=1 and FREE_MB=0;

no rows selected

SQL> select count(*) from v$asm_disk where group_number=1 and FREE_MB <> 0;

  COUNT(*)
----------
        96

SQL>
I resumed the dbaascli move operation, which then completed successfully.
[root@exacc-cl01n1 ~]# dbaascli database move --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_3 --dbname CDBTEST --sessionID 15920 --resume DBAAS CLI version 25.1.1.0.0 Executing command database move --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_3 --dbname CDBTEST --sessionID 15920 --resume Job id: 7e59d18a-ca1b-4808-a927-c3ce9131b968 Session log: /var/opt/oracle/log/CDBTEST/database/move/dbaastools_2025-05-05_05-52-50-PM_350983.log Loading PILOT... Session ID of the current execution is: 15932 Log file location: /var/opt/oracle/log/CDBTEST/database/move/pilot_2025-05-05_05-52-54-PM_351354 ----------------- Running initialization job Completed initialization job ----------------- Acquiring write lock: cdbtest Acquiring read lock: _u02_app_oracle_product_19.0.0.0_dbhome_1 Acquiring read lock: _u02_app_oracle_product_19.0.0.0_dbhome_3 Running datapatch_and_recompile_invalid_objects job Datapatch execution on database 'CDBTEST' is in progress Datapatch execution on database 'CDBTEST' is complete Recompilation of invalid objects on database 'CDBTEST' is in progress Recompilation of invalid objects on database 'CDBTEST' is complete Completed datapatch_and_recompile_invalid_objects job ----------------- Running update_pdb_status job Completed update_pdb_status job ----------------- Running post_patch_lock_manager job Completed post_patch_lock_manager job Releasing lock: cdbtest Releasing lock: _u02_app_oracle_product_19.0.0.0_dbhome_1 Releasing lock: _u02_app_oracle_product_19.0.0.0_dbhome_3 ----------------- Running invoke_backup_asst job Completed invoke_backup_asst job ----------------- Running post_move_validation job Completed post_move_validation job ----------------- Running generate_dbsystem_details job Acquiring native write lock: global_dbsystem_details_generation Releasing native lock: global_dbsystem_details_generation Completed generate_dbsystem_details job dbaascli execution completed You have new mail in /var/spool/mail/root
And I checked the database patch versions for the cdb$root and the pdb.
oracle@exacc-cl01n1:~/ [rdbms1900] CDBTEST1 2025-05-05_18-01-21::DMK_ENV.pm::read_oratab ::INFO ==> changed ORACLE_HOME found in oratab: +ASM1 - old=/u01/app/19.0.0.0/grid, new=/u02/app/23.0.0.0/gridhome_1 ********************************** INSTANCE_NAME : CDBTEST1 DB_NAME : CDBTEST DB_UNIQUE_NAME : CDBTEST_CHZ2 STATUS : OPEN READ WRITE LOG_MODE : ARCHIVELOG USERS/SESSIONS : 2/7 DATABASE_ROLE : PRIMARY FLASHBACK_ON : YES FORCE_LOGGING : YES VERSION : 19.26.0.0.0 CDB_ENABLED : YES PDBs : PDB_001T PDB$SEED ********************************** PDB color: pdbname=mount, pdbname=open read-write, pdbname=open read-only Statustime: 2025-05-05 18:01:22 oracle@exacc-cl01n1:~/ [CDBTEST1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Mon May 5 18:01:25 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> set lines 300 SQL> set tab off SQL> set pages 500 SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS'; Session altered. SQL> col action_time for a30 SQL> select action_time, patch_id, source_version, target_version, status, description from dba_registry_sqlpatch; ACTION_TIME PATCH_ID SOURCE_VERSION TARGET_VERSION STATUS DESCRIPTION ------------------------------ ---------- --------------- --------------- ------------------------- ---------------------------------------------------------------------------------------------------- 18-APR-24 10.08.56.234640 AM 36199232 19.1.0.0.0 19.23.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232) 18-APR-24 10.19.15.621942 AM 36233263 19.1.0.0.0 19.23.0.0.0 SUCCESS Database Release Update : 19.23.0.0.240416 (36233263) 02-OCT-24 11.46.11.005940 AM 26749785 19.23.0.0.0 19.23.0.0.0 SUCCESS PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK 02-OCT-24 11.46.13.343508 AM 27605010 19.23.0.0.0 19.23.0.0.0 SUCCESS DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999 02-OCT-24 11.47.34.300679 AM 36420641 19.23.0.0.0 19.23.0.0.0 SUCCESS DATAPUMP BUNDLE PATCH 19.23.0.0.0 05-MAY-25 05.27.25.226657 PM 36199232 19.23.0.0.0 19.26.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232) 05-MAY-25 05.27.26.115643 PM 37102264 19.23.0.0.0 19.26.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.26.0.0.250121 (37102264) 05-MAY-25 05.27.26.697687 PM 26749785 19.23.0.0.0 19.26.0.0.0 SUCCESS PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK 05-MAY-25 05.27.28.734483 PM 27605010 19.23.0.0.0 19.26.0.0.0 SUCCESS DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999 05-MAY-25 05.27.28.839820 PM 36420641 19.23.0.0.0 19.26.0.0.0 SUCCESS DATAPUMP BUNDLE PATCH 19.23.0.0.0 05-MAY-25 05.28.35.890001 PM 37260974 19.23.0.0.0 19.26.0.0.0 SUCCESS Database Release Update : 19.26.0.0.250121 (37260974) 05-MAY-25 05.28.44.283893 PM 26749785 19.23.0.0.0 19.26.0.0.0 SUCCESS PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK 05-MAY-25 05.28.46.272241 PM 27605010 19.23.0.0.0 19.26.0.0.0 SUCCESS DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999 05-MAY-25 05.29.22.101438 PM 37470729 19.23.0.0.0 19.26.0.0.0 SUCCESS DATAPUMP BUNDLE PATCH 19.26.0.0.0 14 rows selected. SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 4 PDB_001T READ WRITE NO SQL> alter session set container=PDB_001T; Session altered. 
SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 4 PDB_001T READ WRITE NO SQL> select action_time, patch_id, source_version, target_version, status, description from dba_registry_sqlpatch; ACTION_TIME PATCH_ID SOURCE_VERSI TARGET_VERSI STATUS DESCRIPTION ------------------------------ ---------- ------------ ------------ -------------------- ---------------------------------------------------------------------------------------------------- 19-JUN-21 09.56.16.648813 AM 32067171 19.1.0.0.0 19.1.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.10.0.0.210119 (32067171) 19-JUN-21 09.56.16.641919 AM 32218454 19.1.0.0.0 19.10.0.0.0 SUCCESS Database Release Update : 19.10.0.0.210119 (32218454) 19-JUN-21 09.56.17.529234 AM 28318139 19.1.0.0.0 19.1.0.0.0 SUCCESS ORA-31003 ERROR WHEN IMPORTING FULL DATABASE IN PARALLEL 19-JUN-21 09.56.18.407598 AM 28555193 19.1.0.0.0 19.1.0.0.0 WITH ERRORS DBMS_METADATA.GET_DDL CAPTURE INCORRECT STORAGE OPTIONS OF THE XML COLUMN ON GTT 19-JUN-21 10.13.13.165620 AM 28555193 19.10.0.0.0 19.10.0.0.0 SUCCESS DBMS_METADATA.GET_DDL CAPTURE INCORRECT STORAGE OPTIONS OF THE XML COLUMN ON GTT 06-MAR-22 10.57.35.074017 AM 32067171 19.13.0.0.0 19.13.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.10.0.0.210119 (32067171) 06-MAR-22 10.57.39.351946 AM 33192694 19.10.0.0.0 19.10.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.13.0.0.211019 (33192694) 06-MAR-22 10.57.36.095921 AM 28318139 19.13.0.0.0 19.13.0.0.0 SUCCESS ORA-31003 ERROR WHEN IMPORTING FULL DATABASE IN PARALLEL 06-MAR-22 10.57.37.111786 AM 28555193 19.13.0.0.0 19.13.0.0.0 SUCCESS DBMS_METADATA.GET_DDL CAPTURE INCORRECT STORAGE OPTIONS OF THE XML COLUMN ON GTT 06-MAR-22 10.57.39.348199 AM 33192793 19.10.0.0.0 19.13.0.0.0 SUCCESS Database Release Update : 19.13.0.0.211019 (33192793) 06-MAR-22 10.57.40.368792 AM 30944402 19.10.0.0.0 19.10.0.0.0 SUCCESS SELECT FROM MASTER TABLE RUNS SLOW DURING TABLE_DATA EXPORT WHEN THERE ARE MANY SUBPARTITIONS 06-MAR-22 10.57.41.384551 AM 33522539 19.10.0.0.0 19.10.0.0.0 SUCCESS MERGE ON DATABASE RU 19.13.0.0.0 OF 33280027 19-AUG-23 09.58.11.262033 AM 33192694 19.18.0.0.0 19.18.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.13.0.0.211019 (33192694) 19-AUG-23 09.58.13.172768 AM 34786990 19.13.0.0.0 19.13.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.18.0.0.230117 (34786990) 19-AUG-23 09.58.11.269998 AM 30944402 19.18.0.0.0 19.18.0.0.0 SUCCESS SELECT FROM MASTER TABLE RUNS SLOW DURING TABLE_DATA EXPORT WHEN THERE ARE MANY SUBPARTITIONS 19-AUG-23 09.58.11.271219 AM 33522539 19.18.0.0.0 19.18.0.0.0 SUCCESS MERGE ON DATABASE RU 19.13.0.0.0 OF 33280027 19-AUG-23 09.58.13.169832 AM 34765931 19.13.0.0.0 19.18.0.0.0 SUCCESS DATABASE RELEASE UPDATE : 19.18.0.0.230117 (REL-JAN230131) (34765931) 19-AUG-23 09.58.13.340768 AM 26749785 19.13.0.0.0 19.13.0.0.0 SUCCESS PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK 19-AUG-23 09.58.13.346063 AM 27605010 19.13.0.0.0 19.13.0.0.0 SUCCESS DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999 19-AUG-23 09.58.14.253669 AM 34972375 19.13.0.0.0 19.13.0.0.0 SUCCESS DATAPUMP BUNDLE PATCH 19.18.0.0.0 17-AUG-24 08.59.17.242116 AM 34786990 19.18.0.0.0 19.23.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.18.0.0.230117 (34786990) 17-AUG-24 08.59.17.362926 AM 36199232 19.18.0.0.0 19.23.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232) 17-AUG-24 08.59.18.040319 AM 26749785 19.18.0.0.0 19.23.0.0.0 SUCCESS PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK 17-AUG-24 
08.59.18.971587 AM 27605010 19.18.0.0.0 19.23.0.0.0 SUCCESS DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999 17-AUG-24 08.59.19.038902 AM 34972375 19.18.0.0.0 19.23.0.0.0 SUCCESS DATAPUMP BUNDLE PATCH 19.18.0.0.0 17-AUG-24 08.59.50.930598 AM 36233263 19.18.0.0.0 19.23.0.0.0 SUCCESS Database Release Update : 19.23.0.0.240416 (36233263) 17-AUG-24 08.59.56.646477 AM 26749785 19.18.0.0.0 19.23.0.0.0 SUCCESS PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK 17-AUG-24 08.59.57.504574 AM 27605010 19.18.0.0.0 19.23.0.0.0 SUCCESS DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999 17-AUG-24 09.00.21.159959 AM 36420641 19.18.0.0.0 19.23.0.0.0 SUCCESS DATAPUMP BUNDLE PATCH 19.23.0.0.0 05-MAY-25 05.31.01.733241 PM 36199232 19.23.0.0.0 19.26.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232) 05-MAY-25 05.31.03.413696 PM 37102264 19.23.0.0.0 19.26.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.26.0.0.250121 (37102264) 05-MAY-25 05.31.04.821299 PM 26749785 19.23.0.0.0 19.26.0.0.0 SUCCESS PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK 05-MAY-25 05.31.06.205197 PM 27605010 19.23.0.0.0 19.26.0.0.0 SUCCESS DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999 05-MAY-25 05.31.06.379357 PM 36420641 19.23.0.0.0 19.26.0.0.0 SUCCESS DATAPUMP BUNDLE PATCH 19.23.0.0.0 05-MAY-25 05.32.25.947558 PM 37260974 19.23.0.0.0 19.26.0.0.0 SUCCESS Database Release Update : 19.26.0.0.250121 (37260974) 05-MAY-25 05.32.32.616612 PM 26749785 19.23.0.0.0 19.26.0.0.0 SUCCESS PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK 05-MAY-25 05.32.33.746247 PM 27605010 19.23.0.0.0 19.26.0.0.0 SUCCESS DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999 05-MAY-25 05.35.48.324333 PM 37470729 19.23.0.0.0 19.26.0.0.0 SUCCESS DATAPUMP BUNDLE PATCH 19.26.0.0.0 38 rows selected.
And finally I checked PDB violations.
oracle@exacc-cl01n1:/u02/app/oracle/product/19.0.0.0/dbhome_3/OPatch/ [CDBTEST1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Mon May 5 18:10:56 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle. All rights reserved.

Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 PDB_001T                       READ WRITE NO
SQL> alter session set container=PDB_001T;

Session altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         4 PDB_001T                       READ WRITE NO
SQL> col name for a15
SQL> col message for a120
SQL> col cause for a50
SQL> set lines 300
SQL> col cause for a20
SQL> col type for a10
SQL> select name,cause,type,message,status from PDB_PLUG_IN_VIOLATIONS where status <> 'RESOLVED';

NAME            CAUSE                TYPE       MESSAGE                                                                                           STATUS
--------------- -------------------- ---------- ------------------------------------------------------------------------------------------------ ---------------------------
PDB_001T        OPTION               WARNING    Database option APS mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.       PENDING
PDB_001T        OPTION               WARNING    Database option CONTEXT mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.   PENDING
PDB_001T        OPTION               WARNING    Database option DV mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.        PENDING
PDB_001T        OPTION               WARNING    Database option OLS mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.       PENDING
PDB_001T        OPTION               WARNING    Database option ORDIM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.     PENDING
PDB_001T        OPTION               WARNING    Database option OWM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.       PENDING
PDB_001T        OPTION               WARNING    Database option SDO mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.       PENDING
PDB_001T        OPTION               WARNING    Database option XOQ mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.       PENDING

8 rows selected.

SQL>
Yes, that’s it, all is ok now.
To wrap up…
The database move to a higher-version dbhome on the ExaCC failed at the datapatch step because one of the ASM disks had 0 free MB. Adding space to the ASM resolved the issue.
The article dbaascli database move failing with ORA-01691 on ExaCC appeared first on dbi Blog.
What brings the M-Files and Microsoft 365 Strategic Partnership

On the 1st of July 2025 M-Files and Microsoft announced their strategic partnership with the headline “M-Files and Microsoft Announce Strategic Partnership Revolutionising Document Management”
I was wondering what this will mean for us as an M-Files Partner and for our customers. What are the new features and benefits? In this blog article I will share the information and my perspective on this fantastic announcement.
One of the hot topics is the fundamental shift in how enterprise content is managed within the M-Files and Microsoft ecosystems.
Main Features
Thanks to the strategic partnership between M-Files and Microsoft, M-Files will now be able to use features such as the ones below. This unlocks the native Microsoft 365 capabilities for M-Files AI-curated content.
- Microsoft 365 and its API-only service
- SharePoint Embedded
- Document co-authoring
This powerful solution is native to Microsoft 365, offering the best of both worlds.
M-Files strengths:
- Automation of M-Files, including its strengths in metadata
- Use one client to access data and features to make work easier
Microsoft’s power:
- Copilot capabilities
- Collaboration tools
- Security integration

- Generative AI results based on the M-Files content
- Be assured that the results are accurate, relevant and can be trusted
- Lay the foundation for the AI journey
- Native document co-authoring with Microsoft applications
- Native collaboration with the trusted Microsoft tools
- M-Files automated content governance and permissions, with secure content remaining within the Microsoft security boundary
- Use of Microsoft Purview to apply compliance and governance policies
Requirements for M-Files and Microsoft 365:
- M-Files Cloud
- M-Files Business platform
- Microsoft 365 subscription
Desktop co-authoring is already enabled for M-Files Cloud customers with a Business Platform subscription. Detailed instructions on how to unlock all the benefits and capabilities will be available in August 2025. Stay tuned — I will share further details as soon as they are available!
Conclusion
As has been noted in previous blogs, the direction of travel is towards incorporating AI in the optimum way, and ensuring optimal security boundaries.
This announcement is important because it provides a centralised, scalable environment for managing and supporting your business requirements. Topics include effortlessly finding documents, managing the entire document lifecycle, effective collaboration, gaining actionable insights faster, and ensuring robust governance. Using AI to achieve all this will boost productivity and reduce risk for your business.
If you would like to hear the perspective of a Microsoft architect on this strategic partnership, you can watch this video.
I hope this blog gives you a good idea of the strategic partnership between M-Files and Microsoft. Keep an eye out for more blog posts about M-Files on this channel, and be sure to check out the other blogs around the M-Files topic.
Do not hesitate to get in touch with us if you have any questions, or contact me directly if you prefer.
The article What brings the M-Files and Microsoft 365 Strategic Partnership appeared first on dbi Blog.
Determining Optimal Index Key Compression Length
In 2008, Richard Foote wrote this still excellent 4-part guide to index key compression.
- Index Compression Part I (Low)
- Index Compression Part II (Down Is The New Up)
- Index Compression Part III (2+2=5)
- Index Compression Part IV (Packt Like Sardines In a Crushd Tin Box)
He started with this comment, that I think is just as valid as it was then:
“Index compression is perhaps one of the most under used and neglected index options available. It has the potential to substantially reduce the overall size of non-Unique indexes and multi-column unique indexes, in some scenarios dramatically so... Not only will it potentially save storage, but if the resultant index contains fewer leaf blocks, that’s potentially fewer LIOs and from the Cost Based Optimizer’s point of view, potentially a cheaper execution plan option.”
Index key compression is a highly effective option for reducing index size and improving index performance.
“Oracle stores each distinct combination of compressed column values found within a specific index leaf block in a ‘Prefix’ table within the leaf block and assigns each combination a unique prefix number.” If the prefix length (the number of leading columns to be compressed) is too great, then the prefix table will contain more entries, ultimately one for every row in the index. The compressed index could end up being larger than the uncompressed index! If the prefix length is too small, then you might not get as much compression as you might with a longer prefix length.
In other words, there is a sweet spot where you will achieve optimal compression. That sweet spot can vary from no compression to compressing all the columns. It will vary from index to index, from partition to partition, and potentially over time as the data in an index changes.
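For reference, the compression prefix length is simply specified when an index is built or rebuilt. A minimal sketch, using the PS_PSTREENODE index from the example below:

-- Rebuild the index, compressing the first 2 leading key columns
ALTER INDEX ps_pstreenode REBUILD COMPRESS 2;
-- Rebuild the index with compression removed
ALTER INDEX ps_pstreenode REBUILD NOCOMPRESS;

These are the same statements that the test script below issues for every index on the table, once for every possible prefix length.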
Test Every Option to Determine Optimal Compression
One way to determine optimal compression is through exhaustive testing. Each index could be rebuilt at each possible compression prefix length, the size of each resulting index compared, and the performance of application processes tested.
The following PL/SQL script (available on GitHub) rebuilds each index on a named table at each possible compression length, collects statistics and stores them in a table.
REM index_compression_test.sql
create table gfc_index_compression_stats
(table_name varchar2(128)
,index_name varchar2(128)
,num_rows number
,last_analyzed date
,prefix_length number
,blevel number
,leaf_blocks number
,avg_leaf_blocks_per_key number
,avg_data_blocks_per_key number
,clustering_factor number
,constraint gfc_index_compression_stats_pk primary key (table_name, index_name, prefix_length)
);
DECLARE
l_table_name VARCHAR2(128) := 'PSTREENODE';
l_num_cols INTEGER;
l_sql CLOB;
e_invalid_compress_length EXCEPTION;
PRAGMA EXCEPTION_INIT(e_invalid_compress_length,-25194);
BEGIN
FOR i IN (
SELECT table_name, index_name, column_position prefix_length FROM user_ind_columns
WHERE table_name = l_table_name
UNION
SELECT table_name, index_name, 0 FROM user_indexes
WHERE table_name = l_table_name
ORDER BY table_name, index_name, prefix_length DESC
) LOOP
IF i.prefix_length > 0 THEN
l_sql := 'ALTER INDEX '||i.index_name||' REBUILD COMPRESS '||i.prefix_length;
ELSE
l_sql := 'ALTER INDEX '||i.index_name||' REBUILD NOCOMPRESS';
END IF;
BEGIN
dbms_output.put_line(l_sql);
EXECUTE IMMEDIATE l_sql;
dbms_stats.gather_index_stats(user,i.index_name);
MERGE INTO gfc_index_compression_stats u
USING (SELECT * FROM user_indexes WHERE table_name = i.table_name And index_name = i.index_name) s
ON (u.table_name = s.table_name AND u.index_name = s.index_name AND u.prefix_length = NVL(s.prefix_length,0))
WHEN MATCHED THEN UPDATE SET u.num_rows = s.num_rows, u.last_analyzed = s.last_analyzed, u.blevel = s.blevel, u.leaf_blocks = s.leaf_blocks, u.avg_leaf_blocks_per_key = s.avg_leaf_blocks_per_key, u.avg_data_blocks_per_key = s.avg_data_blocks_per_key, u.clustering_factor = s.clustering_factor
WHEN NOT MATCHED THEN INSERT (table_name, index_name, num_rows, last_analyzed, prefix_length, blevel, leaf_blocks, avg_leaf_blocks_per_key, avg_data_blocks_per_key, clustering_factor)
VALUES (s.table_name, s.index_name, s.num_rows, s.last_analyzed, NVL(s.prefix_length,0), s.blevel, s.leaf_blocks, s.avg_leaf_blocks_per_key, s.avg_data_blocks_per_key, s.clustering_factor);
EXCEPTION
WHEN e_invalid_compress_length THEN NULL;
END;
END LOOP;
END;
/
The following chart presents the data collected by the script above for the PSTREENODE table in PeopleSoft. The number of leaf blocks is graphed against the compression prefix length. The left-hand end of each line shows the uncompressed size of the index.
Rather than rebuild and measure every combination, Oracle can estimate the optimal prefix length directly. The ANALYZE INDEX … VALIDATE STRUCTURE command populates the INDEX_STATS view with, amongst other things, the optimal compression prefix length (OPT_CMPR_COUNT) and the estimated percentage of leaf blocks that would be saved at that length (OPT_CMPR_PCTSAVE). For example:
ANALYZE INDEX … VALIDATE STRUCTURE CASCADE;
ANALYZE TABLE emp VALIDATE STRUCTURE CASCADE;
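Once an index has been analysed, the estimate can be read back from INDEX_STATS, which only ever holds the result of the most recent ANALYZE in the current session. A minimal sketch:

ANALYZE INDEX ps_pstreenode VALIDATE STRUCTURE;

SELECT name
,      lf_blks           -- current number of leaf blocks
,      opt_cmpr_count    -- optimal compression prefix length
,      opt_cmpr_pctsave  -- estimated % of leaf blocks saved at that prefix length
FROM   index_stats;

Note that VALIDATE STRUCTURE without the ONLINE keyword locks the object against DML while it runs, and ONLINE validation does not populate INDEX_STATS. The calc_opt_comp.sql script that follows applies this analysis to every index (and index partition) on a named table, copying each INDEX_STATS result into a working table so that the figures are preserved.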
REM calc_opt_comp.sql
REM (c)Go-Faster Consultancy Ltd. 2014
REM see https://blog.psftdba.com/2016/02/implementing-index-compression-and.html
set serveroutput on autotrace off
clear columns
SPOOL calc_opt_comp
REM DROP TABLE sysadm.gfc_index_stats PURGE;
--create working storage table with same structure as INDEX_STATS
CREATE TABLE sysadm.gfc_index_stats
AS SELECT * FROM index_stats
WHERE 1=2
/
ALTER TABLE sysadm.gfc_index_stats
MODIFY name NOT NULL
/
CREATE UNIQUE INDEX sysadm.gfc_index_stats
ON sysadm.gfc_index_stats (name, partition_name)
/
undefine table_name
DECLARE
l_sql VARCHAR2(100);
l_owner VARCHAR2(8) := 'SYSADM';
l_table_name VARCHAR2(30) := '&&table_name';
BEGIN
FOR i IN (
SELECT i.index_name, ip.partition_name
FROM all_indexes i
, all_ind_partitions ip
WHERE i.index_type like '%NORMAL'
AND i.table_owner = l_owner
AND i.partitioned = 'YES'
AND i.table_name = l_table_name
AND ip.index_owner = i.owner
AND ip.index_name = i.index_name
AND ip.subpartition_count = 0
AND ip.segment_created = 'YES'
UNION
SELECT i.index_name, isp.subpartition_name
FROM all_indexes i
, all_ind_subpartitions isp
WHERE i.index_type like '%NORMAL'
AND i.table_owner = l_owner
AND i.partitioned = 'YES'
AND i.table_name = l_table_name
AND isp.index_owner = i.owner
AND isp.index_name = i.index_name
AND isp.segment_created = 'YES'
UNION
SELECT i.index_name, NULL
FROM all_indexes i
WHERE i.index_type like '%NORMAL'
AND i.table_owner = l_owner
AND i.table_name = l_table_name
AND i.partitioned = 'NO'
AND i.segment_created = 'YES'
MINUS
SELECT name, partition_name
FROM sysadm.gfc_index_stats
) LOOP
IF i.partition_name IS NULL THEN
l_sql := 'ANALYZE INDEX '||l_owner||'.'||i.index_name||' VALIDATE STRUCTURE';
ELSE
l_sql := 'ANALYZE INDEX '||l_owner||'.'||i.index_name||' PARTITION ('||i.partition_name||') VALIDATE STRUCTURE';
END IF;
dbms_output.put_line(l_sql);
EXECUTE IMMEDIATE l_sql;
DELETE FROM sysadm.gfc_index_stats g
WHERE EXISTS(
SELECT 'x'
FROM index_stats i
WHERE i.name = g.name
AND (i.partition_name = g.partition_name OR (i.partition_name IS NULL AND g.partition_name IS NULL)));
INSERT INTO sysadm.gfc_index_stats
SELECT i.* FROM index_stats i;
COMMIT;
END LOOP;
END;
/
…
The script produces reports of its analysis. The summary report shows the optimal compression length for each index and lists the columns that are and are not compressed. We can see that the result of the ANALYZE command agrees with the result of the previous test that rebuilt each index at each compression length and measured the size of the index.
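The key figures in the reports can essentially be reproduced with a simple query on the working table; a minimal sketch (the reporting queries in the actual script are more elaborate):

SELECT name, partition_name
,      lf_blks                                           -- current leaf blocks
,      opt_cmpr_count                                    -- optimal prefix length
,      opt_cmpr_pctsave                                  -- estimated saving %
,      ROUND(lf_blks*(1-opt_cmpr_pctsave/100)) est_blks  -- estimated compressed leaf blocks
FROM   sysadm.gfc_index_stats
ORDER BY name, partition_name;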
Summary Report
Opt Comp Weighted Est.
Prefix Num Average Comp
Table Name Index Name Length FREQ Parts Blocks Saving % Blocks
------------------ ------------------ -------- ---- ----- ------------ -------- ------------
Compress Columns Do Not Compress Columns
----------------------------------------------------------- -----------------------------------------------------------
PSTREENODE PSAPSTREENODE 4 1 0 2,048 41.0 1,208
SETID, TREE_NAME, EFFDT, TREE_BRANCH TREE_NODE, TREE_NODE_NUM, TREE_NODE_NUM_END, TREE_NODE_TYPE
PSBPSTREENODE 8 1 0 1,920 34.0 1,267
SETID, TREE_NAME, TREE_BRANCH, TREE_NODE_NUM, TREE_NODE, TR
EE_NODE_NUM_END, TREE_LEVEL_NUM, TREE_NODE_TYPE
PSDPSTREENODE 3 1 0 1,280 61.0 499
SETID, TREE_NAME, EFFDT PARENT_NODE_NUM
PSFPSTREENODE 2 1 0 1,024 67.0 338
TREE_NAME, EFFDT
PSGPSTREENODE 2 1 0 2,304 35.0 1,498
PARENT_NODE_NAME, TREE_NAME EFFDT, TREE_NODE, SETID
PSHPSTREENODE 2 1 0 2,048 24.0 1,556
TREE_NODE, TREE_NAME EFFDT, SETID, SETCNTRLVALUE, TREE_NODE_NUM
PSIPSTREENODE 3 1 0 1,152 .0 1,152
SETID, TREE_NAME, EFFDT TREE_NODE, TREE_NODE_NUM, TREE_NODE_NUM_END
PS_PSTREENODE 4 1 0 1,792 46.0 968
SETID, SETCNTRLVALUE, TREE_NAME, EFFDT TREE_NODE_NUM, TREE_NODE, TREE_BRANCH
****************** ----- ------------ ------------
sum 0 13,568 8,486
Compression of Partitioned Indexes
Detail Report
Opt Comp Est.
Prefix Saving Comp
Table Name Index Name Partition Name Length Blocks % Blocks
------------------ ------------------ ------------------------------ -------- ------------ ------ ------------
…
PSHJRNL_LN JRNL_LNH201612 1 143,264 142.0 -60,171
JRNL_LNH201712 0 88,192 74.0 22,930
JRNL_LNH201812 6 12,240 .0 12,240
JRNL_LNH201912 6 11,104 .0 11,104
…
JRNL_LNH202201 6 13,752 .0 13,752
JRNL_LNH202202 6 5,496 .0 5,496
JRNL_LNH202203 6 6,504 .0 6,504
JRNL_LNH202204 6 5,920 .0 5,920
JRNL_LNH202205 6 6,864 .0 6,864
JRNL_LNH202206 6 13,584 .0 13,584
JRNL_LNH202207 6 12,408 .0 12,408
JRNL_LNH202208 3 212,904 121.0 -44,710
JRNL_LNH202209 0 262,472 111.0 -28,872
JRNL_LNH202210 3 228,552 102.0 -4,571
****************** ------------ ------------
sum 1,625,328 574,550
As the detail report shows, the optimal prefix length can vary between partitions of the same index, and some partitions would be over-compressed at the prefix length chosen for the index as a whole. There are several options:
- Compress the entire index at a shorter prefix length. In this case, most of the partitions will be larger, the exception partitions will be smaller, but the net effect is that the index will be larger.
- Disable compression on the over-compressed partitions. Over-compressed indexes are generally only slightly larger than uncompressed indexes, so the benefit is probably only small.
- Leave compression at the length that is optimal for most of the partitions, accepting that a few partitions will be over-compressed. This usually results in the smallest index overall.
Dctm – Invalid/Missing XSRF token on D2
During an upgrade project to Documentum 23.4, I faced yet another interesting behavior from D2. The Documentum Server upgrade happened properly for all components like the Connection Broker, Repositories, D2, etc… The OTDS was also set up and available/working properly without too much trouble. However, the deployment of the D2 war file was a bit of another story.
As usual, we try to make sure that Documentum and all its components are set up as securely as possible. From a WebServer point of view, that includes a bunch of Best Practices that we add into our deployments / custom images (when using containers), and D2 is no exception. One such thing is, for example, to configure Tomcat and the D2 application to work only with cookies that have the "secure" and "httpOnly" flags. That is done in a few locations, but in recent versions of D2, there are additional parameters to help control this kind of behavior inside the ESAPI.properties file.
Note: there is often confusion about the "httpOnly" flag for cookies, so I think a quick reminder wouldn't hurt. The "secure" flag means that the cookie can only be sent over HTTPS (except when using localhost), so it's much harder to get access to it. The "httpOnly" flag, contrary to its name, doesn't mean that the cookie is only for HTTP communications; it means that it cannot be accessed by client-side scripts like JavaScript. Therefore, sensitive cookies should have both flags, so that they travel the network securely and, even once they arrive in the target client's browser, their access is protected.
Therefore, as a good practice, I went ahead and configured D2 as secure as I could, even before a 1st deployment, and that included these 4 parameters:
[tomcat@d2-0 war_prep]$ grep -B1 -E "ForceHttpOnly|ForceSecure" WEB-INF/classes/ESAPI.properties
# Force flags on cookies, if you use HttpUtilities to set cookies
HttpUtilities.ForceHttpOnlySession=true
HttpUtilities.ForceSecureSession=true
HttpUtilities.ForceHttpOnlyCookies=true
# Whlie doing a cross site access through https make the below flag to true
HttpUtilities.ForceSecureCookies=true
[tomcat@d2-0 war_prep]$
Once my D2 WAR file was ready and configured, I tried to deploy it on Tomcat. No errors/issues during the deployment/startup of D2. However, accessing the D2 UI ended up with a pretty and infinite loading logo of D2. You probably have all seen that happen at some point:

Nothing in the D2 logs (generated through the logback.xml or log4j2.properties), but in the Tomcat logs, I could see the stack related to that issue when I accessed the URL a few minutes after Tomcat was fully up&running:
2025-07-08 14:25:56,379 UTC INFO [main] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive [$CATALINA_HOME/webapps/D2/D2.war] has finished in [57,704] ms
2025-07-08 14:25:56,382 UTC INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["https-jsse-nio-8080"]
2025-07-08 14:25:56,400 UTC INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [57846] milliseconds
2025-07-08 14:29:36,966 UTC SEVERE [https-jsse-nio-8080-exec-42] org.apache.catalina.core.ApplicationContext.log Key[type=com.emc.x3.server.services.labels.RpcLabelServiceImpl, annotation=[none]]: An RpcTokenException was thrown while processing this call.
com.google.gwt.user.client.rpc.RpcTokenException: Invalid RPC token (Missing XSRF token: not on request, client IP=xxx.xxx.xxx.xxx)
at com.emc.x3.server.D2XsrfProtectedServiceServlet.validateXsrfToken(D2XsrfProtectedServiceServlet.java:33)
at com.google.gwt.user.server.rpc.AbstractXsrfProtectedServiceServlet.onAfterRequestDeserialized(AbstractXsrfProtectedServiceServlet.java:66)
at com.emc.x3.server.GuiceRemoteServiceServlet.processCall(GuiceRemoteServiceServlet.java:120)
at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:373)
at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:590)
at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:658)
at com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:290)
at com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:280)
at com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:184)
at com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:89)
at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:85)
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61)
at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)
at com.emc.x3.portal.server.filters.authc.X3OTDSAuthenticationFilter.executeChain(X3OTDSAuthenticationFilter.java:1106)
at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:154)
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)
at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)
at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:154)
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)
at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:458)
at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:373)
at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90)
at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83)
at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:387)
at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:370)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:154)
at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:121)
at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:133)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
at com.emc.x3.portal.server.filters.X3SessionTimeoutFilter.doFilter(X3SessionTimeoutFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:115)
at org.apache.catalina.valves.StuckThreadDetectionValve.invoke(StuckThreadDetectionValve.java:185)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:663)
at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:731)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:397)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:905)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1741)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1190)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63)
at java.base/java.lang.Thread.run(Thread.java:840)
I checked that the XSRF token was indeed generated, and it appeared to be present in the request, at least as shown in the browser's Network traces (Developer Tools). It was being generated and forwarded by the browser with both the "secure" and "httpOnly" flags. So, what was the issue then? It took me a bit of time, but I could pinpoint the issue to the ESAPI.properties file, and more specifically to the 4 properties I mentioned above that control the flags of both cookies and sessions. To be exact, I expected the "httpOnly" flag for the cookies to be the issue, since it would hide the XSRF_TOKEN from JavaScript on the client side. Keeping the content of the exploded WAR folder the same, I tried to switch this one parameter back to false, which is the default value:
[tomcat@d2-0 ~]$ esapi_file="$CATALINA_HOME/webapps/D2/WEB-INF/classes/ESAPI.properties"
[tomcat@d2-0 ~]$
[tomcat@d2-0 ~]$ grep -B1 -E "ForceHttpOnly|ForceSecure" ${esapi_file}
# Force flags on cookies, if you use HttpUtilities to set cookies
HttpUtilities.ForceHttpOnlySession=true
HttpUtilities.ForceSecureSession=true
HttpUtilities.ForceHttpOnlyCookies=true
# Whlie doing a cross site access through https make the below flag to true
HttpUtilities.ForceSecureCookies=true
[tomcat@d2-0 ~]$
[tomcat@d2-0 ~]$ sed -i 's,\(HttpUtilities.ForceHttpOnlyCookies\)=true,\1=false,' ${esapi_file}
[tomcat@d2-0 ~]$
[tomcat@d2-0 ~]$ grep -B1 -E "ForceHttpOnly|ForceSecure" ${esapi_file}
# Force flags on cookies, if you use HttpUtilities to set cookies
HttpUtilities.ForceHttpOnlySession=true
HttpUtilities.ForceSecureSession=true
HttpUtilities.ForceHttpOnlyCookies=false
# Whlie doing a cross site access through https make the below flag to true
HttpUtilities.ForceSecureCookies=true
[tomcat@d2-0 ~]$
After a restart of Tomcat, the issue was completely gone and the login to D2 through the OTDS was working successfully… Since I could deploy D2-REST, D2-Smartview and D2-Config with all 4 ESAPI.properties parameters set to "true", I asked OT if it was expected that only D2 has a problem with "HttpUtilities.ForceHttpOnlyCookies=true". After a few days of exchange, I got the feedback that it's not documented, but it is apparently required for D2 NOT to have the "httpOnly" flag because of the XSRF_TOKEN. They will look into creating a KB for that topic (update: it was created and it's available here: KB0845279). If you really need to have both flags set, then you will have no other choice than to switch to the new UI, D2-Smartview.
The article Dctm – Invalid/Missing XSRF token on D2 appeared first on dbi Blog.
Scheduling OpenSCAP reports in SUSE Multi-Linux Manager
As we've recently supported some customers on SUSE Multi Linux Manager, I'd like to share something which was not as easy to implement as it appeared to be in the first place. But first of all, what is SUSE Multi Linux Manager? It is basically a fork of Spacewalk, which was also used as the upstream project by the Red Hat Satellite product. But as Spacewalk was discontinued and the project on GitHub was archived, some people decided to fork it and started a new project called Uyuni, and this is now the upstream project for SUSE Multi Linux Manager. One of the great things about Uyuni and SUSE Multi Linux Manager is that they support various Linux distributions such as SUSE and openSUSE distributions, Red Hat, Rocky, Oracle and Alma Linux, Debian, Ubuntu, and also ancient versions of CentOS if you still depend on them.
I am not going into the setup or basic configuration, as you can already find related blogs here and more information in the documentation:
- Uyuni, an open-source configuration and infrastructure management solution for software-defined infrastructure (1) – The server (this is for version 4.x)
- Uyuni, an open-source configuration and infrastructure management solution for software-defined infrastructure (2) – Adding a client (this is for version 4.x)
- SUMA (SUSE Manager) is back and it has a new name: SUSE Multi-Linux
- SUSE Manager installation (this is for version 4)
What I want to look at in this post is automatic scheduling of OpenSCAP scans/reports. When this requirement came up, it seemed pretty easy to do, as you can easily schedule such a scan against a single system. As you can see below I have a Red Hat 9 system registered to my SUSE Multi Linux Server:

What you can easily do out of the box is to manually schedule an OpenSCAP scan:

Once the scan completes, it becomes visible under the “List Scan” tab and you can browse into the details:



Quite easy to do but still a manual action. As we wanted to have it automated the obvious choice was to create a “Recurring Action”:

This gives you the option to create and configure a "Recurring Action":

The issue is, there is no pre-defined “Custom State” which is scheduling an OpenSCAP scan:

The very same is true for “System Groups”, which you normally would use because otherwise you’d need to schedule that on every single system:

The last option seemed to be something under “Schedule” but this only gives you a list of what you already have:

At this point we were stuck and had to talk to SUSE support, which really was a great experience by the way. It turned out there is no easy, built-in way to do this. A feature request has been logged, but of course there is no guarantee that it will be implemented.
But, there is a workaround, not a very beautiful one, but at least it works. SUSE Multi Linux Manager (and Uyuni of course) come with an API and there is one call for triggering an OpenSCAP scan. Using this, a custom state channel can be created which in turn calls the API to trigger the scan:


The "SLS Contents" actually contains the code (Python in this case) which talks to the API and triggers the scan:
/usr/local/bin/schedule_xccdf_scan.py:
  file.managed:
    - user: root
    - group: root
    - mode: 755
    - contents: |
        #!/usr/bin/python3
        import xmlrpc.client
        client = xmlrpc.client.ServerProxy('https://suma.dwe.local/rpc/api')
        key = client.auth.login('admin', 'xxxx')
        client.system.scap.scheduleXccdfScan(
          key,
          1000010000,
          '/usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml',
          '--profile xccdf_org.ssgproject.content_profile_cis_server_l1'
        )
        client.auth.logout(key)

schedule_xccdf_scan:
  cmd.run:
    - name: /usr/local/bin/schedule_xccdf_scan.py
    - require:
      - file: /usr/local/bin/schedule_xccdf_scan.py
I am not going into the code itself, as it should be easy to understand. The important part is the system ID (1000010000 in this example) passed to scheduleXccdfScan. This defines the system you want the scan to happen on (you can also provide an array of systems, see the API documentation linked above).
As soon as you have this, you can schedule this automatically as a recurring action on either the system itself, or a group of systems in “System Groups”:



Not as easy as it could be, and the systems are still hard coded in the Python code, but at least we have something that works. Hope that helps.
The article Scheduling OpenSCAP reports in SUSE Multi-Linux Manager appeared first on dbi Blog.
SQL Server 2025 – Optimized Locking
Within the past few days I was investigating one of the new features of SQL Server 2025: Optimized Locking. I was curious about the capabilities, the behaviour, as well as the limits of the feature. Optimized Locking is based on two primary components:
- Transaction ID (TID)
- Lock after qualification (LAQ)
Here we have the transaction ID working as a unique identifier for a transaction. Each row which is modified by this transaction will be labelled with its transaction ID. This produces only one single lock on the TID, which is used instead of many key or RID locks. To be precise: Update and Exclusive Locks will be placed, but released immediately without waiting for the transaction to be committed.
This behaviour helps the lock manager enormously to keep the number of locks it has to maintain at a minimum, and thus saves a lot of space (memory).
Lock after qualification (LAQ), on the other hand, provides a silent qualification of rows affected by an update in the background, without the need to place Shared Locks on them while scanning through. Only if a row has been qualified, meaning it will be affected by this update, will an attempt be made to place an Update Lock.
In order to benefit from this mechanism, Read Committed Snapshot Isolation (RCSI) must be enabled on database-level.
My Demo environment looks as follows:
- SQL Server 2025 CTP 2.0 (17.0.700.9)
- SQL Server Management Studio 21 (21.3.6) with Copilot activated
My colleague Stéphane Haby wrote blog posts about SQL Server 2025, for example this one:
In the meantime, SQL Server 2025 CTP 2.1 has been released by Microsoft, with a few improvements explained on BOL:
https://learn.microsoft.com/en-us/sql/sql-server/what-s-new-in-sql-server-2025?view=sql-server-ver17
Copilot is not part of the game when we discuss Optimized Locking, but as AI is omnipresent these days, I want to mention the blog post of my colleague Steven Naudet where he describes the enabling process for Copilot in SSMS:
Now, back to business, back to Optimized Locking – Let’s see this performance tuning feature in action!
First things first – I've created a new database called "OptimizedLocking" and checked what features are in place. To determine if Optimized Locking is enabled on a database, you can use either the function DATABASEPROPERTYEX or grab the information from sys.databases directly:
-- Query the dedicated DATABASEPROPERTYEX for Optimized Locking
USE [OptimizedLocking]
GO
SELECT DATABASEPROPERTYEX(DB_NAME(), 'IsOptimizedLockingOn') AS IsOptimizedLockingOn
GO
USE [master]
GO
SELECT name AS DatabaseName, is_read_committed_snapshot_on, is_optimized_locking_on, is_accelerated_database_recovery_on
FROM sys.databases
WHERE name = 'OptimizedLocking'
GO

Why do I additionally request the columns "is_read_committed_snapshot_on" and "is_accelerated_database_recovery_on"?
Well, the latter is absolutely necessary to get Optimized Locking to work, and RCSI enables lock after qualification (LAQ), which I will show you a little bit later.
Now it’s time to create a simple table within the database “OptimizedLocking”:
-- Create a table to show locks
USE [OptimizedLocking]
GO
DROP TABLE IF EXISTS T1
GO
CREATE TABLE T1
(
T1_ID INT NOT NULL,
Value INT
);
INSERT INTO T1 (T1_ID, Value)
VALUES (1,10),(2,20),(3,30);
GO
SELECT * FROM dbo.T1
GO
SELECT DB_NAME(database_id) AS DatabaseName, OBJECT_NAME(object_id) AS TableName, index_type_desc FROM sys.dm_db_index_physical_stats(DB_ID('OptimizedLocking'), OBJECT_ID('dbo.T1'), NULL, NULL, 'DETAILED')
GO
The content of the table T1 looks as follows – and it's of course a Heap, because I neither added a PRIMARY KEY constraint to any of its columns nor specified a CLUSTERED INDEX:

I ran two concurrent update statements: the first changes the row with ID = 1 (in the left pane of SSMS) and the second tries to change the row with ID = 2 (in the middle pane). Within the pane on the right I placed the statement that shows the locks which are present.
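The exact statements are only visible in the screenshots, so here is a minimal sketch of what the three panes roughly contain (the updated values are assumptions for illustration):

-- Session 1 (left pane): update row 1 and leave the transaction open
BEGIN TRANSACTION;
    UPDATE dbo.T1 SET Value = 11 WHERE T1_ID = 1;

-- Session 2 (middle pane): update row 2 while session 1 is still open
BEGIN TRANSACTION;
    UPDATE dbo.T1 SET Value = 21 WHERE T1_ID = 2;

-- Session 3 (right pane): show the locks currently held or requested in this database
SELECT request_session_id, resource_type, resource_description,
       request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('OptimizedLocking');

Throughout the rest of the demo only the database settings change; the statements themselves stay the same.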
The second statement is blocked, because it cannot acquire the necessary Update Lock which is needed to update the row:

Now I alter the database to set OPTIMIZED_LOCKING to ON and run the same demo as above again. As mentioned above, ACCELERATED_DATABASE_RECOVERY is necessary as well for this:
USE [master]
GO
ALTER DATABASE [OptimizedLocking] SET ACCELERATED_DATABASE_RECOVERY = ON;
ALTER DATABASE [OptimizedLocking] SET OPTIMIZED_LOCKING = ON;
GO

As you can see now, the behaviour remains the same: the second statement is blocked trying to place a Shared Lock on the rows – but we don't have those 4 "normal" locks we had before, only 2 XACT Locks instead, which means a reduction in the number of locks that have to be managed by the lock manager:

Let’s go one step further, enable RCSI at database level and see what happens now:
USE [master]
GO
ALTER DATABASE [OptimizedLocking] SET READ_COMMITTED_SNAPSHOT = ON;
GO
Et voilà, both statements have had their Exclusive Lock requests granted – thanks to lock after qualification (LAQ), rows are only locked once they actually qualify for the update:
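Lock after qualification can also be illustrated with a slightly different predicate – the following is my own sketch, not part of the original demo: while the first session still holds its Exclusive Lock on the row with ID = 1, an UPDATE that has to scan the Heap but only qualifies a different row completes without blocking, because rows are evaluated against the latest committed version and only locked once they qualify.
-- Session 1 (left pane): holds an Exclusive Lock on the row with ID = 1
BEGIN TRANSACTION;
UPDATE dbo.T1 SET Value = 100 WHERE T1_ID = 1;

-- Session 2: scans the Heap, but only the row with Value = 30 qualifies;
-- with RCSI + Optimized Locking (LAQ) no lock is requested on the row held
-- by session 1, so the statement completes immediately
UPDATE dbo.T1 SET Value = Value + 1 WHERE Value = 30;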

Heap vs. Clustered Index
But what happens now, if we use a Clustered Index instead of leaving our data (un)organized as a Heap? Let’s dig into this as well by sorting the T1_ID column using a clustered index:
USE [OptimizedLocking]
GO
CREATE CLUSTERED INDEX CI_T1 ON dbo.T1 (T1_ID)
GO

To start from scratch again, all the features we switched on earlier are now deactivated:
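The exact statements are not shown as text, but switching everything off again could look like this sketch (OPTIMIZED_LOCKING has to be disabled before ACCELERATED_DATABASE_RECOVERY, since it depends on it; the WITH ROLLBACK IMMEDIATE clause is my own addition to get exclusive access for the RCSI change):
-- Switch the features off again to start from scratch
USE [master]
GO
ALTER DATABASE [OptimizedLocking] SET OPTIMIZED_LOCKING = OFF;
ALTER DATABASE [OptimizedLocking] SET ACCELERATED_DATABASE_RECOVERY = OFF;
ALTER DATABASE [OptimizedLocking] SET READ_COMMITTED_SNAPSHOT = OFF WITH ROLLBACK IMMEDIATE;
GO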

But what’s going on now? None of the aforementioned features is enabled, yet both statements have their requested locks granted:

This is because the potentially critical Exclusive Locks (X) are placed on different keys (i.e. different rows), and the Intent Exclusive Locks (IX) taken at page level are compatible with each other.
Wrap-Up
During my consulting engagements at customer sites I often see databases with a huge number of Heaps.
Don’t misunderstand me, Heaps are great for tables where we expect only INSERTs (e.g. a logging table), but if the other DML statements hit the table at the same time as well, a Clustered Index is the better choice.
With regard to this demo and the “Optimized Locking” feature, we can say that if we use a Clustered Index on our tables, we don’t strictly need this feature for concurrency – but considering the overhead for the lock manager, it is definitely worth enabling “Optimized Locking”.
But wait: so far we have only dealt with the default isolation level of SQL Server – READ COMMITTED. What happens if we switch to a higher level, for example SERIALIZABLE? Will Optimized Locking help us have fewer locks and less blocking?
The answer is no – not at all. Do you agree?
And this makes sense, given how an isolation level like SERIALIZABLE works. When we choose such a high isolation level, we are (or at least should be) aware that we block others for a (longer) period of time, because that is exactly what we tell SQL Server to do. Everyone who is familiar in depth with SQL Server’s locking behaviour under the different isolation levels knows that SQL Server has to adapt its behaviour in order to honour the ACID principles.
Isolation Level SERIALIZABLE
Here is the output of the same demo I ran above several times, except that I executed the first UPDATE statement under the transaction isolation level SERIALIZABLE. As you can see, the second UPDATE statement (in the middle pane) is blocked while attempting to place an Exclusive Lock on the row with ID = 2, because the first UPDATE statement had to place an Exclusive Lock on the range that the row with ID = 2 is part of, as required by this isolation level:
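The statements behind this variant might look roughly like the following sketch (my own reconstruction, not the exact text from the screenshots):
-- Session 1 (left pane): raise the isolation level before updating
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
UPDATE dbo.T1 SET Value = 100 WHERE T1_ID = 1;
-- the transaction stays open, so the range protecting neighbouring keys stays locked

-- Session 2 (middle pane): runs under the default READ COMMITTED level
UPDATE dbo.T1 SET Value = 200 WHERE T1_ID = 2;
-- blocked, because the key with ID = 2 falls into the range locked by session 1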

Conclusion
From my point of view, “Optimized Locking” is a real game changer. Regardless of the underlying structure (Heap or Clustered Index), locking becomes easier and in a certain way more “lightweight”.
As we saw, the optimal way to use this feature is to enable Read Committed Snapshot Isolation (RCSI) in addition and – if it makes sense for the access pattern to store the data this way – to organize the tables with a Clustered Index. Either way, we benefit from the fact that the Lock Manager has to manage fewer locks, which also saves a significant amount of memory.
The article SQL Server 2025 – Optimized Locking appeared first on the dbi Blog.