Yann Neuhaus
Using Oracle API for MongoDB on OCI
If you’ve ever had to deal with MongoDB after years (or even decades) of working with SQL, you’ve probably realized that querying JSON documents can sometimes be painful, to say the least.
In response to this, Oracle (and others) use a not-so-native adaptation of SQL to query JSON documents. And if you’ve ever had to deal with these SQL-based JSON queries, you may have found yourself thinking (in hindsight) that querying JSON documents with MQL (MongoDB Query Language) wasn’t that painful after all. And rightly so: JSON documents are very different from relational tables, and there is no easy way to keep using SQL without learning some new syntax.
Oracle likely came to the same conclusion and offers an API dedicated to MongoDB natives. The idea is simple: ease the work of developers and database administrators who decide to migrate from MongoDB to Oracle (and perhaps the other way around?).
On Oracle Cloud Infrastructure (OCI), this means being able to connect to Autonomous Databases (ADB). As a reminder, you have two of them included in the Always Free Tier of OCI, so you can play around with this feature for free before deciding to migrate to the cloud.
Setting up the Oracle MongoDB API on OCI
When creating an Autonomous Database, you can choose between multiple workload types, including a JSON workload. However, this workload type isn’t strictly required for the MongoDB API to work.

On the other hand, the network access setting of your Autonomous Database must be non-default, with one of the following options enabled:
- Secure access from allowed IPs and VCNs only
- Private endpoint access only
For instance, when using the Secure access from allowed IPs and VCNs only option, you can add any IP address to the Access Control List (ACL), including your own.

Once the ADB (Autonomous Database) is created, you can check in the Tool configuration tab whether the MongoDB API is enabled (it should be, by default).


Then you can go to Database actions > Database Users to either create a user or modify an existing one, with the following privileges: CREATE SESSION, CREATE RESOURCE (default), and the SODA_APP role. After creation, you should enable REST as shown below. This will allow the API to work on that particular schema.

Back on the Tool configuration tab of your ADB instance, you’ll find the Public access URL. Copy it: this will be your connection string when using the API.

For this to work, you have to replace the user and password in the connection string. For instance, if your user is called USER_DB and your password is userpwd, then you would use the connection string below. Make sure to escape any special character in the password when doing so (see the documentation).
mongodb://user_db:userpwd@A49E7CHE9B320E6-PCCVX8PAE2NZPVBQ.adb.eu-zurich-1.oraclecloudapps.com:27017/user_db?authMechanism=PLAIN&authSource=$external&ssl=true&retryWrites=false&loadBalanced=true
Use this connection string with mongosh or any tool provided by MongoDB. With mongosh, you would connect with this command:
mongosh 'mongodb://user_db:userpwd@A49E7CHE9B320E6-PCCVX8PAE2NZPVBQ.adb.eu-zurich-1.oraclecloudapps.com:27017/adb_admin?authMechanism=PLAIN&authSource=$external&ssl=true&retryWrites=false&loadBalanced=true'
You are now connected to the Oracle ADB instance through the mongosh utility.
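The same connection string also works with the official MongoDB drivers. Below is a minimal Python sketch using pymongo (assuming the driver is installed); user, password and host are the placeholder values from above, and quote_plus takes care of escaping any special characters in the password.
from urllib.parse import quote_plus
from pymongo import MongoClient

user = "user_db"                     # placeholder values from the example above
password = quote_plus("userpwd")     # quote_plus escapes special characters in the password
host = "A49E7CHE9B320E6-PCCVX8PAE2NZPVBQ.adb.eu-zurich-1.oraclecloudapps.com"

uri = (
    f"mongodb://{user}:{password}@{host}:27017/{user}"
    "?authMechanism=PLAIN&authSource=$external"
    "&ssl=true&retryWrites=false&loadBalanced=true"
)

client = MongoClient(uri)
db = client[user]                          # an Oracle schema maps to a MongoDB database
db.demo.insert_one({"hello": "oracle"})    # creates the 'demo' collection on first insert
print(db.list_collection_names())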
If you connect to an existing schema with tables, you will notice that running show collections (or show tables) doesn’t return anything. In Oracle 19c, only collections/tables created through the API appear in this list. Standard tables created with a CREATE TABLE SQL statement won’t be displayed.
Once connected, you can query the different databases (schemas, in Oracle terms) of your instance. However, unlike in MongoDB, you can’t administer the instance through mongosh.
You can look at the documentation to check whether the features you’re interested in are included in the API, but here is an extract of the most notable changes :
- All the basic aggregation pipeline stages are available in the API, but more complex ones like $currentOp or $merge are not yet supported.
- Other aggregation operators, like $bucket, $documents or $sample, are only available starting with Oracle 23ai.
- db.dropDatabase() doesn’t drop the Oracle schema. In fact, it doesn’t even delete everything in the schema: it only deletes the real collections inside it. So if you have a traditional table that is not considered a collection (not shown in show collections), it will not be deleted by this command.
MongoServerError: Database connection unavailable
You might encounter the following error:
MongoServerError: Database connection unavailable. Ensure that the user exists and the schema is enabled for use with Oracle REST Data Services. A schema can be enabled by calling the PL/SQL procedure ORDS.ENABLE_SCHEMA
If you see this message, you can either follow the procedure or remember to enable REST directly on the OCI Database Actions panel, as shown above.
In short, Oracle’s MongoDB API provides an interesting bridge between the relational and document worlds. Even though not all MongoDB features are supported yet, the API offers a straightforward way to connect MongoDB tools and Oracle Autonomous Database, making it ideal for testing, migration or even hybrid setups. For developers used to MQL, it can significantly ease the transition to Oracle’s ecosystem.
Oracle OEM Holistic Patch
Starting with Oracle Enterprise Manager (OEM) 13.5 RU24 (October 2024), Oracle introduced the Holistic Patch for OEM — a unified patching model that consolidates several separate updates into a single coordinated package.
Previously, administrators had to patch:
* The OMS and Agents with the quarterly Release Update (RU).
* OEM Application via the System Patch Bundle (SPB).
* WebLogic CPU and JDK manually.
Keeping all these components homogeneous added complexity. With the Holistic Patch, the last two steps are merged into a single patch bundle, and a new Holistic Patch for OEM is released with each OMS Release Update.
How It Works?
Each quarter, the recommended procedure is the following:
1- Apply the RU to update the OMS and Agents.
2- Then apply the corresponding Holistic Patch to update WebLogic, the JDK, and all related security components.
The following Oracle notes should guide you toward a safe deployment of this strategy:
* If you are using OEM 13.5: Doc ID 3002450.1
* If you are using OEM 24.1: Doc ID 3082475.1
Below is a walk-through of the steps I performed to apply the July 2025 Holistic Patch for OEM 13.5.
This procedure assumes the OMS and Agents have already been patched to 13.5 RU27 (July 2025).
Download, copy, and unzip the related patches, in this case:
- Patch 34761383: DB Client 19c Upgrade for FMW 12.2.1.4.0
- Patch 35039230: PATCH 34761383 FAILS DURING UPGRADE DUE TO UPDATED ORACLE_COMMON/JDK/JRE
- Patch 31190532: PROVIDE A PATCH USING INSTALLER AUTOUPDATE PATCH FEATURE FOR OL8 OS PACKAGES CHECKS
- Patch 38194776: Oracle Enterprise Manager 13c Release 5 Holistic Patch for Oracle Management Service – July 2025
Before proceeding, plan your rollback strategy. For instance, I took a VM snapshot of my OEM machine and created a guaranteed restore point for the repository database:
SQL> select name from v$restore_point; no rows selected SQL> create restore point CLEAN_DB guarantee flashback database; Restore point created. SQL> select name from v$restore_point; NAME -------------------------------------------------------------------------------- CLEAN_DB
Step 1 — Upgrade the DB Client (19c)
This step is required only once before applying your first Holistic Patch (see Doc ID 3060775.1).
If you haven’t done this before, check your current client version of your OMS Home :
export ORACLE_HOME=/opt/app/oracle/product/MiddleWare13cR5 cd $ORACLE_HOME/bin ./sqlplus -version SQL*Plus: Release 12.1.0.2.0 Production
The client is not 19c yet, so let’s upgrade it:
Stop the OMS
oracle@Machine1:/home/oracle/ [EM13CR5] emctl stop oms -all Oracle Enterprise Manager Cloud Control 13c Release 5 Copyright (c) 1996, 2021 Oracle Corporation. All rights reserved. Stopping Oracle Management Server... WebTier Successfully Stopped Oracle Management Server Successfully Stopped AdminServer Successfully Stopped Oracle Management Server is Down JVMD Engine is Down
Run Preliminary Scripts
Run changePerm.sh from Patch 34761383 on the OMS Home:
oracle@Machine1:/opt/software/34761383/ [EM13CR5] ./changePerm.sh /opt/app/oracle/product/MiddleWare13cR5 Release 6 is Not in the /etc/redhat-release Change permission of /opt/app/oracle/product/MiddleWare13cR5/QOpatch/qopiprep.bat Update areasQueries Rename /opt/app/oracle/product/MiddleWare13cR5/bin/skgxpinfo to /opt/app/oracle/product/MiddleWare13cR5/bin/skgxpinfo.12c.bak Starting rollback of Patches. This action may take some time .......... ORACLE_HOME = /opt/app/oracle/product/MiddleWare13cR5 PATCHES_LIST_IN_ORDER = 29494060,33779451,33897347,34454476 ONEOFF_PATCHES = 34830313 ####################### Begin : rollbackOneoffPatches() ######################## ORACLE_HOME=/opt/app/oracle/product/MiddleWare13cR5 Patches to be rolled back=34830313 Running : /opt/app/oracle/product/MiddleWare13cR5/OPatch/opatch lspatches -oh /opt/app/oracle/product/MiddleWare13cR5 | grep -E "34830313" Patch ID(s): Rollback of Oneoff patch operation is not needed.Continue.... ########################## End : rollbackOneoffPatches() ######################## real 0m5.871s user 0m13.874s sys 0m0.584s ####################### Begin : rollbackPatches() ######################## ORACLE_HOME=/opt/app/oracle/product/MiddleWare13cR5 Patches to be rolled back=29494060|33779451|33897347|34454476 Running : /opt/app/oracle/product/MiddleWare13cR5/OPatch/opatch lspatches -oh /opt/app/oracle/product/MiddleWare13cR5 | grep -E "29494060|33779451|33897347|34454476" Patch ID(s): Rollback operation is not needed.Continue with Thin Installer ########################## End : rollbackPatches() ######################## real 0m5.957s user 0m14.772s sys 0m0.530s All Success
Then, from Patch 35039230, run changePermJDK.sh on the OMS Home:
oracle@Machine1:/opt/software/35039230/ [EM13CR5] ./changePermJDK.sh /opt/app/oracle/product/MiddleWare13cR5 Change permission of /opt/app/oracle/product/MiddleWare13cR5/oracle_common/jdk/jre/legal/jdk Change permission of /opt/app/oracle/product/MiddleWare13cR5/oracle_common/jdk/jre/legal/javafx Change permission of /opt/app/oracle/product/MiddleWare13cR5/oracle_common/jdk/legal/javafx Change permission of /opt/app/oracle/product/MiddleWare13cR5/oracle_common/jdk/jre/COPYRIGHT Change permission of /opt/app/oracle/product/MiddleWare13cR5/oracle_common/jdk/jre/LICENSE Change permission of /opt/app/oracle/product/MiddleWare13cR5/oracle_common/jdk/jre/THIRDPARTYLICENSEREADME.txt oracle@Machine1:/opt/software/35039230/ [EM13CR5]
These scripts fix permissions in the Middleware home and JDK directories.
Attempt the DB Client Upgradeoracle@Machine1:/opt/software/34761383/ [EM13CR5] export ORACLE_HOME=/opt/app/oracle/product/MiddleWare13cR5 oracle@Machine1:/opt/software/34761383/ [EM13CR5] ./fmw_12.2.1.19.0_dbclient_linux64.bin -invPtrLoc ${ORACLE_HOME}/oraInst.loc -silent ORACLE_HOME=${ORACLE_HOME} Launcher log file is /tmp/OraInstall2025-09-10_10-34-53AM/launcher2025-09-10_10-34-53AM.log. Checking if CPU speed is above 300 MHz. Actual 2893.202 MHz Passed Checking swap space: must be greater than 512 MB. Actual 4051 MB Passed Checking if this platform requires a 64-bit JVM. Actual 64 Passed (-d64 flag is not required) Checking temp space: must be greater than 300 MB. Actual 7269 MB Passed Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2025-09-10_10-34-53AM Log: /tmp/OraInstall2025-09-10_10-34-53AM/install2025-09-10_10-34-53AM.log Setting ORACLE_HOME... Copyright (c) 1996, 2022, Oracle and/or its affiliates. All rights reserved. Skipping Software Updates Starting check : CertifiedVersions Expected result: One of oracle-8, redhat-7, oracle-7, SuSE-11, SuSE-12, SuSE-15 Actual Result: redhat-n/a Check complete. The overall result of this check is: Passed CertifiedVersions Check: Success. Starting check : Packages Checking for binutils-2.23.52.0.1; found binutils-2.30-117.0.3.el8-x86_64. Passed Checking for compat-libcap1-1.10; Not found. Failed Checking for compat-libstdc++-33-3.2.3-x86_64; Not found. Failed Checking for libgcc-4.8.2-x86_64; found libgcc-8.5.0-16.0.2.el8_7-x86_64. Passed Checking for libstdc++-4.8.2-x86_64; found libstdc++-8.5.0-16.0.2.el8_7-x86_64. Passed Checking for libstdc++-devel-4.8.2-x86_64; found libstdc++-devel-8.5.0-16.0.2.el8_7-x86_64. Passed Checking for sysstat-10.1.5; found sysstat-11.7.3-7.0.1.el8_7.1-x86_64. Passed Checking for gcc-4.8.2; found gcc-8.5.0-16.0.2.el8_7-x86_64. Passed Checking for gcc-c++-4.8.2; found gcc-c++-8.5.0-16.0.2.el8_7-x86_64. Passed Checking for ksh-...; found ksh-20120801-257.0.1.el8-x86_64. Passed Checking for make-3.82; found make-1:4.2.1-11.el8-x86_64. Passed Checking for glibc-2.17-x86_64; found glibc-2.28-211.0.1.el8-x86_64. Passed Checking for glibc-devel-2.17-x86_64; found glibc-devel-2.28-211.0.1.el8-x86_64. Passed Checking for libaio-0.3.109-x86_64; found libaio-0.3.112-1.el8-x86_64. Passed Checking for libaio-devel-0.3.109-x86_64; found libaio-devel-0.3.112-1.el8-x86_64. Passed Check complete. The overall result of this check is: Failed Problem: Some recommended packages are missing (see above). Recommendation: You may actually have installed packages which have obsoleted these, in which case you can successfully continue with the install. If you have not, it is recommended that you do not continue. Refer to the product release notes to find out how to get the missing packages and update the system. Error: Check:Packages failed. Starting check : Kernel Checking for VERSION=3.10.0; found VERSION=5.15.0-100.96.32.el8uek.x86_64. Passed Checking for hardnofiles=4096; found hardnofiles=262144. Passed Checking for softnofiles=4096; found softnofiles=262144. Passed Check complete. The overall result of this check is: Passed Kernel Check: Success. Starting check : TotalMemory Expected result: 1024MB Actual Result: 15704MB Check complete. The overall result of this check is: Passed TotalMemory Check: Success. Starting check : CheckJDKVersion Expected result: 1.8.0_341 Actual Result: 1.8.0_341 Check complete. The overall result of this check is: Passed CheckJDKVersion Check: Success. Prereq failed. 
Aborting Install Sep 10, 2025 10:35:04 AM oracle.as.install.engine.modules.configuration.ConfigurationModule onExit INFO: Cleaning up Configuration Module The log(s) can be found here: /tmp/OraInstall2025-09-10_10-34-53AM. ERROR: Installer execution failed (1).
This upgrade might work for you right away.
If it fails due to missing packages (as was the case for me here), follow Doc ID 3060775.1 and rerun the installer with the Patch 31190532 prerequisite metadata.
We restart the client upgrade with the -prereqConfigLoc option:
[oracle@Machine1 34761383]$ ./fmw_12.2.1.19.0_dbclient_linux64.bin -invptrloc $ORACLE_HOME/oraInst.loc -silent ORACLE_HOME=$ORACLE_HOME -prereqConfigLoc /opt/software/31190532/prereq_metadata/oracle.as.install.ohs.prerequisite/prereq/ Launcher log file is /tmp/OraInstall2025-09-10_11-10-02AM/launcher2025-09-10_11-10-02AM.log. Checking if CPU speed is above 300 MHz. Actual 2893.202 MHz Passed Checking swap space: must be greater than 512 MB. Actual 4051 MB Passed Checking if this platform requires a 64-bit JVM. Actual 64 Passed (-d64 flag is not required) Checking temp space: must be greater than 300 MB. Actual 7050 MB Passed Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2025-09-10_11-10-02AM Log: /tmp/OraInstall2025-09-10_11-10-02AM/install2025-09-10_11-10-02AM.log Setting ORACLE_HOME... Copyright (c) 1996, 2022, Oracle and/or its affiliates. All rights reserved. Skipping Software Updates Starting check : CertifiedVersions Expected result: One of redhat-8, redhat-7, redhat-6, oracle-6, oracle-8, oracle-7, SuSE-11, SuSE-12, SuSE-15 Actual Result: redhat-n/a Check complete. The overall result of this check is: Passed CertifiedVersions Check: Success. Starting check : Packages Checking for binutils-2.23.52.0.1; found binutils-2.30-117.0.3.el8-x86_64. Passed Checking for libcap-2.25-9; found libcap-2.48-4.el8-x86_64. Passed Checking for libstdc++-8.2.1-3.5.0.1.el8-x86_64; found libstdc++-8.5.0-16.0.2.el8_7-x86_64. Passed Checking for libgcc-4.8.2-x86_64; found libgcc-8.5.0-16.0.2.el8_7-x86_64. Passed Checking for libstdc++-devel-4.8.2-x86_64; found libstdc++-devel-8.5.0-16.0.2.el8_7-x86_64. Passed Checking for sysstat-10.1.5; found sysstat-11.7.3-7.0.1.el8_7.1-x86_64. Passed Checking for gcc-4.8.2; found gcc-8.5.0-16.0.2.el8_7-x86_64. Passed Checking for gcc-c++-4.8.2; found gcc-c++-8.5.0-16.0.2.el8_7-x86_64. Passed Checking for ksh-...; found ksh-20120801-257.0.1.el8-x86_64. Passed Checking for make-3.82; found make-1:4.2.1-11.el8-x86_64. Passed Checking for glibc-2.17-x86_64; found glibc-2.28-211.0.1.el8-x86_64. Passed Checking for glibc-devel-2.17-x86_64; found glibc-devel-2.28-211.0.1.el8-x86_64. Passed Checking for libaio-0.3.109-x86_64; found libaio-0.3.112-1.el8-x86_64. Passed Checking for libaio-devel-0.3.109-x86_64; found libaio-devel-0.3.112-1.el8-x86_64. Passed Check complete. The overall result of this check is: Passed Packages Check: Success. Starting check : Kernel Checking for VERSION=3.10.0; found VERSION=5.15.0-100.96.32.el8uek.x86_64. Passed Checking for hardnofiles=4096; found hardnofiles=262144. Passed Checking for softnofiles=4096; found softnofiles=262144. Passed Check complete. The overall result of this check is: Passed Kernel Check: Success. Starting check : TotalMemory Expected result: 1024MB Actual Result: 15704MB Check complete. The overall result of this check is: Passed TotalMemory Check: Success. Starting check : CheckJDKVersion Expected result: 1.8.0_241 Actual Result: 1.8.0_341 Check complete. The overall result of this check is: Passed CheckJDKVersion Check: Success. Validations are enabled for this session. Verifying data Copying Files Percent Complete : 10 Percent Complete : 20 Percent Complete : 30 Percent Complete : 40 Percent Complete : 50 Percent Complete : 60 Percent Complete : 70 Percent Complete : 80 Percent Complete : 90 Percent Complete : 100 The installation of Database Client 12.2.1.19.0 completed successfully. Logs successfully copied to /opt/app/oracle/oraInventory/logs.
This time, the upgrade completes successfully.
Verify the client version (19.14 is the expected value as of July 2025, regardless of your repository database or OMS patch level):
[oracle@Machine1 34761383]$ $ORACLE_HOME/bin/sqlplus -version SQL*Plus: Release 19.0.0.0.0 - Production Version 19.14.0.0.0
Step 2 — Apply the Holistic Patch (13.5.0.27)
Start the OMS in admin_only mode:
[oracle@Machine1 software]$ emctl start oms -admin_only Oracle Enterprise Manager Cloud Control 13c Release 5 Copyright (c) 1996, 2021 Oracle Corporation. All rights reserved. Starting Admin Server only... Admin Server Successfully Started
Ensure the OMSPatcher utility is in your path:
[oracle@Machine1 software]$ echo $ORACLE_HOME /opt/app/oracle/product/MiddleWare13cR5 [oracle@Machine1 software]$ export PATH=$PATH:/opt/app/oracle/product/MiddleWare13cR5/OMSPatcher [oracle@Machine1 software]$ which omspatcher /opt/app/oracle/product/MiddleWare13cR5/OMSPatcher/omspatcher
Analyze the Patch
We analyze the patch before applying it. As for OMS RU patching, the omspatcher tool is used, but this time with the -spb_patch option:
[oracle@Machine1 software]$ omspatcher apply /opt/software/38194776/ -spb_patch -analyze SUCCESS :: The minimum Release Update (RU) check passed for applying the given holistic patch. Minimum OMS RU Version is 13.5.0.27 JDK update log location :: /opt/app/oracle/product/MiddleWare13cR5/cfgtoollogs/omspatcher/SPB_PATCH/jdk_update_Wed_Sep_10_11_25_12_2025.log SPB JDK version :: 1.8.0.461, Env JDK version :: 1.8.0.341 The Env version is lower as compared to JDK included in patch :: JDK will be updated OPatch update log :: /opt/app/oracle/product/MiddleWare13cR5/cfgtoollogs/omspatcher/SPB_PATCH/opatch_update_Wed_Sep_10_11_25_14_2025.log The Env Opatch version :: 13.9.4.2.5 The SPB Opatch version :: 13.9.4.2.20 The Environment Opatch version is low as per OPatch included in Patch :: OPatch will be updated OMSPatcher analyze log file :: /opt/app/oracle/product/MiddleWare13cR5/cfgtoollogs/omspatcher/SPB_PATCH/omspatcher_analyze_Wed_Sep_10_11_25_15_2025.log Running Command :: /opt/app/oracle/product/MiddleWare13cR5/OPatch/opatch lspatches -oh /opt/app/oracle/product/MiddleWare13cR5 Running Command :: /opt/app/oracle/product/MiddleWare13cR5/OPatch/opatch prereq CheckForNoOpPatches -oh /opt/app/oracle/product/MiddleWare13cR5 -phbasefile /opt/app/oracle/product/MiddleWare13cR5/linux64_patchlist.txt -silent Running Command :: /opt/app/oracle/product/MiddleWare13cR5/OPatch/opatch prereq CheckConflictAgainstOH -oh /opt/app/oracle/product/MiddleWare13cR5 -phbasefile /opt/app/oracle/product/MiddleWare13cR5/linux64_patchlist.txt -silent OMSPatcher succeeded. [oracle@Machine1 software]$Apply the Patch
As the analysis passes successfully, we proceed to applying the patch.
This command:
* Updates JDK (e.g., to 1.8.0_461)
* Updates OPatch if needed
* Applies the SPB sub-patches automatically
* Restarts the OMS as part of the process
[oracle@Machine1 software]$ omspatcher apply /opt/software/38194776/ -spb_patch SUCCESS :: The minimum Release Update (RU) check passed for applying the given holistic patch. Minimum OMS RU Version is 13.5.0.27 JDK update log location :: /opt/app/oracle/product/MiddleWare13cR5/cfgtoollogs/omspatcher/SPB_PATCH/jdk_update_Wed_Sep_10_11_30_13_2025.log SPB JDK version :: 1.8.0.461, Env JDK version :: 1.8.0.341 The Env version is lower as compared to JDK included in patch :: JDK will be updated Updating the JDK now... Stopping the OMS... Oracle Enterprise Manager Cloud Control 13c Release 5 Copyright (c) 1996, 2021 Oracle Corporation. All rights reserved. Stopping Oracle Management Server... Oracle Management Server Already Stopped Oracle Management Server is Down JVMD Engine is Down Back_up Java folder :: /opt/app/oracle/product/MiddleWare13cR5/oracle_common/jdk_1757496623 New Java version is... java version "1.8.0_461" Java(TM) SE Runtime Environment (build 1.8.0_461-b25) Java HotSpot(TM) 64-Bit Server VM (build 25.461-b25, mixed mode) Updated Java Successfully OPatch update log :: /opt/app/oracle/product/MiddleWare13cR5/cfgtoollogs/omspatcher/SPB_PATCH/opatch_update_Wed_Sep_10_11_30_23_2025.log The Env Opatch version :: 13.9.4.2.5 The SPB Opatch version :: 13.9.4.2.20 The Environment Opatch version is low as per OPatch included in Patch :: OPatch will be updated Updating the Opatch now....... Updated Opatch Successfully....... OMSPatcher analyze log file :: /opt/app/oracle/product/MiddleWare13cR5/cfgtoollogs/omspatcher/SPB_PATCH/omspatcher_analyze_Wed_Sep_10_11_31_01_2025.log Running Command :: /opt/app/oracle/product/MiddleWare13cR5/OPatch/opatch lspatches -oh /opt/app/oracle/product/MiddleWare13cR5 Running Command :: /opt/app/oracle/product/MiddleWare13cR5/OPatch/opatch prereq CheckForNoOpPatches -oh /opt/app/oracle/product/MiddleWare13cR5 -phbasefile /opt/app/oracle/product/MiddleWare13cR5/linux64_patchlist.txt -silent Running Command :: /opt/app/oracle/product/MiddleWare13cR5/OPatch/opatch prereq CheckConflictAgainstOH -oh /opt/app/oracle/product/MiddleWare13cR5 -phbasefile /opt/app/oracle/product/MiddleWare13cR5/linux64_patchlist.txt -silent OMSPatcher Automation Tool Copyright (c) 2017, Oracle Corporation. All rights reserved. OMSPatcher version : 13.9.5.26.0 OUI version : 13.9.4.0.0 Running from : /opt/app/oracle/product/MiddleWare13cR5 Log file location : /opt/app/oracle/product/MiddleWare13cR5/cfgtoollogs/omspatcher/opatch2025-09-10_11-33-20AM_1.log OMSPatcher log file: /opt/app/oracle/product/MiddleWare13cR5/cfgtoollogs/omspatcher/SPB_PATCH/omspatcher_2025-09-10_11-33-21AM_apply.log Please enter OMS weblogic admin server URL(t3s://Machine1.sig-ge.ch:7102):> Please enter OMS weblogic admin server username(weblogic):> Please enter OMS weblogic admin server password:> Performing prerequisites..... The following sub-patches are already applied to the OMS system: 36910011 The following sub-patches will be applied to the OMS system: 38015786 34809489 35965629 1221426 37894414 36789759 38108218 37925688 38051795 36905789 37894601 36316422 38042142 34065178 38073767 37074199 37887285 37284722 38156117 38194914 Loaded SPB properties successfully Stopping the oms.... Oracle Enterprise Manager Cloud Control 13c Release 5 Copyright (c) 1996, 2021 Oracle Corporation. All rights reserved. Stopping Oracle Management Server... 
WebTier Successfully Stopped Oracle Management Server Successfully Stopped AdminServer Successfully Stopped Oracle Management Server is Down JVMD Engine is Down Running the command :: /opt/app/oracle/product/MiddleWare13cR5/OPatch/opatch napply -oh /opt/app/oracle/product/MiddleWare13cR5 -phbasefile /opt/app/oracle/product/MiddleWare13cR5/linux64_patchlist.txt -silent Log file to be checked to get the list of applied patches : /opt/app/oracle/product/MiddleWare13cR5/cfgtoollogs/opatch/opatch2025-09-10_11-41-24AM_1.log Command ran successfully Starting the oms.... Oracle Enterprise Manager Cloud Control 13c Release 5 Copyright (c) 1996, 2021 Oracle Corporation. All rights reserved. Starting Oracle Management Server... WebTier Successfully Started Oracle Management Server Successfully Started Oracle Management Server is Up JVMD Engine is Up Log file location: /opt/app/oracle/product/MiddleWare13cR5/cfgtoollogs/omspatcher/SPB_PATCH/omspatcher_2025-09-10_11-33-21AM_apply.log OMSPatcher succeeded.
Conclusion
Patching OEM used to be tedious, with separate updates for OMS, Agents, WebLogic, and JDK — often leading to inconsistent environments or outdated components. And ignoring some of these patches is a security risk.
The Holistic Patch simplifies the post-OMS patching into one coordinated update.
If you’re maintaining Oracle Enterprise Manager, adopting the holistic patching approach is highly recommended.
RAG Series – Adaptive RAG, understanding Confidence, Precision & nDCG
So far in this RAG series, each article has introduced new concepts of the RAG workflow, and this one is no exception: it covers key concepts at the heart of retrieval. Adaptive RAG will allow us to talk about measuring the quality of the retrieved data and how we can leverage that measurement to push our optimizations further.
A now famous MIT study states that 95% of organizations fail to get ROI within six months of their “AI projects”. Although we could argue about the relevance of the study and what it actually measured, one of the key elements of a successful implementation is measurement.
An old BI principle is to know your KPI: what it really measures, but also when it fails to measure. For example, if you used the speedometer on your car’s dashboard to measure your speed, you’d be right as long as the wheels are touching the ground. With that in mind, let’s see how we can create smart and reliable retrieval.
Hybrid search significantly improves retrieval quality by combining dense semantic vectors with sparse lexical signals. However, real-world queries vary:
- Some are factual, asking for specific names, numbers, or entities.
- Others are conceptual, exploring ideas, reasons, or relationships.
A single static weighting between dense and sparse methods cannot perform optimally across all query types.
Adaptive RAG introduces a lightweight classifier that analyzes each query to determine its type and dynamically adjusts the hybrid weights before searching.
For example, a factual query might shift the weight toward sparse retrieval (say, dense 0.3 / sparse 0.7), while a conceptual query shifts it toward dense (say, dense 0.7 / sparse 0.3).
This dynamic weighting ensures that each search leverages the right signals:
- Sparse when exact matching matters.
- Dense when semantic similarity matters.
Under the hood, our AdaptiveSearchEngine wraps dense and sparse retrieval modules. Before executing, it classifies the query, assigns weights, and fuses the results via a weighted Reciprocal Rank Fusion (RRF), giving us the best of both worlds — adaptivity without complexity.
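The snippet below is a minimal sketch of that flow, not the lab’s actual classes: a toy classifier picks the weights, and a weighted RRF merges the two ranked lists. All names (classify_query, weighted_rrf, the example doc ids) are illustrative.
def classify_query(query: str) -> tuple[str, float]:
    # Toy stand-in for the real classifier: short queries or queries with
    # digits tend to be factual, longer ones conceptual.
    factual = any(ch.isdigit() for ch in query) or len(query.split()) <= 6
    return ("factual", 0.9) if factual else ("conceptual", 0.8)

def weighted_rrf(dense_ranking, sparse_ranking, w_dense, w_sparse, k=60):
    # Both rankings are lists of doc ids, best first; each list contributes
    # weight / (k + rank) to a document's fused score.
    scores = {}
    for weight, ranking in ((w_dense, dense_ranking), (w_sparse, sparse_ranking)):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + weight / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

query = "When was PostgreSQL 1.0 released?"
query_type, confidence = classify_query(query)
w_dense, w_sparse = (0.3, 0.7) if query_type == "factual" else (0.7, 0.3)
fused = weighted_rrf(["d2", "d5", "d9"], ["d5", "d1", "d2"], w_dense, w_sparse)
print(query_type, fused)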
Once we make retrieval adaptive, the next challenge is trust. How confident are we in the results we just returned?
Confidence from Classification
Each query classification includes a confidence score (e.g., 0.92 “factual” vs 0.58 “conceptual”).
When classification confidence is low, Adaptive RAG defaults to a balanced retrieval (dense 0.5, sparse 0.5) — avoiding extreme weighting that might miss relevant content.
We also compute confidence based on retrieval statistics:
- The similarity gap between the first and second ranked results (large gap = high confidence).
- Average similarity score of the top-k results.
- Ratio of sparse vs dense agreement (when both find the same document, confidence increases).
These metrics are aggregated into a normalized confidence score between 0 and 1:
def compute_confidence(top_scores, overlap_ratio):
sim_conf = min(1.0, sum(top_scores[:3]) / 3)
overlap_conf = 0.3 + 0.7 * overlap_ratio
return round((sim_conf + overlap_conf) / 2, 2)
If confidence < 0.5, the system triggers a fallback strategy:
- Expands top_k results (e.g., from 10 → 30).
- Broadens search to both dense and sparse equally.
- Logs the event for later evaluation.
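In code, that fallback boils down to a few lines. This is a sketch under assumed names: engine, its search() signature, the .confidence attribute and log_low_confidence are illustrative, not the lab’s actual API.
import logging

def log_low_confidence(query, confidence):
    # Minimal logging hook; in the lab this could feed the evaluation tables.
    logging.warning("low retrieval confidence %.2f for query: %s", confidence, query)

def search_with_fallback(engine, query, top_k=10):
    # 'engine' is any object exposing search(query, top_k=..., weights=...)
    # and returning a result with a .confidence attribute (illustrative API).
    result = engine.search(query, top_k=top_k)        # adaptive weights by default
    if result.confidence < 0.5:
        # Low confidence: widen the net and fall back to balanced weights.
        result = engine.search(query, top_k=30, weights=(0.5, 0.5))
        log_low_confidence(query, result.confidence)
    return result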
The retrieval API now returns a structured response:
{
"query": "When was PostgreSQL 1.0 released?",
"query_type": "factual",
"confidence": 0.87,
"precision@10": 0.8,
"recall@10": 0.75
}
This allows monitoring not just what was retrieved, but how sure the system is, enabling alerting, adaptive reruns, or downstream LLM prompt adjustments (e.g., “Answer cautiously” when confidence < 0.6).
Evaluating Quality with nDCG
Precision and recall are fundamental metrics for retrieval systems, but they don’t consider the order of results. If a relevant document appears at rank 10 instead of rank 1, the user experience is still poor even if recall is high.
That’s why we now add nDCG@k (normalized Discounted Cumulative Gain) — a ranking-aware measure that rewards systems for ordering relevant results near the top.
The idea:
- DCG@k evaluates gain by position:
DCG@k = Σ_{i=1..k} (2^rel_i − 1) / log2(i + 1)
- nDCG@k normalizes this against the ideal order (IDCG):
nDCG@k = DCG@k / IDCG@k
A perfect ranking yields nDCG = 1.0. Poorly ordered but complete results may still have high recall, but lower nDCG.
In practice, we calculate nDCG@10 for each query and average it over the dataset.
Our evaluation script (lab/04_evaluate/metrics.py) integrates this directly:
from evaluation import ndcg_at_k
score = ndcg_at_k(actual=relevant_docs, predicted=retrieved_docs, k=10)
print(f"nDCG@10: {score:.3f}")
These results confirm that adaptive weighting not only improves raw accuracy but also produces better-ranked results, giving users relevant documents earlier in the list.
Implementation in our LAB
You can explore the implementation in the GitHub repository:
git clone https://github.com/boutaga/pgvector_RAG_search_lab
cd pgvector_RAG_search_lab
Key components:
- lab/04_search/adaptive_search.py — query classification, adaptive weights, confidence scoring.
- lab/04_evaluate/metrics.py — precision, recall, and nDCG evaluation.
streamlit run streamlit_demo.py
) — visualize retrieved chunks, scores, and confidence in real time.
Example usage:
python lab/04_search/adaptive_search.py --query "Who invented SQL?"
Output:
Query type: factual (0.91 confidence)
Dense weight: 0.3 | Sparse weight: 0.7
Precision@10: 0.82 | Recall@10: 0.77 | nDCG@10: 0.84
This feedback loop closes the gap between research and production — making RAG not only smarter but measurable.
What is “Relevance”?
When we talk about precision, recall, or nDCG, all three depend on one hidden thing: a ground truth of which documents are relevant for each query.
There are two main ways to establish that ground truth:
| Approach | Who decides relevance | Pros | Cons |
| Human labeling | Experts mark which documents correctly answer each query | Most accurate; useful for benchmarks | Expensive and slow |
| Automated or LLM-assisted labeling | An LLM (or rules) judges if a retrieved doc contains the correct answer | Scalable and repeatable | Risk of bias / noise |
In some business activities you are almost forced to use human labeling, because the business technicalities are so deep that automating it is hard. Labeling can be slow and expensive for a business, but I learned that it is also a way to introduce change management towards AI workflows: it lets key employees of the company participate and build a solution with their expertise, instead of going through the harder project of asking an external organization to bake specific business logic into software that was never made to handle it in the first place. As a DBA, I witnessed business logic move away from databases towards ORMs and application code, and this time the business logic is moving towards AI workflows. Starting this human labeling project may be the first step towards it and guarantees solid foundations.
Managers need to keep in mind that AI workflows are not just a technical solution; they are a socio-technical framework that allows organizational growth. You can’t just ship an AI chatbot into an app and expect 10x returns with minimal effort; that simplistic mindset has already cost billions according to the MIT study.
In a research setup (like your pgvector_RAG_search_lab), you can mix both approaches:
- Start with a seed dataset of (query, relevant_doc_ids) pairs (e.g. a small set labeled manually).
- Use the LLM to extend or validate relevance judgments automatically.
For example:
prompt = f"""
Query: {query}
Document: {doc_text[:2000]}
Is this document relevant to answering the query? (yes/no)
"""
llm_response = openai.ChatCompletion.create(...)
label = llm_response['choices'][0]['message']['content'].strip().lower() == 'yes'
Then you store that in a simple table or CSV:
| query_id | doc_id | relevant |
| 1 | 101 | true |
| 1 | 102 | false |
| 2 | 104 | true |
Precision & Recall in Practice
Once you have that table of true relevances, you can compute:
- Precision@k → “Of the top k documents I retrieved, how many were actually relevant?”
- Recall@k → “Of all truly relevant documents, how many did I retrieve in my top k?”
They’re correlated but not the same:
- High precision → few false positives.
- High recall → few false negatives.
For example:
| Query | Retrieved docs (top 5) | True relevant | Precision@5 | Recall@5 |
| “Who founded PostgreSQL?” | [d3, d7, d9, d1, d4] | [d1, d4] | 0.4 | 1.0 |
You got both relevant docs (good recall = 1.0), but only 2 of the 5 retrieved were correct (precision = 0.4).
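Both metrics are a couple of lines of Python each; run against the example above they reproduce the 0.4 and 1.0 figures (a small standalone sketch, not the lab’s metrics module):
def precision_at_k(retrieved, relevant, k):
    top = retrieved[:k]
    return sum(1 for d in top if d in relevant) / len(top)

def recall_at_k(retrieved, relevant, k):
    return sum(1 for d in retrieved[:k] if d in relevant) / len(relevant)

retrieved = ["d3", "d7", "d9", "d1", "d4"]
relevant = {"d1", "d4"}
print(precision_at_k(retrieved, relevant, 5))  # 0.4
print(recall_at_k(retrieved, relevant, 5))     # 1.0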
Why nDCG is Needed
Precision and recall only measure which docs were retrieved, not where they appeared in the ranking.
nDCG@k adds ranking quality:
- Each relevant document gets a relevance grade (commonly 0, 1, 2 — irrelevant, relevant, highly relevant).
- The higher it appears in the ranked list, the higher the gain.
So if a highly relevant doc is ranked 1st, you get more credit than if it’s ranked 10th.
In your database, you can store relevance grades in a table like:
| query_id | doc_id | rel_grade |
| 1 | 101 | 2 |
| 1 | 102 | 1 |
| 1 | 103 | 0 |
Then your evaluator computes:
import math
def dcg_at_k(relevances, k):
    # i is 0-based, so rank = i + 1 and the discount is log2(rank + 1) = log2(i + 2)
    return sum((2**rel - 1) / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(actual_relevances, k):
    ideal = sorted(actual_relevances, reverse=True)
    idcg = dcg_at_k(ideal, k)
    return dcg_at_k(actual_relevances, k) / idcg if idcg > 0 else 0.0
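As a quick sanity check, the grades from the table above give an nDCG of 1.0 when the documents come back in the ideal order, and roughly 0.80 when the grade-2 document slips to rank 2:
print(ndcg_at_k([2, 1, 0], k=3))  # 1.0  (already the ideal order)
print(ndcg_at_k([1, 2, 0], k=3))  # ~0.797 (same docs, grade-2 doc demoted to rank 2)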
You do need to keep track of rank (the order in which docs were returned).
In PostgreSQL, you could log that in an eval_results table holding, for each query_id, the returned doc_id, its rank in the result list, and its rel_grade.
Then it’s easy to run SQL to evaluate:
SELECT query_id,
SUM((POWER(2, rel_grade) - 1) / LOG(2, rank + 1)) AS dcg
FROM eval_results
WHERE rank <= 10
GROUP BY query_id;
In a real system (like your Streamlit or API demo), you can:
- Log each retrieval attempt (query, timestamp, ranking list, scores, confidence).
- Periodically recompute metrics (precision, recall, nDCG) using a fixed ground-truth set.
This lets you track if tuning (e.g., changing dense/sparse weights) is improving performance.
Structure of your evaluation log table could be:
| run_id | query_id | method | rank | doc_id | score | confidence | rel_grade |
| 2025-10-12_01 | 1 | adaptive_rrf | 1 | 101 | 0.92 | 0.87 | 2 |
| 2025-10-12_01 | 1 | adaptive_rrf | 2 | 102 | 0.85 | 0.87 | 1 |
From there, you can generate:
- nDCG@10 trend over runs (e.g., in Prometheus or Streamlit chart)
- Precision vs Confidence correlation
- Recall improvements per query type
⚠️ Note: While nDCG is a strong metric for ranking quality, it’s not free from bias. Because it normalizes per query, easier questions (with few relevant documents) can inflate the average score. In our lab, we mitigate this by logging both raw DCG and nDCG, and by comparing results across query categories (factual vs conceptual vs exploratory). This helps ensure improvements reflect true retrieval quality rather than statistical artifacts.
Human + LLM Hybrid Evaluation (Practical Middle Ground)
For your PostgreSQL lab setup:
- Label a small gold set manually (e.g., 20–50 queries × 3–5 relevant docs each).
- For larger coverage, use the LLM as an auto-grader.
You can even use self-consistency: ask the LLM to re-evaluate relevance twice and keep consistent labels only.
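A small sketch of that self-consistency check, where judge() is a hypothetical callable wrapping whatever LLM grader you use (for instance the prompt shown earlier):
def self_consistent_label(query, doc, judge, n_runs=2):
    # judge(query, doc) returns True/False for "relevant" (hypothetical wrapper).
    votes = [judge(query, doc) for _ in range(n_runs)]
    if len(set(votes)) == 1:
        return votes[0]      # all runs agree: keep the label
    return None              # disagreement: flag for human review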
This gives you a semi-automated evaluation dataset, good enough to monitor:
- Precision@10
- Recall@10
- nDCG@10 over time
Through Adaptive RAG, we’ve transformed retrieval from a static process into a self-aware one.
- Precision increased by ~6–7%, especially for conceptual queries.
- Recall improved by ~8% for factual questions thanks to better keyword anchoring.
- nDCG@10 rose from 0.75 → 0.82, confirming that relevant results are appearing earlier.
- Confidence scoring provides operational visibility: we now know when the system is uncertain, enabling safe fallbacks and trust signals.
The combination of adaptive routing, confidence estimation, and nDCG evaluation makes this pipeline suitable for enterprise-grade RAG use cases — where explainability, reliability, and observability are as important as accuracy.
Conclusion and Next Steps
Adaptive RAG is the bridge between smart retrieval and reliable retrieval.
By classifying queries, tuning dense/sparse balance dynamically, and measuring ranking quality with nDCG, we now have a system that understands what kind of question it’s facing and how well it performed in answering it.
This version of the lab introduces the first metrics-driven feedback loop for RAG in PostgreSQL:
- Retrieve adaptively,
- Measure precisely,
- Adjust intelligently.
In the next part, we’ll push even further — introducing Agentic RAG, and how it plans and executes multi-step reasoning chains to improve retrieval and answer quality even more.
Try Adaptive RAG in the pgvector_RAG_search_lab repository, explore your own datasets, and start measuring nDCG@10 to see how adaptive retrieval changes the game.
How to check if my Oracle dbhome is RO or RW?
As you might know, the RO (Read Only) dbhome was introduced in Oracle 18c and became the default configuration in Oracle 21c. Oracle came back to the RW (Read Write) dbhome as the default configuration in 23ai. It might be interesting to know how we can check whether an Oracle dbhome is RO (Read Only) or RW (Read Write). I recently had to look for it and just wanted to share it through this blog.
Environment
For my test I used the FREE Oracle 23ai version. Here are my ORACLE_HOME and ORACLE_BASE environment variables:
oracle@vmdmkdev:~/ [FREE (CDB$ROOT)] echo $ORACLE_HOME /opt/oracle/product/23ai/dbhomeFree oracle@vmdmkdev:~/ [FREE (CDB$ROOT)] echo $ORACLE_BASE /opt/oracle
We know that for a RW dbhome the dbs folder in $ORACLE_HOME is used, and for RO the one in $ORACLE_BASE.
ORACLE_BASE_HOME will be $ORACLE_HOME for RW and $ORACLE_BASE/homes/<HOME_NAME> for RO.
I did not activate RO on my dbhome, so I should not have so far any dbs or homes directory in my $ORACLE_BASE:
oracle@vmdmkdev:~/ [rdbms2300] ls $ORACLE_BASE admin audit cfgtoollogs diag oraInventory product
ROOH stands for Read Only Oracle HOME.
Activate ROOH
I will activate ROOH for my dbhome.
oracle@vmdmkdev:~/ [rdbms2300] roohctl -enable Enabling Read-Only Oracle home. Cannot enable Read-Only Oracle home in a configured Oracle home. The Oracle Home is configured with databases 'FREE'.
I still have my FREE database configured against this dbhome, so I just updated the oratab file to comment out its entry.
oracle@vmdmkdev:~/ [rdbms2300] vi /etc/oratab
Then I ran the roohctl command again, and this time it succeeded.
oracle@vmdmkdev:~/ [rdbms2300] roohctl -enable Enabling Read-Only Oracle home. Update orabasetab file to enable Read-Only Oracle home. Orabasetab file has been updated successfully. Create bootstrap directories for Read-Only Oracle home. Bootstrap directories have been created successfully. Bootstrap files have been processed successfully. Bootstrap files have been processed successfully. Read-Only Oracle home has been enabled successfully. Check the log file /opt/oracle/cfgtoollogs/roohctl/roohctl-251003PM023449.log for more details.
Now I have got a homes folder in $ORACLE_BASE as well as a dbs one.
oracle@vmdmkdev:~/ [rdbms2300] ls $ORACLE_BASE admin audit cfgtoollogs dbs diag homes oraInventory product
Contents of homes folder:
oracle@vmdmkdev:~/ [rdbms2300] ls $ORACLE_BASE/homes OraDBHome23aiFree oracle@vmdmkdev:~/ [rdbms2300] ls $ORACLE_BASE/homes/OraDBHome23aiFree/ assistants dbs drdaas hs install mgw network rdbms oracle@vmdmkdev:~/ [rdbms2300] ls $ORACLE_BASE/homes/OraDBHome23aiFree/dbs oracle@vmdmkdev:~/ [rdbms2300]
My dbs folder in $ORACLE_HOME remains, of course, unchanged:
oracle@vmdmkdev:~/ [rdbms2300] cdh oracle@vmdmkdev:/opt/oracle/product/23ai/dbhomeFree/ [rdbms2300] cd dbs oracle@vmdmkdev:/opt/oracle/product/23ai/dbhomeFree/dbs/ [rdbms2300] ls hc_FREE.dat init.ora lkFREE_SITE1 orapwFREE spfileFREE.ora
Deactivate ROOH again
Let’s deactivate ROOH again and make the Oracle Home RW again to start all the tests.
oracle@vmdmkdev:/opt/oracle/product/23ai/dbhomeFree/dbs/ [rdbms2300] roohctl -disable Disabling Read-Only Oracle home. Update orabasetab file to disable Read-Only Oracle home. Orabasetab file has been updated successfully. Read-Only Oracle home has been disabled successfully. Check the log file /opt/oracle/cfgtoollogs/roohctl/roohctl-251003PM024030.log for more details.
The dbs folder in $ORACLE_BASE remains unchanged.
oracle@vmdmkdev:/opt/oracle/product/23ai/dbhomeFree/dbs/ [rdbms2300] cd $ORACLE_BASE oracle@vmdmkdev:/opt/oracle/ [rdbms2300] ls admin audit cfgtoollogs dbs diag homes oraInventory product
Check dbhome status with orabasehome binary
There is a binary, orabasehome, stored in $ORACLE_HOME/bin, which tells me whether my Oracle Home is RO or RW. It simply prints the value of ORACLE_BASE_HOME: either the $ORACLE_HOME folder if it is a RW dbhome, or the $ORACLE_BASE/homes/<HOME_NAME> folder if it is a RO dbhome.
oracle@vmdmkdev:/opt/oracle/dbs/ [rdbms2300] $ORACLE_HOME/bin/orabasehome /opt/oracle/product/23ai/dbhomeFree oracle@vmdmkdev:/opt/oracle/dbs/ [rdbms2300]
In my case, the binary displays $ORACLE_HOME, which is correct.
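Since the output equals $ORACLE_HOME for a RW home and points under $ORACLE_BASE/homes for a RO home, a small comparison is enough to script the check (a sketch to adapt to your environment):
if [ "$($ORACLE_HOME/bin/orabasehome)" = "$ORACLE_HOME" ]; then
  echo "dbhome is Read Write (RW)"
else
  echo "dbhome is Read Only (RO)"
fi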
Start the database
I tested instance startup with the instance files either in the $ORACLE_HOME/dbs or the $ORACLE_BASE/dbs folder.
oracle@vmdmkdev:/opt/oracle/dbs/ [FREE (CDB$ROOT)] sqh SQL*Plus: Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems on Fri Oct 3 14:49:46 2025 Version 23.5.0.24.07 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to an idle instance. SQL> startup ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance ORACLE instance started. Total System Global Area 1070721856 bytes Fixed Size 5368640 bytes Variable Size 335544320 bytes Database Buffers 725614592 bytes Redo Buffers 4194304 bytes Database mounted. Database opened. SQL> shutdown immediate Database closed. Database dismounted. ORACLE instance shut down.
Database is starting correctly.
I moved the instance file from $ORACLE_HOME/dbs to $ORACLE_BASE/dbs:
oracle@vmdmkdev:/opt/oracle/dbs/ [FREE (CDB$ROOT)] ls -ltrh $ORACLE_HOME/dbs total 18M -rw-r-----. 1 oracle oinstall 3.1K May 14 2015 init.ora lrwxrwxrwx. 1 oracle oinstall 38 Sep 26 2024 orapwFREE -> /opt/oracle/admin/FREE/pfile/orapwFREE -rw-r-----. 1 oracle oinstall 24 Sep 26 2024 lkFREE_SITE1 -rw-r-----. 1 oracle oinstall 4.5K Oct 3 14:49 spfileFREE.ora -rw-r-----. 1 oracle oinstall 18M Oct 3 14:59 snapcf_FREE.f -rw-rw----. 1 oracle oinstall 1.6K Oct 3 15:06 hc_FREE.dat oracle@vmdmkdev:/opt/oracle/dbs/ [FREE (CDB$ROOT)] ls -ltrh $ORACLE_BASE/dbs total 0 oracle@vmdmkdev:/opt/oracle/dbs/ [FREE (CDB$ROOT)] mv $ORACLE_HOME/dbs/* $ORACLE_BASE/dbs oracle@vmdmkdev:/opt/oracle/dbs/ [FREE (CDB$ROOT)] ls -ltrh $ORACLE_HOME/dbs total 0 oracle@vmdmkdev:/opt/oracle/dbs/ [FREE (CDB$ROOT)] ls -ltrh $ORACLE_BASE/dbs total 18M -rw-r-----. 1 oracle oinstall 3.1K May 14 2015 init.ora lrwxrwxrwx. 1 oracle oinstall 38 Sep 26 2024 orapwFREE -> /opt/oracle/admin/FREE/pfile/orapwFREE -rw-r-----. 1 oracle oinstall 24 Sep 26 2024 lkFREE_SITE1 -rw-r-----. 1 oracle oinstall 4.5K Oct 3 14:49 spfileFREE.ora -rw-r-----. 1 oracle oinstall 18M Oct 3 14:59 snapcf_FREE.f -rw-rw----. 1 oracle oinstall 1.6K Oct 3 15:06 hc_FREE.dat
Starting the database would fail:
oracle@vmdmkdev:/opt/oracle/dbs/ [FREE (CDB$ROOT)] sqh SQL*Plus: Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems on Fri Oct 3 15:18:36 2025 Version 23.5.0.24.07 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to an idle instance. SQL> startup ORA-01078: failure in processing system parameters LRM-00109: could not open parameter file '/opt/oracle/product/23ai/dbhomeFree/dbs/initFREE.ora' SQL>
So Oracle only looks for the instance files in $ORACLE_HOME/dbs when the dbhome is configured in RW mode, which makes sense.
I moved the database’s instance files back to $ORACLE_HOME/dbs, and deleted $ORACLE_BASE/dbs and $ORACLE_BASE/homes.
oracle@vmdmkdev:/opt/oracle/ [FREE (CDB$ROOT)] mv $ORACLE_BASE/dbs/* $ORACLE_HOME/dbs/ oracle@vmdmkdev:/opt/oracle/ [FREE (CDB$ROOT)] ls $ORACLE_BASE/dbs/ oracle@vmdmkdev:/opt/oracle/ [FREE (CDB$ROOT)] ls $ORACLE_HOME/dbs/ hc_FREE.dat init.ora lkFREE_SITE1 orapwFREE snapcf_FREE.f spfileFREE.ora oracle@vmdmkdev:/opt/oracle/ [FREE (CDB$ROOT)] rmdir $ORACLE_BASE/dbs/ oracle@vmdmkdev:/opt/oracle/ [FREE (CDB$ROOT)] rm -rf $ORACLE_BASE/homes
Activate ROOH again
I activated ROOH for the dbhome again and checked that the dbs and homes folders were created again.
oracle@vmdmkdev:~/ [FREE (CDB$ROOT)] roohctl -enable Enabling Read-Only Oracle home. Update orabasetab file to enable Read-Only Oracle home. Orabasetab file has been updated successfully. Create bootstrap directories for Read-Only Oracle home. Bootstrap directories have been created successfully. Bootstrap files have been processed successfully. Bootstrap files have been processed successfully. Read-Only Oracle home has been enabled successfully. Check the log file /opt/oracle/cfgtoollogs/roohctl/roohctl-251003PM032952.log for more details. oracle@vmdmkdev:~/ [FREE (CDB$ROOT)] ls -l $ORACLE_BASE | grep -iE 'dbs|homes' drwxr-x---. 2 oracle oinstall 6 Oct 10 15:39 dbs drwxr-x---. 3 oracle oinstall 31 Oct 10 15:39 homes oracle@vmdmkdev:~/ [FREE (CDB$ROOT)]
Check dbhome status with orabasehome binary
The binary now shows $ORACLE_BASE/homes/<HOME_NAME>, so RO is really activated on my dbhome.
oracle@vmdmkdev:~/ [FREE (CDB$ROOT)] $ORACLE_HOME/bin/orabasehome /opt/oracle/homes/OraDBHome23aiFree oracle@vmdmkdev:~/ [FREE (CDB$ROOT)]
Start the database
Starting the database would fail, as there is no instance file in $ORACLE_BASE/dbs folder.
oracle@vmdmkdev:~/ [FREE (CDB$ROOT)] sqh SQL*Plus: Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems on Fri Oct 3 15:30:55 2025 Version 23.5.0.24.07 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to an idle instance. SQL> startup ORA-01078: failure in processing system parameters LRM-00109: could not open parameter file '/opt/oracle/dbs/initFREE.ora' SQL>
As expected…
By the way, the inventory.xml file is not displaying anything about RW or RO.
Let’s move the instance files from $ORACLE_HOME/dbs to $ORACLE_BASE/dbs:
oracle@vmdmkdev:~/ [FREE (CDB$ROOT)] ls -l $ORACLE_BASE/dbs total 0 oracle@vmdmkdev:~/ [FREE (CDB$ROOT)] ls -l $ORACLE_HOME/dbs total 18340 -rw-rw----. 1 oracle oinstall 1544 Oct 3 15:06 hc_FREE.dat -rw-r-----. 1 oracle oinstall 3079 May 14 2015 init.ora -rw-r-----. 1 oracle oinstall 24 Sep 26 2024 lkFREE_SITE1 lrwxrwxrwx. 1 oracle oinstall 38 Sep 26 2024 orapwFREE -> /opt/oracle/admin/FREE/pfile/orapwFREE -rw-r-----. 1 oracle oinstall 18759680 Oct 3 14:59 snapcf_FREE.f -rw-r-----. 1 oracle oinstall 4608 Oct 3 14:49 spfileFREE.ora oracle@vmdmkdev:~/ [FREE (CDB$ROOT)] mv $ORACLE_HOME/dbs/* $ORACLE_BASE/dbs oracle@vmdmkdev:~/ [FREE (CDB$ROOT)] ls -l $ORACLE_BASE/dbs total 18340 -rw-rw----. 1 oracle oinstall 1544 Oct 3 15:06 hc_FREE.dat -rw-r-----. 1 oracle oinstall 3079 May 14 2015 init.ora -rw-r-----. 1 oracle oinstall 24 Sep 26 2024 lkFREE_SITE1 lrwxrwxrwx. 1 oracle oinstall 38 Sep 26 2024 orapwFREE -> /opt/oracle/admin/FREE/pfile/orapwFREE -rw-r-----. 1 oracle oinstall 18759680 Oct 3 14:59 snapcf_FREE.f -rw-r-----. 1 oracle oinstall 4608 Oct 3 14:49 spfileFREE.ora oracle@vmdmkdev:~/ [FREE (CDB$ROOT)] ls -l $ORACLE_HOME/dbs total 0 oracle@vmdmkdev:~/ [FREE (CDB$ROOT)]
And the database can be started:
oracle@vmdmkdev:~/ [FREE (CDB$ROOT)] sqh SQL*Plus: Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems on Fri Oct 3 15:48:23 2025 Version 23.5.0.24.07 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to an idle instance. SQL> startup ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance ORACLE instance started. Total System Global Area 1070721856 bytes Fixed Size 5368640 bytes Variable Size 335544320 bytes Database Buffers 725614592 bytes Redo Buffers 4194304 bytes Database mounted. Database opened. SQL>
To wrap up…
Whether the dbhome is RW or RO, Oracle looks in the appropriate dbs directory: $ORACLE_HOME/dbs for RW and $ORACLE_BASE/dbs for RO.
The Oracle inventory will not tell us whether the Oracle dbhome is RW or RO.
The orabasehome binary is the way to know whether the Oracle dbhome is RW or RO.
Possible to relocate a PDB on ExaCC with move option?
In some of my previous blogs, I showed how we were migrating on-premises databases to the ExaCC using Oracle ZDM at one of our customers. I also explained in some articles how important it can be, in some cases, to relocate the PDB to a final CDB. Relocating a PDB on ExaCC is usually done using the dbaascli pdb relocate command. The only drawback of this approach is that dbaascli copies the datafiles during the relocation, which takes time for a multi-terabyte database. In order to minimise the downtime, I had to study another way to relocate the PDB, by moving the datafiles instead of copying them, and could manually relocate it in a few minutes. In this blog, I would like to share my findings with you, hoping they can help some of you.
In my case, I would like to relocate the PDB named PDB_DRY_999T from the source CDB, named CDB_CHZ2, to the target CDB, named CDBTGT_CHZ2.
Source CDB information:
oracle@exacc-cl01n1:~/ [CDB1 (CDB$ROOT)] CDB1 *********************************** INSTANCE_NAME : CDB1 DB_NAME : CDB DB_UNIQUE_NAME : CDB_CHZ2 STATUS : OPEN READ WRITE LOG_MODE : ARCHIVELOG USERS/SESSIONS : 2/9 DATABASE_ROLE : PRIMARY FLASHBACK_ON : YES FORCE_LOGGING : YES VERSION : 19.26.0.0.0 CDB_ENABLED : YES PDBs : PDB_DRY_999T PDB$SEED ***********************************
Target CDB information:
oracle@exacc-cl01n1:~/ [CDB1 (CDB$ROOT)] CDBTGT1 *********************************** INSTANCE_NAME : CDBTGT1 DB_NAME : CDBTGT DB_UNIQUE_NAME : CDBTGT_CHZ2 STATUS : OPEN READ WRITE LOG_MODE : ARCHIVELOG USERS/SESSIONS : 2/10 DATABASE_ROLE : PRIMARY FLASHBACK_ON : YES FORCE_LOGGING : YES VERSION : 19.26.0.0.0 CDB_ENABLED : YES PDBs : PDB$SEED ***********************************
Since we are using TDE on ExaCC, the encryption key will need to be exported and imported. I’m using the FORCE option during the export and import of the encryption key because I’m using AUTO LOGIN.
Export the encryption key of the PDB from the source CDB_CHZ2
Let’s connect to the PDB and check encryption.
oracle@exacc-cl01n1:~/ [CDB1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Fri Aug 29 10:05:55 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 PDB_DRY_999T READ WRITE NO SQL> alter session set container=PDB_DRY_999T; Session altered. SQL> set lines 300 pages 500 SQL> col WRL_PARAMETER format a30 SQL> select WRL_PARAMETER, WRL_TYPE,WALLET_TYPE, status from V$ENCRYPTION_WALLET; WRL_PARAMETER WRL_TYPE WALLET_TYPE STATUS ------------------------------ -------------------- -------------------- ------------------------------ FILE AUTOLOGIN OPEN SQL> col status for a20 SQL> col name for a30 SQL> select a.con_id, name, status, keystore_mode from v$pdbs a, v$encryption_wallet b where a.con_id=b.con_id; CON_ID NAME STATUS KEYSTORE ---------- ------------------------------ -------------------- -------- 3 PDB_DRY_999T OPEN UNITED
We are in united mode, so all CDB and PDB encryption keys are stored in the same keystore.
If we try to export the encryption key, we will get an ORA-46659 error:
SQL> ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS WITH SECRET "********************" to '/home/oracle/mwagner/TDE_for_unplug/PDB_DRY_999T_export.p12' FORCE keystore IDENTIFIED BY "********************"; ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS WITH SECRET "********************" to '/home/oracle/mwagner/TDE_for_unplug/PDB_DRY_999T_export.p12' FORCE keystore IDENTIFIED BY "********************" * ERROR at line 1: ORA-46659: master keys for the given PDB not found
Let’s create a new master key for the PDB.
SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 3 PDB_DRY_999T READ WRITE NO SQL> ADMINISTER KEY MANAGEMENT SET KEY FORCE KEYSTORE IDENTIFIED BY "********************" WITH BACKUP USING 'pre-unplug-PDB_DRY_999T_20250829'; keystore altered. SQL>
And now we can export the pdb encryption key:
SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 3 PDB_DRY_999T READ WRITE NO SQL> ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS WITH SECRET "********************" to '/home/oracle/mwagner/TDE_for_unplug/PDB_DRY_999T_export_pdb_level.p12' FORCE keystore IDENTIFIED BY "********************"; keystore altered. SQL>
Unplug PDB_DRY_999T PDB from the source CDB
Let’s close the PDB and run the ALTER PLUGGABLE DATABASE <PDB_NAME> UNPLUG INTO '<XML_FILE>' command:
SQL> select instance_name from v$instance; INSTANCE_NAME ---------------- CDB1 SQL> show con_name CON_NAME ------------------------------ CDB$ROOT SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 PDB_DRY_999T READ WRITE NO SQL> alter pluggable database PDB_DRY_999T close instances=all; Pluggable database altered. SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 PDB_DRY_999T MOUNTED SQL> ALTER PLUGGABLE DATABASE PDB_DRY_999T UNPLUG INTO '/home/oracle/mwagner/unplug_pdb/PDB_DRY_999T.xml'; Pluggable database altered. SQL> !ls -ltrh /home/oracle/mwagner/unplug_pdb/PDB_DRY_999T.xml -rw-r--r-- 1 oracle asmdba 2.7M Aug 29 10:50 /home/oracle/mwagner/unplug_pdb/PDB_DRY_999T.xml SQL>
In the alert log, we see a warning informing us that the encryption key will need to be imported; otherwise the PDB can only be opened in restricted mode:
PDB_DRY_999T(3):KZTDE:WARNING: Detected that PDB needs to import keys from source. PDB can only open in restricted mode until import.
This of course makes sense.
Drop the PDB but keep the datafiles!!!
As we are going to relocate the PDB by moving the datafiles, it is VERY IMPORTANT to drop the PDB while KEEPING the datafiles.
SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 PDB_DRY_999T MOUNTED SQL> drop pluggable database PDB_DRY_999T keep datafiles; Pluggable database dropped. SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO SQL>
From the alert log file we can see that only the temporary file is dropped. It will be recreated by the plug command.
2025-08-29T10:57:26.137914+02:00
Deleted Oracle managed file +DATAC2/CDB_CHZ2/3D6E40CA5F218D4EE063181FA10A38B0/TEMPFILE/temp.26231.1210348651
2025-08-29T10:57:26.168729+02:00
Stopped service pdb_dry_999t
If we connect to the ASM, we will see that the datafiles are still present. I still have 7.45 TB of datafiles for my PDB.
ASMCMD> du PDB_DRY_999T Used_MB Mirror_used_MB 7809404 23428212 ASMCMD>
Check compatibility for plugging the PDB in the target CDB
I will now connect to the target CDB and check whether the PDB is compatible to be plugged in.
I will run the following PL/SQL block to do so:
set serveroutput on
DECLARE
compatible BOOLEAN := FALSE;
BEGIN
compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
pdb_descr_file => '/home/oracle/mwagner/unplug_pdb/PDB_DRY_999T.xml');
if compatible then
DBMS_OUTPUT.PUT_LINE('Is pluggable PDB_DRY_999T compatible? YES');
else DBMS_OUTPUT.PUT_LINE('Is pluggable PDB_DRY_999T compatible? NO');
end if;
END;
Let’s connect to the target CDB and check:
oracle@exacc-cl01n1:~/ [CDB1 (CDB$ROOT)] CDBTGT1 *********************************** INSTANCE_NAME : CDBTGT1 DB_NAME : CDBTGT DB_UNIQUE_NAME : CDBTGT_CHZ2 STATUS : OPEN READ WRITE LOG_MODE : ARCHIVELOG USERS/SESSIONS : 2/10 DATABASE_ROLE : PRIMARY FLASHBACK_ON : YES FORCE_LOGGING : YES VERSION : 19.26.0.0.0 CDB_ENABLED : YES PDBs : PDB$SEED *********************************** PDB color: pdbname=mount, pdbname=open read-write, pdbname=open read-only Statustime: 2025-08-29 11:07:00 oracle@exacc-cl01n1:~/ [CDBTGT1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Fri Aug 29 11:07:02 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO SQL> set serveroutput on SQL> DECLARE 2 compatible BOOLEAN := FALSE; 3 BEGIN 4 compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY( 5 pdb_descr_file => '/home/oracle/mwagner/unplug_pdb/PDB_DRY_999T.xml'); 6 if compatible then 7 DBMS_OUTPUT.PUT_LINE('Is pluggable PDB_DRY_999T compatible? YES'); 8 else DBMS_OUTPUT.PUT_LINE('Is pluggable PDB_DRY_999T compatible? NO'); 9 end if; 10 END; 11 / Is pluggable PDB_DRY_999T compatible? YES PL/SQL procedure successfully completed. SQL>
The PDB can be plugged into my target CDB.
Import PDB encryption key in the target CDB keystore
We first need to import the PDB encryption key in the CDB, otherwise the plug command would fail with an ORA-28374 error:
SQL> select instance_name from v$instance; INSTANCE_NAME ---------------- CDBTGT1 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO SQL> show con_name CON_NAME ------------------------------ CDB$ROOT SQL> create pluggable database PDB_DRY_999T using '/home/oracle/mwagner/unplug_pdb/PDB_DRY_999T.xml' move; create pluggable database PDB_DRY_999T using '/home/oracle/mwagner/unplug_pdb/PDB_DRY_999T.xml' move * ERROR at line 1: ORA-28374: typed master key not found in wallet
So let’s import the PDB encryption key into the CDB:
SQL> select instance_name from v$instance; INSTANCE_NAME ---------------- CDBTGT1 SQL> show con_name CON_NAME ------------------------------ CDB$ROOT SQL> ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS WITH SECRET "********************" from '/home/oracle/mwagner/TDE_for_unplug/PDB_DRY_999T_export_pdb_level.p12' FORCE keystore IDENTIFIED BY "********************" with backup using 'pre-plug-PDB_DRY_999T_20250829'; keystore altered. SQL>
Plug the PDB in the target CDB CDBTGT_CHZ2
Now we can plug the PDB in the target CDB:
SQL> !date Fri Aug 29 11:27:36 CEST 2025 SQL> select instance_name from v$instance; INSTANCE_NAME ---------------- CDBTGT1 SQL> create pluggable database PDB_DRY_999T using '/home/oracle/mwagner/unplug_pdb/PDB_DRY_999T.xml' move; Pluggable database created. SQL> !date Fri Aug 29 11:30:36 CEST 2025 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 PDB_DRY_999T MOUNTED SQL>
We can see that instead of taking several hours to relocate the PDB with dbaascli, it took about 3 minutes to relocate the PDB by moving the datafiles.
We can check the alert log file for any errors. In my case, I can see the completed operation:
Completed: create pluggable database PDB_DRY_999T using '/home/oracle/mwagner/unplug_pdb/PDB_DRY_999T.xml' move
2025-08-29T11:30:52.168375+02:00
Checking the ASM I can see that the source datafile directory is now empty:
ASMCMD> pwd +DATAC2 ASMCMD> du PDB_DRY_999T Used_MB Mirror_used_MB 48780 146340 ASMCMD> cd PDB_DRY_999T ASMCMD> ls CHANGETRACKING/ DATAGUARDCONFIG/ TEMPFILE/ controlfile/ datafile/ onlinelog/ password/ ASMCMD> cd datafile ASMCMD> pwd +DATAC2/PDB_DRY_999T/datafile ASMCMD> ls ASMCMD>
Start the PDB
Let’s start the PDB.
SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 PDB_DRY_999T MOUNTED SQL> alter pluggable database PDB_DRY_999T open instances=all; Warning: PDB altered with errors. SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 PDB_DRY_999T READ WRITE YES SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 PDB_DRY_999T READ WRITE YES
The PDB started in restricted mode. This matches the warning written to the alert log during the unplug, informing us that the encryption key needs to be imported into the PDB in order to open it without any restriction.
Let’s check the violations.
SQL> select name,cause,type,message,status from PDB_PLUG_IN_VIOLATIONS where status 'RESOLVED'; NAME CAUSE TYPE MESSAGE STATUS --------------- -------------------------------------------------- --------- ------------------------------------------------------------------------------------------------------------------------ -------------------- PDB_DRY_999T Parameter WARNING CDB parameter processes mismatch: Previous 2048 Current 1000 PENDING PDB_DRY_999T Parameter WARNING CDB parameter sga_max_size mismatch: Previous 4G Current 9G PENDING PDB_DRY_999T Parameter WARNING CDB parameter archive_lag_target mismatch: Previous 1800 Current 0 PENDING PDB_DRY_999T Parameter WARNING CDB parameter cluster_database mismatch: Previous FALSE Current TRUE PENDING PDB_DRY_999T Parameter WARNING CDB parameter db_lost_write_protect mismatch: Previous 'typical' Current 'NONE' PENDING PDB_DRY_999T Parameter WARNING CDB parameter inmemory_force mismatch: Previous 'cellmemory_level' Current 'DEFAULT' PENDING PDB_DRY_999T Parameter WARNING CDB parameter wallet_root mismatch: Previous '/var/opt/oracle/dbaas_acfs/CDB/wallet_root' Current '/var/opt/oracle/dbaa PENDING s_acfs/CDBTGT/wallet_root' PDB_DRY_999T Parameter WARNING CDB parameter distributed_lock_timeout mismatch: Previous 360 Current 60 PENDING PDB_DRY_999T Parameter WARNING CDB parameter unified_audit_common_systemlog mismatch: Previous 'LOCAL5.INFO' Current NULL PENDING PDB_DRY_999T Parameter WARNING CDB parameter enable_ddl_logging mismatch: Previous TRUE Current FALSE PENDING PDB_DRY_999T Parameter WARNING CDB parameter awr_pdb_max_parallel_slaves mismatch: Previous 2 Current 10 PENDING PDB_DRY_999T Parameter WARNING CDB parameter awr_snapshot_time_offset mismatch: Previous 1000000 Current 0 PENDING PDB_DRY_999T OPTION WARNING Database option CONTEXT mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0. PENDING PDB_DRY_999T OPTION WARNING Database option DV mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0. PENDING PDB_DRY_999T OPTION WARNING Database option OLS mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0. PENDING PDB_DRY_999T OPTION WARNING Database option ORDIM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0. PENDING PDB_DRY_999T OPTION WARNING Database option OWM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0. PENDING PDB_DRY_999T OPTION WARNING Database option SDO mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0. PENDING PDB_DRY_999T Wallet Key Needed ERROR PDB needs to import keys from source. PENDING 19 rows selected. SQL>
Here we have:
PDB_DRY_999T Wallet Key Needed ERROR PDB needs to import keys from source.
Let’s connect to the PDB and import the encryption key:
SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 PDB_DRY_999T READ WRITE YES SQL> alter session set container=PDB_DRY_999T; Session altered. SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 3 PDB_DRY_999T READ WRITE YES SQL> ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS WITH SECRET "********************" from '/home/oracle/mwagner/TDE_for_unplug/PDB_DRY_999T_export_pdb_level.p12' FORCE keystore IDENTIFIED BY "********************" with backup using 'post-plug-PDB_DRY_999T_20250829'; keystore altered. SQL>
Restart the PDB
Let’s restart the PDB and check the status.
SQL> alter session set container=cdb$root; Session altered. SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 PDB_DRY_999T READ WRITE YES SQL> alter pluggable database PDB_DRY_999T close instances=all; Pluggable database altered. SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 PDB_DRY_999T MOUNTED SQL> alter pluggable database PDB_DRY_999T open instances=all; Pluggable database altered. SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 PDB_DRY_999T READ WRITE NO SQL>
We can see that the PDB is now open without any restriction. Let’s check the PDB violations:
SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 3 PDB_DRY_999T READ WRITE NO SQL> col name for a15 SQL> col message for a120 SQL> col cause for a50 SQL> select name,cause,type,message,status from PDB_PLUG_IN_VIOLATIONS where status 'RESOLVED'; NAME CAUSE TYPE MESSAGE STATUS --------------- -------------------------------------------------- --------- ------------------------------------------------------------------------------------------------------------------------ -------------------- PDB_DRY_999T OPTION WARNING Database option CONTEXT mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0. PENDING PDB_DRY_999T OPTION WARNING Database option DV mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0. PENDING PDB_DRY_999T OPTION WARNING Database option OLS mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0. PENDING PDB_DRY_999T OPTION WARNING Database option ORDIM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0. PENDING PDB_DRY_999T OPTION WARNING Database option OWM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0. PENDING PDB_DRY_999T OPTION WARNING Database option SDO mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0. PENDING 6 rows selected. SQL>
There are no more blocking violations; only the option mismatch warnings remain.
Check the PDB
Check datafiles location:
SQL> select name from v$datafile where rownum select count(*) from v$datafile where name like '+DATAC2/CDBTGT_CHZ2/3D6E40CA5F218D4EE063181FA10A38B0/DATAFILE%'; COUNT(*) ---------- 2830 SQL> select count(*) from v$datafile where name not like '+DATAC2/CDBTGT_CHZ2/3D6E40CA5F218D4EE063181FA10A38B0/DATAFILE%'; COUNT(*) ---------- 0 SQL>
All good, PDB datafiles are in the expected ASM folder.
Check that there are no invalid objects:
SQL> select count(*) from dba_invalid_objects; COUNT(*) ---------- 0
And finally, check that all tablespaces are encrypted:
SQL> select count(*) from cdb_tablespaces a, (select x.con_id, y.ENCRYPTIONALG, x.name from V$TABLESPACE x, V$ENCRYPTED_TABLESPACES y 2 where x.ts#=y.ts# and x.con_id=y.con_id) b where a.con_id=b.con_id(+) and a.tablespace_name=b.name(+) and b.ENCRYPTIONALG='AES128'; COUNT(*) ---------- 2831 SQL> select count(*) from cdb_tablespaces a, (select x.con_id, y.ENCRYPTIONALG, x.name from V$TABLESPACE x, V$ENCRYPTED_TABLESPACES y 2 where x.ts#=y.ts# and x.con_id=y.con_id) b where a.con_id=b.con_id(+) and a.tablespace_name=b.name(+) and b.ENCRYPTIONALG is null; COUNT(*) ---------- 0 SQL>
The PDB is also registered in the listener:
oracle@exacc-cl01n1:~/ [CDBTGT1 (CDB$ROOT)] export TNS_ADMIN=/u02/app/oracle/product/19.0.0.0/dbhome_1/network/admin/listener_test/ oracle@exacc-cl01n1:~/ [CDBTGT1 (CDB$ROOT)] lsnrctl status LISTENER_TEST| grep -i PDB_DRY_999T Service "pdb_dry_999t.domain.com" has 1 instance(s). oracle@exacc-cl01n1:~/ [CDBTGT1 (CDB$ROOT)]
To wrap up
Here we have a good alternative for relocating a PDB on ExaCC while avoiding a copy of the datafiles. This has the benefit of keeping the maintenance window short.
L’article Possible to relocate a PDB on ExaCC with move option? est apparu en premier sur dbi Blog.
MongoDB Log Analysis : A Comprehensive Guide
Contrary to many other DBMSs, MongoDB self-managed deployments generate logs in JSON format, which can be quite intimidating at first. But instead of hoping you never have to look at the logs, you can use some tools and tips to navigate them more easily and avoid wasting time finding the information you’re looking for.
Log format overview
Inside /var/log/mongodb/mongod.log (or a custom path if you set one), a typical log entry looks like this (shortened, for readability) :
{"t":{"$date":"2025-03-06T14:54:28.298+01:00"},"s":"I", "c":"CONTROL", "id":8423403, "ctx":"initandlisten","msg":"mongod startup complete","attr":{"Summary of time elapsed":{"Startup from clean shutdown?":false,"Statistics":{"Set up periodic runner":"0 ms","Set up online certificate status protocol manager":"0 ms",[...],"Start transport layer":"0 ms","_initAndListen total elapsed time":"626 ms"}}}}
At first glance, it’s pretty difficult to extract the essential information, but let’s see how the document looks like when it’s formatted (we’ll see how to do that later).
{
"t": {
"$date": "2025-03-06T14:54:28.298+01:00"
},
"s": "I",
"c": "CONTROL",
"id": 8423403,
"ctx": "initandlisten",
"msg": "mongod startup complete",
"attr": {
"Summary of time elapsed": {
"Startup from clean shutdown?": false,
"Statistics": {
"Set up periodic runner": "0 ms",
"Set up online certificate status protocol manager": "0 ms",
[...] # lines hidden
"Start transport layer": "0 ms",
"_initAndListen total elapsed time": "626 ms"
}
}
}
}
Here is a description of the main fields of the log document :
- t : Timestamp of the log entry.
- s : Severity code associated with the log entry (E for error, W for warning, I for information and D1 to D5 for debug).
- c : Category of the log entry. Most common categories are CONTROL, COMMAND, ELECTION, REPL (for replication) or NETWORK. An extensive list is available in the official MongoDB documentation.
- id : Unique log entry ID.
- ctx : Thread that generated the log.
- msg : Usually a short message describing the log.
- attr : Optional additional attributes.
This will help us when looking at the logs, first with mongosh.
mongosh
You can query logs inside the MongoDB shell called mongosh. To do so, use the getLog admin command :
db.adminCommand({ getLog: "global"}); // display all log entries
Another useful option is to view startup warnings, which will only display warning logs since last startup.
db.adminCommand({ getLog: "startupWarnings" }) // display startup warnings
{
totalLinesWritten: 2,
log: [
'{"t":{"$date":"2025-03-07T08:32:41.005+01:00"},"s":"W", "c":"NETWORK", "id":5123300, "ctx":"initandlisten","msg":"vm.max_map_count is too low","attr":{"currentValue":65530,"recommendedMinimum":102400,"maxConns":51200},"tags":["startupWarnings"]}\n',
'{"t":{"$date":"2025-03-07T08:32:41.005+01:00"},"s":"W", "c":"CONTROL", "id":8386700, "ctx":"initandlisten","msg":"We suggest setting swappiness to 0 or 1, as swapping can cause performance problems.","attr":{"sysfsFile":"/proc/sys/vm/swappiness","currentValue":60},"tags":["startupWarnings"]}\n'
],
ok: 1
}
Even though this can sometimes be useful, it requires an authenticated access to the database, and it only works when the mongod process is running. You won’t be able to use this method when the database crashes, for instance. Moreover, the logs are difficult to read.
Most of the time, you will be better served by the jq utility.
jq
jq is a powerful utility that helps you navigate JSON documents, and even though it is not an official MongoDB product, you should always integrate it in your MongoDB deployments.
The first benefit of the jq command is to display MongoDB logs in a readable format :
> head -1 mongod.log | jq
{
"t": {
"$date": "2025-03-05T14:44:28.531+01:00"
},
"s": "I",
"c": "CONTROL",
"id": 23285,
"ctx": "main",
"msg": "Automatically disabling TLS 1.0"
}
Of course, a single line of log will now span multiple lines in the output. But thanks to the log structure explained above, we can write our first queries with jq to filter the results and only display what’s important.
I would definitely recommend building aliases for the following commands, so that you can quickly access the information you find most valuable in the logs.
Display error messages
By using the s field (severity), we can filter the logs to only display error messages. This is especially useful when failing to start a MongoDB instance.
jq 'select(.s == "E")' mongod.log
You can also include warnings by slightly modifying the command.
jq 'select(.s == "E" or .s == "W")' mongod.log
Filtering displayed fields
When selecting fields, you should pipe the jq filtering to a JSON object like this one :
{time: .t["$date"], message: .msg, error: .attr.error}
From now on, I will use the -c option to display the JSON in the compact format. Even in this format, logs can be readable when you select or exclude specific fields. Here, I want to select the .t["$date"], .msg and .attr.error fields. To improve the display, I will rename them :
> jq -c 'select(.s == "E") | {time: .t["$date"], message: .msg, error: .attr.error}' mongod.log
{"time":"2025-03-05T14:44:28.665+01:00","message":"WiredTiger error message","error":13}
{"time":"2025-03-05T14:44:28.665+01:00","message":"WiredTiger error message","error":13}
{"time":"2025-03-05T14:44:28.665+01:00","message":"WiredTiger error message","error":13}
{"time":"2025-03-06T10:17:07.383+01:00","message":"DBException in initAndListen, terminating","error":"Location28596: Unable to determine status of lock file in the data directory /var/lib/mongodb: boost::filesystem::status: Permission denied [system:13]: \"/var/lib/mongodb/mongod.lock\""}
Similarly, you can exclude a field with the del function. For instance, this will remove the message sub-field located inside the attr field.
jq 'del(.attr.message)' mongod.log
Errors and warnings grouped by message
To check for recurrent warnings or errors, you can pipe the jq output to group the results by msg content.
jq 'select(.s == "E" or .s=="W") | .msg' mongod.log | sort | uniq -c | sort -nr | head
Occurrences of each log severity
If you want to quickly count the number of entries for each severity level, you can do so with the s field.
> jq '.s' mongod.log | sort | uniq -c
10 "E"
3 "F"
1727 "I"
88 "W"
View logs for specific log categories
As mentioned before, the category of the logs might be interesting to filter (only the replication logs, for instance).
jq -c 'select(.c == "REPL")' mongod.log
Filter logs by date
Whether you use log rotation or not, you might want to quickly access the last minutes/hours/days of logs. With the date utility, you can retrieve the most recent logs :
jq -c --arg since "$(date -d '10 minutes ago' +%Y-%m-%dT%H:%M:%S)" 'select(.t["$date"] >= $since)' mongod.log
Still filtering on the .t["$date"] field, you can also extract a portion of the logs :
jq -c 'select(.t["$date"] >= "2025-03-06T14:30:00" and .t["$date"] <= "2025-03-06T14:40:00")' mongod.log
Look for a specific pattern in the log
Of course, you can always use grep followed by jq to find a pattern in the logs : grep -i "pattern" mongod.log | jq
But if you want to look for a specific pattern inside a specific field, you can do so with the test function :
> jq -c 'select(.msg | test("failed to authenticate"; "i"))' mongod.log // (i option for case insensitivity)
{"t":{"$date":"2025-03-07T08:37:52.950+01:00"},"s":"I","c":"ACCESS","id":5286307,"ctx":"conn18","msg":"Failed to authenticate","attr":{"client":"xxx.xxx.xxx.xxx(ip):xxxxx(port)","isSpeculative":true,"isClusterMember":false,"mechanism":"SCRAM-SHA-256","user":"root","db":"admin","error":"AuthenticationFailed: SCRAM authentication failed, storedKey mismatch","result":18,"metrics":{"conversation_duration":{"micros":5091,"summary":{"0":{"step":1,"step_total":2,"duration_micros":62},"1":{"step":2,"step_total":2,"duration_micros":48}}}},"extraInfo":{}}}
Check for logs regarding connections to the MongoDB database
For filtering connection logs, search for the attr.remote field :
jq -c 'select(.attr.remote)' mongod.log
Analyzing slow queries with jq
Inside the mongo shell, you can activate logging for slow queries with db.setProfilingLevel(1, <slowms>), with <slowms> being the threshold (in milliseconds) to log such queries.
Warnings related to slow queries in MongoDB :
- Once activated, slow query logging could itself slow down the database, so be very careful when enabling it.
- There is a security risk when combining slow query logging and queryable encryption, since queries will not be encrypted in the mongod.log file.
Slow query logs look like this :
{
"t": { "$date": "2024-03-06T12:34:56.789Z" },
"s": "I",
"c": "COMMAND",
"id": 123,
"ctx": "conn20",
"msg": "Slow query",
"attr": {
"ns": "mydb.coll",
"command": { "find": "coll", "filter": { "status": "active" } },
"planSummary": "COLLSCAN",
"keysExamined": 0,
"docsExamined": 5000,
"numYields": 0,
"reslen": 2000,
"locks": { "Global": { "acquireCount": { "r": 1 } } },
"durationMillis": 150
}
}
With this in mind, and with what we have already seen, you can filter the logs with the fields you want, like attr.durationMillis (duration of the query, in milliseconds), or attr.ns, which is the namespace (database and collection) the query runs against.
For instance, if you want to retrieve slow queries above a given threshold (one second, in the example below) :
jq 'select(.attr.durationMillis >= 1000)' mongod.log
Or if you want to filter slow queries on a specific database mydb and collection coll :
jq 'select(.msg == "Slow query" and .attr.ns == "mydb.coll")' mongod.log
You can also select only queries that are run on a given database mydb :
jq 'select(.msg == "Slow query" and .attr.command["$db"] == "mydb")' mongod.log
Conclusion
While being a bit complex at first sight, MongoDB logs are very useful once you know how to approach them. By leveraging the jq utility for advanced filtering, and combining it with monitoring tools, you can efficiently analyze logs and improve your efficiency as a DBA.
L’article MongoDB Log Analysis : A Comprehensive Guide est apparu en premier sur dbi Blog.
From Oracle’s PL/SQL to PostgreSQL’s PL/pgSQL – 2 – Records, outputs and constraints
In the last post we learned the basics for migrating a sample procedure from PL/SQL to PL/pgSQL. In this post we’ll actually start migrating the code. The first step in the original procedure was to iterate over the foreign key constraints of the user’s tables and to disable them:
for i in ( select table_name
, constraint_name
from user_constraints
where constraint_type ='R'
and status = 'ENABLED' )
loop
execute immediate 'alter table ' || i.table_name || ' disable constraint ' || i.constraint_name;
end loop;
dbms_output.put_line ( 'Disabled foreign key constraints');
If you try this as it is in PL/pgSQL this will fail for several reasons:
postgres=> \! cat a.sql
create or replace procedure p1() as
$$
declare
begin
for i in ( select table_name
, constraint_name
from user_constraints
where constraint_type ='R'
and status = 'ENABLED' )
loop
execute immediate 'alter table ' || i.table_name || ' disable constraint ' || i.constraint_name;
end loop;
dbms_output.put_line ( 'Disabled foreign key constraints');
end;
$$ language plpgsql;
postgres=> \i a.sql
psql:a.sql:16: ERROR: loop variable of loop over rows must be a record variable or list of scalar variables
LINE 5: for i in ( select table_name
The error message is pretty clear: we need to use a variable of type “record” if we want to iterate over a number of rows, so:
postgres=> \! cat a.sql
create or replace procedure p1() as
$$
declare
i record;
begin
for i in ( select table_name
, constraint_name
from user_constraints
where constraint_type ='R'
and status = 'ENABLED' )
loop
execute immediate 'alter table ' || i.table_name || ' disable constraint ' || i.constraint_name;
end loop;
dbms_output.put_line ( 'Disabled foreign key constraints');
end;
$$ language plpgsql;
postgres=> \i a.sql
psql:a.sql:17: ERROR: syntax error at or near "dbms_output"
LINE 14: dbms_output.put_line ( 'Disabled foreign key constraints'...
dbms_output is an Oracle package which does not exist in PostgreSQL. We have two options:
- Create a schema called “dbms_output” and in there create a procedure “put_line” which does what we want (see the sketch after this list)
- Use what PL/pgSQL provides by default, and this is RAISE
(There is a third option, which is orafce, but we’re not going to look into this one)
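If you want to keep the dbms_output.put_line calls in the code, the first option could be sketched like this. This is a minimal sketch and not something provided by PostgreSQL out of the box:

create schema if not exists dbms_output;

create or replace procedure dbms_output.put_line(p_text text) as
$$
begin
    -- emulate Oracle's DBMS_OUTPUT.PUT_LINE with a NOTICE message
    raise notice '%', p_text;
end;
$$ language plpgsql;

Even with such a shim the calling code still needs to be adjusted, because PL/pgSQL requires CALL (or PERFORM for functions) for such statements. That is why the rest of this post simply uses RAISE.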
postgres=> \! cat a.sql
create or replace procedure p1() as
$$
declare
i record;
begin
for i in ( select table_name
, constraint_name
from user_constraints
where constraint_type ='R'
and status = 'ENABLED' )
loop
execute immediate 'alter table ' || i.table_name || ' disable constraint ' || i.constraint_name;
end loop;
raise notice 'Disabled foreign key constraints';
end;
$$ language plpgsql;
postgres=> \i a.sql
CREATE PROCEDURE
Now the procedure can be created, but there are more issues to fix. If we execute the procedure we get this:
postgres=> call p1();
ERROR: relation "user_constraints" does not exist
LINE 3: from user_constraints
^
QUERY: ( select table_name
, constraint_name
from user_constraints
where constraint_type ='R'
and status = 'ENABLED' )
CONTEXT: PL/pgSQL function p1() line 5 at FOR over SELECT rows
“user_constraints” is an Oracle catalog view, and this view of course does not exist in PostgreSQL. But PostgreSQL provides the catalog table pg_constraint, so:
postgres=> \! cat a.sql
create or replace procedure p1() as
$$
declare
i record;
begin
for i in ( select conrelid::regclass as table_name
, conname as constraint_name
from pg_constraint
where contype ='f' )
loop
execute immediate 'alter table ' || i.table_name || ' disable constraint ' || i.constraint_name;
end loop;
raise notice 'Disabled foreign key constraints';
end;
$$ language plpgsql;
postgres=> \i a.sql
CREATE PROCEDURE
postgres=> call p1();
ERROR: type "immediate" does not exist
LINE 1: immediate 'alter table ' || i.table_name || ' disable constr...
^
QUERY: immediate 'alter table ' || i.table_name || ' disable constraint ' || i.constraint_name
CONTEXT: PL/pgSQL function p1() line 10 at EXECUTE
Next issue: “IMMEDIATE” does not exist in PL/pgSQL, but this is easy to fix, just remove it:
postgres=> \! cat a.sql
create or replace procedure p1() as
$$
declare
i record;
begin
for i in ( select conrelid::regclass as table_name
, conname as constraint_name
from pg_constraint
where contype ='f' )
loop
execute 'alter table ' || i.table_name || ' disable constraint ' || i.constraint_name;
end loop;
raise notice 'Disabled foreign key constraints';
end;
$$ language plpgsql;
postgres=> \i a.sql
CREATE PROCEDURE
postgres=> call p1();
ERROR: syntax error at or near "constraint"
LINE 1: alter table t2 disable constraint t2_b_fkey
^
QUERY: alter table t2 disable constraint t2_b_fkey
CONTEXT: PL/pgSQL function p1() line 10 at EXECUTE
In PostgreSQL you cannot disable constraints as you can in Oracle:
postgres=# alter table t2 disable
ROW LEVEL SECURITY RULE TRIGGER
The way to do this in PostgreSQL is to drop the constraints and re-create them afterwards but for this we need to store the “create” commands. One way of doing that would be something like this:
create or replace procedure p1() as
$$
declare
i record;
constraint_create text[];
tmp text;
tmp2 text;
counter int := 1;
begin
for i in ( select oid
, conrelid::regclass as table_name
, conname as constraint_name
from pg_constraint
where contype ='f' )
loop
select 'alter table ' || i.table_name || ' add constraint ' || i.constraint_name || ' ' || pg_get_constraintdef ( i.oid ) into tmp;
constraint_create[counter] := tmp;
execute 'alter table ' || i.table_name || ' drop constraint ' || i.constraint_name;
counter := counter + 1;
end loop;
raise notice 'Dropped foreign key constraints';
-- load data
-- re-create the foreign keys
foreach tmp2 in array constraint_create
loop
raise notice 'Re-creating foreign key constraint: %', tmp2;
execute tmp2;
end loop;
end;
$$ language plpgsql;
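To sanity check the procedure you can call it and verify that the foreign key is back in place afterwards (a quick verification example, not part of the original migration):

call p1();

-- the foreign key on t2 should be listed again after the call
select conrelid::regclass as table_name, conname
from pg_constraint
where contype = 'f';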
This almost completes the migration of the code but there is still the topic of “exceptions”, and that’s for the next post.
L’article From Oracle’s PL/SQL to PostgreSQL’s PL/pgSQL – 2 – Records, outputs and constraints est apparu en premier sur dbi Blog.
M-Files Electronic and Digital Signatures
Who has never had to deal with a document signature in their daily business life? There is no room for doubt concerning the usefulness of this functionality, especially in ECM software like M-Files. A lot of solutions exist. Let me share a few of them that you may consider using and integrating in your beloved M-Files in order to fulfill your needs.
M-Files out of the box signing solution
In M-Files, an electronic signature is always tied to a workflow state transition or to an assignment completion, where end users provide system credentials to complete the action.

After providing the user credentials and the reason for transition, the signing event is logged in the event log.

Let’s pause for a moment and review the terminology used so that we are all on the same page.
Relying on the Code of Federal Regulations (21 CFR Part 11) definitions, specific to electronic records and electronic signatures, including electronic submissions to the Food and Drug Administration:
- An electronic signature is a computer data compilation of any symbol or series of symbols executed, adopted, or authorized by an individual to be the legally binding equivalent of the individual’s handwritten signature.
- A digital signature is an electronic signature based upon cryptographic methods of originator authentication, computed by using a set of rules and a set of parameters such that the identity of the signer and the integrity of the data can be verified.
As you may have understood, legal concepts come into play to prove who signed what, based on the signer’s intent and consent. All data gathered during the signing process is saved to the M-Files database, either in text properties and/or in dedicated electronic signature objects that can optionally be rendered into a PDF.
Hence, rest assured, all M-Files signing solutions fulfill the technical requirements of common regulatory guidelines such as 21 CFR Part 11 and EU GMP Annex 11.
This said, note that PDF processing is mostly used to produce the final signed document. Additional overlays, watermark stamping and other metadata information may be generated. The Portable Document Format contributes heavily to authenticity, conformity and falsification detection. To achieve that, a cryptography-based process is used for digital signatures, ensuring the authenticity of signed documents, providing tampering detection and saving the digital signature certificate into the document files.
Let’s take a look at a few examples.
Trusted Digital Signature
PDF documents can be digitally signed with digital signature certificates. Typically, for two purposes:
– To prove the identity of the signer
– To prove that the document has not been modified after signing
There are two kinds of digital signature certificates: hardware-based and software-based.

This solution is very useful to sign PDF files (only) with digital signature certificates installed on your M-Files server. This requires a license and, unfortunately, is not supported in M-Files Cloud.
M-Files for DocuSign
The DocuSign cloud service is a web application that can be used to digitally sign PDF documents. You still compose the document in M-Files, but then move it to the “Sent for signing” state in a dedicated workflow so it will automatically be uploaded to DocuSign to handle the signing process. Of course, all signers receive an email assignment.

Similarly, using the Adobe Acrobat Sign cloud service allows M-Files users and external persons to edit and sign documents remotely. Sender(s) and signer(s) receive e-mail notifications without requiring any Adobe subscription for signers.


From an M-Files technical point of view, in both solutions above, configuration is, as always, all about workflow settings, object types, classes and property definitions. All of this is of course driven by your business needs.

Hence, your M-Files Business Administrator will bring all this to life, mapping required metadata properties into “M-Files for DocuSign” or “M-Files for Adobe Acrobat Sign” application vault console:

Last but not least, do not forget to consider the plans and subscriptions required by these signing cloud services. By default, the licenses bound to one or the other service are not included in the M-Files add-on modules you may have subscribed to. Choosing one feature set or another is entirely tied to your firm’s business processes and to the volume of end-user document signatures generated monthly.
Finally, you may also consider using additional M-Files modules such as the PDF Processor or Electronic Signatures from the M-Files Compliance Kit. This will allow you to greatly extend your business task capabilities and facilitate the integration of the signing process.
Do not hesitate to contact dbi services if you need assistance or any M-Files support and guidance around this topic and others.
L’article M-Files Electronic and Digital Signatures est apparu en premier sur dbi Blog.
From Oracle’s PL/SQL to PostgreSQL’s PL/pgSQL – 1 – Basics
Migrating from one database system to another is not an easy task. It might be easy if you only need to migrate the data, but as soon as you have business logic in the database this can become a lot of work quite fast. When you want to migrate from Oracle to PostgreSQL this means rewriting all your PL/SQL code into something which PostgreSQL understands, and this usually is PL/pgSQL as it is very close to PL/SQL. Some might say that you mustn’t put any business logic into the database at all, but this is another discussion and will not be handled here or in any of the follow-up posts.
There are some tools, such as ora2pg, which give you some automatic translation from PL/SQL to PL/pgSQL, but you have to carefully check the result and go through all the code anyway. ora2pg gives you a great starting point, but the work begins after the automatic translation.
In this first post, we’ll look into the very basics of both languages. The starting point in Oracle is this:
sqlplus sys@ora.it.dbi-services.com/XEPDB1 as sysdba
create user u
identified by u
default tablespace users
quota 10M on users
temporary tablespace temp;
grant create session to u;
grant create table to u;
grant create procedure to u;
exit;
sqlplus u@ora.it.dbi-services.com/XEPDB1
drop table t2;
drop table t;
create table t ( a number primary key , b number check ( b> 10 ));
create table t2 ( a number primary key, b number references t(a) );
insert into t (a,b) values (1,11);
insert into t (a,b) values (2,12);
insert into t2 (a,b) values (1,1);
insert into t2 (a,b) values (2,1);
insert into t2 (a,b) values (3,1);
insert into t2 (a,b) values (4,1);
insert into t2 (a,b) values (5,1);
insert into t2 (a,b) values (6,2);
commit;
This gives you a dedicated user with enough permissions to connect, create tables and procedures. This user is then used to create two simple tables connected by a foreign key constraint. While this is really simple, the following (and all follow-up) examples are redacted and simplified real customer use cases.
The procedure to be migrated is this one:
create or replace procedure p as
ln_count number;
begin
for i in ( select table_name
, constraint_name
from user_constraints
where constraint_type ='R'
and status = 'ENABLED' )
loop
execute immediate 'alter table ' || i.table_name || ' disable constraint ' || i.constraint_name;
end loop;
dbms_output.put_line ( 'Disabled foreign key constraints');
select count(*)
into ln_count
from t2
where b = 2;
-- some more logic around counting
--
delete
from t
where a = 2;
delete
from t2
where b = 2;
commit;
dbms_output.put_line ( 'Commit done');
for i in ( select table_name
, constraint_name
from user_constraints
where constraint_type ='R'
and status = 'DISABLED' )
loop
execute immediate 'alter table ' || i.table_name || ' enable constraint ' || i.constraint_name;
end loop;
dbms_output.put_line ( 'Enabled foreign key constraints');
exception
when others then
for i in ( select table_name
, constraint_name
from user_constraints
where constraint_type ='R'
and status = 'DISABLED' )
loop
execute immediate 'alter table ' || i.table_name || ' enable constraint ' || i.constraint_name;
end loop;
dbms_output.put_line ( 'Enabled foreign key constraints');
raise_application_error(-20001, 'runtime_exception', true );
end;
/
Again, this is very simplified and there was more code in the real procedure, but this is enough to talk about the topic of this post.
When this procedure is executed we get the following result in Oracle:
SQL> select * from t;
A B
---------- ----------
1 11
SQL> select * from t2;
A B
---------- ----------
1 1
2 1
3 1
4 1
5 1
The starting point in PostgreSQL is this:
$ psql
create user u with login password 'u';
create schema u;
alter schema u owner to u;
\c postgres u
set search_path=u;
create table t ( a int primary key, b int check ( b > 10 ));
create table t2 ( a int primary key, b int references t(a));
insert into t (a,b) values (1,11);
insert into t (a,b) values (2,12);
insert into t2 (a,b) values (1,1);
insert into t2 (a,b) values (2,1);
insert into t2 (a,b) values (3,1);
insert into t2 (a,b) values (4,1);
insert into t2 (a,b) values (5,1);
insert into t2 (a,b) values (6,2);
… which gives you pretty much the same as in Oracle (except for the schema and integer data types).
Let’s start with migrating that procedure, step by step. The “create procedure” command in PostgreSQL works like this (there is a bit more, but it is the simplest form to start with):
create procedure p() as
$$
declare
begin
null;
end; $$ language plpgsql;
PostgreSQL uses dollar quoting which saves you from constantly escaping special characters in strings, e.g.:
postgres=> select $$""""'''''%%%&&&&&$$;
?column?
-------------------
""""'''''%%%&&&&&
(1 row)
If you have the “$” sign itself in the string you can use tags ($xx$ in the example below):
postgres=> select $xx$""""$$'''''%%%&&&&&$xx$;
?column?
---------------------
""""$$'''''%%%&&&&&
(1 row)
The same concept is used for the body (code) of user defined functions and procedures.
“language” tells PostgreSQL which language you’re writing the code in. Here we use PL/pgSQL, but this could also be Python, Perl or Tcl in a typical PostgreSQL installation (more languages are available as external extensions).
The “declare” block is used to declare variables you want to use in the code (more on that in a later post) and “begin” and “end” wraps the actual code.
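To see these building blocks working together before we migrate the real procedure, here is a tiny self-contained example (p_demo and ln_count are just illustrative names, not part of the original migration):

create or replace procedure p_demo() as
$$
declare
    ln_count integer := 0;                      -- declared variable, Oracle's NUMBER becomes integer/numeric
begin
    select count(*) into ln_count from t2;      -- same SELECT ... INTO syntax as in PL/SQL
    raise notice 'rows in t2: %', ln_count;     -- the PL/pgSQL way of printing, instead of dbms_output.put_line
end;
$$ language plpgsql;

call p_demo();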
Now that we know the basic building blocks we can go further and start to code the procedure so that it is doing the same as the procedure in PL/SQL. This is the topic for the next post.
L’article From Oracle’s PL/SQL to PostgreSQL’s PL/pgSQL – 1 – Basics est apparu en premier sur dbi Blog.
RAG Series – Naive RAG
Since my last series on pgvector, I have had quite a fun time working on RAG workflows on top of pgvector, learned some valuable lessons, and decided to share some of them in a blog post series on the matter.
We will discover together where RAG best practices have landed over the past two years and how, as a DBA or “AI workflow engineer”, you can improve your designs to be production fit.
We start this series with Naïve RAG; it is quite well known, but important and foundational for the next posts of this series.
Retrieval-Augmented Generation (RAG) is a technique that combines the power of large language models (LLMs) with information retrieval. Instead of relying solely on an LLM’s internal knowledge (which may be outdated or limited, and prone to hallucinations), a RAG system retrieves relevant external documents and provides them as context for the LLM to generate a response. In practice, this means when a user asks a question, the system will retrieve a set of relevant text snippets (often from a knowledge base or database) and augment the LLM’s input with those snippets, so that the answer can be grounded in real data. This technique is key for integrating business or organization data with LLM capabilities because it allows you to implement business rules, guidelines, governance, data privacy constraints, etc.
Naïve RAG is the first logical step to understand how the retrieval part works and how it can impact the LLM output.
A RAG pipeline typically involves the following steps:
- Document Embedding Storage – Your knowledge base documents are split into chunks and transformed into vector embeddings, which are stored in a vector index or database (a minimal table sketch follows after this list).
- Query Embedding & Retrieval – The user’s query is converted into an embedding and the system performs a similarity search in the vector index to retrieve the top-k most relevant chunks.
- Generation using LLM – The retrieved chunks (as context) plus the query are given to an LLM which generates the final answer.
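To give the first step a concrete shape on the PostgreSQL side, a minimal storage sketch could look like this. The doc_chunks table and its columns are illustrative assumptions; the lab itself stores the embeddings directly on the articles table, as shown later in this post:

CREATE EXTENSION IF NOT EXISTS vector;

-- one row per chunk, with the embedding stored next to the text
CREATE TABLE doc_chunks (
    id        bigserial PRIMARY KEY,
    doc_id    bigint,
    content   text,
    embedding vector(1536)   -- the dimension must match your embedding model
);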
Clone the repository and explore this implementation:
git clone https://github.com/boutaga/pgvector_RAG_search_lab
cd pgvector_RAG_search_lab
The lab includes:
- Streamlit interface for testing different search methods
- n8n workflows for orchestrating the RAG pipeline
- Embedding generation scripts supporting multiple models
- Performance comparison tools to evaluate different approaches
Before diving deeper, it’s worth contrasting the vector-based semantic search used in RAG with traditional keyword-based search techniques (like SQL LIKE queries or full-text search indexes). This is especially important for DBAs who are familiar with SQL and may wonder why a vector approach is needed.
Traditional Search (SQL LIKE, full-text): Matches literal terms or boolean combinations. Precise for exact matches but fails when queries use different wording. A search for “car” won’t find documents about “automobiles” without explicit synonym handling.
Semantic Vector Search: Converts queries and documents into high-dimensional vectors encoding semantic meaning. Finds documents whose embeddings are closest to the query’s embedding in vector space, enabling retrieval based on context rather than exact matches.
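A small SQL sketch makes the difference tangible. The column names follow the embedding columns generated by the lab script later in this post, and :query_embedding is a placeholder for the embedding of the user’s query, computed by your application:

-- keyword search: only finds rows containing the literal string "car"
SELECT title
FROM articles
WHERE content ILIKE '%car%';

-- semantic search: finds the nearest neighbours of the query embedding,
-- which can surface articles about "automobiles" even if "car" never appears
SELECT title
FROM articles
ORDER BY content_vector_3072 <=> :query_embedding
LIMIT 5;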
The key advantage: semantic search improves recall when wording varies and excels with natural language queries. However, traditional search still has value for exact phrases or specific identifiers. Many production systems implement hybrid search combining both approaches (covered in a later post).
I am not going to go through all the types of search available in PostgreSQL, but here is a diagram showing the historical and logical steps we went through over the past decades.

Key point: moving to vector search enables semantic retrieval that goes beyond what SQL LIKE or standard full-text indexes can achieve. It allows your RAG system to find the right information even when queries use different phrasing, making it far more robust for knowledge-based Q&A.
Let’s break down how to implement a Naïve RAG pipeline properly, using the example from the pgvector_RAG_search_lab repository. We’ll go through the major components and discuss best practices at each step: document chunking, embedding generation, vector indexing, the retrieval query, and finally the generation step.
Here is a diagram of the entire data process :

Chunking documents: Large documents (e.g. long articles, manuals, etc.) need to be split into smaller pieces called chunks before embedding. Choosing the right chunking strategy is crucial. If chunks are too large, they may include irrelevant text along with relevant info; if too small, you might lose context needed to answer questions.
- Chunk size: test on your own data but a rule of thumb could be 100-150 tokens for factoid queries, 300+ for contextual queries but sentence or paragraph chunking are also an option.
Generating embeddings: Once the documents are chunked, each chunk is converted to a vector embedding by an embedding model. The choice of embedding model has a big impact on your RAG system’s effectiveness and is generally coupled with the LLM model you are going to choose. Since I am using ChatGPT-5, I went for OpenAI embedding models with 3072(large) and 1536(small) dimensions. The lab supports OpenAI’s text-embedding-3-large, small (dvdrental db) and open-source alternatives. Check MTEB benchmarks for model selection.
In the lab repository you can generate the embeddings with the following Python script on the Wikipedia database (note: in the example below the SPLADE model is loaded but not used for dense vectors; the script handles both dense and sparse embedding generation, which we will cover in the next blog post):
(.venv) 12:56:43 postgres@PG1:/home/postgres/RAG_lab_demo/lab/embeddings/ [PG17] python generate_embeddings.py --source wikipedia --type dense
2025-09-27 12:57:28,407 - __main__ - INFO - Loading configuration...
2025-09-27 12:57:28,408 - __main__ - INFO - Initializing services...
2025-09-27 12:57:28,412 - lab.core.database - INFO - Database pool initialized with 1-20 connections
2025-09-27 12:57:29,093 - lab.core.embeddings - INFO - Initialized OpenAI embedder with model: text-embedding-3-large
2025-09-27 12:57:33,738 - lab.core.embeddings - INFO - Loading SPLADE model: naver/splade-cocondenser-ensembledistil on device: cpu
Some weights of BertModel were not initialized from the model checkpoint at naver/splade-cocondenser-ensembledistil and are newly initialized: ['pooler.dense.bias', 'pooler.dense.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
2025-09-27 12:57:34,976 - lab.core.embeddings - INFO - SPLADE embedder initialized on cpu
============================================================
EMBEDDING GENERATION JOB SUMMARY
============================================================
Source: wikipedia
Embedding Type: dense
Update Existing: False
============================================================
CURRENT STATUS:
Wikipedia Articles: 25000 total, 25000 with title embeddings
25000 with content embeddings
Proceed with embedding generation? (y/N): y
============================================================
EXECUTING JOB 1/1
Table: articles
Type: dense
Columns: ['title', 'content'] -> ['title_vector_3072', 'content_vector_3072']
============================================================
2025-09-27 12:57:41,367 - lab.embeddings.embedding_manager - INFO - Starting embedding generation job: wikipedia - dense
2025-09-27 12:57:41,389 - lab.embeddings.embedding_manager - WARNING - No items found for embedding generation
Job 1 completed:
Successful: 0
Failed: 0
============================================================
FINAL SUMMARY
============================================================
Total items processed: 0
Successful: 0
Failed: 0
FINAL EMBEDDING STATUS:
============================================================
EMBEDDING GENERATION JOB SUMMARY
============================================================
Source: wikipedia
Embedding Type: dense
Update Existing: False
============================================================
CURRENT STATUS:
Wikipedia Articles: 25000 total, 25000 with title embeddings
25000 with content embeddings
2025-09-27 12:57:41,422 - __main__ - INFO - Embedding generation completed successfully
2025-09-27 12:57:41,422 - lab.core.database - INFO - Database connection pool closed
(.venv) 12:57:42 postgres@PG1:/home/postgres/RAG_lab_demo/lab/embeddings/ [PG17]
After these steps, you will have a vectorized database: each document chunk is represented by a vector, stored in a table (if using a DB like Postgres/pgvector) or in a vector store. Now it’s ready to be queried.
2. Vector Indexing and Search in Postgres (pgvector + DiskANN)
For production-scale RAG, how you index and search your vectors is critical for performance. In a naive setup, you might simply do a brute-force nearest neighbor search over all embeddings – which is fine for a small dataset or testing, but too slow for large collections. Instead, you should use an Approximate Nearest Neighbor (ANN) index to speed up retrieval. The pgvector extension for PostgreSQL allows you to create such indexes in the database itself.
Using pgvector in Postgres: pgvector stores vectors and supports IVFFlat and HNSW for ANN. For larger-than-RAM or cost-sensitive workloads, add the pgvectorscale extension, which introduces a StreamingDiskANN index inspired by Microsoft’s DiskANN, plus compression and filtered search.
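If you start with plain pgvector before adding pgvectorscale, an HNSW index is the usual first choice. This is a sketch rather than the lab’s setup: content_vector_1536 is an assumed 1536-dimension column (e.g. from text-embedding-3-small), since at the time of writing pgvector’s HNSW index only supports vector columns up to 2000 dimensions, which rules out the 3072-dimension column used earlier:

-- plain pgvector alternative: HNSW index with cosine distance
CREATE INDEX idx_articles_content_hnsw
    ON articles
    USING hnsw (content_vector_1536 vector_cosine_ops)
    WITH (m = 16, ef_construction = 64);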
Not all specialized vector databases or vector stores offer such a disk-based, larger-than-RAM index; if your data needs to scale, this is a critical aspect!
For our Naïve RAG example :
CREATE INDEX idx_articles_content_vec
ON articles
USING diskann (content_vector vector_cosine_ops);
Why use Postgres for RAG? For many organizations, using Postgres with pgvector is convenient because it keeps the vectors alongside other relational data and leverages existing operational familiarity (backup, security, etc.). It avoids introducing a separate vector database. Storing vectors in your existing operational DB can eliminate the complexity of syncing data with a separate vector store, while still enabling semantic search (not just keyword search) on that data. With extensions like pgvector (and vectorscale), Postgres can achieve performance close to specialized vector DBs, if not better. Of course, specialized solutions (Pinecone, Weaviate, etc.) are also options – but the pgvector approach is very appealing for DBAs who want everything in one familiar ecosystem.
3. Query Processing and Retrieval
With the data indexed, the runtime query flow of Naïve RAG is straightforward:
- Query embedding: When a user question comes in (for example: “What is WAL in PostgreSQL and why is it important?”), we first transform that query into an embedding vector using the same model we used for documents. This could be a real-time call to an API (if using an external service like OpenAI for embeddings) or a local model inference. Caching can be applied for repeated queries, though user queries are often unique. Ensure the text of the query is cleaned or processed in the same way document text was (e.g., if you did lowercasing, removal of certain stopwords, etc., apply consistently if needed – though modern embedding models typically handle raw text well without special preprocessing).
- Vector similarity search: We then perform the ANN search in the vector index with the query embedding. As shown earlier, this is an ORDER BY vector <=> query_vector LIMIT k type query in SQL (or the equivalent call in your vector DB’s client); a concrete query is sketched right after this list. The result is the top k most similar chunks to the query. Choosing k (the number of chunks) to retrieve is another design parameter: common values are in the range 3–10. You want enough pieces of context to cover the answer, but not so many that you overwhelm the LLM or introduce irrelevant noise. A typical default is k=5. In the example lab workflow, they use a top_k of 5 by default. If you retrieve too few, you might miss part of the answer; too many, and the prompt to the LLM becomes long and could confuse it with extraneous info.
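As an illustration, against the lab’s Wikipedia table the retrieval query could look like the sketch below; $1 stands for the query embedding passed in by the application and k is set to 5:

SELECT title,
       content,
       content_vector_3072 <=> $1 AS distance  -- cosine distance, smaller means more similar
FROM articles
ORDER BY content_vector_3072 <=> $1
LIMIT 5;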
The outcome of the retrieval step is a set of top-k text chunks (contexts) that hopefully contain the information needed to answer the user’s question.
4. LLM Answer Generation
Finally, the retrieved chunks are fed into the prompt of a large language model to generate the answer. This step is often implemented with a prompt template such as:
“Use the following context to answer the question. If the context does not have the answer, say you don’t know.
Context:
[Chunk 1 text]
[Chunk 2 text]
…
Question: [User’s query]
Answer:”
The LLM (which could be GPT-5, or an open-source model depending on your choice) will then produce an answer, hopefully drawing facts from the provided context rather than hallucinating. Naïve RAG doesn’t include complex prompt strategies or multiple prompt stages; it’s usually a single prompt that includes all top chunks at once (this is often called the “stuffing” approach – stuffing the context into the prompt). This is simple and works well when the amount of context is within the model’s input limit.
Best practices for the generation step:
- Order and formatting of contexts: Usually, chunks can be simply concatenated. It can help to separate them with headings or bullet points, or any delimiter that clearly resets context. Some frameworks sort retrieved chunks by similarity score (highest first) under the assumption that the first few are most relevant – this makes sense so that if the prompt gets truncated or the model gives more weight to earlier context (which can happen), the best info is first.
- Avoid exceeding token limits: If each chunk is, say, ~100 tokens and you include 5 chunks, that’s ~500 tokens of context plus the prompt overhead and question. This should fit in most LLMs with 4k+ token contexts. But if your chunks or k are larger, be mindful not to exceed the model’s max token limit for input. If needed, reduce k or chunk size, or consider splitting the question into sub-queries (advanced strategy) to handle very broad asks.
- Prompt instructions: In naive usage, you rely on the model to use the context well. It’s important to instruct the model clearly to only use the provided context for answering, and to indicate if the context doesn’t have the answer. This mitigates hallucination. For example, telling it explicitly “If you don’t find the answer in the context, respond that you are unsure or that it’s not in the provided data.” This way, if retrieval ever fails (e.g., our top-k didn’t actually contain the needed info), the model won’t fabricate an answer. It will either abstain or say “I don’t know.” Depending on your application, you might handle that case by maybe increasing k or using a fallback search method.
- Citing sources: A nice practice, especially for production QA systems, is to have the LLM output the source of the information (like document titles or IDs). Since your retrieval returns chunk metadata, you can either have the model include them in the answer or attach them after the fact. This builds trust with users and helps for debugging. For instance, the lab workflow tracks the titles of retrieved articles and could enable showing which Wikipedia article an answer came from. In a naive setup, you might just append a list of sources (“Source: [Title of Article]”) to the answer.
With that, the Naïve RAG pipeline is complete: the user’s query is answered by the LLM using real data fetched from your database. Despite its simplicity, this approach can already dramatically improve the factual accuracy of answers and allow your system to handle queries about information that the base LLM was never trained on (for example, very recent or niche knowledge).
In our LAB setup on n8n the workflow without the chunking and embedding generation looks like that :
Note that we use

In the Streamlit interface also provided in the repo, we have the following :

Implementing the pipeline is only part of the story. In a production setting, we need to monitor the system’s performance and gather feedback to continuously improve it.
The comparison workflow allows side-by-side testing of:
- Different embedding models
- Chunk sizes and strategies
- Retrieval parameters (top-k values)
- LLM prompting approaches
In summary, treat your RAG system as an evolving product: monitor retrieval relevance, answer accuracy (groundedness), and system performance. Use a combination of automated metrics and human review to ensure quality. Tools like LangSmith can provide infrastructure for logging queries and scoring outputs on metrics like faithfulness or relevance, flagging issues like "bad retrievals" or hallucinated responses. By keeping an eye on these aspects, you can iterate on and improve your Naïve RAG system continuously, making it more robust and trustworthy. Although LangSmith is very useful, be careful with the pitfalls that inevitably come along with any abstraction. A good rule of thumb is to keep your core logic in custom code while leveraging LangSmith tooling for the peripherals.
Conclusion and Next Steps
Naïve RAG provides the basic blueprint of how to augment LLMs with external knowledge using semantic search. We discussed how to implement it using PostgreSQL with pgvector, covering best practices in chunking your data, selecting suitable embeddings, indexing with advanced methods like DiskANN for speed, and ensuring that you monitor the system's effectiveness in production. This straightforward dense retrieval approach is often the first step toward building a production-grade QA system. It's relatively easy to set up and already yields substantial gains in answer accuracy and currency of information.
However, as powerful as Naïve RAG is, it has its limitations. Pure dense vector similarity can sometimes miss exact matches (like precise figures or rare terms) that a keyword search would catch, and it might bring in semantically relevant but not factually useful context in some cases. In the upcoming posts of this series, we’ll explore more advanced RAG techniques that address these issues:
- Hybrid RAG: combining dense vectors with sparse (lexical) search to get the best of both worlds – we'll see how a hybrid approach can improve recall and precision by weighting semantic and keyword signals.
- Adaptive RAG: introducing intelligent query classification and dynamic retrieval strategies – for example, automatically detecting when to favor lexical vs. semantic, or how to route certain queries to specialized retrievers.
- and other more trendy RAG types like Self RAG or Agentic RAG….
As you experiment with the provided lab or your own data, remember the core best practices: ensure your retrieval is solid (that often solves most problems), and always ground the LLM's output in real, retrieved data. The repository will introduce other RAG and lab examples over time; the hybrid and adaptive workflows are already built in, though.
With all the fuss around AI and LLMs, the field can look chaotic from the outside, lacking stability and maturity. There is a good chance, however, that the fundamental RAG components will still be around in the coming years, if not the coming decade: they have already lasted two years and proved useful. We may simply see these components absorbed and integrated into larger systems, especially ones providing standardized monitoring and evaluation, which remains the big open subject. We can't claim that today's RAG patterns are mature, but we know for sure they are fundamental to what is coming next.
Stay tuned for the next part, where we dive into Hybrid RAG and demonstrate how combining search strategies can boost performance beyond what naive semantic search can do alone.
The article RAG Series – Naive RAG first appeared on dbi Blog.
M-Files with Microsoft Co-Authoring
Following the announcement of the partnership between M-Files and Microsoft, they have confirmed the availability of native Microsoft Co-Authoring within the Microsoft 365 ecosystem.
What does “native” mean, and what differentiates it from other ECMs?
In this context, “native” means that documents stored in M-Files can be edited using the native capabilities of Microsoft 365, such as Co-Authoring. It is not necessary to move or copy the data outside of M-Files in order to use the Co-Authoring capabilities of Microsoft 365.
As we know, these features only work when the data is stored within the Microsoft 365 tenant of the users collaborating on a document.
This new experience is only possible because the documents are stored in the same back-end storage system. Currently, Microsoft has made this possible especially for M-Files, as the two companies work closely together on the development of the M-Files Sharepoint Embedded application. No other content management vendor has such a deep partnership and integration with Sharepoint Embedded.
Benefits and Differentiator
- No need to move the documents outside of M-Files
- The security is always managed by M-Files
- The desktop application of the Microsoft Office can be used
- True Collaboration Anywhere, it doesn’t matter from which application the co-authoring is started (Microsoft Office Suite or M-Files Vault)
- No More Version Chaos
- Users stay in the application they know and benefit from M-Files features
It is called native because the data is stored in the Microsoft 365 storage of the client tenant, just like its other native Microsoft 365 documents.
How is this possible ?
M-Files developed a Microsoft Sharepoint Embedded (SPE) application to store the M-Files data in an SPE storage container. This makes Microsoft 365 capabilities such as Co-Authoring available. In addition, such an SPE container respects the security boundaries defined by an M-Files Vault.
The diagram below illustrates the high-level architecture. The key point to take away from this is the importance of understanding the change in data storage location.
At present, M-Files stores data in Azure Blob Storage, as illustrated in the orange box of the diagram. Enabling the co-authoring function changes this.
Please note that data for Microsoft 365 will be stored in a SPE container in the Microsoft 365 Tenant, as shown in the blue box in the diagram. All other data is stored in a new Blob Storage area, as shown in the green box.
The Master Storage, displayed centrally, is responsible for managing the allocation of data to its designated storage locations.
It is my hope that this will provide a clearer understanding of the new storage management approach developed by M-Files. Should you require further information, I would be happy to provide it in a separate blog post. Please do not hesitate to leave your comments below.

To enable the function, the following prerequisites must be taken into consideration.
- M-Files Cloud is required
- M-Files Sharepoint Embedded requires admin consent in the Microsoft 365 Tenant
- At least a M-Files Business user license is required
- The M-Files Enterprise Application must be configured in the Microsoft Admin Center and the users in scope must be added
- In M-Files Manage the user provisioning configuration for the M-Files Enterprise Application must be configured
The process to enable it is very straightforward: only the two steps below are required, and these are clearly documented by M-Files.
- Grant tenant-wide admin consent to the M-Files SharePoint Embedded application in Microsoft 365.
- Enable desktop co-authoring in the M-Files Admin tool vault properties
It is important to understand that all data will move to the new storage and this cannot be reverted!
Co-Authoring in action
I am delighted to share that a demo environment was created to showcase the capability in a recorded video. Please find below a high-level description of this exciting project:
- M-Files Cloud Vault
- M-Files Desktop Client (new version)
- M-Files Web Client
- Microsoft 365 Tenant with two demo users
- Microsoft Word application with logged in demo users
By leveraging the capabilities of both Microsoft and M-Files, businesses can optimize the use of their data while ensuring its secure storage within the designated domain. The use of the Sharepoint Embedded application ensures that data is stored within the client-owned Tenant, with Microsoft’s trusted security and authentication functions providing robust protection. This marks the start of a collaborative effort to leverage the native Microsoft functionalities in conjunction with M-Files, integrating the capabilities of Microsoft Sharepoint Embedded.
We are able to assist your business in advancing to the next level of digitalisation. Please do not hesitate to get in contact with us for further details.
The article M-Files with Microsoft Co-Authoring first appeared on dbi Blog.
Errorhandling in Ansible
By default, if a task in a playbook fails, the execution of the playbook is stopped for that host.
- name: PLAY1
hosts: localhost
gather_facts: no
tasks:
- name: let this shell-command fail
ansible.builtin.shell: exit 1
- name: let this shell-command complete
ansible.builtin.shell: exit 0

As you can see, the 2nd task is not executed. If you want to continue in such a case, the ignore_errors parameter is your friend:
- name: let this shell-command fail
ansible.builtin.shell: exit 1
ignore_errors: true

By default, Ansible evaluates the exit code of the module; in the case of the shell module, this is the exit code of the last command.
But for some commands, that is not adequate. Example: the Oracle command-line tool sqlplus, used to submit SQL commands, will have an exit code of 0 if it can connect to the database instance, regardless of the result of your SQL commands. Error messages in Oracle are prefixed by ORA-.
So, if you want to check for application errors, you have to implement the check yourself. For that, you can use the failed_when option.
- name: let this shell-command fail
ansible.builtin.shell: |
. ~/.profile
echo "select count(*) from all_users;" | sqlplus / as sysdba
register: number_of_users
failed_when: "'ORA-' in number_of_users.stdout"
Caution: In this case the exit-code of the shell is no longer evaluated. To also get the exit-code of the sqlplus call (e.g., sqlplus can not connect to the database, or sqlplus binary not found), you have to add this (default) condition:
failed_when: "number_of_users.rc != 0 or 'ORA-' in number_of_users.stdout"
But caution! Not all modules will have an rc field.
Conceptually, Ansible is not the fastest tool. For each task, it will usually log in to the remote server with ssh. If you have to run several checks in the shell, then, instead of running each in a separate task, you can run all these check commands in one shell task and evaluate the result afterwards.
- name: run many check commands
ansible.builtin.shell: |
mount | grep ' on /u00 ' #check if /u00 is mounted
rpm -q ksh 2>&1 #check if ksh is installed
exit 0 # do not fail if rpm exit != 0; we will do our own errorhandling
register: checks
- name: fail if no /u00 is mounted
fail:
msg: "/u00 is not mounted"
when: "' on /u00 ' not in checks.stdout"
- name: No ksh found, try to install it
yum:
name: ksh
state: present
when: "'package ksh is not installed' in checks.stdout"
If you only want to throw an error, you can combine the conditions in a single task:
when: "' on /u00 ' not in checks.stdout or 'package ksh is not installed' in checks.stdout"
But if you parse the output afterwards, you can run tasks to fix the error.
Sometimes it is difficult to parse the output if some commands return the same output, e.g. “OK”.
If your check commands always return exactly one line, then you can directly parse the output by position: the output of the 3rd command is in checks.stdout_lines[2].
In the above example that will not work, because grep prints either nothing (pattern not found) or the matching line (pattern found), so the line numbering shifts. So, expand it as: mount | grep ' on /u00 ' || echo error
Do not fail the shell task itself: its error output is usually unreadable because all information is printed on one line.

Instead, use ignore_errors: true and failed_when: false in the task. Do the error handling in a separate task with a customized error message. To print the multiline list of stdout_lines, use debug: as in the example below; otherwise you can directly use ansible.builtin.fail: with a customized message:
- name: force sqlplus to throw error because of missing environment
ansible.builtin.shell: |
/u00/app/oracle/product/19.7.0/bin/sqlplus -L / as sysdba @myscript.sql 2>&1
register: checks
ignore_errors: true
failed_when: false
- name: Check for errors of task before
debug: var=checks.stdout_lines
failed_when: checks.rc != 0 or 'ORA-' in checks.stdout
when: checks.rc != 0 or 'ORA-' in checks.stdout

As an example for this scenario: A customer of mine will do an offline backup of the development databases. And I have to run an Ansible playbook against all databases. If the playbook for this host is run at the backup time, it will fail because the database is down. But after some minutes the database will be restarted.
What can we do now?
Option 1: wait until the database is up again. That is possible, see the example of ansible.builtin.wait_for in my blog post Parallel execution of Ansible roles. But for this scenario it is a waste of time: the database might be stopped for another reason (not for the backup) and then will not be restarted within the next few minutes.
Option 2: try again later. My playbook for all hosts (parallel, forks=5) takes about 1 hour. The idea is to remember the hosts with a stopped database and to continue with the next host. After the play has finished for all hosts, restart the play for the remembered hosts.
- The 1st play, running against all database hosts:
  - gets the status of the databases on the host
  - assigns the database instances to an is_running and a not_open list
  - includes the role to run against the running databases
  - dynamically adds the host to the group re_run_if_not_open if there are not_open databases
- The next play only runs for the re_run_if_not_open group:
  - includes the role to run against the (hopefully now running) databases
  - if the database is then still down, we assume it is stopped permanently.
The article Errorhandling in Ansible first appeared on dbi Blog.
AWS Summit Zurich 2025
Last Thursday I could go to the AWS Summit Zurich 2025 edition. So it’s time to do a little recap of what I could see and what was presented during the event.

Amid the infinite sessions about AI, I could also find some on other topics. I started my journey with a session on security best practices.

Then came the keynote, where I could see a demo of an automated delivery robot shaped like a dog that can climb stairs. During the keynote, it was announced that Oracle Exadata would be available in the Zurich region starting 2026.
Among the other sessions I attended, I can spotlight one on the IAM service, where the speakers deep-dived into how to manage the service more accurately.
I also attended a session addressing the sovereignty topic, where the impact of the CLOUD Act and the data protection subject were explained.
I also went to a network-focused presentation; the speaker showed us the new features of AWS Cloud WAN and how it works as a backbone network for hybrid solutions. The biggest news was that we can now connect AWS Direct Connect connections directly to AWS Cloud WAN without the need for a transit gateway.

They also introduced us to VPC Lattice, a fully managed application networking service that you can use to connect, secure, and monitor the services and resources of your application.

Finally, I followed a session on cloud cost optimization, where we were shown some dashboards that we can use to optimize our cloud costs.

That’s it for this year. Yet another summit where I’m able to learn new stuff. I hope I can go again next year and still learn new skills.

The article AWS Summit Zurich 2025 first appeared on dbi Blog.
PostgreSQL CDC to JDBC Sink – minimal event driven architecture
“Make it simple.”
When you have to create a solution or design an architecture, you should always take the simplest path: field experience with patterns that did not scale shows it is usually the better solution. This also means that you should only add the complexity that is actually necessary and required.
So, following this principle, I looked for the simplest event-driven design I could find.
And here it is :
PG Source → Flink CDC → JDBC Sink → PG Data Mart

What each piece does :
- PostgreSQL (source)
Emits a change stream from the WAL using logical replication (wal_level=logical).
A publication defines which tables are replicated; a replication slot (one active reader per slot) holds WAL until the consumer confirms it (a minimal SQL sketch of these objects is shown right after this list).
- Flink CDC source (Postgres)
A Flink table that reads from the publication/slot and turns WAL into a changelog stream (insert/update/delete).
Key options you're using:
scan.incremental.snapshot.enabled=true – non-blocking initial load
slot.name=... – binds the job to a specific slot
primary key in the table schema – lets downstream sinks do upserts
- Flink runtime
Runs a streaming job that:
- Initial incremental snapshot: splits the table, bulk reads current rows, remembers an LSN.
- Catch-up + stream: replays WAL from that LSN and then tails new changes.
- Checkpoints: the slot LSN + operator state are stored on your S3/MinIO path, so restarts resume exactly from the last acknowledged point.
- JDBC sink (Postgres data mart)
Another Flink table. With a PRIMARY KEY defined, the connector performs UPSERT/DELETE semantics (e.g., INSERT ... ON CONFLICT DO UPDATE in Postgres).
It writes in batches, flushes on checkpoints, and retries on transient errors.
- PostgreSQL (data mart)
Receives the normalized upsert/delete stream and ends up with a 1:1 “current state” of the source tables (ideal for BI).
- Near-real-time replication with transforms: you can filter, project, cleanse, deduplicate, join reference data, and even aggregate in Flink SQL before hitting the data mart—something native logical replication can’t do.
- Upserts keep the mart tidy: the JDBC sink writes the current state keyed by your PKs (perfect for reporting).
- Resilient: checkpoints + WAL offsets → automatic catch-up after failures/restarts.
- DB-friendly: WAL-based CDC has low OLTP impact compared to heavy ETL pulls.
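For reference, here is roughly how the publication and the replication slots mentioned above could be prepared manually on the source database. This is only a sketch: the Flink CDC connector can also create these objects itself, and the publication name flink_pub is just an example.
-- Run on the source database (logistics_src); illustrative sketch only
-- Publication restricted to the tables we want to replicate:
CREATE PUBLICATION flink_pub FOR TABLE logistics.orders, logistics.order_items;
-- One logical replication slot per independent reader, using the pgoutput plugin:
SELECT pg_create_logical_replication_slot('flink_orders_slot', 'pgoutput');
SELECT pg_create_logical_replication_slot('flink_order_items_slot', 'pgoutput');
-- Check the slots and the WAL position the consumers have confirmed:
SELECT slot_name, active, confirmed_flush_lsn FROM pg_replication_slots;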
Architecture of the LAB :
server1 (SRC) → PostgreSQL 17.6 (source) + Flink (jobs run here) : IP 172.19.0.4
server2 (SINKS) → PostgreSQL 17.6 (data mart) : IP 172.20.0.4
First we need data to transfer. Here is a sample database that you can create yourself; we also set up the PostgreSQL instance:
# Execute on server1 only
sudo -u postgres psql -c "ALTER SYSTEM SET wal_level='logical';"
# Increase the number of concurrent replication connections allowed.
sudo -u postgres psql -c "ALTER SYSTEM SET max_wal_senders=10;"
# Increase the number of replication slots the server can support.
sudo -u postgres psql -c "ALTER SYSTEM SET max_replication_slots=10;"
# On both servers, modify the Host-Based Authentication file to allow connections from each others.
echo "host all all 127.0.0.1/32 trust" | sudo tee -a /etc/postgresql/17/main/pg_hba.conf
echo "host all all 172.19.0.4/32 scram-sha-256" | sudo tee -a /etc/postgresql/17/main/pg_hba.conf
echo "host all all 172.20.0.4/32 scram-sha-256" | sudo tee -a /etc/postgresql/17/main/pg_hba.conf
# Restart the PostgreSQL service to apply all configuration changes.
sudo systemctl restart postgresql
sudo -u postgres createdb logistics_src
sudo su - postgres
# Execute a multi-statement SQL block to define and seed the schema.
psql -U postgres -d logistics_src <<'SQL'
CREATE SCHEMA logistics;
CREATE TABLE logistics.customers (
customer_id bigserial PRIMARY KEY,
name text NOT NULL,
city text,
email text UNIQUE
);
CREATE TABLE logistics.products (
product_id bigserial PRIMARY KEY,
sku text UNIQUE NOT NULL,
name text NOT NULL,
list_price numeric(12,2) NOT NULL
);
CREATE TABLE logistics.orders (
order_id bigserial PRIMARY KEY,
customer_id bigint NOT NULL REFERENCES logistics.customers(customer_id),
status text NOT NULL DEFAULT 'NEW',
order_ts timestamptz NOT NULL DEFAULT now()
);
CREATE TABLE logistics.order_items (
order_id bigint NOT NULL REFERENCES logistics.orders(order_id),
product_id bigint NOT NULL REFERENCES logistics.products(product_id),
qty int NOT NULL,
unit_price numeric(12,2) NOT NULL,
PRIMARY KEY(order_id, product_id)
);
CREATE TABLE logistics.inventory (
product_id bigint PRIMARY KEY REFERENCES logistics.products(product_id),
on_hand int NOT NULL DEFAULT 0
);
CREATE TABLE logistics.shipments (
shipment_id bigserial PRIMARY KEY,
order_id bigint NOT NULL REFERENCES logistics.orders(order_id),
carrier text,
shipped_at timestamptz,
status text
);
-- Seed initial data
INSERT INTO logistics.customers(name,city,email)
SELECT 'Customer '||g, 'City '|| (g%10), 'c'||g||'@example.com'
FROM generate_series(1,200) g;
INSERT INTO logistics.products(sku,name,list_price)
SELECT 'SKU-'||g, 'Product '||g, (random()*90+10)::numeric(12,2)
FROM generate_series(1,500) g;
INSERT INTO logistics.inventory(product_id,on_hand)
SELECT product_id, (random()*100)::int
FROM logistics.products;
-- Create 100 orders
WITH o AS (
INSERT INTO logistics.orders(customer_id,status)
SELECT (floor(random()*200)+1)::int, 'NEW' -- customers 1..200
FROM generate_series(1,100)
RETURNING order_id
)
-- For each order, choose 2 distinct products and insert items
INSERT INTO logistics.order_items(order_id, product_id, qty, unit_price)
SELECT o.order_id,
p.product_id,
(floor(random()*5)+1)::int AS qty, -- qty 1..5
p.list_price
FROM o
CROSS JOIN LATERAL (
SELECT pr.product_id, pr.list_price
FROM logistics.products pr
ORDER BY random()
LIMIT 2
) AS p;
SQL
psql -U postgres -d logistics_src -c "ALTER ROLE postgres IN DATABASE logistics_src
SET search_path = logistics, public;"
postgres@LAB-CDC-SRC:~$ psql -U postgres -d logistics_src -c "ALTER ROLE postgres IN DATABASE logistics_src
SET search_path = logistics, public;"
ALTER ROLE
postgres@LAB-CDC-SRC:~$ psql
psql (17.6 (Ubuntu 17.6-1.pgdg24.04+1))
Type "help" for help.
postgres=# \l
List of databases
Name | Owner | Encoding | Locale Provider | Collate | Ctype | Locale | ICU Rules | Access privileges
---------------+----------+----------+-----------------+---------+---------+--------+-----------+-----------------------
logistics_src | postgres | UTF8 | libc | C.UTF-8 | C.UTF-8 | | |
postgres | postgres | UTF8 | libc | C.UTF-8 | C.UTF-8 | | |
template0 | postgres | UTF8 | libc | C.UTF-8 | C.UTF-8 | | | =c/postgres +
| | | | | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | libc | C.UTF-8 | C.UTF-8 | | | =c/postgres +
| | | | | | | | postgres=CTc/postgres
(4 rows)
postgres=# \c logistics_src
You are now connected to database "logistics_src" as user "postgres".
logistics_src=# \dt
List of relations
Schema | Name | Type | Owner
-----------+-------------+-------+----------
logistics | customers | table | postgres
logistics | inventory | table | postgres
logistics | order_items | table | postgres
logistics | orders | table | postgres
logistics | products | table | postgres
logistics | shipments | table | postgres
(6 rows)
logistics_src=# SELECT count(*) FROM logistics.orders;
SELECT count(*) FROM logistics.order_items;
count
-------
700
(1 row)
count
-------
1100
(1 row)
## Here I created a slot per table on my source
postgres=# SELECT slot_name, active, confirmed_flush_lsn FROM pg_replication_slots;
SELECT pubname FROM pg_publication;
slot_name | active | confirmed_flush_lsn
------------------------+--------+---------------------
flink_order_items_slot | f | 0/20368F0
flink_orders_slot | f | 0/20368F0
(2 rows)
pubname
---------
(0 rows)
postgres=#
On my target data mart I created the empty structure :
logistics_dm=#
SELECT count(*) FROM datamart.orders;
SELECT count(*) FROM datamart.order_items;
count
-------
0
(1 row)
count
-------
0
(1 row)
logistics_dm=#
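The exact DDL of the data mart is not shown here; a minimal sketch of such an empty structure, assuming it simply mirrors the columns of the Flink sink tables (the real data mart may carry additional columns), could look like this:
-- Run on the target instance, in the logistics_dm database (a sketch, not the exact DDL used)
CREATE SCHEMA datamart;
CREATE TABLE datamart.orders (
  order_id bigint PRIMARY KEY,  -- the primary key is what allows the JDBC sink to upsert
  customer_id bigint NOT NULL,
  status text NOT NULL,
  order_ts timestamptz NOT NULL
);
CREATE TABLE datamart.order_items (
  order_id bigint NOT NULL,
  product_id bigint NOT NULL,
  qty int NOT NULL,
  unit_price numeric(12,2) NOT NULL,
  PRIMARY KEY (order_id, product_id)
);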
Then we can start Flink :
adrien@LAB-CDC-SRC:~/flink-1.20.2$ ./bin/start-cluster.sh
Starting cluster.
Starting standalonesession daemon on host LAB-CDC-SRC.
Starting taskexecutor daemon on host LAB-CDC-SRC.
Verify that your Flink UI is up by checking the URL http://127.0.0.1:8081

In my case, I created a custom config file that also handles some other aspects of my LAB, like the Hudi and S3 parts, which you can skip. The important points in the config are the task manager, memory and scheduler settings:
adrien@LAB-CDC-SRC:~/flink-1.20.2$ cat conf/flink-conf.yaml
jobmanager.rpc.address: localhost
# Web/API
rest.address: 0.0.0.0
# Memory (required by 1.20 to be set explicitly)
jobmanager.memory.process.size: 1200m
taskmanager.memory.process.size: 1600m
taskmanager.numberOfTaskSlots: 8
# Allow multiple small jobs at once
parallelism.default: 1 # so each job can start with 1 slot by default
jobmanager.scheduler: adaptive
# Checkpoint/savepoint locations (use MinIO so they survive restarts)
state.checkpoints.dir: s3a://flink/checkpoints
state.savepoints.dir: s3a://flink/savepoints
execution.checkpointing.interval: 10 s
execution.checkpointing.mode: EXACTLY_ONCE
execution.checkpointing.externalized-checkpoint-retention: RETAIN_ON_CANCELLATION
# Optional resilience
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 10
restart-strategy.fixed-delay.delay: 5 s
# Prefer parent loader (helps with some connector deps)
classloader.resolve-order: parent-first
classloader.check-leaked-classloader: false
# S3A to MinIO (Hadoop FS)
s3.endpoint: http://172.20.0.4:9000
s3.path.style.access: true
s3.access.key: admin
s3.secret.key: adminadmin
s3.connection.ssl.enabled: false
#s3.impl: org.apache.hadoop.fs.s3a.S3AFileSystem
#s3.aws.credentials.provider: org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
# optional, handy for labs
execution.checkpointing.interval: 10 s
env.java.opts: --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED -Djdk.attach.allowAttachSelf=true
adrien@LAB-CDC-SRC:~/flink-1.20.2$
Then we start the SQL client:
adrien@LAB-CDC-SRC:~/flink-1.20.2$ ./bin/sql-client.sh
▒▓██▓██▒
▓████▒▒█▓▒▓███▓▒
▓███▓░░ ▒▒▒▓██▒ ▒
░██▒ ▒▒▓▓█▓▓▒░ ▒████
██▒ ░▒▓███▒ ▒█▒█▒
░▓█ ███ ▓░▒██
▓█ ▒▒▒▒▒▓██▓░▒░▓▓█
█░ █ ▒▒░ ███▓▓█ ▒█▒▒▒
████░ ▒▓█▓ ██▒▒▒ ▓███▒
░▒█▓▓██ ▓█▒ ▓█▒▓██▓ ░█░
▓░▒▓████▒ ██ ▒█ █▓░▒█▒░▒█▒
███▓░██▓ ▓█ █ █▓ ▒▓█▓▓█▒
░██▓ ░█░ █ █▒ ▒█████▓▒ ██▓░▒
███░ ░ █░ ▓ ░█ █████▒░░ ░█░▓ ▓░
██▓█ ▒▒▓▒ ▓███████▓░ ▒█▒ ▒▓ ▓██▓
▒██▓ ▓█ █▓█ ░▒█████▓▓▒░ ██▒▒ █ ▒ ▓█▒
▓█▓ ▓█ ██▓ ░▓▓▓▓▓▓▓▒ ▒██▓ ░█▒
▓█ █ ▓███▓▒░ ░▓▓▓███▓ ░▒░ ▓█
██▓ ██▒ ░▒▓▓███▓▓▓▓▓██████▓▒ ▓███ █
▓███▒ ███ ░▓▓▒░░ ░▓████▓░ ░▒▓▒ █▓
█▓▒▒▓▓██ ░▒▒░░░▒▒▒▒▓██▓░ █▓
██ ▓░▒█ ▓▓▓▓▒░░ ▒█▓ ▒▓▓██▓ ▓▒ ▒▒▓
▓█▓ ▓▒█ █▓░ ░▒▓▓██▒ ░▓█▒ ▒▒▒░▒▒▓█████▒
██░ ▓█▒█▒ ▒▓▓▒ ▓█ █░ ░░░░ ░█▒
▓█ ▒█▓ ░ █░ ▒█ █▓
█▓ ██ █░ ▓▓ ▒█▓▓▓▒█░
█▓ ░▓██░ ▓▒ ▓█▓▒░░░▒▓█░ ▒█
██ ▓█▓░ ▒ ░▒█▒██▒ ▓▓
▓█▒ ▒█▓▒░ ▒▒ █▒█▓▒▒░░▒██
░██▒ ▒▓▓▒ ▓██▓▒█▒ ░▓▓▓▓▒█▓
░▓██▒ ▓░ ▒█▓█ ░░▒▒▒
▒▓▓▓▓▓▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒░░▓▓ ▓░▒█░
______ _ _ _ _____ ____ _ _____ _ _ _ BETA
| ____| (_) | | / ____|/ __ \| | / ____| (_) | |
| |__ | |_ _ __ | | __ | (___ | | | | | | | | |_ ___ _ __ | |_
| __| | | | '_ \| |/ / \___ \| | | | | | | | | |/ _ \ '_ \| __|
| | | | | | | | < ____) | |__| | |____ | |____| | | __/ | | | |_
|_| |_|_|_| |_|_|\_\ |_____/ \___\_\______| \_____|_|_|\___|_| |_|\__|
Welcome! Enter 'HELP;' to list all available commands. 'QUIT;' to exit.
Command history file path: /home/adrien/.flink-sql-history
-- Set the checkpointing interval for the SQL client session.
-- While configured globally in flink-conf.yaml, setting it here ensures it applies.
SET 'execution.checkpointing.interval' = '10s';
-- =============================================================================
-- DEFINE CDC SOURCE TABLES
-- =============================================================================
-- This table definition maps to the 'logistics.orders' table in PostgreSQL.
-- The 'postgres-cdc' connector is used to stream changes.
SET 'execution.checkpointing.interval' = '10s';
CREATE TABLE src_orders (
order_id BIGINT,
customer_id BIGINT,
status STRING,
order_ts TIMESTAMP(3),
PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
'connector' = 'postgres-cdc',
'hostname' = '172.19.0.4',
'port' = '5432',
'username' = 'postgres',
'password' = 'your_postgres_password',
'database-name' = 'logistics_src',
'schema-name' = 'logistics',
'table-name' = 'orders',
'slot.name' = 'flink_orders_slot',
'decoding.plugin.name' = 'pgoutput',
'scan.incremental.snapshot.enabled' = 'true'
);
CREATE TABLE src_order_items (
order_id BIGINT,
product_id BIGINT,
qty INT,
unit_price DECIMAL(12,2),
PRIMARY KEY (order_id, product_id) NOT ENFORCED
) WITH (
'connector' = 'postgres-cdc',
'hostname' = '172.19.0.4',
'port' = '5432',
'username' = 'postgres',
'password' = 'your_postgres_password',
'database-name' = 'logistics_src',
'schema-name' = 'logistics',
'table-name' = 'order_items',
'slot.name' = 'flink_order_items_slot',
'decoding.plugin.name' = 'pgoutput',
'scan.incremental.snapshot.enabled' = 'true'
);
CREATE TABLE dm_orders (
order_id BIGINT,
customer_id BIGINT,
status STRING,
order_ts TIMESTAMP(3),
PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
'connector' = 'jdbc',
'url' = 'jdbc:postgresql://172.20.0.4:5432/logistics_dm',
'table-name' = 'datamart.orders',
'username' = 'postgres',
'password' = 'your_postgres_password',
'driver' = 'org.postgresql.Driver'
);
CREATE TABLE dm_order_items (
order_id BIGINT,
product_id BIGINT,
qty INT,
unit_price DECIMAL(12,2),
PRIMARY KEY (order_id, product_id) NOT ENFORCED
) WITH (
'connector' = 'jdbc',
'url' = 'jdbc:postgresql://172.20.0.4:5432/logistics_dm',
'table-name' = 'datamart.order_items',
'username' = 'postgres',
'password' = 'your_postgres_password',
'driver' = 'org.postgresql.Driver'
);
INSERT INTO dm_orders SELECT * FROM src_orders;
INSERT INTO dm_order_items SELECT * FROM src_order_items;
So here we just declared the source tables and the data mart tables; note the connector type.
Once the tables are declared in Flink, you can start the pipeline with the INSERT INTO statements.
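Because the sink tables declare a PRIMARY KEY, the JDBC connector applies the changelog with upsert/delete semantics. Roughly speaking, for dm_orders it ends up executing statements along these lines against the data mart (an illustrative sketch, not the connector's literal SQL; values taken from the sample data):
-- an insert or update on a source row becomes an upsert on the data mart
INSERT INTO datamart.orders (order_id, customer_id, status, order_ts)
VALUES (310, 163, 'NEW', '2025-09-09 14:36:47')
ON CONFLICT (order_id) DO UPDATE
SET customer_id = EXCLUDED.customer_id,
    status = EXCLUDED.status,
    order_ts = EXCLUDED.order_ts;
-- a delete on a source row becomes a delete by primary key
DELETE FROM datamart.orders WHERE order_id = 310;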
At the end you should get something like this in sql-client.sh:
Flink SQL> [INFO] Submitting SQL update statement to the cluster...
[INFO] SQL update statement has been successfully submitted to the cluster:
Job ID: 1e776162f4e23447ed5a9546ff464a61
Flink SQL> [INFO] Submitting SQL update statement to the cluster...
[INFO] SQL update statement has been successfully submitted to the cluster:
Job ID: 14ea0afbf7e2fda44efc968c189ee480
And in Flink :


Then verify that on the target data mart the tables are now populated :
postgres=# \c logistics_dm
You are now connected to database "logistics_dm" as user "postgres".
logistics_dm=#
SELECT count(*) FROM datamart.orders;
SELECT count(*) FROM datamart.order_items;
count
-------
700
(1 row)
count
-------
1100
(1 row)
logistics_dm=# select * from order_items limit 5;
order_id | product_id | qty | unit_price | discount
----------+------------+-----+------------+----------
224 | 223 | 4 | 15.44 | 0.00
96 | 94 | 1 | 53.91 | 0.00
689 | 143 | 3 | 70.71 | 0.00
290 | 223 | 3 | 15.44 | 0.00
93 | 191 | 2 | 95.85 | 0.00
(5 rows)
logistics_dm=# select * from orders limit 5;
order_id | customer_id | status | order_ts | channel
----------+-------------+--------+-------------------------------+---------
310 | 163 | NEW | 2025-09-09 14:36:47.247234+00 |
305 | 69 | NEW | 2025-09-09 14:36:47.247234+00 |
304 | 14 | NEW | 2025-09-09 14:36:47.247234+00 |
303 | 122 | NEW | 2025-09-09 14:36:47.247234+00 |
302 | 9 | NEW | 2025-09-09 14:36:47.247234+00 |
(5 rows)
And that's it: we have set up the minimal event-driven architecture.
End-to-end flow (what happens when you “INSERT INTO dm_* SELECT * FROM src_*”)
- Flink CDC attaches to the slot and (if configured) creates/uses the publication.
- It takes a consistent snapshot of
orders
/order_items
, while buffering concurrent WAL changes. - Once snapshot is done, it emits the buffered changes and continues streaming new WAL records.
- The JDBC sink receives a relational changelog:
INSERT
→INSERT … ON CONFLICT DO UPDATE
(upsert)
UPDATE
→ treated as upsert
DELETE
→DELETE … WHERE pk = ?
- Checkpoints coordinate the CDC source offsets with the sink flush so recovery doesn’t duplicate effects. With upserts, the pipeline is effectively-once even if a retry happens.
- One slot per independent reader (or a single job fan-out via Statement Set).
- Checkpointing (you’ve set it):
execution.checkpointing.interval=10 s
+ S3 paths forstate.checkpoints.dir
/state.savepoints.dir
. - Target DDL: create the DM tables up front with PRIMARY KEY to enable upserts.
- Throughput: increase job/connector parallelism, adjust JDBC sink batch size/interval if needed (defaults are usually fine for this lab).
- DDL changes on the source: Postgres logical decoding does not emit DDL → when schemas change, redeploy the Flink table DDL (and adjust the target tables). Use Liquibase/Flyway to manage that cleanly.
- PK required for clean upserts/deletes. Without it, the JDBC sink would just append.
- DDL: manual coordination (expand/contract) is required.
- Slots: one active consumer per slot; multiple slots can increase WAL retention if a reader lags or is stopped.
- Backpressure: a slow data mart will throttle the job; watch Flink backpressure metrics and tune JDBC batching/parallelism.
Pros (vs. PostgreSQL logical replication):
- Transform while you move: filter, project, join, enrich, aggregate, de-duplicate, derive SCD logic—before data lands in the target.
- Fan-out: the same CDC stream can drive many targets (multiple PG DBs, ClickHouse, Elasticsearch, etc.).
- Decoupling & safety: backpressure, retries, checkpointed state, and rate-limiting protect the source and the target; you can shape load to the DM.
- Schema mediation: implement expand/contract, rename mapping, default values—logical replication needs the same schema on both sides.
- Non-PG targets: works when your sink isn’t Postgres (or not only Postgres).
- Observability: Flink UI and metrics on lag, checkpoints, failures, couple it with already proven Prometheus exporters.
Cons:
- More moving parts: Flink cluster + connectors to operate.
- Latency a bit higher: seconds (checkpoint cadence, batching) instead of sub-second WAL apply.
- DDL isn’t automatic: Postgres logical decoding doesn’t emit DDL; you still manage schema changes (expand/contract + redeploy).
- Throughput tuning: JDBC upserts can bottleneck a single DB; you tune parallelism and flush/batch settings.
Stick to plain PostgreSQL logical replication when:
- You just need a near-real-time 1:1 copy PG→PG, no transforms, same schema, and lowest latency with minimal ops.
- You’re okay with subscriber-side work (indexes/views) and whole-table subscription (no row-level filtering).
Add Hudi when you need things a database replica can’t give you:
- History & replayability: keep the raw truth (Bronze) cheaply; rebuild downstream tables any time.
- Upserts/deletes at scale on files: CDC-friendly MOR/COW tables.
- Time travel & incremental pulls: audits, backtests, point-in-time reprocessing.
- Many heterogeneous consumers: BI + ML + ad-hoc engines (Trino/Spark/Presto) without re-extracting from OLTP.
- Big volumes: storage and compute scale independently; you can run compaction/clustering off-peak.
Trade-off: more infra (S3/MinIO + compaction), higher “cold” query latency than a hot DB, and you still materialize serving tables for apps.
What I don't like with this design :
- Java based : I don't like having to handle Java issues, runtimes and versions. I am not a developer, and those kinds of things should be packaged in a way that makes them fast and easy to run. This open-source version and setup is not that user-friendly and requires deep knowledge of the tool, but realistically not much more than any other tool that would do similar things. Additionally, if you are going to run this in production, there is a high likelihood that you are going to use Kubernetes, which solves those issues and offers scalability.
- The version dependencies : I found out the hard way that the latest versions of all the packages used are not necessarily compatible with each other. In the open-source world, some projects sometimes need to catch up in development. Here I needed to use Flink 1.20.2 to have CDC working with Hudi, for example, and because I want to use both the JDBC connector and the Hudi sink I had to downgrade the entire stack. So be careful: Flink 1.20 is the LTS version, so this is fine, but if you want to use the latest features of the stable version you might want to check compatibility first.
- Cloud : this operational complexity of setup and management is handled for you in the cloud, hence the strong argument to go for it, apart from the obvious data sensitivity concerns; that last point is more a data labelling and classification issue than a real argument against using Flink. If your company is not leveraging cloud solutions because it is afraid of data loss, that is more an indicator of a lack of maturity in that area than of a technical limitation.
Additional notes : this setup is not meant for production; it is a simple showcase for lab purposes that you can easily reproduce. There is no persistence here (if you restart the Flink processes, your pipelines are lost). If you want a production-ready setup, refer to the official documentation and look at the Kubernetes installations: Introduction | Apache Flink CDC.
The article PostgreSQL CDC to JDBC Sink – minimal event driven architecture first appeared on dbi Blog.
Simplify trainings follow up with M-Files
In today’s organizations, things are continuously evolving. We have to become more agile, but we also need more control to ensure compliance with ISO certifications and the like.
In my last blog post, I discussed controlled documents, especially procedures. It’s good to write a procedure, but it’s better to ensure the audience is aware of it.

For that, M-Files also provides a solution with its Training Management module (part of the Compliance Kit).
Compliance Kit
If you have read my previous post, then you already know what this is about. For those who haven't, here is a brief overview. The Compliance Kit is an additional Vault application that extends M-Files capabilities with features like Version Control, Electronic Signature, Extended PDF Renditions, and more. Today we will explore the Training Module.
All these modules are especially valuable in regulated industries such as pharmaceuticals, finance, and manufacturing, where traceability and accountability are essential.
Training Management Module: added value
This module allows organizations to manage employee training records directly within M-Files.
So after some configuration to fit your needs you will get:
- Assignment of training: Automatically assign identified training based on roles, departments, or compliance requirements.

- Tracking and reporting: a supervisor can easily check completion, overdue tasks, and more

- Version control: A new training request is triggered as soon as a new controlled document is released to ensure that people are always up to date.

- Integration with workflows: as usual with M-Files, it is possible to integrate training completion with other business processes to create strong links, avoid silos, and improve the user experience.
Admittedly, the training module is not a pure LMS solution. However, it has the advantage of allowing you to monitor training progress and ensure that employees are aware of the latest available information thanks to M-Files, without adding any other software.
M-Files and the Compliance Kit represent a strategic opportunity to deliver high-value solutions that align with both operational efficiency and regulatory compliance. The Training Management module, in particular, bridges the gap between document control and human resource development, making it a cornerstone for digital transformation in compliance-heavy sectors.
Whether you are deploying M-Files for the first time or optimizing an existing configuration, please do not hesitate to contact us for personalized support.
The article Simplify trainings follow up with M-Files first appeared on dbi Blog.
Reimage your ODA from scratch
Oracle Database Appliance is still very popular today. Provisioning is fast: from unboxing to a first running database is done in one day. And everything is standardized.
Patching is not that straightforward. First, the latest patch can only be applied on top of one of the 4 previous releases, so if you come from an older release, you may have 2, 3 or more patches to apply to reach the current version. It could take hours, even days.
The question one could ask is whether it wouldn't be more efficient to do a full ODA reimaging instead of patching. Let's try to find out.
Environment and upgrade plan
This is a “real” environment composed of 4x ODA X8-2M: 2x ODAs for production, 2x ODAs for test/development. These servers have been provisioned in 2020 with the first 19c production available on ODA: 19.6. They were updated twice in the past years to reach 19.20 in 2024. These 2 upgrades were not that easy for some reasons, and systems are not that clean in my opinion.
According to MOS note 2757884.1, these ODAs are supported until August 2027, meaning that the last patch for these servers will be provided at that time. My client wants to keep them for 2 more years at least. So it's definitely worth it to apply the patch.
I could have applied patch 19.24 then patch 19.27 on these ODAs, but as the first jump requires a Data Preserving Reprovisioning (meaning an OS reimaging) and as my ODAs are not that clean, I discussed with my client and we chose to reimage the servers from scratch.
Note that the production ODAs we will focus on are running 15 Enterprise Edition databases, and that a Data Guard configuration exists for each of them.
The plan is to switch all primaries to one ODA, reimage the other one, wait 2 weeks to see if everything is fine, do the switchover to the freshly reimaged ODA and reimage the other one the same way. As I don’t want to patch my databases now, I will deploy a 19.20 DB home and migrate to the latest version later.
Note that when I started this reimaging, the current patch was 19.27. In the meantime, 19.28 became available.
Before starting: check ODA's health
Let's first check if the ODA I planned to reimage is healthy. You should normally monitor the hardware alerts on your ODAs, and ideally configure Automatic Service Request (ASR) to address hardware issues as soon as possible.
odaadmcli show server | grep Problem
Open Problems : 0
Open Problems Report : System is healthy
No problem on this ODA. I wouldn’t do anything on an ODA that is not healthy: fix up hardware troubles before patching or reimaging an ODA.
Check if standby databases are synced and disable archive transport
Apart from DBTEST which is primary (I always have a standalone test database on my ODAs), every database must be a Physical Standby Database. Expected downtime for such a reimaging is several hours or days depending on the number of databases. So, checking if sync is OK is a good practice. I also disable transport to standby databases and step down Data Guard to Maximum Performance:
vi check_role.sql
conn / as sysdba
set pages 0
select name||' is here '|| database_role from v$database;
exit;
for a in `ps -ef | grep pmon | grep -v grep | grep -v ASM | grep -v APX | cut -c 58- | sort`; do . oraenv <<< $a 1> /dev/null ; sqlplus -s /nolog @check_role; done
BPAPRD is here PHYSICAL STANDBY
DBTEST is here PRIMARY
BMC3PRD is here PHYSICAL STANDBY
MLOGAR is here PHYSICAL STANDBY
NIMSHMP is here PHYSICAL STANDBY
NOVOTEC is here PHYSICAL STANDBY
NTRAMPRD is here PHYSICAL STANDBY
NWTASKP is here PHYSICAL STANDBY
OPERA is here PHYSICAL STANDBY
PBYPRD is here PHYSICAL STANDBY
PRAXPRD is here PHYSICAL STANDBY
RCAT is here PHYSICAL STANDBY
SMARTPRD is here PHYSICAL STANDBY
SMKPRD is here PHYSICAL STANDBY
SPORT1 is here PHYSICAL STANDBY
SPORT2 is here PHYSICAL STANDBY
for a in `ps -ef | grep pmon | grep -v grep | grep -v ASM | grep -v APX | grep -v DBTEST | cut -c 58- | sort`; do echo $a ; . oraenv <<< $a 1> /dev/null ; dgmgrl / "show configuration lag" | grep Lag; done
BPAPRD
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 0 seconds (computed 1 second ago)
BMC3PRD
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
MLOGAR
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 0 seconds (computed 1 second ago)
NIMSHMP
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
NOVOTEC
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
NTRAMPRD
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
NWTASKP
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
OPERA
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
PBYPRD
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
PRAXPRD
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 0 seconds (computed 1 second ago)
RCAT
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 0 seconds (computed 1 second ago)
SMARTPRD
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
SMKPRD
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 0 seconds (computed 1 second ago)
SPORT1
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
SPORT2
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
for a in `ps -ef | grep pmon | grep -v grep | grep -v ASM | grep -v APX | cut -c 58- | sort`; do echo $a ; . oraenv <<< $a 1> /dev/null ; dgmgrl / "edit configuration set protection mode as MaxPerformance;" | grep Succeeded; dgmgrl / "EDIT DATABASE ${a}_IN SET STATE='TRANSPORT-OFF';" | grep Succeed ; done
BPAPRD
Succeeded.
Succeeded.
DBTEST
BMC3PRD
Succeeded.
Succeeded.
MLOGAR
Succeeded.
Succeeded
NIMSHMP
Succeeded.
Succeeded.
NOVOTEC
Succeeded.
Succeeded.
NTRAMPRD
Succeeded.
Succeeded
NWTASKP
Succeeded.
Succeeded.
OPERA
Succeeded.
Succeeded.
PBYPRD
Succeeded.
Succeeded.
PRAXPRD
Succeeded.
Succeeded.
RCAT
Succeeded.
Succeeded.
SMARTPRD
Succeeded.
Succeeded.
SMKPRD
Succeeded.
Succeeded.
SPORT1
Succeeded.
Succeeded.
SPORT2
Succeeded.
Succeeded.
Backup important files
As you will erase everything on this ODA, backup everything that needs to be backed up on a remote filesystem (/backup here). Usually, I use this script:
cat /backup/Patch19.27/dbi_prepatch_backup.sh
# Backup important files before patching
export BKPPATH=/backup/Patch19.27/backup_ODA_`hostname`_`date +"%Y%m%d_%H%M"`
echo "Backing up to " $BKPPATH
mkdir -p $BKPPATH
odacli list-databases > $BKPPATH/list-databases.txt
ps -ef | grep pmon | grep -v ASM | grep -v APX | grep -v grep | cut -c 58- | sort > $BKPPATH/running-instances.txt
odacli list-dbhomes > $BKPPATH/list-dbhomes.txt
odacli list-dbsystems > $BKPPATH/list-dbsystems.txt
odacli list-vms > $BKPPATH/list-vms.txt
crontab -u oracle -l > $BKPPATH/crontab-oracle.txt
crontab -u grid -l > $BKPPATH/crontab-grid.txt
crontab -l > $BKPPATH/crontab-root.txt
cat /etc/fstab > $BKPPATH/fstab.txt
cat /etc/oratab > $BKPPATH/oratab.txt
cat /etc/sysconfig/network > $BKPPATH/etc-sysconfig-network.txt
cat /etc/hosts > $BKPPATH/hosts
cat /etc/resolv.conf > $BKPPATH/resolv.conf
cat /etc/sysctl.conf > $BKPPATH/sysctl.conf
cp /etc/krb5.conf $BKPPATH/
cp /etc/krb5.keytab $BKPPATH/
mkdir $BKPPATH/network-scripts
cp /etc/sysconfig/network-scripts/ifcfg* $BKPPATH/network-scripts/
odacli describe-system > $BKPPATH/describe-system.txt
odacli describe-component > $BKPPATH/describe-component.txt
HISTFILE=~/.bash_history
set -o history
history > $BKPPATH/history-root.txt
cp /home/oracle/.bash_history $BKPPATH/history-oracle.txt
df -h > $BKPPATH/filesystems-status.txt
for a in `odacli list-dbhomes -j | grep dbHomeLocation | awk -F '"' '{print $4}' | sort` ; do mkdir -p $BKPPATH/$a/network/admin/ ; cp $a/network/admin/tnsnames.ora $BKPPATH/$a/network/admin/; cp $a/network/admin/sqlnet.ora $BKPPATH/$a/network/admin/; done
for a in `odacli list-dbhomes -j | grep dbHomeLocation | awk -F '"' '{print $4}' | sort` ; do mkdir -p $BKPPATH/$a/owm/ ; cp -r $a/owm/* $BKPPATH/$a/owm/; done
cp `ps -ef | grep -v grep | grep LISTENER | awk -F ' ' '{print $8}' | awk -F 'bin' '{print $1}'`network/admin/listener.ora $BKPPATH/gridhome-listener.ora
cp `ps -ef | grep -v grep | grep LISTENER | awk -F ' ' '{print $8}' | awk -F 'bin' '{print $1}'`/network/admin/sqlnet.ora $BKPPATH/gridhome-sqlnet.ora
tar czf $BKPPATH/u01-app-oracle-admin.tgz /u01/app/oracle/admin/
tar czf $BKPPATH/u01-app-odaorabase-oracle-admin.tgz /u01/app/odaorabase/oracle/admin/
tar czf $BKPPATH/u01-app-oracle-local.tgz /u01/app/oracle/local/
tar czf $BKPPATH/home.tgz /home/
cp /etc/passwd $BKPPATH/
cp /etc/group $BKPPATH/
echo "End"
echo "Backup files size:"
du -hs $BKPPATH
echo "Backup files content:"
ls -lrt $BKPPATH
sh /backup/Patch19.27/dbi_prepatch_backup.sh
...
This backup script is probably not exhaustive: take some time to make a suitable one for your needs.
Let’s also take a backup of the DB homes, in my case I only have one:
tar czf /backup/Patch19.27/odaprdcr_dbhome.tgz /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4
ls -lrth /backup/Patch19.27/odaprdcr_dbhome.tgz
-rw-r--r-- 1 root root 5.9G Sep 3 10:50 /backup/Patch19.27/odaprdcr_dbhome.tgz
Backup standby spfiles and controlfiles
As this ODA is only running standby databases, I don't need to back them up: I can use the backups from the primaries, assuming a reliable backup strategy has been deployed on the production databases. Two files differ between primary and standby databases: the spfile and the controlfile. So let's back up these 2 files for all databases; it just needs a couple of minutes:
su - oracle
mkdir /backup/Patch19.27/ctl_spf
cat /backup/Patch19.27/bck_ctl_spf.rcv
backup current controlfile format '/backup/Patch19.27/ctl_spf/ctl_%d_%T.bck';
backup spfile format '/backup/Patch19.27/ctl_spf/spf_%d_%T.bck';
exit;
for a in `ps -ef | grep pmon | grep -v grep | grep -v ASM | grep -v APX | grep -v DBTEST | cut -c 58- | sort`; do echo $a ; . oraenv <<< $a 1> /dev/null ; rman target / cmdfile="/backup/Patch19.27/bck_ctl_spf.rcv" | grep Succeeded; done
ls -lrt /backup/Patch19.27/ctl_spf
total 33592
-rw-r----- 1 oracle asmadmin 46104576 Sep 3 10:33 ctl_BPAPRD_20250903.bck
-rw-r----- 1 oracle asmadmin 114688 Sep 3 10:33 spf_BPAPRD_20250903.bck
-rw-r----- 1 oracle asmadmin 21233664 Sep 3 10:33 ctl_BMC3PRD_20250903.bck
-rw-r----- 1 oracle asmadmin 114688 Sep 3 10:33 spf_BMC3PRD_20250903.bck
-rw-r----- 1 oracle asmadmin 27656192 Sep 3 10:33 ctl_MLOGAR_20250903.bck
-rw-r----- 1 oracle asmadmin 114688 Sep 3 10:34 spf_MLOGAR_20250903.bck
-rw-r----- 1 oracle asmadmin 30965760 Sep 3 10:34 ctl_NIMSHMP_20250903.bck
-rw-r----- 1 oracle asmadmin 114688 Sep 3 10:34 spf_NIMSHMP_20250903.bck
-rw-r----- 1 oracle asmadmin 27394048 Sep 3 10:34 ctl_NOVOTEC_20250903.bck
-rw-r----- 1 oracle asmadmin 114688 Sep 3 10:34 spf_NOVOTEC_20250903.bck
-rw-r----- 1 oracle asmadmin 77955072 Sep 3 10:34 ctl_NTRAMPRD_20250903.bck
-rw-r----- 1 oracle asmadmin 114688 Sep 3 10:34 spf_NTRAMPRD_20250903.bck
-rw-r----- 1 oracle asmadmin 29294592 Sep 3 10:34 ctl_NWTASKP_20250903.bck
-rw-r----- 1 oracle asmadmin 114688 Sep 3 10:34 spf_NWTASKP_20250903.bck
-rw-r----- 1 oracle asmadmin 66387968 Sep 3 10:35 ctl_OPERA_20250903.bck
-rw-r----- 1 oracle asmadmin 114688 Sep 3 10:35 spf_OPERA_20250903.bck
-rw-r----- 1 oracle asmadmin 20905984 Sep 3 10:35 ctl_PBYPRD_20250903.bck
-rw-r----- 1 oracle asmadmin 114688 Sep 3 10:35 spf_PBYPRD_20250903.bck
-rw-r----- 1 oracle asmadmin 28868608 Sep 3 10:35 ctl_PRAXPRD_20250903.bck
-rw-r----- 1 oracle asmadmin 114688 Sep 3 10:35 spf_PRAXPRD_20250903.bck
-rw-r----- 1 oracle asmadmin 14352384 Sep 3 10:35 ctl_RCAT_20250903.bck
-rw-r----- 1 oracle asmadmin 114688 Sep 3 10:35 spf_RCAT_20250903.bck
-rw-r----- 1 oracle asmadmin 35028992 Sep 3 10:35 ctl_SMARTPRD_20250903.bck
-rw-r----- 1 oracle asmadmin 114688 Sep 3 10:35 spf_SMARTPRD_20250903.bck
-rw-r----- 1 oracle asmadmin 21594112 Sep 3 10:35 ctl_SMKPRD_20250903.bck
-rw-r----- 1 oracle asmadmin 114688 Sep 3 10:36 spf_SMKPRD_20250903.bck
-rw-r----- 1 oracle asmadmin 36798464 Sep 3 10:36 ctl_SPORT1_20250903.bck
-rw-r----- 1 oracle asmadmin 114688 Sep 3 10:36 spf_SPORT1_20250903.bck
-rw-r----- 1 oracle asmadmin 20840448 Sep 3 10:36 ctl_SPORT2_20250903.bck
-rw-r----- 1 oracle asmadmin 114688 Sep 3 10:36 spf_SPORT2_20250903.bck
The restore will be easier with these files.
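For reference, once the ODA has been redeployed and the databases are being recreated as standbys, such a backup piece could be brought back with an RMAN command file similar to the backup one above. The following is only a sketch for one database (BPAPRD taken as an example, using the file names and paths created earlier); adapt it to your environment and follow up with your usual standby recovery steps:
# Hypothetical command file, e.g. /backup/Patch19.27/rst_ctl_spf_BPAPRD.rcv, run with: rman target / cmdfile=...
# With no parameter file in place yet, RMAN starts a temporary instance for the spfile restore
startup nomount;
restore spfile from '/backup/Patch19.27/ctl_spf/spf_BPAPRD_20250903.bck';
# restart with the restored spfile, then bring back the standby controlfile and mount
startup force nomount;
restore controlfile from '/backup/Patch19.27/ctl_spf/ctl_BPAPRD_20250903.bck';
alter database mount;
exit;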
Prepare the json file for ODA deployment
Before doing the cleanup, prepare the json file you will use to deploy your ODA again.
You can use the one used for initial deployment, or eventually create a new one.
You will need these things for the json:
- which edition you use: Standard Edition or Enterprise Edition
- hostname, IP, netmask and gateway of your server
- hostname, IP, netmask and gateway of your ILOM
- user and group ids if you don’t use the default ones
- ratio between DATA and RECO for disk partitioning (you may change from what was configured before)
- DNS and NTP IPs
- default’s network interface (usually btbond1)
- DBTEST parameters if you create a test database during provisioning (I would recommend doing so)
You will find most of these parameters in the backup you’ve done with the script.
Here is my json file:
{
"instance": {
"instanceBaseName": "odaprdcr",
"dbEdition": "EE",
"objectStoreCredentials": null,
"name": "odaprdcr",
"systemPassword": "*********",
"timeZone": "Europe/Zurich",
"domainName": "dbi-services.ch",
"ntpServers" : ["10.8.122.132","10.8.122.133"],
"dnsServers" : ["10.8.122.132","10.8.122.233"],
"isMultiUserAccessEnabled": "false",
"isRoleSeparated": true,
"osUserGroup": {
"users": [
{
"userName": "oracle",
"userRole": "oracleUser",
"userId": 1000
},
{
"userName": "grid",
"userRole": "gridUser",
"userId": 1001
}
],
"groups": [
{
"groupName": "oinstall",
"groupRole": "oinstall",
"groupId": 1001
},
{
"groupName": "dbaoper",
"groupRole": "dbaoper",
"groupId": 1002
},
{
"groupName": "dba",
"groupRole": "dba",
"groupId": 1000
},
{
"groupName": "asmadmin",
"groupRole": "asmadmin",
"groupId": 1004
},
{
"groupName": "asmoper",
"groupRole": "asmoper",
"groupId": 1005
},
{
"groupName": "asmdba",
"groupRole": "asmdba",
"groupId": 1006
}
]
}
},
"nodes": [
{
"nodeNumber": "0",
"nodeName": "odaprdcr",
"network": [
{
"ipAddress" : "10.192.100.103",
"subNetMask" : "255.255.255.0",
"gateway" : "10.192.100.254",
"nicName": "btbond1",
"networkType": [
"Public"
],
"isDefaultNetwork": true
}
],
"ilom": {
"ilomName": "odaprdcr-ilom",
"ipAddress":"10.192.200.103",
"subNetMask":"255.255.255.0",
"gateway":"10.192.200.254"
}
}
],
"grid": {
"vip": [],
"diskGroup": [
{
"diskGroupName": "DATA",
"diskPercentage": 85,
"redundancy": "NORMAL"
},
{
"diskGroupName": "RECO",
"diskPercentage": 15,
"redundancy": "NORMAL"
}
],
"language": "en",
"scan": null
},
"database": {
"dbName": "DBTEST",
"dbCharacterSet": {
"characterSet": "AL32UTF8",
"nlsCharacterset": "AL16UTF16",
"dbTerritory": "AMERICA",
"dbLanguage": "AMERICAN"
},
"dbRedundancy": null,
"dbRole": "PRIMARY",
"adminPassword": "*********",
"dbEdition": "EE",
"databaseUniqueName": "DBTEST",
"dbClass": "OLTP",
"dbVersion": "19.27.0.0.250415",
"dbHomeId": null,
"instanceOnly": false,
"isCdb": true,
"pdBName": "PDB1",
"dbShape": "odb1",
"pdbAdminuserName": "pdbadmin",
"enableTDE": false,
"dbType": "SI",
"dbStorage": "ASM",
"dbConsoleEnable": false,
"dbOnFlashStorage": false,
"backupConfigId": null,
"rmanBkupPassword": null,
"tdePassword": null
},
"acr": {
"acrEnable": false
},
"sysOraHomeStorage": {
"diskgroup": "DATA",
"volumeSize": 80
}
}
Cleanup the data disks
Cleanup of the data disk headers is mandatory before reimaging, otherwise provisioning will fail. Make sure you're on the correct ODA; you can't undo this action:
/opt/oracle/oak/onecmd/cleanup.pl
INFO: Log file is /opt/oracle/oak/log/odaprdcr/cleanup/cleanup_2025-09-03_10-53-04.log
INFO: Log file is /opt/oracle/oak/log/odaprdcr/cleanup/dcsemu_diag_precleanup_2025-09-03_10-53-04.log
INFO: *******************************************************************
INFO: ** Starting process to cleanup provisioned host odaprdcr **
INFO: *******************************************************************
INFO: Default mode being used to cleanup a provisioned system.
INFO: It will change all ASM disk status from MEMBER to FORMER
Do you want to continue (yes/no) : yes
INFO: nodes will be rebooted
Do you want to continue (yes/no) : yes
INFO: /u01/app/19.20.0.0/oracle/bin/crsctl.bin
INFO: *************************************
INFO: ** Checking for GI bits presence
INFO: *************************************
INFO: GI bits /u01/app/19.20.0.0/oracle found on system under /u01/app directory...
INFO: ** Disabling AFD filtering
SUCCESS: AFD filtering disabled on all devices
INFO: ** Saving disk partition info
INFO: *************************************
INFO: ** Executing Stop Clusterware commands
INFO: *************************************
INFO: *************************************
INFO: ** Remove Oracle Stack
…
INFO: Rebooting the system via <reboot>...
INFO: Executing <reboot>
INFO: Cleanup was successful
INFO: Log file is /opt/oracle/oak/log/odaprdcr/cleanup/cleanup_2025-09-03_10-53-04.log
WARNING: After system reboot, please re-run "odacli update-repository" for GI/DB clones,
WARNING: before running "odacli create-appliance".
Reimage with the latest OS
ODA patch file 30403643 includes an ISO image you will use to reimage your ODA. From the ILOM, start the remote console, attach this ISO file, set the next boot device to CDROM and power cycle the server. Reimaging an ODA takes about 50 minutes and doesn’t need any input. Keep the remote console open for the next operation.
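If you prefer the ILOM command line over the web interface for the boot-device part, a minimal sketch could look like this (hypothetical session, assuming the ISO is already attached through the remote console; commands may vary with your ILOM version):
ssh root@odaprdcr-ilom
# set the next boot device to the (virtual) CDROM presented by the remote console
set /HOST boot_device=cdrom
# reset (power cycle) the host so it boots from the ISO
reset /SYS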
Configure firstnet
Once your ODA runs on a brand new OS, you will need to do the initial network setup through the remote console. Log into the console with root/welcome1, type configure-firstnet and provide the IP address, netmask and gateway of your server. LACP mode will depend on your switch configuration; if you didn’t use it before, keep it disabled. On an ODA deployed in 2020, LACP mode was not available. You normally won’t use DHCP or VLANs, as VLANs are usually tagged on the switch ports your ODA is connected to.
configure-firstnet
...
Once configured, you can reach your server through the network and copy the needed files for the next steps.
Patch the microcodes
Reimaging your ODA doesn’t update the microcodes; only the software part is new. Therefore, the first task after reimaging is patching these microcodes.
Let’s copy and register the patch file on the ODA and apply the updates (patch 37817290 for 19.27):
cd /opt/dbi
unzip p37817290_1927000_Linux-x86-64.zip
odacli update-repository -f /opt/dbi/oda-sm-19.27.0.0.0-250601-server.zip
odacli describe-job -i "1e2dd483-2237-47b9-9c60-5395fa720fd2"
Job details
----------------------------------------------------------------
ID: 1e2dd483-2237-47b9-9c60-5395fa720fd2
Description: Repository Update
Status: Success
Created: September 03, 2025 09:53:24 UTC
Message: /opt/dbi/oda-sm-19.27.0.0.0-250601-server.zip
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- -------
Unzip bundle September 03, 2025 09:53:39 UTC September 03, 2025 09:53:54 UTC Success
odacli describe-component
System Version
--------------
19.27.0.0.0
System Node Name
----------------
oak
Local System Version
--------------------
19.27.0.0.0
Component Installed Version Available Version
---------------------------------------- -------------------- --------------------
OAK 19.27.0.0.0 up-to-date
DCSCONTROLLER 19.27.0.0.0 up-to-date
DCSCLI 19.27.0.0.0 up-to-date
DCSAGENT 19.27.0.0.0 up-to-date
DCSADMIN 19.27.0.0.0 up-to-date
OS 8.10 up-to-date
ILOM 5.1.1.23.r151750 5.1.4.25.r160118
BIOS 52100600 52140100
LOCAL CONTROLLER FIRMWARE {
[c8, c9] 214.2.271.9 up-to-date
}
SHARED CONTROLLER FIRMWARE {
[c0, c1, c2, c3, c4, c5] VDV1RL06 up-to-date
}
LOCAL DISK FIRMWARE {
[c6d0, c6d1] XC311102 up-to-date
}
HMP 2.4.10.1.600 up-to-date
odacli create-prepatchreport -s -v 19.27.0.0.0
sleep 120 ; odacli describe-prepatchreport -i 8010302b-feca-4f23-92bc-6878aa586713
Prepatch Report
------------------------------------------------------------------------
Job ID: 8010302b-feca-4f23-92bc-6878aa586713
Description: Patch pre-checks for [OS, ILOM, SERVER] to 19.27.0.0.0
Status: SUCCESS
Created: September 3, 2025 9:54:56 AM UTC
Result: All pre-checks succeeded
Node Name
---------------
oak
Pre-Check Status Comments
------------------------------ -------- --------------------------------------
__OS__
Validate supported versions Success Validated minimum supported versions.
Validate patching tag Success Validated patching tag: 19.27.0.0.0.
Is patch location available Success Patch location is available.
Verify OS patch Success There are no packages available for
an update
Validate command execution Success Skipped command execution verfication
- Instance is not provisioned
__ILOM__
Validate ILOM server reachable Success Successfully connected with ILOM
server using public IP and USB
interconnect
Validate supported versions Success Validated minimum supported versions.
Validate patching tag Success Validated patching tag: 19.27.0.0.0.
Is patch location available Success Patch location is available.
Checking Ilom patch Version Success Successfully verified the versions
Patch location validation Success Successfully validated location
Validate command execution Success Skipped command execution verfication
- Instance is not provisioned
__SERVER__
Validate local patching Success Successfully validated server local
patching
Validate command execution Success Skipped command execution verfication
- Instance is not provisioned
odacli update-server -v 19.27.0.0.0
sleep 900 ; odacli describe-job -i 3a4ce5a7-6305-483d-a7c3-e6328d8a831e
Job details
----------------------------------------------------------------
ID: 3a4ce5a7-6305-483d-a7c3-e6328d8a831e
Description: Server Patching to 19.27.0.0.0
Status: Success
Created: September 03, 2025 09:57:34 UTC
Message:
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- -------
Validate ILOM server reachable September 03, 2025 09:57:36 UTC September 03, 2025 09:57:37 UTC Success
Stop DCS Admin September 03, 2025 09:57:38 UTC September 03, 2025 09:57:39 UTC Success
Generate mTLS certificates September 03, 2025 09:57:39 UTC September 03, 2025 09:57:40 UTC Success
Exporting Public Keys September 03, 2025 09:57:40 UTC September 03, 2025 09:57:41 UTC Success
Creating Trust Store September 03, 2025 09:57:41 UTC September 03, 2025 09:57:43 UTC Success
Restart DCS Admin September 03, 2025 09:57:43 UTC September 03, 2025 09:58:03 UTC Success
Update config files September 03, 2025 09:57:43 UTC September 03, 2025 09:57:43 UTC Success
Deactivate Unit[dnf-makecache.timer] September 03, 2025 09:58:04 UTC September 03, 2025 09:58:04 UTC Success
Deactivate Unit[kdump.service] September 03, 2025 09:58:04 UTC September 03, 2025 09:58:05 UTC Success
Modify BM udev rules September 03, 2025 09:58:05 UTC September 03, 2025 09:58:25 UTC Success
Creating repositories using yum September 03, 2025 09:58:25 UTC September 03, 2025 09:58:27 UTC Success
Updating YumPluginVersionLock rpm September 03, 2025 09:58:27 UTC September 03, 2025 09:58:29 UTC Success
Applying OS Patches September 03, 2025 09:58:29 UTC September 03, 2025 09:59:03 UTC Success
Creating repositories using yum September 03, 2025 09:59:03 UTC September 03, 2025 09:59:03 UTC Success
Applying HMP Patches September 03, 2025 09:59:04 UTC September 03, 2025 09:59:06 UTC Success
Applying Firmware local Disk Patches September 03, 2025 09:59:07 UTC September 03, 2025 09:59:10 UTC Success
Oda-hw-mgmt upgrade September 03, 2025 09:59:07 UTC September 03, 2025 09:59:07 UTC Success
Patch location validation September 03, 2025 09:59:07 UTC September 03, 2025 09:59:07 UTC Success
Applying Firmware local Controller Patch September 03, 2025 09:59:10 UTC September 03, 2025 09:59:15 UTC Success
Checking Ilom patch Version September 03, 2025 09:59:15 UTC September 03, 2025 09:59:15 UTC Success
Patch location validation September 03, 2025 09:59:15 UTC September 03, 2025 09:59:15 UTC Success
Save password in Wallet September 03, 2025 09:59:15 UTC September 03, 2025 09:59:16 UTC Success
Apply Ilom patch September 03, 2025 09:59:16 UTC September 03, 2025 10:09:24 UTC Success
Disabling IPMI v2 September 03, 2025 09:59:16 UTC September 03, 2025 09:59:16 UTC Success
Cleanup JRE Home September 03, 2025 10:09:24 UTC September 03, 2025 10:09:24 UTC Success
Copying Flash Bios to Temp location September 03, 2025 10:09:24 UTC September 03, 2025 10:09:24 UTC Success
Generating and saving BOM September 03, 2025 10:09:24 UTC September 03, 2025 10:09:27 UTC Success
Update System version September 03, 2025 10:09:24 UTC September 03, 2025 10:09:24 UTC Success
PreRebootNode Actions September 03, 2025 10:09:27 UTC September 03, 2025 10:09:27 UTC Success
Reboot Node September 03, 2025 10:09:27 UTC September 03, 2025 10:09:27 UTC Success
The system will reboot once patched. Let’s check the versions:
odacli describe-component
System Version
--------------
19.27.0.0.0
System Node Name
----------------
oak
Local System Version
--------------------
19.27.0.0.0
Component Installed Version Available Version
---------------------------------------- -------------------- --------------------
OAK 19.27.0.0.0 up-to-date
DCSCONTROLLER 19.27.0.0.0 up-to-date
DCSCLI 19.27.0.0.0 up-to-date
DCSAGENT 19.27.0.0.0 up-to-date
DCSADMIN 19.27.0.0.0 up-to-date
OS 8.10 up-to-date
ILOM 5.1.4.25.r160118 up-to-date
BIOS 52140100 up-to-date
LOCAL CONTROLLER FIRMWARE {
[c8, c9] 214.2.271.9 up-to-date
}
SHARED CONTROLLER FIRMWARE {
[c0, c1, c2, c3, c4, c5] VDV1RL06 up-to-date
}
LOCAL DISK FIRMWARE {
[c6d0, c6d1] XC311102 up-to-date
}
HMP 2.4.10.1.600 up-to-date
No need to apply the storage patch here, but if needed, apply it with:
odacli update-storage -v 19.27.0.0.0
Provision the appliance
Now it’s time to put the Oracle binaries back on this system. First register the GI and DB clones, then run create-appliance with the JSON file you prepared earlier:
cd /opt/dbi
unzip p30403662_1927000_Linux-x86-64.zip
unzip p30403673_1927000_Linux-x86-64.zip
odacli update-repository -f /opt/dbi/odacli-dcs-19.27.0.0.0-250527-GI-19.27.0.0.zip
sleep 90 ; odacli describe-job -i c8f3559f-7f93-4df1-ba99-7511111f27ce
Job details
----------------------------------------------------------------
ID: c8f3559f-7f93-4df1-ba99-7511111f27ce
Description: Repository Update
Status: Success
Created: September 03, 2025 10:19:04 UTC
Message: /opt/dbi/odacli-dcs-19.27.0.0.0-250527-GI-19.27.0.0.zip
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- -------
Unzip bundle September 03, 2025 10:19:04 UTC September 03, 2025 10:20:08 UTC Success
odacli update-repository -f /opt/dbi/odacli-dcs-19.27.0.0.0-250527-DB-19.27.0.0.zip
sleep 90 ; odacli describe-job -i 96187596-16f6-413c-8905-9d85c6d3e106
Job details
----------------------------------------------------------------
ID: 96187596-16f6-413c-8905-9d85c6d3e106
Description: Repository Update
Status: Success
Created: September 03, 2025 10:20:50 UTC
Message: /opt/dbi/odacli-dcs-19.27.0.0.0-250527-DB-19.27.0.0.zip
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- -------
Unzip bundle September 03, 2025 10:20:50 UTC September 03, 2025 10:21:46 UTC Success
odacli create-appliance -r /opt/dbi/deploy_odaprdcr.txt
Enter an initial password for Web Console account (oda-admin): **********
Confirm the password for Web Console account (oda-admin): **********
User 'oda-admin' created successfully...
odacli describe-job -i 22fec8d6-d9ae-4576-8f18-d8e1a0b00370
Job details
----------------------------------------------------------------
ID: 22fec8d6-d9ae-4576-8f18-d8e1a0b00370
Description: Provisioning service creation
Status: Success
Created: September 03, 2025 12:45:37 CEST
Message:
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- -------
Validate absence of Interconnect September 03, 2025 12:45:40 CEST September 03, 2025 12:45:40 CEST Success
network configuration file
Restart network interface pubnet September 03, 2025 12:45:43 CEST September 03, 2025 12:45:49 CEST Success
Setting up Network September 03, 2025 12:45:43 CEST September 03, 2025 12:45:43 CEST Success
Setting up Vlan September 03, 2025 12:45:50 CEST September 03, 2025 12:45:51 CEST Success
Restart network interface priv0.100 September 03, 2025 12:45:51 CEST September 03, 2025 12:45:52 CEST Success
Restart network interface privasm September 03, 2025 12:45:52 CEST September 03, 2025 12:45:53 CEST Success
Restart network interface privasm September 03, 2025 12:45:53 CEST September 03, 2025 12:45:59 CEST Success
Setting up Network September 03, 2025 12:45:53 CEST September 03, 2025 12:45:53 CEST Success
Network update September 03, 2025 12:45:59 CEST September 03, 2025 12:46:13 CEST Success
Restart network interface btbond1 September 03, 2025 12:45:59 CEST September 03, 2025 12:46:03 CEST Success
Setting up Network September 03, 2025 12:45:59 CEST September 03, 2025 12:45:59 CEST Success
Updating network September 03, 2025 12:45:59 CEST September 03, 2025 12:46:13 CEST Success
Restart network interface btbond1 September 03, 2025 12:46:03 CEST September 03, 2025 12:46:07 CEST Success
Restart network interface pubnet September 03, 2025 12:46:07 CEST September 03, 2025 12:46:13 CEST Success
OS usergroup 'asmadmin' creation September 03, 2025 12:46:13 CEST September 03, 2025 12:46:13 CEST Success
OS usergroup 'asmdba' creation September 03, 2025 12:46:13 CEST September 03, 2025 12:46:13 CEST Success
OS usergroup 'asmoper' creation September 03, 2025 12:46:13 CEST September 03, 2025 12:46:13 CEST Success
OS usergroup 'dba' creation September 03, 2025 12:46:13 CEST September 03, 2025 12:46:14 CEST Success
Validate availability of pubnet September 03, 2025 12:46:13 CEST September 03, 2025 12:46:13 CEST Success
OS user 'grid' creation September 03, 2025 12:46:14 CEST September 03, 2025 12:46:15 CEST Success
OS usergroup 'dbaoper' creation September 03, 2025 12:46:14 CEST September 03, 2025 12:46:14 CEST Success
OS usergroup 'oinstall' creation September 03, 2025 12:46:14 CEST September 03, 2025 12:46:14 CEST Success
OS user 'oracle' creation September 03, 2025 12:46:15 CEST September 03, 2025 12:46:16 CEST Success
Add SYSNAME in Env September 03, 2025 12:46:16 CEST September 03, 2025 12:46:16 CEST Success
Backup Config name validation September 03, 2025 12:46:16 CEST September 03, 2025 12:46:16 CEST Success
Backup config metadata persist September 03, 2025 12:46:16 CEST September 03, 2025 12:46:16 CEST Success
Default backup policy creation September 03, 2025 12:46:16 CEST September 03, 2025 12:46:16 CEST Success
Grant permission to RHP files September 03, 2025 12:46:16 CEST September 03, 2025 12:46:16 CEST Success
Install oracle-ahf September 03, 2025 12:46:16 CEST September 03, 2025 12:47:23 CEST Success
Stop DCS Admin September 03, 2025 12:48:25 CEST September 03, 2025 12:48:26 CEST Success
Generate mTLS certificates September 03, 2025 12:48:26 CEST September 03, 2025 12:48:27 CEST Success
Exporting Public Keys September 03, 2025 12:48:27 CEST September 03, 2025 12:48:28 CEST Success
Creating Trust Store September 03, 2025 12:48:29 CEST September 03, 2025 12:48:31 CEST Success
Restart DCS Admin September 03, 2025 12:48:31 CEST September 03, 2025 12:48:52 CEST Success
Update config files September 03, 2025 12:48:31 CEST September 03, 2025 12:48:31 CEST Success
Creating GI home directories September 03, 2025 12:48:52 CEST September 03, 2025 12:48:52 CEST Success
Extract GI clone September 03, 2025 12:48:52 CEST September 03, 2025 12:50:11 CEST Success
Grid home creation September 03, 2025 12:48:52 CEST September 03, 2025 12:50:13 CEST Success
Storage discovery September 03, 2025 12:50:13 CEST September 03, 2025 12:51:33 CEST Success
Creating wallet for Root User September 03, 2025 12:51:33 CEST September 03, 2025 12:51:36 CEST Success
Creating wallet for ASM Client September 03, 2025 12:51:36 CEST September 03, 2025 12:51:39 CEST Success
Grid stack creation September 03, 2025 12:51:39 CEST September 03, 2025 13:00:38 CEST Success
Provisioning GI with RHP September 03, 2025 12:51:39 CEST September 03, 2025 12:57:27 CEST Success
Set CRS ping target September 03, 2025 12:57:28 CEST September 03, 2025 12:57:29 CEST Success
Updating GI home version September 03, 2025 12:57:29 CEST September 03, 2025 12:57:34 CEST Success
Restarting Clusterware September 03, 2025 12:57:35 CEST September 03, 2025 13:00:38 CEST Success
Post cluster OAKD configuration September 03, 2025 13:00:38 CEST September 03, 2025 13:01:33 CEST Success
Disk group 'RECO' creation September 03, 2025 13:01:41 CEST September 03, 2025 13:01:52 CEST Success
Setting ACL for disk groups September 03, 2025 13:01:52 CEST September 03, 2025 13:01:55 CEST Success
Modify DB file attributes September 03, 2025 13:01:55 CEST September 03, 2025 13:02:03 CEST Success
Register Scan and Vips to Public Network September 03, 2025 13:02:03 CEST September 03, 2025 13:02:05 CEST Success
Configure export clones resource September 03, 2025 13:03:24 CEST September 03, 2025 13:03:24 CEST Success
Volume 'commonstore' creation September 03, 2025 13:03:24 CEST September 03, 2025 13:03:41 CEST Success
ACFS File system 'DATA' creation September 03, 2025 13:03:41 CEST September 03, 2025 13:04:03 CEST Success
Creating ACFS database home September 03, 2025 13:04:04 CEST September 03, 2025 13:05:22 CEST Success
Database home creation September 03, 2025 13:04:04 CEST September 03, 2025 13:07:55 CEST Success
Validating dbHome available space September 03, 2025 13:05:23 CEST September 03, 2025 13:05:23 CEST Success
Create required directories September 03, 2025 13:05:24 CEST September 03, 2025 13:05:24 CEST Success
Creating DbHome Directory September 03, 2025 13:05:24 CEST September 03, 2025 13:05:24 CEST Success
Extract DB clone September 03, 2025 13:05:24 CEST September 03, 2025 13:06:18 CEST Success
ProvDbHome by using RHP September 03, 2025 13:06:18 CEST September 03, 2025 13:07:33 CEST Success
Enable DB options September 03, 2025 13:07:33 CEST September 03, 2025 13:07:47 CEST Success
Creating wallet for DB Client September 03, 2025 13:07:51 CEST September 03, 2025 13:07:55 CEST Success
Database Creation by RHP September 03, 2025 13:07:56 CEST September 03, 2025 13:20:28 CEST Success
Provisioning service creation September 03, 2025 13:07:56 CEST September 03, 2025 13:23:47 CEST Success
Change permission for xdb wallet files September 03, 2025 13:20:28 CEST September 03, 2025 13:20:29 CEST Success
Add Startup Trigger to Open all PDBS September 03, 2025 13:20:29 CEST September 03, 2025 13:20:30 CEST Success
Place SnapshotCtrlFile in sharedLoc September 03, 2025 13:20:30 CEST September 03, 2025 13:20:33 CEST Success
SqlPatch upgrade September 03, 2025 13:22:08 CEST September 03, 2025 13:22:30 CEST Success
Running dbms_stats init_package September 03, 2025 13:22:30 CEST September 03, 2025 13:22:31 CEST Success
Set log_archive_dest for Database September 03, 2025 13:22:31 CEST September 03, 2025 13:22:34 CEST Success
Updating the Database version September 03, 2025 13:22:34 CEST September 03, 2025 13:22:38 CEST Success
Users tablespace creation September 03, 2025 13:23:47 CEST September 03, 2025 13:23:50 CEST Success
Clear all listeners from Database September 03, 2025 13:23:50 CEST September 03, 2025 13:23:50 CEST Success
Clear local and remote listeners from September 03, 2025 13:23:52 CEST September 03, 2025 13:23:53 CEST Success
Database
Copy Pwfile to Shared Storage September 03, 2025 13:23:53 CEST September 03, 2025 13:23:59 CEST Success
Persist new agent state entry September 03, 2025 13:24:02 CEST September 03, 2025 13:24:02 CEST Success
Persist new agent state entry September 03, 2025 13:24:02 CEST September 03, 2025 13:24:02 CEST Success
Provisioning service creation September 03, 2025 13:24:02 CEST September 03, 2025 13:24:02 CEST Success
Restart DCS Agent September 03, 2025 13:24:02 CEST September 03, 2025 13:24:03 CEST Success
Creating the appliance usually takes 40 to 50 minutes.
Check core reduction
Reimaging an ODA keeps the core reduction in place, but verify it with these commands:
odacli list-cpucores
Node Cores Modified Job Status
----- ------ ---------------------------------- ---------------
0 4 September 3, 2025 9:46:11 AM CEST CONFIGURED
lscpu | grep Core
Core(s) per socket: 2
Here I have 4 enabled cores, meaning 2 processor licenses.
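If the core reduction had not survived, you could reapply it before provisioning. A minimal sketch, assuming the same 4-core target as above (check odacli update-cpucore -h for the exact options on your release):
# Hypothetical: re-apply the core reduction to 4 cores, then verify
odacli update-cpucore -c 4
odacli list-cpucores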
Setup older DB home
As I will bring back my Data Guard setup with 19.20 databases, I will need a 19.20 DB home. Let’s register the clone file and create the DB home. An increase of the ORACLE_BASE volume (which hosts the diagnostic destination) is probably needed; 10GB is way too small for me:
odacli list-dbhome-storages
=============================================================================================================
ID Node Description Disk Group Volume Size(GB) Status
-------------------------------------- ---- -------------------- ---------- ----------- ---------- ----------
bef3ae8a-96d3-44d4-b179-1e5dbdd2a939 0 ORACLE_HOME DATA orahome_sh 80 CONFIGURED
e972cc17-ff92-4203-b23f-55fab02a7cc0 0 ORACLE_BASE DATA odabase_n0 10 CONFIGURED
=============================================================================================================
odacli modify-dbhome-storage -i e972cc17-ff92-4203-b23f-55fab02a7cc0 -s 50
sleep 10 ; odacli list-dbhome-storages
=============================================================================================================
ID Node Description Disk Group Volume Size(GB) Status
-------------------------------------- ---- -------------------- ---------- ----------- ---------- ----------
bef3ae8a-96d3-44d4-b179-1e5dbdd2a939 0 ORACLE_HOME DATA orahome_sh 80 CONFIGURED
e972cc17-ff92-4203-b23f-55fab02a7cc0 0 ORACLE_BASE DATA odabase_n0 50 CONFIGURED
=============================================================================================================
cd /opt/dbi
unzip p30403662_1920000_Linux-x86-64.zip
odacli update-repository -f /opt/dbi/odacli-dcs-19.20.0.0.0-230720-DB-19.20.0.0.zip
sleep 30 ; odacli describe-job -i 3ee5db4d-d350-47b4-8e24-18949d244ab7
Job details
----------------------------------------------------------------
ID: 3ee5db4d-d350-47b4-8e24-18949d244ab7
Description: Repository Update
Status: Success
Created: September 03, 2025 13:47:23 CEST
Message: /opt/dbi/odacli-dcs-19.20.0.0.0-230720-DB-19.20.0.0.zip
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- -------
Unzip bundle September 03, 2025 13:47:23 CEST September 03, 2025 13:47:55 CEST Success
odacli create-dbhome -v 19.20.0.0.230718
sleep 240 ; odacli describe-job -i 3123544b-60c5-418a-82b5-c110dd68aafa
Job details
----------------------------------------------------------------
ID: 3123544b-60c5-418a-82b5-c110dd68aafa
Description: Database Home OraDB19000_home2 creation with version :19.20.0.0.230718
Status: Success
Created: September 03, 2025 13:53:21 CEST
Message: Create Database Home
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- -------
Setting up SSH equivalence September 03, 2025 13:53:32 CEST September 03, 2025 13:53:34 CEST Success
Setting up SSH equivalence September 03, 2025 13:53:34 CEST September 03, 2025 13:53:35 CEST Success
Creating ACFS database home September 03, 2025 13:53:35 CEST September 03, 2025 13:53:35 CEST Success
Validating dbHome available space September 03, 2025 13:53:35 CEST September 03, 2025 13:53:35 CEST Success
Create required directories September 03, 2025 13:53:36 CEST September 03, 2025 13:53:36 CEST Success
Creating DbHome Directory September 03, 2025 13:53:36 CEST September 03, 2025 13:53:36 CEST Success
Extract DB clone September 03, 2025 13:53:36 CEST September 03, 2025 13:54:23 CEST Success
ProvDbHome by using RHP September 03, 2025 13:54:23 CEST September 03, 2025 13:55:39 CEST Success
Enable DB options September 03, 2025 13:55:39 CEST September 03, 2025 13:55:54 CEST Success
Creating wallet for DB Client September 03, 2025 13:55:57 CEST September 03, 2025 13:55:58 CEST Success
odacli list-dbhomes
ID Name DB Version DB Edition Home Location Status
---------------------------------------- -------------------- -------------------- ---------- -------------------------------------------------------- ----------
19ee71be-07ae-4373-9bb6-31db8195e36c OraDB19000_home1 19.27.0.0.250415 EE /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1 CONFIGURED
cd2dc372-3f95-40f1-917c-ee0793c26664 OraDB19000_home2 19.20.0.0.230718 EE /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2 CONFIGURED
Create the databases
Your ODA registry needs to know the databases, so I will create them before doing the restore. I don’t use the -io (instance only) option of odacli for database creation, as the srvctl entries would then be missing. So let’s create a new standalone database and then remove its files:
odacli create-database -cs AL32UTF8 -no-c -u BMC3PRD_CR -dh 'cd2dc372-3f95-40f1-917c-ee0793c26664' -n BMC3PRD -s odb1s -l AMERICAN -dt AMERICA -no-co -r asm
sleep 660 ; odacli describe-job -i 40b80359-0607-4330-ae6e-055d284a780e
Job details
----------------------------------------------------------------
ID: 40b80359-0607-4330-ae6e-055d284a780e
Description: Database service creation with DB name: BMC3PRD
Status: Success
Created: September 03, 2025 15:11:48 CEST
Message:
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- -------
Database Creation by RHP September 03, 2025 15:11:50 CEST September 03, 2025 15:19:24 CEST Success
Database Service creation September 03, 2025 15:11:50 CEST September 03, 2025 15:22:19 CEST Success
Setting up SSH equivalence September 03, 2025 15:11:50 CEST September 03, 2025 15:11:50 CEST Success
Setting up SSH equivalence September 03, 2025 15:11:50 CEST September 03, 2025 15:11:50 CEST Success
Change permission for xdb wallet files September 03, 2025 15:19:24 CEST September 03, 2025 15:19:25 CEST Success
Place SnapshotCtrlFile in sharedLoc September 03, 2025 15:19:25 CEST September 03, 2025 15:19:28 CEST Success
SqlPatch upgrade September 03, 2025 15:20:43 CEST September 03, 2025 15:20:50 CEST Success
Running dbms_stats init_package September 03, 2025 15:20:50 CEST September 03, 2025 15:20:51 CEST Success
Set log_archive_dest for Database September 03, 2025 15:20:51 CEST September 03, 2025 15:20:54 CEST Success
Updating the Database version September 03, 2025 15:20:54 CEST September 03, 2025 15:20:58 CEST Success
Create Users tablespace September 03, 2025 15:22:19 CEST September 03, 2025 15:22:20 CEST Success
Clear all listeners from Database September 03, 2025 15:22:20 CEST September 03, 2025 15:22:21 CEST Success
Copy Pwfile to Shared Storage September 03, 2025 15:22:23 CEST September 03, 2025 15:22:27 CEST Success
su - oracle
. oraenv <<< BMC3PRD
sqlplus / as sysdba
set pages 0
set feedback off
spool /tmp/rmfiles.sh
select 'asmcmd rm -fr '||file_name from dba_data_files;
select 'asmcmd rm -fr '||file_name from dba_temp_files;
select 'asmcmd rm -fr '||member from v$logfile;
select 'asmcmd rm -fr '||name from v$controlfile;
select 'asmcmd rm -fr '||value from v$parameter where name='spfile';
spool off
shutdown abort;
exit
exit
su - grid
sh /tmp/rmfiles.sh
exit
Do the same for the other databases. Don’t try to create your databases with multiple odacli jobs running at the same time: this is something the ODA cannot do in parallel; creation will be serialized and jobs will wait for the current one to complete.
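A minimal sketch of how this serialization can be handled in a script: poll the current odacli job until it leaves the Running state before submitting the next create-database (the job id below is simply the one from this example):
# Hypothetical wait loop: block until the current odacli job has finished
JOB_ID=40b80359-0607-4330-ae6e-055d284a780e
while odacli describe-job -i "$JOB_ID" | grep -q 'Status:.*Running'; do
  sleep 60
done
odacli describe-job -i "$JOB_ID" | grep 'Status:'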
Restore the databases
I’m used to moving the spfile from ASM to the filesystem, so the path is exactly the same as on the other server. At the same time, I change the role of the database, as odacli created it as a primary:
su - oracle
. oraenv <<< BMC3PRD
srvctl modify database -db BMC3PRD_CR -spfile /u01/app/oracle/admin/BMC3PRD/pfile/spfileBMC3PRD.ora -startoption mount -role PHYSICAL_STANDBY
Now let’s restore the spfile and the controlfile. RMAN is able to start an instance without any spfile in order to restore the spfile, which is very convenient:
rman target /
startup nomount;
restore spfile from '/backup/Patch19.27/ctl_spf/spf_BMC3PRD_20250903.bck';
Starting restore at 04-SEP-25
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=433 device type=DISK
channel ORA_DISK_1: restoring spfile from AUTOBACKUP /backup/Patch19.27/ctl_spf/spf_BMC3PRD_20250903.bck
channel ORA_DISK_1: SPFILE restore from AUTOBACKUP complete
Finished restore at 04-SEP-25
startup force nomount;
restore controlfile from '/backup/Patch19.27/ctl_spf/ctl_BMC3PRD_20250903.bck';
Starting restore at 04-SEP-25
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=26 device type=DISK
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:05
output file name=+RECO/BMC3PRD_CR/CONTROLFILE/current.486.1210937913
Finished restore at 04-SEP-25
Now I can start restoring the database. Choose the parallel degree carefully: as your ODA has nothing else to do, I would recommend using as many channels as you can for big databases, for example between 4 and 8 channels for an ODA with 4 enabled cores:
startup force mount;
run {
allocate channel c1 device type disk;
allocate channel c2 device type disk;
allocate channel c3 device type disk;
allocate channel c4 device type disk;
restore database ;
}
…
channel c4: restoring datafile 00004 to +DATA/BMC3PRD/DATAFILE/undotbs1.291.1042201603
channel c4: restoring datafile 00005 to +DATA/BMC3PRD/DATAFILE/users.294.1042202001
channel c4: reading from backup piece /backup/BMC3PRD/backupset/20250901_000002_inc0_BMC3PRD_45931446_s36914_p1.bck
channel c3: piece handle=/backup/BMC3PRD/backupset/20250901_000002_inc0_BMC3PRD_45931446_s36908_p2.bck tag=INC0_20250901_000002
channel c3: restored backup piece 2
channel c3: restore complete, elapsed time: 00:00:38
channel c1: piece handle=/backup/BMC3PRD/backupset/20250901_000002_inc0_BMC3PRD_45931446_s36911_p2.bck tag=INC0_20250901_000002
channel c1: restored backup piece 2
channel c1: restore complete, elapsed time: 00:00:42
channel c4: piece handle=/backup/BMC3PRD/backupset/20250901_000002_inc0_BMC3PRD_45931446_s36914_p1.bck tag=INC0_20250901_000002
channel c4: restored backup piece 1
channel c4: restore complete, elapsed time: 00:00:35
channel c2: piece handle=/backup/BMC3PRD/backupset/20250901_000002_inc0_BMC3PRD_45931446_s36911_p1.bck tag=INC0_20250901_000002
channel c2: restored backup piece 1
channel c2: restore complete, elapsed time: 00:00:54
Finished restore at 04-SEP-25
released channel: c1
released channel: c2
released channel: c3
released channel: c4
exit
Add TNS entries for Data Guard and a static entry for the listener
Your database is now restored; you’ll need to configure communication with the primary. Let’s add the following TNS entries:
vi $ORACLE_HOME/network/admin/tnsnames.ora
…
BMC3PRD_CR =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = odaprdcr)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = BMC3PRD_CR)
)
)
BMC3PRD_IN =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = odaprdin)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = BMC3PRD_IN)
)
)
You will also need the static entry for your database in the listener.ora file:
su - grid
vi $ORACLE_HOME/network/admin/listener.ora
SID_LIST_LISTENER=
(SID_LIST=
(SID_DESC=
(GLOBAL_DBNAME = BMC3PRD_CR)
(ORACLE_HOME = /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2)
(SID_NAME = BMC3PRD)
)
)
exit
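For the static entry to be taken into account, the listener has to re-read its configuration. A minimal sketch as the grid user, assuming the default listener name LISTENER:
su - grid
# reload the listener configuration and check that the static entry is registered
lsnrctl reload LISTENER
lsnrctl status LISTENER | grep BMC3PRD
exit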
Copy the password file and modify its location
Pick up the password file from the primary server and relocate the file to the filesystem with srvctl:
scp oracle@odaprdin:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2/dbs/orapwBMC3PRD /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2/dbs/
srvctl modify database -db BMC3PRD_CR -pwfile
srvctl stop database -db BMC3PRD_CR ; sleep 10 ; srvctl start database -db BMC3PRD_CR -o mount
You can use a symbolic link in the dbs subfolder if you want to put this password file alongside your spfile.
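As a minimal sketch (the target directory is the one where my spfile already lives; adapt the paths to your setup):
# Hypothetical: keep the password file next to the spfile and symlink it from dbs
mv /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2/dbs/orapwBMC3PRD /u01/app/oracle/admin/BMC3PRD/pfile/
ln -s /u01/app/oracle/admin/BMC3PRD/pfile/orapwBMC3PRD /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2/dbs/orapwBMC3PRD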
Recover the database
Restored datafiles are probably several hours old, so let’s recover the standby from the primary service before configuring Data Guard:
rman target sys
recover database from service 'BMC3PRD_IN';
…
channel ORA_DISK_1: using network backup set from service BMC3PRD_IN
destination for restore of datafile 00007: +DATA/BMC3PRD_CR/DATAFILE/dm_mc3prd_docbase.477.1210938063
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service BMC3PRD_IN
destination for restore of datafile 00008: +DATA/BMC3PRD_CR/DATAFILE/dm_mc3prd_docbase.478.1210938063
channel ORA_DISK_1: restore complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service BMC3PRD_IN
destination for restore of datafile 00009: +DATA/BMC3PRD_CR/DATAFILE/dm_mc3prd_docbase.479.1210938067
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service BMC3PRD_IN
destination for restore of datafile 00010: +DATA/BMC3PRD_CR/DATAFILE/dm_mc3prd_docbase.480.1210938165
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
starting media recovery
media recovery complete, elapsed time: 00:00:00
Finished recover at 04-SEP-25
exit
If archivelog files are missing or recovery is not fully possible, it’s not necessarily a problem: Data Guard should be able to resolve a small (or even a larger) archive gap.
Rebuild Data Guard configuration
Instead of adding my standby database back to the existing Data Guard configuration, I will recreate this configuration. It’s a matter of seconds:
dgmgrl sys
*****
Remove configuration;
create configuration DTG as primary database is 'BMC3PRD_IN' connect identifier is 'BMC3PRD_IN';
add database 'BMC3PRD_CR' as connect identifier is 'BMC3PRD_CR';
enable configuration;
edit database 'BMC3PRD_CR' set property LogXptMode='SYNC';
edit database 'BMC3PRD_IN' set property LogXptMode='SYNC';
edit database 'BMC3PRD_CR' set property StandbyFileManagement='AUTO';
edit database 'BMC3PRD_IN' set property StandbyFileManagement='AUTO';
EDIT DATABASE 'BMC3PRD_CR' SET PROPERTY 'ArchiveLagTarget'=1200;
EDIT DATABASE 'BMC3PRD_IN' SET PROPERTY 'ArchiveLagTarget'=1200;
edit configuration set protection mode as maxavailability;
Depending on the lag between your primary and this standby, it can take several minutes to bring both databases back in sync:
show configuration lag;
Configuration - dtg
Protection Mode: MaxAvailability
Members:
BMC3PRD_IN - Primary database
BMC3PRD_CR - Physical standby database
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 0 seconds (computed 1 second ago)
Fast-Start Failover: Disabled
Configuration Status:
SUCCESS (status updated 6 seconds ago)
This database is OK; let’s continue with the next ones.
Additional settings
If you applied additional settings on your ODA, you’ll have to configure them again. These could include:
- monitoring
- additional network interfaces
- additional services
- mount points and shares
- firewall rules
- Kerberos authentication
- compatibility modes in sqlnet.ora file
- …
With several multi-TB databases to restore, it took me less than 3 days per ODA for this operation. And these ODAs are now clean and ready for their second life.
Reimaging an ODA from scratch is a good solution as long as a Data Guard setup allows the databases to run on another ODA for several days. With good preparation, the job is not that big, and there is much less troubleshooting compared to patching. This is something to consider.
Handling of documents including meta data with Chronoscan and AI
Chronoscan is a well-known application for scanning, splitting and extracting metadata from documents. This is possible with traditional features such as Chronoscan’s Intelli-Tag, which requires additional, complex and sometimes time-consuming configuration.
It also offers an AI implementation, which makes things much less complex and reduces implementation time considerably. In the last version that I reviewed and tested, I discovered the possibility of using a local AI implementation. This means that AI servers which are not under your own control can be eliminated. As you can imagine, this is a game changer, as it puts your data back under your own governance, which is a major achievement in terms of security requirements.
Introduction to the use case
In this blog, I will present a possible implementation based on the handling of invoices. We will import them, split them and extract the metadata. The next step is straightforward: we will store them in an M-Files vault, including the PDF and the metadata we defined.
The diagram below shows the technical requirements at a high level and provides an overview of the necessary infrastructure and software.

The first step in the process is to add the documents to the designated hot folder. This will initiate the automatic document handling process.
It is possible to move a document manually from an email or directly from a scanner, and add it to the hot folder.
How the document import works
Chronoscan regularly checks the hot folder for new documents. If a valid document is identified, the system initiates the import, analysis and export processes. In this example, we will follow the steps outlined in the diagram below when importing the document.

In the final stage of the process, the document and its extracted metadata are exported to the M-Files vault. As shown in the image below, the document has been correctly assigned to the relevant document class, including the defined metadata.

Should you be seeking a comprehensive automated solution for the management of both electronic and paper-based documentation, with subsequent processing according to your business’s specific workflow, we invite you to contact us to discuss your particular requirements.
We can then work together to define a solution and scope a project. As a first step, we can create a proof of concept to confirm that the solution meets your needs, makes your daily work easier and saves you resources.
Unit Testing PostgreSQL with pgTAP

Unit testing is a fundamental practice in software development, ensuring that individual components function correctly. When working with PostgreSQL, testing database logic—such as functions, triggers, and constraints—is crucial for maintaining data integrity and reliability. One powerful tool for this purpose is pgTAP.
pgTAP is a PostgreSQL extension that provides a set of TAP (Test Anything Protocol) functions for writing unit tests directly in SQL. It allows developers to test database functions, schemas, constraints, and much more in an automated and repeatable way.
Before using pgTAP, you need to install it on your PostgreSQL instance. You can install it from source as follows:
10:18:56 postgres@ws-pgdev:/home/postgres/ [sw] wget https://api.pgxn.org/dist/pgtap/1.3.3/pgtap-1.3.3.zip .
--2025-04-02 10:19:53-- https://api.pgxn.org/dist/pgtap/1.3.3/pgtap-1.3.3.zip
Resolving api.pgxn.org (api.pgxn.org)... 88.198.49.178
Connecting to api.pgxn.org (api.pgxn.org)|88.198.49.178|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 329966 (322K) [application/zip]
Saving to: ‘pgtap-1.3.3.zip’
pgtap-1.3.3.zip 100%[============================================================================>] 322.23K --.-KB/s in 0.1s
2025-04-02 10:19:54 (3.18 MB/s) - ‘pgtap-1.3.3.zip’ saved [329966/329966]
--2025-04-02 10:19:54-- http://./
Resolving . (.)... failed: No address associated with hostname.
wget: unable to resolve host address ‘.’
FINISHED --2025-04-02 10:19:54--
Total wall clock time: 0.4s
Downloaded: 1 files, 322K in 0.1s (3.18 MB/s)
10:19:54 postgres@ws-pgdev:/home/postgres/ [sw] unzip pgtap-1.3.3.zip
Archive: pgtap-1.3.3.zip
b941782fada240afdb7057065eb3261a21e8512c
creating: pgtap-1.3.3/
inflating: pgtap-1.3.3/Changes
...
...
10:20:11 postgres@ws-pgdev:/home/postgres/ [sw] cd pgtap-1.3.3/
11:11:58 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] make
GNUmake running against Postgres version 17.0, with pg_config located at /u01/app/postgres/product/17/db_0/bin
Makefile:186: To use pg_prove, TAP::Parser::SourceHandler::pgTAP Perl module
Makefile:187: must be installed from CPAN. To do so, simply run:
Makefile:188: cpan TAP::Parser::SourceHandler::pgTAP
cp sql/pgtap--0.95.0--0.96.0.sql.in sql/pgtap--0.95.0--0.96.0.sql
cp sql/pgtap--0.96.0--0.97.0.sql.in sql/pgtap--0.96.0--0.97.0.sql
cp sql/pgtap--0.97.0--0.98.0.sql.in sql/pgtap--0.97.0--0.98.0.sql
cp sql/pgtap--0.98.0--0.99.0.sql.in sql/pgtap--0.98.0--0.99.0.sql
cp sql/pgtap--0.99.0--1.0.0.sql.in sql/pgtap--0.99.0--1.0.0.sql
cp sql/pgtap.sql.in sql/pgtap.sql
sed -e 's,MODULE_PATHNAME,$libdir/pgtap,g' -e 's,__OS__,linux,g' -e 's,__VERSION__,1.3,g' sql/pgtap.sql > sql/pgtap.tmp
mv sql/pgtap.tmp sql/pgtap.sql
'/usr/bin/perl' -e 'for (grep { /^CREATE /} reverse <>) { chomp; s/CREATE (OR REPLACE )?/DROP /; s/DROP (FUNCTION|VIEW|TYPE) /DROP $1 IF EXISTS /; s/ (DEFAULT|=)[ ]+[a-zA-Z0-9]+//g; print "$_;\n" }' sql/pgtap.sql > sql/uninstall_pgtap.sql
cp sql/pgtap.sql.in sql/pgtap-static.sql.tmp
*** Patching pgtap-static.sql with compat/install-9.6.patch
patching file sql/pgtap-static.sql.tmp
*** Patching pgtap-static.sql with compat/install-9.4.patch
patching file sql/pgtap-static.sql.tmp
*** Patching pgtap-static.sql with compat/install-9.2.patch
patching file sql/pgtap-static.sql.tmp
*** Patching pgtap-static.sql with compat/install-9.1.patch
patching file sql/pgtap-static.sql.tmp
sed -e 's#MODULE_PATHNAME#$libdir/pgtap#g' -e 's#__OS__#linux#g' -e 's#__VERSION__#1.3#g' sql/pgtap-static.sql.tmp > sql/pgtap-static.sql
'/usr/bin/perl' compat/gencore 0 sql/pgtap-static.sql > sql/pgtap-core.sql
'/usr/bin/perl' compat/gencore 1 sql/pgtap-static.sql > sql/pgtap-schema.sql
cp sql/pgtap.sql sql/pgtap--1.3.3.sql
cp sql/pgtap-core.sql sql/pgtap-core--1.3.3.sql
cp sql/pgtap-schema.sql sql/pgtap-schema--1.3.3.sql
11:12:02 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] make install
GNUmake running against Postgres version 17.0, with pg_config located at /u01/app/postgres/product/17/db_0/bin
Makefile:186: To use pg_prove, TAP::Parser::SourceHandler::pgTAP Perl module
Makefile:187: must be installed from CPAN. To do so, simply run:
Makefile:188: cpan TAP::Parser::SourceHandler::pgTAP
mkdir -p '/u01/app/postgres/product/17/db_0/share/extension'
mkdir -p '/u01/app/postgres/product/17/db_0/share/extension'
mkdir -p '/u01/app/postgres/product/17/db_0/share/doc/extension'
/bin/sh /u01/app/postgres/product/17/db_0/lib/pgxs/src/makefiles/../../config/install-sh -c -m 644 .//pgtap.control '/u01/app/postgres/product/17/db_0/share/extension/'
/bin/sh /u01/app/postgres/product/17/db_0/lib/pgxs/src/makefiles/../../config/install-sh -c -m 644 .//sql/pgtap--0.90.0--0.91.0.sql .//sql/pgtap--0.91.0--0.92.0.sql .//sql/pgtap--0.92.0--0.93.0.sql .//sql/pgtap--0.93.0--0.94.0.sql .//sql/pgtap--0.94.0--0.95.0.sql .//sql/pgtap--0.95.0--0.96.0.sql .//sql/pgtap--0.96.0--0.97.0.sql .//sql/pgtap--0.97.0--0.98.0.sql .//sql/pgtap--0.98.0--0.99.0.sql .//sql/pgtap--0.99.0--1.0.0.sql .//sql/pgtap--1.0.0--1.1.0.sql .//sql/pgtap--1.1.0--1.2.0.sql .//sql/pgtap--1.2.0--1.3.0.sql .//sql/pgtap--1.3.0--1.3.1.sql .//sql/pgtap--1.3.1--1.3.2.sql .//sql/pgtap--1.3.2--1.3.3.sql .//sql/pgtap--1.3.3.sql .//sql/pgtap--unpackaged--0.91.0.sql .//sql/pgtap-core--1.3.3.sql .//sql/pgtap-core.sql .//sql/pgtap-schema--1.3.3.sql .//sql/pgtap-schema.sql .//sql/pgtap.sql .//sql/uninstall_pgtap.sql '/u01/app/postgres/product/17/db_0/share/extension/'
/bin/sh /u01/app/postgres/product/17/db_0/lib/pgxs/src/makefiles/../../config/install-sh -c -m 644 .//doc/pgtap.mmd '/u01/app/postgres/product/17/db_0/share/doc/extension/'
As mentioned in the output of the previous command, we need to run some commands to be able to use pg_prove, which we are going to use later:
11:14:23 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] cpan TAP::Parser::SourceHandler::pgTAP
Loading internal logger. Log::Log4perl recommended for better logging
CPAN.pm requires configuration, but most of it can be done automatically.
If you answer 'no' below, you will enter an interactive dialog for each
configuration option instead.
Would you like to configure as much as possible automatically? [yes] yes
Warning: You do not have write permission for Perl library directories.
To install modules, you need to configure a local Perl library directory or
escalate your privileges. CPAN can help you by bootstrapping the local::lib
module or by configuring itself to use 'sudo' (if available). You may also
resolve this problem manually if you need to customize your setup.
What approach do you want? (Choose 'local::lib', 'sudo' or 'manual')
[local::lib] sudo
Fetching with HTTP::Tiny:
https://cpan.org/authors/01mailrc.txt.gz
Reading '/home/postgres/.cpan/sources/authors/01mailrc.txt.gz'
............................................................................DONE
Fetching with HTTP::Tiny:
https://cpan.org/modules/02packages.details.txt.gz
Reading '/home/postgres/.cpan/sources/modules/02packages.details.txt.gz'
Database was generated on Wed, 02 Apr 2025 08:29:02 GMT
..............
New CPAN.pm version (v2.38) available.
[Currently running version is v2.33]
You might want to try
install CPAN
reload cpan
to both upgrade CPAN.pm and run the new version without leaving
the current session.
..............................................................DONE
Fetching with HTTP::Tiny:
https://cpan.org/modules/03modlist.data.gz
Reading '/home/postgres/.cpan/sources/modules/03modlist.data.gz'
DONE
Writing /home/postgres/.cpan/Metadata
Running install for module 'TAP::Parser::SourceHandler::pgTAP'
Fetching with HTTP::Tiny:
https://cpan.org/authors/id/D/DW/DWHEELER/TAP-Parser-SourceHandler-pgTAP-3.37.tar.gz
Fetching with HTTP::Tiny:
https://cpan.org/authors/id/D/DW/DWHEELER/CHECKSUMS
Checksum for /home/postgres/.cpan/sources/authors/id/D/DW/DWHEELER/TAP-Parser-SourceHandler-pgTAP-3.37.tar.gz ok
'YAML' not installed, will not store persistent state
Configuring D/DW/DWHEELER/TAP-Parser-SourceHandler-pgTAP-3.37.tar.gz with Build.PL
Created MYMETA.yml and MYMETA.json
Creating new 'Build' script for 'TAP-Parser-SourceHandler-pgTAP' version '3.37'
DWHEELER/TAP-Parser-SourceHandler-pgTAP-3.37.tar.gz
/usr/bin/perl Build.PL --installdirs site -- OK
Running Build for D/DW/DWHEELER/TAP-Parser-SourceHandler-pgTAP-3.37.tar.gz
Building TAP-Parser-SourceHandler-pgTAP
DWHEELER/TAP-Parser-SourceHandler-pgTAP-3.37.tar.gz
./Build -- OK
Running Build test for DWHEELER/TAP-Parser-SourceHandler-pgTAP-3.37.tar.gz
t/source_handler.t .. ok
All tests successful.
Files=1, Tests=47, 1 wallclock secs ( 0.03 usr 0.00 sys + 0.08 cusr 0.12 csys = 0.23 CPU)
Result: PASS
DWHEELER/TAP-Parser-SourceHandler-pgTAP-3.37.tar.gz
./Build test -- OK
Running Build install for DWHEELER/TAP-Parser-SourceHandler-pgTAP-3.37.tar.gz
Building TAP-Parser-SourceHandler-pgTAP
Installing /usr/local/man/man1/pg_prove.1p
Installing /usr/local/man/man1/pg_tapgen.1p
Installing /usr/local/share/perl/5.36.0/TAP/Parser/SourceHandler/pgTAP.pm
Installing /usr/local/man/man3/TAP::Parser::SourceHandler::pgTAP.3pm
Installing /usr/local/bin/pg_tapgen
Installing /usr/local/bin/pg_prove
DWHEELER/TAP-Parser-SourceHandler-pgTAP-3.37.tar.gz
sudo ./Build install -- OK
11:15:03 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] mkdir -p '/u01/app/postgres/product/17/db_0/share/extension'
11:15:05 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] mkdir -p '/u01/app/postgres/product/17/db_0/share/extension'
mkdir -p '/u01/app/postgres/product/17/db_0/share/doc/extension'
11:15:10 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] /bin/sh /u01/app/postgres/product/17/db_0/lib/pgxs/src/makefiles/../../config/install-sh -c -m 644 .//pgtap.control '/u01/app/postgres/product/17/db_0/share/extension/'
11:15:18 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] /bin/sh /u01/app/postgres/product/17/db_0/lib/pgxs/src/makefiles/../../config/install-sh -c -m 644 .//sql/pgtap--0.90.0--0.91.0.sql .//sql/pgtap--0.91.0--0.92.0.sql .//sql/pgtap--0.92.0--0.93.0.sql .//sql/pgtap--0.93.0--0.94.0.sql .//sql/pgtap--0.94.0--0.95.0.sql .//sql/pgtap--0.95.0--0.96.0.sql .//sql/pgtap--0.96.0--0.97.0.sql .//sql/pgtap--0.97.0--0.98.0.sql .//sql/pgtap--0.98.0--0.99.0.sql .//sql/pgtap--0.99.0--1.0.0.sql .//sql/pgtap--1.0.0--1.1.0.sql .//sql/pgtap--1.1.0--1.2.0.sql .//sql/pgtap--1.2.0--1.3.0.sql .//sql/pgtap--1.3.0--1.3.1.sql .//sql/pgtap--1.3.1--1.3.2.sql .//sql/pgtap--1.3.2--1.3.3.sql .//sql/pgtap--1.3.3.sql .//sql/pgtap--unpackaged--0.91.0.sql .//sql/pgtap-core--1.3.3.sql .//sql/pgtap-core.sql .//sql/pgtap-schema--1.3.3.sql .//sql/pgtap-schema.sql .//sql/pgtap.sql .//sql/uninstall_pgtap.sql '/u01/app/postgres/product/17/db_0/share/extension/'
11:15:33 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] /bin/sh /u01/app/postgres/product/17/db_0/lib/pgxs/src/makefiles/../../config/install-sh -c -m 644 .//doc/pgtap.mmd '/u01/app/postgres/product/17/db_0/share/doc/extension/'
11:15:37 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] pg_prove
No tests named and 't' directory not found at /usr/share/perl/5.36/App/Prove.pm line 522.
11:15:42 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] pg_prove --version
pg_prove 3.37
You can check if pgTAP was installed properly using the following command:
10:24:09 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] make installcheck
GNUmake running against Postgres version 17.0, with pg_config located at /u01/app/postgres/product/17/db_0/bin
Makefile:186: To use pg_prove, TAP::Parser::SourceHandler::pgTAP Perl module
Makefile:187: must be installed from CPAN. To do so, simply run:
Makefile:188: cpan TAP::Parser::SourceHandler::pgTAP
Using 89 parallel test connections
Rebuilding test/build/all_tests
Schedule changed to test/build/parallel.sch
cp `cat test/build/which_schedule` test/build/run.sch
echo "# +++ regress install-check in +++" && /u01/app/postgres/product/17/db_0/lib/pgxs/src/makefiles/../../src/test/regress/pg_regress --inputdir=./ --bindir='/u01/app/postgres/product/17/db_0/bin' --inputdir=test --max-connections=89 --schedule test/schedule/main.sch --schedule test/build/run.sch
# +++ regress install-check in +++
# using postmaster on Unix socket, port 5432
ok 1 - build 369 ms
...
ok 4 - hastap 1309 ms
# parallel group (35 tests): matching istap do_tap moretap util performs_ok performs_within todotap check cmpok pg73 runjusttests roletap throwtap trigger usergroup enumtap policy runtests runnotests proctap fktap privs inheritance partitions valueset functap resultset aretap extension ownership ruletap pktap index unique
ok 5 + aretap 5911 ms
ok 6 + check 1558 ms
...
ok 39 + valueset 3784 ms
1..39
# All 39 tests passed.
Once installed, enable it in your database:
10:25:06 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] psql
psql (17.0 dbi services build)
Type "help" for help.
postgres=# \c hybrid
You are now connected to database "hybrid" as user "postgres".
hybrid=# CREATE EXTENSION pgtap;
CREATE EXTENSION
hybrid=# \dx
List of installed extensions
Name | Version | Schema | Description
--------------------+---------+--------------+-----------------------------------------------------------------------------------------------
btree_gist | 1.7 | training_app | support for indexing common datatypes in GiST
orafce | 4.14 | training_app | Functions and operators that emulate a subset of functions and packages from the Oracle RDBMS
pg_stat_statements | 1.11 | public | track planning and execution statistics of all SQL statements executed
pg_trgm | 1.6 | training_app | text similarity measurement and index searching based on trigrams
pgcrypto | 1.3 | training_app | cryptographic functions
pgtap | 1.3.3 | training_app | Unit testing for PostgreSQL
plperl | 1.0 | pg_catalog | PL/Perl procedural language
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
(8 rows)
pgTAP provides a wide range of assertions for testing various database objects. Let’s go through some examples.
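For instance, a handful of schema-level assertions can verify that expected objects exist before testing any behaviour. This is only a small sketch reusing the training_app schema from this post; see the pgTAP documentation for the full list of assertions:
psql -d hybrid <<'SQL'
SELECT plan(3);
-- schema checks: table, column and primary key existence
SELECT has_table('training_app', 'users', 'users table should exist');
SELECT has_column('training_app', 'users', 'username', 'users should have a username column');
SELECT col_is_pk('training_app', 'users', 'id', 'id should be the primary key of users');
SELECT * FROM finish();
SQL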
1. Testing a Function
Assume we have a function that verifies a password based on a specific pattern:
CREATE OR REPLACE FUNCTION training_app.f_password_verify (pv_password TEXT) RETURNS BOOLEAN
AS $$
SELECT pv_password ~ '^(?=.{10,}$)(?=.*[a-z])(?=.*[A-Z])(?=.*[0-9])(?=.*\W).*$';
$$ LANGUAGE sql;
hybrid=# \df f_password_verify
List of functions
Schema | Name | Result data type | Argument data types | Type
--------------+-------------------+------------------+---------------------+------
training_app | f_password_verify | boolean | pv_password text | func
(1 row)
To test this function with pgTAP:
hybrid=# SELECT plan(4);
plan
------
1..4
(1 row)
hybrid=# SELECT ok(training_app.f_password_verify('ValidPass1!'), 'Valid password should return true');
ok
------------------------------------------
ok 1 - Valid password should return true
(1 row)
hybrid=# SELECT ok(NOT training_app.f_password_verify('short1!'), 'Too short password should return false');
ok
-----------------------------------------------
ok 2 - Too short password should return false
(1 row)
hybrid=# SELECT ok(NOT training_app.f_password_verify('NoNumberPass!'), 'Password without a number should return false');
ok
------------------------------------------------------
ok 3 - Password without a number should return false
(1 row)
hybrid=# SELECT ok(NOT training_app.f_password_verify('NoSpecialChar1'), 'Password without special character should return false');
ok
---------------------------------------------------------------
ok 4 - Password without special character should return false
(1 row)
hybrid=# SELECT * FROM finish();
finish
--------
(0 rows)
Consider the users table with the following schema:
hybrid=# \d users
Table "training_app.users"
Column | Type | Collation | Nullable | Default
----------+-----------+-----------+----------+------------------------------
id | integer | | not null | generated always as identity
username | text | | not null |
password | text | | not null |
created | date | | | now()
validity | tstzrange | | |
Indexes:
"users_pkey" PRIMARY KEY, btree (id)
"i_username_trgm" gin (username gin_trgm_ops)
"i_users_username" btree (username)
"i_users_username_btree_partial" btree (created) WHERE created >= '2024-11-07'::date AND created < '2024-11-08'::date
Check constraints:
"user_check_username" CHECK (username ~* '[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}'::text)
"user_check_username_length" CHECK (length(username) <= 72)
Referenced by:
TABLE "user_training" CONSTRAINT "fk_user_training_users" FOREIGN KEY (user_id) REFERENCES users(id)
TABLE "users_history" CONSTRAINT "fk_users_history_user_id_users_id" FOREIGN KEY (user_id) REFERENCES users(id)
Policies:
POLICY "policy_current_month" FOR SELECT
TO role_app_read_only
USING (((EXTRACT(month FROM created))::integer = (EXTRACT(month FROM now()))::integer))
Triggers:
t_log_user_history BEFORE UPDATE ON users FOR EACH ROW EXECUTE FUNCTION tf_user_history()
create or replace function training_app.tf_user_history() returns trigger as
$$
begin
insert into training_app.users_history ( user_id, username, password, created, validity)
values ( old.id, old.username, old.password, old.created, old.validity);
return new;
end;
$$ language plpgsql;
create trigger t_log_user_history
before update on training_app.users
for each row
execute procedure training_app.tf_user_history();
To test the constraints, create a test file test_users_constraints.sql:
10:57:07 postgres@ws-pgdev:/u01/app/postgres/local/dmk/tests/ [sw] touch test_users_constraints.sql
10:57:37 postgres@ws-pgdev:/u01/app/postgres/local/dmk/tests/ [sw] cat test_users_constraints.sql
BEGIN;
SELECT plan(2);
-- Test CHECK constraint on username format
SELECT throws_like(
$$INSERT INTO training_app.users (username, password) VALUES ('invalid_user', 'Password1!')$$,
'new row for relation "users" violates check constraint "user_check_username"',
'Invalid username should fail CHECK constraint'
);
-- Test CHECK constraint on username length (use a valid email format so only the length constraint fails)
SELECT throws_like(
$$INSERT INTO training_app.users (username, password) VALUES (repeat('a', 70) || '@example.com', 'Password1!')$$,
'new row for relation "users" violates check constraint "user_check_username_length"',
'Username exceeding 72 characters should fail CHECK constraint'
);
SELECT * FROM finish();
ROLLBACK;
You can execute pgTAP tests using pg_prove, a command-line tool for running TAP tests. We are now going to test it with the file we just created, test_users_constraints.sql. Run it with pg_prove:
11:45:14 postgres@ws-pgdev:/u01/app/postgres/local/dmk/tests/ [sw] pg_prove -d hybrid -U postgres -p 5432 test_users_constraints.sql
test_users_constraints.sql .. ok
All tests successful.
Files=1, Tests=2, 0 wallclock secs ( 0.05 usr 0.01 sys + 0.00 cusr 0.01 csys = 0.07 CPU)
Result: PASS
To verify that our trigger correctly logs changes to the users table, we check:
- That the recorded historical data correctly reflects the old values before the update.
- That an update on users triggers an insert into users_history.
13:43:55 postgres@ws-pgdev:/u01/app/postgres/local/dmk/tests/ [sw] cat test_trigger.sql
BEGIN;
SELECT plan(2);
-- Insert a test user with a valid email as username
INSERT INTO training_app.users (username, password) VALUES ('testuser@example.com', 'TestPassword123!');
-- Update the user's username (this should activate the trigger)
UPDATE training_app.users SET username = 'updateduser@example.com' WHERE username = 'testuser@example.com';
-- Check if the corresponding row is added to the users_history table
SELECT ok(
(SELECT COUNT(*) FROM training_app.users_history WHERE user_id = (SELECT id FROM training_app.users WHERE username = 'updateduser@example.com')) > 0,
'User history should be logged in users_history after update'
);
-- Check if the values in users_history match the old values (before the update)
SELECT is(
(SELECT username FROM training_app.users_history WHERE user_id = (SELECT id FROM training_app.users WHERE username = 'updateduser@example.com') ORDER BY created DESC LIMIT 1),
'testuser@example.com',
'Username in user history should match the old (pre-update) value'
);
SELECT * FROM finish();
ROLLBACK;
Execute the test using:
13:58:38 postgres@ws-pgdev:/u01/app/postgres/local/dmk/tests/ [sw] pg_prove -d hybrid -U postgres -p 5432 test_trigger.sql
test_trigger.sql .. ok
All tests successful.
Files=1, Tests=2, 0 wallclock secs ( 0.04 usr 0.01 sys + 0.01 cusr 0.01 csys = 0.07 CPU)
Result: PASS
To wrap up, here is what makes pgTAP worth adding to your toolbox:
- Automated Testing: Helps maintain database integrity by catching errors early.
- SQL-Based: No need for external scripting languages; tests are written in SQL.
- Integration with CI/CD: Works with CI/CD pipelines to ensure database quality.
- Comprehensive Assertions: Supports functions, constraints, indexes, triggers, views, and more (see the sketch below).
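As an illustration of these schema-level assertions, here is a small sketch using standard pgTAP functions (has_index and has_trigger) to check the definition of the users table shown earlier:
BEGIN;
SELECT plan(3);
-- The primary key index and the trigram index on username should exist
SELECT has_index('training_app', 'users', 'users_pkey', 'Primary key index exists');
SELECT has_index('training_app', 'users', 'i_username_trgm', 'Trigram index on username exists');
-- The history trigger should be defined on the table
SELECT has_trigger('training_app', 'users', 't_log_user_history', 'History trigger exists');
SELECT * FROM finish();
ROLLBACK;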
pgTAP is a powerful tool for unit testing PostgreSQL databases. By incorporating it into your workflow, you can ensure that your database logic remains robust and reliable over time. Whether you’re testing functions, constraints, or triggers, pgTAP provides a structured and repeatable approach to database testing. You can find more information about pgTAP on the official website.
Do you use pgTAP in your projects? Let me know in the comments how it has helped you!
Dctm – Oracle DB permissions on recent versions
If you have been running Documentum on Oracle in the past, you might have noticed that recent versions of Dctm require more database permissions to perform an install/upgrade. This blog will be a quick one: it shows which DB permissions I used to request for older versions of Documentum, as well as what happens if you don't have the needed permissions on the latest versions. So, let's get right into it.
First of all, when working on big environments/customers, it is pretty rare to get full access to the Database. Therefore, I've always created the tablespace and the repository owner account upfront, with the needed permissions, before starting a new Documentum installation that would then use this dedicated account. If that isn't your case, this blog probably isn't very relevant, since the Documentum installer can create the account with the needed permissions by itself (as you provide it with the DB SYSADMIN account). Please also note that the Dctm installer will grant unlimited tablespace usage to the repository owner, which DBAs don't really like (and it's not really required if you do things properly…).
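As an illustration, that upfront preparation could look like the following sketch (the tablespace name, datafile path, sizes and password are placeholders to adapt to your environment):
CREATE TABLESPACE <REPO_TBS> DATAFILE '<PATH>/<REPO_TBS>01.dbf' SIZE 2G AUTOEXTEND ON NEXT 512M;
CREATE USER <REPO_NAME> IDENTIFIED BY "<PASSWORD>"
  DEFAULT TABLESPACE <REPO_TBS>
  QUOTA UNLIMITED ON <REPO_TBS>;  -- quota on its own tablespace only, instead of the UNLIMITED TABLESPACE system privilege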
For as long as I can recall, these have been the permissions that I requested/granted on the repository owner account (<REPO_NAME> being the dm_dbo account name) for Oracle Databases:
GRANT CONNECT TO <REPO_NAME>;
GRANT RESOURCE TO <REPO_NAME>;
GRANT CREATE SESSION TO <REPO_NAME>;
GRANT ALTER SESSION TO <REPO_NAME>;
GRANT CREATE TABLE TO <REPO_NAME>;
GRANT CREATE CLUSTER TO <REPO_NAME>;
GRANT CREATE SYNONYM TO <REPO_NAME>;
GRANT CREATE VIEW TO <REPO_NAME>;
GRANT CREATE SEQUENCE TO <REPO_NAME>;
GRANT CREATE DATABASE LINK TO <REPO_NAME>;
GRANT CREATE PROCEDURE TO <REPO_NAME>;
GRANT CREATE TRIGGER TO <REPO_NAME>;
GRANT CREATE MATERIALIZED VIEW TO <REPO_NAME>;
GRANT CREATE JOB TO <REPO_NAME>;
The documentation does not really give you many details about the exact requirements, and the content of the doc itself hasn't changed much (at all?) either. However, with the above list, you usually had enough permissions to install/run/upgrade Documentum as well as to perform the usual administrative tasks (like creating a database link to populate some custom tables with external data (ODBC?) or managing some archiving/cleanup).
With recent versions of Documentum though, if you only had the above DB permissions, an upgrade (for example from 20.x to 23.x) or an OOTB installation would fail because the Repository cannot start. The Repository logs would show something like:
[dmadmin@cs-0 ~]$ cat $DOCUMENTUM/dba/log/<REPO_NAME>.log
OpenText Documentum Content Server (version 23.4.0000.0143 Linux64.Oracle)
Copyright (c) 2023. OpenText Corporation
All rights reserved.
2024-06-25T14:06:31.324462 13781[13781] 0000000000000000 [DM_SERVER_I_START_SERVER]info: "Docbase <REPO_NAME> attempting to open"
2024-06-25T14:06:31.324558 13781[13781] 0000000000000000 [DM_SERVER_I_START_KEY_STORAGE_MODE]info: "Docbase <REPO_NAME> is using database for cryptographic key storage"
2024-06-25T14:06:31.324581 13781[13781] 0000000000000000 [DM_SERVER_I_START_SERVER]info: "Docbase <REPO_NAME> process identity: user(dmadmin)"
2024-06-25T14:06:31.701344 13781[13781] 0000000000000000 Database Session is shutting down due to (Pq)
2024-06-25T14:06:31.701461 13781[13781] 0000000000000000 [DM_SERVER_I_START_ID]info: "Docbase <REPO_NAME> was not successfully opened. Docbase id: (1000001)"
[dmadmin@cs-0 ~]$
The above message “Database Session is shutting down due to (XX)” can be caused by missing/wrong DB details (like the username), but it can also be caused by missing permissions on the DB Repository owner account. The problem is that this can happen even if you made sure to execute the “$DM_HOME/bin/dmdbtest” utility just before the Repository install/upgrade. This utility is (was) rather good to test the DB connection and to make sure that you have (had) enough permissions. However, in this specific case, it doesn't detect the issue and just completes successfully:
[dmadmin@cs-0 ~]$ ### Execution with initial DB permissions
[dmadmin@cs-0 ~]$ $DM_HOME/bin/dmdbtest -S<TNS_ALIAS> -U<USER_NAME> -P<PASSWORD>
Database successfully opened.
Test table successfully created.
Test view successfully created.
Test index successfully created.
Insert into table successfully done.
Index successfully dropped.
View successfully dropped.
Database case sensitivity test successfully past.
Table successfully dropped.
[dmadmin@cs-0 ~]$
Starting with Dctm 22.x (it was apparently around that time), some additional permissions are definitely required, but this utility either cannot check that, or it wasn't updated for it (I'll let you form your own opinion on that matter ;)). To be able to install/upgrade your environment, you will now need these additional permissions (the first one apparently being the most important):
GRANT SELECT_CATALOG_ROLE TO <REPO_NAME>;
GRANT CREATE TYPE TO <REPO_NAME>;
GRANT CREATE INDEXTYPE TO <REPO_NAME>;
GRANT CREATE ANY INDEX TO <REPO_NAME>;
After a successful grant, if you would like to check the permissions of the DB account, you can execute these queries:
- SELECT USERNAME, GRANTED_ROLE FROM USER_ROLE_PRIVS; (should display the SELECT_CATALOG_ROLE role)
- SELECT USERNAME, PRIVILEGE FROM USER_SYS_PRIVS; (should display the other three privileges)
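If you prefer a single result, the two checks can also be combined into one query (a small sketch, to be run as the repository owner):
SELECT 'ROLE' AS grant_type, GRANTED_ROLE AS name FROM USER_ROLE_PRIVS
UNION ALL
SELECT 'SYSTEM PRIVILEGE', PRIVILEGE FROM USER_SYS_PRIVS
ORDER BY 1, 2;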
Note: yes, the documentation has been mentioning the Select Catalog Role as required for a very long time, and it has been part of the Dctm binaries since Aug-2001 (24y ago…). But I don't think it was really required so far: I installed dozens (hundreds?) of Dctm environments without that role on Oracle (it's a sensitive one), and I never had any issue before, as far as I recall. In recent versions, however, something changed, and we now definitely need these additional permissions.
In any case, after granting the additional permissions to the Repository owner account on the Database, if you trigger the install/upgrade again, it should work properly:
[dmadmin@cs-0 ~]$ cat $DOCUMENTUM/dba/log/<REPO_NAME>.log
OpenText Documentum Content Server (version 23.4.0000.0143 Linux64.Oracle)
Copyright (c) 2023. OpenText Corporation
All rights reserved.
2024-06-25T15:20:28.223973 6372[6372] 0000000000000000 [DM_SERVER_I_START_SERVER]info: "Docbase <REPO_NAME> attempting to open"
2024-06-25T15:20:28.224045 6372[6372] 0000000000000000 [DM_SERVER_I_START_KEY_STORAGE_MODE]info: "Docbase <REPO_NAME> is using database for cryptographic key storage"
2024-06-25T15:20:28.224709 6372[6372] 0000000000000000 [DM_SERVER_I_START_SERVER]info: "Docbase <REPO_NAME> process identity: user(dmadmin)"
2024-06-25T15:20:29.048972 6372[6372] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize Post Upgrade Processing."
2024-06-25T15:20:29.050037 6372[6372] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize Base Types."
...
Automating Business with M-Files
There is no debate: an Enterprise Content Management system is a must-have nowadays.

Now, it’s more than just a place to store documents. We need intelligent systems that help us manage content, improve collaboration and ensure compliance.
M-Files excels in this area, particularly with its Advanced Workflow and Version Control module. Let’s take a look at why!
Why are workflows essential?
Many companies manage documents using only a basic ‘draft, final version’ lifecycle. This approach does not address issues of version confusion, which ultimately slows down activities and introduces risks.
M-Files transforms this with workflow automation, turning document lifecycles into structured, rule-based processes.
Advanced Workflow and Version Control
M-Files offers the Compliance Kit, a suite of tools that extends the basic capabilities of the solution, including “Advanced Workflow and Version Control”.
This package includes several features, such as:
- Change Management: adds change request objects to the Vault
- Periodic Tasks: manages recurring assignments
- Advanced Notifications: extends the notification capabilities
- Training Management: manages learning requirements and records
- Hierarchy Manager: triggers actions from related objects
But, for me, one of the most important is Version Control (and Version Control Lite), which provides the foundation for controlled documents.
What does this mean in practice?
Version control allows you to distinguish between working copies and published versions.
It means that documents being edited (working copies) have their own workflow, with their own steps, transitions, approvals, permissions and visibility.
When such a document reaches its final state, Version Control can automatically publish it as a final document.
A basic implementation can be something like:

Then, when the workflow reaches the “Effective” state, it generates an output document with its own specific workflow, like:

The creation and editing process happens in the background. Regular users cannot access or use documents that are not in a final state.
The last published document remains accessible and usable even if a new version is initiated.
Once a new version is published, the old version is moved to “superseded”, and the associated action can be archiving it, deleting it, or hiding it from normal users.
There are numerous typical use cases, but the most well-known are:
- Template management
- Procedures (SOPs)
- Contracts
M-Files offers a compelling toolkit for modernizing document management. The Advanced Workflow module adds structure and automation, and Version Control Lite provides clarity and control without unnecessary complexity.
These modules can be tailored to fit the unique business processes of any organization, whether it’s a mid-sized firm or a regulated enterprise, delivering real value with minimal disruption.
For any questions, feel free to contact us!