Feed aggregator

Deploying Azure Terraform code with Azure DevOps and a storage account as remote backend

Yann Neuhaus - Tue, 2026-02-03 03:00
Why this blog?

While working for a customer, I was tasked with creating an Azure infrastructure using Terraform and Azure DevOps (ADO). I thought about doing it the way I usually do with GitLab, but that isn't possible with ADO, as it doesn't store the Terraform state file itself. Instead, I had to use an Azure Storage Account. I configured it, blocked public network access, and then realized that my pipeline couldn't push the state to the Storage Account.

In fact, ADO isn't listed as a "Trusted Microsoft Service" and so it can't bypass firewall rules using that option on Storage Accounts. For this to work, I had to create a self-hosted agent running on an Azure VM Scale Set, and that is the topic of this blog.

Azure resources creation

Agent Creation

First, we create an Azure VM Scale Set. I kept most parameters at their default values, but they can be customized. I chose Linux as the operating system because that is what I needed. One important thing is to set the "Orchestration mode" to "Uniform", otherwise ADO pipelines won't work.

Storage account

For the storage account that will store our state, any storage account should work. Just note that you also need to create a container inside of it, which you will reference in your Terraform backend configuration. For the network preferences, we will go with "Public access" and "Enable from selected networks": this allows public access only from the selected networks. I do this to avoid creating a private endpoint to connect to a fully private storage account.

Entra ID identity for the pipeline

We also need to create an Entra ID Enterprise Application that we will provide to the pipeline. This identity must have the Contributor role (or a similar one) over the scope you target. It must also have at least Storage Blob Data Contributor on the Storage Account to be able to write the state to it.
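Before wiring everything into ADO, it can be useful to check that this service principal can actually write to the state container. The sketch below is only a hypothetical sanity check using the azure-identity and azure-storage-blob Python SDKs; the placeholder values and the container name "tfstate" are assumptions to adapt to your setup, and it must be run from a network allowed by the storage account firewall (for example the VM Scale Set itself).

# Hypothetical sanity check: can the Enterprise Application (service principal)
# write a blob into the Terraform state container?
from azure.identity import ClientSecretCredential
from azure.storage.blob import BlobServiceClient

# Placeholder values: replace with your tenant, app registration and storage account
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<application-client-id>",
    client_secret="<client-secret>",
)

service = BlobServiceClient(
    account_url="https://<storageaccount>.blob.core.windows.net",
    credential=credential,
)

# "tfstate" is an assumed container name, use the one you created for the state
container = service.get_container_client("tfstate")
container.upload_blob("connectivity-check.txt", b"ok", overwrite=True)
print("Write succeeded: the identity has sufficient rights on the container")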

Azure DevOps setup

Terraform code

You can use any Terraform code you want; for my example, I use a simple configuration that creates a Resource Group and a Virtual Network. Just note that your provider configuration must declare the azurerm remote backend, pointing to the resource group, storage account, container and state file key created above.

Pipeline code

I'm used to splitting my pipeline into two files: plan.yml is given to the ADO pipeline and calls the template that runs the actual code. What the pipeline does is pretty simple: it installs Terraform on the VM Scale Set instance, then runs the Terraform commands. The same block of code can be reused for the "apply".

A few things to note: in my plan.yml I reference a Variable Group "Terraform_SPN", which I will show just after. That is where we will find the information about our previously created Entra ID Enterprise Application.

In the template.yml, the important part is the pool definition. Here I only point to a name, which corresponds to the ADO Agent Pool that I created. I'll also show this step a bit further down.

For the pipeline creation itself, we will go to Pipeline -> Create a new pipeline -> Azure Repos Git

Then Existing Azure Pipelines YAML file, and pick our file from our repo.

We will also create a Variable Group; the name doesn't matter, just remember to use the same one in your YAML code. Here you create 4 variables holding information from your tenant and your Enterprise Application (typically the tenant ID, subscription ID, client ID and client secret). They will be used during the pipeline run to deploy your resources.

ADO Agent Pool

In the Project Settings, look for Agent Pools. Then create a new one and fill it as follows:

The Authorize button will appear after you select the subscription you want. To accept, your user must have the Owner role on that subscription, as the authorization grants rights: it allows ADO to communicate with Azure by creating a Service Principal. Then you can fill in the rest as follows:

ADO pipeline run

When you first run your pipeline you must authorize it to use the Variable Group and the Agent pool.

Once this is done, everything should go smoothly and end like this.

I hope that this blog was useful and can help you troubleshoot that kind of problem between Azure and Azure DevOps.

The article Deploying Azure Terraform code with Azure DevOps and a storage account as remote backend first appeared on the dbi Blog.

Debugging GoldenGate Veridata Agents

Yann Neuhaus - Mon, 2026-02-02 01:29

A crucial step after installing a Veridata server is to set up the Veridata agents. And, as mentioned in a previous blog post about the GoldenGate Veridata installation, Oracle didn't bother to indicate when an agent fails to start. For instance, the following agent named vdt_agent1 starts successfully:

[oracle@vmvdt vdt_agent1]$ ./agent.sh start agent.properties
Formatter type set to TEXT in odl.xml.
[oracle@vmvdt vdt_agent1]$ ps -ef|grep java | grep agent
oracle     56408       1  0 12:47 pts/0    00:00:00 /u01/app/oracle/product/jdk-17.0.17/bin/java -Djava.util.logging.config.class=oracle.core.ojdl.logging.LoggingConfiguration -Doracle.core.ojdl.logging.config.file=/u01/app/oracle/product/vdt_agent1/config/odl.xml -Dhome=/u01/app/oracle/product/vdt23/agent -DagentHome=/u01/app/oracle/product/vdt_agent1 -XX:+UseParallelGC -Xms1024M -Dagent-manifest.jar=/u01/app/oracle/product/vdt23/agent/agent-manifest.jar -jar /u01/app/oracle/product/vdt23/agent/JavaAgent.jar agent.properties

And this agent named vdt_agent2 fails to start:

[oracle@vmvdt vdt_agent2]$ ./agent.sh start agent.properties
Formatter type set to TEXT in odl.xml.
[oracle@vmvdt vdt_agent2]$ ps -ef|grep java | grep agent

There is zero difference in the agent.sh output, yet one agent is started while the other is not. That is quite disappointing when you compare Veridata to other Oracle solutions, where you usually get a return on the status of what you are starting.

The first thing to do would be to look at the logs, right? So let's see how that looks in practice:

[oracle@vmvdt3 product]$ pwd
/u01/app/oracle/product
[oracle@vmvdt3 product]$ ls -lrt vdt_agent*
vdt_agent1:
total 32
-rw-r-----. 1 oracle oinstall   79 Dec 21 10:49 VAOH.sh
-rw-r-----. 1 oracle oinstall 5037 Dec 21 10:49 agent.properties.sample
-rw-r-----. 1 oracle oinstall  172 Dec 21 10:49 ReadMe.txt
-rwxr-----. 1 oracle oinstall  261 Dec 21 10:49 configure_agent_ssl.sh
-rwxr-----. 1 oracle oinstall 1057 Dec 21 10:49 agent.sh
-rw-r-----. 1 oracle oinstall 5094 Dec 21 12:45 agent.properties
drwxr-x---. 3 oracle oinstall   62 Dec 21 12:47 config
drwxr-----. 2 oracle oinstall   57 Dec 21 12:47 logs

vdt_agent2:
total 32
-rw-r-----. 1 oracle oinstall   79 Dec 21 14:11 VAOH.sh
-rw-r-----. 1 oracle oinstall  172 Dec 21 14:11 ReadMe.txt
-rw-r-----. 1 oracle oinstall 5037 Dec 21 14:11 agent.properties.sample
-rwxr-----. 1 oracle oinstall  261 Dec 21 14:11 configure_agent_ssl.sh
-rwxr-----. 1 oracle oinstall 1057 Dec 21 14:11 agent.sh
-rw-r-----. 1 oracle oinstall 5050 Dec 21 14:12 agent.properties
drwxr-x---. 3 oracle oinstall   62 Dec 21 14:12 config

vdt_agent2 didn't start, so there is no logs folder yet. But even if it had started before, the logs would not necessarily be updated. So, are we completely lost in this situation? We first have to understand why there is no log at startup. Looking at the agent.sh script, we see the following:

...
SCRIPT_DIR="`dirname "$0"`"
AGENT_HOME="`cd "$SCRIPT_DIR" ; pwd`"
export AGENT_HOME

. $AGENT_HOME/VAOH.sh
...
  sed -i "0,/class=['\"][^'\"]*['\"]/s|class=['\"][^'\"]*['\"]|class='oracle.core.ojdl.logging.ODLHandlerFactory'|" "$ODL_XML_PATH"
  echo "Formatter type set to TEXT in odl.xml."
fi

$AGENT_ORACLE_HOME/agent_int.sh "$@"

AGENT_HOME refers to the agent deployment location; in our case it is /u01/app/oracle/product/vdt_agent2. As for AGENT_ORACLE_HOME, it is defined in the VAOH.sh script sourced earlier.

AGENT_ORACLE_HOME=/u01/app/oracle/product/vdt23/agent
export AGENT_ORACLE_HOME

It simply refers to the agent directory inside your Veridata home directory. So all options of agent.sh are passed to agent_int.sh, which looks like this:

...
case "$1" in
    start)
        nohup "$JAVA_EXECUTABLE" $JAVA_OPTS -jar "$JAR_FILE" $2 >/dev/null 2>&1 &
        ;;
    run)
        exec "$JAVA_EXECUTABLE" $JAVA_OPTS -jar "$JAR_FILE" $2
        ;;
    version)
        exec "$JAVA_EXECUTABLE" $JAVA_OPTS -jar "$JAR_FILE" version $2
        ;;
...

We have the culprit here! The start and run options run the same command: start discards any output and sends the process to the background, while run keeps it in the foreground and displays errors in the terminal. Let's try starting the agent with the run option instead:

[oracle@vmvdt vdt_agent2]$ ./agent.sh run agent.properties
Formatter type set to TEXT in odl.xml.
#server.jdbcDriver=ojdbc11-23.2.0.0.jar
[VERIAGT-BOOT] INFO Looking for home directory.
[VERIAGT-BOOT] INFO Found bootstrap class in file:/u01/app/oracle/product/vdt23/agent/JavaAgent.jar!/com/goldengate/veridata/agent/BootstrapNextGen.class.
[VERIAGT-BOOT] INFO Home directory: /u01/app/oracle/product/vdt23/agent
[VERIAGT-BOOT] INFO AGENT_DEPLOY_PATH not set, falling back to homeDir.
[VERIAGT-BOOT] INFO Preparing classpath.
[VERIAGT-BOOT] ERROR /u01/app/oracle/product/vdt_agent2/ojdbc11-23.9.0.25.07.jar does not exist. Check agent.properties configuration

In this case, the combination of the server.driversLocation and server.jdbcDriver parameters does not allow Veridata to locate the driver.
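To catch this kind of misconfiguration before starting the agent, a small pre-flight check can resolve the driver path from agent.properties and verify that the file exists. This is only a sketch, assuming the standard key=value properties format; the property names server.driversLocation and server.jdbcDriver are the ones involved in the error above.

import os
import sys

def jdbc_driver_path(properties_file):
    """Resolve the JDBC driver path from a Veridata agent.properties file."""
    props = {}
    with open(properties_file) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, value = line.split("=", 1)
                props[key.strip()] = value.strip()
    return os.path.join(props.get("server.driversLocation", ""), props.get("server.jdbcDriver", ""))

if __name__ == "__main__":
    path = jdbc_driver_path(sys.argv[1] if len(sys.argv) > 1 else "agent.properties")
    if os.path.isfile(path):
        print(f"OK: JDBC driver found at {path}")
    else:
        print(f"MISSING: {path} does not exist, check agent.properties")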

And for automation?

Apart from the java process that you can search for, once the agent is started its home directory will contain a logs folder with two log files.

[oracle@vmvdt logs]$ pwd
/u01/app/oracle/product/vdt_agent3/logs
[oracle@vmvdt logs]$ ll
total 4
-rw-r-----. 1 oracle oinstall   0 Dec 21 18:57 vdtperf-agent.log
-rw-r-----. 1 oracle oinstall 778 Dec 21 18:57 veridata-agent.log

If you don’t have one after starting the agent for the first time, you know for sure something went wrong. However, if you want to automatically detect if the agent was started, you should search for the following message in veridata-agent.log.

[2025-12-21T19:01:21.570+00:00] [veridata] [NOTIFICATION] [OGGV-60002] [oracle.veridata.agent] [tid: 1] [ecid: 0000Ph2IQNqFk305zzDCiW1dI4G1000001,0] Veridata Agent running on vmvdt port 8833
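If you want to script that check, the following sketch simply looks for the OGGV-60002 message in veridata-agent.log. The paths and the message are taken from the examples above; note that the message could also come from a previous start, so it is best combined with the java process check mentioned earlier.

import os

def agent_reported_running(agent_home):
    """Return True if veridata-agent.log contains the 'Veridata Agent running' message."""
    log_file = os.path.join(agent_home, "logs", "veridata-agent.log")
    if not os.path.isfile(log_file):
        # No logs folder at all: the agent never started properly
        return False
    with open(log_file) as f:
        return any("OGGV-60002" in line and "Veridata Agent running" in line for line in f)

if __name__ == "__main__":
    print(agent_reported_running("/u01/app/oracle/product/vdt_agent1"))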
How to change the log level of the agent?

To change the log level of the agent, edit the config/odl.xml file in your agent deployment location. See the documentation for a list of all the available options. For instance, you can change the oracle.veridata logger from NOTIFICATION:1 to TRACE:1 to generate detailed debugging information.

        <logger name='oracle.veridata' level='NOTIFICATION:1'
            useParentHandlers='false'>
            <handler name='odl-handler' />
            <handler name='my-console-handler' />
        </logger>
Testing Agent Connection

Once the agent is started, you can test the connection from the Veridata Web UI. After logging in, go to the Connections panel and click on Create.

GoldenGate Veridata WebUI panel

In the agent connection creation panel, indicate the agent port, the host name and the database type. After testing the connection, you will know if your agent is correctly configured with the red ribbon showing “Agent validation is successful”.

Testing the agent connection from Veridata Web UI

A good practice when using the agent.sh script is to first run the agent, and then start it. This way, you will quickly know if something went wrong.

The article Debugging GoldenGate Veridata Agents first appeared on the dbi Blog.

Dctm – Managing licenses through OTDS

Yann Neuhaus - Sat, 2026-01-31 04:09

As you might know, since Documentum 24.4, there is a new requirement to have a valid license assigned to a user in order to log in to Documentum. This is a pretty important change, most likely designed with the X-Plan license model in mind, and one of the reasons why OpenText changed the license approach in the first place. This was likely done to provide better visibility and control over who is allowed to access and use Documentum, and to ensure that all usage is correctly licensed.

In this blog, I will use Documentum as well as OTDS in their freshly released version 25.4. Depending on the version you are working with, the steps might slightly differ. I will assume that both the Documentum and OTDS environments are fully available and ready for inter-connectivity and license configuration. I will use a demo environment I have internally that is hosted on RKE2 (Kubernetes from SUSE). It doesn't use the out-of-the-box images from OpenText but our own custom images built from scratch (the same process we have used since Documentum 7.3, even before images/containers were officially provided):

  • OTDS URL: https://otds.dns.com/otdsws
  • Dctm Repo Name: dbi01
  • Dctm Repo inline account: adm_morgan
  • Dctm Repo OTDS (AD) account: morgan
  • OTDS Repo Resource ID: 12345678-1234-5678-9abc-123456789abc
  • OTDS Repo Secret Key: AbCd1234efGH5678IJkl9==
  • OTDS License Key Name: dctmlicense
  • OTDS Partition for inline accounts: dctminline
  • OTDS Partition for AD accounts: dbi
  • OTDS Business Admin: businessadmin

All the values above can be changed and you should update them based on your own environment/setup.

Note: The OTDS Partition for my AD, the Repository Resource & Access Roles as well as the OAuth Clients (+ possibly the Auth Handlers) are already created and configured. Since that is outside of the scope of the “license” part and it’s something you would already require even for versions prior to 24.4, I won’t cover that part in this blog. If you need help on that, don’t hesitate to contact us and we can help with the OTDS design and implementation.

1. Dctm Server – OTDS Authentication (turned auth+lic)

In previous versions of Documentum (like 16.4, 20.2, …, 25.2), the OTDS Authentication was handled by the JMS (Tomcat or WildFly if you go further back in versions). There was an application “OTDSAuthentication” deployed on the JMS that would link the Documentum Server and OTDS (REST calls). Starting in 25.4, this is now a standalone process that you can find under “$DM_HOME/OTDSAuthLicenseHttpServerBin” (a simple JAR file). This is started by the Repository automatically, but our custom images make sure it’s always running properly, even if something goes wrong and the Repository isn’t able to start it itself.

In that OTDS Auth application, there is a single configuration file that you need to configure to allow communication:

  • E.g. 16.4 (JMS on WildFly): $JMS_HOME/server/DctmServer_MethodServer/deployments/ServerApps.ear/OTDSAuthentication.war/WEB-INF/classes/otdsauth.properties
  • E.g. 23.4 (JMS on Tomcat): $JMS_HOME/webapps/OTDSAuthentication/WEB-INF/classes/otdsauth.properties
  • E.g. 25.4 (Standalone): $DM_HOME/OTDSAuthLicenseHttpServerBin/config/otdsauth.properties

The content is roughly the same for all versions of Documentum (it changed slightly but not too much). The main change that I wanted to mention here, linked to the licensing checks, is the addition of a new parameter called “admin_username“.

With the above parameters, this is a “simple” way to configure the “otdsauth.properties” file:

[dmadmin@cs-0 ~]$ # Definition of parameters, to fill the file
[dmadmin@cs-0 ~]$ otdsauth_properties="$DM_HOME/OTDSAuthLicenseHttpServerBin/config/otdsauth.properties"
[dmadmin@cs-0 ~]$ otds_base_url="https://otds.dns.com/otdsws"
[dmadmin@cs-0 ~]$ install_owner="dmadmin"
[dmadmin@cs-0 ~]$ repo="dbi01"
[dmadmin@cs-0 ~]$ otds_resource_id="12345678-1234-5678-9abc-123456789abc"
[dmadmin@cs-0 ~]$ otds_resource_key="AbCd1234efGH5678IJkl9=="
[dmadmin@cs-0 ~]$
[dmadmin@cs-0 ~]$ # First, to make sure the file is in "unix" file format. It's often provided by OT as DOS, even on linux, which is a problem...
[dmadmin@cs-0 ~]$ awk '{ sub("\r$", ""); print }' ${otdsauth_properties} > temp.properties
[dmadmin@cs-0 ~]$ mv temp.properties ${otdsauth_properties}
[dmadmin@cs-0 ~]$ chmod 660 ${otdsauth_properties}
[dmadmin@cs-0 ~]$
[dmadmin@cs-0 ~]$ # Configuration of the properties file
[dmadmin@cs-0 ~]$ sed -i "s,otds_rest_credential_url=.*,otds_rest_credential_url=${otds_base_url}/rest/authentication/credentials," ${otdsauth_properties}
[dmadmin@cs-0 ~]$ sed -i "s,otds_rest_ticket_url=.*,otds_rest_ticket_url=${otds_base_url}/rest/authentication/resource/validation," ${otdsauth_properties}
[dmadmin@cs-0 ~]$ sed -i "s,otds_rest_oauth2_url=.*,otds_rest_oauth2_url=${otds_base_url}/oauth2/token," ${otdsauth_properties}
[dmadmin@cs-0 ~]$ sed -i "s,synced_user_login_name=.*,synced_user_login_name=sAMAccountName," ${otdsauth_properties}
[dmadmin@cs-0 ~]$ sed -i "s,auto_cert_refresh=.*,auto_cert_refresh=true," ${otdsauth_properties}
[dmadmin@cs-0 ~]$ sed -i "s,cert_jwks_url=.*,cert_jwks_url=${otds_base_url}/oauth2/jwks," ${otdsauth_properties}
[dmadmin@cs-0 ~]$ sed -i "s,admin_username=.*,admin_username=${install_owner}," ${otdsauth_properties}
[dmadmin@cs-0 ~]$ sed -i "s,.*_resource_id=.*,${repo}_resource_id=${otds_resource_id}," ${otdsauth_properties}
[dmadmin@cs-0 ~]$ sed -i "s,.*_secretKey=.*,${repo}_secretKey=${otds_resource_key}," ${otdsauth_properties}
[dmadmin@cs-0 ~]$
2. Repo – OTDS Authentication (turned auth+lic)

With the new license requirement, in conjunction with the fact that it is not handled by the JMS anymore, the URL to be configured in the Repository has changed too. If you are upgrading your environment from a previous version, you will normally already have an "app_server_name" for "OTDSAuthentication", with a URL of "http(s)://hostname:port/OTDSAuthentication/servlet/authenticate" (where hostname is either localhost or your local hostname // port is 9080 or 9082 for example). With the new version of Documentum, this needs to be changed to "http://localhost:port/otdsauthlicense" (where port is 8400 by default).

# Example of commands to add a new entry for "OTDSAuthentication" (if it already exists, use "set" with the correct index instead of "append")
API> retrieve,c,dm_server_config
API> append,c,l,app_server_name
SET> OTDSAuthentication
API> append,c,l,app_server_uri
SET> http://localhost:8400/otdsauthlicense
API> save,c,l

Note: When I first saw that new standalone Java process, I was a bit skeptical… The OTDS Authentication used to run through the JMS, which means you could configure your Tomcat with all the security settings and best practices and they would apply to the OTDS Authentication as well. With the new standalone process, it is only available through HTTP and it also only listens on localhost/127.0.0.1… I guess that means you can forget about failover across Documentum Servers if you have a High-Availability environment! The source code isn't available, so I don't know if we could force it to HTTPS on another address with the help of Java options, and the documentation doesn't mention anything in that regard either (and I don't want to de-compile the classes)… So not very good, I would say. The only workaround would be to set up a front-end, but that over-complicates things.

3. Status (#1)

At that point in time, assuming that you have your OTDS Partition / Resource / Access Role and that you have a Documentum version < 24.4 (with the “old” URL configured in the “dm_server_config“), you should be able to log in with any OTDS account (and inline accounts as well, obviously).

However, for higher versions, such as our 25.4 environment, you should still not be able to log in to the Repository. Trying to do so with either an inline account or an OTDS-enabled account would both result in failure with this message:

[dmadmin@cs-0 ~]$ # Definition of parameters, to test the log in
[dmadmin@cs-0 ~]$ repo="dbi01"
[dmadmin@cs-0 ~]$ inline_test_account="adm_morgan"
[dmadmin@cs-0 ~]$
[dmadmin@cs-0 ~]$ # Trying to log in to the Repository
[dmadmin@cs-0 ~]$ iapi ${repo} -U${inline_test_account}
Please enter password for adm_morgan:

	OpenText Documentum iapi - Interactive API interface
	Copyright (c) 2025. OpenText Corporation
	All rights reserved.
	Client Library Release 25.4.0000.0134

Connecting to Server using docbase dbi01
[DM_SESSION_E_AUTH_FAIL]error:  "Authentication failed for user adm_morgan with docbase dbi01."

[DM_LICENSE_E_NO_LICENSE_CONFIG]error:  "Could not find dm_otds_license_config object."

Could not connect
[dmadmin@cs-0 ~]$

To be more precise, without any configuration on a 25.4 environment, you can fortunately still log in in a few ways, otherwise it would break Documentum. Some "exceptions" were put in place by OpenText and, from what I could see, the current behavior appears to be as follows (GR = Global Registry):

  • Log in through iAPI / iDQL will only work with “dmadmin” and other default GR-linked accounts (dm_bof_registry / dmc_wdk_presets_owner / dmc_wdk_preferences_owner) in a GR Repository
  • Log in through iAPI / iDQL will only work with “dmadmin” in a non-GR Repository
  • Log in through DA will work with “dmadmin”
  • Log in to any other DFC Client will not work with any account
4. OTDS – License, inline partition and business admin

So far, I talked about things that would mostly be set up/available if you have a Documentum environment already using OTDS for its authentication. Now, let's proceed with the OTDS configuration related to license management, with screenshots and examples.

4.1. Uploading a license key

Let’s start with uploading the license key for Documentum into OTDS:

  • Log in to “https://otds.dns.com/otds-admin” with the “admin” account (=otadmin@otds.admin)
  • Go to “License Keys
  • Click on “Add
  • Set the “License Key Name” to “dctmlicense
  • Set the “Resource ID” to the Repository Resource ID previously created
  • Click on “Next
  • Click on “Get License File” and browse your local filesystem to find the needed license file (.lic)
  • Click on “Save

You should end up with something like this:

4.2. Creating a Partition for “inline” accounts

Then, let’s create a new non-synchronized partition that will be used to store the Documentum inline accounts:

  • Go to “Partitions
  • Click on “Add” and then “New Non-synchronized User Partition
  • Set the “Name” to “dctminline
  • Click on “Save

You should end up with something like this (Note: “dbi” is a synchronized partition coming from a development LDAP that I created for this demo environment):

4.3. Allocating a license to a Partition

Now, let’s allocate the license to that inline Partition (“dctminline“), so that later, when the users get added, they will be able to take a license from the pool to log in to the Repository. Please note that I specifically didn’t allocate the license to the LDAP Partition (“dbi“), to show the difference later on:

  • For the Partition “dctminline“, click on “Actions” and then “Allocate to License
  • Select the correct license, if you have multiple. For my parameters, it should be “dctmlicense – dbi01
  • Select the correct counter, if you have multiple (like System Accounts / X2_Premium / …)
  • Click on “Allocate to License

You should see a message saying that it was successfully allocated:

4.4. Creating a “Business Admin” account

The next step in OTDS is to create what OpenText calls a “Business Admin” account. When someone tries to log in to Documentum or a Client Application, Documentum will use that account to contact OTDS to check whether there are enough licenses available and if the person trying to log in is allowed to do so (i.e. has a license allocated). We will configure that connection on Documentum side later, but for now, let’s create that Business Admin account:

  • For the Partition “otds.admin“, click on “Actions” and then “View Members
  • Click on “Add” and then “New User
  • Set the “User Name” to “businessadmin
  • Click on “Next
  • Set the “Password Options” to “Do not require password change on reset” (dropdown)
  • Set the “Password Options” to “Password never expires” (checkbox)
  • Set the “Password” to XXX (remember that password, it will be used later)
  • Set the “Confirm Password” to XXX (re-enter here the account password)
  • Click on “Save

You should end up with a new account in the "otds.admin" Partition (the 3rd one for me, next to the default "admin" account and the one for the OTDS Resource of the "dbi01" Repository):

4.5. Granting permissions to the “Business Admin” account

The last step is then to grant the necessary permission to the newly created Business Admin account, so it can check the license details:

  • Go to “Users & Groups
  • Click on “Groups
  • Search for the group named “otdsbusinessadmins” (there should be 1 result only)
  • For the Group “otdsbusinessadmins“, click on “Actions” and then “Edit Membership
  • Click on “Add Member
  • Search for the user named “businessadmin@otds.admin” (= user “businessadmin” created in “otds.admin” Partition)
  • Select the checkbox for that account
  • Click on “Add Selected

You should see a message saying that it was successfully done:

That completes the configuration on the OTDS Admin UI. There are quite a few manual steps, but fortunately, you should (normally) only have to do that once. The documentation mentions other steps about creating a Resource for the inline Partition, but that is only required if you are going to create accounts inside OTDS and expect them to be pushed to Documentum. In our case, since the inline accounts already exist in Documentum and we want to do the opposite, we don't need a Resource for that. There are also other steps about creating roles and whatnot, but for the testing / initial setup we are doing here, they aren't needed either.

5. Repo – “dm_otds_license_config” object

Now that OTDS is fully configured, the last step, as previously mentioned, is to tell Documentum which account it can use for license checks. This is done through the “dm_otds_license_config” object that you will need to create:

[dmadmin@cs-0 ~]$ # Definition of parameters, to create the license config object
[dmadmin@cs-0 ~]$ otds_base_url="https://otds.dns.com/otdsws"
[dmadmin@cs-0 ~]$ install_owner="dmadmin"
[dmadmin@cs-0 ~]$ repo="dbi01"
[dmadmin@cs-0 ~]$ otds_license_name="dctmlicense"
[dmadmin@cs-0 ~]$ otds_business_admin_username="businessadmin"
[dmadmin@cs-0 ~]$
[dmadmin@cs-0 ~]$ # Set the password in a secure way
[dmadmin@cs-0 ~]$ read -s -p "  --> Please enter the password here: " otds_business_admin_password
  --> Please enter the password here: XXX
[dmadmin@cs-0 ~]$
[dmadmin@cs-0 ~]$ # Creation of the license config object
[dmadmin@cs-0 ~]$ iapi ${repo} -U${install_owner} -Pxxx << EOC
create,c,dm_otds_license_config
set,c,l,otds_url
${otds_base_url}/rest
set,c,l,license_keyname
dctmlicense
set,c,l,business_admin_name
${otds_business_admin_username}
set,c,l,business_admin_password
${otds_business_admin_password}
save,c,l
reinit,c
apply,c,NULL,FLUSH_OTDS_CONFIG
EOC


	OpenText Documentum iapi - Interactive API interface
	Copyright (c) 2025. OpenText Corporation
	All rights reserved.
	Client Library Release 25.4.0000.0134

Connecting to Server using docbase dbi01
[DM_SESSION_I_SESSION_START]info:  "Session 012345678000aa1b started for user dmadmin."

Connected to OpenText Documentum Server running Release 25.4.0000.0143 Linux64.Oracle
Session id is s0
API> ...
000f424180001d00
API> SET> ...
OK
API> SET> ...
OK
API> SET> ...
OK
API> SET> ...
OK
API> ...
OK
API> ...
OK
API> ...
SUCCESS
API> Bye
[dmadmin@cs-0 ~]$

The “FLUSH_OTDS_CONFIG” apply command is only required if you modify an existing “dm_otds_license_config” object. However, I still added it, as this is a nice and simple way to make sure that the Repository is able to communicate with the standalone Java process (“OTDSAuthLicenseHttpServerBin“).

If it succeeds, you should see the “SUCCESS” message at the end. If it fails to communicate (e.g. you have an issue in your “dm_server_config.app_server_uri“), you should get an error about not being able to open a socket or something similar.

6. Status (#2)

At that point in time, since both the OTDS and Documentum Server configurations are complete, you might think that you would be able to log in to Documentum with an inline account. But that’s not the case. Trying to do so will result in this error:

[dmadmin@cs-0 ~]$ # Definition of parameters, to test the log in
[dmadmin@cs-0 ~]$ repo="dbi01"
[dmadmin@cs-0 ~]$ inline_test_account="adm_morgan"
[dmadmin@cs-0 ~]$
[dmadmin@cs-0 ~]$ # Trying to log in to the Repository with an inline account
[dmadmin@cs-0 ~]$ iapi ${repo} -U${inline_test_account}
Please enter password for adm_morgan:

	OpenText Documentum iapi - Interactive API interface
	Copyright (c) 2025. OpenText Corporation
	All rights reserved.
	Client Library Release 25.4.0000.0134

Connecting to Server using docbase dbi01
[DM_SESSION_E_AUTH_FAIL]error:  "Authentication failed for user adm_morgan with docbase dbi01."

[DM_LICENSE_E_USER_NOT_FOUND_OR_DUPLICATE]error:  "User adm_morgan not found in OTDS or duplicate user exists."

Could not connect
[dmadmin@cs-0 ~]$

This is because the inline account isn’t yet present inside OTDS. We did create an inline Partition, but it’s currently empty.

If you try to log in with an LDAP Partition account (e.g. “morgan“), then you will get a slightly different error message, since the account does exist in OTDS, but it’s in a Partition that we specifically didn’t allocate to the license yet:

[dmadmin@cs-0 ~]$ # Definition of parameters, to test the log in
[dmadmin@cs-0 ~]$ repo="dbi01"
[dmadmin@cs-0 ~]$ ldap_test_account="morgan"
[dmadmin@cs-0 ~]$
[dmadmin@cs-0 ~]$ # Trying to log in to the Repository with an LDAP account
[dmadmin@cs-0 ~]$ iapi ${repo} -U${ldap_test_account}
Please enter password for morgan:

	OpenText Documentum iapi - Interactive API interface
	Copyright (c) 2025. OpenText Corporation
	All rights reserved.
	Client Library Release 25.4.0000.0134

Connecting to Server using docbase dbi01
[DM_SESSION_E_AUTH_FAIL]error:  "Authentication failed for user morgan with docbase dbi01."

[DM_LICENSE_E_USER_NO_LICENSE_ALLOCATED]error:  "No License allocated for current user."

Could not connect
[dmadmin@cs-0 ~]$

At least, the error messages are accurate!

7. Dctm Server – migrating inline accounts to OTDS

As mentioned, the very last step is therefore to get all Documentum inline accounts created in OTDS. You could, obviously, create them all manually inside the inline Partition ("dctminline"), but if you have hundreds or even thousands of such accounts, it is going to take hours (and such a repetitive task will probably introduce a lot of human errors). For that purpose, OpenText provides a migration utility, which you can use in this way:

[dmadmin@cs-0 ~]$ # Definition of parameters
[dmadmin@cs-0 ~]$ install_owner="dmadmin"
[dmadmin@cs-0 ~]$ repo="dbi01"
[dmadmin@cs-0 ~]$ otds_inline_partition="dctminline"
[dmadmin@cs-0 ~]$
[dmadmin@cs-0 ~]$ # Execution of the migration utility
[dmadmin@cs-0 ~]$ cd $DOCUMENTUM/dfc
[dmadmin@cs-0 dfc]$ java -Ddfc.properties.file=../config/dfc.properties -cp .:dfc.jar com.documentum.fc.tools.MigrateInlineUsersToOtds ${repo} ${install_owner} xxx ${otds_inline_partition}
...
imported user to otds: dmc_wdk_presets_owner
imported user to otds: dmc_wdk_preferences_owner
imported user to otds: dm_bof_registry
imported user to otds: d2ssouser
imported user to otds: dm_fulltext_index_user
imported user to otds: adm_morgan
...
[dmadmin@cs-0 dfc]$

If you go back to the OTDS Admin UI, you should now see all the users present inside the inline Partition ("dctminline"). Logging in with your inline account should now work properly:

[dmadmin@cs-0 ~]$ # Definition of parameters, to test the log in
[dmadmin@cs-0 ~]$ repo="dbi01"
[dmadmin@cs-0 ~]$ inline_test_account="adm_morgan"
[dmadmin@cs-0 ~]$
[dmadmin@cs-0 ~]$ # Trying to log in to the Repository with an inline account
[dmadmin@cs-0 ~]$ iapi ${repo} -U${inline_test_account}
Please enter password for adm_morgan:

	OpenText Documentum iapi - Interactive API interface
	Copyright (c) 2025. OpenText Corporation
	All rights reserved.
	Client Library Release 25.4.0000.0134

Connecting to Server using docbase dbi01
[DM_SESSION_I_SESSION_START]info:  "Session 012345678000aa2d started for user adm_morgan."

Connected to OpenText Documentum Server running Release 25.4.0000.0143 Linux64.Oracle
Session id is s0
API> exit
Bye
[dmadmin@cs-0 ~]$

In the OTDS Admin UI, it is possible to see the current license usage. For that, go to "License Keys" and, for "dctmlicense", click on "Actions" and then "View Counters". This should now display "1" under "Unit Usage", as the account has taken a license from the pool ("Reserved Seat"). If needed, you can proceed with the allocation of other Partitions (or users/groups). Please note that logging in with one of the "exceptions" (e.g. "dmadmin") shouldn't use a counter.

The article Dctm – Managing licenses through OTDS first appeared on the dbi Blog.

Commercial PostgreSQL distributions with TDE (1) Fujitsu Enterprise Postgres (2) TDE

Yann Neuhaus - Fri, 2026-01-30 11:19

In the last post we did the setup of Fujitsu Enterprise Postgres, so we're now ready to look at how TDE is implemented in this distribution of PostgreSQL. The unit of encryption in this version of PostgreSQL is the tablespace, but before we can encrypt anything we need to create a master encryption key.

The location where the keystore gets created is specified by the “keystore_location” parameter, a parameter not known in community PostgreSQL:

postgres@rhel9-latest:/home/postgres/ [fe18] mkdir /u02/pgdata/keystore
postgres@rhel9-latest:/home/postgres/ [fe18] psql -c "alter system set keystore_location = '/u02/pgdata/keystore'"
ALTER SYSTEM
postgres@rhel9-latest:/home/postgres/ [fe18] sudo systemctl restart fe-postgres-1.service
postgres@rhel9-latest:/home/postgres/ [fe18] psql -c "show keystore_location"
  keystore_location   
----------------------
 /u02/pgdata/keystore
(1 row)

Once that is ready we need to create the master encryption key using a function called “pgx_set_master_key”:

postgres@rhel9-latest:/home/postgres/ [fe18] psql
psql (18.0)
Type "help" for help.

postgres=# \df pgx_set_master_key
                                List of functions
   Schema   |        Name        | Result data type | Argument data types | Type 
------------+--------------------+------------------+---------------------+------
 pg_catalog | pgx_set_master_key | void             | text                | func
(1 row)

postgres=# SELECT pgx_set_master_key('secret');
ERROR:  passphrase is too short or too long
DETAIL:  The length of the passphrase must be between 8 and 200 bytes.
postgres=# SELECT pgx_set_master_key('secret123');
 pgx_set_master_key 
--------------------
 
(1 row)

This created a new keystore file in the location we specified above, which is of course not human readable:

postgres=# \! ls -la /u02/pgdata/keystore
total 4
drwxr-xr-x. 2 postgres postgres  25 Jan 28 10:43 .
drwxr-xr-x. 4 postgres postgres  34 Jan 28 10:39 ..
-rw-------. 1 postgres postgres 928 Jan 28 10:43 keystore.ks
postgres=# \! strings /u02/pgdata/keystore/keystore.ks
KSTR
@NA\
Io      BS
a!]I
>yu;
2r<:4
G)n%j
6wE"
@{OT
ym&M]
@1l'z
}5>,

The advantage of implementing TDE at the tablespace level, and not for the whole instance, is that we can still restart the instance without specifying the master encryption key:

postgres@rhel9-latest:/home/postgres/ [fe18] sudo systemctl restart fe-postgres-1.service
postgres@rhel9-latest:/home/postgres/ [fe18] psql
psql (18.0)
Type "help" for help.

postgres=#

Before we can encrypt anything we need to open the keystore using another function called “pgx_open_keystore”:

postgres=# \df pgx_open_keystore
                               List of functions
   Schema   |       Name        | Result data type | Argument data types | Type 
------------+-------------------+------------------+---------------------+------
 pg_catalog | pgx_open_keystore | void             | text                | func
(1 row)

postgres=# select pgx_open_keystore('secret123');
 pgx_open_keystore 
-------------------
 
(1 row)

Once the keystore is open we can create an encrypted tablespace and put some data inside:

postgres=# \! mkdir /var/tmp/tbsencr
postgres=# create tablespace tbsencr location '/var/tmp/tbsencr' with (tablespace_encryption_algorithm = 'AES256' );
CREATE TABLESPACE
postgres=# \dbs+
                                                        List of tablespaces
    Name    |  Owner   |     Location     | Access privileges |                 Options                  |   Size    | Description 
------------+----------+------------------+-------------------+------------------------------------------+-----------+-------------
 pg_default | postgres |                  |                   |                                          | 23 MB     | 
 pg_global  | postgres |                  |                   |                                          | 790 kB    | 
 tbsencr    | postgres | /var/tmp/tbsencr |                   | {tablespace_encryption_algorithm=AES256} | 928 bytes | 
(3 rows)
postgres=# create table t1 ( a int, b text ) tablespace tbsencr;
CREATE TABLE
postgres=# insert into t1 select i, i::text from generate_series(1,100) i;
INSERT 0 100

Trying to read from that table without opening the keystore will of course fail:

postgres@rhel9-latest:/home/postgres/ [fe18] sudo systemctl restart fe-postgres-1.service
postgres@rhel9-latest:/home/postgres/ [fe18] psql -c "select * from t1"
ERROR:  could not encrypt or decrypt data because the keystore is not open
HINT:  Open the existing keystore, or set the master encryption key to create and open a new keystore

There are not only pgx_* functions, there are also pgx_* catalog views which give you information not available in community PostgreSQL, e.g. for listing the tablespaces and their encryption algorithm:

postgres=# select spcname, spcencalgo from pg_tablespace ts, pgx_tablespaces tsx where ts.oid = tsx.spctablespace;
  spcname   | spcencalgo 
------------+------------
 pg_default | none
 pg_global  | none
 tbsencr    | AES256
(3 rows)

Like in Oracle with the wallet, there is an option to auto-open the keystore:

postgres@rhel9-latest:/home/postgres/ [fe18] pgx_keystore --enable-auto-open /u02/pgdata/keystore/keystore.ks 
Enter passphrase: 
auto-open of the keystore has been enabled

The whole management of the keystore, including the auto-open option, is explained well in the official documentation.

Btw: If you dump the data with pg_dump while the keystore is open this will result in an unencrypted dump:

postgres@rhel9-latest:/home/postgres/ [fe18] pg_dump | grep -A 3 COPY
COPY public.t (a) FROM stdin;
1
\.

--
COPY public.t1 (a, b) FROM stdin;
1       1
2       2
3       3

There is also the pgx_dmpall command, which can be used to back up the whole instance, but this is out of scope for this post.

The article Commercial PostgreSQL distributions with TDE (1) Fujitsu Enterprise Postgres (2) TDE first appeared on the dbi Blog.

GoldenGate REST API basics with Python

Yann Neuhaus - Thu, 2026-01-29 01:10

The Oracle GoldenGate REST API has been around for quite some time now, but I've yet to see it used in practice at customers. Every time I tried to introduce it, it was treated as some sort of novelty, mainly because DBAs tend to dislike automation, but also for technical reasons. Even though the REST API was introduced in GoldenGate 12c with the Microservices Architecture, some customers are still stuck with the Classic Architecture. As a reminder, the Classic Architecture is completely absent from the 23ai version (plan your migration now!).

That being said, where to start when using the GoldenGate REST API? Oracle has some basic documentation using curl, but I want to take things a bit further by leveraging the power of Python, starting with basic requests.

Make your first request to the GoldenGate REST API

If you really have no idea what a REST API is, there are tons of excellent articles online for you to get into it. Getting back to the basics, in Python, the requests module will handle the API calls for us.

The most basic REST API call would look like what I show below. Adapt the credentials, and the service manager host and port.

import requests

url = "http://vmogg:7809/services"
auth = ("ogg_user", "ogg_password")
result = requests.get(url, auth=auth)

Until now, nothing fascinating, but with the 200 return code below, we know that the call succeeded. The ok flag gives us the status of the call:

>>> result
<Response [200]>
>>> result.ok
True

And to get the real data returned by the API, use the json method.

>>> result.json()
{'$schema': 'api:versions', 'links': [{'rel': 'current', 'href': 'http://vmogg:7809/services/v2', 'mediaType': 'application/json'}, {'rel': 'canonical', 'href': 'http://vmogg:7809/services', 'mediaType': 'application/json'}, {'rel': 'self', 'href': 'http://vmogg:7809/services', 'mediaType': 'application/json'}], 'items': [{'$schema': 'api:version', 'version': 'v2', 'isLatest': True, 'lifecycle': 'active', 'catalog': {'links': [{'rel': 'canonical', 'href': 'http://vmogg:7809/services/v2/metadata-catalog', 'mediaType': 'application/json'}]}}]}

Up until that point, you are successfully making API connections to your GoldenGate service manager, but nothing more. What you need to change is the URL, and more specifically the endpoint.

/services is called an endpoint, and the full list of endpoints can be found in the GoldenGate documentation. Not all of them are useful, but when looking for a specific GoldenGate action, this endpoint library is a good starting point.

For instance, to get the list of all the deployments associated with your service manager, use the /services/v2/deployments endpoint. If you get an OGG-12064 error, it means that the credentials are not correct (they were technically not needed for the first example).

>>> url = "http://vmogg:7809/services/v2/deployments"
>>> requests.get(url, auth=auth).json()
{
    '$schema': 'api:standardResponse',
    'links': [
        {'rel': 'canonical', 'href': 'http://vmogg:7809/services/v2/deployments', 'mediaType': 'application/json'},
        {'rel': 'self', 'href': 'http://vmogg:7809/services/v2/deployments', 'mediaType': 'application/json'},
        {'rel': 'describedby', 'href': 'http://vmogg:7809/services/v2/metadata-catalog/versionDeployments', 'mediaType': 'application/schema+json'}
    ],
    'messages': [],
    'response': {
        '$schema': 'ogg:collection',
        'items': [
            {'links': [{'rel': 'parent', 'href': 'http://vmogg:7809/services/v2/deployments', 'mediaType': 'application/json'}, {'rel': 'canonical', 'href': 'http://vmogg:7809/services/v2/deployments/ServiceManager', 'mediaType': 'application/json'}], '$schema': 'ogg:collectionItem', 'name': 'ServiceManager', 'status': 'running'},
            {'links': [{'rel': 'parent', 'href': 'http://vmogg:7809/services/v2/deployments', 'mediaType': 'application/json'}, {'rel': 'canonical', 'href': 'http://vmogg:7809/services/v2/deployments/ogg_test_01', 'mediaType': 'application/json'}], '$schema': 'ogg:collectionItem', 'name': 'ogg_test_01', 'status': 'running'},
            {'links': [{'rel': 'parent', 'href': 'http://vmogg:7809/services/v2/deployments', 'mediaType': 'application/json'}, {'rel': 'canonical', 'href': 'http://vmogg:7809/services/v2/deployments/ogg_test_02', 'mediaType': 'application/json'}], '$schema': 'ogg:collectionItem', 'name': 'ogg_test_02', 'status': 'running'}
        ]
    }
}

Already, we’re starting to get a bit lost in the output (even though I cleaned it for you). Without going too much into the details, when the call succeeds, we are interested in the response.items object, discarding $schema and links objects. When the call fails, let’s just display the output for now.

def parse(response):
    try:
        return response.json()
    except ValueError:
        return response.text

def extract_main(result):
    if not isinstance(result, dict):
        return result
    resp = result.get("response", result)
    if "items" not in resp:
        return resp
    exclude = {"links", "$schema"}
    return [{k: v for k, v in i.items() if k not in exclude} for i in resp["items"]]

result = requests.get(url, auth=auth)
if result.ok:
    response = parse(result)
    main_response = extract_main(response)

We now have a more human-friendly output for our API calls ! For this specific example, we only retrieve the deployment names and their status.

>>> main_response
[{'name': 'ServiceManager', 'status': 'running'}, {'name': 'ogg_test_01', 'status': 'running'}, {'name': 'ogg_test_02', 'status': 'running'}]
GoldenGate POST API calls

Some API calls require you to send data instead of just receiving it. A common example is the creation of a GoldenGate user. Both the role and the username are part of the endpoint. Using the post method instead of get, we pass the user settings (essentially the password) as the request body through the json argument:

import requests

role = "User"
user = "ogg_username"
url = f"http://vmogg:7809/services/v2/authorizations/{role}/{user}"
auth = ("ogg_user", "ogg_password")
data = {
    "credential": "your_password"
}

result = requests.post(url, auth=auth, json=data)

To check if the user was created, you can go to the Web UI or check the ok flag again.

>>> result.ok
True

Here, the API doesn’t provide us with much information when the call succeeds:

>>> result.text
'{"$schema":"api:standardResponse","links":[{"rel":"canonical","href":"http://vmogg:7809/services/v2/authorizations/User/ogg_username","mediaType":"application/json"},{"rel":"self","href":"http://vmogg:7809/services/v2/authorizations/User/ogg_username","mediaType":"application/json"}],"messages":[]}'
A fully working OGGRestAPI Python class

When dealing with the REST API, you will quickly feel the need for a standard client object that handles everything for you. A very basic class in an ogg_rest_api.py script would look like this:

import requests
import urllib3


class OGGRestAPI:
    def __init__(self, url, username=None, password=None, ca_cert=None, verify_ssl=True):
        """
        Initialize Oracle GoldenGate REST API client.

        :param url: Base URL of the OGG REST API. It can be:
                    'http(s)://hostname:port' without NGINX reverse proxy,
                    'https://nginx_host:nginx_port' with NGINX reverse proxy.
        :param username: service username
        :param password: service password
        :param ca_cert: path to a trusted CA cert (for self-signed certs)
        :param verify_ssl: bool, whether to verify SSL certs
        """
        self.base_url = url
        self.username = username
        self.auth = (self.username, password)
        self.headers = {'Accept': 'application/json', 'Content-Type': 'application/json'}
        self.verify_ssl = ca_cert if ca_cert else verify_ssl

        if not verify_ssl and self.base_url.startswith('https'):
            # Disable InsecureRequestWarning when SSL verification is turned off
            urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

    def _request(self, method, path, *, params=None, data=None, extract=True):
        url = f'{self.base_url}{path}'
        response = requests.request(
            method,
            url,
            auth=self.auth,
            headers=self.headers,
            params=params,
            json=data,
            verify=self.verify_ssl
        )
        self._check_response(response)
        result = self._parse(response)
        return self._extract_main(result) if extract else result

    def _get(self, path, params=None, extract=True):
        return self._request('GET', path, params=params, extract=extract)

    def _post(self, path, data=None, extract=True):
        return self._request('POST', path, data=data, extract=extract)

    def _put(self, path, data=None, extract=True):
        return self._request('PUT', path, data=data, extract=extract)

    def _patch(self, path, data=None, extract=True):
        return self._request('PATCH', path, data=data, extract=extract)

    def _delete(self, path, extract=True):
        return self._request('DELETE', path, extract=extract)

    def _check_response(self, response):
        if not response.ok:
            print(f'HTTP {response.status_code}: {response.text}')
            response.raise_for_status()

    def _parse(self, response):
        try:
            return response.json()
        except ValueError:
            return response.text

    def _extract_main(self, result):
        if not isinstance(result, dict):
            return result

        resp = result.get('response', result)
        if 'items' not in resp:
            return resp

        exclude = {'links', '$schema'}
        return [{k: v for k, v in i.items() if k not in exclude} for i in resp['items']]

With this, we can connect to the API and generate the same GET query as before to retrieve all deployments. This time, we only provide the endpoint, and not the whole URL.

>>> from ogg_rest_api import OGGRestAPI
>>> client_blog = OGGRestAPI(url="http://vmogg:7809", username="ogg", password="ogg")
>>> client_blog._get("/services/v2/deployments")
[{'name': 'ServiceManager', 'status': 'running'}, {'name': 'ogg_test_01', 'status': 'running'}, {'name': 'ogg_test_02', 'status': 'running'}]

As you can imagine, all GoldenGate API functionalities can be integrated into this class, enhancing GoldenGate management and monitoring. Next time you want to automate your GoldenGate processes, please consider using this REST API!
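As an illustration, here is a minimal sketch of how the two endpoints used earlier in this post could be wrapped as methods. The subclass and method names are my own, hypothetical additions on top of the class above.

from ogg_rest_api import OGGRestAPI

class MyOGGClient(OGGRestAPI):
    """Hypothetical subclass wrapping a couple of documented endpoints as methods."""

    def get_deployments(self):
        # GET /services/v2/deployments lists the deployments and their status
        return self._get("/services/v2/deployments")

    def create_user(self, role, user, password):
        # POST /services/v2/authorizations/{role}/{user} creates a GoldenGate user
        return self._post(f"/services/v2/authorizations/{role}/{user}",
                          data={"credential": password})

# Example usage (adapt URL and credentials to your environment)
client = MyOGGClient(url="http://vmogg:7809", username="ogg", password="ogg")
print(client.get_deployments())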

REST API calls to a secured GoldenGate deployment

If your GoldenGate deployment is secured, you can still use this Python class; the requests module will handle it for you. The examples below cover a trusted CA, a self-signed certificate with its root CA provided, and verification disabled:

# Checking CA automatically (Trusted CA)
>>> client_blog = OGGRestAPI(url="https://vmogg:7809", username="ogg", password="ogg")

# Providing RootCA (self-signed certificate)
>>> client_blog = OGGRestAPI(url="https://vmogg:7809", username="ogg", password="ogg", ca_cert="/path/to/RootCA_cert.pem", verify_ssl=True)

# Disabling verification
>>> client_blog = OGGRestAPI(url="https://vmogg:7809", username="ogg", password="ogg", verify_ssl=False)

The article GoldenGate REST API basics with Python first appeared on the dbi Blog.

Thoughts on the M-Files GKO 2026 in Lisbon

Yann Neuhaus - Wed, 2026-01-28 11:53

My colleague, David Hueber, and I had the pleasure of attending the M-Files Global Kick-Off 2026 (GKO 2026) in Lisbon, Portugal, this week. The weather was unfortunate, but that’s not what matters.

From January 26 to January 29, this event brought together M-Files employees and partners from around the world for the first time. They aligned their strategy, exchanged views, and motivated each other to help our customers achieve even more with the solution.

Of course, a lot of the information is still confidential, so I won’t reveal any big secrets in the article. However, there are always things to share.

A Global Team Coming Together

Now more than ever, partners are the cornerstone of the M-Files strategy. We are not just "executors"; we are an extension of their vision, a real part of the team, and for me that makes a big difference.

From strategic sessions to deep dives into product vision, GKO showcased once again why M‑Files continues to be a leader in intelligent information management.

Key Themes: AI, Metadata,…

It’s no surprise that this year is a logical continuation of the previous ones. Artificial intelligence is at the heart of the solution. Metadata increases the accuracy of AI results, and the recently introduced workspaces provide more context for your data. “Context” is the key word for this year!

A major focus this year is the evolving role of AI in document management, and how M-Files continues to push the boundaries with metadata-driven automation. Momentum is building around:

  • Enhancing information discovery
  • Improving knowledge work productivity
  • Scaling AI-powered document processes
Inspiring Conversations & Roadmap Insights

Throughout the sessions, we were given a sneak preview of M-Files’ strategic direction for 2026. The presentations and discussions particularly resonated with our daily experience in the field, where we see organizations seeking clarity, automation, and structure in an information-overloaded world.

Collaboration with Colleagues

Attending with David made the experience even more memorable. Yep, he’s my boss, so I have to suck up a little.

Together, we engaged with peers from different regions, compared our consulting practice experiences, and aligned ourselves more closely with M-Files’ strategic priorities. Events like GKO demonstrate the importance of community and collaboration in developing effective ECM and information management solutions.

Ready for 2026

GKO 2026 is almost over, and it left us inspired, aligned, and energized to bring even more value to our clients this year. With M-Files continuing to innovate at a rapid pace, we’re excited to translate these insights into impactful ECM solutions across our projects.
Stay tuned, more reflections and deep dives into M-Files and intelligent content management are on the way!

Feel free to ask us any digitalization questions.

The article Thoughts on the M-Files GKO 2026 in Lisbon first appeared on the dbi Blog.

SQL Server 2022 CU23: Database Mail is broken, but your alerts shouldn’t be

Yann Neuhaus - Wed, 2026-01-28 02:35

“One single alerting email is missing, and all your monitoring feels deserted.”

This is the harsh lesson many SQL Server DBAs learned the hard way this week. If your inbox has been suspiciously quiet since your last maintenance window, don’t celebrate just yet: your instance might have simply lost its voice.

CU23: The poisoned gift

The Cumulative Update 23 for SQL Server 2022 (KB5074819), released on January 15 2026, quickly became the “hot” topic on technical forums. The reason? A major regression that purely and simply breaks Database Mail.

Could not load file or assembly 'Microsoft.SqlServer.DatabaseMail.XEvents, Version=17.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified.

This error message is only the tip of the iceberg. Beyond the technical crash, it is the silence of your monitoring that should worry you. The real danger here is that while the mailing engine fails, your SQL Agent jobs don't necessarily report an error. Since the mail items are never even processed, they don't appear in the failed_items or unsent_items views with typical error statuses. For most monitoring configurations, this means you stay completely unaware that your instance has lost its voice. You aren't getting alerts, but everything looks fine on the surface.

Database Mail: The (Too) Quiet Hero

We often tend to downplay the importance of Database Mail, treating it as a minor utility. Yet, for many of us, it is the backbone of our monitoring. From SQL Agent job failure notifications to corruption alerts or disk space thresholds, Database Mail is a critical component. When it fails, your infrastructure visibility evaporates, leaving you flying blind in a rather uncomfortable technical fog.

Rollback or Status Quo?

While waiting for an official fix, the best way to protect your production remains a rollback (guidelines available here). Uninstalling a CU is never an easy task: it implies additional downtime and a fair amount of stress. However, staying in total darkness regarding your servers’ health is a much higher risk. Microsoft has promised a hotfix “soon” but in the meantime, a server that reboots is better than a server that suffers in silence.

The Light at the End of the Tunnel (is in the MSDB)

If a rollback is impossible due to SLAs or internal policies, remember that not all is lost. Even if the emails aren’t being sent, the information itself isn’t volatile. SQL Server continues to conscientiously log everything it wants to tell you inside the msdb system tables.

You can query the following table to keep an eye on the alerts piling up:

SELECT mailitem_id, recipients, subject, body, send_request_date, sent_date
 FROM [msdb].[dbo].[sysmail_mailitems]
 where sent_date is null
 and send_request_date > '2026-01-15 00:00:00.000'

It’s less stylish than a push notification, but it’s your final safety net to ensure your Log Shipping hasn’t flatlined or your backups haven’t devoured the last available Gigabyte.

How to Build a Solid “Home-Made” Quick Fix

If you have an SMTP gateway reachable via PowerShell, you can bridge the gap using the native Send-MailMessage cmdlet. This approach effectively bypasses the broken DatabaseMail.exe by using the PowerShell network stack to ship your alerts.

Testing with Papercut

In a Lab environment, you likely don’t have a full-blown Exchange server. To test this script, I recommend using Papercut. It acts as a local SMTP gateway that catches any outgoing mail and displays it in a UI without actually sending it to the internet. Simply run the Papercut executable, and it will listen on localhost:25.
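Before pointing the recovery script at it, a quick way to confirm that Papercut is catching mail is to push a test message through the local gateway. This is a minimal sketch using Python's standard smtplib (any SMTP client would do, including the Send-MailMessage cmdlet used below), assuming Papercut's default listener on localhost:25; the addresses are placeholders.

import smtplib
from email.message import EmailMessage

# Build a throwaway test message (addresses are placeholders)
msg = EmailMessage()
msg["From"] = "dba-test@lab.local"
msg["To"] = "alerts@lab.local"
msg["Subject"] = "Papercut smoke test"
msg.set_content("If this shows up in the Papercut UI, the local SMTP gateway works.")

# Papercut listens on localhost:25 by default
with smtplib.SMTP("localhost", 25) as smtp:
    smtp.send_message(msg)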

The Recovery Script

For the purpose of this article’s demonstration, the following Central Management Server (CMS) setup is used:

Run the following script as a scheduled task every 15 minutes to replay any messages stuck in the msdb queue:

Import-Module -Name dbatools;

Set-DbatoolsConfig -FullName 'sql.connection.trustcert' -Value $true;

$cmsServer = "SQLAGAD\LAB01"
$cmsGroupName = "LAB_AG"
$smtpServer = "localhost"

try {
    $instances = Get-DbaRegServer -SqlInstance $cmsServer -ErrorAction Stop | 
                 Where-Object group -match $cmsGroupName;

    if (-not $instances) {
        Write-Warning "No instances found in the group '$cmsGroupName' on $cmsServer."
        return
    }

    foreach ($server in $instances) {
        Write-Host "Processing: $($server.Name)" -ForegroundColor Cyan
        
        $query = "SELECT SERVERPROPERTY('ServerName') SQLInstance, * FROM [msdb].[dbo].[sysmail_mailitems] where sent_date is null and send_request_date > DATEADD(minute, -15, GETDATE())"
        
        $blockedMails = Invoke-DbaQuery -SqlInstance $server.Name -Database msdb -Query $query -ErrorAction SilentlyContinue

        if ($blockedMails) {
            foreach ($mail in $blockedMails) {
                Send-MailMessage -SmtpServer $smtpServer `
                                 -From "MSSQLSecurity@lto.com" `
                                 -To $mail.recipients `
                                 -Subject "[CU23-RECOVERY] $($mail.subject)" `
                                 -Body $mail.body `
                                 -Priority High

                Write-Host "Successfully re-sent: $($mail.subject)" -ForegroundColor Green
            }
        }
    }
}
catch {
    Write-Error "Error accessing the CMS: $($_.Exception.Message)"
}

Resulting alerts in the Papercut mailbox:

By leveraging the dbatools module and the native Send-MailMessage function, we can create a “recovery bridge.” The script above is designed to be run as a scheduled task (e.g., every 15 minutes). It scans your entire infrastructure via your CMS, identifies any messages that failed to send in the last 15-minute window, and replays them through PowerShell’s network stack instead of SQL Server’s.

Note on Data Integrity: You will notice that this script intentionally performs no UPDATE or DELETE operations on the msdb tables. We chose to treat the system database as a read-only ‘source of truth’ to avoid any further corruption or inconsistency while the instance is already in an unstable state.
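
To schedule it, a plain schtasks command is enough (the task name and script path below are placeholders; make sure the account running the task can reach the CMS):

schtasks /Create /TN "CU23-DbMail-Bridge" /SC MINUTE /MO 15 /TR "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\Resend-DatabaseMail.ps1"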

Let’s wrap up!

Until Microsoft releases an official fix for this capricious CU23, this script acts as a life support system for your monitoring alerts. It is simple, effective, and most importantly, it prevents the Monday morning nightmare of discovering failed weekend backups that went unnoticed because the notification engine was silent.

So, if your SQL alerts prefer staying cozy in the msdb rather than doing their job, you now have the bridge to get them moving. Set up the scheduled task, run the script, and go grab a coffee: your emails are finally back on track.

Until the hotfix lands, keep your scripts ready!

The article SQL Server 2022 CU23: Database Mail is broken, but your alerts shouldn’t be appeared first on dbi Blog.

Commercial PostgreSQL distributions with TDE (1) Fujitsu Enterprise Postgres (1) Setup

Yann Neuhaus - Tue, 2026-01-27 08:56

While TDE (Transparent Data Encryption) is considered a checklist feature in some parts of the PostgreSQL community, the topic comes up again and again. The same is true with our customers: it doesn’t matter whether it technically makes sense to have TDE or not, some just must have it for reasons outside of their control, mostly legal requirements. As vanilla community PostgreSQL does not provide TDE, the only option is to use one of the commercial distributions of PostgreSQL. There are several out there and we’ll take a look at some of them, but today we’ll start with Fujitsu Enterprise Postgres. This will be a two-part blog series: the first post (this one) describes how to set up this distribution of PostgreSQL, and in the follow-up post we’ll look at what TDE looks like in this distribution and how it can be set up.

Fujitsu provides a 90-day trial version for which you need to register here. Once you’ve done that you should receive an email with a link to download the software (around 1.3GB). The supported operating systems are either RHEL 8 & 9 or SLES 15. If you are on Debian or anything based on Debian, such as Ubuntu, you can already stop here as it is not supported.

As usual, the operating system needs to be prepared, and on RHEL 9 which I am using here this means enabling the code ready builder repository and installing the required packages:

[root@rhel9-latest ~]$ subscription-manager repos --enable codeready-builder-for-rhel-9-x86_64-rpms
[root@rhel9-latest ~]$ dnf install alsa-lib audit-libs bzip2-libs cyrus-sasl-lib gdb pcp-system-tools glibc glibc.i686 iputils libnsl2 libicu libgcc libmemcached-awesome libselinux libstdc++ libtool-ltdl libzstd llvm lz4-libs ncurses-libs net-tools nss-softokn-freebl pam perl-libs protobuf-c python3 rsync sudo sysstat tcl unzip xz-libs zlib perl

For this the system needs to be subscribed to Red Hat, but you might also try Rocky or Alma Linux. Both are clones of RHEL, so this should work as well, and they do not need a subscription.

In the Quick Start Guide an operating system user called “fsepuser” is used, but we prefer to go with the standard “postgres” user, so:

[root@rhel9-latest ~]$ groupadd postgres
[root@rhel9-latest ~]$ useradd -g postgres -m postgres
[root@rhel9-latest ~]$ grep postgres /etc/sudoers
postgres        ALL=(ALL)       NOPASSWD: ALL

We’ll also not use “/database/inst1” as PGDATA but rather our standard:

[root@rhel9-latest ~]$ su - postgres
[postgres@rhel9-latest ~]$ sudo mkdir -p /u02/pgdata
[postgres@rhel9-latest ~]$ sudo chown postgres:postgres /u02/pgdata/
[postgres@rhel9-latest ~]$ exit

As the installation files are provided in ISO format, loop-back mount that:

[root@rhel9-latest ~]$ mkdir /mnt/media
[root@rhel9-latest ~]$ mount -t iso9660 -r -o loop /root/ep-postgresae-linux-x64-1800-1.iso /mnt/media/
[root@rhel9-latest ~]$ ls -l /mnt/media/
total 161
dr-xr-xr-x. 4 root root  2048 Dec  4 03:24 CIR
dr-xr-xr-x. 3 root root  2048 May 18  2023 CLIENT64
dr-xr-xr-x. 4 root root  2048 Dec  4 03:24 COMMON
-r-xr-xr-x. 1 root root  4847 Aug 14 10:49 install.sh
dr-xr-xr-x. 5 root root  2048 Nov  5 08:47 manual
dr-xr-xr-x. 2 root root 24576 Oct 29 17:04 OSS_Licence
dr-xr-xr-x. 3 root root  2048 Jun 13  2016 parser
dr-xr-xr-x. 3 root root  2048 Mar 27  2024 PGBACKREST
dr-xr-xr-x. 3 root root  2048 May 18  2023 PGPOOL2
-r--r--r--. 1 root root 46882 Nov 28 02:39 readme.txt
-r--r--r--. 1 root root 53702 Nov 28 01:21 readme_utf8.txt
dr-xr-xr-x. 2 root root  2048 May 18  2023 sample
dr-xr-xr-x. 3 root root  2048 May 18  2023 SERVER
-r-xr-xr-x. 1 root root 12848 Aug 14 10:50 silent.sh
dr-xr-xr-x. 3 root root  2048 May 18  2023 WEBADMIN

That should be all we need to start the installation (there is a silent mode as well, but it is beyond the scope of this post):

[root@rhel9-latest ~]$ cd /mnt/media/
[root@rhel9-latest media]$ ./install.sh
ERROR: The installation of Uninstall (middleware) ended abnormally.

Installation was ended abnormally.

… and that fails right away. Looking at the log file it becomes clear that “tar” is missing (this is a minimal installation of RHEL 9, which apparently does not come with tar by default):

[root@rhel9-latest media]$ grep tar /var/log/fsep_SERVER64_media_1800_install.log
sub_envcheck.sh start
sub_cir_install.sh start
./CIR/Linux/cirinst.sh: line 562: tar: command not found
sub_envcheck.sh start
sub_cir_install.sh start
./CIR/Linux/cirinst.sh: line 562: tar: command not found

So, once more:

[root@rhel9-latest media]$ dnf install -y tar
[root@rhel9-latest media]$ ./install.sh

The following products can be installed:
1: Fujitsu Enterprise Postgres Advanced Edition (64bit) 18
2: Fujitsu Enterprise Postgres Client (64bit) 18
3: Fujitsu Enterprise Postgres WebAdmin 18
4: Fujitsu Enterprise Postgres Pgpool-II 18
5: Fujitsu Enterprise Postgres pgBackRest 18

Select the product to be installed.
Note: If installing the Server, it is strongly recommended to install WebAdmin.
To select multiple products, separate using commas (,). (Example: 1,2)
[number,all,q](The default value is 1,2,3): all

From the output above we can see that Fujitsu bundles Pgpool-II and pgBackRest with their distribution of PostgreSQL. We’re just going to install “all” and the installation runs smoothly until the end:

Selected product
  Fujitsu Enterprise Postgres Advanced Edition (64bit) 18
  Fujitsu Enterprise Postgres Client (64bit) 18
  Fujitsu Enterprise Postgres WebAdmin 18
  Fujitsu Enterprise Postgres Pgpool-II 18
  Fujitsu Enterprise Postgres pgBackRest 18

Do you want to install the above product?
y: Proceed to the next step
n: Select the product again
q: Quit without installing
[y,n,q](The default value is y): y

==============================================================================

Product to be installed
  Fujitsu Enterprise Postgres Advanced Edition (64bit) 18
    New installation
  Fujitsu Enterprise Postgres Client (64bit) 18
    New installation
  Fujitsu Enterprise Postgres WebAdmin 18
    New installation
  Fujitsu Enterprise Postgres Pgpool-II 18
    New installation
  Fujitsu Enterprise Postgres pgBackRest 18
    New installation

Installation directory information
  Fujitsu Enterprise Postgres Advanced Edition (64bit) 18
    /opt/fsepv18server64
  Fujitsu Enterprise Postgres Client (64bit) 18
    /opt/fsepv18client64
  Fujitsu Enterprise Postgres WebAdmin 18
    /opt/fsepv18webadmin
  Fujitsu Enterprise Postgres Pgpool-II 18
    /opt/fsepv18pgpool-II
  Fujitsu Enterprise Postgres pgBackRest 18
    /opt/fsepv18pgbackrest

Setup information
  WebAdmin setup: Execute
  Web server port number: 27515
  WebAdmin internal port number: 27516

Start installation using the above information?
y: Start the installation
c: Change the information
q: Quit without installing
[y,c,q](The default value is y): y


==============================================================================
Starting installation.

Fujitsu Enterprise Postgres Advanced Edition (64bit) 18 Installation

Installation is complete.

Fujitsu Enterprise Postgres Client (64bit) 18 Installation

Installation is complete.

Fujitsu Enterprise Postgres WebAdmin 18 Installation

Installation is complete.

Fujitsu Enterprise Postgres Pgpool-II 18 Installation

Installation is complete.

Fujitsu Enterprise Postgres pgBackRest 18 Installation

Installation is complete.

Starting setup.

Sets up WebAdmin.

Setup is complete.

Installed successfully.

All the files have been installed under “/opt”, grouped by component:

[root@rhel9-latest ~]$ ls -l /opt/
total 4
dr-xr-x---.  5 root root   59 Jan 26 09:41 FJSVcir
drwxr-xr-x.  3 root root 4096 Jan 26 09:42 FJSVqstl
drwxr-xr-x. 14 root root  155 Nov 21 08:57 fsepv18client64
drwxr-xr-x.  7 root root   63 Jan 26 09:42 fsepv18pgbackrest
drwxr-xr-x.  9 root root   91 Jan 26 09:42 fsepv18pgpool-II
drwxr-xr-x. 15 root root  162 Oct 27 05:57 fsepv18server64
drwxr-xr-x. 13 root root  137 Jan 26 09:42 fsepv18webadmin

What follows below is the installation of our DMK; just ignore it (or ask in the comments if you’re interested). It simply makes handling the environment a bit easier:

[postgres@rhel9-latest ~]$ sudo mkdir -p /opt/local
[postgres@rhel9-latest ~]$ sudo chown postgres:postgres /opt/local/
[postgres@rhel9-latest ~]$ cd /opt/local/
[postgres@rhel9-latest local]$ unzip /home/postgres/dmk_postgres-2.4.1.zip
[postgres@rhel9-latest local]$ cat /etc/pgtab
fe18:/opt/fsepv18server64/bin/:dummy:9999:D
[postgres@rhel9-latest local]$ sudo chown postgres:postgres /etc/pgtab
[postgres@rhel9-latest local]$ cp dmk/etc/dmk.conf.unix dmk/etc/dmk.conf
[postgres@rhel9-latest local]$ dmk/bin/dmk.bash
[postgres@rhel9-latest local]$ cat dmk/templates/profile/dmk.postgres.profile >> ~/.bash_profile
[postgres@rhel9-latest local]$ exit
logout
[root@rhel9-latest ~]$ su - postgres
Last login: Mon Jan 26 09:51:49 CET 2026 from 192.168.122.1 on pts/1


 PostgreSQL Clusters up and running on this host:
 ----------------------------------------------------------------------------------------------

 PostgreSQL Clusters NOT running on this host:
 ----------------------------------------------------------------------------------------------

With that in place we do not need to worry about PATH and other environment settings, as they are set up correctly, so we can directly check which version of PostgreSQL we’re dealing with:

postgres@rhel9-latest:/home/postgres/ [fe18] which initdb
/opt/fsepv18server64/bin/initdb
postgres@rhel9-latest:/home/postgres/ [fe18] initdb --version
initdb (PostgreSQL) 18.0

Except for the installation directory there does not seem to be a special Fujitsu branding and we’re on PostgreSQL 18.0. The preferred method of creating a new PostgreSQL instance seems to be to use WebAdmin which was installed and started during the installation:

We could go this way, but as we want to know what happens in the background we’re going to use plain initdb. Looking at the switches of this version of initdb, it is mostly the same as plain PostgreSQL except for the “coordinator”, “coord_conninfo”, and “datanode” switches:

postgres@rhel9-latest:/home/postgres/ [fe18] initdb --help
initdb initializes a PostgreSQL database cluster.

Usage:
  initdb [OPTION]... [DATADIR]

Options:
  -A, --auth=METHOD         default authentication method for local connections
      --auth-host=METHOD    default authentication method for local TCP/IP connections
      --auth-local=METHOD   default authentication method for local-socket connections
      --coordinator         create an instance of the coordinator       <- XXXXXXXXXXXXXXXXXXXXXXXXX
      --coord_conninfo      connection parameters to the coordinator    <- XXXXXXXXXXXXXXXXXXXXXXXXX
      --datanode            create an instance of the datanode          <- XXXXXXXXXXXXXXXXXXXXXXXXX
 [-D, --pgdata=]DATADIR     location for this database cluster
  -E, --encoding=ENCODING   set default encoding for new databases
  -g, --allow-group-access  allow group read/execute on data directory
      --icu-locale=LOCALE   set ICU locale ID for new databases
      --icu-rules=RULES     set additional ICU collation rules for new databases
  -k, --data-checksums      use data page checksums
      --locale=LOCALE       set default locale for new databases
      --lc-collate=, --lc-ctype=, --lc-messages=LOCALE
      --lc-monetary=, --lc-numeric=, --lc-time=LOCALE
                            set default locale in the respective category for
                            new databases (default taken from environment)
      --no-locale           equivalent to --locale=C
      --builtin-locale=LOCALE
                            set builtin locale name for new databases
      --locale-provider={builtin|libc|icu}
                            set default locale provider for new databases
      --no-data-checksums   do not use data page checksums
      --pwfile=FILE         read password for the new superuser from file
  -T, --text-search-config=CFG
                            default text search configuration
  -U, --username=NAME       database superuser name
  -W, --pwprompt            prompt for a password for the new superuser
  -X, --waldir=WALDIR       location for the write-ahead log directory
      --wal-segsize=SIZE    size of WAL segments, in megabytes

Less commonly used options:
  -c, --set NAME=VALUE      override default setting for server parameter
  -d, --debug               generate lots of debugging output
      --discard-caches      set debug_discard_caches=1
  -L DIRECTORY              where to find the input files
  -n, --no-clean            do not clean up after errors
  -N, --no-sync             do not wait for changes to be written safely to disk
      --no-sync-data-files  do not sync files within database directories
      --no-instructions     do not print instructions for next steps
  -s, --show                show internal settings, then exit
      --sync-method=METHOD  set method for syncing files to disk
  -S, --sync-only           only sync database files to disk, then exit

Other options:
  -V, --version             output version information, then exit
  -?, --help                show this help, then exit

If the data directory is not specified, the environment variable PGDATA
is used.

Report bugs to <pgsql-bugs@lists.postgresql.org>.
PostgreSQL home page: <https://www.postgresql.org/>

(Not sure if bugs for this version of initdb really should go to the official mailing lists, but this is another topic).

Initializing a new instance is not different from what we know from vanilla PostgreSQL:

postgres@rhel9-latest:/home/postgres/ [fe18] initdb --pgdata=/u02/pgdata/fe18/PG1 --lc-collate="C" --lc-ctype="C" --encoding=UTF8
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with this locale configuration:
  locale provider:   libc
  LC_COLLATE:  C
  LC_CTYPE:    C
  LC_MESSAGES: en_US.UTF-8
  LC_MONETARY: en_US.UTF-8
  LC_NUMERIC:  en_US.UTF-8
  LC_TIME:     en_US.UTF-8
The default text search configuration will be set to "english".

Data page checksums are enabled.

creating directory /u02/pgdata/fe18/PG1 ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default "max_connections" ... 100
selecting default "shared_buffers" ... 128MB
selecting default time zone ... Europe/Berlin
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

initdb: warning: enabling "trust" authentication for local connections
initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /u02/pgdata/fe18/PG1 -l logfile start

As we want the instance to start automatically when the operating system boots, let’s use the systemd service template provided by Fujitsu to integrate it with systemd:

## Template
postgres@rhel9-latest:/home/postgres/ [fe18] cat /opt/fsepv18server64/share/fsepsvoi.service.sample
# Copyright FUJITSU LIMITED 2025
[Unit]
Description=FUJITSU Enterprise Postgres <inst1>
Requires=network-online.target
After=network.target network-online.target

[Service]
ExecStart=/bin/bash -c '/opt/fsepv<x>server64/bin/pgx_symstd start /opt/fsepv<x>server64 /database/inst1'
ExecStop=/bin/bash -c '/opt/fsepv<x>server64/bin/pgx_symstd stop /opt/fsepv<x>server64 /database/inst1'
ExecReload=/bin/bash -c '/opt/fsepv<x>server64/bin/pgx_symstd reload /opt/fsepv<x>server64 /database/inst1'
Type=forking
User=fsepuser
Group=fsepuser

[Install]
WantedBy=multi-user.target

## Adapted
postgres@rhel9-latest:/home/postgres/ [fe18] sudo cat /etc/systemd/system/fe-postgres-1.service
# Copyright FUJITSU LIMITED 2025
[Unit]
Description=FUJITSU Enterprise Postgres PG1
Requires=network-online.target
After=network.target network-online.target

[Service]
Environment="PATH=/opt/fsepv18server64/bin/:$PATH"
ExecStart=/bin/bash -c '/opt/fsepv18server64/bin/pgx_symstd start /opt/fsepv18server64 /u02/pgdata/fe18/PG1/'
ExecStop=/bin/bash -c '/opt/fsepv18server64/bin/pgx_symstd stop /opt/fsepv18server64 /u02/pgdata/fe18/PG1/'
ExecReload=/bin/bash -c '/opt/fsepv18server64/bin/pgx_symstd reload /opt/fsepv18server64 /u02/pgdata/fe18/PG1/'
Type=forking
User=postgres
Group=postgres

[Install]
WantedBy=multi-user.target

“pgx_symstd” is a simple wrapper around pg_ctl; we are not sure why it is needed. Enable and start the service:

postgres@rhel9-latest:/home/postgres/ [fe18] sudo systemctl enable fe-postgres-1
Created symlink /etc/systemd/system/multi-user.target.wants/fe-postgres-1.service → /etc/systemd/system/fe-postgres-1.service.
11:22:11 postgres@rhel9-latest:/home/postgres/ [fe18] sudo systemctl daemon-reload
11:22:13 postgres@rhel9-latest:/home/postgres/ [fe18] sudo systemctl start fe-postgres-1
11:22:20 postgres@rhel9-latest:/home/postgres/ [fe18] sudo systemctl status fe-postgres-1
● fe-postgres-1.service - FUJITSU Enterprise Postgres PG1
     Loaded: loaded (/etc/systemd/system/fe-postgres-1.service; enabled; preset: disabled)
     Active: active (running) since Mon 2026-01-26 11:22:16 CET; 8s ago
    Process: 33363 ExecStart=/bin/bash -c /opt/fsepv18server64/bin/pgx_symstd start /opt/fsepv18server64 /u02/pgdata/fe18/PG1/ (code=exited, status=0/SUCCESS)
   Main PID: 33366 (postgres)
      Tasks: 10 (limit: 7834)
     Memory: 31.3M (peak: 33.0M)
        CPU: 48ms
     CGroup: /system.slice/fe-postgres-1.service
             ├─33366 /opt/fsepv18server64/bin/postgres -D /u02/pgdata/fe18/PG1
             ├─33367 "postgres: io worker 0"
             ├─33368 "postgres: io worker 1"
             ├─33369 "postgres: io worker 2"
             ├─33370 "postgres: checkpointer "
             ├─33371 "postgres: background writer "
             ├─33373 "postgres: walwriter "
             ├─33374 "postgres: autovacuum launcher "
             ├─33375 "postgres: user profile status writer "
             └─33376 "postgres: logical replication launcher "

Jan 26 11:22:16 rhel9-latest bash[33366]: 2026-01-26 10:22:16.003 GMT [33366] WARNING:  The license will expire in 90 days.
Jan 26 11:22:16 rhel9-latest bash[33366]: 2026-01-26 11:22:16.014 CET [33366] LOG:  starting PostgreSQL 18.0 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), 64-bit
Jan 26 11:22:16 rhel9-latest bash[33366]: 2026-01-26 11:22:16.015 CET [33366] LOG:  listening on IPv6 address "::1", port 27500
Jan 26 11:22:16 rhel9-latest bash[33366]: 2026-01-26 11:22:16.015 CET [33366] LOG:  listening on IPv4 address "127.0.0.1", port 27500
Jan 26 11:22:16 rhel9-latest bash[33366]: 2026-01-26 11:22:16.019 CET [33366] LOG:  listening on Unix socket "/tmp/.s.PGSQL.27500"

Notice the license warning in the output. Another thing to note is the “user profile status writer” process as this does not exist in community PostgreSQL. Looking at the version of PostgreSQL confirms that we’re on 18.0:

postgres@rhel9-latest:/home/postgres/ [fe18] psql
psql (18.0)
Type "help" for help.

postgres=$ select version();
                                                 version
----------------------------------------------------------------------------------------------------------
 PostgreSQL 18.0 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), 64-bit
(1 row)

postgres=$
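
Out of curiosity about that extra “user profile status writer” process, the backend types known to the server can be listed with a plain pg_stat_activity query (standard PostgreSQL; whether Fujitsu-specific auxiliary processes show up here is not guaranteed):

SELECT backend_type, count(*)
  FROM pg_stat_activity
 GROUP BY backend_type
 ORDER BY backend_type;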

There is also a file provided by Fujitsu which details the version installed:

postgres@rhel9-latest:/home/postgres/ [fe18] cat /opt/fsepv18server64/etc/product.inf
PRODUCT="Fujitsu Enterprise Postgres Advanced Edition"
MAJOR=18
MINOR=0
MICRO=0
VERSION1="18"
VERSION2=1800
RELEASE=0
EDITION=AE
EXEMODE=64
TYPE=SERVER
AREA=COMMON
OSTYPE=Linux
PRODUCTTYPE=FSEP
SOFTWAREID=7A705C507E0E11EE8008303035303536
FJQSSNAME="FSEP1864"
PKG_MC=ON
FORMALNAME="Fujitsu Enterprise Postgres"

Finally, let’s import the new instance into WebAdmin:


Importing works fine, but there are not a lot of parameters which can be changed in WebAdmin; maybe this is on purpose.
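
If a parameter you need is not exposed in WebAdmin, the standard PostgreSQL way of changing it of course still works; a minimal sketch, using work_mem only as an arbitrary example:

ALTER SYSTEM SET work_mem = '32MB';
SELECT pg_reload_conf();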

Final thoughts about the installation and initial setup of the instance:

  • Bundling several products makes sense, as you anyway need a tool for backup/restore and connection pooling. I am not judging the selection of the tools.
  • Providing a graphical user interface makes sense for users not very familiar with PostgreSQL but I guess more experienced users will not use it.
  • There is no Fujitsu branding (at least not in the binaries).
  • The installation process is easy and straight forward.
  • Not sure about the pgx_ wrappers; at least for the systemd integration they do not seem to be necessary (there are several more of them we have not looked at yet).
  • Limiting the supported operating systems to Red Hat and SUSE excludes the complete Debian world. I guess this is because Fujitsu is targeting larger customers which are anyway on either Red Hat or SUSE.

In the next post we’ll look at what TDE looks like in Fujitsu Enterprise Postgres.

The article Commercial PostgreSQL distributions with TDE (1) Fujitsu Enterprise Postgres (1) Setup appeared first on dbi Blog.

Comparing Data When Migrating Databases With GoldenGate Veridata: Installation Guide

Yann Neuhaus - Mon, 2026-01-26 01:44

More often than not, migrating a database is not so much about moving the data as about ensuring that nothing was lost in the process. In all migration projects, a DBA will have to answer the following question: “Is my target database really identical to the source?”

This is what Oracle GoldenGate Veridata was made for, and before diving into more specific use cases of Veridata in upcoming blog posts, we will see how to install and set up GoldenGate Veridata 23c (not called 23ai, for some reason…), while answering the most common questions.

Can GoldenGate Veridata be installed on the same host as a GoldenGate hub?

This sounds like a very good idea at first sight: you have a GoldenGate hub set up for all your GoldenGate replications, with access to all the databases at both ends of those replications.

However, there is no real reason to have both GoldenGate and Veridata on the same host, since they have two very different roles. In addition to this, Oracle strongly recommends having a host dedicated to Veridata operations.

Prerequisites for GoldenGate Veridata installation

Certification Matrix

To make sure you respect the requirements for a Veridata setup, check the official Oracle certification matrix. For Veridata 23c, Oracle Linux and Red Hat Enterprise Linux 8 and 9 are supported.

Download the latest patched complete installation

In recent years, Oracle has provided pre-patched installers, which saves a lot of time during setup. For Veridata, I strongly recommend installing from the latest patch instead of patching later. Indeed, a new feature was added to the configuration assistant, so please download the installer from patch 38437002 (released in November 2025) before continuing.

For the installation, we will use the standard oracle user and oinstall group. Unzip the binaries to a stage directory.

[root@vmvdt~] mkdir -p /u01/app/oracle/product
[root@vmvdt ~] mkdir -p /u01/stage
[root@vmvdt ~] unzip -oq p38437002_231000_Linux-x86-64.zip -d /u01/stage
[root@vmvdt ~] chown oracle:oinstall -R /u01

Contrary to usual Oracle installations, you will also need to download the JDK. Oracle recommends downloading JDK 17 from the archive, and I’ve never tried using a newer version, so we will stay with this one.

[oracle@vmvdt ~] cd /u01/app/oracle/product
[oracle@vmvdt ~] tar zxvf /u01/stage/jdk-17.0.17_linux-x64_bin.tar.gz
Installing GoldenGate Veridata with the OUI (graphic installation)

Installing binaries

With the oracle user, run the runInstaller to install Veridata binaries.

[oracle@vmvdt ~] /u01/stage/fbo_oggvdt_linux_services_shiphome/Disk1/runInstaller

On the first panel, you can decide whether to install only Veridata Server, only Veridata Agent or both. We’ll choose the last option (both) here.

You can then pick whether to install a new MySQL database or use an existing one. As a reminder, only MySQL is supported for storing metadata, so you cannot opt for an Oracle database here. To keep things simple, let’s decide to install a new MySQL database. The installation will be done by the configuration assistant in a later process.

After entering the software location (/u01/app/oracle/product/vdt23 in this example), you can click on Next for all the following steps, since no action is required. If it’s your first Oracle installation on the server, you must give the inventory directory location. You can keep the default /u01/oraInventory if you follow OFA.

The binaries are installed, and you can now run the Configuration Assistant.

Running the Configuration Assistant

After installing the binaries, you can run the Configuration Assistant. Make sure JAVA_HOME is declared, and add the binaries to the PATH variable.

[oracle@vmvdt ~]$ export JAVA_HOME=/u01/app/oracle/product/jdk-17.0.17
[oracle@vmvdt ~]$ export PATH=$JAVA_HOME/bin:$PATH
[oracle@vmvdt ~]$ /u01/app/oracle/product/vdt23/bin/vdtca.sh

First, give a root password and a veridata password. These will be the passwords of the root and veridata users for the MySQL instance, not for the Web UI!

For now, we’ll also keep the default recommended memory parameters:

After this, choose the Web UI administrator account.

Then, decide how to secure your installation. We will keep things simple here by using self-signed certificates, but you should not use them in production, of course. If you want a port that is not the default 8831, now is the time to specify it.

A window should pop up, asking you to run a script.

As root, run the MySQL installation script located at $VERIDATA_HOME/script/vdtca/install_configure_mysql.sh, and then click OK. The rest of the installation will follow. The output of the MySQL installation script is shown below.

[root@vmvdt oracle]# /u01/app/oracle/product/vdt23/script/vdtca/install_configure_mysql.sh
[INFO] Searching in '/u01/app/oracle/product/vdt23/script/vdtca/../..' for a directory matching 'mysql-commercial*'...
[INFO] Found a matching directory: '/u01/app/oracle/product/vdt23/script/vdtca/../../mysql-commercial-8.0.42-linux-glibc2.17-x86_64-minimal'
2025-12-21T09:29:57.376559Z 0 [System] [MY-013169] [Server] /u01/app/oracle/product/vdt23/mysql-commercial-8.0.42-linux-glibc2.17-x86_64-minimal/bin/mysqld (mysqld 8.0.42-commercial) initializing of server in progress as process 50020
2025-12-21T09:29:57.383296Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2025-12-21T09:29:58.173524Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2025-12-21T09:29:59.799156Z 6 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
[INFO] The Configuration Assistant will set the root user password as what you entered in the next stage of veridata repository creation.
[INFO] MySQL has been installed and started.
[INFO] Please return to the Configuration Assistant and click "OK" to continue.

After this, the installation is complete, and the last window displays the URL for the GoldenGate Veridata Web UI:

You have successfully installed GoldenGate Veridata, and can now connect to the Web UI !

Installing GoldenGate Veridata with the CLI (silent installation)

Installing binaries

To perform a silent installation, let’s create a vdt.rsp file. As mentioned in the graphic installation chapter, you must decide what to install, and whether to build a new MySQL repository. This is what the file should look like for a Server/Agent setup with a new repository.

[oracle@vmvdt ~]$ cat vdt.rsp
oracle.install.responseFileVersion=/oracle/install/rspfmt_ogginstall_response_schema_v23_1_0
INSTALL_OPTION=SERVERAGENT
IS_NEWREPO=true
SOFTWARE_LOCATION=/u01/app/oracle/product/vdt23
INVENTORY_LOCATION=/u01/oraInventory
UNIX_GROUP_NAME=oinstall

Specify INSTALL_OPTION=SERVER for a server-only setup, and AGENT for an agent-only setup.

Then, run the installer with the -silent and -responseFile options:

[oracle@vmvdt stage]$ /u01/stage/fbo_oggvdt_linux_services_shiphome/Disk1/runInstaller -silent -responseFile /home/oracle/vdt.rsp
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 17959 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4095 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2025-12-21_10-01-20AM. Please wait ...
You can find the log of this install session at:
 /tmp/OraInstall2025-12-21_10-01-20AM/installActions2025-12-21_10-01-20AM.log
The installation of Oracle Veridata Services was successful.
Please check '/u01/oraInventory/logs/silentInstall2025-12-21_10-01-20AM.log' for more details.
Successfully Setup Software.
The log of this install session can be found at:
 /u01/oraInventory/logs/installActions2025-12-21_10-01-20AM.log
Running the Configuration Assistant

After this, prepare a vdtca.rsp file with the following content (full file in the appendix below):

oracle.install.responseFileVersion=/oracle/install/rspfmt_oggca_response_schema_v23_1_0
# SECTION A - GENERAL
DB_TYPE=MYSQL
ORACLE_USER=
ORACLE_PASS=
WALLET_LOC=
SERVICE_NAME=
IS_NEW_MYSQL=true
MYSQL_INSTALL_USER_TYPE=sudo
EXISTING_MYSQL_HOST=
EXISTING_MYSQL_PORT=
EXISTING_MYSQL_USERNAME=
EXISTING_MYSQL_PASSWORD=
NEW_MYSQL_ROOT_USERNAME=root
NEW_MYSQL_ROOT_PASSWORD=root_strong_password
NEW_MYSQL_VERIDATA_USERNAME=veridata
NEW_MYSQL_VERIDATA_PASSWORD=veridata_strong_password
SCHEMA_PREFIX=vdt
MIN_HEAP_SIZE=3g
MAX_HEAP_SIZE=10g
ADMINISTRATOR_USER=veridata
ADMINISTRATOR_PASSWORD=veridata_strong_password
STRONG_PWD_POLICY_ENABLED=true
IS_SSL_ENABLED=true
IS_SELF_SIGNED_CERTIFICATE=true
IS_PEM_FILES=false
CERTIFICATE_LOCATION=
PRIVATE_KEY_LOCATION=
CA_CERTIFICATE_LOCATION=
PRIVATE_KEY_PASSWORD=
VERIDATA_PORT=8831

Then run the vdtca.sh script from the binary directory. Don’t forget to set JAVA_HOME and add the binaries to the PATH!

[oracle@vmvdt ~]$ export JAVA_HOME=/u01/app/oracle/product/jdk-17.0.17
[oracle@vmvdt ~]$ export PATH=$JAVA_HOME/bin:$PATH
[oracle@vmvdt ~]$ /u01/app/oracle/product/vdt23/bin/vdtca.sh -silent -responseFile /home/oracle/vdtca.rsp
Accessing GoldenGate Veridata Web UI

Before connecting to the URL, make sure the port is open (the commands below are just examples):

[root@vmvdt ~]# firewall-cmd --list-ports

[root@vmvdt ~]# firewall-cmd --add-port 8831/tcp --permanent
success
[root@vmvdt ~]# firewall-cmd --reload
success
[root@vmvdt ~]# firewall-cmd --list-ports
8831/tcp

Then, you can open the following URL, depending on your configuration: https://vmvdt.com:8831/veridata

Log in with the username and password given for the Web UI during the installation process (not the ones from the MySQL installation!).

You now have access to the Web UI!

Veridata Agent Installation and Configuration

Since we opted for the SERVERAGENT option when setting up Veridata, we don’t need to install the agent separately. If you only installed Veridata server in the first place, run the OUI or the silent installation once more where you want your agent to be, with the AGENT option.

Run agent_config.sh to configure the agent. Warning: you must select a location that is outside the Veridata home! I will choose /u01/app/oracle/product/vdt_agent1.

[oracle@vmvdt ~]$ export JAVA_HOME=/u01/app/oracle/product/jdk-17.0.17
[oracle@vmvdt ~]$ /u01/app/oracle/product/vdt23/agent/agent_config.sh /u01/app/oracle/product/vdt_agent1
Successfully deployed the agent.

Then configure the agent by copying the template properties file:

[oracle@vmvdt ~]$ cd /u01/app/oracle/product/vdt_agent1
[oracle@vmvdt vdt_agent1]$ cp -p agent.properties.sample agent.properties
[oracle@vmvdt vdt_agent1]$ vim agent.properties

Specify the following parameters in the agent.properties file (a complete example is shown right after the list):

  • server.port=8832 : Listening port for the agent.
  • database.url=jdbc:oracle:thin:@hostname:1521:PDB1 : Connection string for the database. Examples for other databases are given in the sample file.
  • server.jdbcDriver=ojdbc11-23.9.0.25.07.jar : for an Oracle database, matching the driver shipped in the drivers directory listed below.
  • server.driversLocation=/u01/app/oracle/product/vdt23/agent/drivers : points to the agent/drivers directory in your Veridata home directory.
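
Putting those parameters together, a minimal agent.properties could look like this (host name, port and service name are placeholders; everything else from the sample file is left at its default):

server.port=8832
database.url=jdbc:oracle:thin:@hostname:1521:PDB1
server.jdbcDriver=ojdbc11-23.9.0.25.07.jar
server.driversLocation=/u01/app/oracle/product/vdt23/agent/drivers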

The full list of drivers available is shown below. For other drivers, see the documentation.

[oracle@vmvdt ~]$ ll /u01/app/oracle/product/vdt23/agent/drivers/
total 44584
-rw-------. 1 oracle oinstall  2487576 Nov  6 03:10 mysql-connector-j-8.3.0.jar
-rw-------. 1 oracle oinstall  7493107 Nov  6 03:10 ojdbc11-23.9.0.25.07.jar
-rw-------. 1 oracle oinstall  1098916 Nov  6 03:10 postgresql-42.7.7.jar
-rw-------. 1 oracle oinstall 19019852 Nov  6 03:10 vddb2-5.1.4.jar
-rw-------. 1 oracle oinstall 13643019 Nov  6 03:10 vdsqlserver-6.0.0.jar
-rw-------. 1 oracle oinstall  1899742 Nov  6 03:10 vdsybase-5.1.4.jar

You can finally start the agent:

[oracle@vmvdt vdt_agent1]$ ./agent.sh start agent.properties

For some reason, Oracle didn’t bother with a log message telling you the agent started successfully. To be sure, you can check the java processes.

[oracle@vmvdt ~]$ ps -ef | grep java | grep agent
oracle     56408       1  0 12:47 pts/0    00:00:00 /u01/app/oracle/product/jdk-17.0.17/bin/java -Djava.util.logging.config.class=oracle.core.ojdl.logging.LoggingConfiguration -Doracle.core.ojdl.logging.config.file=/u01/app/oracle/product/vdt_agent1/config/odl.xml -Dhome=/u01/app/oracle/product/vdt23/agent -DagentHome=/u01/app/oracle/product/vdt_agent1 -XX:+UseParallelGC -Xms1024M -Dagent-manifest.jar=/u01/app/oracle/product/vdt23/agent/agent-manifest.jar -jar /u01/app/oracle/product/vdt23/agent/JavaAgent.jar agent.properties

You have successfully installed and configured the GoldenGate Veridata server and agent!

Appendix: [ERROR] [MY-013276] [Server] Failed to set datadir to when running install_configure_mysql.sh as root

If you stumble on this error when running the install_configure_mysql.sh script as root, it is most probably related to the permissions on the directories leading to the Veridata installation directory. Make sure they all have the x (execute) permission; otherwise, the vdtrepouser won’t be able to access them.

And remember, you shouldn’t install Veridata on a host where other Oracle processes are running!

Appendix: vdtca.rsp example

Here is an example of a response file for the configuration assistant. The full file with Oracle annotations is included at the end:

oracle.install.responseFileVersion=/oracle/install/rspfmt_oggca_response_schema_v23_1_0
# SECTION A - GENERAL
DB_TYPE=MYSQL
ORACLE_USER=
ORACLE_PASS=
WALLET_LOC=
SERVICE_NAME=
IS_NEW_MYSQL=true
MYSQL_INSTALL_USER_TYPE=sudo
EXISTING_MYSQL_HOST=
EXISTING_MYSQL_PORT=
EXISTING_MYSQL_USERNAME=
EXISTING_MYSQL_PASSWORD=
NEW_MYSQL_ROOT_USERNAME=root
NEW_MYSQL_ROOT_PASSWORD=root_strong_password
NEW_MYSQL_VERIDATA_USERNAME=veridata
NEW_MYSQL_VERIDATA_PASSWORD=veridata_strong_password
SCHEMA_PREFIX=vdt
MIN_HEAP_SIZE=3g
MAX_HEAP_SIZE=10g
ADMINISTRATOR_USER=veridata
ADMINISTRATOR_PASSWORD=veridata_strong_password
STRONG_PWD_POLICY_ENABLED=true
IS_SSL_ENABLED=true
IS_SELF_SIGNED_CERTIFICATE=true
IS_PEM_FILES=false
CERTIFICATE_LOCATION=
PRIVATE_KEY_LOCATION=
CA_CERTIFICATE_LOCATION=
PRIVATE_KEY_PASSWORD=
VERIDATA_PORT=8831

And the full file:

################################################################################
## Copyright(c) Oracle Corporation 2016, 2022. All rights reserved.           ##
##                                                                            ##
## Specify values for the variables listed below to customize your            ##
## installation.                                                              ##
##                                                                            ##
## Each variable is associated with a comment. The comments can help to       ##
## populate the variables with the appropriate values.                        ##
##                                                                            ##
## IMPORTANT NOTE: This file should be secured to have read permission only   ##
## by the Oracle user or an administrator who owns this configuration to      ##
## protect any sensitive input values.                                        ##
##                                                                            ##
################################################################################

#-------------------------------------------------------------------------------
# Do not change the following system generated value.
#-------------------------------------------------------------------------------
oracle.install.responseFileVersion=/oracle/install/rspfmt_oggca_response_schema_v23_1_0


################################################################################
##                                                                            ##
## Oracle GoldenGate Veridata deployment configuration options and details    ##
##                                                                            ##
################################################################################

#-------------------------------------------------------------------------------
##                                                                            ##
## Instructions to fill out this response file                                ##
## -------------------------------------------                                ##
## Specify an option true or false                                            ##
## Specify true to use the Build-In MySql Repository Database                 ##                                            ##
## Specify false to use an existing MySql database for Repository             ##
##                                                                            ##
##                                                                            ##
#-------------------------------------------------------------------------------

DB_TYPE=MYSQL

#------------------------------------------------------------------------------------------
#
#    This section needs to be filled only if the Parameter DB_TYPE is set to ORACLE
#
#------------------------------------------------------------------------------------------

ORACLE_USER=

ORACLE_PASS=

WALLET_LOC=

SERVICE_NAME=


#--------------------------------------------------------------------------------------
#
#  This section needs to be filled only if the Parameter IS_NEW_MYSQL is set to false                                             #
#
#--------------------------------------------------------------------------------------


IS_NEW_MYSQL=true

MYSQL_INSTALL_USER_TYPE=sudo

EXISTING_MYSQL_HOST=

EXISTING_MYSQL_PORT=

EXISTING_MYSQL_USERNAME=

EXISTING_MYSQL_PASSWORD=

#------------------------------------------------------------------------------------------
#
#    This section needs to be filled only if the Parameter IS_NEW_MYSQL is set to true
#
#------------------------------------------------------------------------------------------

NEW_MYSQL_ROOT_USERNAME=root

NEW_MYSQL_ROOT_PASSWORD=root_strong_password

NEW_MYSQL_VERIDATA_USERNAME=veridata

NEW_MYSQL_VERIDATA_PASSWORD=veridata_strong_password

#------------------------------------------------------------------------------------------
#
#      Specify a prefix for the schema which will be created in the repository database
#
#------------------------------------------------------------------------------------------

SCHEMA_PREFIX=vdt

#--------------------------------------------------------------------------------
#       Specify the Minimum and Maximum Heap Size
#--------------------------------------------------------------------------------

MIN_HEAP_SIZE=3g

MAX_HEAP_SIZE=10g

#-------------------------------------------------------------------------------
# Specify if the admin user should enforce a strong password policy.
#-------------------------------------------------------------------------------

ADMINISTRATOR_USER=veridata

ADMINISTRATOR_PASSWORD=veridata_strong_password

#-------------------------------------------------------------------------------
# Specify if the admin user should enforce a strong password policy.
# Specify true to enable strong password policy management.
#-------------------------------------------------------------------------------
STRONG_PWD_POLICY_ENABLED=true

#-------------------------------------------------------------------------------------------
# Specify an option true or false
# Specify true to enable SSL
# Specify false to disable SSL
#-------------------------------------------------------------------------------------------

IS_SSL_ENABLED=true

#-------------------------------------------------------------------------------------------
# This section needs to be filled only if the parameter IS_SSL_ENABLED is set to true
# Specify an option true or false
#-------------------------------------------------------------------------------------------

IS_SELF_SIGNED_CERTIFICATE=true

IS_PEM_FILES=false

#-------------------------------------------------------------------------------------------
# Specify the certificate location
#-------------------------------------------------------------------------------------------

CERTIFICATE_LOCATION=

PRIVATE_KEY_LOCATION=

CA_CERTIFICATE_LOCATION=

PRIVATE_KEY_PASSWORD=

#-------------------------------------------------------------------------------------------
# The Default Port used by Veridata is 8830. You can also provide your own Custom Port.
# If SSL is enabled, the Defualt Port is 8831. You can also provide your own Custom Port
#-------------------------------------------------------------------------------------------

VERIDATA_PORT=8831

The article Comparing Data When Migrating Databases With GoldenGate Veridata: Installation Guide appeared first on dbi Blog.

GoldenGate 26ai: Installation and New Administration Features

Yann Neuhaus - Thu, 2026-01-22 11:33

Earlier this week, Oracle Database 26ai was released, but GoldenGate 26ai is also available now! New features were added, but before diving into some of them in an upcoming blog post, let’s see what changed from a GoldenGate administrator’s perspective.

Installation, setup and patching

As you may know, GoldenGate 26ai is actually just a patch of the existing 23ai version. For these 23ai setups, upgrading to 26ai is nothing more than applying the January patch (38850057, or 23.26.0.0.0).

But if you plan on installing a new GoldenGate 26ai (38850058 on MOS), you also don’t have to change a lot of things compared to what you were already doing with 23ai. I wrote an extensive guide for both graphic and silent installations, and there is very little change.

  • In the graphic installation, nothing changes except cosmetic aspects. You can follow the graphic installation guide for 23ai.
  • In the silent installation, only the oracle.install.responseFileVersion parameter changes. Even INSTALL_OPTION stays the same. A working ogg26.rsp to install the binaries looks like this:
oracle.install.responseFileVersion=/oracle/install/rspfmt_ogginstall_response_schema_v26_1_0
INSTALL_OPTION=ORA23ai
SOFTWARE_LOCATION=/u01/app/oracle/product/ogg26ai
INVENTORY_LOCATION=/u01/app/oraInventory
UNIX_GROUP_NAME=oinstall
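
The silent invocation itself is the same as with 23ai; with the response file above it would look roughly like this (the unzip location of the installer is a placeholder):

cd <directory_where_you_unzipped_the_26ai_installer>/Disk1
./runInstaller -silent -responseFile /home/oracle/ogg26.rsp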

For the rest (including the oggca.sh run), nothing changes from the guide mentioned above.

Web UI new features

GoldenGate 23ai came with a lot of changes to the web UI. 26ai keeps the same overall look but with many interesting new features.

Services are not separated anymore
In the 23ai web UI, services were separated with multiple tabs

As shown above, GoldenGate 23ai and earlier versions of the Microservices Architecture separated services in the Web UI without any real advantage from a user-experience perspective.

Processes and ports for the different GoldenGate 26ai services are still distinct, but in the Web UI, you no longer have a distinction between services. The interface is more user-friendly, with everything managed on one tab.

This way, you can access distribution and receiver services features directly from the administration service tab.

You also have a new Services tab where you can check the status of all the services of a single deployment. Before, you could only do that from the service manager interface.

AI features

GoldenGate 26ai comes with new AI features that are available from the Web UI. I will dedicate a blog to these changes, but here is a first glimpse of what is possible. From the service manager interface, you can add AI providers and models.

At the moment, four different providers are available:

  • Gemini
  • OCI Generative AI
  • OpenAI
  • Voyage AI

For OCI Generative AI, you specify OCIDs of different OCI components. For the others, a URL and the API key are the only things you need. Once the provider is added, you can add models (for instance, OpenAI’s gpt-5.2).

And last but not least, you can view the new AI service on the server:

[oracle@vmogg bin]$ ps -ef|grep AIService
oracle     60905   60721  0 20:50 ?        00:00:00 /u01/app/oracle/product/ogg26ai/bin/AIService

If you haven’t jumped into the Microservices Architecture yet, don’t waste your time with intermediate versions of GoldenGate. If your source and target support it, install GoldenGate 26ai now!

The article GoldenGate 26ai: Installation and New Administration Features appeared first on dbi Blog.

How Blocking-Lock Brownouts Can Escalate from Row-Level to Complete System Outages

Jeremy Schneider - Mon, 2026-01-19 22:23
This article is a shortened version. For the full writeup, go to https://github.com/ardentperf/pg-idle-test/tree/main/conn_exhaustion

This test suite demonstrates a failure mode when application bugs which poison connection pools collide with PgBouncers that are missing peer config and positioned behind a load balancer. PgBouncer’s peering feature (added with v1.19 in 2023) should be configured if multiple PgBouncers are being used with a load balancer – this feature prevents the escalation demonstrated here.

The failures described here are based on real-world experiences. While uncommon, this failure mode has been seen multiple times in the field.

Along the way, we discover unexpected behaviors (bugs?) in Go’s database/sql (or sqlx) connection pooler with the pgx client and in Postgres itself.

Sample output: https://github.com/ardentperf/pg-idle-test/actions/workflows/test.yml

The Problem in Brief

Go’s database/sql allows connection pools to become poisoned by returning connections with open transactions for re-use. Transactions opened with db.BeginTx() will be cleaned up, but – for example – conn.ExecContext(..., "BEGIN") will not be cleaned up. PR #2481 proposes some cleanup logic in pgx for database/sql connection pools (not yet merged); I tested the PR with this test suite. The PR relies on the TxStatus indicator in the ReadyForQuery message which Postgres sends back to the client as part of its network protocol.

A poisoned connection pool can cause an application brownout since other sessions updating the same row wait indefinitely for the blocking transaction to commit or rollback its own update. On a high-activity or critical table, this can quickly lead to significant pile-ups of connections waiting to update the same locked row. With Go this means context deadline timeouts and retries and connection thrashing by all of the threads and processes that are trying to update the row. Backoff logic is often lacking in these code paths. When there is a currently running SQL (hung – waiting for a lock), pgx first tries to send a cancel request and then will proceed to a hard socket close.

If PgBouncer’s peering feature is not enabled, then cancel requests load-balanced across multiple PgBouncers will fail because the cancel key only exists on the PgBouncer that created the original connection. The peering feature solves the cancel routing problem by allowing PgBouncers to forward cancel requests to the correct peer that holds the cancel key. This feature should be enabled – the test suite demonstrates what happens when it is not.

Postgres immediately cleans up connections when it receives a cancel request. However, Postgres does not clean up connections when their TCP sockets are hard closed, if the connection is waiting for a lock. As a result, Postgres connection usage climbs while PgBouncer continually opens new connections that block on the same row. The app’s poisoned connection pool quickly leads to complete connection exhaustion in the Postgres server.

Existing connections will continue to work, as long as they don’t try to update the row which is locked. But the row-level brownout now becomes a database-level brownout – or perhaps a complete system outage (once the Go database/sql connection pool is exhausted) – because postgres rejects all new connection attempts from the application.

Result: Failed cancels → client closes socket → backends keep running → CLOSE_WAIT accumulates → Postgres hits max_connections → system outage

Table of Contents
  1. The Problem in Brief
  2. Table of Contents
  3. Architecture
  4. The Test Scenarios
    1. PgBouncer Count: 1 vs 2 (nopeers mode)
    2. Failure Mode: Sleep vs Poison
    3. Pool Mode: nopeers vs peers (2 PgBouncers)
    4. Summary
  5. Test Results
    1. Transactions Per Second
    2. TCP CLOSE-WAIT Accumulation
    3. Connection Pool Wait Time vs PgBouncer Client Wait
  6. Detection and Prevention
Architecture

The test uses Docker Compose to create this infrastructure with configurable number of PgBouncer instances.

The Test Scenarios

test_poisoned_connpool_exhaustion.sh accepts three parameters: <num_pgbouncers> <poison|sleep> <peers|nopeers>

In this test suite:

  1. The failure is injected 20 seconds after the test starts.
  2. Idle connections are aborted and rolled back after 20 seconds.
  3. Postgres is configured to abort and roll back any and all transactions if they are not completed within 40 seconds. Note that the transaction_timeout setting (for total transaction time) should be used cautiously, and is available in Postgres v17 and newer. The corresponding settings are sketched right after this list.
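
In Postgres terms, the two timeouts from items 2 and 3 map to these server settings (a sketch; transaction_timeout only exists from v17 on):

idle_in_transaction_session_timeout = '20s'
transaction_timeout = '40s'   # total transaction time cap, Postgres 17+
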
PgBouncer Count: 1 vs 2 (nopeers mode)

  Config       | Cancel Behavior                      | Outcome
  1 PgBouncer  | All cancels route to same instance   | Cancels succeed, no connection exhaustion
  2 PgBouncers | ~50% cancels route to wrong instance | Cancels fail, connection exhaustion

Failure Mode: Sleep vs Poison

  Mode   | What Happens                                                                | Outcome                                                                   | Timeout
  sleep  | Transaction with row lock is held for 40 seconds without returning to pool | Normal blocking scenario where lock holder is idle (not sending queries) | Idle timeout fires after 20s, terminates session & releases locks
  poison | Transaction with row lock is returned to pool while still open             | Bug where connections with open transactions are reused                  | Idle timeout never fires (connection is actively used). Transaction timeout fires after 40s, terminates session and releases locks

Pool Mode: nopeers vs peers (2 PgBouncers)

  Mode    | PgBouncer Config                             | Cancel Behavior
  nopeers | Independent PgBouncers (no peer awareness)   | Cancel requests may route to wrong PgBouncer via load balancer
  peers   | PgBouncer peers enabled (cancel key sharing) | Cancel requests are forwarded to correct peer

Summary

  PgBouncers | Failure Mode | Pool Mode | Expected Outcome
  2          | poison       | nopeers   | Database-level Brownout or System Outage – TPS crashes to ~4, server connections max out at 95, TCP sockets accumulate in CLOSE_WAIT state, cl_waiting spikes
  1          | poison       | nopeers   | Row-level Brownout – TPS drops with no recovery (~11), server connections stay healthy at ~11, no server connection exhaustion
  2          | poison       | peers     | Row-level Brownout – TPS drops with no recovery (~15), cl_waiting stays at 0, peers forward cancels correctly
  2          | sleep        | nopeers   | Database-level Brownout or System Outage – Server connection spike to 96, full recovery after lock released and some extra time, system outage vs brownout depends on how quickly the idle timeout releases lock
  2          | sleep        | peers     | Row-level Brownout – No connection spike, full recovery after lock released, no risk of system outage

Test Results

Transactions Per Second

TPS is the best indicator of actual application impact. It’s important to notice that PgBouncer peering does not prevent application impact from either poisoned connection pools or sleeping sessions. The section below titled “Detection and Prevention” has ideas which address the actual root cause and truly prevent application impact.

After the lock is acquired at t=20, TPS drops from ~700 to near zero in all cases as workers block on the locked row held by the open transaction.

Sleep mode (orange/green lines): Around t=40, Postgres’s idle_in_transaction_session_timeout (20s) fires and kills the blocking session. TPS recovers to ~600-700.

Poison mode (red/purple/blue lines): The lock-holding connection is never idle—it’s constantly being picked up by workers attempting queries—so the idle timeout never fires. TPS remains near zero until Postgres’s transaction_timeout (40s) fires at t=60, finally terminating the long-running transaction and releasing the lock.

TCP CLOSE-WAIT Accumulation

2 PgBouncers (nopeers) (red/orange lines): CLOSE_WAIT connections accumulate rapidly because:

  1. Cancel request goes to wrong PgBouncer → fails
  2. Client gives up and closes socket
  3. Server backend is still blocked on lock, hasn’t read the TCP close
  4. Connection enters CLOSE_WAIT state on Postgres

In poison mode (red), CLOSE_WAIT remains at ~95 until transaction_timeout fires at t=60. In sleep mode (orange), CLOSE_WAIT clears around t=40 when idle_in_transaction_session_timeout fires.

1 PgBouncer and peers modes (purple/blue/green lines): Minimal or zero CLOSE_WAIT because cancel requests succeed—either routing to the single PgBouncer or being forwarded to the correct peer.

Connection Pool Wait Time vs PgBouncer Client Wait

Go’s database/sql pool tracks how long goroutines wait to acquire a connection (db.Stats().WaitDuration). PgBouncer tracks cl_waiting—clients waiting for a server connection. These metrics measure wait time at different layers of the stack.
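
If you want to look at the PgBouncer side of this directly during an incident, the admin console exposes the same counters; the host, port and admin user below are typical defaults and only an assumption, not values taken from this test.

# Query the PgBouncer admin console for per-pool client/server counters
psql -h 127.0.0.1 -p 6432 -U pgbouncer -d pgbouncer -c "SHOW POOLS;"
# Columns of interest: cl_active, cl_waiting (clients queued for a server
# connection), sv_active, sv_idle and maxwait
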

This graph shows 2 PgBouncers in poison mode (nopeers)—the worst-case scenario:

  • TPS (green) crashes to near zero and stays there until transaction_timeout fires at t=60
  • oldest_xact_age (purple) climbs steadily from 0 to 40 seconds
  • Total Connections (brown) climb rapidly after poison injection at t=20 as failed cancels leave backends in CLOSE_WAIT
  • Once Postgres hits max_connections - superuser_reserved_connections (95), new connections are refused
  • PgBouncer #1 cl_waiting (red) and PgBouncer #2 cl_waiting (orange) then spike as clients queue up waiting for available connections

Note the gap between when transaction_timeout fires (t=60, visible as oldest_xact_age dropping to 0) and when TPS fully recovers. TPS recovery correlates with cl_waiting dropping back to zero—PgBouncer needs time to clear the queue of waiting clients and re-establish healthy connection flow. This recovery gap only occurs in nopeers mode; the TPS comparison graph shows that peers mode recovers immediately when the lock is released because connections never exhaust and cl_waiting stays at zero.

Why is AvgWait (blue) so low despite the system being in distress? The poisoned connection (holding the lock) continues executing transactions without blocking—it already holds the lock, so its queries succeed immediately. This one connection cycling rapidly through the pool with sub-millisecond wait times heavily skews the average lower, masking the fact that other connections are blocked.

The cl_waiting metric is collected as cnpg_pgbouncer_pools_cl_waiting from CloudNativePG. See CNPG PgBouncer metrics.

Detection and Prevention

Monitoring and Alerting:

Alert on:

  • Most Important: cnpg_backends_max_tx_duration_seconds showing transactions open for longer than some threshold
  • cnpg_backends_total showing established connections at a high percentage of max_connections
  • Number of backends waiting on locks over some threshold
-- Count backends waiting on locks
SELECT count(*) FROM pg_stat_activity WHERE wait_event_type = 'Lock';
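
If you are not scraping the CNPG exporter, the first two alert conditions can be approximated with plain pg_stat_activity queries; these are sketches, not the exporter's exact metric definitions.

-- Longest open transaction, roughly what cnpg_backends_max_tx_duration_seconds reports
SELECT max(extract(epoch FROM clock_timestamp() - xact_start)) AS max_tx_seconds
FROM pg_stat_activity
WHERE xact_start IS NOT NULL;

-- Established client connections as a percentage of max_connections
SELECT count(*) * 100.0 / current_setting('max_connections')::int AS pct_of_max
FROM pg_stat_activity
WHERE backend_type = 'client backend';
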

Prevention Options:

Options to prevent the root cause (connection pool poisoning):

  1. Find and fix connection leaks in the application – ensure all transactions are properly committed or rolled back
  2. Use OptionResetSession callback – automatically discard leaked connections (see below)
  3. Fix at the driver level – PR #2481 proposes automatic detection in pgx (not yet merged)

Options to prevent the escalation from row-level brownout to system outage:

  1. Enable PgBouncer peering – if using multiple PgBouncers behind a load balancer, configure the peer_id and [peers] section so cancel requests are forwarded to the correct instance (see the PgBouncer documentation, and the configuration sketch after this list). This prevents connection exhaustion but does not prevent the TPS drop from lock contention.
  2. Use session affinity (sticky sessions) in the load balancer based on client IP – ensures cancel requests route to the same PgBouncer as the original connection (see HAProxy Session Affinity example below)
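
For reference, here is a minimal sketch of both options. Host names, ports and peer ids are placeholders, and the exact syntax should be confirmed against the PgBouncer and HAProxy documentation for your versions.

; pgbouncer.ini on instance 1 (instance 2 mirrors this with peer_id = 2)
[pgbouncer]
peer_id = 1

[peers]
1 = host=pgbouncer1 port=6432
2 = host=pgbouncer2 port=6432

# haproxy.cfg: sticky sessions by client IP instead of round-robin
backend pgbouncers
    mode tcp
    balance source
    server pgbouncer1 pgbouncer1:6432 check
    server pgbouncer2 pgbouncer2:6432 check
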

Options to limit the duration/impact:

  1. Set appropriate timeout defaults – configure system-wide timeouts to automatically terminate problematic sessions:
    • idle_in_transaction_session_timeout – terminates sessions idle in a transaction (e.g., 5min)
    • transaction_timeout (Postgres 17+) – use caution; limits total transaction duration regardless of activity (e.g., 30min)
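
As a sketch, the example values above could be applied cluster-wide like this (the values are illustrative, and transaction_timeout only exists on Postgres 17 and newer):

ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';
ALTER SYSTEM SET transaction_timeout = '30min';   -- Postgres 17+ only, use with caution
SELECT pg_reload_conf();
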

Potential Postgres Enhancement:

This would not address the root cause, but Postgres could better handle CLOSE_WAIT accumulation by checking socket status while waiting for locks. Since Postgres already checks for interrupts periodically (which is why cancels work), it’s possible that similar logic could detect forcibly closed sockets and clean up blocked backends sooner.

Results Summary, Understanding the Layers Leading to the System Outage, Unique Problems, and more - available in the full writeup at https://github.com/ardentperf/pg-idle-test/tree/main/conn_exhaustion

Detecting Password Guessing in Oracle using LCOUNT

Pete Finnigan - Thu, 2026-01-15 20:27
I want to briefly discuss how we might detect that an attacker is trying to guess passwords and trying to connect to an Oracle database. One simple way that I have been aware of for many years, decades even is....[Read More]

Posted by Pete On 15/01/26 At 03:08 PM

Categories: Security Blogs

How to activate the auto-index feature in Oracle 19c Enterprise Edition?

Tom Kyte - Wed, 2026-01-14 12:27
How to activate the auto-index feature in Oracle 19c Enterprise Edition? Trying to activate it on a PDB returns the following error:

BEGIN dbms_auto_index.configure('AUTO_INDEX_MODE','IMPLEMENT'); END;

Error report - ORA-40216: feature not supported
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_AUTO_INDEX_INTERNAL", line 10967
ORA-06512: at "SYS.DBMS_AUTO_INDEX", line 301
ORA-06512: at line 1
40216. 00000 - "feature not supported"
*Cause: The feature was not supported in the API.
*Action: Modify the code to avoid usage of the feature.
Categories: DBA Blogs

PAGE REFRESH CAUSED BY DATABASE TRIGGER.

Tom Kyte - Wed, 2026-01-14 12:27
Hi, I have created an application that is tracking a set of processes that are being run from a master table. When a process is completed, a flag in a column named PROC_EXEC_STATUS changes from N to Y for each process. I want my Oracle APEX application to refresh when the flag for each individual process in the table is updated. I have used JavaScript, but because the processes do not have a defined running time, it just continues to refresh and sometimes is not in sync with the process running time. I am on APEX version 24.2.
Categories: DBA Blogs

Collection vs Global Temporary Table in Oracle 19c

Tom Kyte - Wed, 2026-01-14 12:27
I have a PL/SQL package with a function that returns a list of item numbers. This package/function is called repeatedly online by multiple users, passing parameters to the function to use within the SQL statement(s). A new string (a short list of item types) will now be passed to the function to narrow down the item numbers being returned. My question: is it better to use a Collection or a Global Temporary Table to insert the item types into, which will then be used in the WHERE clause of the SQL statement to select the item numbers?
Categories: DBA Blogs

How to ensure SESSION_USER = application user when running DBMS_SCHEDULER jobs that call external C++ functions?

Tom Kyte - Wed, 2026-01-14 12:27
We are building a system where:
- Application users connect using different DB users
- They call DBMS_SCHEDULER.CREATE_JOB to create jobs
- Jobs are always expected to execute in the application user (APPUSER) schema
- We call external C++ functions via LIBRARY objects
- We use VPD policies and also log changes based on SESSION_USER

However, we observe that jobs run with SESSION_USER = job_creator, which is not always the same as the application user.

Question: How can we ensure that DBMS_SCHEDULER jobs always run with SESSION_USER = application user, regardless of who initiates the creation?

Reproducible test case: Tested on Oracle 19.25 (non-CDB installation) on RHEL 9.5.
- APPUSER: Application schema and owner of all used objects
- DBUSER: One of the application/database users (there are many such users with different DB names)

Steps to reproduce:

1. Execute as SYS:

create user appuser identified by ***;
grant create session to appuser;
grant create library to appuser;
grant create procedure to appuser;
GRANT CREATE ANY TABLE TO appuser;
grant create sequence to appuser;
alter user appuser quota unlimited on USERS;
create user dbuser identified by ***;
grant create session to dbuser;
grant create job to dbuser;
grant create any job to dbuser;

2. Execute as appuser:

CREATE OR REPLACE LIBRARY extproc_userinfo_lib AS '/app/bin/libgetuser.so';
drop function loguserinfo;
CREATE OR REPLACE FUNCTION appuser.LogUserInfo RETURN BINARY_INTEGER
AS LANGUAGE C LIBRARY extproc_userinfo_lib NAME "LogUserInfo"
WITH CONTEXT PARAMETERS ( CONTEXT, RETURN int );
grant execute on appuser.LogUserInfo to public;
drop table appuser.applog;
create table appuser.applog
( id number generated always as identity primary key
 ,message varchar2(300)
 ,log_time timestamp default systimestamp
);
select * from appuser.applog order by log_time desc;

3. Log in as OS user oracle on Linux (RHEL 9.5) and create a new file /app/bin/getuser.cpp with the following contents. It inserts both SESSION_USER and CURRENT_USER into table APPLOG:

#include <oci.h>
#include <cstring>
#include <cstdio>
typedef struct OCIExtProcContext OCIExtProcContext;
#define OCIEXTPROC_SUCCESS 0
extern "C" int LogUserInfo(OCIExtProcContext* ctx)
{
    OCIEnv*    envhp  = nullptr;
    OCIError*  errhp  = nullptr;
    OCISvcCtx* svchp  = nullptr;
    OCIStmt*   stmthp = nullptr;
    const char* query =
        "insert into appuser.applog(message) "
        "SELECT 'SESSION_USER:' || SYS_CONTEXT('USERENV','SESSION_USER') || "
        "'; CURRENT_USER:' || SYS_CONTEXT('USERENV','CURRENT_USER') "
        "FROM dual";
    if (OCIExtProcGetEnv(ctx, &envhp, &svchp, &errhp) != OCIEXTPROC_SUCCESS) return -1;
    if (OCIHandleAlloc(envhp, (void**)&stmthp, OCI_HTYPE_STMT, 0, NULL) != O...
Categories: DBA Blogs

Happy New Year 2026, Oracle Security Training, MySQL, Anniversary

Pete Finnigan - Wed, 2026-01-14 12:27
I want to wish a very happy New year for 2026 to all of our customers, web site visitors and all of our social media connections and likers and commenters. Thanks for all of your support over the years!! We....[Read More]

Posted by Pete On 13/01/26 At 12:02 PM

Categories: Security Blogs

Migrate your database to 26ai on ODA

Yann Neuhaus - Mon, 2026-01-12 08:36
Introduction

Oracle 26ai is not yet available on Bare Metal ODA, but it will probably be in a few weeks. For now, 26ai is available as a DB System on Oracle Database Appliance, meaning that it’s already possible to test this new version. Let’s try to migrate a 19c Bare Metal PDB to a 26ai DB System using 3 different methods.

Prerequisites

26ai requires a minimum ODA release of 19.29, meaning ODA X8-2 and newer models. You will need to patch your ODA to 19.29 first if you want to test 26ai. 26ai is only compatible with the multitenant architecture, so another prerequisite is to migrate your non-container databases to PDBs. If you're still using the non-CDB architecture, only the first of these 3 methods will work for you.
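
If you still need to convert a non-CDB into a PDB first, the classic plug-in approach looks roughly like this; the paths and names below are placeholders, and AutoUpgrade can also handle this conversion for you:

-- On the 19c non-CDB, opened read only, generate the XML manifest
exec DBMS_PDB.DESCRIBE(pdb_descr_file => '/backup/dumps/noncdb.xml');

-- On the 19c CDB, plug it in and convert the data dictionary
create pluggable database MYPDB using '/backup/dumps/noncdb.xml' copy;
alter session set container=MYPDB;
@?/rdbms/admin/noncdb_to_pdb.sql
alter pluggable database MYPDB open;
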

The upgrade method recommended by Oracle is AutoUpgrade: please refer to this blog post if you want more details.

On my side, I will explore these 3 other possibilities:

  • Datapump Export/Import
  • PDB copy through a DB link
  • PDB unplug/plug
Download the needed files for 26ai DB System

These files are needed for creating a 26ai DB System:

36524660 => System image for 26ai DB Systems
36524627 => the GI clone needed for deploying newer 26ai GI version
36524642 => the DB clone for deploying new version 26ai

Be sure to choose the very latest 19.29 patch when downloading these files; the MOS download link will first propose older versions.

My 19c source database

My source database is a 19c container database with one PDB running on an ODA X8-2M. I want to migrate this PDB to 26ai:

su - oracle
 . oraenv <<< CPROD19
ORACLE_SID = [CPROD19] ? The Oracle base remains unchanged with value /u01/app/odaorabase/oracle
sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Jan 8 09:53:43 2026
Version 19.29.0.0.0

Copyright (c) 1982, 2025, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.29.0.0.0

show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PRODDB01                       READ WRITE NO

exit
Provision a new 26ai DB System

It’s not yet possible to deploy a 26ai container database on Bare Metal, so let’s create a DB System for testing this migration. First, unzip the files and register the VM template, the 26ai GI clone and the 26ai DB clone:

cd /opt/dbi

unzip -o p36524660_1929000_Linux-x86-64.zip
unzip -o p36524627_1929000_Linux-x86-64.zip
unzip -o p36524642_1929000_Linux-x86-64.zip

odacli update-repository -f /opt/dbi/odacli-dcs-23.26.0.0.0-251116-ODAVM-19.29.0.0.zip
odacli update-repository -f /opt/dbi/odacli-dcs-23.26.0.0.0-251116-GI-23.26.0.0.zip
odacli update-repository -f /opt/dbi/odacli-dcs-23.26.0.0.0-251116-DB-23.26.0.0.zip

sleep 30 ; odacli list-jobs | tail -n 4
d4700315-db8c-4522-af55-0fddd262bfe4     Repository Update                                                           2025-12-08 15:20:14 CET             Success
ba7d452f-e03d-46d0-a607-fd7c758cd1b1     Repository Update                                                           2025-12-08 15:20:59 CET             Success
bfc102d1-985b-4792-8054-03709aa8d949     Repository Update                                                           2025-12-08 15:21:20 CET             Success

odacli describe-dbsystem-image | grep 23.26
DBVM                  23.26.0.0.0           23.26.0.0.0
GI                    23.26.0.0.0           23.26.0.0.0
DB                    23.26.0.0.0           23.26.0.0.0

A CPU pool is needed for DB Systems. If you don’t have one yet, create it; a 2-core CPU pool is enough:

odacli create-cpupool -c 2 -n cpupool2c -dbs

Now let’s create a json file, then create a new DB System based on this file:

cat /opt/dbi/create_dbs-04-tst-cdb.json
{
    "system": {
        "name": "dbs-04-tst",
        "shape": "dbs2",
        "systemPassword": "***********",
        "timeZone": "Europe/Zurich",
        "diskGroup": "DATA",
        "cpuPoolName": "cpupool2c",
        "enableRoleSeparation": true,
        "customRoleSeparation": {
            "groups": [
                {
                    "name": "oinstall",
                    "id": 1001,
                    "role": "oinstall"
                },
                {
                    "name": "dbaoper",
                    "id": 1002,
                    "role": "dbaoper"
                },
                {
                    "name": "dba",
                    "id": 1003,
                    "role": "dba"
                },
                {
                    "name": "asmadmin",
                    "id": 1004,
                    "role": "asmadmin"
                },
                {
                    "name": "asmoper",
                    "id": 1005,
                    "role": "asmoper"
                },
                {
                    "name": "asmdba",
                    "id": 1006,
                    "role": "asmdba"
                }
            ],
            "users": [
                {
                    "name": "grid",
                    "id": 1001,
                    "role": "gridUser"
                },
                {
                    "name": "oracle",
                    "id": 1000,
                    "role": "oracleUser"
                }
            ]
        }
    },
    "database": {
        "name": "CTEST26",
        "uniqueName": "CTEST26_S1",
        "domainName": "dbi-lab.ch",
        "adminPassword": "**********",
        "version": "23.26.0.0.0",
        "edition": "EE",
        "type": "SI",
        "dbClass": "OLTP",
        "shape": "odb2",
        "role": "PRIMARY",
        "enableDbConsole": false,
        "enableFlashStorage": false,
        "redundancy": null,
        "characterSet": {
            "characterSet": "AL32UTF8",
            "nlsCharacterset": "AL16UTF16",
            "dbTerritory": "AMERICA",
            "dbLanguage": "AMERICAN"
        },
        "rmanBackupPassword": null,
        "enableTDE": false,
        "isCdb": true,
        "pdbName": "pdb1",
        "pdbAdminUser": "pdbadmin",
        "tdePassword": null
    },
    "network": {
        "domainName": "dbi-lab.ch",
        "ntpServers": [
            "85.195.224.28"
        ],
        "dnsServers": [
            "10.16.0.250"
        ],
        "nodes": [
            {
                "name": "dbs-04-tst",
                "ipAddress": "10.16.0.247",
                "netmask": "255.255.255.0",
                "gateway": "10.16.0.1",
                "number": 0
            }
        ],
        "publicVNetwork": "pubnet"
    },
    "grid": {
        "language": "en"
    }
}

odacli create-dbsystem -p /opt/dbi/create_dbs-04-tst-cdb.json
Enter password for system "dbs-04-tst":
Retype password for system "dbs-04-tst":
Enter administrator password for DB "CTEST26":
Retype administrator password for DB "CTEST26":
...

35 minutes later, my new DB System is ready to use:

odacli describe-job -i c79ecbbf-dd0c-4b75-ba10-249c78b25f33

Job details
----------------------------------------------------------------
                     ID:  c79ecbbf-dd0c-4b75-ba10-249c78b25f33
            Description:  DB System dbs-04-tst creation
                 Status:  Success
                Created:  January 08, 2026 10:23:39 CET
                Message:

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Create DB System metadata                January 08, 2026 10:23:39 CET            January 08, 2026 10:23:41 CET            Success
Persist new DB System                    January 08, 2026 10:23:41 CET            January 08, 2026 10:23:41 CET            Success
Remove unused network interfaces         January 08, 2026 10:23:41 CET            January 08, 2026 10:23:41 CET            Success
Validate DB System creation              January 08, 2026 10:23:41 CET            January 08, 2026 10:23:46 CET            Success
Create the infra quota group             January 08, 2026 10:23:46 CET            January 08, 2026 10:23:50 CET            Success
Setup DB System environment              January 08, 2026 10:23:46 CET            January 08, 2026 10:23:46 CET            Success
Create DB System ASM volume              January 08, 2026 10:23:50 CET            January 08, 2026 10:24:01 CET            Success
Create DB System ACFS filesystem         January 08, 2026 10:24:01 CET            January 08, 2026 10:24:13 CET            Success
Create DB System ACFS mount point        January 08, 2026 10:24:01 CET            January 08, 2026 10:24:01 CET            Success
Create DB System VM ACFS snapshots	 January 08, 2026 10:24:13 CET            January 08, 2026 10:24:28 CET            Success
Setup ASM client cluster config          January 08, 2026 10:24:28 CET            January 08, 2026 10:24:30 CET            Success
Create temporary SSH key pair            January 08, 2026 10:24:30 CET            January 08, 2026 10:24:31 CET            Success
Create BM certificates in ACFS location  January 08, 2026 10:24:31 CET            January 08, 2026 10:24:32 CET            Success
Create DB System cloud-init config	 January 08, 2026 10:24:32 CET            January 08, 2026 10:24:32 CET            Success
Provision DB System VM(s)                January 08, 2026 10:24:32 CET            January 08, 2026 10:24:35 CET            Success
Create u01 vDisk for DB System           January 08, 2026 10:24:35 CET            January 08, 2026 10:24:36 CET            Success
Add DB System to Clusterware             January 08, 2026 10:24:36 CET            January 08, 2026 10:24:37 CET            Success
Attach disks to DB System                January 08, 2026 10:24:36 CET            January 08, 2026 10:24:36 CET            Success
Attach u01 vDisk to DB System            January 08, 2026 10:24:36 CET            January 08, 2026 10:24:36 CET            Success
Add extra PCI slots                      January 08, 2026 10:24:37 CET            January 08, 2026 10:24:37 CET            Success
Create Network Filters                   January 08, 2026 10:24:37 CET            January 08, 2026 10:24:37 CET            Success
Export clones repository                 January 08, 2026 10:24:37 CET            January 08, 2026 10:24:37 CET            Success
Start DB System                          January 08, 2026 10:24:37 CET            January 08, 2026 10:24:40 CET            Success
Wait DB System VM first boot             January 08, 2026 10:24:40 CET            January 08, 2026 10:26:49 CET            Success
Setup Mutual TLS (mTLS)                  January 08, 2026 10:26:49 CET            January 08, 2026 10:26:51 CET            Success
Cleanup temporary SSH key pair           January 08, 2026 10:26:51 CET            January 08, 2026 10:26:52 CET            Success
Copy ASM client cluster config           January 08, 2026 10:26:51 CET            January 08, 2026 10:26:51 CET            Success
Delete Mutual TLS (mTLS) Artifacts	 January 08, 2026 10:26:51 CET            January 08, 2026 10:26:51 CET            Success
Wait DB System VM DCS Agent readiness    January 08, 2026 10:26:52 CET            January 08, 2026 10:27:12 CET            Success
Add Network Filters                      January 08, 2026 10:27:12 CET            January 08, 2026 10:27:14 CET            Success
Install DB System                        January 08, 2026 10:27:14 CET            January 08, 2026 10:53:57 CET            Success
Wait DB System VM DCS Agent readiness    January 08, 2026 10:53:57 CET            January 08, 2026 10:55:58 CET            Success
Set DB System as configured              January 08, 2026 10:55:58 CET            January 08, 2026 10:55:58 CET            Success

odacli list-dbsystems

Name                  Shape       GI version          DB info                         Status                  Created                   Updated
--------------------  ----------  ------------------  ------------------------------  ----------------------  ------------------------  ------------------------
dbs-04-tst            dbs2        23.26.0.0.0         23.26(CONFIGURED=1)             CONFIGURED              2026-01-08 10:23:39 CET   2026-01-08 10:55:58 CET
Migration using datapump

Let’s consider a classic migration using Data Pump. This tool has been the official migration tool for nearly two decades now.

Let’s first create a new PDB inside the container database of the new DB System:

ssh root@dbs-04-tst
su - oracle
. oraenv <<< CTEST26
sqlplus / as sysdba
show pdbs
    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB1 			  READ WRITE NO

alter pluggable database PDB1 close immediate;
Pluggable database altered.

drop pluggable database PDB1 including datafiles;
Pluggable database dropped.

create pluggable database MIG26DTPUMP admin user pdbadmin identified by "*********";
Pluggable database created.

alter pluggable database MIG26DTPUMP open;
Pluggable database altered.

alter pluggable database MIG26DTPUMP save state;
Pluggable database altered.

show pdbs
    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 4 MIG26DTPUMP			  READ WRITE NO

alter session set container=MIG26DTPUMP;

create tablespace users datafile size 100M autoextend on maxsize 10G;
Tablespace created.

exit

On the source database (non-CDB or PDB, it doesn’t matter), export the data using Data Pump:

su - oracle
. oraenv <<< CPROD19
export ORACLE_PDB_SID=PRODDB01
sqlplus / as sysdba
create or replace directory mig as '/backup/dumps/';
col current_scn for 999999999999
select current_scn from v$database;
    CURRENT_SCN
---------------
        6292616
exit
expdp system directory=MIG full=y dumpfile=PRODDB01_20260112.dmp logfile=exp_PRODDB01_20260112.log flashback_scn=6292616

Export: Release 19.0.0.0.0 - Production on Mon Jan 12 10:36:52 2026
Version 19.29.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/******** directory=MIG full=y dumpfile=PRODDB01_20260112.dmp logfile=exp_PRODDB01_20260112.log flashback_scn=6292616
Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type DATABASE_EXPORT/STATISTICS/MARKER
Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
Processing object type DATABASE_EXPORT/PRE_INSTANCE_IMPCALLOUT/MARKER
Processing object type DATABASE_EXPORT/TABLESPACE
...

. . exported "WMSYS"."WM$METADATA_MAP"                   11.16 KB       0 rows
. . exported "MDSYS"."RDF_PARAM$"                        6.515 KB       3 rows
. . exported "SYS"."AUDTAB$TBS$FOR_EXPORT"               5.960 KB       2 rows
. . exported "SYS"."DBA_SENSITIVE_DATA"                  8.070 KB       0 rows
. . exported "SYS"."DBA_TSDP_POLICY_PROTECTION"          7.656 KB       0 rows
. . exported "SYS"."NACL$_ACE_EXP"                       9.812 KB       0 rows
. . exported "SYS"."NACL$_HOST_EXP"                      6.921 KB       1 rows
. . exported "SYS"."NACL$_WALLET_EXP"                    5.929 KB       0 rows
. . exported "DBI"."CONSULTANTS"                         5.937 KB       1 rows
Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
  /backup/dumps/PRODDB01_20260112.dmp
Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at Mon Jan 12 10:38:52 2026 elapsed 0 00:01:54


ls -lrth /backup/dumps/*20260112*
-rw-r-----. 1 oracle asmadmin  34M Jan 12 10:38 /backup/dumps/PRODDB01_20260112.dmp
-rw-r--r--. 1 oracle asmadmin 9.8K Jan 12 10:38 /backup/dumps/exp_PRODDB01_20260112.log

On the target database, let’s do the import:

ssh root@dbs-04-tst
su - oracle
. oraenv <<< CTEST26
export ORACLE_PDB_SID=MIG26DTPUMP
sqlplus / as sysdba
create or replace directory mig as '/backup/dumps/';
exit

vi /backup/dumps/imp_PRODDB01.par
directory=MIG
schemas=dbi
dumpfile=PRODDB01_20260112.dmp
logfile=imp_PRODDB01_to_MIG26DTPUMP_20260112.log
exclude=statistics

impdp system parfile=/backup/dumps/imp_PRODDB01.par
Import: Release 23.26.0.0.0 - for Oracle Cloud and Engineered Systems on Mon Jan 12 10:51:39 2026
Version 23.26.0.0.0

Copyright (c) 1982, 2025, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle AI Database 26ai Enterprise Edition Release 23.26.0.0.0 - for Oracle Cloud and Engineered Systems
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
import done in AL32UTF8 character set and AL16UTF16 NCHAR character set
export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
Warning: possible data loss in character set conversions
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01":  system/******** parfile=/backup/dumps/imp_PRODDB01.par
Processing object type DATABASE_EXPORT/SCHEMA/USER
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA/LOGREP
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
. . imported "DBI"."CONSULTANTS"                           5.9 KB       1 rows
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully completed at Mon Jan 12 10:51:54 2026 elapsed 0 00:00:08

Let’s now compute statistics for my schema:

sqlplus / as sysdba
exec dbms_stats.gather_schema_stats(ownname => 'DBI');
PL/SQL procedure successfully completed.
exit;

Let’s do a quick check on data:

sqlplus / as sysdba
select start_date from dbi.consultants where name like 'DUB%';

START_DAT
---------
04-SEP-17

exit

That’s fine.

Pros & cons

  • + Same procedure as previous migrations
  • + Reorganize data and indexes
  • + Keep the source database untouched
  • – Rather slow when the database is big
Migration using PDB copy through DB link

If the source database is a PDB, it can be copied through a DB link; this is a very convenient way of migrating a database.

First, and it may seem odd, the PDB administrator of the source DB must have the CREATE PLUGGABLE DATABASE privilege:

su - oracle
. oraenv <<< CPROD19
sqlplus / as sysdba
alter session set container=PRODDB01;
grant create pluggable database to pdbadmin;
exit

On the DB System, let’s create a new TNS entry for the 19c PDB:

ssh dbs-04-tst
su - oracle
. oraenv <<< CTEST26
vi $ORACLE_HOME/network/admin/tnsnames.ora

...
PRODDB01 =
  (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbioda01.dbi-lab.ch)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = proddb01.dbi-lab.ch)
    )
  )

On the target CDB, create a new database link to the source database:

CREATE DATABASE LINK PRODDB01 CONNECT TO PDBADMIN IDENTIFIED BY "*********"  USING 'PRODDB01';

select count(*) from dual@PRODDB01;
  COUNT(*)
----------
         1

Now let’s copy the source PDB under a new name on the 26ai container:

create pluggable database MIG26DBLINK from PRODDB01@PRODDB01;
Pluggable database created.

alter pluggable database MIG26DBLINK open;

Opening this PDB will take a few minutes because it automatically triggers the upgrade from 19c to 26ai:

tail -n 15 -f /u01/app/oracle/diag/rdbms/ctest26_s1/CTEST26/trace/alert_CTEST26.log
...
MIG26DBLINK(3):Starting Upgrade on PDB Open
2026-01-12T10:57:49.606651+01:00
MIG26DBLINK(3):alter pluggable database application APP$CDB$CATALOG begin install '19.0.0.0.0' on error capture
MIG26DBLINK(3):Completed: alter pluggable database application APP$CDB$CATALOG begin install '19.0.0.0.0' on error capture
MIG26DBLINK(3):alter pluggable database application APP$CDB$CATALOG end install '19.0.0.0.0'
MIG26DBLINK(3):Completed: alter pluggable database application APP$CDB$CATALOG end install '19.0.0.0.0'
MIG26DBLINK(3):alter pluggable database application APP$CDB$CATALOG begin upgrade
  '19.0.0.0.0' to '23.0.0.0.0.partial' on error capture
MIG26DBLINK(3):Completed: alter pluggable database application APP$CDB$CATALOG begin upgrade
  '19.0.0.0.0' to '23.0.0.0.0.partial' on error capture
...
MIG26DBLINK(3):SERVER COMPONENT id=CATJAVA: status=UPGRADED, version=23.26.0.0.0, timestamp=2026-01-12 11:09:54 Container=MIG26DBLINK Id=3
2026-01-12T11:09:58.200636+01:00
MIG26DBLINK(3):Resize operation completed for file# 19, fname +DATA/CTEST26_S1/482EE16D86417106E063F700240A6555/DATAFILE/undotbs1.294.1222340161, old size 450560K, new size 455680K
2026-01-12T11:09:59.496448+01:00
MIG26DBLINK(3):Resize operation completed for file# 19, fname +DATA/CTEST26_S1/482EE16D86417106E063F700240A6555/DATAFILE/undotbs1.294.1222340161, old size 455680K, new size 460800K
2026-01-12T11:10:26.701609+01:00
MIG26DBLINK(3):oracle :
MIG26DBLINK(3):SERVER COMPONENT id=XDB: status=UPGRADED, version=23.26.0.0.0, timestamp=2026-01-12 11:10:26 Container=MIG26DBLINK Id=3
2026-01-12T11:10:36.378304+01:00
MIG26DBLINK(3):Resize operation completed for file# 19, fname +DATA/CTEST26_S1/482EE16D86417106E063F700240A6555/DATAFILE/undotbs1.294.1222340161, old size 460800K, new size 465920K
2026-01-12T11:10:45.077998+01:00
MIG26DBLINK(3):Resize operation completed for file# 19, fname +DATA/CTEST26_S1/482EE16D86417106E063F700240A6555/DATAFILE/undotbs1.294.1222340161, old size 465920K, new size 471040K
...
MIG26DBLINK(3):***************************************************************
MIG26DBLINK(3):WARNING: Pluggable Database MIG26DBLINK with pdb id - 3 is
MIG26DBLINK(3):         altered with errors or warnings. Please look into
MIG26DBLINK(3):         PDB_PLUG_IN_VIOLATIONS view for more details.
MIG26DBLINK(3):***************************************************************
MIG26DBLINK(3):--ATTENTION--
MIG26DBLINK(3):Errors reported while opening PDB (ContainerId: 3) and have been recorded in pdb_alert$ table.
MIG26DBLINK(3) Error Violation: SQL Patch, Cause: '23.26.0.0.0 Release_Update 2509270520' is installed in the CDB but no release updates are installed in the PDB, Action: Call datapatch to install in the PDB or the CDB
2026-01-12T11:13:31.632629+01:00
MIG26DBLINK(3):Opening pdb with no Resource Manager plan active
Violations: Type: 1, Count: 1
Violations: Type: 2, Count: 2
Completed: Pluggable database MIG26DBLINK opened read write
Completed: alter pluggable database MIG26DBLINK open

After a couple of minutes, the PDB is opened with errors:

Warning: PDB altered with errors.
exit

Let’s apply datapatch on this very PDB:

$ORACLE_HOME/OPatch/datapatch -pdbs MIG26DBLINK
SQL Patching tool version 23.26.0.0.0 Lim on Mon Jan 12 11:14:55 2026
Copyright (c) 2012, 2025, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/product/23.0.0.0/dbhome_1/cfgtoollogs/sqlpatch/sqlpatch_sid_CTEST26_ts_2026_01_12_11_14_55_pid_5074/sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...done

Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)

Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of interim SQL patches:
  No interim patches found

Current state of release update SQL patches:
  Binary registry:
    23.26.0.0.0 Release_Update 250927052030: Installed
  PDB MIG26DBLINK:
    Applied 23.26.0.0.0 Release_Update 250927052030 successfully

Adding patches to installation queue and performing prereq checks...done
Installation queue:
  For the following PDBs: MIG26DBLINK
    No interim patches need to be rolled back
    No release update patches need to be installed
    No interim patches need to be applied

Bypass install queue:
  For the following PDBs: MIG26DBLINK
    No interim rollbacks will bypass install
    Patch 38404116 (Database Release Update : 23.26.0.0.0 (38404116) Gold Image): will bypass install
      Apply from 23.26.0.0.0 Release_Update 250927052030 to 23.26.0.0.0 Release_Update 250927052030
    No interim applys will bypass install


Installation queue after removing bypass entries...
Installation queue:
  For the following PDBs: MIG26DBLINK
    No interim patches need to be rolled back
    No release update patches need to be installed
    No interim patches need to be applied


Processing bypass install queue:
  Patch 38404116 apply (pdb MIG26DBLINK): SUCCESS (bypass_install)

SQL Patching tool complete on Mon Jan 12 11:15:03 2026

Everything is fine. Now let’s close and reopen this PDB in normal mode:

sqlplus / as sysdba
show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 MIG26DBLINK			  READ WRITE YES
	 4 MIG26DTPUMP			  READ WRITE NO

alter pluggable database MIG26DBLINK close immediate;
alter pluggable database MIG26DBLINK open;
alter pluggable database MIG26DBLINK save state;

show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 MIG26DBLINK			  READ WRITE NO
	 4 MIG26DTPUMP			  READ WRITE NO

Let’s do a quick data check:

sqlplus / as sysdba
alter session set container=MIG26DBLINK;
select start_date from dbi.consultants where name like 'DUB%';
START_DAT
---------
04-SEP-17

exit

The database is ready to use.

Pros & cons

  • + Maybe the easiest way of migrating a PDB
  • + Keep the source database untouched
  • + Rather fast
  • + No downtime of the source database
  • – Need to apply datapatch
Migration using unplug/plug features

If the source database is a PDB, it can be unplugged and plugged into another container. This is how the multitenant architecture was presented when it first appeared in 12cR1.

Let’s unplug the source PDB:

su - oracle
. oraenv <<< CPROD19
sqlplus / as sysdba
alter pluggable database PRODDB01 close immediate;
Pluggable database altered.

alter pluggable database PRODDB01 unplug into '/backup/dumps/PRODDB01.xml';
Pluggable database altered.

exit

Now let’s plug this PDB to the new 26ai container:

ssh root@dbs-04-tst
su - oracle
. oraenv <<< CTEST26

sqlplus / as sysdba

show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 MIG26DBLINK			  READ WRITE NO
	 4 MIG26DTPUMP			  READ WRITE NO

create pluggable database MIG26REPLUG using '/backup/dumps/PRODDB01.xml' COPY;
create pluggable database MIG26REPLUG using '/backup/dumps/PRODDB01.xml' COPY
*
ERROR at line 1:
ORA-19505: failed to identify file
"+DATA/CPROD19_S1/450AA2873245B6F5E063F100240AC35A/DATAFILE/system.302.122233608
1"
ORA-17503: ksfdopn:2 Failed to open file
+DATA/CPROD19_S1/450AA2873245B6F5E063F100240AC35A/DATAFILE/system.302.1222336081
ORA-15260: permission denied on ASM disk group
Help: https://docs.oracle.com/error-help/db/ora-19505/

Files from the source database are not readable by this DB System. Let’s change the permissions on these files from the Bare Metal ASM instance:

exit
exit
exit
su - grid

asmcmd ls -lt +DATA/CPROD19_S1/450AA2873245B6F5E063F100240AC35A/DATAFILE/* --permission
Type      Redund  Striped  Time             Sys  User                                     Group  Permission  Name
DATAFILE  MIRROR  COARSE   JAN 12 11:00:00  Y    oracle@e7d9d36f6b157fdabf89026bcf48d757          rw-------  USERS.297.1222336081
DATAFILE  MIRROR  COARSE   JAN 12 11:00:00  Y    oracle@e7d9d36f6b157fdabf89026bcf48d757          rw-------  UNDOTBS1.306.1222336081
DATAFILE  MIRROR  COARSE   JAN 12 11:00:00  Y    oracle@e7d9d36f6b157fdabf89026bcf48d757          rw-------  SYSTEM.302.1222336081
DATAFILE  MIRROR  COARSE   JAN 12 11:00:00  Y    oracle@e7d9d36f6b157fdabf89026bcf48d757          rw-------  SYSAUX.305.1222336081

asmcmd ls -lt +DATA/CTEST26_S1/DATAFILE/* --permission
Type      Redund  Striped  Time             Sys  User                                     Group  Permission  Name
DATAFILE  MIRROR  COARSE   JAN 12 10:00:00  Y    oracle@b94b488f82b1cf5cbf58f54162aafac0          rw-------  USERS.299.1222339507
DATAFILE  MIRROR  COARSE   JAN 12 10:00:00  Y    oracle@b94b488f82b1cf5cbf58f54162aafac0          rw-------  UNDOTBS1.693.1222338993
DATAFILE  MIRROR  COARSE   JAN 12 10:00:00  Y    oracle@b94b488f82b1cf5cbf58f54162aafac0          rw-------  SYSTEM.906.1222338951
DATAFILE  MIRROR  COARSE   JAN 12 10:00:00  Y    oracle@b94b488f82b1cf5cbf58f54162aafac0          rw-------  SYSAUX.832.1222338987

asmcmd chmod a+r +DATA/CPROD19_S1/450AA2873245B6F5E063F100240AC35A/DATAFILE/*
asmcmd ls -lt +DATA/CPROD19_S1/450AA2873245B6F5E063F100240AC35A/DATAFILE/* --permission
Type      Redund  Striped  Time             Sys  User                                     Group  Permission  Name
DATAFILE  MIRROR  COARSE   JAN 12 11:00:00  Y    oracle@e7d9d36f6b157fdabf89026bcf48d757          rw-r--r--  USERS.297.1222342153
DATAFILE  MIRROR  COARSE   JAN 12 11:00:00  Y    oracle@e7d9d36f6b157fdabf89026bcf48d757          rw-r--r--  UNDOTBS1.301.1222341995
DATAFILE  MIRROR  COARSE   JAN 12 11:00:00  Y    oracle@e7d9d36f6b157fdabf89026bcf48d757          rw-r--r--  SYSTEM.305.1222341995
DATAFILE  MIRROR  COARSE   JAN 12 11:00:00  Y    oracle@e7d9d36f6b157fdabf89026bcf48d757          rw-r--r--  SYSAUX.302.1222341995

exit

ssh root@dbs-04-tst
su - oracle
. oraenv <<< CTEST26

sqlplus / as sysdba

create pluggable database MIG26REPLUG using '/backup/dumps/PRODDB01.xml' COPY;
Pluggable database created.

show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 MIG26DBLINK			  READ WRITE NO
	 4 MIG26DTPUMP			  READ WRITE NO
	 6 MIG26REPLUG			  MOUNTED

alter pluggable database MIG26REPLUG open;

This upgrade to 26ai will also take a couple of minutes. You will see the corresponding traces in the alert.log file of the 26ai container:

tail -f /u01/app/oracle/diag/rdbms/ctest26_s1/CTEST26/trace/alert_CTEST26.log
...
2026-01-12T11:34:21.075364+01:00
MIG26REPLUG(6):Starting Upgrade on PDB Open
...
MIG26REPLUG(6):***************************************************************
MIG26REPLUG(6):WARNING: Pluggable Database MIG26REPLUG with pdb id - 6 is
MIG26REPLUG(6):         altered with errors or warnings. Please look into
MIG26REPLUG(6):         PDB_PLUG_IN_VIOLATIONS view for more details.
MIG26REPLUG(6):***************************************************************
MIG26REPLUG(6):--ATTENTION--
MIG26REPLUG(6):Errors reported while opening PDB (ContainerId: 6) and have been recorded in pdb_alert$ table.
MIG26REPLUG(6) Error Violation: SQL Patch, Cause: '23.26.0.0.0 Release_Update 2509270520' is installed in the CDB but no release updates are installed in the PDB, Action: Call datapatch to install in the PDB or the CDB
2026-01-12T11:49:56.836062+01:00
MIG26REPLUG(6):Opening pdb with no Resource Manager plan active
Violations: Type: 1, Count: 1
Violations: Type: 2, Count: 1
Completed: Pluggable database MIG26REPLUG opened read write
Completed: alter pluggable database MIG26REPLUG open

Let’s have a look at my PDBs and apply the datapatch on MIG26REPLUG:

show pdbs
    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 MIG26DBLINK			  READ WRITE NO
	 4 MIG26DTPUMP			  READ WRITE NO
	 6 MIG26REPLUG			  READ WRITE YES

exit

$ORACLE_HOME/OPatch/datapatch -pdbs MIG26REPLUG

SQL Patching tool version 23.26.0.0.0 Lim on Mon Jan 12 11:51:42 2026
Copyright (c) 2012, 2025, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/product/23.0.0.0/dbhome_1/cfgtoollogs/sqlpatch/sqlpatch_sid_CTEST26_ts_2026_01_12_11_51_42_pid_24026/sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...done

Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)

Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of interim SQL patches:
  No interim patches found

Current state of release update SQL patches:
  Binary registry:
    23.26.0.0.0 Release_Update 250927052030: Installed
  PDB MIG26REPLUG:
    Applied 23.26.0.0.0 Release_Update 250927052030 successfully

Adding patches to installation queue and performing prereq checks...done
Installation queue:
  For the following PDBs: MIG26REPLUG
    No interim patches need to be rolled back
    No release update patches need to be installed
    No interim patches need to be applied

Bypass install queue:
  For the following PDBs: MIG26REPLUG
    No interim rollbacks will bypass install
    Patch 38404116 (Database Release Update : 23.26.0.0.0 (38404116) Gold Image): will bypass install
      Apply from 23.26.0.0.0 Release_Update 250927052030 to 23.26.0.0.0 Release_Update 250927052030
    No interim applys will bypass install


Installation queue after removing bypass entries...
Installation queue:
  For the following PDBs: MIG26REPLUG
    No interim patches need to be rolled back
    No release update patches need to be installed
    No interim patches need to be applied


Processing bypass install queue:
  Patch 38404116 apply (pdb MIG26REPLUG): SUCCESS (bypass_install)

SQL Patching tool complete on Mon Jan 12 11:51:50 2026

Now let’s restart the PDB in normal mode:

sqlplus / as sysdba

alter pluggable database MIG26REPLUG close immediate;
alter pluggable database MIG26REPLUG open;
alter pluggable database MIG26REPLUG save state;

show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 MIG26DBLINK			  READ WRITE NO
	 4 MIG26DTPUMP			  READ WRITE NO
	 6 MIG26REPLUG			  READ WRITE NO

This PDB is now running on 26ai.

Let’s check my consultants table:

alter session set container=MIG26REPLUG;
select start_date from dbi.consultants where name like 'DUB%';
START_DAT
---------
04-SEP-17

exit

Pros & cons

  • + Easy migration
  • + Preserve source files with the copy option
  • + Rather fast
  • – Requires downtime of the source PDB
  • – Reusing the source database is only possible by plugging the PDB back in with the XML file (a sketch follows this list)
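
A minimal sketch of that plug-back on the 19c side, assuming the original datafiles are still in place (they are, since the plug into 26ai used the COPY option):

sqlplus / as sysdba
create pluggable database PRODDB01 using '/backup/dumps/PRODDB01.xml' nocopy tempfile reuse;
alter pluggable database PRODDB01 open;
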
Conclusion

There are multiple ways of migrating to 26ai, and quite a lot of new features come with this release. Whether you will use them or not, migrating to this version is something to start thinking about in 2026.

The article Migrate your database to 26ai on ODA first appeared on the dbi Blog.

Upgrade Classic Extracts to Integrated Mode Before Migrating to GoldenGate 23ai

Yann Neuhaus - Mon, 2026-01-12 02:05

Classic Extracts have been deprecated since GoldenGate 19c and shouldn’t have been part of your GoldenGate replications for a long time now. Yet some customers still have them, and with GoldenGate 19c support coming to an end in a few months, it is high time to upgrade to GoldenGate 23ai.

However, GoldenGate 23ai doesn’t support classic extracts, and the migration utility provided by Oracle will not take care of them. So, what should you do with your classic extracts before migrating to GoldenGate 23ai?

Upgrade the extracts to integrated mode

The natural solution is to migrate the extracts from the classic mode to the integrated mode. It might not be possible, depending on your situation, but it is the easiest way to get your extracts ready for a GoldenGate 23ai migration.

To do so, we will use the info extract <extract> upgrade command, which tells us whether the extract is ready for an upgrade. Log in with the correct alias, stop the extract and register the extract before running the command.

GGSCI (vmogg) > dblogin useridalias ggadmin
Successfully logged into database.

GGSCI (vmogg as ggadmin@DB1) > stop extract ext

Sending STOP request to EXTRACT EXT ...
Request processed.

GGSCI (vmogg as ggadmin@DB1) > info extract ext

EXTRACT    EXT       Last Started 2025-12-21 10:13   Status STOPPED
Checkpoint Lag       00:00:00 (updated 00:00:10 ago)
Log Read Checkpoint  Oracle Redo Logs
                     2025-12-21 10:19:13  Seqno 10, RBA 18472448
                     SCN 0.2170788 (2170788)

GGSCI (vmogg as ggadmin@DB1) > register extract ext database

2025-12-21 10:21:13  INFO   OGG-02003  Extract EXT successfully registered with database at SCN 2170800.

GGSCI (vmogg as ggadmin@DB1) > info extract ext upgrade

You might need to run the command multiple times, depending on its output. Once the extract is ready to be upgraded, you will see the following output.

GGSCI (vmogg as ggadmin@DB1) > info extract ext upgrade
Extract EXT is ready to be upgraded to integrated capture.

After this, run the upgrade command and restart the extract. The extract is now in Oracle Integrated Redo Logs mode.

GGSCI (vmogg as ggadmin@DB1) > alter extract ext upgrade integrated tranlog
Extract EXT successfully upgraded to integrated capture.

GGSCI (vmogg as ggadmin@DB1) > info extract ext

EXTRACT    EXT       Initialized   2025-12-21 10:13   Status STOPPED
Checkpoint Lag       00:00:00 (updated 00:11:43 ago)
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2025-12-21 10:19:13
                     SCN 0.2170788 (2170788)

GGSCI (vmogg as ggadmin@DB1) > start extract ext

Sending START request to MANAGER ...
EXTRACT EXT starting

If your extract doesn’t start, have a detailed look at the parameters used in the extract with view params ext. Your extract could be using parameters that were accepted for classic extracts but are not supported for integrated extracts.
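
As an illustration, a purely hypothetical parameter file might look like the one below; the point is simply to review every option against the integrated capture documentation before restarting.

GGSCI (vmogg as ggadmin@DB1) > view params ext

EXTRACT ext
USERIDALIAS ggadmin
-- Review TRANLOGOPTIONS and similar entries here: some of them only apply to
-- classic capture and must be removed or replaced for integrated capture.
EXTTRAIL ./dirdat/lt
TABLE app.*;
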

Repeat this procedure for all your classic extracts in all your GoldenGate deployments, and you will be ready to migrate to GoldenGate 23ai!

OGG-08241 when registering the extract

If you get the following error when registering the extract:

OGG-08241  ERROR: This EXTRACT EXT is already registered with the database.

Check the dba_capture view of the database you are trying to register the extract into. If you see an extract already registered, it might mean that you previously registered an integrated extract with the same name.

SELECT capture_name, queue_name, queue_owner, source_database, start_scn from dba_capture;
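
If the stale registration belongs to an old extract of the same name that no longer exists, one possible cleanup is to unregister it and register again; this is only a sketch, so double-check the capture name before running it in production.

GGSCI (vmogg as ggadmin@DB1) > unregister extract ext database
GGSCI (vmogg as ggadmin@DB1) > register extract ext database
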
‘Recovery SCN has not reached SCN’ error

If you receive this error when running the upgrade command, you should try restarting the extract and stopping it again.

Extract ext is not ready to be upgraded because recovery SCN ... has not reached SCN ...

start extract ext
stop extract ext
info extract ext upgrade
What if my extract is ABENDED?

Technically, the info extract ... upgrade command works for ABENDED extracts. But depending on the reason your extract ended up in this state, you might not be able to upgrade.

The article Upgrade Classic Extracts to Integrated Mode Before Migrating to GoldenGate 23ai first appeared on the dbi Blog.
