Feed aggregator
Protected: Upgrade RHEL from 9.6 to 10.1 (when running PostgreSQL/Patroni)
The article Protected: Upgrade RHEL from 9.6 to 10.1 (when running PostgreSQL/Patroni) first appeared on the dbi Blog.
Deployment Creation INS-85037 Error With GoldenGate 26ai for DB2 z/OS
While automating a GoldenGate installation for DB2, I recently ran into an INS-85037 error when running the configuration assistant oggca.sh. Because this error is quite common and has many different possible root causes, I wanted to write about it.
If you’re wondering how to set up GoldenGate 26ai for DB2 z/OS, it is very similar to what you would do with GoldenGate for Oracle. For more information on standard GoldenGate setups, you can read my blog posts about both 26ai and 23ai installations.
For the binary installation, the main difference is that INSTALL_OPTION should be set to DB2ZOS. A complete oggcore.rsp response file would look like this:
oracle.install.responseFileVersion=/oracle/install/rspfmt_ogginstall_response_schema_v23_1_0
INSTALL_OPTION=DB2ZOS
SOFTWARE_LOCATION=/u01/app/oracle/product/oggzos
INVENTORY_LOCATION=/u01/app/oraInventory
UNIX_GROUP_NAME=oinstall
When running the configuration assistant, some options are not available, but the main difference is in the environment variables section of the response file. You should have an IBMCLIDRIVER variable set to your DB2 driver’s path.
ENV_LD_LIBRARY_PATH=${IBMCLIDRIVER}/lib:${OGG_HOME}/lib
IBMCLIDRIVER=/path/to/ibmclidriver
ENV_USER_VARS=
Possible Solution for INS-85037
That being said, here is the exact error I had when running the Configuration Assistant oggca.sh:
[FATAL] [INS-85037] Deployment creation failed.
ACTION: Check logs at /u01/app/oraInventory/logs/OGGCAConfigActions2026-03-22_15-19-15PM for more information.
*MORE DETAILS*
Return code 503 (Service Unavailable) does not match the expected code 201 (Created).
Verification failed for REST call to 'http://127.0.0.1:7810/services/v2/authorizations/security/ogguser'
Results for "Add a new deployment":
..."Verifying Service Manager deployment status.": SUCCEEDED
..."Adding 'zos_test' deployment.": SUCCEEDED
...."Configuring and starting the Administration Service.": SUCCEEDED
..."Verifying the initial Administration Service configuration.": SUCCEEDED
..."Adding user 'ogguser' to administer the deployment.": FAILED
Log of this session available at: /u01/app/oraInventory/logs/OGGCAConfigActions2026-03-22_15-19-15PM
The deployment creation failed and the associated files will be deleted from disk. Oracle recommends that if you want to keep the log files, you should move them to another location.
Log files will be copied to:
/u01/app/oraInventory/logs/OGGCAConfigActions2026-03-22_15-19-15PM/userdeploy_logs_2026-03-22_15-19-15PM
[WARNING] [INS-32090] Software installation was unsuccessful.
ACTION: Refer to the log files for details or contact Oracle Support Services.
Unfortunately, the installation logs did not show anything other than the following:
SEVERE: Deployment creation job failed.
INFO: Service Manager deployment that was created as part of the process needs to be removed.
INFO: Running clean-up job for Service Manager.
SEVERE: Removing Service Manager deployment.
The deployment and the service manager get deleted after the installation failure, but the logs are also copied to the oraInventory installation logs. Looking at the ServiceManager.log in the smdeploy folder, we don’t get much information.
ERROR| Configuration does not contain a 'config/network' specification. (ServiceManager.Topology)
The same applies to the restapi.log, where the logs start after the initial deployment creation error. Unfortunately, none of this was really helpful in my case. After quite some digging, I found that the response file I was using when running oggca.sh had an error. In the custom section for environment variables, I had the following settings:
# SECTION G - ENVIRONMENT VARIABLES
ENV_LD_LIBRARY_PATH=${IBMCLIDRIVER}/lib:${OGG_HOME}/lib
IBMCLIDRIVER=/u01/app/ibm/db2_odbc_cli_11_5
ENV_USER_VARS=
This is the same as what I showed earlier, except that the clidriver path was incomplete.
oracle@vmogg:/home/oracle/ [ogg] ls -l /u01/app/ibm/db2_odbc_cli_11_5
drwxr-xr-x 3 oracle oinstall 23 Mar 22 2026 odbc_cli
oracle@vmogg:/home/oracle/ [ogg] ls -l /u01/app/ibm/db2_odbc_cli_11_5/odbc_cli/clidriver/
-r-xr-xr-x 1 oracle oinstall 4170 Mar 17 2021 Readme.txt
drwxr-xr-x 2 oracle oinstall 36 Mar 22 2026 adm
drwxr-xr-x 2 oracle oinstall 122 Mar 22 2026 bin
drwxr-xr-x 2 oracle oinstall 197 Mar 22 2026 bnd
drwxr-xr-x 2 oracle oinstall 157 Mar 22 09:16 cfg
drwxr-xr-x 2 oracle oinstall 24 Mar 22 2026 cfecache
drwxr-xr-x 4 oracle oinstall 27 Mar 22 2026 conv
drwxr-xr-x 3 oracle oinstall 49 Mar 22 09:26 db2dump
drwxr-xr-x 3 oracle oinstall 217 Mar 22 2026 lib
drwxr-xr-x 3 oracle oinstall 124 Mar 22 09:26 license
drwxr-xr-x 3 oracle oinstall 28 Mar 22 2026 msg
drwxr-xr-x 3 oracle oinstall 21 Mar 22 2026 properties
drwxr-xr-x 3 oracle oinstall 20 Mar 22 2026 security32
drwxr-xr-x 3 oracle oinstall 20 Mar 22 2026 security64
After updating the oggca.rsp response file with the correct path, the configuration assistant ran successfully.
oracle@vmogg:/u01/app/oracle/product/ogg26/bin [ogg] oggca.sh -silent -responseFile /home/oracle/oggca.rsp
Successfully Setup Software.
Next time you encounter an error like this when setting up GoldenGate for DB2, make sure to check not only the variable value but also the actual content of the IBMCLIDRIVER directory!
NB: If you hit this error in any other kind of setup, always check the entire content of the response file you are using, as well as the prerequisites (the clidriver in this case, but it could be XAG, etc.).
Discover refreshable clone PDB with Autoupgrade
AutoUpgrade with a refreshable clone is basically “zero‑panic upgrades with a live copy of your database”.
What problem it solves
Traditionally you had to schedule a maintenance window, stop everything, take a backup, upgrade, and hope nothing went wrong.
With a refreshable clone PDB, AutoUpgrade builds and continuously syncs a copy of your database while production stays online. At cutover time, you just stop users, do a last refresh, convert/upgrade the clone, and switch them over. If something goes wrong, the original source is untouched and you can fall back quickly.
Core idea in simple terms
Think of your non‑CDB or old‑version PDB as the “master” and the refreshable clone PDB as a “follow‑me” copy sitting in the target CDB.
AutoUpgrade:
- Creates a PDB in the target CDB via database link (initial clone of datafiles).
- Marks it as refreshable, so redo from the source is applied and it keeps rolling forward.
- Lets you test the clone (read‑only) while users are still working on the source.
- At a controlled start time, runs a last refresh, disconnects it from the source, converts it to a normal PDB, and upgrades it.
From your point of view: you prepare everything days in advance, and the real downtime shrinks to “final refresh + upgrade + app switch.”
High‑level lifecycle
For a non‑CDB to PDB migration or a PDB upgrade, the flow looks like this:
Preparation
- You have a source: non‑CDB 12.2/19c or older PDB.
- You have a target: a higher‑version CDB (for example 23ai/26ai) with enough space and network.
- You configure AutoUpgrade with the source and target, plus the parameter telling it to use refreshable clone PDB.
- In deploy mode, AutoUpgrade creates the pluggable database in the target CDB via DB link, copies the datafiles, and defines it as refreshable.
- From now on, redo is shipped from source to target and applied, so the clone stays close to current.
- The source database stays fully online; business keeps running.
- The refreshable clone is read‑only, so you can query it, run app smoke tests, check performance characteristics, etc.
- AutoUpgrade keeps the job running in the background, doing periodic refreshes.
- When you reach the maintenance window, users leave the system and you quiesce activity on the source.
- AutoUpgrade performs a final refresh: last redo from source is applied on the clone so you don’t lose any committed data.
- The clone is then disconnected from the source, turned into a regular PDB, and AutoUpgrade moves into the upgrade and conversion steps (non‑CDB to PDB conversion if needed, then catalog/PSU/UTLRP, etc.).
- You point applications to the new PDB in the target CDB.
- The original source database still exists; if you hit a show‑stopper, you can redirect apps back to it and plan a new attempt.
In practice, the “scary” part is only the final refresh and the moment you switch your apps.
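The preparation step above can be sketched as an AutoUpgrade config file. Treat this as an illustration only: the entries, in particular the source_dblink line and its refresh rate, are assumptions to verify against the AutoUpgrade documentation for your version, and all paths and names are placeholders.

```ini
global.autoupg_log_dir=/home/oracle/autoupgrade
# hypothetical 19c source and 26ai target homes
upg1.sid=SRC19
upg1.source_home=/u01/app/oracle/product/19
upg1.target_home=/u01/app/oracle/product/26
upg1.target_cdb=CDB26
upg1.target_pdb_name=PDBSRC
# assumed syntax: DB link to the source plus a refresh rate in seconds
upg1.source_dblink.PDBSRC=CLONE_LINK 600
upg1.start_time=NOW
```

With a file like this, you would run AutoUpgrade in deploy mode; the clone keeps refreshing until the configured start time, when the final refresh and the upgrade begin.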
Why DBAs like this pattern
Some clear advantages:
- Minimal downtime: Most of the heavy lifting (copy + sync) happens while production is running; downtime is limited to final refresh and upgrade.
- Built‑in rollback: Because the source stays untouched, you always have a clean fallback without restore/recovery.
- Realistic testing: You test against a clone built from real production data that is almost up‑to‑date, not a weeks‑old backup.
- Automation: AutoUpgrade orchestrates the create‑clone, refresh, disconnect, convert, and upgrade steps; you mostly steer with parameters and commands instead of custom scripts.
Trade‑offs are mainly around resources: you need disk, CPU, and network to maintain the refreshable clone, and you have to ensure redo shipping is reliable (archivelog gaps or network glitches can break the refresh and need fixing).
Typical example
Imagine you need to move a 19c non‑CDB to a new 26ai CDB on a different host, with less than 30 minutes of downtime:
- Monday: you configure AutoUpgrade with the refreshable clone option, start the job. The tool creates the PDB clone in the 26ai CDB and starts streaming redo. Users never notice.
- Next days: you let it refresh every few minutes, developers connect read‑only to the clone and test their application against 26ai. Everything looks good.
- Saturday night: you enter the maintenance window, let open transactions finish, stop app traffic, and tell AutoUpgrade to proceed to the final refresh. Once that’s done, it disconnects the clone, upgrades it, and runs post‑upgrade steps.
- After checks, you change the service names on the app side so they point to the new PDB. Your downtime is mostly spent waiting for the upgrade scripts, not copying terabytes of data.
How to Standardize SQL Server Disks on VMs using Ansible
Today, the benefits of automation no longer need much explanation: saving time, reducing human error, and ensuring every environment remains aligned with internal standards. What is less obvious, however, is how using an Ansible Playbook can provide advantages that more traditional scripting approaches — such as large PowerShell scripts — struggle to offer. That is exactly what I want to explore here.
When you complete an automated deployment of a SQL Server environment on Windows Server, there is a real sense of achievement. You have invested time and effort, and you expect that investment to pay off thanks to the reliability and repeatability of automation.
But everything changes when the next Windows Server upgrade or SQL Server version arrives… or when corporate standards evolve. Suddenly, you need to reopen a multi-thousand‑line PowerShell script and:
- Integrate the required changes while keeping execution stable,
- Avoid subtle but potentially critical regressions,
- Maintain clear and usable logging,
- Retest the entire automation workflow,
- Troubleshoot new issues introduced by the modifications.
This is precisely the type of situation where Ansible becomes a far better long‑term investment. Its architecture and philosophy offer several advantages:
- Native idempotence, ensuring the same result even after multiple runs,
- A declarative YAML approach, focusing on the desired end state rather than the execution steps,
- Windows Server and SQL Server modules, providing built‑in idempotence and saving significant time,
- Agentless connectivity, simplifying deployment on new machines,
- A modular structure (roles, modules, variables), making adaptation and reuse of your automation much easier.
In this article, I will give you a concrete overview by walking you through how to configure the disks required for SQL Server using Ansible.
1- Map iSCSI controllers to disk numbers
When developing an Ansible Playbook, one fundamental principle is to design for idempotence from the very start, not just rely on idempotent modules.
On Windows, disk numbering is not guaranteed: it depends on several factors – how disks are detected at startup, the firmware, and so on.
As a result, disk numbers may change from one reboot to another.
To ensure consistent and reliable execution of your deployment, this behavior must be accounted for directly in the design of your Playbook.
Otherwise, it may introduce incorrect behavior and lead to:
- formatting the wrong disk,
- mounting volumes on incorrect devices,
- completely breaking the SQL Server provisioning workflow.
In other words, idempotence is no longer guaranteed.
To ensure stable and predictable executions, you must dynamically determine the correct disk numbers at each execution.
You can use the Get-Disk PowerShell cmdlet to achieve this, by searching for the iSCSI controller number and LUN position in the Location property.
$adapter = {{ disk.target }}
$lun = {{ disk.location }}
(Get-Disk | Where-Object {
$_.Location -match "Adapter $adapter\s+:.*\s+LUN $lun"
}).number
This gives us the mapping between the VM specifications and the Windows disk numbers.
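The matching logic of that regex can be illustrated outside PowerShell. Here is a minimal sketch in Python, where the Location strings are made-up examples of the format Windows reports and the function name is mine:

```python
import re

def find_disk_number(disks, adapter, lun):
    """Return the number of the disk whose Location string matches the
    given iSCSI adapter and LUN, using the same regex as the Playbook."""
    pattern = re.compile(rf"Adapter {adapter}\s+:.*\s+LUN {lun}")
    for number, location in disks:
        if pattern.search(location):
            return number
    return None  # no disk found on that adapter/LUN
```

Searching by Location rather than trusting disk numbers is exactly the point of this section: the number on the left can change across reboots, while the adapter/LUN pair stays stable.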
2- Loop over SQL Server disks
Since we often have several disks to configure (Data, Logs, TempDB), we need to perform the same actions repeatedly on each disk:
- dynamically determine the disk number,
- initialize it in GPT,
- create the partition and format the volume in NTFS with a 64 KB allocation unit size,
- assign an access path (drive letter or mountpoint),
- apply certain specific configuration settings, such as disabling indexing,
- verify the compliance of the disk configuration.
As these actions are identical for all disks, the best approach is to factor the tasks out.
The Ansible pattern for such a scenario is to place them in a dedicated task file and include it in a loop.
---
- name: Manage all disk properties based on Location and Target numbers
ansible.builtin.include_tasks: disks_properties.yml
loop:
- name: data
location: "{{ disk_specs.data.location }}"
target: "{{ disk_specs.data.target }}"
label: "{{ disk_specs.data.label }}"
letter: "{{ disk_specs.data.letter }}"
- name: logs
location: "{{ disk_specs.logs.location }}"
target: "{{ disk_specs.logs.target }}"
label: "{{ disk_specs.logs.label }}"
letter: "{{ disk_specs.logs.letter }}"
- name: tempdb
location: "{{ disk_specs.tempdb.location }}"
target: "{{ disk_specs.tempdb.target }}"
label: "{{ disk_specs.tempdb.label }}"
letter: "{{ disk_specs.tempdb.letter }}"
loop_control:
loop_var: disk
...
3- Configure the disk properties
Since the loop in the previous section includes the disks_properties.yml file, we can now implement the configuration actions inside this file.
First, we will retrieve the disk number and then begin configuring the disk according to best practices and our internal standards.
To guarantee idempotence, we mark this step as never changed, since it is a read-only Get action:
---
- name: Identify the {{ disk.name }} disk number
ansible.windows.win_shell: |
$adapter = {{ disk.target }}
$lun = {{ disk.location }}
(Get-Disk | Where-Object {
$_.Location -match "Adapter $adapter\s+:.*\s+LUN $lun"
}).number
register: disk_num
changed_when: false
Then, we register the disk number as an Ansible fact for the remainder of this task file execution.
- name: Set fact for {{ disk.name }} disk number
ansible.builtin.set_fact:
"disk_number_{{ disk.name }}": "{{ disk_num.stdout | trim | int }}"
We can now initialize the disk using the community.windows collection. As a general rule, prefer an Ansible module over raw commands whenever one exists.
The disk_bps.partition_style parameter is a variable of my Ansible role, set to guarantee that GPT is used.
- name: Initialize disks
community.windows.win_initialize_disk:
disk_number: "{{ lookup('vars', 'disk_number_' + disk.name) }}"
style: "{{ disk_bps.partition_style }}"
From there, we can create our partition:
- name: Create partition with letter {{ disk.letter }} for disk {{ disk.name }}
community.windows.win_partition:
drive_letter: "{{ disk.letter }}"
partition_size: "-1"
disk_number: "{{ lookup('vars', 'disk_number_' + disk.name) }}"
And now format the volume with a 64 KB allocation unit size:
- name: Format volume {{ disk.letter }} on disk {{ disk.name }} with label {{ disk.label }}
community.windows.win_format:
drive_letter: "{{ disk.letter }}"
allocation_unit_size: "{{ disk_bps.allocation_unit_size_bytes }}"
new_label: "{{ disk.label }}"
...
As mentioned in the previous section, we can also add tasks for specific internal standards, or tasks that verify the compliance of the disk configuration.
4- Execute the Playbook
Now that our Ansible Role windows_disks is ready, we can call it through a Playbook.
Of course, the variables must be adjusted to match the actual iSCSI configuration of the virtual machine.
---
- name: Configure Disks by detecting Disk Number
hosts: Raynor
gather_facts: false
vars:
disk_specs:
data:
location: 0
target: 1
label: SQL_DATA
letter: E
logs:
location: 0
target: 2
label: SQL_TLOG
letter: L
tempdb:
location: 0
target: 3
label: SQL_TEMPDB
letter: T
tasks:
- name: gather facts
ansible.builtin.setup:
changed_when: false
tags: [always]
- name: Configure Disks
ansible.builtin.import_role:
name: windows_disks
tags: windows_disks
...
CONCLUSION
We have seen how Ansible makes automation easier to maintain and evolve, by focusing on the logic of the deployment rather than on the code needed to achieve it.
Now, updating your standards or upgrading versions will no longer require rewriting scripts, but mainly adapting variables.
However, it is important to be aware that idempotence must also be maintained through design.
Creating Path Connections with GoldenGate REST API
When automating your GoldenGate deployment management, you might want to create path connections with the GoldenGate REST API. This is an important aspect when connecting GoldenGate deployments with distribution paths. A first step towards this is to create a path connection on the same deployment as the distribution server where the distribution path will run.
In the GoldenGate web UI, you can easily create Path Connections. Just go to the Path Connections tab, add a path, and specify the following information:
- Credential Alias: Alias used to connect to the target deployment. It doesn’t have to match any name on the target deployment.
- User ID: Real username that must exist on the target deployment.
- Password: Password associated with the User ID given before.
restapi.log analysis
But what about the REST API? When looking at the list of endpoints given by Oracle, no REST endpoint explicitly refers to path connections, so how can you create path connections through the REST API?
The key point to understand is that path connections are not independent GoldenGate objects. In fact, they exist as a subset of another object, which you should know by now: aliases. Aliases are created to store credentials and are organized in domains. The default domain is called OracleGoldenGate, and Oracle has a reserved name for a subtype of domains: Network.
We can see this easily when creating a path connection through the web UI, and then looking at the restapi.log file. Open the log file located in the var/log folder of your deployment, or read the blog I wrote about restapi.log analysis. Using this method, we see the endpoint and the content of the API call. Here, for instance, I created a path connection from the web UI, to connect to ogg_user with the ogg_target alias.
oracle@vmogg: jq -c 'select (.request.context.verb == "POST" and .request.context.uriTemplate == "/services/{version}/credentials/{domain}/{alias}")' restapi.ndjson
{"request":{"context":{"verb":"POST","uri":"/services/v2/credentials/Network/ogg_target","uriTemplate":"/services/{version}/credentials/{domain}/{alias}"}},"content":{"userid":"ogg_user","password":"** Masked **"},...}
Path connection creation with the REST API
To summarize, path connections are just aliases in the Network domain. This simplifies the creation of path connections. You just need to make a POST API call to the alias endpoint, specifying Network as the domain. The exact endpoint is then /services/{version}/credentials/Network/{alias}.
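Outside of any client wrapper, the call can be sketched with the Python standard library. The endpoint and payload shape come from the restapi.log capture above; the host, credentials, and helper names are placeholders of my own:

```python
import base64
import json
import urllib.request

def build_path_connection_request(base_url, alias, userid, password):
    """Build the URL and JSON body for a path connection, i.e. an alias
    created in the reserved 'Network' domain."""
    url = f"{base_url}/services/v2/credentials/Network/{alias}"
    payload = {"userid": userid, "password": password}
    return url, payload

def create_path_connection(base_url, admin_user, admin_pwd,
                           alias, userid, password):
    """POST the alias, authenticating as a deployment administrator."""
    url, payload = build_path_connection_request(base_url, alias,
                                                 userid, password)
    token = base64.b64encode(f"{admin_user}:{admin_pwd}".encode()).decode()
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        method="POST",
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {token}"},
    )
    return urllib.request.urlopen(req)
```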
Quick example: using the GoldenGate Python client I presented in another blog, let’s create an alias in the Network domain:
>>> from oggrestapi import OGGRestAPI
>>> ogg_client = OGGRestAPI(
url="https://vmogg",
username="ogg",
password="ogg")
Connected to OGG REST API at https://vmogg
>>> ogg_client.create_alias(
alias='ogg_dbi_blog',
domain='Network',
data={
"userid": "ogg_user_on_target",
"password": "***"
}
)
{'$schema': 'api:standardResponse', 'links': [{'rel': 'canonical', 'href': 'https://vmogg/services/v2/credentials/Network/ogg_dbi_blog', 'mediaType': 'application/json'}, {'rel': 'self', 'href': 'https://vmogg/services/v2/credentials/Network/ogg_dbi_blog', 'mediaType': 'application/json'}], 'messages': [{'$schema': 'ogg:message', 'title': 'Credential store altered.', 'code': 'OGG-15114', 'severity': 'INFO', 'issued': '2026-03-22T10:14:01Z', 'type': 'https://docs.oracle.com/en/middleware/goldengate/core/23.26/error-messages/'}]}
After refreshing the web UI, the newly created path connection is visible.
Dctm – Another DM_LICENSE_E_INVALID_LICENSE error but caused by JMS this time
At the end of last year, I published a first blog about a DM_LICENSE_E_INVALID_LICENSE error in D2 SSO login through OTDS. The root cause in that previous post was a duplicate user with one lowercase and one uppercase user_login_name. However, I did mention that there can be several reasons for that error. In this blog, I will describe another such case.
1. Symptoms in D2 logs
The generated D2 logs associated with this new issue are almost exactly the same. The only difference is that the Repository returns “null” as the userid (user_name). See the message “Authentication failed for user null with docbase REPO_NAME”. This wasn’t the case in the other blog post:
[tomcat@d2-0 logs]$ cat D2.log
...
2025-12-08 12:21:14,784 UTC [INFO ] (https-jsse-nio-8080-exec-47) - c.emc.x3.portal.server.X3HttpSessionListener : Created http session 8531D373A3EA12A398B158AF656E7D20
2025-12-08 12:21:14,784 UTC [DEBUG] (https-jsse-nio-8080-exec-47) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : OTDS : No user name on the Http session yet
2025-12-08 12:21:14,785 UTC [DEBUG] (https-jsse-nio-8080-exec-47) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : OTDS : No access_token found in Http request or Cookie Redirecting to OTDS Server
2025-12-08 12:21:14,786 UTC [DEBUG] (https-jsse-nio-8080-exec-47) - c.e.x.p.s.f.a.X3TrustHttpAuthenticationFilter : Identified scheme : https
2025-12-08 12:21:14,786 UTC [DEBUG] (https-jsse-nio-8080-exec-47) - c.e.x.p.s.f.a.X3TrustHttpAuthenticationFilter : Identified server name : d2.domain.com
2025-12-08 12:21:14,787 UTC [DEBUG] (https-jsse-nio-8080-exec-47) - c.e.x.p.s.f.a.X3TrustHttpAuthenticationFilter : Identified server port : 443
2025-12-08 12:21:14,787 UTC [DEBUG] (https-jsse-nio-8080-exec-47) - c.e.x.p.s.f.a.X3TrustHttpAuthenticationFilter : Built server host is : https://d2.domain.com:443
2025-12-08 12:21:14,788 UTC [DEBUG] (https-jsse-nio-8080-exec-47) - o.o.e.logging.slf4j.Slf4JLogLevelHandlers$4 : [EVENT SUCCESS -> /D2/HTTPUtilities] header name=Host, value=d2.domain.com
2025-12-08 12:21:14,789 UTC [DEBUG] (https-jsse-nio-8080-exec-47) - o.o.e.logging.slf4j.Slf4JLogLevelHandlers$4 : [EVENT SUCCESS -> /D2/HTTPUtilities] MaxHeaderValueSize: 8192
2025-12-08 12:21:14,792 UTC [DEBUG] (https-jsse-nio-8080-exec-47) - o.o.e.logging.slf4j.Slf4JLogLevelHandlers$4 : [EVENT SUCCESS -> /D2/HTTPUtilities] validating the input valued2.domain.com
2025-12-08 12:21:14,793 UTC [DEBUG] (https-jsse-nio-8080-exec-47) - c.e.x.p.s.f.a.X3TrustHttpAuthenticationFilter : Identified host : d2.domain.com
2025-12-08 12:21:14,794 UTC [DEBUG] (https-jsse-nio-8080-exec-47) - c.e.x.p.s.f.a.X3TrustHttpAuthenticationFilter : Overall base URL built : https://d2.domain.com/D2
2025-12-08 12:21:14,795 UTC [DEBUG] (https-jsse-nio-8080-exec-47) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : Redirection url post encoding - https%3A%2F%2Fd2.domain.com%2FD2%2Fd2_otds.html%3ForigUrl%3D%2FD2%2F
2025-12-08 12:21:14,797 UTC [DEBUG] (https-jsse-nio-8080-exec-47) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : OTDS : OAUTH final login sendRedirect URL : https://otds-mfa.domain.com/otdsws/oauth2/auth?response_type=token&client_id=dctm-ns-d2&redirect_uri=https%3A%2F%2Fd2.domain.com%2FD2%2Fd2_otds.html%3ForigUrl%3D%2FD2%2F&logon_appname=Documentum+Client+CE+23.4
2025-12-08 12:21:14,798 UTC [DEBUG] (https-jsse-nio-8080-exec-47) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : OTDS : Sending redirection as it's not a rpc call : https://otds-mfa.domain.com/otdsws/oauth2/auth?response_type=token&client_id=dctm-ns-d2&redirect_uri=https%3A%2F%2Fd2.domain.com%2FD2%2Fd2_otds.html%3ForigUrl%3D%2FD2%2F&logon_appname=Documentum+Client+CE+23.4
2025-12-08 12:21:15,018 UTC [DEBUG] (https-jsse-nio-8080-exec-5) - o.o.e.logging.slf4j.Slf4JLogLevelHandlers$4 : [EVENT SUCCESS -> /D2/HTTPUtilities] MaxHeaderKeySize: 256
2025-12-08 12:21:15,018 UTC [DEBUG] (https-jsse-nio-8080-exec-5) - o.o.e.logging.slf4j.Slf4JLogLevelHandlers$4 : [EVENT SUCCESS -> /D2/HTTPUtilities] MaxHeaderValueSize: 8192
2025-12-08 12:21:15,020 UTC [DEBUG] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : OTDS : No user name on the Http session yet
2025-12-08 12:21:15,021 UTC [DEBUG] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : OTDS : Found access_token on Http Cookie, invalidating the cookie by setting maxAge 0
2025-12-08 12:21:15,022 UTC [INFO ] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : OTDS : setting the cookie as secure as its a https request
2025-12-08 12:21:15,024 UTC [DEBUG] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : OTDS : OTDS responded with a oauth token
2025-12-08 12:21:15,025 UTC [INFO ] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : ------ Begin getUntrustedJwtHeader : eyJraWQiOiI1YjM4...oSD8Xh3vVmkekcA
2025-12-08 12:21:15,026 UTC [INFO ] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : getUntrustedJwtHeader oauthTokenWithoutSignature : eyJraWQiOiI1YjM4...i1xYWN0LWQyIn0.
2025-12-08 12:21:15,614 UTC [DEBUG] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : ------ Begin validateOTDSTokenClaims : MYUSERID
2025-12-08 12:21:15,615 UTC [DEBUG] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : validateOTDSTokenClaims for user : MYUSERID , OTDS : currenttime: 1765196475615 expirationtime: 1765200074000
2025-12-08 12:21:15,615 UTC [INFO ] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : ------ End validateOTDSTokenClaims : MYUSERID
2025-12-08 12:21:15,615 UTC [INFO ] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : PublicKey for Key id : 5b38b...bf487 exists
2025-12-08 12:21:15,617 UTC [DEBUG] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : OTDS Deafault Repository from shiro configured : REPO_NAME
2025-12-08 12:21:15,617 UTC [DEBUG] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : OTDS : generating DM_Ticket for user : MYUSERID in Repository : REPO_NAME
2025-12-08 12:21:16,522 UTC [ERROR] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : OTDS : OAuth Token Error occurred while generating a DCTM MultiUse Ticket for user : MYUSERID
2025-12-08 12:21:16,522 UTC [ERROR] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : OTDS : OAuth Token Error please validate the OTDS Config of user exists in Repository
com.documentum.fc.client.DfAuthenticationException: [DM_SESSION_E_AUTH_FAIL]error: "Authentication failed for user null with docbase REPO_NAME."
at com.documentum.fc.client.impl.docbase.DocbaseExceptionMapper.newException(DocbaseExceptionMapper.java:52)
at com.documentum.fc.client.impl.connection.docbase.MessageEntry.getException(MessageEntry.java:39)
at com.documentum.fc.client.impl.connection.docbase.DocbaseMessageManager.getException(DocbaseMessageManager.java:137)
at com.documentum.fc.client.impl.connection.docbase.netwise.NetwiseDocbaseRpcClient.checkForMessages(NetwiseDocbaseRpcClient.java:332)
at com.documentum.fc.client.impl.connection.docbase.netwise.NetwiseDocbaseRpcClient.applyForObject(NetwiseDocbaseRpcClient.java:680)
at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection$8.evaluate(DocbaseConnection.java:1572)
at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.evaluateRpc(DocbaseConnection.java:1272)
at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.applyForObject(DocbaseConnection.java:1564)
at com.documentum.fc.client.impl.docbase.DocbaseApi.authenticateUser(DocbaseApi.java:1894)
at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.authenticate(DocbaseConnection.java:460)
at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.open(DocbaseConnection.java:140)
at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.<init>(DocbaseConnection.java:109)
at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.<init>(DocbaseConnection.java:69)
at com.documentum.fc.client.impl.connection.docbase.DocbaseConnectionFactory.newDocbaseConnection(DocbaseConnectionFactory.java:32)
at com.documentum.fc.client.impl.connection.docbase.DocbaseConnectionManager.createNewConnection(DocbaseConnectionManager.java:202)
at com.documentum.fc.client.impl.connection.docbase.DocbaseConnectionManager.getDocbaseConnection(DocbaseConnectionManager.java:132)
at com.documentum.fc.client.impl.session.SessionFactory.newSession(SessionFactory.java:24)
...
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1190)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63)
at java.base/java.lang.Thread.run(Thread.java:840)
2025-12-08 12:21:16,524 UTC [DEBUG] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : redirectToErrorPage : Redirecting to Error Page as Login failed for user : null and exception : {}
com.emc.x3.portal.server.filters.authc.X3OTDSAuthenticationFilter$1: Authentication failed for user null with repository REPO_NAME.
at com.emc.x3.portal.server.filters.authc.X3OTDSAuthenticationFilter.validateTokenAndGetUserId(X3OTDSAuthenticationFilter.java:1167)
at com.emc.x3.portal.server.filters.authc.X3OTDSAuthenticationFilter.onAccessDenied(X3OTDSAuthenticationFilter.java:293)
at org.apache.shiro.web.filter.AccessControlFilter.onAccessDenied(AccessControlFilter.java:133)
at org.apache.shiro.web.filter.AccessControlFilter.onPreHandle(AccessControlFilter.java:162)
at org.apache.shiro.web.filter.PathMatchingFilter.isFilterChainContinued(PathMatchingFilter.java:223)
at org.apache.shiro.web.filter.PathMatchingFilter.preHandle(PathMatchingFilter.java:198)
...
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1190)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63)
at java.base/java.lang.Thread.run(Thread.java:840)
2025-12-08 12:21:16,524 UTC [INFO ] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : Adding the LicenseException to the Session : DM_SESSION_E_AUTH_FAIL
2025-12-08 12:21:16,526 UTC [DEBUG] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.a.X3TrustHttpAuthenticationFilter : Identified scheme : https
2025-12-08 12:21:16,526 UTC [DEBUG] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.a.X3TrustHttpAuthenticationFilter : Identified server name : d2.domain.com
2025-12-08 12:21:16,526 UTC [DEBUG] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.a.X3TrustHttpAuthenticationFilter : Identified server port : 443
2025-12-08 12:21:16,528 UTC [DEBUG] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.a.X3TrustHttpAuthenticationFilter : Built server host is : https://d2.domain.com:443
2025-12-08 12:21:16,529 UTC [DEBUG] (https-jsse-nio-8080-exec-5) - o.o.e.logging.slf4j.Slf4JLogLevelHandlers$4 : [EVENT SUCCESS -> /D2/HTTPUtilities] header name=Host, value=d2.domain.com
2025-12-08 12:21:16,530 UTC [DEBUG] (https-jsse-nio-8080-exec-5) - o.o.e.logging.slf4j.Slf4JLogLevelHandlers$4 : [EVENT SUCCESS -> /D2/HTTPUtilities] MaxHeaderValueSize: 8192
2025-12-08 12:21:16,531 UTC [DEBUG] (https-jsse-nio-8080-exec-5) - o.o.e.logging.slf4j.Slf4JLogLevelHandlers$4 : [EVENT SUCCESS -> /D2/HTTPUtilities] validating the input valued2.domain.com
2025-12-08 12:21:16,532 UTC [DEBUG] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.a.X3TrustHttpAuthenticationFilter : Identified host : d2.domain.com
2025-12-08 12:21:16,533 UTC [DEBUG] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.a.X3TrustHttpAuthenticationFilter : Overall base URL built : https://d2.domain.com/D2
2025-12-08 12:21:16,534 UTC [DEBUG] (https-jsse-nio-8080-exec-5) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : D2 redirecting to errorPage JSP : https://d2.domain.com/D2/errors/authenticationError.jsp
2025-12-08 12:21:16,567 UTC [DEBUG] (https-jsse-nio-8080-exec-26) - o.o.e.logging.slf4j.Slf4JLogLevelHandlers$4 : [EVENT SUCCESS -> /D2/HTTPUtilities] MaxHeaderKeySize: 256
2025-12-08 12:21:16,567 UTC [DEBUG] (https-jsse-nio-8080-exec-26) - o.o.e.logging.slf4j.Slf4JLogLevelHandlers$4 : [EVENT SUCCESS -> /D2/HTTPUtilities] MaxHeaderValueSize: 8192
2025-12-08 12:21:16,568 UTC [DEBUG] (https-jsse-nio-8080-exec-26) - c.e.x.p.s.f.authc.X3OTDSAuthenticationFilter : No LicenseExcepton found on HttpSession hence not Redirectling to License ErrorPage
2025-12-08 12:21:16,571 UTC [DEBUG] (https-jsse-nio-8080-exec-26) - c.e.x.p.s.f.a.X3TrustHttpAuthenticationFilter : Selected Repository : REPO_NAME
2025-12-08 12:21:16,573 UTC [DEBUG] (https-jsse-nio-8080-exec-26) - o.o.e.logging.slf4j.Slf4JLogLevelHandlers$4 : [EVENT SUCCESS -> /D2/HTTPUtilities] MaxHeaderKeySize: 256
2025-12-08 12:21:16,574 UTC [DEBUG] (https-jsse-nio-8080-exec-26) - o.o.e.logging.slf4j.Slf4JLogLevelHandlers$4 : [EVENT SUCCESS -> /D2/HTTPUtilities] MaxHeaderValueSize: 8192
2025-12-08 12:21:16,578 UTC [INFO ] (https-jsse-nio-8080-exec-26) - c.emc.x3.portal.server.X3HttpSessionListener : Expired Http session id : 8531D373A3EA12A398B158AF656E7D20
2025-12-08 12:21:16,578 UTC [DEBUG] (https-jsse-nio-8080-exec-26) - com.emc.x3.server.context.ContextManager : Create a new context manager
...
[tomcat@d2-0 logs]$
As usual, the next step is to check the Repository logs with the authentication trace enabled:
[dmadmin@cs-0 ~]$ cat $DOCUMENTUM/dba/log/$DOCBASE_NAME.log
...
2025-12-08T12:21:16.235912 3567122[3567122] 0101234580c77e96 [AUTH] Entering RPC AUTHENTICATE_USER
2025-12-08T12:21:16.236052 3567122[3567122] 0101234580c77e96 [AUTH] Start Authentication : LOGON_NAME=MYUSERID, DOMAIN_NAME=, OS_LOGON_NAME=tomcat, OS_LOGON_DOMAIN=, ASSUME_USER=0, TRUSTED_LOGIN_ALLOWED=1, PRINCIPAL_AUTH=0, DO_SET_LOCALE=0, RECONNECT=0, CLIENT_TOKEN=[-36, 8, 66, 12, 89, 102, -85, -11, 6, -115, -34, -68, -123, 11, 100]
2025-12-08T12:21:16.236115 3567122[3567122] 0101234580c77e96 [AUTH] Start Authenticate Client Instance
2025-12-08T12:21:16.236215 3567122[3567122] 0101234580c77e96 [AUTH] Start Verify Signature, Client : dfc_327WHMY40Mglbp4taDgajZEM39Lc , Host : d2-0.d2.dctm-ns.svc.cluster.local
2025-12-08T12:21:16.244603 3567122[3567122] 0101234580c77e96 [AUTH] End Verify Signature, Client : dfc_327WHMY40Mglbp4taDgajZEM39Lc , Host : d2-0.d2.dctm-ns.svc.cluster.local
2025-12-08T12:21:16.244657 3567122[3567122] 0101234580c77e96 [AUTH] End Authenticate Client Instance
2025-12-08T12:21:16.303325 3567122[3567122] 0101234580c77e96 [AUTH] Start-AuthenticateUser: ClientHost(d2-0.d2.dctm-ns.svc.cluster.local), LogonName(null), LogonOSName(tomcat), LogonOSDomain(), UserExtraDomain(), ServerDomain()
2025-12-08T12:21:16.303410 3567122[3567122] 0101234580c77e96 [AUTH] Start-AuthenticateUserName:
2025-12-08T12:21:16.303442 3567122[3567122] 0101234580c77e96 [AUTH] dmResolveNamesForCredentials: auth_protocol()
2025-12-08T12:21:16.305698 3567122[3567122] 0101234580c77e96 [AUTH] [DM_USER_E_NOT_DOCUMENTUM_USER]error: "User null does not exist in the docbase"
2025-12-08T12:21:16.305720 3567122[3567122] 0101234580c77e96 [AUTH] End-AuthenticateUserName: dm_user.user_login_domain(), Result: 0
2025-12-08T12:21:16.305730 3567122[3567122] 0101234580c77e96 [AUTH] Not Found dm_user.user_login_name(null), dm_user.user_login_domain()
2025-12-08T12:21:16.519331 3567122[3567122] 0101234580c77e96 [AUTH] Final Auth Result=F, LOGON_NAME=null, AUTHENTICATION_LEVEL=1, OS_LOGON_NAME=tomcat, OS_LOGON_DOMAIN=, CLIENT_HOST_NAME=d2-0.d2.dctm-ns.svc.cluster.local, CLIENT_HOST_ADDR=172.1.1.1, USER_LOGON_NAME_RESOLVED=1, AUTHENTICATION_ONLY=0, USER_NAME=, USER_OS_NAME=null, USER_LOGIN_NAME=null, USER_LOGIN_DOMAIN=, USER_EXTRA_CREDENTIAL[0]=, USER_EXTRA_CREDENTIAL[1]=, USER_EXTRA_CREDENTIAL[2]=e2, USER_EXTRA_CREDENTIAL[3]=, USER_EXTRA_CREDENTIAL[4]=, USER_EXTRA_CREDENTIAL[5]=, SERVER_SESSION_ID=0101234580c77e96, AUTH_BEGIN_TIME=Mon Dec 8 12:21:16 2025, AUTH_END_TIME=Mon Dec 8 12:21:16 2025, Total elapsed time=0 seconds
2025-12-08T12:21:16.519359 3567122[3567122] 0101234580c77e96 [AUTH] Exiting RPC AUTHENTICATE_USER
...
[dmadmin@cs-0 ~]$
One thing is quite strange in these logs: at the beginning, the authentication is traced for “MYUSERID“, but in the middle of the process, that user_name becomes “null“. I do not recall seeing that behavior before, so I started investigating what might have caused it.
The account “MYUSERID” existed in the Repository. This issue occurred on the same application as in the previous blog post, but this time in the TEST/QA environment (instead of DEV). The same OTDS and users were present, so my account was definitely there (without duplicates in TEST/QA).
3. Investigating OTDS authentication logs

Since the dm_user object had a “user_source” of OTDS, I then checked the OTDS Authentication log file from the JMS. For this Documentum 23.4 version, the log file was “$JMS_HOME/logs/otdsauth.log“. Starting from version 25.4, this log file is located inside “$DOCUMENTUM/dba/log” instead:
[dmadmin@cs-0 ~]$ cat $JMS_HOME/logs/otdsauth.log
...
2025-12-08 11:49:46,106 UTC ERROR [] (https-jsse-nio-9082-exec-36) Thread[https-jsse-nio-9082-exec-36,5,main] java.io.IOException: Unable to tunnel through proxy. Proxy returns "HTTP/1.1 502 Bad Gateway"
at java.base/sun.net.www.protocol.http.HttpURLConnection.doTunneling0(HttpURLConnection.java:2311)
at java.base/sun.net.www.protocol.http.HttpURLConnection.doTunneling(HttpURLConnection.java:2181)
at java.base/sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1465)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1436)
at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:220)
at com.documentum.cs.otds.OTDSAuthenticationServlet.validatePassword(OTDSAuthenticationServlet.java:275)
at com.documentum.cs.otds.OTDSAuthenticationServlet.doPost(OTDSAuthenticationServlet.java:175)
at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:590)
...
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.base/java.lang.Thread.run(Thread.java:840)
2025-12-08 12:21:16,302 UTC ERROR [] (https-jsse-nio-9082-exec-50) Exception while fetching certificates from jwks url
[dmadmin@cs-0 ~]$
The first error message (11:49) occurred about 30 minutes before the authentication attempt. On the other hand, the last line (12:21) is directly linked to the problem according to its timestamp. This indicates that the Documentum Server was trying to fetch the JWKS certificate. This happens when the OTDS Authentication Servlet is configured with the “auto_cert_refresh=true” parameter (see the “otdsauth.properties” file).
This forces the Documentum Server to contact the OTDS Server in order to retrieve the correct or current SSL certificate to use. However, that request failed. Even though it is not explicitly written, it is easy to deduce that the first error, related to a proxy communication issue, is the root cause.
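To illustrate what that refresh does, here is a minimal Python sketch of the JWKS parsing step: the servlet downloads a JWKS document from OTDS and extracts the signing keys from it. The JWKS content below is a made-up sample, not real OTDS output, and the fetch itself is left out since that is exactly the step which failed here.

```python
import json

# Hypothetical JWKS document, shaped like what an OTDS server would return.
sample_jwks = """{
  "keys": [
    {"kty": "RSA", "use": "sig", "kid": "otds-signing-key", "n": "...", "e": "AQAB"}
  ]
}"""

def signing_key_ids(jwks_text: str) -> list:
    """Return the 'kid' of every signature key found in a JWKS document."""
    doc = json.loads(jwks_text)
    return [key["kid"] for key in doc.get("keys", []) if key.get("use") == "sig"]

print(signing_key_ids(sample_jwks))  # ['otds-signing-key']
```

If this parsing step is never reached because the HTTP request to the JWKS URL dies at the proxy, the servlet ends up without a valid signing certificate, which matches the behavior observed above.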
4. Checking newly added proxy and correcting it

As far as I knew, there should not have been any proxy configured on Documentum, since all components are internal to the customer and located within the same network. However, when checking the startup logs of the JMS, I noticed that a new proxy configuration had recently been added when the Tomcat process restarted less than two hours earlier:
[dmadmin@cs-0 ~]$ grep proxy $JMS_HOME/logs/catalina.out
...
2025-12-08 10:54:56,385 UTC INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dhttp.proxyHost=proxy.domain.com
2025-12-08 10:54:56,385 UTC INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dhttp.proxyPort=2010
2025-12-08 10:54:56,385 UTC INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dhttps.proxyHost=proxy.domain.com
2025-12-08 10:54:56,385 UTC INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dhttps.proxyPort=2011
...
[dmadmin@cs-0 ~]$
After checking with the relevant teams, it turned out that this issue was not really related to Documentum itself. Someone had simply restarted the JMS after adding proxy settings as new JVM parameters while testing an external service that required internet access. Yes, directly in TEST/QA without validating in DEV first; it happens, apparently.
However, since no exceptions were configured through the no_proxy setting (“-Dhttp.nonProxyHosts” JVM parameter), it meant that 100% of the requests initiated by the JVM were forwarded to the proxy. That proxy had no knowledge of the OTDS server (which is expected), so the communication simply failed.
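To make that behavior concrete, here is a small Python sketch mimicking the JVM’s “http.nonProxyHosts” matching: patterns are separated by “|” and “*” is a wildcard. The host names are the placeholders used in this post, not real ones.

```python
from fnmatch import fnmatch

def bypasses_proxy(host: str, non_proxy_hosts: str) -> bool:
    """Rough equivalent of the JVM's http.nonProxyHosts check:
    '|'-separated patterns where '*' matches any sequence of characters."""
    return any(fnmatch(host, pattern) for pattern in non_proxy_hosts.split("|"))

# With no -Dhttp.nonProxyHosts at all, nothing bypasses the proxy...
print(bypasses_proxy("otds.domain.com", ""))         # False
# ...while a proper exception list lets internal traffic go direct.
non_proxy = "localhost|*.dctm-ns.svc.cluster.local|otds.domain.com"
print(bypasses_proxy("otds.domain.com", non_proxy))  # True
print(bypasses_proxy("www.example.com", non_proxy))  # False
```

This is why an empty exception list sent every JVM request, including the internal OTDS call, through the proxy.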
After correcting the proxy configuration (either by removing it or by adding all internal domains to the no_proxy setting), the JVM was able to communicate with OTDS again. As a consequence, the D2 SSO started working successfully and the environment was back “online” for all testers. These two blog posts clearly demonstrate that just because D2 displays an error, it doesn’t mean that the real root cause is obvious. Careful investigation and analysis of the log files is always essential.
L’article Dctm – Another DM_LICENSE_E_INVALID_LICENSE error but caused by JMS this time est apparu en premier sur dbi Blog.
installation of database
Read Data from CSV File Using External Table
23 Years of Oracle Security
Posted by Pete On 16/03/26 At 01:14 PM
Credential Errors (OGG-15409) with GoldenGate Migration Utility
The GoldenGate migration utility provided by Oracle allows you to quickly upgrade your classic architecture into GoldenGate 26ai with Microservices Architecture. But even after some updates, it still has a few bugs, as I explained in a previous blog post.
One of them can lead to an OGG-15409 error during the migration. This error will not appear when running the migration tool in dryrun mode. You might then be faced with this issue only when doing the real migration. Here is the exact error:
ERROR: Unable to patch EXTRACT EXT, response is HTTP Status-Code 400: Bad Request..
[ERROR] OGG-15409 - Alias 'ggadmin_alias' not found in credential store domain 'OracleGoldenGate'.
Extract EXT Process Definitions patched.
Where does the error come from?
The first step is to understand what is causing the issue. For this, you need to understand how the GoldenGate migration utility works.
When migrating extracts (or replicats), GoldenGate will make API calls to the new Microservices Architecture administration service to register the extract (or replicat). Once created, it will alter it with a PATCH request to update the credentials used.
We can see it in the restapi.log:
{"context":{"verb":"PATCH","uri":"/services/v2/extract/EXT",...},"content":{"credentials":{"alias":"ggadmin_alias","domain":"OracleGoldenGate"}},...}
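The body of that PATCH is a small JSON document. Rebuilding it by hand (a sketch in Python, with the alias and domain taken from the error above) makes it easy to replay the call with any HTTP client once the alias exists:

```python
import json

# Credentials payload the migration utility PATCHes onto the extract;
# the alias/domain values are the ones from the OGG-15409 message above.
patch_body = {"credentials": {"alias": "ggadmin_alias", "domain": "OracleGoldenGate"}}
print(json.dumps(patch_body, separators=(",", ":")))
# {"credentials":{"alias":"ggadmin_alias","domain":"OracleGoldenGate"}}
```

Since the alias does not exist in the credential store at that point, the administration service rejects the request with the HTTP 400 seen above.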
Unfortunately, once the migration is done, you cannot re-run the migration. You will need to fix this manually.
But since this is the only post-migration task made on extracts and replicats, it is rather easy to do. You can just create the aliases first, and call the REST API to alter all extracts and replicats. In Python, using the client I presented in a previous blog post, it would look like the following. First, create the client connection.
from oggrestapi import OGGRestAPI
ogg_client = OGGRestAPI(url='https://vmogg:7810', username='ogg', password='***')
Then, check the content of the extract (or replicat) using the retrieve_extract (or retrieve_replicat) method. For the moment, we don’t see any credentials key.
# This retrieves all the configuration of an extract, except for the configuration file
>>> {k:v for k,v in ogg_client.retrieve_extract('EXT').items() if k != 'config'}
{'$schema': 'ogg:extract', 'targets': [{'name': 'aa', 'path': 'source', 'sizeMB': 500, ...}], 'description': 'dbi blog migration', 'source': 'tranlogs', 'type': 'Integrated'}
Then, create the alias(es) with the create_alias method.
ogg_client.create_alias(
alias='ggadmin_alias',
domain='OracleGoldenGate',
data={
"userid":"ggadmin@vmora:1521/DB",
"password": "***"
}
)
And finally, alter the extracts with the update_extract method.
ogg_client.update_extract(
extract='EXT',
data={
"alias": "ggadmin_alias",
"domain": "OracleGoldenGate"
}
)
If you had the issue with a replicat, the syntax is exactly the same, with the update_replicat method.
ogg_client.update_replicat(
replicat='REP',
data={
"alias": "ggadmin_alias",
"domain": "OracleGoldenGate"
}
)
You can check that the credentials are there by reusing the retrieve_extract (or retrieve_replicat) method. This time, we see the credentials key!
>>> {k:v for k,v in ogg_client.retrieve_extract('EXT').items() if k != 'config'}
{'$schema': 'ogg:extract', 'credentials': {'alias': 'ggadmin_alias', 'domain': 'OracleGoldenGate'}, 'targets': [{'name': 'aa', 'path': 'source', 'sizeMB': 500, ...}], 'description': 'dbi blog migration', 'source': 'tranlogs', 'type': 'Integrated', ...}
How to avoid this error?
For some reason, the credentials of the source setup will not always be migrated. If you don’t have too many aliases, I would suggest creating the aliases in the target environment. This way, you know they are working even before attempting the migration. This should definitely be part of your new deployment tests.
L’article Credential Errors (OGG-15409) with GoldenGate Migration Utility est apparu en premier sur dbi Blog.


