Feed aggregator
ARRAY PROCESSING
SQL profile and SQL plan baseline in Standard Edition 2 (SE2)
Bulk Update of GoldenGate Extract Profiles
After writing blogs about Default profile updates and a bug affecting profiles in GoldenGate 26ai, I realized it would be useful to present a quick way to update all processes at once with the REST API. Even if you set a new profile as default with the {"isDefault": true} flag, previously created extracts and replicats will not be updated!
To explain this further, let’s look at an extract’s settings using the retrieve_extract endpoint. I’m using the Python client I presented in another blog. Nothing discussed here is specific to extracts; it all extends to replicats. However, it doesn’t apply to distribution paths.
If an extract was created in the most basic configuration, it inherits the Default profile implicitly, but this information is not registered in the extract definition. Below is the configuration of such an extract. To save you time, just remember that there is no mention of any profile setting.
>>> ogg_client.retrieve_extract('EXTSRC')
{'$schema': 'ogg:extract', 'credentials': {'alias': 'cdb01', 'domain': 'OracleGoldenGate'}, 'begin': 'now', 'encryptionProfile': 'LocalWallet', 'targets': [{'name': 'aa', 'path': 'PDB1', 'sizeMB': 2000, 'sequenceLength': 9, 'sequenceLengthFlip': False, 'sequence': 1, 'offset': 1399, 'remote': False}], 'config': ['--- Auto generated Parameter File, do not edit ---', 'EXTRACT EXTSRC', 'USERIDALIAS cdb01 DOMAIN OracleGoldenGate', 'EXTTRAIL PDB1/aa', '--- End of auto generated Parameter File ---', 'SOURCECATALOG PDB1', 'TABLE APP_PDB1.*;'], 'source': 'tranlogs', 'type': 'Integrated', 'registration': {'share': False, 'containers': ['PDB1'], 'csn': 2809889}, 'status': 'running'}
Extract with a profile set
If a profile is assigned to the extract, the managedProcessSettings key is added to the extract configuration. This is true whether you assign a custom profile or re-assign the Default profile after having changed the extract configuration. Below, we can see the difference with the following key/value pair: 'managedProcessSettings': 'ogg:managedProcessSettings:dbiProfile'
{'$schema': 'ogg:extract', 'credentials': {'alias': 'cdb01', 'domain': 'OracleGoldenGate'}, 'begin': 'now', 'encryptionProfile': 'LocalWallet', 'managedProcessSettings': 'ogg:managedProcessSettings:dbiProfile', 'targets': [{'name': 'aa', 'path': 'PDB1', 'sizeMB': 2000, 'sequenceLength': 9, 'sequenceLengthFlip': False, 'sequence': 1, 'offset': 1399, 'remote': False}], 'config': ['--- Auto generated Parameter File, do not edit ---', 'EXTRACT EXTSRC', 'USERIDALIAS cdb01 DOMAIN OracleGoldenGate', 'EXTTRAIL PDB1/aa', '--- End of auto generated Parameter File ---', 'SOURCECATALOG PDB1', 'TABLE APP_PDB1.*;'], 'source': 'tranlogs', 'type': 'Integrated', 'registration': {'share': False, 'containers': ['PDB1'], 'csn': 2809889}, 'status': 'running'}
Extract with a custom configuration
If an extract has a unique custom configuration that is not a profile, the full configuration is added as JSON to the extract definition under the same managedProcessSettings key. Here is an example:
{'$schema': 'ogg:extract', 'credentials': {'alias': 'cdb01', 'domain': 'OracleGoldenGate'}, 'begin': 'now', 'encryptionProfile': 'LocalWallet', 'managedProcessSettings': {'autoStart': {'enabled': True, 'delay': 60}, 'autoRestart': {'enabled': False, 'onSuccess': False, 'delay': 0, 'retries': 9, 'window': 0, 'disableOnFailure': True, 'failures': 0}}, 'targets': [{'name': 'aa', 'path': 'PDB1', 'sizeMB': 2000, 'sequenceLength': 9, 'sequenceLengthFlip': False, 'sequence': 1, 'offset': 1399, 'remote': False}], 'config': ['--- Auto generated Parameter File, do not edit ---', 'EXTRACT EXTSRC', 'USERIDALIAS cdb01 DOMAIN OracleGoldenGate', 'EXTTRAIL PDB1/aa', '--- End of auto generated Parameter File ---', 'SOURCECATALOG PDB1', 'TABLE APP_PDB1.*;'], 'source': 'tranlogs', 'type': 'Integrated', 'registration': {'share': False, 'containers': ['PDB1'], 'csn': 2809889}, 'status': 'running'}
How to modify a single extract’s profile with the REST API?
Modifying the profile of an extract or replicat with the REST API means updating the managedProcessSettings key of the process. For an extract, you would use the update_extract endpoint, with a data payload that only contains the managedProcessSettings key, and its value. It can be the profile name or the full profile definition, as shown above. Here is an example of a single update of an extract:
>>> ogg_client.update_extract(
...     extract='EXTSRC',
...     data={
...         'managedProcessSettings': 'ogg:managedProcessSettings:dbiProfile'
...     }
... )
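Under the hood, this call is presumably a single HTTP PATCH against the extract resource. Below is a minimal stdlib sketch of building such a request, assuming basic authentication and the vmogg/ogg_test_01 URL layout used in this post (the request is only built here, not sent):

```python
import base64
import json
import urllib.request

# Endpoint and credentials are illustrative; adjust them to your deployment.
url = "https://vmogg/services/ogg_test_01/adminsrvr/v2/extracts/EXTSRC"
payload = {"managedProcessSettings": "ogg:managedProcessSettings:dbiProfile"}
token = base64.b64encode(b"ogg:ogg").decode()

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode(),
    method="PATCH",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Basic {token}",
    },
)
# urllib.request.urlopen(req) would perform the actual call.
```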
And do not worry about entering a wrong profile name: if you do, the API returns the following error:
>>> ogg_client.update_extract(
...     extract='EXTSRC',
...     data={
...         'managedProcessSettings': 'ogg:managedProcessSettings:NonExistingProfile'
...     }
... )
Exception: ERROR - https://vmogg/services/ogg_test_01/adminsrvr/v2/extracts/EXTSRC: The managed process settings profile 'ogg:managedProcessSettings:NonExistingProfile' does not exist. ; INFO - https://vmogg/services/ogg_test_01/adminsrvr/v2/extracts/EXTSRC: The item type with name 'ogg:managedProcessSettings:NonExistingProfile' does not exist.
How to modify all extracts and replicats?
Based on the above, modifying all extracts and replicats is easy: just iterate over them and apply the profile you want. Below is an example script; adjust the profile_name variable and the connection details to the Administration Service of your deployment. You could even assign different profiles to extracts and replicats, if needed.
from oggrestapi import OGGRestAPI

profile_name = 'dbiProfile'

ogg_client = OGGRestAPI(
    url='https://vmogg',
    user='ogg',
    password='ogg',
    reverse_proxy=True,
    deployment='ogg_test_01'
)

for extract in ogg_client.list_extracts():
    extract_name = extract['name']
    print(f"Updating extract {extract_name}")
    ogg_client.update_extract(
        extract=extract_name,
        data={
            'managedProcessSettings': f'ogg:managedProcessSettings:{profile_name}'
        }
    )

for replicat in ogg_client.list_replicats():
    replicat_name = replicat['name']
    print(f"Updating replicat {replicat_name}")
    ogg_client.update_replicat(
        replicat=replicat_name,
        data={
            'managedProcessSettings': f'ogg:managedProcessSettings:{profile_name}'
        }
    )
After running this against the Administration Service, all your extracts and replicats will now follow the same new profile.
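To double-check the result, you can retrieve each definition again and compare the managedProcessSettings value. Here is a small helper sketch (the function name is mine, not part of the client); feed it a dict of {process_name: definition} built from retrieve_extract or retrieve_replicat calls:

```python
def processes_not_on_profile(definitions, profile_name):
    """Return names of processes whose managedProcessSettings does not
    reference the expected profile. A process on the implicit Default
    profile (no key at all) is also reported, since no profile reference
    is registered in its definition."""
    expected = f"ogg:managedProcessSettings:{profile_name}"
    return sorted(name for name, definition in definitions.items()
                  if definition.get("managedProcessSettings") != expected)
```

An empty list means every process references the expected profile.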
The article Bulk Update of GoldenGate Extract Profiles appeared first on dbi Blog.
Losing Java remote console full control during ODA Reimage
Recently I had to patch an ODA at one of our customers from 19.20 to 19.26, going through 19.24. Since this involves moving from Oracle Linux 7 (19.20) to Oracle Linux 8 (19.24), the ODA must be patched using Data Preserving Reprovisioning. This includes a reimage of the nodes…
Data Preserving Reprovisioning lets you reprovision an already deployed Oracle Database Appliance system while keeping the storage and the databases on the appliance, so without any modification to the information stored in ASM (storage). The information of the source system is saved in a server data archive file. The appliance is then reimaged with the Oracle Database Appliance release ISO image, and the saved metadata is used to directly reprovision the system and bring back all the resources such as databases, DB systems, Oracle ASR, and others.
The reimage can only be done using the Java remote console, available with the ILOM. Remember that to attach the ISO in the Java console and use it during the reimage, you need to add and connect it as described in the next picture.
It is important to see the following red message, which confirms that the ISO is connected and can be used.
The problem is that after a while during the reimage process, the Java remote console loses full control of the node, resulting in a loss of the ISO connection and a failure of the reimage process.
This can be seen in the console with the message (View Only) displayed in the bar and the red cross on top of the computer (screen)/mouse icon.
Here:
When it should be:
Here is the whole picture showing the connection problem.
And we can see the message: “Sorry, keyboard and mouse connection has been lost. If storage was being shared it has been lost also”.
A correct connection shows the (Full Control) message in the bar, with no loss of the keyboard/mouse and storage (ISO) connection.
After several tries, it was always the same problem: after a few minutes, the connection was lost…
I then decided to check the ILOM sessions, connecting to the ILOM through SSH.
login as: root
Keyboard-interactive authentication prompts from server:
| Password:
End of keyboard-interactive prompts from server
Oracle(R) Integrated Lights Out Manager
Version 5.1.1.23 r151958
Copyright (c) 2023, Oracle and/or its affiliates. All rights reserved.
Warning: HTTPS certificate is set to factory default.
Hostname: ODA01-ilo
-> show /SP/sessions
/SP/sessions
Targets:
55342
55346 (current)
Properties:
Commands:
cd
delete
show
I found it surprising to see such high session IDs, so I decided to reset the ILOM. This would, of course, not restart the node.
-> reset /SP
Are you sure you want to reset /SP (y/n)? y
Performing reset on /SP
I then connected again to the ILOM through SSH to check the sessions:
login as: root
Keyboard-interactive authentication prompts from server:
| Password:
End of keyboard-interactive prompts from server
Oracle(R) Integrated Lights Out Manager
Version 5.1.1.23 r151958
Copyright (c) 2023, Oracle and/or its affiliates. All rights reserved.
Warning: HTTPS certificate is set to factory default.
Hostname: ODA01-ilo
-> show /SP/sessions
/SP/sessions
Targets:
1 (current)
Properties:
Commands:
cd
delete
show
The session IDs were back to one, and I did not have any connection loss issues anymore. I could then successfully reimage the ODA to version 19.24.
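As a side note, this check can be scripted: the session IDs appear as plain numbers under Targets: in the show /SP/sessions output. Here is a hypothetical parsing sketch over output like the one above (the helper, and any threshold you then apply to the IDs, are my own, not ILOM features):

```python
def session_ids(show_sessions_output):
    """Extract the numeric session IDs listed under 'Targets:' in the
    output of the ILOM 'show /SP/sessions' command."""
    ids = []
    in_targets = False
    for line in show_sessions_output.splitlines():
        stripped = line.strip()
        if stripped == "Targets:":
            in_targets = True
            continue
        # 'Properties:' or 'Commands:' ends the Targets section.
        if stripped.startswith(("Properties", "Commands")):
            in_targets = False
        if in_targets and stripped:
            # Entries look like '55342' or '55346 (current)'.
            ids.append(int(stripped.split()[0]))
    return ids
```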
The article Losing Java remote console full control during ODA Reimage appeared first on dbi Blog.
Oracle AQ Scaling with PL/SQL Callbacks, DBMS_SCHEDULER Jobs
PL/SQL Execution Time
Partitions evicting sequence cached values
PDB Snapshot Copy/Carousel
Flashback Schema
Fuzzing PL/SQL
Posted by Pete On 20/04/26 At 09:34 AM
The day your ECM System became unusable
It started like any normal Tuesday.
No alarms. No warning signs. No urgent emails flooding inboxes before 8 AM.
Just a regular working day, until people tried to open documents.
At first, it was a single complaint.
“Hey, I can’t access the invoice archive.”
Then another.
“The HR files are taking forever to load.”
Then ten more.
“Nothing is opening.”
And within an hour, the entire ECM system, the one everyone depended on every single day, had become nearly unusable.
This wasn’t just a storage platform.
It was the operational backbone of the company.
Contracts, invoices, compliance records, procedures, employee files, approval workflows, audit trails, everything lived there.
Teams across finance, HR, legal, procurement, and operations relied on it constantly.
If the ECM system slowed down, business slowed down.
If it stopped… business stopped.
And that morning, it stopped.
The issue had been building quietly for months.
Nobody noticed because the pain came slowly.
A few extra seconds to retrieve a file, a delayed workflow here, a failed search there.
People adapted. They worked around it.
IT assumed it was “normal system aging.”
But underneath, the real problem was growing:
- No proper retention policy.
- No archive strategy.
- No performance monitoring.
- No governance around content growth.
Years of duplicate files as well as outdated versions of documents no one needed.
Massive email attachments stored for nothing or expired records that should have been deleted years ago.
With all those things, the repository had become digital landfill.
As a result, storage became bloated, indexes became overloaded, and search performance collapsed. Backups also took a dangerous amount of time.
Finally, one morning, the system simply couldn’t keep up anymore.
Not because of one dramatic failure, but rather hundreds of small, ignored ones.
The finance department could not process payments, the HR department could not retrieve employee documentation, and the legal department could not access signed contracts.
In practical terms, approvals stalled, customer responses were delayed, and audit preparation froze.
Executives wanted answers, users wanted miracles, and IT needed time.
No one had enough of it.
Emergency meetings replaced actual work.
Consultants were called in.
Recovery plans were hastily put together.
Suddenly, budgets appeared for problems that had been ignored for years.
But the real cost wasn’t just technical; it was a loss of trust.
People stopped trusting the system.
Once users lose confidence in your ECM platform, rebuilding that trust is far more difficult than resolving storage issues.
Most ECM failures don’t happen overnight.
They happen silently, slowly, and politely.
Then, one day, they become impossible to ignore.
Governance may feel boring until disaster makes it urgent.
Retention policies feel optional until storage becomes a crisis.
Performance reviews feel like “later” work until later arrives.
An ECM system doesn’t fail because it stores too much.
The system fails because no one decides what shouldn’t be stored.
The lesson was painful.
You don’t manage an ECM system when it breaks.
You manage it so it never gets there.
By the time everyone notices, it’s already too late.
Unfortunately, this scenario isn’t rare. As consultants at dbi services, we’re often called in after the damage has been done. Of course, our job is to find solutions, but it’s still better to act before a crisis hits. That way, we have time to implement strategies without rushing.
The article The day your ECM System became unusable appeared first on dbi Blog.
Gemma 4 for Structured Data Extraction: Can It Beat Qwen 3.5?
PostgreSQL 19: Importing statistics from remote servers
Usually we do not see many foreign data wrappers in use at our customers. Most of them use the foreign data wrapper for Oracle to fetch data from Oracle systems, some use the foreign data wrapper for files, but that’s mostly it. Only one (that I am aware of) actually uses the foreign data wrapper for PostgreSQL, which obviously connects PostgreSQL to PostgreSQL. Some foreign data wrappers allow for collecting optimizer statistics on foreign tables, and the foreign data wrappers for Oracle and PostgreSQL are examples of this. These local statistics are better than nothing, but you need to make sure they are up to date, and for that you need a fresh copy of the statistics over the remote data. PostgreSQL 19 will come with a solution for this in the foreign data wrapper for PostgreSQL. Actually, the solution is not in the foreign data wrapper itself but in the underlying framework, and postgres_fdw can use it from version 19 on.
For looking at this we need a simple setup, so we initialize two new PostgreSQL 19 clusters and connect them with postgres_fdw:
postgres@:/home/postgres/ [pgdev] initdb --version
initdb (PostgreSQL) 19devel
postgres@:/home/postgres/ [pgdev] initdb --pgdata=/var/tmp/pg1
postgres@:/home/postgres/ [pgdev] initdb --pgdata=/var/tmp/pg2
postgres@:/home/postgres/ [pgdev] echo "port=8888" >> /var/tmp/pg1/postgresql.auto.conf
postgres@:/home/postgres/ [pgdev] echo "port=8889" >> /var/tmp/pg2/postgresql.auto.conf
postgres@:/home/postgres/ [pgdev] pg_ctl --pgdata=/var/tmp/pg1/ start
postgres@:/home/postgres/ [pgdev] pg_ctl --pgdata=/var/tmp/pg2/ start
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "create extension postgres_fdw"
CREATE EXTENSION
postgres@:/home/postgres/ [pgdev] psql -p 8889 -c "create table t ( a int, b text, c timestamptz )"
CREATE TABLE
postgres@:/home/postgres/ [pgdev] psql -p 8889 -c "insert into t select i, md5(i::text), now() from generate_series(1,1000000) i"
INSERT 0 1000000
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "create server srv_pg2 foreign data wrapper postgres_fdw options(port '8889', dbname 'postgres')"
CREATE SERVER
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "create user mapping for postgres server srv_pg2 options (user 'postgres', password 'postgres')"
CREATE USER MAPPING
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "create foreign table ft (a int, b text, c timestamptz) server srv_pg2 options (schema_name 'public', table_name 't')"
CREATE FOREIGN TABLE
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select count(*) from ft"
count
---------
1000000
(1 row)
What we have now is one table in the cluster on port 8889 and this table is attached as a foreign table in the cluster on port 8888.
We already have statistics on the source table in the cluster on port 8889:
postgres@:/home/postgres/ [pgdev] psql -p 8889 -c "select reltuples::bigint from pg_class where relname = 't'"
reltuples
-----------
1000000
(1 row)
… but we do not have any statistics on the foreign table in the cluster on port 8888:
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select reltuples::bigint from pg_class where relname = 'ft'"
reltuples
-----------
-1
(1 row)
Only after manually analyzing the foreign table the statistics show up:
postgres@:/home/postgres/ [DEV] psql -p 8888 -c "analyze ft"
ANALYZE
postgres@:/home/postgres/ [DEV] psql -p 8888 -c "select reltuples::bigint from pg_class where relname = 'ft'"
reltuples
-----------
1000000
(1 row)
The issue that can arise with these local statistics is that they become outdated when the source table is modified:
postgres@:/home/postgres/ [pgdev] psql -p 8889 -c "insert into t select i, md5(i::text), now() from generate_series(1000001,2000000) i"
INSERT 0 1000000
postgres@:/home/postgres/ [DEV] psql -p 8889 -c "select reltuples::bigint from pg_class where relname = 't'"
reltuples
-----------
2000000
(1 row)
postgres@:/home/postgres/ [DEV] psql -p 8888 -c "select reltuples::bigint from pg_class where relname = 'ft'"
reltuples
-----------
1000000
(1 row)
As you can see, the row counts do not match anymore. Once the local statistics are gathered we again have the same picture on both sides:
postgres@:/home/postgres/ [DEV] psql -p 8888 -c "analyze ft"
ANALYZE
postgres@:/home/postgres/ [DEV] psql -p 8888 -c "select reltuples::bigint from pg_class where relname = 'ft'"
reltuples
-----------
2000000
(1 row)
One way to avoid this issue even before PostgreSQL 19 is to tell postgres_fdw to run analyze on the remote table and to use those statistics:
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "alter foreign table ft options ( use_remote_estimate 'true' )"
In this case the local statistics will not be used but of course this comes with the overhead of the additional analyze on the remote side.
From PostgreSQL 19 there is another option:
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "alter foreign table ft options ( restore_stats 'true' )"
ALTER FOREIGN TABLE
This option tells postgres_fdw to import the statistics from the remote side and store them locally. If that fails, it falls back to collecting statistics locally with ANALYZE; the commit message explains this nicely:
Add support for importing statistics from remote servers.
Add a new FDW callback routine that allows importing remote statistics
for a foreign table directly to the local server, instead of collecting
statistics locally. The new callback routine is called at the beginning
of the ANALYZE operation on the table, and if the FDW failed to import
the statistics, the existing callback routine is called on the table to
collect statistics locally.
Also implement this for postgres_fdw. It is enabled by "restore_stats"
option both at the server and table level. Currently, it is the user's
responsibility to ensure remote statistics to import are up-to-date, so
the default is false.
As usual, thanks to all involved.
The article PostgreSQL 19: Importing statistics from remote servers appeared first on dbi Blog.
GoldenGate 26ai Logs Explained
GoldenGate logs are a powerful source of information when troubleshooting or analyzing your deployments. However, there is at least one log file per service in the Microservices Architecture. This means that for a GoldenGate setup with multiple deployments, you can quickly have tens of log files to search for information. In that context, knowing where to look is important, especially if you are upgrading from a Classic Architecture where you only had to check the ggserr.log file.
The restapi.log files are one of the most important logs because of the amount of information they provide. They allow you to completely understand what is happening behind the scenes. It takes time to learn how to analyze them, but I wrote a dedicated article on REST API log analysis to help you through this. There is one restapi.log file per deployment, because the REST API calls are always made to a specific deployment. This is also true when using a reverse proxy.
These restapi.log files are also a source of information for other GoldenGate log files. For instance, REST API logs on the AI Service will be cleaned and filtered before ending up in the log dedicated to the AI Service. This is another reason why you should understand the content of these restapi.log files.
Here is a log example from a restapi.log file. Again, for more details, read the blog linked above.
2026-04-09 19:46:36.789+0000 ERROR|RestAPI.adminsrvr | Request #1581: {
"context": {
"httpContextKey": 140353514441552,
"verbId": 7,
"verb": "PATCH",
"originalVerb": "PATCH",
"uri": "/services/v2/extracts/EXTSRC",
"protocol": "http",
"headers": {
"X-OGG-Context": "services/ogg_test_01/adminsrvr",
"X-OGG-Deployment": "ogg_test_01",
"X-OGG-Service": "adminsrvr",
"X-OGG-Version": "v2",
"X-OGG-Resource": "extracts/EXTSRC",
"X-Real-IP": "**",
"X-Forwarded-For": "**",
"X-Forwarded-Host": "vmogg",
"X-Forwarded-Proto": "https",
"X-Forwarded-Server": "oggvm1",
"X-SSL-Client-Verify": "NONE",
"Connection": "keep-alive",
"Host": "vmogg",
"Content-Length": "75",
"User-Agent": "python-requests/2.32.3",
"Accept-Encoding": "gzip, deflate",
"Accept": "application/json",
"Content-Type": "application/json",
"Cookie": "** Masked **",
"Authorization": "** Masked **",
"X-OGG-Requestor-Id": "",
"X-OGG-Feature-List": ""
},
"host": "vmogg",
"securityEnabled": false,
"authorization": {
"authUserName": "ogg",
"authPassword": "** Masked **",
"authMode": "Basic",
"authUserRole": "Security"
},
"requestId": 4,
"uriTemplate": "/services/{version}/extracts/{extract}"
},
"content": {
"managedProcessSettings": "ogg:managedProcessSettings:NonExistingProfile"
},
"isScaRequest": true,
"parameters": {
"uri": {
"extract": "EXTSRC",
"version": "v2"
}
}
}
Response: {
"context": {
"httpContextKey": 140353514441552,
"requestId": 4,
"code": "404 Not Found",
"headers": {
"Content-Type": "application/json",
"Set-Cookie": "** Masked **"
},
"Content-Type": "application/json",
"contentType": "application/json"
},
"isScaResponse": true,
"content": {
"$schema": "api:standardResponse",
"links": [
{
"rel": "canonical",
"href": "https://vmogg/services/ogg_test_01/adminsrvr/v2/extracts/EXTSRC",
"mediaType": "application/json"
},
{
"rel": "self",
"href": "https://vmogg/services/ogg_test_01/adminsrvr/v2/extracts/EXTSRC",
"mediaType": "application/json"
}
],
"messages": [
{
"$schema": "ogg:message",
"title": "The managed process settings profile 'ogg:managedProcessSettings:NonExistingProfile' does not exist.",
"code": "OGG-08122",
"severity": "ERROR",
"issued": "2026-04-09T19:46:36Z",
"type": "https://docs.oracle.com/en/middleware/goldengate/core/23.26/error-messages/"
},
{
"$schema": "ogg:message",
"title": "The item type with name 'ogg:managedProcessSettings:NonExistingProfile' does not exist.",
"code": "OGG-12029",
"severity": "INFO",
"issued": "2026-04-09T19:46:36Z",
"type": "https://docs.oracle.com/en/middleware/goldengate/core/23.26/error-messages/"
}
]
}
}
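When scripting over these logs, the messages array of the response body is usually the part you care about. Here is a small sketch, assuming the log entry has already been parsed into a Python dict shaped like the example above (the helper name is mine):

```python
def response_messages(response):
    """Return (severity, code, title) tuples from the messages array of a
    parsed restapi.log standardResponse body."""
    return [(msg["severity"], msg["code"], msg["title"])
            for msg in response.get("content", {}).get("messages", [])]
```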
Admin Client Log File
Depending on your configuration, the adminclient log file might not always be in the same location. More specifically, you can have multiple adminclient.log files for a single GoldenGate installation. The reason behind this is that the logs go to the location of your OGG_VAR_HOME environment variable. The adminclient.log can be located:
- In your Service Manager var/log directory, if your GoldenGate user has the Service Manager OGG_VAR_HOME set in the environment.
- In your deployment var/log directory, if you change the OGG_VAR_HOME environment variable when switching between deployments at the OS level. This is not recommended, as it can lead to multiple adminclient.log files and mixed-up logs between deployments.
- In your GoldenGate binaries, if you misconfigured your environment and are calling the adminclient with an empty OGG_VAR_HOME variable. This is not ideal, as you could lose the log file after patching or upgrading.
There is not much to see in this file with the default logging configuration, but it can be used to troubleshoot connection issues with the adminclient.
Let’s look at the var/log folder of the Service Manager deployment.
oggsm/var/log:
ServiceManager.log
restapi.log
AIService.log
In GoldenGate 26ai, there are three different logs for the Service Manager:
ServiceManager.log: In a healthy installation, this log file should only contain logs generated when services start or stop. But if your Administration Server doesn’t start, this log is a good starting point for troubleshooting.
2026-04-09T19:20:28.124+0000 INFO | Task 'ogg_test_01:adminsrvr' started with process id 2505 (ogg_test_01:adminsrvr)
2026-04-09T19:20:28.159+0000 INFO | Task 'ServiceManager:AIService' started with process id 2515 (ServiceManager:AIService)
2026-04-09T19:20:28.182+0000 INFO | Task 'ogg_test_01:distsrvr' started with process id 2522 (ogg_test_01:distsrvr)
2026-04-09T19:20:28.215+0000 INFO | Task 'ogg_test_01:recvsrvr' started with process id 2542 (ogg_test_01:recvsrvr)
2026-04-09T19:20:28.234+0000 INFO | Task 'ogg_test_01:pmsrvr' started with process id 2553 (ogg_test_01:pmsrvr)
2026-04-09T19:20:29.243+0000 INFO | Installation topology updated (ServiceManager.Topology)
restapi.log: See the chapter above about REST API Log Files.
AIService.log: This is where logs from the AI Service are gathered. It mostly consists of a cleaned collection of the REST API logs from calls made against the AI Service. For instance, here is a log you can get when adding an AI Provider to the Service Manager.
2026-04-09 19:32:21.056 DATA |AIService |AIService.RestApi | 247 RequestHandlers.cpp | 00000000-0000-0000-D5F8-000000000006> {"$schema":"api:standardResponse","links":[],"response":{"providers":[{"id":"1","name":"GeminiAiStudio","type":"gemini","baseUrl":"https://generativelanguage.googleapis.com/v1beta/models/gemini-flash-latest:generateContent","authentication":{"type":"api_key","configured":true},"links":[{"rel":"parent","href":"/api/v1/providers","mediaType":"application/json"},{"rel":"canonical","href":"/api/v1/providers/1","mediaType":"application/json"}]}],"metadata":{"fromCache":false,"providerCount":1,"lastUpdatedEpochMicroseconds":1775827941054749}},"messages":[]} (Thread 2)
Deployment Log Files
GoldenGate deployments have more log file types than the Service Manager. In the var/log folder of your deployments, you should have the following files:
ogg_test_01/var/log:
adminsrvr.log
distsrvr.log
ER-events.log
extract.log
ggserr.log
pmsrvr.log
recvsrvr.log
replicat.log
restapi.log
adminsrvr.log, distsrvr.log, recvsrvr.log and pmsrvr.log: These files contain all the information, warning and error messages from actions or events in the main GoldenGate services (Administration, Distribution, Receiver and Performance Metrics Services). They are not as extensive as what you can find in the restapi.log file, but they provide a really good overview of what is happening inside your deployment.
ER-events.log: This is a log file that can be used for monitoring your GoldenGate deployments. All warning and error messages on your extracts and replicats are available here. It is sometimes verbose, but still very useful for monitoring.
2026-04-10T07:11:17.844+0000 ERROR OGG-01668 Oracle GoldenGate Capture for Oracle, EXTA.prm: PROCESS ABENDING.
2026-04-10T07:12:24.447+0000 WARNING OGG-06658 Oracle GoldenGate Capture for Oracle, EXTB.prm: Unable to restore a failed connection to the database after 3 retries.
extract.log and replicat.log: In these log files, you can find status information, statistics, and transactions processed by your extracts and replicats. This does not provide fundamentally new information compared to other sources (the adminclient, for instance), but it is a useful view when observing all processes at the same time.
2026-04-08T09:13:10.249+0000 INFO | INFO OGG-01026 Oracle GoldenGate Capture for Oracle, EXTA.prm: Rolling over remote file PDB1/aa000000321. (main)
2026-04-08T09:14:10.540+0000 INFO | INFO OGG-01026 Oracle GoldenGate Capture for Oracle, EXTB.prm: Rolling over remote file PDB1/ab000000645. (main)
2026-04-08T09:14:32.963+0000 INFO | INFO OGG-01026 Oracle GoldenGate Capture for Oracle, EXTC.prm: Rolling over remote file PDB1/ac000000128. (main)
ggserr.log: This log serves the same purpose as it did in the old Classic Architecture, containing all information about your deployment. Contrary to what its name suggests, it does not only contain the errors happening in your deployment.
restapi.log: See the chapter above about REST API Log Files.
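Because the ER-events.log lines follow a fixed layout (timestamp, severity, OGG code, message), they are straightforward to parse for monitoring. Below is a minimal sketch based on the sample lines shown earlier; this is my own helper, not an official tool:

```python
import re

# One ER-events.log line: '<timestamp> <SEVERITY> <OGG-code> <message>'
ER_EVENT_LINE = re.compile(
    r"^(?P<timestamp>\S+)\s+(?P<severity>ERROR|WARNING|INFO)\s+"
    r"(?P<code>OGG-\d+)\s+(?P<message>.*)$"
)

def parse_er_events(log_text):
    """Parse ER-events.log content into a list of event dicts,
    skipping any line that does not match the expected layout."""
    return [m.groupdict()
            for m in (ER_EVENT_LINE.match(line) for line in log_text.splitlines())
            if m]
```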
In the GoldenGate web UI, you can access, filter, or query some of the logs listed above. From the Service Manager, the Diagnosis Log tab displays the ServiceManager.log file with filtering options.
You can also quickly enable debug logs for the Service Manager, which is usually only needed when Oracle Support requests it. This generates a ServiceManager-debug.log file in your var/log directory, which can be downloaded from the web UI and sent to Oracle Support or your dbi GoldenGate expert for analysis.
When connected to a deployment in the web UI, you can also view different logs.
- From the home page, the Critical Events view displays the content of the ER-events.log file.
- From the Services page, you have access to the Diagnosis Log of each service. They respectively display the adminsrvr.log, distsrvr.log, recvsrvr.log and pmsrvr.log files. Similar to what I just explained, you can also enable debug logs temporarily if needed.
The article GoldenGate 26ai Logs Explained appeared first on dbi Blog.
It's Been a Minute...
Hi.
It's been a while.
Not sure if this is a one-off or not, but I'm going to give it another go. I miss writing (publicly).
Like, this is super awkward. "What do I say?"
"How are you?"
"Things are good."
"How are you?"
Create Distribution Paths in NGINX-Secured GoldenGate 26ai
Setting up a distribution path is an important step when connecting two GoldenGate deployments. However, there are so many connection options that it’s hard to know how to proceed. In this blog, I will present the way to connect two deployments secured through NGINX.
Here are the prerequisites before attempting the distribution path creation:
- Having two servers, each running a GoldenGate Microservices Architecture. I will use the latest version here, 26ai.
- Having a running extract on the source setup.
- Port openings: port 443 should be open between the two deployments.
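The port prerequisite can be validated with a quick connectivity test before going further. A minimal sketch, assuming a plain TCP connection attempt is enough to confirm the opening (it does not validate the TLS or GoldenGate layers):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical target host): port_open("oggvm2", 443)
```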
In my environment, I have the following:
- Source environment: oggvm1, with a deployment named ogg_test_01, a CDB01 with a PDB1 containing an APP_PDB1 schema and a table1 table.
- Target environment: oggvm2, with the equivalent deployment ogg_test_02, CDB02, PDB2, APP_PDB2 schema, and a table2 table.
With all of this in mind, here are the three steps needed to create a working distribution path between two GoldenGate deployments secured with NGINX:
- Create a Path Connection
- Add the certificates used in NGINX to secure the target deployment to the source’s certificate management store.
- Create and start the distribution path.
To open a connection to the target deployment ogg_test_02, the source deployment needs a path connection. As explained in a previous blog, path connections are aliases of an existing user on a target deployment.
It is recommended to separate roles and not use the administrator account of your target deployment, so let’s first create the user.
- On ogg_test_02 (target deployment), go to the User Administration tab and add a new user. For the role, Oracle recommends using the Operator role, so there is no reason to use the higher-privileged Administrator or Security roles.
- On ogg_test_01 (source deployment), create the path connection. Only the Userid and Password fields must match what you just created on the target. The alias is only known on the source side and doesn't have to match the username set on the target deployment. The alias defined here in the source deployment will only appear when choosing a connection during the distribution path creation.
Because your deployments are secured with NGINX, you have to make sure that the certificates being used by one deployment are recognized by the other deployment. To do so, on the source only, you have to register the root certificate authority of the target deployment.
This can only be done at the Service Manager level. In the Service Manager web UI, go to Certificate Management and add a CA Certificate in the source deployment ogg_test_01. You have two options here.
- If the CA only needs to be registered on this specific deployment, you can register it with the Local option. This is the more secure option, but it is very often not needed.
- If the CA registration should be shared with other deployments, you can register it with the Shared option.
Paste the root certificate file used to secure your second deployment. The name used must be unique but will not be used anywhere else in the process.
How should I register a certificate chain?
If the certificate file contains multiple certificates, GoldenGate doesn’t allow you to register them in one go. Instead, you need to break down the file and register each part individually. Again, the name and order in which you register the certificates do not matter, except for management purposes. I described the issue in more detail in a blog about the OGG-30007 error.
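One simple way to break down such a bundle is csplit, which cuts the file before each BEGIN CERTIFICATE marker. The chain.pem below is a stand-in with dummy content, just to show the mechanics; use your real chain file instead:

```shell
# Stand-in bundle with two certificate blocks (dummy content, for illustration only)
cat > chain.pem <<'EOF'
-----BEGIN CERTIFICATE-----
...intermediate CA, base64 content...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
...root CA, base64 content...
-----END CERTIFICATE-----
EOF

# Cut the bundle before each BEGIN marker; -z drops the empty leading piece
csplit -s -z -f cert- -b '%02d.pem' chain.pem '/-----BEGIN CERTIFICATE-----/' '{*}'
ls cert-*.pem   # one file per certificate, ready to be registered individually
```

Each resulting cert-NN.pem file can then be pasted into Certificate Management on its own.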
We can now finally create the distribution path. On the source deployment, go to the Distribution Service Paths tab and register the path as follows:
- Path information: Specify the path name. If possible, make the target (and the source) visible in the name, to ease management when having multiple distribution paths.
- Source options: Here, you should only select the extract at the source of this distribution path. The rest will be filled automatically or can stay with the default value.
- Target options: This is where the configuration is specific. Because we are using an NGINX-secured deployment on the target, you must click on Reverse proxy enabled and choose the wss target protocol.
- Advanced, Filtering and Managed Options: Nothing here is specific to our setup, but you can of course customize these options as needed.
And that’s it. Your deployments are now connected, and you should see the trail files on the target once you start the distribution path.
oracle@oggvm2:~/ ll $OGG_DEPLOYMENT_HOME/var/lib/data/PDB1
total 0
-rw-r-----. 1 oracle oinstall 0 Mar 22 07:34 bb000000000
The remote peer submitted a certificate that failed validation
If your distribution path doesn’t start and generates a “certificate that failed validation” error, it means that you incorrectly registered your certificates. Make sure that the target deployment’s certificates are registered on the source deployment’s service manager and not the other way around.
You can also try to use the certificate in an OGG_CLIENT_TLS_CAPATH environment variable on the source and connect with the adminclient to check if it’s working.
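As a sketch, that check could look like this. The CA file path is hypothetical, and the adminclient commands are shown as comments since they require a GoldenGate installation:

```shell
# Hypothetical path to a file containing the target deployment's root CA
export OGG_CLIENT_TLS_CAPATH=/home/oracle/certs/target_root_ca.pem

# With the variable set, start the admin client on the source host and try to
# connect to the target deployment through NGINX (commands shown as comments):
#   adminclient
#   OGG (not connected) 1> CONNECT https://oggvm2 DEPLOYMENT ogg_test_02 AS ogg_user
# If the handshake succeeds here, the same CA registered in Certificate
# Management should satisfy the distribution path as well.
echo "$OGG_CLIENT_TLS_CAPATH"
```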
The article Create Distribution Paths in NGINX-Secured GoldenGate 26ai appeared first on the dbi Blog.
PostgreSQL 19: Online enabling of data checksums
Since PostgreSQL 18 was released last year, checksums are enabled by default when a new cluster is initialized. This also means that you either need to explicitly disable them when you upgrade from a previous version of PostgreSQL, or enable them in the old version you want to upgrade from. The reason is that pg_upgrade will complain if the old and new versions of PostgreSQL do not have the same setting.
Enabling and disabling checksums offline has been possible for several PostgreSQL versions using pg_checksums, but as mentioned, this will not work if the cluster is running:
postgres@:/home/postgres/ [181] pg_checksums --version
pg_checksums (PostgreSQL) 18.1
postgres@:/home/postgres/ [181] pg_checksums --pgdata=$PGDATA
Checksum operation completed
Files scanned: 966
Blocks scanned: 2969
Bad checksums: 0
Data checksum version: 1 -> This means "enabled"
postgres@:/home/postgres/ [181] pg_checksums --pgdata=$PGDATA --disable
pg_checksums: error: cluster must be shut down
Even in PostgreSQL 19 this is still the same: you cannot use pg_checksums to enable or disable checksums while the cluster is running.
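For completeness, the offline procedure can be wrapped like this. This is a sketch that assumes $PGDATA is set and the PostgreSQL binaries are on the PATH; the function is only defined here, not invoked:

```shell
# Sketch of the offline procedure: stop the cluster, toggle checksums, restart.
# Assumes $PGDATA is set and pg_ctl/pg_checksums are on the PATH.
toggle_checksums_offline() {
  local mode="$1"                    # --enable or --disable
  pg_ctl -D "$PGDATA" stop -m fast   # pg_checksums refuses to run on a live cluster
  pg_checksums --pgdata="$PGDATA" "$mode"
  pg_ctl -D "$PGDATA" start -l "$PGDATA/startup.log"
}

# Example (not run here): toggle_checksums_offline --enable
```

The downtime this implies is exactly what the new online functions in version 19 avoid.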
What will change in version 19 is that two new functions have been added, one for enabling checksums and one for disabling checksums:
postgres=# \dfS *checksums*
List of functions
Schema | Name | Result data type | Argument data types | Type
------------+---------------------------+------------------+--------------------------------------------------------------+------
pg_catalog | pg_disable_data_checksums | void | | func
pg_catalog | pg_enable_data_checksums | void | cost_delay integer DEFAULT 0, cost_limit integer DEFAULT 100 | func
(2 rows)
As mentioned in the commit message, this is implemented with background workers. To actually see those processes on the operating system, let's create some data so the workers really have something to do:
postgres=# create table t ( a int, b text, c timestamptz );
CREATE TABLE
postgres=# insert into t select i, md5(i::text), now() from generate_series(1,10000000) i;
INSERT 0 10000000
As this is PostgreSQL 19, checksums are currently enabled:
postgres=# show data_checksums;
data_checksums
----------------
on
(1 row)
To disable that online, pg_disable_data_checksums is the function to use:
postgres=# select * from pg_disable_data_checksums();
pg_disable_data_checksums
---------------------------
(1 row)
postgres=# show data_checksums;
data_checksums
----------------
off
(1 row)
To enable checksums online, pg_enable_data_checksums is the function to use. If you want to see the background workers, you can grep for them in a second session on the operating system:
-- first session, connected to PostgreSQL
postgres=# select pg_enable_data_checksums();
pg_enable_data_checksums
--------------------------
(1 row)
postgres=# show data_checksums ;
data_checksums
----------------
on
(1 row)
-- second session, on the OS
postgres@:/home/postgres/postgresql/ [pgdev] watch "ps -ef | grep checksum | grep -v watch"
Every 2.0s: ps -ef | grep checksum | grep -v watch pgbox.it.dbi-services.com: 09:49:20 AM
in 0.006s (0)
postgres 4931 2510 0 09:49 ? 00:00:00 postgres: pgdev: datachecksum launcher
postgres 4932 2510 25 09:49 ? 00:00:00 postgres: pgdev: datachecksum worker
postgres 4964 4962 0 09:49 pts/2 00:00:00 grep checksum
Because enabling checksums comes with some overhead, there is throttling control, as is already the case for autovacuum:
postgres=# select pg_enable_data_checksums(cost_delay=>1,cost_limit=>3000);
pg_enable_data_checksums
--------------------------
(1 row)
Very nice, thanks to all involved.
The article PostgreSQL 19: Online enabling of data checksums appeared first on the dbi Blog.