Feed aggregator
Oracle AQ Scaling with PL/SQL Callbacks, DBMS_SCHEDULER Jobs
PL/SQL Execution Time
Partitions evicting sequence cached values
PDB Snapshot Copy/Carousel
Flashback Schema
Fuzzing PL/SQL
Posted by Pete On 20/04/26 At 09:34 AM
The day your ECM System became unusable
It started like any normal Tuesday.
No alarms. No warning signs. No urgent emails flooding inboxes before 8 AM.
Just a regular working day, until people tried to open documents.
At first, it was a single complaint.
“Hey, I can’t access the invoice archive.”
Then another.
“The HR files are taking forever to load.”
Then ten more.
“Nothing is opening.”
And within an hour, the entire ECM system, the one everyone depended on every single day, had become nearly unusable.
This wasn’t just a storage platform.
It was the operational backbone of the company.
Contracts, invoices, compliance records, procedures, employee files, approval workflows, audit trails, everything lived there.
Teams across finance, HR, legal, procurement, and operations relied on it constantly.
If the ECM system slowed down, business slowed down.
If it stopped… business stopped.
And that morning, it stopped.
The issue had been building quietly for months.
Nobody noticed because the pain came slowly.
A few extra seconds to retrieve a file, a delayed workflow here, a failed search there.
People adapted. They worked around it.
IT assumed it was “normal system aging.”
But underneath, the real problem was growing:
- No proper retention policy.
- No archive strategy.
- No performance monitoring.
- No governance around content growth.
Years of duplicate files and outdated document versions no one needed.
Massive email attachments stored for no reason, and expired records that should have been deleted years ago.
With all of that, the repository had become a digital landfill.
As a result, storage became bloated, indexes became overloaded, and search performance collapsed. Backups also took a dangerous amount of time.
Finally, one morning, the system simply couldn’t keep up anymore.
Not because of one dramatic failure, but rather hundreds of small, ignored ones.
The finance department could not process payments, the HR department could not retrieve employee documentation, and the legal department could not access signed contracts.
In practical terms, approvals stalled, customer responses were delayed, and audit preparation froze.
Executives wanted answers, users wanted miracles, and IT needed time.
No one had enough of it.
Emergency meetings replaced actual work.
Consultants were called in.
Recovery plans were hastily put together.
Suddenly, budgets appeared for problems that had been ignored for years.
But the real cost wasn’t just technical; it was a loss of trust.
People stopped trusting the system.
Once users lose confidence in your ECM platform, rebuilding that trust is far more difficult than resolving storage issues.
Most ECM failures don’t happen overnight.
They happen silently, slowly, and politely.
Then, one day, they become impossible to ignore.
Governance may feel boring until disaster makes it urgent.
Retention policies feel optional until storage becomes a crisis.
Performance reviews feel like “later” work until later arrives.
An ECM system doesn’t fail because it stores too much.
The system fails because no one decides what shouldn’t be stored.
The lesson was painful.
You don’t manage an ECM system when it breaks.
You manage it so it never gets there.
By the time everyone notices, it’s already too late.
Unfortunately, this scenario isn’t rare. As consultants at dbi services, we’re often called in after the damage has been done. Of course, our job is to find solutions, but it’s still better to act before a crisis hits. That way, we have time to implement strategies without rushing.
The article The day your ECM System became unusable first appeared on dbi Blog.
Gemma 4 for Structured Data Extraction: Can It Beat Qwen 3.5?
PostgreSQL 19: Importing statistics from remote servers
Usually we do not see many foreign data wrappers in use at our customers. Most of them use the foreign data wrapper for Oracle to fetch data from Oracle systems, some use the foreign data wrapper for files, but that’s mostly it. Only one customer (that I am aware of) actually uses the foreign data wrapper for PostgreSQL, which obviously connects PostgreSQL to PostgreSQL. Some foreign data wrappers allow for collecting optimizer statistics on foreign tables, and the foreign data wrappers for Oracle and PostgreSQL are examples of this. These local statistics are better than nothing, but you need to take care that they are up to date, and for that you need a fresh copy of the statistics of the remote data. PostgreSQL 19 will come with a solution for that when it comes to the foreign data wrapper for PostgreSQL. Actually, the solution is not in the foreign data wrapper itself but in the underlying framework, and postgres_fdw can use it from version 19 on.
To look at this we need a simple setup, so we initialize two new PostgreSQL 19 clusters and connect them with postgres_fdw:
postgres@:/home/postgres/ [pgdev] initdb --version
initdb (PostgreSQL) 19devel
postgres@:/home/postgres/ [pgdev] initdb --pgdata=/var/tmp/pg1
postgres@:/home/postgres/ [pgdev] initdb --pgdata=/var/tmp/pg2
postgres@:/home/postgres/ [pgdev] echo "port=8888" >> /var/tmp/pg1/postgresql.auto.conf
postgres@:/home/postgres/ [pgdev] echo "port=8889" >> /var/tmp/pg2/postgresql.auto.conf
postgres@:/home/postgres/ [pgdev] pg_ctl --pgdata=/var/tmp/pg1/ start
postgres@:/home/postgres/ [pgdev] pg_ctl --pgdata=/var/tmp/pg2/ start
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "create extension postgres_fdw"
CREATE EXTENSION
postgres@:/home/postgres/ [pgdev] psql -p 8889 -c "create table t ( a int, b text, c timestamptz )"
CREATE TABLE
postgres@:/home/postgres/ [pgdev] psql -p 8889 -c "insert into t select i, md5(i::text), now() from generate_series(1,1000000) i"
INSERT 0 1000000
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "create server srv_pg2 foreign data wrapper postgres_fdw options(port '8889', dbname 'postgres')"
CREATE SERVER
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "create user mapping for postgres server srv_pg2 options (user 'postgres', password 'postgres')"
CREATE USER MAPPING
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "create foreign table ft (a int, b text, c timestamptz) server srv_pg2 options (schema_name 'public', table_name 't')"
CREATE FOREIGN TABLE
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select count(*) from ft"
count
---------
1000000
(1 row)
What we have now is one table in the cluster on port 8889 and this table is attached as a foreign table in the cluster on port 8888.
We already have statistics on the source table in the cluster on port 8889:
postgres@:/home/postgres/ [pgdev] psql -p 8889 -c "select reltuples::bigint from pg_class where relname = 't'"
reltuples
-----------
1000000
(1 row)
… but we do not have any statistics on the foreign table in the cluster on port 8888:
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select reltuples::bigint from pg_class where relname = 'ft'"
reltuples
-----------
-1
(1 row)
Only after manually analyzing the foreign table do the statistics show up:
postgres@:/home/postgres/ [DEV] psql -p 8888 -c "analyze ft"
ANALYZE
postgres@:/home/postgres/ [DEV] psql -p 8888 -c "select reltuples::bigint from pg_class where relname = 'ft'"
reltuples
-----------
1000000
(1 row)
The issue with these local statistics is that they become outdated when the source table is modified:
postgres@:/home/postgres/ [pgdev] psql -p 8889 -c "insert into t select i, md5(i::text), now() from generate_series(1000001,2000000) i"
INSERT 0 1000000
postgres@:/home/postgres/ [DEV] psql -p 8889 -c "select reltuples::bigint from pg_class where relname = 't'"
reltuples
-----------
2000000
(1 row)
postgres@:/home/postgres/ [DEV] psql -p 8888 -c "select reltuples::bigint from pg_class where relname = 'ft'"
reltuples
-----------
1000000
(1 row)
As you can see, the row counts do not match anymore. Once the local statistics are gathered we again have the same picture on both sides:
postgres@:/home/postgres/ [DEV] psql -p 8888 -c "analyze ft"
ANALYZE
postgres@:/home/postgres/ [DEV] psql -p 8888 -c "select reltuples::bigint from pg_class where relname = 'ft'"
reltuples
-----------
2000000
(1 row)
One way to avoid this issue, even before PostgreSQL 19, is to tell postgres_fdw to obtain estimates from the remote server (by issuing EXPLAIN there) instead of relying on local statistics:
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "alter foreign table ft options ( use_remote_estimate 'true' )"
In this case the local statistics will not be used, but of course this comes with the overhead of additional round trips to the remote side during planning.
From PostgreSQL 19 there is another option:
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "alter foreign table ft options ( restore_stats 'true' )"
ALTER FOREIGN TABLE
This option tells postgres_fdw to import the statistics from the remote side and store them locally. If that fails, it falls back to analyzing the table locally as above. The commit message explains this nicely:
Add support for importing statistics from remote servers.
Add a new FDW callback routine that allows importing remote statistics
for a foreign table directly to the local server, instead of collecting
statistics locally. The new callback routine is called at the beginning
of the ANALYZE operation on the table, and if the FDW failed to import
the statistics, the existing callback routine is called on the table to
collect statistics locally.
Also implement this for postgres_fdw. It is enabled by "restore_stats"
option both at the server and table level. Currently, it is the user's
responsibility to ensure remote statistics to import are up-to-date, so
the default is false.
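Per the commit message, the option is available at the server level as well as at the table level. A hedged sketch of the server-level variant (not yet verified against a final 19 release; srv_pg2 is the server from the setup above):

```sql
-- Assumption based on the commit message: restore_stats can also be set
-- on the foreign server, applying to all foreign tables defined on it.
alter server srv_pg2 options ( add restore_stats 'true' );
```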
As usual, thanks to all involved.
The article PostgreSQL 19: Importing statistics from remote servers first appeared on dbi Blog.
GoldenGate 26ai Logs Explained
GoldenGate logs are a powerful source of information when troubleshooting or analyzing your deployments. However, there is at least one log file per service in the Microservices Architecture. This means that for a GoldenGate setup with multiple deployments, you can quickly have tens of log files to search for information. In that context, knowing where to look is important, especially if you are upgrading from a Classic Architecture where you only had to check the ggserr.log file.
The restapi.log files are one of the most important logs because of the amount of information they provide. They allow you to completely understand what is happening behind the scenes. It takes time to learn how to analyze them, but I wrote a dedicated article on REST API log analysis to help you through this. There is one restapi.log file per deployment, because the REST API calls are always made to a specific deployment. This is also true when using a reverse proxy.
These restapi.log files are also a source of information for other GoldenGate log files. For instance, REST API logs on the AI Service will be cleaned and filtered before ending up in the log dedicated to the AI Service. This is another reason why you should understand the content of these restapi.log files.
Here is a log example from a restapi.log file. Again, for more details, read the blog linked above.
2026-04-09 19:46:36.789+0000 ERROR|RestAPI.adminsrvr | Request #1581: {
"context": {
"httpContextKey": 140353514441552,
"verbId": 7,
"verb": "PATCH",
"originalVerb": "PATCH",
"uri": "/services/v2/extracts/EXTSRC",
"protocol": "http",
"headers": {
"X-OGG-Context": "services/ogg_test_01/adminsrvr",
"X-OGG-Deployment": "ogg_test_01",
"X-OGG-Service": "adminsrvr",
"X-OGG-Version": "v2",
"X-OGG-Resource": "extracts/EXTSRC",
"X-Real-IP": "**",
"X-Forwarded-For": "**",
"X-Forwarded-Host": "vmogg",
"X-Forwarded-Proto": "https",
"X-Forwarded-Server": "oggvm1",
"X-SSL-Client-Verify": "NONE",
"Connection": "keep-alive",
"Host": "vmogg",
"Content-Length": "75",
"User-Agent": "python-requests/2.32.3",
"Accept-Encoding": "gzip, deflate",
"Accept": "application/json",
"Content-Type": "application/json",
"Cookie": "** Masked **",
"Authorization": "** Masked **",
"X-OGG-Requestor-Id": "",
"X-OGG-Feature-List": ""
},
"host": "vmogg",
"securityEnabled": false,
"authorization": {
"authUserName": "ogg",
"authPassword": "** Masked **",
"authMode": "Basic",
"authUserRole": "Security"
},
"requestId": 4,
"uriTemplate": "/services/{version}/extracts/{extract}"
},
"content": {
"managedProcessSettings": "ogg:managedProcessSettings:NonExistingProfile"
},
"isScaRequest": true,
"parameters": {
"uri": {
"extract": "EXTSRC",
"version": "v2"
}
}
}
Response: {
"context": {
"httpContextKey": 140353514441552,
"requestId": 4,
"code": "404 Not Found",
"headers": {
"Content-Type": "application/json",
"Set-Cookie": "** Masked **"
},
"Content-Type": "application/json",
"contentType": "application/json"
},
"isScaResponse": true,
"content": {
"$schema": "api:standardResponse",
"links": [
{
"rel": "canonical",
"href": "https://vmogg/services/ogg_test_01/adminsrvr/v2/extracts/EXTSRC",
"mediaType": "application/json"
},
{
"rel": "self",
"href": "https://vmogg/services/ogg_test_01/adminsrvr/v2/extracts/EXTSRC",
"mediaType": "application/json"
}
],
"messages": [
{
"$schema": "ogg:message",
"title": "The managed process settings profile 'ogg:managedProcessSettings:NonExistingProfile' does not exist.",
"code": "OGG-08122",
"severity": "ERROR",
"issued": "2026-04-09T19:46:36Z",
"type": "https://docs.oracle.com/en/middleware/goldengate/core/23.26/error-messages/"
},
{
"$schema": "ogg:message",
"title": "The item type with name 'ogg:managedProcessSettings:NonExistingProfile' does not exist.",
"code": "OGG-12029",
"severity": "INFO",
"issued": "2026-04-09T19:46:36Z",
"type": "https://docs.oracle.com/en/middleware/goldengate/core/23.26/error-messages/"
}
]
}
}
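Given the request/response structure above, a first-pass triage of a restapi.log often comes down to extracting the failing requests. A minimal sketch (the default log path is an example; point it at your deployment's var/log directory):

```shell
#!/bin/sh
# Sketch: list the most recent ERROR entries in a restapi.log,
# with line numbers so you can jump to the full request/response dump.
LOG=${1:-ogg_test_01/var/log/restapi.log}   # example path, adjust to your setup
grep -n "ERROR|RestAPI" "$LOG" | tail -20
```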
Admin Client Log File
Depending on your configuration, the adminclient log file might not always be in the same location. More specifically, you can have multiple adminclient.log files for a single GoldenGate installation. The reason behind this is that the logs go to the location of your OGG_VAR_HOME environment variable. The adminclient.log can be located:
- In your Service Manager var/log directory, if your GoldenGate user has the Service Manager OGG_VAR_HOME set in the environment.
- In your deployment var/log directory, if you change the OGG_VAR_HOME environment variable when switching between deployments at the OS level. This is not recommended, as it can lead to multiple adminclient.log files and mixed-up logs between deployments.
- In your GoldenGate binaries, if you misconfigured your environment and are calling the adminclient with an empty OGG_VAR_HOME variable. This is not ideal, as you could lose the log file after patching or upgrading.
There is not much to see in this file with the default logging configuration, but it can be used to troubleshoot connection issues with the adminclient.
Let’s look at the var/log folder of the Service Manager deployment.
oggsm/var/log:
ServiceManager.log
restapi.log
AIService.log
In GoldenGate 26ai, there are three different logs for the Service Manager:
ServiceManager.log: In a healthy installation, this log file should only contain logs generated when services start or stop. But if your Administration Server doesn’t start, this log is a good starting point for troubleshooting.
2026-04-09T19:20:28.124+0000 INFO | Task 'ogg_test_01:adminsrvr' started with process id 2505 (ogg_test_01:adminsrvr)
2026-04-09T19:20:28.159+0000 INFO | Task 'ServiceManager:AIService' started with process id 2515 (ServiceManager:AIService)
2026-04-09T19:20:28.182+0000 INFO | Task 'ogg_test_01:distsrvr' started with process id 2522 (ogg_test_01:distsrvr)
2026-04-09T19:20:28.215+0000 INFO | Task 'ogg_test_01:recvsrvr' started with process id 2542 (ogg_test_01:recvsrvr)
2026-04-09T19:20:28.234+0000 INFO | Task 'ogg_test_01:pmsrvr' started with process id 2553 (ogg_test_01:pmsrvr)
2026-04-09T19:20:29.243+0000 INFO | Installation topology updated (ServiceManager.Topology)
restapi.log: See the chapter above about REST API Log Files.
AIService.log: This is where logs from the AI Service are gathered. It mostly consists of a cleaned collection of the REST API logs from calls made against the AI Service. For instance, here is a log you can get when adding an AI Provider to the Service Manager.
2026-04-09 19:32:21.056 DATA |AIService |AIService.RestApi | 247 RequestHandlers.cpp | 00000000-0000-0000-D5F8-000000000006> {"$schema":"api:standardResponse","links":[],"response":{"providers":[{"id":"1","name":"GeminiAiStudio","type":"gemini","baseUrl":"https://generativelanguage.googleapis.com/v1beta/models/gemini-flash-latest:generateContent","authentication":{"type":"api_key","configured":true},"links":[{"rel":"parent","href":"/api/v1/providers","mediaType":"application/json"},{"rel":"canonical","href":"/api/v1/providers/1","mediaType":"application/json"}]}],"metadata":{"fromCache":false,"providerCount":1,"lastUpdatedEpochMicroseconds":1775827941054749}},"messages":[]} (Thread 2)
Deployment Log Files
GoldenGate deployments have more log file types than the Service Manager. In the var/log folder of your deployments, you should have the following files:
ogg_test_01/var/log:
adminsrvr.log
distsrvr.log
ER-events.log
extract.log
ggserr.log
pmsrvr.log
recvsrvr.log
replicat.log
restapi.log
adminsrvr.log, distsrvr.log, recvsrvr.log and pmsrvr.log: These files contain all the information, warning and error messages from actions or events in the main GoldenGate services (Administration, Distribution, Receiver and Performance Metrics Services). They are not as extensive as what you can find in the restapi.log file, but they provide a really good overview of what is happening inside your deployment.
ER-events.log: This is a log file that can be used for monitoring your GoldenGate deployments. All warning and error messages on your extracts and replicats are available here. It is sometimes verbose, but still very useful for monitoring.
2026-04-10T07:11:17.844+0000 ERROR OGG-01668 Oracle GoldenGate Capture for Oracle, EXTA.prm: PROCESS ABENDING.
2026-04-10T07:12:24.447+0000 WARNING OGG-06658 Oracle GoldenGate Capture for Oracle, EXTB.prm: Unable to restore a failed connection to the database after 3 retries.
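Since every extract and replicat problem lands in ER-events.log, the file lends itself to simple scripted monitoring. A minimal sketch that counts ERROR events per process, assuming the line format shown above (the .prm file name identifies the process):

```shell
#!/bin/sh
# Sketch: count ERROR events per process in an ER-events.log.
# Assumes lines like: "... ERROR OGG-01668 Oracle GoldenGate Capture for Oracle, EXTA.prm: ..."
LOG=${1:-ER-events.log}
grep " ERROR " "$LOG" \
  | sed -n 's/.*Oracle, \([A-Z0-9]*\)\.prm.*/\1/p' \
  | sort | uniq -c | sort -rn
```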
extract.log and replicat.log: In these log files, you can find status information, statistics, and transactions processed by your extracts and replicats. This does not provide fundamentally new information compared to other sources (the adminclient, for instance), but it is a useful view when observing all processes at the same time.
2026-04-08T09:13:10.249+0000 INFO | INFO OGG-01026 Oracle GoldenGate Capture for Oracle, EXTA.prm: Rolling over remote file PDB1/aa000000321. (main)
2026-04-08T09:14:10.540+0000 INFO | INFO OGG-01026 Oracle GoldenGate Capture for Oracle, EXTB.prm: Rolling over remote file PDB1/ab000000645. (main)
2026-04-08T09:14:32.963+0000 INFO | INFO OGG-01026 Oracle GoldenGate Capture for Oracle, EXTC.prm: Rolling over remote file PDB1/ac000000128. (main)
ggserr.log: This log serves the same purpose as it did in the old Classic Architecture, containing all information about your deployment. Contrary to what its name suggests, it does not only contain the errors happening in your deployment.
restapi.log: See the chapter above about REST API Log Files.
In the GoldenGate web UI, you can access, filter, or query some of the logs listed above. From the Service Manager, the Diagnosis Log tab displays the ServiceManager.log file with filtering options.
You can also quickly enable debug logs for the Service Manager, which is usually only needed when Oracle Support requests it. This will generate a ServiceManager-debug.log file in your var/log directory, which can be downloaded from the web UI and sent to Oracle Support or your dbi GoldenGate expert for analysis.
When connected to a deployment in the web UI, you can also view different logs.
- From the home page, the Critical Events view displays the content of the ER-events.log file.
- From the Services page, you have access to the Diagnosis Log of each service. They respectively display the adminsrvr.log, distsrvr.log, recvsrvr.log and pmsrvr.log files. Similar to what I just explained, you can also enable debug logs temporarily if needed.
The article GoldenGate 26ai Logs Explained first appeared on dbi Blog.
It's Been a Minute...
Hi.
It's been a while.
Not sure if this is a one-off or not, but I'm going to give it another go. I miss writing (publicly).
Like, this is super awkward. "What do I say?"
"How are you?"
"Things are good."
"How are you?"
Create Distribution Paths in NGINX-Secured GoldenGate 26ai
Setting up a distribution path is an important step when connecting two GoldenGate deployments. However, there are so many connection options that it’s hard to know how to proceed. In this blog, I will present the way to connect two deployments secured through NGINX.
Here are the prerequisites before attempting the distribution path creation:
- Having two servers, each running a GoldenGate Microservices Architecture. I will use the latest version here, 26ai.
- Having a running extract on the source setup.
- Port openings: port 443 should be open between the two deployments.
In my environment, I have the following:
- Source environment: oggvm1, with a deployment named ogg_test_01, a CDB01 with a PDB1 containing an APP_PDB1 schema and a table1 table.
- Target environment: oggvm2, with the equivalent deployment ogg_test_02, CDB02, PDB2, APP_PDB2 schema, and a table2 table.
With all of this in mind, here are the three steps needed to create a working distribution path between two GoldenGate deployments secured with NGINX:
- Create a Path Connection
- Add the certificates used in NGINX to secure the target deployment to the source’s certificate management store.
- Create and start the distribution path.
To open a connection to the target deployment ogg_test_02, the source deployment needs a path connection. As explained in a previous blog, path connections are aliases of an existing user on a target deployment.
It is recommended to separate roles and not use the administrator account of your target deployment, so let’s first create the user.
- On ogg_test_02 (target deployment), go to the User Administration tab and add a new user. For the role, Oracle recommends using the Operator role, so there is no reason to use the higher-privileged Administrator or Security roles.
- On ogg_test_01 (source deployment), create the path connection. Only the Userid and Password fields must match what you just created on the target. The alias is only known on the source side and doesn’t have to match the username set on the target deployment. The alias defined here in the source deployment will only appear when choosing a connection during the distribution path creation.
Because your deployments are secured with NGINX, you have to make sure that the certificates being used by one deployment are recognized by the other deployment. To do so, on the source only, you have to register the root certificate authority of the target deployment.
This can only be done at the Service Manager level. In the Service Manager web UI, go to Certificate Management and add a CA Certificate in the source deployment ogg_test_01. You have two options here.
- If the CA only needs to be registered on this specific deployment, you can register it with the Local option. This is the more secure option, but it is very often not needed.
- If the CA registration should be shared with other deployments, you can register it with the Shared option.
Paste the root certificate file used to secure your second deployment. The name used must be unique but will not be used anywhere else in the process.
How should I register a certificate chain?
If the certificate file contains multiple certificates, GoldenGate doesn’t allow you to register them in one go. Instead, you need to break down the file and register each part individually. Again, the name and order in which you register the certificates do not matter, except for management purposes. I described the issue in more detail in a blog about the OGG-30007 error.
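Breaking down the bundle can be scripted instead of done by hand. A small sketch, assuming a standard PEM bundle where each certificate is delimited by BEGIN/END CERTIFICATE markers (file names are examples):

```shell
#!/bin/sh
# Sketch: split a PEM bundle into chain-01.pem, chain-02.pem, ...
# so each certificate can be registered individually in GoldenGate.
BUNDLE=${1:-chain.pem}   # example file name
awk '/-----BEGIN CERTIFICATE-----/ { close(out); out = sprintf("chain-%02d.pem", ++n) }
     out { print > out }' "$BUNDLE"
```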
We can now finally create the distribution path. On the source deployment, go to the Distribution Service Paths tab and register the path as follows:
- Path information: Specify the path name. If possible, make the target (and the source) visible in the name, to ease management when having multiple distribution paths.
- Source options: Here, you should only select the extract at the source of this distribution path. The rest will be filled automatically or can stay with the default value.
- Target options: This is where the configuration is specific. Because we are using an NGINX-secured deployment on the target, you must click on Reverse proxy enabled and choose the wss target protocol.
- Advanced, Filtering and Managed Options: Nothing here is specific to our setup, but you can of course customize these options as needed.
And that’s it. Your deployments are now connected, and you should see the trail files on the target once you start the distribution path.
oracle@oggvm2:~/ ll $OGG_DEPLOYMENT_HOME/var/lib/data/PDB1
total 0
-rw-r-----. 1 oracle oinstall 0 Mar 22 07:34 bb000000000
The remote peer submitted a certificate that failed validation
If your distribution path doesn’t start and generates a “certificate that failed validation” error, it means that you incorrectly registered your certificates. Make sure that the target deployment’s certificates are registered on the source deployment’s service manager and not the other way around.
You can also try to use the certificate in an OGG_CLIENT_TLS_CAPATH environment variable on the source and connect with the adminclient to check if it’s working.
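Independently of GoldenGate, you can also check with plain openssl that the registered root CA actually validates the certificate NGINX serves on the target. A sketch, with example host and file names:

```shell
#!/bin/sh
# Sketch: fetch the certificate served by the target deployment's NGINX
# and verify it against the root CA registered on the source side.
CA=${1:-root_ca.pem}   # the CA file you registered in Certificate Management (example name)
HOST=${2:-oggvm2}      # the NGINX-secured target host (example name)
openssl s_client -connect "$HOST:443" -showcerts </dev/null 2>/dev/null \
  | openssl x509 -outform PEM > target_cert.pem
openssl verify -CAfile "$CA" target_cert.pem
```

If the output is not "target_cert.pem: OK", the distribution path will fail with the same validation error.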
The article Create Distribution Paths in NGINX-Secured GoldenGate 26ai first appeared on dbi Blog.
PostgreSQL 19: Online enabling of data checksums
Since PostgreSQL 18, which was released last year, checksums are enabled by default when a new cluster is initialized. This also means that when you upgrade from a previous version, you either need to explicitly disable checksums in the new cluster or enable them in the old cluster you are upgrading from. The reason is that pg_upgrade will complain if the old and the new version of PostgreSQL do not have the same setting for this.
Enabling and disabling checksums in offline mode has been possible for several versions of PostgreSQL using pg_checksums, but as mentioned, this will not work if the cluster is running:
postgres@:/home/postgres/ [181] pg_checksums --version
pg_checksums (PostgreSQL) 18.1
postgres@:/home/postgres/ [181] pg_checksums --pgdata=$PGDATA
Checksum operation completed
Files scanned: 966
Blocks scanned: 2969
Bad checksums: 0
Data checksum version: 1 -> This means "enabled"
postgres@:/home/postgres/ [181] pg_checksums --pgdata=$PGDATA --disable
pg_checksums: error: cluster must be shut down
Even in PostgreSQL 19 this is still the same: you cannot use pg_checksums to enable or disable checksums while the cluster is running.
What will change in version 19 is that two new functions have been added, one for enabling checksums and one for disabling checksums:
postgres=# \dfS *checksums*
List of functions
Schema | Name | Result data type | Argument data types | Type
------------+---------------------------+------------------+--------------------------------------------------------------+------
pg_catalog | pg_disable_data_checksums | void | | func
pg_catalog | pg_enable_data_checksums | void | cost_delay integer DEFAULT 0, cost_limit integer DEFAULT 100 | func
(2 rows)
As mentioned in the commit message, this is implemented using background workers. To actually see those processes on the operating system, let's create some data so the workers really have something to do:
postgres=# create table t ( a int, b text, c timestamptz );
CREATE TABLE
postgres=# insert into t select i, md5(i::text), now() from generate_series(1,10000000) i;
INSERT 0 10000000
As this is version 19 of PostgreSQL, checksums are currently enabled:
postgres=# show data_checksums;
data_checksums
----------------
on
(1 row)
To disable that online, pg_disable_data_checksums is the function to use:
postgres=# select * from pg_disable_data_checksums();
pg_disable_data_checksums
---------------------------
(1 row)
postgres=# show data_checksums;
data_checksums
----------------
off
(1 row)
To enable checksums online, pg_enable_data_checksums is the function to use. If you want to see the background workers, you might grep for them in a second session on the operating system:
-- first session, connected to PostgreSQL
postgres=# select pg_enable_data_checksums();
pg_enable_data_checksums
--------------------------
(1 row)
postgres=# show data_checksums ;
data_checksums
----------------
on
(1 row)
-- second session, on the OS
postgres@:/home/postgres/postgresql/ [pgdev] watch "ps -ef | grep checksum | grep -v watch"
Every 2.0s: ps -ef | grep checksum | grep -v watch pgbox.it.dbi-services.com: 09:49:20 AM
in 0.006s (0)
postgres 4931 2510 0 09:49 ? 00:00:00 postgres: pgdev: datachecksum launcher
postgres 4932 2510 25 09:49 ? 00:00:00 postgres: pgdev: datachecksum worker
postgres 4964 4962 0 09:49 pts/2 00:00:00 grep checksum
Because enabling checksums comes with some overhead, there is throttling control, as is already the case for autovacuum:
postgres=# select pg_enable_data_checksums(cost_delay=>1,cost_limit=>3000);
pg_enable_data_checksums
--------------------------
(1 row)
Very nice, thanks to all involved.
The article PostgreSQL 19: Online enabling of data checksums first appeared on dbi Blog.
SQL macro with different table
user_tab_identity_cols.generation_type and BY DEFAULT ON NULL
Reducing SS contention with Local Temp Tablespace fails
M-Files didn’t fix your problem, your organization did.
Last week I wrote about the reasons why ECM projects fail.
M-Files is a wonderful tool and probably my favorite ECM, but it won’t solve your problem.
Companies that initiate digitalization projects like to convince themselves that investing in an ECM will solve all their problems: no more chaos, data loss, inefficiency, or lack of collaboration.
I’m sorry to break your dreams, but it won’t!
Tools Amplify
Software like M-Files is powerful. No doubt about it.
It structures information, automates workflows, enables collaboration, and improves visibility. However, all of that depends on one thing:
What you put into it.
If your processes are unclear, M-Files will exacerbate the confusion.
Likewise, if your governance is weak, M-Files will amplify that weakness.
M-Files won’t magically make your teams start collaborating.
Tools don’t create discipline; they expose it, or the lack of it.
Think of M-Files less as a solution and more as a multiplier.
- Good processes become great.
- Broken processes break faster.
So why do organizations keep expecting tools to “fix everything”?
Because it’s easier.
Buying software seems like progress. It’s tangible, measurable and budgeted. It shows action.
On the other hand, fixing an organization requires:
- Alignment between teams
- Clear ownership
- Hard decisions on processes
- Cultural change
That’s messy, political, and slow.
Instead, companies subconsciously shift the responsibility.
Once we have the tool, things will improve.
But what if they don’t?
They blame:
- The tool
- The implementation
- The users
Almost never the organization itself.
The Real Success Factors
When M-Files works really well, it’s never just about the tool.
It’s because a few key things were already in place or built alongside it.
Before digitizing anything, successful teams answer:
- What actually happens today?
- What should happen?
- Who is responsible at each step?
M-Files then becomes the execution layer, not the definition layer. It is not the job of M-Files to determine what should be done, but rather to ensure that the process is under control and follows organizational rules.
Ownership & Accountability
Every document, workflow, and decision needs an owner.
Without that:
- Workflows are stuck.
- Approvals take forever.
- Nobody feels responsible.
With it:
- M-Files flows naturally.
- Decisions are made faster.
- Accountability is visible.
Over-engineered systems fail.
The best M-Files setups are:
- Simple
- Intuitive
- Close to how people already work
Not “perfect.” Just usable and adopted.
Improving a work process is possible at any stage, but simplifying an existing one is challenging.
Change Management
The biggest challenge isn’t technical; it’s human.
People need to:
- Understand why things are changing
- Trust the system
- See personal value
Without that, even the best setup gets ignored.
Continuous Improvement
Successful organizations don’t “finish” their M-Files project.
They:
- Iterate
- Adjust workflows
- Refine metadata
- Listen to users
The system evolves with the business, which is the most important aspect, in my opinion, and one that is often overlooked.
Thinking differently
The real question is not: “What can M-Files do for us?”
Instead think: “Are we mature enough to successfully implement M-Files?”
It’s obviously not easy to answer that question, especially when you’re focused on your core business.
And that’s perfectly normal; that’s why we’re here to bridge the gap between software and the realities of business.
We’re not just here to install and configure systems; when we take on a digital transformation project, our primary role is to assess the organization’s readiness from an outside perspective, without passing judgment. Then we identify the necessary steps to reach the target.
To me, the real reward at the end of a project isn’t when the client says, “M-Files works well,” but when they say, “Thanks to M-Files, we’ve improved our collaboration and streamlined our interactions” and that makes a big difference.
The article “M-Files didn’t fix your problem, your organization did.” first appeared on the dbi Blog.
PostgreSQL 19: get_*_ddl functions
PostgreSQL already comes with plenty of system information functions to reconstruct the commands to create various objects, e.g. constraints or indexes. Starting with PostgreSQL 19, more functions will be available, namely these:
- pg_get_database_ddl
- pg_get_role_ddl
- pg_get_tablespace_ddl
As the names imply, they can be used to reconstruct the commands to create a database, a role, or a tablespace.
To see what they do, let’s create a small setup:
postgres=# select version();
version
---------------------------------------------------------------------------------------
PostgreSQL 19devel dbi services build on x86_64-linux, compiled by gcc-15.1.1, 64-bit
(1 row)
postgres=# create user u with login password 'u';
CREATE ROLE
postgres=# \! mkdir /var/tmp/tbs
postgres=# create tablespace tbs location '/var/tmp/tbs' with ( random_page_cost = 1.1 );
CREATE TABLESPACE
postgres=# create database d with owner = u tablespace = tbs;
CREATE DATABASE
postgres=# alter database d connection limit = 10;
ALTER DATABASE
postgres=# \l
List of databases
Name | Owner | Encoding | Locale Provider | Collate | Ctype | Locale | ICU Rules | Access privileges
-----------+----------+----------+-----------------+-------------+-------------+-------------+-----------+-----------------------
d | u | UTF8 | icu | en_US.UTF-8 | en_US.UTF-8 | en-US-x-icu | |
postgres | postgres | UTF8 | icu | en_US.UTF-8 | en_US.UTF-8 | en-US-x-icu | |
template0 | postgres | UTF8 | icu | en_US.UTF-8 | en_US.UTF-8 | en-US-x-icu | | =c/postgres +
| | | | | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | icu | en_US.UTF-8 | en_US.UTF-8 | en-US-x-icu | | =c/postgres +
| | | | | | | | postgres=CTc/postgres
(4 rows)
To get the commands to recreate that database, the new function “pg_get_database_ddl” can be used:
postgres=# select * from pg_get_database_ddl ( 'd'::regdatabase );
pg_get_database_ddl
---------------------------------------------------------------------------------------------------------------------------------------------------------
CREATE DATABASE d WITH TEMPLATE = template0 ENCODING = 'UTF8' LOCALE_PROVIDER = icu LOCALE = 'en_US.UTF-8' ICU_LOCALE = 'en-US-x-icu' TABLESPACE = tbs;
ALTER DATABASE d OWNER TO u;
ALTER DATABASE d CONNECTION LIMIT = 10;
(3 rows)
There are some options to control the output format and what gets reconstructed, e.g.:
postgres=# select * from pg_get_database_ddl ( 'd'::regdatabase, 'pretty', 'true' );
pg_get_database_ddl
-----------------------------------------
CREATE DATABASE d +
WITH TEMPLATE = template0 +
ENCODING = 'UTF8' +
LOCALE_PROVIDER = icu +
LOCALE = 'en_US.UTF-8' +
ICU_LOCALE = 'en-US-x-icu' +
TABLESPACE = tbs;
ALTER DATABASE d OWNER TO u;
ALTER DATABASE d CONNECTION LIMIT = 10;
(3 rows)
postgres=# select * from pg_get_database_ddl ( 'd'::regdatabase, 'pretty', 'true', 'owner', 'false' );
pg_get_database_ddl
-----------------------------------------
CREATE DATABASE d +
WITH TEMPLATE = template0 +
ENCODING = 'UTF8' +
LOCALE_PROVIDER = icu +
LOCALE = 'en_US.UTF-8' +
ICU_LOCALE = 'en-US-x-icu' +
TABLESPACE = tbs;
ALTER DATABASE d CONNECTION LIMIT = 10;
(2 rows)
postgres=# select * from pg_get_database_ddl ( 'd'::regdatabase, 'pretty', 'true', 'owner', 'false', 'tablespace', 'false' );
pg_get_database_ddl
-----------------------------------------
CREATE DATABASE d +
WITH TEMPLATE = template0 +
ENCODING = 'UTF8' +
LOCALE_PROVIDER = icu +
LOCALE = 'en_US.UTF-8' +
ICU_LOCALE = 'en-US-x-icu';
ALTER DATABASE d CONNECTION LIMIT = 10;
(2 rows)
The other two functions behave the same (but do not have exactly the same options):
postgres=# select * from pg_get_tablespace_ddl('tbs');
pg_get_tablespace_ddl
---------------------------------------------------------------
CREATE TABLESPACE tbs OWNER postgres LOCATION '/var/tmp/tbs';
ALTER TABLESPACE tbs SET (random_page_cost='1.1');
(2 rows)
postgres=# select * from pg_get_tablespace_ddl('tbs', 'pretty', 'true');
pg_get_tablespace_ddl
----------------------------------------------------
CREATE TABLESPACE tbs +
OWNER postgres +
LOCATION '/var/tmp/tbs';
ALTER TABLESPACE tbs SET (random_page_cost='1.1');
(2 rows)
postgres=# select * from pg_get_tablespace_ddl('tbs', 'pretty', 'true', 'owner', 'false');
pg_get_tablespace_ddl
----------------------------------------------------
CREATE TABLESPACE tbs +
LOCATION '/var/tmp/tbs';
ALTER TABLESPACE tbs SET (random_page_cost='1.1');
(2 rows)
… and finally for the roles:
postgres=# select * from pg_get_role_ddl ('u');
pg_get_role_ddl
--------------------------------------------------------------------------------------------
CREATE ROLE u NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN NOREPLICATION NOBYPASSRLS;
(1 row)
postgres=# select * from pg_get_role_ddl ('u', 'pretty', 'true');
pg_get_role_ddl
-------------------
CREATE ROLE u +
NOSUPERUSER +
INHERIT +
NOCREATEROLE +
NOCREATEDB +
LOGIN +
NOREPLICATION+
NOBYPASSRLS;
(1 row)
postgres=# select * from pg_get_role_ddl ('u', 'pretty', 'true', 'memberships', 'false');
pg_get_role_ddl
-------------------
CREATE ROLE u +
NOSUPERUSER +
INHERIT +
NOCREATEROLE +
NOCREATEDB +
LOGIN +
NOREPLICATION+
NOBYPASSRLS;
(1 row)
Nice, and again: Thanks to all involved.
The article “PostgreSQL 19: get_*_ddl functions” first appeared on the dbi Blog.
PostgreSQL 19: json format for “copy to”
PostgreSQL already has impressive support for working with data in json format. If you look at the jsonb data type and all the built-in functions and operators you can use, there is so much you can do with it by default. Starting with PostgreSQL 19, there is one more feature when it comes to working with data in json format.
“COPY” is already quite powerful and the fastest way to get data into and out of PostgreSQL (you may read some previous posts about copy here, here, and here).
As usual, let’s start with a simple table:
postgres=# create table t ( a int primary key, b text );
CREATE TABLE
postgres=# insert into t select i, md5(i::text) from generate_series(1,1000000) i;
INSERT 0 1000000
To get that data out in text format you might simply do this:
postgres=# copy t to '/var/tmp/t';
COPY 1000000
postgres=# \! head /var/tmp/t
1 c4ca4238a0b923820dcc509a6f75849b
2 c81e728d9d4c2f636f067f89cc14862c
3 eccbc87e4b5ce2fe28308fd9f2a7baf3
4 a87ff679a2f3e71d9181a67b7542122c
5 e4da3b7fbbce2345d7772b0674a318d5
6 1679091c5a880faf6fb5e6087eb1b2dc
7 8f14e45fceea167a5a36dedd4bea2543
8 c9f0f895fb98ab9159f51fd0297e236d
9 45c48cce2e2d7fbdea1afc51c7c6ad26
10 d3d9446802a44259755d38e6d163e820
Starting with PostgreSQL 19 you can do the same in json format:
postgres=# copy t to '/var/tmp/t1' with (format json);
COPY 1000000
postgres=# \! head /var/tmp/t1
{"a":1,"b":"c4ca4238a0b923820dcc509a6f75849b"}
{"a":2,"b":"c81e728d9d4c2f636f067f89cc14862c"}
{"a":3,"b":"eccbc87e4b5ce2fe28308fd9f2a7baf3"}
{"a":4,"b":"a87ff679a2f3e71d9181a67b7542122c"}
{"a":5,"b":"e4da3b7fbbce2345d7772b0674a318d5"}
{"a":6,"b":"1679091c5a880faf6fb5e6087eb1b2dc"}
{"a":7,"b":"8f14e45fceea167a5a36dedd4bea2543"}
{"a":8,"b":"c9f0f895fb98ab9159f51fd0297e236d"}
{"a":9,"b":"45c48cce2e2d7fbdea1afc51c7c6ad26"}
{"a":10,"b":"d3d9446802a44259755d38e6d163e820"}
Specifying a query is also supported:
postgres=# copy (select a from t) to '/var/tmp/t1' with (format json);
COPY 1000000
postgres=# \! head /var/tmp/t1
{"a":1}
{"a":2}
{"a":3}
{"a":4}
{"a":5}
{"a":6}
{"a":7}
{"a":8}
{"a":9}
{"a":10}
As noted in the commit message there are some options which are not compatible with the json format:
- HEADER
- DEFAULT
- NULL
- DELIMITER
- FORCE QUOTE
- FORCE NOT NULL
- and FORCE NULL
Also not supported (currently) is “copy from”.
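Until “copy from” supports this format, a simple client-side workaround is to parse the exported file line by line, since the json format writes exactly one JSON object per line. A minimal Python sketch of that idea (the file name and sample rows mirror the example above; this is not a PostgreSQL API):

```python
import json

def read_copy_json(path):
    """Parse a file written by COPY ... TO ... WITH (FORMAT json):
    one JSON object per line, keyed by column name."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

# Recreate a small sample of the exported file from the example above
with open("t1.json", "w") as f:
    f.write('{"a":1,"b":"c4ca4238a0b923820dcc509a6f75849b"}\n')
    f.write('{"a":2,"b":"c81e728d9d4c2f636f067f89cc14862c"}\n')

rows = read_copy_json("t1.json")
# each dict in rows can then be fed back to an INSERT via any driver
```

Each parsed dictionary carries the column names as keys, so reconstructing INSERT statements (or a text-format COPY stream) from it is straightforward.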
The article “PostgreSQL 19: json format for ‘copy to’” first appeared on the dbi Blog.
OGG-30007 : How To Register Certificates In GoldenGate ?
After working on a GoldenGate deployment recently, I felt that the OGG-30007 error was worth writing about. It happens when registering certificates in GoldenGate, whether from the web UI or with the REST API, and is reported as follows:
Processing of the certificate PEM portion of the certificate bundle resulted in more than one (1) certificate objects.
Code: OGG-30007
Cause: The read and decode processing of the specified portion of the certificate bundle produced more than one (1) objects. More than one PEM encoded object is present in the data.
Action: Review the specified portion of the certificate bundle and correct as needed. Only a single PEM encoded object is expected.
This error occurs because GoldenGate expects exactly one PEM object per import, while a certificate chain file contains multiple certificates. In the web UI, a pop-up will alert you that something is wrong.
As mentioned, this typically happens when registering a certificate chain. For instance, you could face the issue when connecting two deployments secured with NGINX: when the Certificate Authority doesn’t sign your certificate directly with its root certificate but with an intermediate certificate, the server presents a chain that includes the intermediate, and the client (GoldenGate) must trust both the root and the intermediate. A chain file like the following will generate an OGG-30007 error if you try to add it to the truststore:
-----BEGIN CERTIFICATE-----
(intermediate)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(root)
-----END CERTIFICATE-----
And to make sure that your connections work, you should not only add the root certificate, but also the intermediate certificate. Because of the way GoldenGate stores these certificates, two separate entries must be created in the truststore. To ease monitoring and certificate management, you can name them, for example, rootCA_ogg_target and intermediateCA_ogg_target.
To avoid OGG-30007, ensure that each certificate is imported separately. In practice, this means extracting the root and intermediate certificates from the chain file and registering them as individual entries in the GoldenGate truststore. With this, you should have no problem connecting GoldenGate deployments!
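The extraction step can easily be scripted. A minimal Python sketch that splits a PEM bundle into one file per certificate (the chain contents here are placeholders, and the cert-&lt;n&gt;.pem naming is illustrative, not a GoldenGate convention):

```python
PEM_END = "-----END CERTIFICATE-----"

def split_pem(text):
    """Return a list of PEM blocks, one string per certificate."""
    certs, current = [], []
    for line in text.splitlines():
        if not line.strip():
            continue  # skip blank lines between blocks
        current.append(line)
        if line.strip() == PEM_END:
            certs.append("\n".join(current) + "\n")
            current = []
    return certs

# Example bundle with placeholder contents, as shown above
bundle = """-----BEGIN CERTIFICATE-----
(intermediate)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(root)
-----END CERTIFICATE-----
"""

# Write each certificate to its own file so it can be registered
# as an individual truststore entry
for i, cert in enumerate(split_pem(bundle), start=1):
    with open(f"cert-{i}.pem", "w") as f:
        f.write(cert)
```

On a real chain you can then check which certificate ended up in which file with, e.g., openssl x509 -in cert-1.pem -noout -subject, before registering each one separately.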
The article “OGG-30007 : How To Register Certificates In GoldenGate ?” first appeared on the dbi Blog.



