Yann Neuhaus
Customer experience – How to change IP address in a fully clustered environment
If you have a clustered environment already up and running, but you want or need to change all the IP addresses of your whole setup with minimal downtime (full downtime during the operation: about 3 minutes), you're in the right place. I will show you how we did it.
Context
A two-node clustered environment running SQL Server 2022 on Windows Server 2022. One node hosts a standalone instance, as well as an Always On instance paired with the other node. During the changes, if you only have a standalone instance, that instance will be unavailable.
Here’s the step-by-step procedure for modifying the IP addresses of a complete clustered environment:
1) Change the IP of the two virtual machines
Change the network parameters such as VLAN settings and MAC address (on vSphere).
Then check the IP on the network card: access the network interface card on each node and make the change.
2) Change the IP of the cluster network card on each VM
Modify it on the network card and change the DNS settings if necessary.
3) Change the IP of the cluster (in Windows Failover Cluster Manager)
In the Failover Cluster Manager pane, select your cluster and expand Cluster Core Resources.
Right-click the cluster and select Properties > IP Address.
Change the IP address of the failover cluster using the Edit option and click OK.
Click Apply.
4) Change the IP of the SQL Server instance
If you have a specific IP for your instance, add the new IP on the network card.
Otherwise, you can simply change the IP in SQL Server Configuration Manager.
Restart the instance.
Verify connections after the change.
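To verify the change from the SQL Server side, a quick sanity check with a standard DMV shows which local address the instance accepted your connection on (a minimal sketch, assuming a working TCP connection):

```sql
-- On which local IP and port did this connection arrive?
SELECT local_net_address, local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```

If `local_net_address` shows the new IP, the instance is listening where you expect.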
5) Change the IP of the listener
In Failover Cluster Manager, go to the AG role, locate the server name resource in the bottom panel, right-click it, open Properties, and change the static IP address.
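The new listener IP can then be confirmed from T-SQL with the standard Availability Group DMVs (the names returned come from your own configuration):

```sql
-- Listener DNS name and its registered IP addresses after the change
SELECT l.dns_name, ip.ip_address, ip.state_desc
FROM sys.availability_group_listeners AS l
JOIN sys.availability_group_listener_ip_addresses AS ip
    ON l.listener_id = ip.listener_id;
```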
Problem we encountered
As this is a cluster, the two IP ranges did not have the same firewall rules. This initially blocked the hardware part of the system, as well as the AG witness, which was unable to check the state of the two nodes. The network team then applied the same rules to both ranges, and everything worked.
The article Customer experience – How to change IP address in a fully clustered environment first appeared on the dbi Blog.
Exascale Infrastructure : new flavor of Exadata Database Service
Exadata on Exascale Infrastructure is a new deployment option for Exadata Database Service. It comes in addition to the well known Exadata Cloud@Customer or Exadata on Dedicated Infrastructure options already available. It is based on new storage management technology decoupling database and Grid Infrastructure clusters from the Exadata underlying storage servers by integrating the database kernel directly with the Exascale storage structures.
What is Exascale Infrastructure?
Simply put, Exascale is the next generation of Oracle Exadata Database Service. It combines a cloud storage approach, for flexibility and hyper-elasticity, with the performance of Exadata infrastructure. It introduces a loosely-coupled, shared and multitenant architecture where Database and Grid Infrastructure clusters are decoupled from the underlying Exadata storage servers, which become a pool of shared storage resources available to multiple Grid Infrastructure clusters and databases.
Strict data isolation provides secure storage sharing while storage pooling enables flexible and dynamic provisioning combined with better storage space and processing utilization.
Advanced snapshot and cloning features, leveraging Redirect-On-Write technology instead of Copy-On-Write, enable space-efficient thin clones from any read/write database or pluggable database. Read-only test masters are therefore a thing of the past. These features alone make Exascale a game-changer for database refreshes, CI/CD workflows and pipelines, and development environment provisioning, all done with single SQL commands and much faster than before.
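As an illustration (hypothetical PDB names; the exact options depend on your configuration), such a thin clone boils down to a single statement:

```sql
-- Thin clone of PDB1 into PDB1_DEV; on Exascale the copy is
-- redirect-on-write, so no read-only test master is required.
CREATE PLUGGABLE DATABASE pdb1_dev FROM pdb1 SNAPSHOT COPY;
```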
Block storage services allow the creation of arbitrary-sized block volumes for use by numerous applications. Exascale block volumes are also used to store Exadata database server virtual machine images, enabling:
- creation of more virtual machines
- removal of local storage dependency inside the Exadata compute nodes
- seamless migration of virtual machines between different compute nodes
Finally, the following hardware and software considerations complete this brief presentation of Exascale:
- runs on 2-socket Oracle Exadata system hardware with RoCE Network Fabric (X8M-2 or later)
- Database 23ai release 23.5.0 or later is required for full-featured native Database file storage in Exascale
- Exascale block volumes support databases using older Database software releases back to Oracle Database 19c
- Exascale is built into Exadata System Software (since release 24.1)
The main point with Exascale architecture is cloud-scale, multi-tenant resource pooling, both for storage and compute.
Storage pooling
Exascale is based on pooling storage servers which provide services such as Storage Pools, Vaults and Volume Management. Vaults, which can be considered an equivalent of ASM diskgroups, are directly accessed by the database kernel.
File and extent management is done by Exadata System Software, thus freeing the compute layer from ASM processes and memory structures for database files access and extents management (with Database 23ai or 26ai; for Database 19c, ASM is still required). With storage management moving to storage servers, resource management becomes more flexible and more memory and CPU resources are available on compute nodes to process database tasks.
Exascale also provides redundancy, caching, file metadata management, snapshots and clones as well as security and data integrity features.
Of course, since Exascale is built on top of Exadata, you benefit from features like RDMA, RoCE, Smart Flash Cache, XRMEM, Smart Scan, Storage Indexes, Columnar Caching.
Compute pooling
On the compute side, we have database-optimized servers which run Database 23ai or newer and Grid Infrastructure cluster management software. The physical database servers host the VM Clusters and are managed by Oracle. Unlike Exadata on Dedicated Infrastructure, there is no need to provision an infrastructure before going ahead with VM Cluster creation and configuration: in Exascale, you only deal with VM Clusters.
VM file systems are centrally hosted by Oracle on RDMA-enabled block volumes in a system vault. VM images used by the VM Clusters are no longer hosted on local storage on the database servers. This enables the number of VMs running on the database servers to increase from 12 to 50.
Each VM Cluster accesses a Database Vault storing the database files with strict isolation from other VM Clusters database files.
Virtual Cloud Network
Client and backup connectivity is provided by Virtual Cloud Network (VCN) services.
This loosely-coupled, shared and multitenant architecture enables far greater flexibility than what was possible with ASM or even Exadata on Dedicated Infrastructure. Exascale’s hyper-elasticity lets you start with very small VM Clusters and then scale as the workloads increase, with Oracle managing the infrastructure automatically. You can start as small as 1 VM per cluster (up to 10), 8 eCPUs per VM (up to 200), 22GB of memory per VM, 220GB of file system storage per VM and 300GB of Vault storage per VM Cluster (up to 100TB). Memory is tightly coupled to the eCPU configuration, at 2.75GB per eCPU, and thus does not scale independently of the eCPU count.
For those new to eCPU, it is a standard billing metric based on the number of cores per hour elastically allocated from a pool of compute and storage servers. eCPUs are not tied to the make, model or clock speed of the underlying processor. By contrast, an OCPU is the equivalent of one physical core with hyper-threading enabled.
To summarize
To best understand what Exascale Infrastructure introduces, here are the terms which best describe this new flavor of Exadata Database Service:
- loosely-coupling of compute and storage cloud
- hyper-elasticity
- shared and multi-tenant service model
- ASM-less (for 23ai or newer)
- Exadata performance, reliability, availability and security at any scale
- CI/CD friendly
- pay-per-use model
Stay tuned for more on Exascale …
OGG-00423 when performing initial load with GoldenGate 23ai
A very quick blog post today to tackle the OGG-00423 error. There is not much information online on the matter, and the official Oracle documentation doesn’t help. If you ever stumble upon an OGG-00423 error when setting up GoldenGate replication, remember that it is most certainly related to the grants given to the GoldenGate user. An example of the error is shown below, after starting an initial load:
2025-10-27 09:52:40 ERROR OGG-00423 Could not find definition for pdb_source.app_source.t1.
This error happened when doing an initial load with the following configuration:
extract extini
useridalias cdb01
extfile aa
SOURCECATALOG pdb_source
table app_source.t1, SQLPredicate "As of SCN 3899696";
The error appears whether you use SOURCECATALOG or the full three-part TABLE name.
For Replicats, a common solution for OGG-00423 is to use the ASSUMETARGETDEFS parameter in the configuration file, but it is a Replicat-only parameter, and there is no equivalent for the initial load. In this case, the error was due to the user defined in the cdb01 alias lacking SELECT privileges on the t1 table:
[oracle@vmogg ~]$ sqlplus c##ggadmin
Enter password:
SQL> alter session set container=pdb_source;
Session altered.
SQL> select * from app_source.t1;
select * from app_source.t1
*
ERROR at line 1:
ORA-00942: table or view does not exist
After granting the correct SELECT privilege to the GoldenGate user, it works! Here is an example of how to grant this select. You might want to grant it differently, depending on the security requirements of your deployment:
sqlplus / as sysdba
ALTER SESSION SET CONTAINER=PDB_SOURCE;
GRANT SELECT ANY TABLE TO C##GGADMIN CONTAINER=CURRENT;
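If SELECT ANY TABLE is too broad for your security policy, a narrower alternative (reusing the table name from this example) is to grant object privileges only on the tables actually captured:

```sql
-- Least-privilege variant: grant SELECT only on the replicated table
ALTER SESSION SET CONTAINER = PDB_SOURCE;
GRANT SELECT ON app_source.t1 TO c##ggadmin;
```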
NB: When testing GoldenGate setups, a good way to debug OGG errors, if you do not use a DBA user for replication, is to temporarily grant DBA (or a DBA role) to the GoldenGate user. This way, you can quickly track down the root cause of your problem, at least if it is related to grants.
SQL Server 2025 release
Microsoft has announced the release of SQL Server 2025. The solution can be downloaded using the following link: https://www.microsoft.com/en-us/sql-server/
Among the new features available, we have:
- The introduction of the Standard Developer Edition, which offers the same features as the Standard Edition, but is free when used in a non-production environment, similar to the Enterprise Developer Edition (formerly Developer Edition).
- The removal of the Web Edition.
- The Express Edition can now host databases of up to 50 GB. In practice, it is quite rare to see our customers use the Express Edition. It generally serves as a toolbox or for very specific scenarios where the previous 10 GB limit was not an issue. Therefore, lifting this limit will not have a major impact due to the many other restrictions it still has.
- The introduction of AI capabilities within the database engine, including vector indexes, vector data types, and the corresponding functions.
- For development purposes, SQL Server introduces a native JSON data type and adds support for regular expressions (regex).
- Improvements to Availability Groups, such as the ability to offload full, differential, and transaction log backups to a secondary replica.
- The introduction of optimized locking and a new ZSTD algorithm for backup compression.
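As a quick, hedged illustration of the developer-facing additions (syntax based on the SQL Server 2025 announcements; verify against your build):

```sql
-- Native JSON type and regular expression support in SQL Server 2025
DECLARE @doc json = N'{"customer":"dbi services","priority":1}';

SELECT JSON_VALUE(@doc, '$.customer') AS customer_name;

SELECT CASE WHEN REGEXP_LIKE('SQL-2025', '^[A-Z]+-[0-9]+$')
            THEN 'match' ELSE 'no match' END AS regex_check;
```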
We also note the release of SQL Server Management Studio 22.
References:
https://learn.microsoft.com/en-us/sql/sql-server/what-s-new-in-sql-server-2025?view=sql-server-ver17
https://techcommunity.microsoft.com/blog/sqlserver/sql-server-2025-is-now-generally-available/4470570
https://learn.microsoft.com/en-us/ssms/release-notes-22
Thank you, Amine Haloui.
ORA-44001 when setting up GoldenGate privileges on a CDB
I was recently setting up GoldenGate for a client when I was struck by an ORA-44001 error. I definitely wasn’t the first one to come across this while playing with grants on GoldenGate users, but nowhere could I find the exact reason for the issue; not a single question or comment on the matter offered a solution.
The problem occurs when running the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE package described in the documentation. An example given by the documentation is the following:
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(GRANTEE => 'c##ggadmin', CONTAINER => 'ALL');
And the main complaint mentioned regarding this command was the following ORA-44001 error:
SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'c##ggadmin', container=>'ALL');
*
ERROR at line 1:
ORA-44001: invalid schema
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3652
ORA-06512: at "SYS.DBMS_ASSERT", line 410
ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 50
ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 3082
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3632
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3812
ORA-06512: at "SYS.DBMS_GOLDENGATE_AUTH", line 63
ORA-06512: at line 2
The solution is in fact quite simple: the grant fails when a PDB is not open in read-write mode. I decided to investigate a bit further, playing with the multitenant architecture. In this blog, I will use an Oracle 19c CDB with a single pluggable database named PDB1. On a CDB with tens of PDBs, you might well have some PDBs in read-only mode, whether to keep templates aside or as a temporary restriction on a specific PDB. Let’s try to replicate the error.
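Before granting with CONTAINER => 'ALL', a quick pre-check from the root container (a standard dynamic performance view) tells you whether any PDB could trigger the problem:

```sql
-- All PDBs should show READ WRITE before running GRANT_ADMIN_PRIVILEGE
SELECT name, open_mode FROM v$pdbs;
```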
First example: PDB in read-write, grant operation succeeds
If you first try to grant the admin privileges with the PDB in read-write mode, it succeeds:
SQL> alter pluggable database pdb1 open read write;
Pluggable database altered.
SQL> create user c##oggadmin identified by ogg;
User created.
SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'c##oggadmin', container=>'ALL');
PL/SQL procedure successfully completed.
Second example: PDB in read-only before the user creation, grant operation fails with ORA-44001
If you first put the PDB in read-only mode and then create the user, the user is never created inside that PDB, and you get ORA-44001 when granting the privileges.
SQL> drop user c##oggadmin;
User dropped.
SQL> alter pluggable database pdb1 close immediate;
Pluggable database altered.
SQL> alter pluggable database pdb1 open read only;
Pluggable database altered.
SQL> create user c##oggadmin identified by ogg;
User created.
SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'c##oggadmin', container=>'ALL');
*
ERROR at line 1:
ORA-44001: invalid schema
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3652
ORA-06512: at "SYS.DBMS_ASSERT", line 410
ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 50
ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 3082
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3632
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3812
ORA-06512: at "SYS.DBMS_GOLDENGATE_AUTH", line 63
ORA-06512: at line 2
Third example: PDB in read-only after the user creation, grant operation fails with ORA-16000
Where this gets tricky is the order in which you run the statements. If you create the user before putting the PDB in read-only mode, you get another error, because this time the user actually exists:
SQL> drop user c##oggadmin;
User dropped.
SQL> alter pluggable database pdb1 close immediate;
Pluggable database altered.
SQL> alter pluggable database pdb1 open read write;
Pluggable database altered.
SQL> create user c##oggadmin identified by ogg;
User created.
SQL> alter pluggable database pdb1 close immediate;
Pluggable database altered.
SQL> alter pluggable database pdb1 open read only;
Pluggable database altered.
SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'c##oggadmin', container=>'ALL');
*
ERROR at line 1:
ORA-16000: database or pluggable database open for read-only access
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3652
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 93
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 84
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 123
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3635
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3812
ORA-06512: at "SYS.DBMS_GOLDENGATE_AUTH", line 63
ORA-06512: at line 2
As often with Oracle, the error messages can be misleading. The third example clearly points to the issue, while the second one is tricky to debug (even though the error is completely valid).
Should I create a GoldenGate user at the CDB level?
Depending on your replication configuration, you might need to create a common user instead of one user per PDB. For instance, this is strictly required when setting up a downstream extract. In general, however, it might be a bad idea to create a common C##GGADMIN user and grant it privileges with CONTAINER => ALL, because you might not want such a privileged user to exist in all your PDBs.
Alfresco – Solr search result inconsistencies
We recently encountered an issue at a customer’s site: their Alfresco environment was behaving strangely. Sometimes the searches returned the expected results, and sometimes they did not.
The environment is composed of two Alfresco 7 nodes in a cluster and two load-balanced Solr 6.6 nodes (in active-active mode).
As a result, the customer was sometimes unable to retrieve a recently created document.
Investigation steps
Since we have load balancing in place, the first step is to confirm that everything is okay on both nodes.
- I checked that Alfresco is running as expected. Nothing out of the ordinary; the processes are there and there are no errors in the log files and everything is green in the admin console.
- Then, I checked the alfresco-global.properties on both nodes to ensure the configuration is the same. We never know. I also checked the way we connect to Solr and confirmed that the load-balanced URL is being used.
- At this point, it is almost certain that the problem is with Solr. We will start by checking the administration console. Because we have load balancing, we must connect to each node individually and cannot use the URL in alfresco-global.properties.
- At first glance, everything seems fine, but a closer inspection of the Core Admin panel reveals a difference of several thousand in “NumDocs” between the two nodes. These values may legitimately differ slightly, since they reflect Solr’s internal index files, but the discrepancy was far too high in my opinion.
- How can this assumption be verified? Go to any core and run a query listing all the files (cm:name:*). On the first node, the query returns an error. On the second node, I received an answer similar to the one below:

- Now, moving to the server where I got the error, the logs contain errors like:
2025-11-10 15:33:32.466 ERROR (searcherExecutor-137-thread-1-processing-x:alfresco-3) [ x:alfresco-3] o.a.s.c.SolrCore null:org.alfresco.service.cmr.dictionary.DictionaryException10100009 d_dictionary.model.err.parse.failure
at org.alfresco.repo.dictionary.M2Model.createModel(M2Model.java:113)
at org.alfresco.repo.dictionary.M2Model.createModel(M2Model.java:99)
at org.alfresco.solr.tracker.ModelTracker.loadPersistedModels(ModelTracker.java:181)
at org.alfresco.solr.tracker.ModelTracker.<init>(ModelTracker.java:142)
at org.alfresco.solr.lifecycle.SolrCoreLoadListener.createModelTracker(SolrCoreLoadListener.java:341)
at org.alfresco.solr.lifecycle.SolrCoreLoadListener.newSearcher(SolrCoreLoadListener.java:139)
at org.apache.solr.core.SolrCore.lambda$getSearcher$15(SolrCore.java:2249)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.jibx.runtime.JiBXException: Error accessing document
at org.jibx.runtime.impl.XMLPullReaderFactory$XMLPullReader.next(XMLPullReaderFactory.java:293)
at org.jibx.runtime.impl.UnmarshallingContext.toStart(UnmarshallingContext.java:446)
at org.jibx.runtime.impl.UnmarshallingContext.unmarshalElement(UnmarshallingContext.java:2750)
at org.jibx.runtime.impl.UnmarshallingContext.unmarshalDocument(UnmarshallingContext.java:2900)
at org.alfresco.repo.dictionary.M2Model.createModel(M2Model.java:108)
... 11 more
Caused by: java.io.EOFException: input contained no data
at org.xmlpull.mxp1.MXParser.fillBuf(MXParser.java:3003)
at org.xmlpull.mxp1.MXParser.more(MXParser.java:3046)
at org.xmlpull.mxp1.MXParser.parseProlog(MXParser.java:1410)
at org.xmlpull.mxp1.MXParser.nextImpl(MXParser.java:1395)
at org.xmlpull.mxp1.MXParser.next(MXParser.java:1093)
at org.jibx.runtime.impl.XMLPullReaderFactory$XMLPullReader.next(XMLPullReaderFactory.java:291)
... 15 more
- It looks like the problem is related to the model definitions. We need to check whether the models are still there in ../solr_data/models. The models are in place, but one of them is 0 KB.
- So we need to force-delete the empty file and restart Solr to force the model to be reimported.
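The cleanup can be scripted. The sketch below simulates it on a scratch directory; in real life, point MODELS_DIR at your actual ../solr_data/models path on the affected node and restart Solr afterwards:

```shell
# Simulation on a scratch directory; replace MODELS_DIR with the real
# Solr models path (e.g. .../solr_data/models) on the affected node.
MODELS_DIR=$(mktemp -d)
printf '<model/>' > "$MODELS_DIR/contentModel.xml"   # healthy model
touch "$MODELS_DIR/customModel.xml"                  # the 0 KB culprit

# locate and delete zero-byte model files; afterwards, restart Solr
# so the deleted model is reimported from Alfresco (restart not shown)
find "$MODELS_DIR" -type f -size 0 -print -delete
ls "$MODELS_DIR"
```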
After taking these actions, the model file was reimported and the errors disappeared from the logs. In the admin console, we can see NumDocs increasing again, and when we re-run the query, we get a result.
PostgreSQL 19: Logical replication of sequences
Logical replication in PostgreSQL has gained many features and performance improvements over the last releases. It was introduced in PostgreSQL 10 back in 2017, and PostgreSQL 9.6 (in 2016) introduced logical decoding, which is the basis for logical replication. Today, logical replication is really mature and, from my point of view, only two major features are missing: DDL replication and the replication of sequences. The latter becomes possible with the upcoming PostgreSQL 19 next year, and this is what this post is about.
Before we can see how this works, we need a logical replication setup. An easy way to set this up is to create a physical replica and then transform it into a logical replica using pg_createsubscriber:
postgres@:/home/postgres/ [pgdev] psql -c "create table t ( a int primary key generated always as identity, b text)"
CREATE TABLE
postgres@:/home/postgres/ [pgdev] psql -c "insert into t (b) values ('aaaa')"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] psql -c "insert into t (b) values ('bbbb')"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] pg_basebackup --pgdata=/var/tmp/dummy --write-recovery-conf --checkpoint=fast
postgres@:/home/postgres/ [pgdev] echo "port=8888" >> /var/tmp/dummy/postgresql.auto.conf
postgres@:/home/postgres/ [pgdev] pg_createsubscriber --all --pgdata=/var/tmp/dummy --subscriber-port=8888 --publisher-server="host=localhost,port=5432"
postgres@:/home/postgres/ [pgdev] pg_ctl --pgdata=/var/tmp/dummy start
2025-11-11 13:21:20.818 CET - 1 - 9669 - - @ - 0LOG: redirecting log output to logging collector process
2025-11-11 13:21:20.818 CET - 2 - 9669 - - @ - 0HINT: Future log output will appear in directory "pg_log".
2025-11-11 13:21:21.250 CET - 1 - 9684 - - @ - 0LOG: redirecting log output to logging collector process
2025-11-11 13:21:21.250 CET - 2 - 9684 - - @ - 0HINT: Future log output will appear in directory "pg_log".
Once this is done we have a logical replica and the data is synchronized:
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select * from t"
a | b
---+------
1 | aaaa
2 | bbbb
(2 rows)
A quick check shows that replication is ongoing:
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "insert into t (b) values('cccc');"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select * from t"
a | b
---+------
1 | aaaa
2 | bbbb
3 | cccc
(3 rows)
The “generated always as identity” clause we used above to create the table automatically created a sequence for us:
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "\x" -c "select * from pg_sequences;"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname | public
sequencename | t_a_seq
sequenceowner | postgres
data_type | integer
start_value | 1
min_value | 1
max_value | 2147483647
increment_by | 1
cycle | f
cache_size | 1
last_value | 3
Checking the same sequence on the replica clearly shows that the sequence is not synchronized (last_value is still a 2):
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "\x" -c "select * from pg_sequences;"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname | public
sequencename | t_a_seq
sequenceowner | postgres
data_type | integer
start_value | 1
min_value | 1
max_value | 2147483647
increment_by | 1
cycle | f
cache_size | 1
last_value | 2
The reason is that sequences are not synchronized automatically:
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "\x" -c "select * from pg_publication;"
Expanded display is on.
-[ RECORD 1 ]---+-------------------------------
oid | 16397
pubname | pg_createsubscriber_5_f58e9acd
pubowner | 10
puballtables | t
puballsequences | f
pubinsert | t
pubupdate | t
pubdelete | t
pubtruncate | t
pubviaroot | f
pubgencols | n
As there is currently no way to enable sequence synchronization for an existing publication, we can either drop and re-create it, or add an additional publication and subscription just for the sequences:
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "create publication pubseq for all sequences;"
CREATE PUBLICATION
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "\x" -c "select * from pg_publication;"
Expanded display is on.
-[ RECORD 1 ]---+-------------------------------
oid | 16397
pubname | pg_createsubscriber_5_f58e9acd
pubowner | 10
puballtables | t
puballsequences | f
pubinsert | t
pubupdate | t
pubdelete | t
pubtruncate | t
pubviaroot | f
pubgencols | n
-[ RECORD 2 ]---+-------------------------------
oid | 16398
pubname | pubseq
pubowner | 10
puballtables | f
puballsequences | t
pubinsert | t
pubupdate | t
pubdelete | t
pubtruncate | t
pubviaroot | f
pubgencols | n
The publication has sequence replication enabled and the subscription to consume this can be created like this:
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "create subscription subseq connection 'host=localhost port=5432' publication pubseq"
CREATE SUBSCRIPTION
Now the sequence is visible in pg_subscription_rel on the subscriber:
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select * from pg_subscription_rel;"
srsubid | srrelid | srsubstate | srsublsn
---------+---------+------------+------------
24589 | 16385 | r |
24590 | 16384 | r | 0/04004780 -- sequence
(2 rows)
State “r” means ready, so the sequence should have synchronized, and indeed:
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "\x" -c "select * from pg_sequences"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname | public
sequencename | t_a_seq
sequenceowner | postgres
data_type | integer
start_value | 1
min_value | 1
max_value | 2147483647
increment_by | 1
cycle | f
cache_size | 1
last_value | 3
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "\x" -c "select * from pg_sequences"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname | public
sequencename | t_a_seq
sequenceowner | postgres
data_type | integer
start_value | 1
min_value | 1
max_value | 2147483647
increment_by | 1
cycle | f
cache_size | 1
last_value | 3
Adding new rows to the table also increases the last_value of the sequence, so one might expect the sequence to be synchronized as well:
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "insert into t (b) values ('eeee')"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "insert into t (b) values ('ffff')"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "\x" -c "select * from pg_sequences"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname | public
sequencename | t_a_seq
sequenceowner | postgres
data_type | integer
start_value | 1
min_value | 1
max_value | 2147483647
increment_by | 1
cycle | f
cache_size | 1
last_value | 6
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "\x" -c "select * from pg_sequences"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname | public
sequencename | t_a_seq
sequenceowner | postgres
data_type | integer
start_value | 1
min_value | 1
max_value | 2147483647
increment_by | 1
cycle | f
cache_size | 1
last_value | 4
… but this does not happen automatically. To get the sequences synchronized, you need to refresh the subscription:
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "\x" -c "alter subscription subseq refresh sequences"
Expanded display is on.
ALTER SUBSCRIPTION
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "\x" -c "select * from pg_sequences"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname | public
sequencename | t_a_seq
sequenceowner | postgres
data_type | integer
start_value | 1
min_value | 1
max_value | 2147483647
increment_by | 1
cycle | f
cache_size | 1
last_value | 6
Great, this reduces the work to fix the sequences quite a bit and is really helpful. As usual, thanks to all involved.
PostgreSQL 19: The “WAIT FOR” command
When you use replication without synchronous replication, there is always a window during which data written on the primary is not yet visible on the replica. This is known as “replication lag” and can be monitored using the pg_stat_replication view. A recent commit to PostgreSQL 19 implements a way to wait for data to become visible on the replica without switching to synchronous replication, and this is what the “WAIT FOR” command is for.
Before we can see how that works we need a replica, because when you try to execute this command on a primary you’ll get this:
postgres=# select version();
version
---------------------------------------------------------------------------------------
PostgreSQL 19devel on x86_64-linux, compiled by gcc-15.1.1, 64-bit
(1 row)
postgres=# WAIT FOR LSN '0/306EE20';
ERROR: recovery is not in progress
HINT: Waiting for the replay LSN can only be executed during recovery.
postgres=#
So, let’s create a replica and start it up:
postgres@:/home/postgres/ [pgdev] mkdir /var/tmp/dummy
postgres@:/home/postgres/ [pgdev] pg_basebackup --pgdata=/var/tmp/dummy --write-recovery-conf --checkpoint=fast
postgres@:/home/postgres/ [pgdev] echo "port=8888" >> /var/tmp/dummy/postgresql.auto.conf
postgres@:/home/postgres/ [pgdev] chmod 700 /var/tmp/dummy
postgres@:/home/postgres/ [pgdev] pg_ctl --pgdata=/var/tmp/dummy start
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select pg_is_in_recovery()"
pg_is_in_recovery
-------------------
t
(1 row)
As nothing is happening on the primary right now, data on the primary and the replica is exactly the same:
postgres=# select usename,sent_lsn,write_lsn,flush_lsn,replay_lsn from pg_stat_replication;
usename | sent_lsn | write_lsn | flush_lsn | replay_lsn
----------+------------+------------+------------+------------
postgres | 0/03000060 | 0/03000060 | 0/03000060 | 0/03000060
To see how “WAIT FOR” behaves, we need to do a little cheating and pause WAL replay on the replica (we could also cut the network between the primary and the replica):
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select * from pg_wal_replay_pause();"
pg_wal_replay_pause
---------------------
(1 row)
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select * from pg_is_wal_replay_paused();"
pg_is_wal_replay_paused
-------------------------
t
(1 row)
On the primary, create a table and get the current LSN:
postgres@:/home/postgres/ [pgdev] psql -c "create table t(a int)"
CREATE TABLE
postgres@:/home/postgres/ [pgdev] psql -c "insert into t values(1)"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] psql -c "select pg_current_wal_insert_lsn();"
pg_current_wal_insert_lsn
---------------------------
0/03018CA8
(1 row)
As WAL replay on the replica is paused, the “WAIT FOR” command will now block:
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "WAIT FOR LSN '0/03018CA8'"
This is the intended behavior, as we want to make sure that we can see all the data up to this LSN. Once we resume WAL replay on the replica, the “WAIT FOR” command returns success, as all the data has reached the replica:
postgres@:/home/postgres/ [DEV] psql -p 8888 -c "select * from pg_wal_replay_resume();"
pg_wal_replay_resume
----------------------
(1 row)
… the other session will unblock:
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "WAIT FOR LSN '0/03018CA8'"
status
---------
success
(1 row)
So, starting with PostgreSQL 19 next year, applications have a way to make sure that a replica has replayed all data up to a specific LSN, by blocking until the data is there.
The article PostgreSQL 19: The “WAIT FOR” command first appeared on the dbi Blog.
Setting up TLS encryption and authentication in MongoDB
When securing a MongoDB deployment, protecting sensitive data is paramount. MongoDB supports encryption throughout the lifecycle of the data, with three primary types of data encryption:
- Encryption in transit
- Encryption at rest
- Encryption in use
Among these, encryption in transit is fundamental: it protects data as it moves between your application and the database. In MongoDB, this is achieved through TLS (Transport Layer Security), which ensures that communication remains private and secure. You have two options when it comes to using TLS for your database:
- Using TLS for encryption only.
- Using TLS both for encryption and authentication to the database.
We’ll first create a Certificate Authority. These certificates will be self-signed, which is fine for testing, but you shouldn’t use self-signed certificates in a production environment! On Linux, use the openssl library to generate the certificates.
openssl req -newkey rsa:4096 -nodes -x509 -days 365 -keyout ca.key -out ca.pem -subj "/C=CH/ST=ZH/L=Zurich/O=dbi/OU=MongoDBA/CN=vm.domain.com"
Here is a description of the important parameters of the command:
- -newkey rsa:4096: generates a new private key and a certificate request using RSA with a 4096-bit key size.
- -nodes: skips password encryption of the private key. Without it, OpenSSL would prompt you to set a passphrase.
- -x509: generates a self-signed certificate. x509 is supported by MongoDB.
- -days 365: validity of the certificate in days.
- -keyout ca.key: filename for the private key.
- -out ca.pem: filename for the certificate.
- -subj "...": provides the subject’s Distinguished Name (DN). If you don’t specify it, OpenSSL will prompt for each field.
Then, we’ll create the server certificate for the MongoDB instance. In the openssl-server.cnf file below, adjust the req_distinguished_name fields to match what you used when creating the Certificate Authority, and replace myVM with the name of your machine.
If you only have an IP and no DNS entry for your VM, use IP.1 instead of DNS.1 in the alt_names section.
cat > openssl-server.cnf <<EOF
[ req ]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[ req_distinguished_name ]
C = CH
ST = ZH
L = Zurich
O = dbi
OU = MongoDBA
CN = myVM
[ v3_req ]
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = myVM
EOF
Then, generate the certificate with these commands:
openssl req -newkey rsa:4096 -nodes -keyout mongodb-server.key -out mongodb-server.csr -config openssl-server.cnf
openssl x509 -req -in mongodb-server.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out mongodb-server.crt -days 365 -extensions v3_req -extfile openssl-server.cnf
cat mongodb-server.key mongodb-server.crt > mongodb-server.pem
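Before wiring these files into MongoDB, it’s worth sanity-checking that a server certificate actually chains back to its CA. The sketch below is self-contained (it builds a throwaway CA and server certificate in a temp directory; all subjects and file names are illustrative) and runs the same verification you would run against ca.pem and mongodb-server.crt:

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Throwaway CA (subject is illustrative)
openssl req -newkey rsa:2048 -nodes -x509 -days 1 \
  -keyout ca.key -out ca.pem -subj "/C=CH/O=demo/CN=demo-ca" 2>/dev/null

# Throwaway server key + CSR, then sign the CSR with the CA
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -subj "/C=CH/O=demo/CN=demo-server" 2>/dev/null
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
  -out server.crt -days 1 2>/dev/null

# The check that matters: the server certificate must verify against the CA
openssl verify -CAfile ca.pem server.crt
```

On your real files, the equivalent check is `openssl verify -CAfile ca.pem mongodb-server.crt`, which should print `mongodb-server.crt: OK`.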
Create a Client Certificate
Finally, we’ll create a client certificate. The process is the same, with a few tweaks:
- OU should be different from the one in the server certificate. It is not mandatory for the communication, but it will be for the authentication if you decide to enable it.
- CN should also be different.
- extendedKeyUsage should be set to clientAuth instead of serverAuth.
cat > openssl-client.cnf <<EOF
[ req ]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[ req_distinguished_name ]
C = CH
ST = ZH
L = Zurich
O = dbi
OU = MongoDBAClient
CN = userApp
[ v3_req ]
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
EOF
The creation of the certificate is the same.
openssl req -newkey rsa:4096 -nodes -keyout mongodb-client.key -out mongodb-client.csr -config openssl-client.cnf
openssl x509 -req -in mongodb-client.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out mongodb-client.crt -days 365 -extensions v3_req -extfile openssl-client.cnf
cat mongodb-client.key mongodb-client.crt > mongodb-client.pem
MongoDB Configuration Change
Make sure to set permissions correctly for your certificates.
chmod 600 ca.pem mongodb-server.pem mongodb-client.pem
chown mongod: ca.pem mongodb-server.pem mongodb-client.pem
Now, you can change your MongoDB configuration file to include the certificates. Simply add the net.tls part to your mongod.conf file.
net:
  bindIp: yourIP
  port: 27017
  tls:
    mode: requireTLS
    certificateKeyFile: /path/to/mongodb-server.pem
    CAFile: /path/to/ca.pem
You can now restart your MongoDB instance with systemctl restart mongod (or whatever you’re using), and then test the connection from your client. Of course, the port set in the net.port field of your configuration file shouldn’t be blocked by your firewall.
> mongosh --host myVM --port 27017 --tls --tlsCertificateKeyFile mongodb-client.pem --tlsCAFile ca.pem
Current Mongosh Log ID: 682c9641bbe4593252ee7c8c
Connecting to: mongodb://vmIP:27017/?directConnection=true&tls=true&tlsCertificateKeyFile=Fclient.pem&tlsCAFile=ca.pem&appName=mongosh+2.5.1
Using MongoDB: 8.0.9
Using Mongosh: 2.5.1
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
test>
You’re now connected to your MongoDB instance through TLS! And if a client doesn’t use the certificate, the requireTLS mode prevents the connection from being established and generates error messages like this in your logs:
{"t":{"$date":"2025-05-21T04:43:55.277+00:00"},"s":"I", "c":"EXECUTOR", "id":22988, "ctx":"conn52","msg":"Error receiving request from client. Ending connection from remote","attr":{"error":{"code":141,"codeName":"SSLHandshakeFailed","errmsg":"The server is configured to only allow SSL connections"},"remote":"IP:50100","connectionId":52}}
If you want to learn more about MongoDB logs, I wrote a blog on this topic: MongoDB Log Analysis : A Comprehensive Guide.
Setting up TLS authentication
Now that you’re connected, we will set up authentication so that you can connect to MongoDB as a specific user. Using the already established connection, create a user in the $external database. Each client certificate that you create can be mapped to one MongoDB user. Retrieve the username that you will use:
> openssl x509 -in mongodb-client.pem -inform PEM -subject -nameopt RFC2253 | grep subject
subject=CN=userApp,OU=MongoDBA,O=dbi,L=Zurich,ST=ZH,C=CH
And then create the user in the $external database, using the existing MongoDB connection :
test> db.getSiblingDB("$external").runCommand({
createUser: "CN=userApp,OU=MongoDBA,O=dbi,L=Zurich,ST=ZH,C=CH",
roles: [
{ role: "userAdminAnyDatabase", db: "admin" }
]
});
To check that everything works as intended, you can try to display collections in the admin database. For the moment, there is no error because you have all the rights to do it (no authorization is enforced).
test> use admin
switched to db admin
admin> show collections;
system.users
system.version
You can now edit the MongoDB configuration file, setting net.tls.allowConnectionsWithoutCertificates to false and the security.authorization flag to enabled. The mongod.conf file should look like this:
net:
  bindIp: X.X.X.X
  port: XXXXX
  tls:
    mode: requireTLS
    certificateKeyFile: /path/to/mongodb-server.pem
    CAFile: /path/to/ca.pem
    allowConnectionsWithoutCertificates: false
security:
  authorization: enabled
After restarting with systemctl restart mongod, you can now connect again. If you use the same command as before, you will log in without authentication and get the error below whenever you try to do anything:
MongoServerError[Unauthorized]: Command listCollections requires authentication
So you should now connect with this command:
mongosh --host vmIP --port 27017 --tls --tlsCertificateKeyFile mongodb-client.pem --tlsCAFile ca.pem --authenticationDatabase '$external' --authenticationMechanism MONGODB-X509
If you want to show the admin collections, you will now get an error, because your user only has the userAdminAnyDatabase role granted (this role was chosen during the user creation, see above).
admin> show collections
MongoServerError[Unauthorized]: not authorized on admin to execute command { listCollections: 1, filter: {}, cursor: {}, nameOnly: true, authorizedCollections: false, lsid: { id: UUID("9c48b4c4-7702-49ce-a97c-52763b2ad6b3") }, $db: "admin" }
But it’s fine, you can grant yourself more roles (readWriteAnyDatabase, for instance) and create new users if you want.
The communication between the client and the server is now fully secured. Congratulations!
Side note: if you ever encounter this error:
MongoServerError[BadValue]: Cannot create an x.509 user with a subjectname that would be recognized as an internal cluster member
… make sure to follow the RFC 2253 standard. For instance, you could get this error if one of the fields is too long. Also, as a reminder, the client certificate should have a different Distinguished Name (DN) than the server certificate (see the documentation for more information).
The article Setting up TLS encryption and authentication in MongoDB first appeared on the dbi Blog.
Securing an Existing Unsecure GoldenGate Installation
You might have an existing unsecure GoldenGate installation that you would like to secure, whether for security reasons or because you would like to dissociate the installation from its securing process. After searching everywhere in the Oracle documentation for how to proceed, I decided to try, investigate, and eventually even ask Oracle directly. Here is the answer.
For a TL;DR version of the answer, please go to the end of the blog, but in the meantime, here was my reasoning.
Setup differences between a secure and an unsecure GoldenGate installation
Installation differences
From an installation perspective, the difference between a secure and an unsecure installation is narrow. I talked earlier about graphic and silent GoldenGate installations, and for the silent installation, the following response file parameters are the only ones involved in this security aspect:
# SECTION C - SECURITY MANAGER
SECURITY_ENABLED=false
# SECTION H - SECURITY
TLS_1_2_ENABLED=false
TLS_1_3_ENABLED=false
FIPS_ENABLED=false
SERVER_CERTIFICATE=
SERVER_CERTIFICATE_KEY_FILE=
SERVER_CA_CERTIFICATES_FILE=
CLIENT_CERTIFICATE=
CLIENT_CERTIFICATE_KEY_FILE=
CLIENT_CA_CERTIFICATES_FILE=
*_ENABLED parameters are just flags that should be set to true to secure the installation (at least for SECURITY_ENABLED and one TLS parameter), and then you need to provide the certificate files (client and server, three for each).
To summarize, there is not much you have to do to configure a secure GoldenGate setup. So it shouldn’t be that difficult to enable these security features after installation: one flag, and a few certificates.
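For illustration, the security-related part of a response file for a secured installation could look like the fragment below (the certificate paths are hypothetical placeholders; only SECURITY_ENABLED and at least one TLS flag need to be true):

```
# SECTION C - SECURITY MANAGER
SECURITY_ENABLED=true
# SECTION H - SECURITY
TLS_1_2_ENABLED=false
TLS_1_3_ENABLED=true
FIPS_ENABLED=false
SERVER_CERTIFICATE=/path/to/server.pem
SERVER_CERTIFICATE_KEY_FILE=/path/to/server.key
SERVER_CA_CERTIFICATES_FILE=/path/to/ca.pem
CLIENT_CERTIFICATE=/path/to/client.pem
CLIENT_CERTIFICATE_KEY_FILE=/path/to/client.key
CLIENT_CA_CERTIFICATES_FILE=/path/to/ca.pem
```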
Configuration differences
From a configuration perspective, there are not many differences either. Looking at the deploymentConfiguration.dat file for both secure and unsecure service managers, the only difference lies in the SecurityManager.config.securityDetails section. After stripping out what is identical, here are the differences:
# Secure installation
"securityDetails": {
"network": {
"common": {
"fipsEnabled": false,
},
"inbound": {
"authMode": "clientOptional_server",
"cipherSuites": [
"TLS_AES_256_GCM_SHA384",
"TLS_AES_128_GCM_SHA256",
"TLS_CHACHA20_POLY1305_SHA256"
],
"protocolVersion": "TLS_ALL"
},
"outbound": {
"authMode": "clientOptional_server",
}
}
},
# Unsecure installation
"securityDetails": {
"network": {
"common": {
"fipsEnabled": false,
},
"inbound": {
"authMode": "clientOptional_server",
"cipherSuites": "^((?!anon|RC4|NULL|3DES).)*$",
},
"outbound": {
"authMode": "client_server",
}
}
},
Basically, securityDetails.outbound.authMode is set to clientOptional_server on one side, and client_server on the other. The unsecure configuration also has a different securityDetails.inbound.cipherSuites parameter, and is missing the securityDetails.inbound.protocolVersion parameter.
But nothing in the configuration points to the wallet files, located in $OGG_ETC_HOME/ssl. So, how do we add them here?
When connecting to an unsecure GoldenGate service manager, you still have the ability to add and manage certificates from the UI, the same way you would do on a secure installation:
It is unfortunate, but just adding the certificates from the UI doesn’t make your installation secure. In fact, even after modifying the deploymentConfiguration.dat files, the last piece missing in the configuration, as described above, it doesn’t work. You will only end up with a broken installation, even when doing the same with all your deployments and restarting everything.
So, is it possible to secure an existing installation after the fact? Unfortunately, not at this point. This was confirmed earlier this week on the MOSC forums by Gopal Gaur, Senior Principal Software Engineer working on GoldenGate at Oracle:
You can not convert non secure deployment into secure deployment, you will need a new service manager that supports sever side SSL/TLS.
You can not convert non secure deployment into secure deployment at this stage, we have an opened enhancement for this.
To wrap up: the bad news is that it is not possible to secure an existing GoldenGate installation; the good news is that Oracle is apparently working on it. In the meantime, just re-install GoldenGate…
The article Securing an Existing Unsecure GoldenGate Installation first appeared on the dbi Blog.
GoldenGate 23ai Installation: Graphic and Silent Mode Comparison for Automation
Automating Oracle installations can sometimes be daunting, given the long list of parameters available. We’ll compare both graphic and silent installations of GoldenGate 23ai, focusing on building minimalist response files for automation purposes.
You can set up GoldenGate in two different ways:
- From the base archive, available on eDelivery (V1042871-01.zip for Linux x86-64, for instance)
- From the patched archive, updated quarterly and available on Oracle Support. At the time of writing, GoldenGate 23.9 is the latest version available (23.10, now called 23.26, was announced but not yet released). You can find MOS Document 3093376.1 on the subject, or 2193391.1 for general information on patching GoldenGate. Patch 38139663 is the fully patched installation (we will use this one in this blog), while patch 38139662 is the patch-only archive, applied on top of an existing GoldenGate installation.
For the purpose of this installation, we will use the oracle-database-preinstall rpm, even though we don’t need everything it brings. If you plan on installing GoldenGate on an existing Oracle database server, Oracle recommends using a separate user. We will keep oracle here.
[root@vmogg ~] dnf install -y oracle-database-preinstall-23ai
[root@vmogg ~] mkdir -p /u01/stage
[root@vmogg ~] chown oracle:oinstall -R /u01
With the oracle user created through the rpm installation, unzip the GoldenGate source file into a stage area:
[root@vmogg ~] su - oracle
[oracle@vmogg ~] cd /u01/stage
[oracle@vmogg stage] unzip -oq p38139663_23902507OGGRU_Linux-x86-64.zip -d /u01/stage/
Installing GoldenGate with the OUI (graphic installation)
Installing GoldenGate binaries
Running the graphic installation of GoldenGate is not any different from what you would do with an Oracle database installation.
After setting up X11 display (out of the scope of this blog), you should first define the OGG_HOME variable to the location of the GoldenGate installation and then run the installer:
[oracle@vmogg ~]$ export OGG_HOME=/u01/app/oracle/product/ogg23ai
[oracle@vmogg ~]$ /u01/stage/fbo_ggs_Linux_x64_Oracle_services_shiphome/Disk1/runInstaller
Bug: depending on the display options you have, you might have a color mismatch on the GoldenGate installation window, most of it appearing black (see this image). If this happens, run the following command before launching the installation: export _JAVA_OPTIONS="-Dsun.java2d.xrender=false"
Just click Next on the first step. Starting from GoldenGate 23ai, Classic Architecture was desupported, so you don’t have to worry anymore about which architecture to choose. The Microservices Architecture is the only choice now.
Fill in the software location for your installation of GoldenGate. This will match the OGG_HOME environment variable. If the variable is set prior to launching the runInstaller, the software location is filled automatically.
Step 3 is just a summary of the installation. You can save the response file at this stage and use it later to standardize your installations with the silent installation described below. Then, click on Install.
After a few seconds, the installation is complete, and you can exit the installer.
If it’s your first Oracle-related installation on this server, you might have to run the /u01/app/oraInventory/orainstRoot.sh script as root when prompted to do so.
[root@vmogg ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
The binary installation is complete.
Installing the Service Manager and the First Deployment
Once the binaries are installed, with the same oracle X11 terminal, run the oggca.sh script located in the $OGG_HOME/bin directory:
[oracle@vmogg ~]$ export OGG_HOME=/u01/app/oracle/product/ogg23ai
[oracle@vmogg ~]$ $OGG_HOME/bin/oggca.sh
On the first step (see below), you will have:
- Software Home, which contains the GoldenGate binaries, also called $OGG_HOME.
- Deployment Home, filled with the location of the service manager directory (and not the GoldenGate deployment, thank you Oracle for this one…).
- Port (default is 7809) of the service manager. This will be the main point of entry for the web UI.
- Register as a service/system daemon, if you want GoldenGate to be a service on your server.
- Integrate with XAG, for a GoldenGate RAC installation (out of the scope of this blog).
- Enable Security, with the associated certificates and key. You can leave this unchecked if you just want to test the GoldenGate installation process.
- We leave the rest unchecked.
Next, fill in the credentials for the service manager. Enabling Strong Password Policy will force you to enter a secure password.
In previous versions of GoldenGate, you could first set up the service manager and wait before configuring your first deployment. It is now mandatory to set up the first deployment:
- Deployment Name: ogg_test_01 for this installation. It is not just cosmetic: you will refer to this name for connections, in the adminclient and on the Web UI.
- Deployment Home: path to the deployment home. Logs, trail files and configuration will sit there.
- Ports: Four ports need to be filled here. I would recommend using the default ports for the first deployment (7810, 7811, 7812 and 7813), or ports following the service manager port (7809). For a second deployment, you can continue with the following ports (7814, 7815, 7816, 7817), or keep the same units digit (7820, 7821, 7822, 7823) for a better understanding of your GoldenGate infrastructure.
- Remote Metrics for the Deployment: out of the scope of this blog, not needed for a basic GoldenGate installation.
- Security: If you secured your service manager earlier in the previous step, you should secure your deployment here, providing keys and certificates.
- Replication Options: TNS_ADMIN could already be filled; otherwise, just specify its path. GoldenGate will look for TNS entries here. You should also fill in the replication schema name.
Later, fill in the credentials for the deployment. They can be different from the service manager credentials, or you can check the box to keep the same credentials.
In the summary screen, review your configuration, and save the response file for later if required. Click on Finish to start the installation.
The installation should take a few seconds:
That’s it, you have successfully installed GoldenGate 23ai! See the Web UI section below for how to connect to your GoldenGate environment.
Installing GoldenGate with the CLI (silent installation)
Installing GoldenGate binaries
To perform the GoldenGate installation in silent mode, you can either use a response file containing the arguments needed for the installation, or pass these arguments on the command line.
For the GoldenGate binaries installation, create an oggcore_23ai.rsp file, changing SOFTWARE_LOCATION, INVENTORY_LOCATION and UNIX_GROUP_NAME as needed:
[oracle@vmogg ~]$ cat oggcore_23ai.rsp
oracle.install.responseFileVersion=/oracle/install/rspfmt_ogginstall_response_schema_v23_1_0
INSTALL_OPTION=ORA23ai
SOFTWARE_LOCATION=/u01/app/oracle/product/ogg23ai
INVENTORY_LOCATION=/u01/app/oraInventory
UNIX_GROUP_NAME=oinstall
Then, run the installer with the -silent and -responseFile options:
[oracle@vmogg ~]$ /u01/stage/fbo_ggs_Linux_x64_Oracle_services_shiphome/Disk1/runInstaller -silent -responseFile /home/oracle/oggcore_23ai.rsp
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB. Actual 17094 MB Passed
Checking swap space: must be greater than 150 MB. Actual 4095 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2025-09-24_09-53-23AM. Please wait ...
You can find the log of this install session at:
/u01/app/oraInventory/logs/installActions2025-09-24_09-53-23AM.log
As a root user, run the following script(s):
1. /u01/app/oraInventory/orainstRoot.sh
Successfully Setup Software.
The installation of Oracle GoldenGate Services was successful.
Please check '/u01/app/oraInventory/logs/silentInstall2025-09-24_09-53-23AM.log' for more details.
Same thing as with the graphic installation: if it’s the first time you run an Oracle-related installation on this server, run the orainstRoot.sh script as root:
[root@vmogg ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
Installing the Service Manager and the First Deployment
Once the binaries are installed, run the oggca.sh script with the response file corresponding to the service manager and deployment that you want to create. The content of the response file oggca.rsp should be adapted to your needs, but I integrated a full example in the appendix below.
[oracle@vmogg ~]$ /u01/app/oracle/product/ogg23ai/bin/oggca.sh -silent -responseFile /home/oracle/oggca.rsp
As part of the process of registering Service Manager as a system daemon, the following steps will be performed:
1- The deployment will be stopped before registering the Service Manager as a daemon.
2- A new popup window will show the details of the script used to register the Service Manager as a daemon.
3- After the register script is executed, the Service Manager daemon will be started in the background and the deployment will be automatically restarted.
Click "OK" to continue.
In order to register Service Manager as a system service/daemon, as a "root" user, execute the following script:
(1). /u01/app/oracle/product/ogg23ai_SM/bin/registerServiceManager.sh
To execute the configuration scripts:
1.Open a terminal window
2.Login as "root"
3.Run the script
Successfully Setup Software.
If you asked for the creation of a service, then run the following command as root:
[root@vmogg ~]# /u01/app/oracle/product/ogg23ai_SM/bin/registerServiceManager.sh
Copyright (c) 2017, 2024, Oracle and/or its affiliates. All rights reserved.
----------------------------------------------------
Oracle GoldenGate Install As Service Script
----------------------------------------------------
OGG_HOME=/u01/app/oracle/product/ogg23ai
OGG_CONF_HOME=/u01/app/oracle/product/ogg23ai_SM/etc/conf
OGG_VAR_HOME=/u01/app/oracle/product/ogg23ai_SM/var
OGG_USER=oracle
Running OracleGoldenGateInstall.sh...
Created symlink /etc/systemd/system/multi-user.target.wants/OracleGoldenGate.service → /etc/systemd/system/OracleGoldenGate.service.
Successfully Setup Software.
Warning: if you plan on automating a GoldenGate installation and setup, make sure the response files can only be read by the oracle user, and clean the response files afterwards if they had to contain passwords in plain text.
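One way to apply that advice in a pipeline is sketched below on a throwaway file (in a real pipeline, point rsp at your actual response file; the sed expression only blanks the two password keys used in the examples here):

```shell
# Demo on a throwaway file; in a real pipeline, point rsp at your response file.
rsp=$(mktemp)
printf '%s\n' 'ADMINISTRATOR_PASSWORD=ogg_password' \
              'DEPLOYMENT_ADMINISTRATOR_PASSWORD=ogg_password' \
              'OGG_SCHEMA=OGGADMIN' > "$rsp"

chmod 600 "$rsp"   # readable and writable by the owner only

# Once the installation is done, blank out the plain-text passwords
sed -i 's/^\(ADMINISTRATOR_PASSWORD\|DEPLOYMENT_ADMINISTRATOR_PASSWORD\)=.*/\1=/' "$rsp"
cat "$rsp"
```

The non-password parameters are left untouched, so the file can still document the installation.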
To add or remove a deployment on an existing service manager, you can do it graphically with oggca.sh, or in silent mode. For silent mode, I give below a minimal response file example to add a new deployment (removing everything that is already configured, like the service manager properties). Of course, the deployment name, paths, and ports must differ from any existing deployment. And if your deployment is secured, you should fill in SECTION H - SECURITY the same way you did for the first installation, and specify SECURITY_ENABLED=true in SECTION C - SERVICE MANAGER.
oracle.install.responseFileVersion=/oracle/install/rspfmt_oggca_response_schema_v23_1_0
# SECTION A - GENERAL
CONFIGURATION_OPTION=ADD
DEPLOYMENT_NAME=ogg_test_02
# SECTION B - ADMINISTRATOR ACCOUNT
ADMINISTRATOR_USER=ogg
ADMINISTRATOR_PASSWORD=ogg_password
DEPLOYMENT_ADMINISTRATOR_USER=ogg
DEPLOYMENT_ADMINISTRATOR_PASSWORD=ogg_password
# SECTION C - SERVICE MANAGER
HOST_SERVICEMANAGER=your_host
PORT_SERVICEMANAGER=7809
SECURITY_ENABLED=false
STRONG_PWD_POLICY_ENABLED=false
# SECTION E - SOFTWARE HOME
OGG_SOFTWARE_HOME=/u01/app/oracle/product/ogg23ai
# SECTION F - DEPLOYMENT DIRECTORIES
OGG_DEPLOYMENT_HOME=/u01/app/oracle/product/ogg_test_02
OGG_ETC_HOME=/u01/app/oracle/product/ogg_test_02/etc
OGG_CONF_HOME=/u01/app/oracle/product/ogg_test_02/etc/conf
OGG_SSL_HOME=/u01/app/oracle/product/ogg_test_02/etc/ssl
OGG_VAR_HOME=/u01/app/oracle/product/ogg_test_02/var
OGG_DATA_HOME=/u01/app/oracle/product/ogg_test_02/var/lib/data
OGG_ARCHIVE_HOME=/u01/app/oracle/product/ogg_test_02/var/lib/archive
# SECTION G - ENVIRONMENT VARIABLES
ENV_LD_LIBRARY_PATH=${OGG_HOME}/lib/instantclient:${OGG_HOME}/lib
ENV_TNS_ADMIN=/u01/app/oracle/network/admin
ENV_STREAMS_POOL_SIZE=
ENV_USER_VARS=
# SECTION H - SECURITY
TLS_1_2_ENABLED=false
TLS_1_3_ENABLED=false
FIPS_ENABLED=false
SERVER_CERTIFICATE=
SERVER_CERTIFICATE_KEY_FILE=
SERVER_CA_CERTIFICATES_FILE=
CLIENT_CERTIFICATE=
CLIENT_CERTIFICATE_KEY_FILE=
CLIENT_CA_CERTIFICATES_FILE=
# SECTION I - SERVICES
ADMINISTRATION_SERVER_ENABLED=true
PORT_ADMINSRVR=7820
DISTRIBUTION_SERVER_ENABLED=true
PORT_DISTSRVR=7821
NON_SECURE_DISTSRVR_CONNECTS_TO_SECURE_RCVRSRVR=false
RECEIVER_SERVER_ENABLED=true
PORT_RCVRSRVR=7822
METRICS_SERVER_ENABLED=true
METRICS_SERVER_IS_CRITICAL=false
PORT_PMSRVR=7823
PMSRVR_DATASTORE_TYPE=BDB
PMSRVR_DATASTORE_HOME=
ENABLE_DEPLOYMENT_REMOTE_METRICS=false
DEPLOYMENT_REMOTE_METRICS_LISTENING_HOST=
DEPLOYMENT_REMOTE_METRICS_LISTENING_PORT=0
# SECTION J - REPLICATION OPTIONS
OGG_SCHEMA=OGGADMIN
Removing a deployment
The same goes for removing an existing deployment, where the minimal response file is even simpler: you just need the deployment name and the service manager information.
oracle.install.responseFileVersion=/oracle/install/rspfmt_oggca_response_schema_v23_1_0
# SECTION A - GENERAL
CONFIGURATION_OPTION=REMOVE
DEPLOYMENT_NAME=ogg_test_02
# SECTION B - ADMINISTRATOR ACCOUNT
ADMINISTRATOR_USER=ogg
ADMINISTRATOR_PASSWORD=ogg_password
DEPLOYMENT_ADMINISTRATOR_USER=ogg
DEPLOYMENT_ADMINISTRATOR_PASSWORD=ogg_password
# SECTION C - SERVICE MANAGER
HOST_SERVICEMANAGER=your_host
PORT_SERVICEMANAGER=7809
SECURITY_ENABLED=false
# SECTION H - SECURITY
TLS_1_2_ENABLED=false
TLS_1_3_ENABLED=false
FIPS_ENABLED=false
SERVER_CERTIFICATE=
SERVER_CERTIFICATE_KEY_FILE=
SERVER_CA_CERTIFICATES_FILE=
CLIENT_CERTIFICATE=
CLIENT_CERTIFICATE_KEY_FILE=
CLIENT_CA_CERTIFICATES_FILE=
# SECTION K - REMOVE DEPLOYMENT OPTIONS
REMOVE_DEPLOYMENT_FROM_DISK=true
Accessing the Web UI
Whether you installed GoldenGate graphically or silently, you will now be able to connect to the Web UI. Except for the design, it is pretty much the same thing as the Microservices Architecture of GoldenGate 19c and 21c. Connect to the hostname of your GoldenGate installation, on the service manager port: http://hostname:port, or https://hostname:port if the installation is secured.
With the example of this blog:
- 7809 — Service manager
- 7810 — Administration service of the first deployment you created, for managing extracts and replicats
- 7811 — Distribution service, to send trail files to other GoldenGate deployments
- 7812 — Receiver service, to receive trail files
- 7813 — Performance metrics service, for extraction and replication analysis
Log in either with the service manager credentials, or with the deployment credentials:
Service Manager Web UI
The service manager web UI allows you to stop and start deployment services, and to manage users and certificates. You will hardly ever use it, even less so since deployment creation and removal are done through oggca.sh anyway. If you have many deployments, it can still be useful for logging in to those deployments quickly.
Deployment Web UI
The deployment web UI, however, is used throughout the whole lifecycle of your GoldenGate replications. This is where you manage extracts, replicats, distribution paths, and more.
NB for the newcomers: you don’t have to bookmark all the services of a deployment. Once logged in to the administration service of a deployment, you can just jump between services through the UI, by clicking on the services on the top bar.
You now have a full GoldenGate 23ai installation, and can start configuring your first replication!
Appendix: oggca.rsp example
Here is an example of a response file to create both the service manager and the first deployment of a GoldenGate installation. I included at the end the full file with Oracle annotations.
oracle.install.responseFileVersion=/oracle/install/rspfmt_oggca_response_schema_v23_1_0
# SECTION A - GENERAL
CONFIGURATION_OPTION=ADD
DEPLOYMENT_NAME=ogg_test_01
# SECTION B - ADMINISTRATOR ACCOUNT
ADMINISTRATOR_USER=ogg
ADMINISTRATOR_PASSWORD=ogg_password
DEPLOYMENT_ADMINISTRATOR_USER=ogg
DEPLOYMENT_ADMINISTRATOR_PASSWORD=ogg_password
# SECTION C - SERVICE MANAGER
SERVICEMANAGER_DEPLOYMENT_HOME=/u01/app/oracle/product/ogg23ai_SM
SERVICEMANAGER_ETC_HOME=/u01/app/oracle/product/ogg23ai_SM/etc
SERVICEMANAGER_CONF_HOME=/u01/app/oracle/product/ogg23ai_SM/etc/conf
SERVICEMANAGER_SSL_HOME=/u01/app/oracle/product/ogg23ai_SM/etc/ssl
SERVICEMANAGER_VAR_HOME=/u01/app/oracle/product/ogg23ai_SM/var
SERVICEMANAGER_DATA_HOME=/u01/app/oracle/product/ogg23ai_SM/var/lib/data
SERVICEMANAGER_ARCHIVE_HOME=/u01/app/oracle/product/ogg23ai_SM/var/lib/archive
HOST_SERVICEMANAGER=your_host
PORT_SERVICEMANAGER=7809
SECURITY_ENABLED=false
STRONG_PWD_POLICY_ENABLED=false
CREATE_NEW_SERVICEMANAGER=true
REGISTER_SERVICEMANAGER_AS_A_SERVICE=true
INTEGRATE_SERVICEMANAGER_WITH_XAG=false
EXISTING_SERVICEMANAGER_IS_XAG_ENABLED=false
ENABLE_SERVICE_MANAGER_REMOTE_METRICS=false
SERVICE_MANAGER_REMOTE_METRICS_LISTENING_HOST=
SERVICE_MANAGER_REMOTE_METRICS_LISTENING_PORT=0
PLUGIN_SERVICE_ENABLED=false
# SECTION D - CONFIGURATION SERVICE
CONFIGURATION_SERVICE_ENABLED=false
CONFIGURATION_SERVICE_BACKEND_TYPE=FILESYSTEM
CONFIGURATION_SERVICE_BACKEND_CONNECTION_STRING=
CONFIGURATION_SERVICE_BACKEND_USERNAME=
CONFIGURATION_SERVICE_BACKEND_PASSWORD=
CONFIGURATION_SERVICE_BACKEND_TABLE_NAME=
# SECTION E - SOFTWARE HOME
OGG_SOFTWARE_HOME=/u01/app/oracle/product/ogg23ai
# SECTION F - DEPLOYMENT DIRECTORIES
OGG_DEPLOYMENT_HOME=/u01/app/oracle/product/ogg_test_01
OGG_ETC_HOME=/u01/app/oracle/product/ogg_test_01/etc
OGG_CONF_HOME=/u01/app/oracle/product/ogg_test_01/etc/conf
OGG_SSL_HOME=/u01/app/oracle/product/ogg_test_01/etc/ssl
OGG_VAR_HOME=/u01/app/oracle/product/ogg_test_01/var
OGG_DATA_HOME=/u01/app/oracle/product/ogg_test_01/var/lib/data
OGG_ARCHIVE_HOME=/u01/app/oracle/product/ogg_test_01/var/lib/archive
# SECTION G - ENVIRONMENT VARIABLES
ENV_LD_LIBRARY_PATH=${OGG_HOME}/lib/instantclient:${OGG_HOME}/lib
ENV_TNS_ADMIN=/u01/app/oracle/network/admin
ENV_STREAMS_POOL_SIZE=
ENV_USER_VARS=
# SECTION H - SECURITY
TLS_1_2_ENABLED=false
TLS_1_3_ENABLED=true
FIPS_ENABLED=false
SERVER_CERTIFICATE=
SERVER_CERTIFICATE_KEY_FILE=
SERVER_CA_CERTIFICATES_FILE=
CLIENT_CERTIFICATE=
CLIENT_CERTIFICATE_KEY_FILE=
CLIENT_CA_CERTIFICATES_FILE=
# SECTION I - SERVICES
ADMINISTRATION_SERVER_ENABLED=true
PORT_ADMINSRVR=7810
DISTRIBUTION_SERVER_ENABLED=true
PORT_DISTSRVR=7811
NON_SECURE_DISTSRVR_CONNECTS_TO_SECURE_RCVRSRVR=false
RECEIVER_SERVER_ENABLED=true
PORT_RCVRSRVR=7812
METRICS_SERVER_ENABLED=true
METRICS_SERVER_IS_CRITICAL=false
PORT_PMSRVR=7813
PMSRVR_DATASTORE_TYPE=BDB
PMSRVR_DATASTORE_HOME=
ENABLE_DEPLOYMENT_REMOTE_METRICS=false
DEPLOYMENT_REMOTE_METRICS_LISTENING_HOST=
DEPLOYMENT_REMOTE_METRICS_LISTENING_PORT=0
# SECTION J - REPLICATION OPTIONS
OGG_SCHEMA=OGGADMIN
# SECTION K - REMOVE DEPLOYMENT OPTIONS
REMOVE_DEPLOYMENT_FROM_DISK=
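With such a response file in place, the deployment can be created in silent mode. Below is a small sketch that sanity-checks a few mandatory keys before invoking oggca.sh; the file path and the key list are illustrative assumptions, and the oggca.sh call itself is shown as a comment:

```shell
#!/bin/bash
# Hypothetical response file location used for this sketch.
rsp=/tmp/oggca.rsp
cat > "$rsp" <<'EOF'
CONFIGURATION_OPTION=ADD
DEPLOYMENT_NAME=ogg_test_01
ADMINISTRATOR_USER=ogg
EOF

# Fail early if a mandatory key is missing (illustrative subset of keys).
for key in CONFIGURATION_OPTION DEPLOYMENT_NAME ADMINISTRATOR_USER; do
    grep -q "^${key}=" "$rsp" || { echo "missing mandatory key: $key"; exit 1; }
done
echo "response file looks complete"

# The silent-mode creation would then be run as the oracle user:
#   $OGG_HOME/bin/oggca.sh -silent -responseFile "$rsp"
```

A check like this is cheap insurance before an unattended run, since oggca.sh in silent mode will otherwise fail partway through with a less obvious error.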
Full file:
################################################################################
## Copyright(c) Oracle Corporation 2016, 2024. All rights reserved. ##
## ##
## Specify values for the variables listed below to customize your ##
## installation. ##
## ##
## Each variable is associated with a comment. The comments can help to ##
## populate the variables with the appropriate values. ##
## ##
## IMPORTANT NOTE: This file should be secured to have read permission only ##
## by the Oracle user or an administrator who owns this configuration to ##
## protect any sensitive input values. ##
## ##
################################################################################
#-------------------------------------------------------------------------------
# Do not change the following system generated value.
#-------------------------------------------------------------------------------
oracle.install.responseFileVersion=/oracle/install/rspfmt_oggca_response_schema_v23_1_0
################################################################################
## ##
## Oracle GoldenGate deployment configuration options and details ##
## ##
################################################################################
################################################################################
## ##
## Instructions to fill out this response file ##
## ------------------------------------------- ##
## Fill out section A, B, and C for general deployment information ##
## Additionally: ##
## Fill out sections D, E, F, G, H, I, and J for adding a deployment ##
## Fill out section K for removing a deployment ##
## ##
################################################################################
################################################################################
# #
# SECTION A - GENERAL #
# #
################################################################################
#-------------------------------------------------------------------------------
# Specify the configuration option.
# Specify:
# - ADD : for adding a new GoldenGate deployment.
# - REMOVE : for removing an existing GoldenGate deployment.
#-------------------------------------------------------------------------------
CONFIGURATION_OPTION=ADD
#-------------------------------------------------------------------------------
# Specify the name for the new or existing deployment.
#-------------------------------------------------------------------------------
DEPLOYMENT_NAME=ogg_test_01
################################################################################
# #
# SECTION B - ADMINISTRATOR ACCOUNT #
# #
# * If creating a new Service Manager, set the Administrator Account username #
# and password. #
# #
# * If reusing an existing Service Manager: #
# * Enter the credentials for the Administrator Account in #
# the existing Service Manager. #
# #
################################################################################
#-------------------------------------------------------------------------------
# Specify the administrator account username for the Service Manager.
#-------------------------------------------------------------------------------
ADMINISTRATOR_USER=ogg
#-------------------------------------------------------------------------------
# Specify the administrator account password for the Service Manager.
#-------------------------------------------------------------------------------
ADMINISTRATOR_PASSWORD=ogg_password
#-------------------------------------------------------------------------------
# Optionally, specify a different administrator account username for the deployment,
# or leave blank to use the same Service Manager administrator credentials.
#-------------------------------------------------------------------------------
DEPLOYMENT_ADMINISTRATOR_USER=ogg
#-------------------------------------------------------------------------------
# If creating a different administrator account username for the deployment,
# specify the password for it.
#-------------------------------------------------------------------------------
DEPLOYMENT_ADMINISTRATOR_PASSWORD=ogg_password
################################################################################
# #
# SECTION C - SERVICE MANAGER #
# #
################################################################################
#-------------------------------------------------------------------------------
# Specify the location for the Service Manager deployment.
# This is only needed if the Service Manager deployment doesn't exist already.
#-------------------------------------------------------------------------------
SERVICEMANAGER_DEPLOYMENT_HOME=/u01/app/oracle/product/ogg23ai_SM
#-------------------------------------------------------------------------------
# Optionally, specify a custom location for the Service Manager deployment ETC_HOME.
#-------------------------------------------------------------------------------
SERVICEMANAGER_ETC_HOME=/u01/app/oracle/product/ogg23ai_SM/etc
#-------------------------------------------------------------------------------
# Optionally, specify a custom location for the Service Manager deployment CONF_HOME.
#-------------------------------------------------------------------------------
SERVICEMANAGER_CONF_HOME=/u01/app/oracle/product/ogg23ai_SM/etc/conf
#-------------------------------------------------------------------------------
# Optionally, specify a custom location for the Service Manager deployment SSL_HOME.
#-------------------------------------------------------------------------------
SERVICEMANAGER_SSL_HOME=/u01/app/oracle/product/ogg23ai_SM/etc/ssl
#-------------------------------------------------------------------------------
# Optionally, specify a custom location for the Service Manager deployment VAR_HOME.
#-------------------------------------------------------------------------------
SERVICEMANAGER_VAR_HOME=/u01/app/oracle/product/ogg23ai_SM/var
#-------------------------------------------------------------------------------
# Optionally, specify a custom location for the Service Manager deployment DATA_HOME.
#-------------------------------------------------------------------------------
SERVICEMANAGER_DATA_HOME=/u01/app/oracle/product/ogg23ai_SM/var/lib/data
#-------------------------------------------------------------------------------
# Optionally, specify a custom location for the Service Manager deployment ARCHIVE_HOME.
#-------------------------------------------------------------------------------
SERVICEMANAGER_ARCHIVE_HOME=/u01/app/oracle/product/ogg23ai_SM/var/lib/archive
#-------------------------------------------------------------------------------
# Specify the host for the Service Manager.
#-------------------------------------------------------------------------------
HOST_SERVICEMANAGER=your_host
#-------------------------------------------------------------------------------
# Specify the port for the Service Manager.
#-------------------------------------------------------------------------------
PORT_SERVICEMANAGER=7809
#-------------------------------------------------------------------------------
# Specify if SSL / TLS is or will be enabled for the deployment.
# Specify true if SSL / TLS is or will be enabled, false otherwise.
#-------------------------------------------------------------------------------
SECURITY_ENABLED=false
#-------------------------------------------------------------------------------
# Specify if the deployment should enforce a strong password policy.
# Specify true to enable strong password policy management.
#-------------------------------------------------------------------------------
STRONG_PWD_POLICY_ENABLED=false
#-------------------------------------------------------------------------------
# Specify if a new Service Manager should be created.
# Specify true if a new Service Manager should be created, false otherwise.
#
# This option is only needed when CONFIGURATION_OPTION is ADD.
#-------------------------------------------------------------------------------
CREATE_NEW_SERVICEMANAGER=true
#-------------------------------------------------------------------------------
# Specify if Service Manager should be registered as a service/daemon. This option is mutually exclusive with the 'INTEGRATE_SERVICEMANAGER_WITH_XAG' option.
# Specify true if Service Manager should be registered as a service, false otherwise.
#
# This option is only needed when CONFIGURATION_OPTION is ADD.
# This option does not apply to Windows platform.
#-------------------------------------------------------------------------------
REGISTER_SERVICEMANAGER_AS_A_SERVICE=true
#-------------------------------------------------------------------------------
# Specify if Service Manager should be integrated with XAG. This option is mutually exclusive with the 'REGISTER_SERVICEMANAGER_AS_A_SERVICE' option.
# Specify true if Service Manager should be integrated with XAG, false otherwise.
#
# This option is only needed when CONFIGURATION_OPTION is ADD.
# This option is only supported for Oracle databases.
#-------------------------------------------------------------------------------
INTEGRATE_SERVICEMANAGER_WITH_XAG=false
#-------------------------------------------------------------------------------
# If using an existing Service Manager, specify if it is integrated with XAG.
# Specify true if the existing Service Manager is integrated with XAG, false otherwise.
#
# This option is only needed when CONFIGURATION_OPTION is ADD.
# This option is only supported for Oracle databases.
#-------------------------------------------------------------------------------
EXISTING_SERVICEMANAGER_IS_XAG_ENABLED=false
#-------------------------------------------------------------------------------
# Specify if Remote Metrics using StatsD protocol will be enabled for the Service Manager
# Specify true if Remote Metrics for the Service Manager will be enabled, false otherwise
#-------------------------------------------------------------------------------
ENABLE_SERVICE_MANAGER_REMOTE_METRICS=false
#-------------------------------------------------------------------------------
# If Remote Metrics for the Service Manager will be enabled, specify the listening host
#-------------------------------------------------------------------------------
SERVICE_MANAGER_REMOTE_METRICS_LISTENING_HOST=
#-------------------------------------------------------------------------------
# If Remote Metrics for the Service Manager will be enabled, specify the listening port for that server
#-------------------------------------------------------------------------------
SERVICE_MANAGER_REMOTE_METRICS_LISTENING_PORT=0
#-------------------------------------------------------------------------------
# Specify if the Plugin Service for the Service Manager will be enabled.
# Specify true if the Plugin Service will be enabled, false otherwise.
#-------------------------------------------------------------------------------
PLUGIN_SERVICE_ENABLED=false
###############################################################################
# #
# SECTION D - CONFIGURATION SERVICE #
# #
###############################################################################
#-------------------------------------------------------------------------------
# Specify if the Configuration Service will be enabled.
# Specify true if the Configuration Service will be enabled, false otherwise.
#-------------------------------------------------------------------------------
CONFIGURATION_SERVICE_ENABLED=false
#-------------------------------------------------------------------------------
# Specify the Configuration Service backend type.
# Specify:
# - FILESYSTEM
# - ORACLE_DATABASE
#
# This is only needed if the Configuration Service will be enabled
#-------------------------------------------------------------------------------
CONFIGURATION_SERVICE_BACKEND_TYPE=FILESYSTEM
#-------------------------------------------------------------------------------
# Specify the Configuration Service connection string for the database backend
#
# This is only needed if:
# * The Configuration Service will be enabled
# * CONFIGURATION_SERVICE_BACKEND_TYPE is ORACLE_DATABASE
#-------------------------------------------------------------------------------
CONFIGURATION_SERVICE_BACKEND_CONNECTION_STRING=
#-------------------------------------------------------------------------------
# Specify the Configuration Service username for the database backend
#
# This is only needed if:
# * The Configuration Service will be enabled
# * CONFIGURATION_SERVICE_BACKEND_TYPE is ORACLE_DATABASE
#-------------------------------------------------------------------------------
CONFIGURATION_SERVICE_BACKEND_USERNAME=
#-------------------------------------------------------------------------------
# Specify the Configuration Service password for the database backend
#
# This is only needed if:
# * The Configuration Service will be enabled
# * CONFIGURATION_SERVICE_BACKEND_TYPE is ORACLE_DATABASE
#-------------------------------------------------------------------------------
CONFIGURATION_SERVICE_BACKEND_PASSWORD=
#-------------------------------------------------------------------------------
# Specify the Configuration Service table name for the database backend
#
# This is only needed if:
# * The Configuration Service will be enabled
# * CONFIGURATION_SERVICE_BACKEND_TYPE is ORACLE_DATABASE
#-------------------------------------------------------------------------------
CONFIGURATION_SERVICE_BACKEND_TABLE_NAME=
###############################################################################
# #
# SECTION E - SOFTWARE HOME #
# #
###############################################################################
#-------------------------------------------------------------------------------
# Specify the existing OGG software home location.
#-------------------------------------------------------------------------------
OGG_SOFTWARE_HOME=/u01/app/oracle/product/OGG_23.9.0.25.07
###############################################################################
# #
# SECTION F - DEPLOYMENT DIRECTORIES #
# #
###############################################################################
#-------------------------------------------------------------------------------
# Specify the location of the new or existing OGG deployment.
#-------------------------------------------------------------------------------
OGG_DEPLOYMENT_HOME=/u01/app/oracle/product/ogg_test_01
#-------------------------------------------------------------------------------
# Specify the location for OGG_ETC_HOME.
#-------------------------------------------------------------------------------
OGG_ETC_HOME=/u01/app/oracle/product/ogg_test_01/etc
#-------------------------------------------------------------------------------
# Specify the location for OGG_CONF_HOME.
#-------------------------------------------------------------------------------
OGG_CONF_HOME=/u01/app/oracle/product/ogg_test_01/etc/conf
#-------------------------------------------------------------------------------
# Specify the location for OGG_SSL_HOME.
#-------------------------------------------------------------------------------
OGG_SSL_HOME=/u01/app/oracle/product/ogg_test_01/etc/ssl
#-------------------------------------------------------------------------------
# Specify the location for OGG_VAR_HOME.
#-------------------------------------------------------------------------------
OGG_VAR_HOME=/u01/app/oracle/product/ogg_test_01/var
#-------------------------------------------------------------------------------
# Specify the location for OGG_DATA_HOME.
#-------------------------------------------------------------------------------
OGG_DATA_HOME=/u01/app/oracle/product/ogg_test_01/var/lib/data
#-------------------------------------------------------------------------------
# Specify the location for OGG_ARCHIVE_HOME.
#-------------------------------------------------------------------------------
OGG_ARCHIVE_HOME=/u01/app/oracle/product/ogg_test_01/var/lib/archive
###############################################################################
# #
# SECTION G - ENVIRONMENT VARIABLES #
# #
###############################################################################
#-------------------------------------------------------------------------------
# Specify the value for the LD_LIBRARY_PATH environment variable.
#-------------------------------------------------------------------------------
ENV_LD_LIBRARY_PATH=${OGG_HOME}/lib/instantclient:${OGG_HOME}/lib
#-------------------------------------------------------------------------------
# Specify the value for the TNS_ADMIN environment variable.
# This environment variable is only for Oracle Databases.
#-------------------------------------------------------------------------------
ENV_TNS_ADMIN=/u01/app/oracle/network/admin
#-------------------------------------------------------------------------------
# This option is only needed when Sharding will be enabled.
# Specify the value for the STREAMS_POOL_SIZE environment variable.
# This environment variable is only for Oracle Databases.
#-------------------------------------------------------------------------------
ENV_STREAMS_POOL_SIZE=
#-------------------------------------------------------------------------------
# Specify any additional environment variables to be set in the deployment.
#-------------------------------------------------------------------------------
ENV_USER_VARS=
###############################################################################
# #
# SECTION H - SECURITY #
# This section is only needed if Security will be enabled #
# #
###############################################################################
# ------------------------------------------------------------------------------
# If security will be enabled, specify if TLS v1.2 will be enabled.
# Specify true if TLS v1.2 will be enabled, false otherwise.
#-------------------------------------------------------------------------------
TLS_1_2_ENABLED=false
# ------------------------------------------------------------------------------
# If security will be enabled, specify if TLS v1.3 will be enabled.
# Specify true if TLS v1.3 will be enabled, false otherwise.
#-------------------------------------------------------------------------------
TLS_1_3_ENABLED=true
#-------------------------------------------------------------------------------
# Specify if FIPS will be enabled.
#-------------------------------------------------------------------------------
FIPS_ENABLED=false
#-------------------------------------------------------------------------------
# If SSL / TLS will be enabled, specify the server certificate
#-------------------------------------------------------------------------------
SERVER_CERTIFICATE=
#-------------------------------------------------------------------------------
# If importing a server certificate, specify the private key file in PKCS#8 format
# The private key file must not be encrypted
#-------------------------------------------------------------------------------
SERVER_CERTIFICATE_KEY_FILE=
#-------------------------------------------------------------------------------
# If importing a server certificate, optionally specify the CA certificates file
#-------------------------------------------------------------------------------
SERVER_CA_CERTIFICATES_FILE=
#-------------------------------------------------------------------------------
# If SSL / TLS will be enabled, optionally specify the client certificate.
#-------------------------------------------------------------------------------
CLIENT_CERTIFICATE=
#-------------------------------------------------------------------------------
# If importing a client certificate, specify the private key file in PKCS#8 format
# The private key file must not be encrypted
#-------------------------------------------------------------------------------
CLIENT_CERTIFICATE_KEY_FILE=
#-------------------------------------------------------------------------------
# If importing a client certificate, optionally specify the CA certificates file
#-------------------------------------------------------------------------------
CLIENT_CA_CERTIFICATES_FILE=
###############################################################################
# #
# SECTION I - SERVICES #
# #
###############################################################################
#-------------------------------------------------------------------------------
# Specify if the Administration server will be enabled.
# Specify true if the Administration server will be enabled, false otherwise.
#-------------------------------------------------------------------------------
ADMINISTRATION_SERVER_ENABLED=true
#-------------------------------------------------------------------------------
# Required only if the Administration server will be enabled.
# Specify the port for Administration Server.
#-------------------------------------------------------------------------------
PORT_ADMINSRVR=7810
#-------------------------------------------------------------------------------
# Specify if the Distribution server will be enabled.
# Specify true if the Distribution server will be enabled, false otherwise.
#-------------------------------------------------------------------------------
DISTRIBUTION_SERVER_ENABLED=true
#-------------------------------------------------------------------------------
# Required only if the Distribution server will be enabled.
# Specify the port for Distribution Server.
#-------------------------------------------------------------------------------
PORT_DISTSRVR=7811
#-------------------------------------------------------------------------------
# If security is disabled, specify if this non-secure deployment will be used
# to send trail data to a secure deployment.
#-------------------------------------------------------------------------------
NON_SECURE_DISTSRVR_CONNECTS_TO_SECURE_RCVRSRVR=false
#-------------------------------------------------------------------------------
# Specify if the Receiver server will be enabled.
# Specify true if the Receiver server will be enabled, false otherwise.
#-------------------------------------------------------------------------------
RECEIVER_SERVER_ENABLED=true
#-------------------------------------------------------------------------------
# Required only if the Receiver server will be enabled.
# Specify the port for Receiver Server.
#-------------------------------------------------------------------------------
PORT_RCVRSRVR=7812
#-------------------------------------------------------------------------------
# Specify if Performance Metrics server will be enabled.
# Specify true if Performance Metrics server will be enabled, false otherwise.
#-------------------------------------------------------------------------------
METRICS_SERVER_ENABLED=true
#-------------------------------------------------------------------------------
# Specify if Performance Metrics server is a critical service.
# Specify true if Performance Metrics server is a critical service, false otherwise.
#
# This is optional and only takes effect when Performance Metrics server will be enabled.
# Also, this option should only be set when the Service Manager is integrated with XAG.
# The default value is false.
#
# This option is only supported for Oracle databases.
#-------------------------------------------------------------------------------
METRICS_SERVER_IS_CRITICAL=false
#-------------------------------------------------------------------------------
# Specify the port for Performance Metrics server (TCP).
#
# This option is only needed when Performance Metrics server will be enabled.
#-------------------------------------------------------------------------------
PORT_PMSRVR=7813
#-------------------------------------------------------------------------------
# Specify the DataStore type for Performance Metrics server.
# Valid values are: BDB, LMDB
#
# This option is only needed when Performance Metrics server will be enabled.
#-------------------------------------------------------------------------------
PMSRVR_DATASTORE_TYPE=BDB
#-------------------------------------------------------------------------------
# Specify the DataStore home location for Performance Metrics server.
# This is optional and only takes effect when Performance Metrics server will be enabled.
#-------------------------------------------------------------------------------
PMSRVR_DATASTORE_HOME=
#-------------------------------------------------------------------------------
# Specify if Remote Metrics using StatsD protocol will be enabled for the Deployment
# Specify true if Remote Metrics for the deployment will be enabled, false otherwise
#-------------------------------------------------------------------------------
ENABLE_DEPLOYMENT_REMOTE_METRICS=false
#-------------------------------------------------------------------------------
# If Remote Metrics for the deployment will be enabled, specify the listening host
#-------------------------------------------------------------------------------
DEPLOYMENT_REMOTE_METRICS_LISTENING_HOST=
#-------------------------------------------------------------------------------
# If Remote Metrics for the deployment will be enabled, specify the listening port for that server
#-------------------------------------------------------------------------------
DEPLOYMENT_REMOTE_METRICS_LISTENING_PORT=0
###############################################################################
# #
# SECTION J - REPLICATION OPTIONS #
# #
###############################################################################
#-------------------------------------------------------------------------------
# Specify the value for the GoldenGate schema.
#-------------------------------------------------------------------------------
OGG_SCHEMA=OGGADMIN
###############################################################################
# #
# SECTION K - REMOVE DEPLOYMENT OPTIONS #
# #
###############################################################################
#-------------------------------------------------------------------------------
# Specify if the deployment files should be removed from disk.
# Specify true if the deployment files should be removed, false otherwise.
#-------------------------------------------------------------------------------
REMOVE_DEPLOYMENT_FROM_DISK=
The article “GoldenGate 23ai Installation: Graphic and Silent Mode Comparison for Automation” first appeared on the dbi Blog.
Dctm – Configure the memory of an IDS Target on Windows
Documentum Interactive Delivery Services (IDS) doesn’t seem to be very popular; at least in my experience, I haven’t seen it used in many environments. However, whenever I did, I always had to increase the IDS memory, as the default JVM sizing was never enough to process an actual production load.
Memory configuration on Linux
On Linux, this is very easy to do for both the IDS Source and the IDS Target, as what runs is simply a Java process. You can therefore update the shell scripts created by the installer or create your own. On the IDS Source, modify the “USER_MEM_ARGS” line inside the file “$CATALINA_HOME/bin/startWEBCACHE.sh“. On the IDS Target, you can create your own start script, which could contain something like:
#!/bin/bash
# IDS Agent Configuration
agent_port="2787"
jvm_mem="-Xms8g -Xmx8g -XX:MaxMetaspaceSize=512m -XX:+UseG1GC -XX:+UseStringDeduplication -XX:G1HeapRegionSize=2m"
jvm_params="-Djava.net.preferIPv4Stack=true -Djava.security.egd=file:///dev/./urandom -Djava.awt.headless=true -Dfile.encoding=UTF-8"
# IDS Agent start command
$JAVA_HOME/bin/java ${jvm_mem} ${jvm_params} -cp "$IDS_HOME/lib/*" \
com.documentum.webcache.transfer.MigAgent \
$IDS_HOME/admin/config/${agent_port}/agent.ini
But what about Windows? In older IDS versions like 16.7.4 (compatible with Documentum 16.7.1 to 21.1), it was possible to increase the memory by following these steps:
- Open the Registry Editor (regedit)
- Navigate to: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\%IDS_SERVICE_NAME%\Env
- Double-click on the “Values” registry key
- Edit the “jvmoptions” definition with the required sizing, e.g.: jvmoptions=-Xms8g -Xmx8g -Dfile.encoding=UTF-8
- Restart the Windows Service of the same name
Note: %IDS_SERVICE_NAME% is the IDS Service name. It depends on the IDS version, port, and communication type (secure or not), but it can be something like “OpenTextDocumentumIDS_secure_2787”.
That wasn’t documented initially, but after some discussions with OpenText years ago (or was it Dell at the time? I don’t recall), it finally made its way into the documentation, in the “User Guide”, under a section (beautifully) named “If the OutOfMemory error appears on the target”. In any case, it worked properly.
How it is supposed to work for IDS 16.7.5…
The issue, and the reason for this blog, is IDS 16.7.5 (compatible with Documentum 23.4 to 24.4; what happened to the versions between 21.1 and 23.4? :D). If you look at the documentation (c.f. IDS User Guide, section 5.13.2), it still mentions applying the exact same steps as described above for 16.7.4. But that doesn’t work for IDS 16.7.5: there is no “Env” folder in the Registry anymore (it was replaced by a “Parameters” one) and therefore no “Values” key either. The Service definition changed in IDS 16.7.5 and, unfortunately, that change brought a bug with it.
As a side note, with IDS 16.7.5, a couple of BAT scripts are created when you configure a Target Agent. The first one creates the Windows Service (c.f. %IDS_HOME%\admin\config\%PORT%\InstallTargetServices.bat), while the second one removes it. The BAT script that creates the Windows Service contains its full definition:
@echo off
echo Target Service Installer.
...
set SERVICE_NAME=OpenTextDocumentumIDS_secure_2787
set PR_DISPLAYNAME=!SERVICE_NAME!
set PR_DESCRIPTION=OpenText Documentum Interactive Delivery Services Target Background Service
set TARGET_HOME=D:\ids\target
set AGENTINI_PATH=D:\ids\target\admin\config\2787\agent.ini
set PR_INSTALL=!TARGET_HOME!\product\bin\TargetService.exe
set PR_SERVICEUSER=LocalSystem
...
REM Path to java installation
set PR_JVM=!JAVA_HOME!\bin\server\jvm.dll
set PR_CLASSPATH=!TARGET_HOME!\lib\*
...
REM JVM configuration
set PR_JVMMS=8192m
set PR_JVMMX=8192m
set PR_JVMSS=4000m
set PR_JVMOPTIONS=-Dfile.encoding=UTF-8
REM Install service
...
Before opening the ticket with OpenText, I tried to modify the lines “PR_JVMMS” and “PR_JVMMX” to the needed values and then re-create the service using the provided scripts. That’s how it is supposed to work, technically/normally… But it looked like it had no impact at all. I also tried to set the JVM size by modifying the line “PR_JVMOPTIONS” from “-Dfile.encoding=UTF-8” to “-Xms8g -Xmx8g -Dfile.encoding=UTF-8” (a similar approach to what used to work in 16.7.4 and lower). That also failed, and it was even worse: the IDS couldn’t start anymore, complaining about some file not found.
Workaround until the patch is available
This is a bug in IDS 16.7.5, as OpenText confirmed after some discussions with them. An upcoming patch should fix it. However, for now, it is possible to configure the IDS Target memory by following these steps:
- Open the Registry Editor (regedit.exe)
- Navigate to: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Apache Software Foundation\Procrun 2.0\%IDS_SERVICE_NAME%\Parameters\Java
- Double-click on the “JvmMs” registry key
- Make sure to set the “Base” to “Decimal” (right hand side) and edit the “Value data” with the required sizing, e.g.: 8192
- Repeat steps 3+4 for any other keys like “JvmMx” and “JvmSs”
- Restart the Windows Service of the same name
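If you prefer to script the workaround, the same registry changes can be sketched with reg.exe from an elevated command prompt. This is only an illustration, not the official procedure: the service name below is the example one from this post, and the value names follow the Apache Procrun convention (REG_DWORD values under Parameters\Java).

```shell
:: Sketch only: set the IDS Target JVM memory via reg.exe,
:: using the example service name from this post.
set SERVICE_NAME=OpenTextDocumentumIDS_secure_2787
set KEY=HKLM\SOFTWARE\Wow6432Node\Apache Software Foundation\Procrun 2.0\%SERVICE_NAME%\Parameters\Java

:: REG_DWORD values, same sizing as in the manual steps above (e.g. 8192).
reg add "%KEY%" /v JvmMs /t REG_DWORD /d 8192 /f
reg add "%KEY%" /v JvmMx /t REG_DWORD /d 8192 /f
reg add "%KEY%" /v JvmSs /t REG_DWORD /d 4000 /f

:: Then restart the Windows Service of the same name, e.g.:
:: net stop "%SERVICE_NAME%" && net start "%SERVICE_NAME%"
```

After the restart, the three values should no longer show as 0 in the registry.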
If you are hit by the bug, which should be the case in IDS 16.7.5, then all 3 values will be 0 by default. To verify if the modification is successful after the service restart, you can open the Resource Monitor (resmon.exe) and look for the PID of the IDS (TargetService.exe) in the Memory tab (c.f. Commit, Working Set, Shareable, Private Memory).
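As an alternative to the Resource Monitor, a quick check can also be done from PowerShell (assuming the process is still named TargetService.exe, as in the service definition shown above):

```shell
powershell -Command "Get-Process TargetService | Select-Object Id, WorkingSet64, PrivateMemorySize64"
```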
The article Dctm – Configure the memory of an IDS Target on Windows first appeared on dbi Blog.
The M-Files Microsoft Teams Integration
This blog focuses on the integration of M-Files with Microsoft Teams. Please find below a brief overview of the installation and configuration requirements. The primary focus will be on the benefits it will bring to the user and on demonstrating how to work with the M-Files Teams integration.
Requirements
- M-Files May 2024 Update or later
- M-Files Plugin for Microsoft Teams with the version 24.3.2.0
- Make sure that you have enough concurrent licenses in the vault, because people without access to M-Files consume concurrent licenses
- The add-in requires a few Microsoft Graph API permissions
- Microsoft Entra ID is configured
- Encrypted gRPC connection must be enabled
This list details the most important requirements. For a full list of requirements, please refer to the official M-Files documentation.
Please also note that external users of Microsoft Word Online can only open documents; they are not permitted to save files in this program.
Installation and configuration
For the software’s functionality to be fully utilised, it is recommended to install both the add-in and the M-Files Teams integration plugin.
Should you elect to install only the add-in, please be advised of the limitations detailed below.
- The users cannot share M-Files content to people outside the organization
- You must configure the Federated Authentication settings manually
- In future updates, the add-in will become a requirement
Add-in configuration
The configuration and installation of the add-in is a two-step activity: one part is done in M-Files Admin, the other in the Microsoft Teams admin center. The detailed configuration steps are documented in the M-Files documentation.
Plugin configuration
For the Plugin for Microsoft Teams, you must install the M-Files Vault application, create and configure a scope for the federated authentication, and configure the plugin. As with the add-in, it is recommended to follow the official M-Files documentation.
Benefits for non-M-Files users
The primary advantage of utilising the Microsoft Teams integration is the ability to share M-Files content with users who do not possess an M-Files account. This feature is instrumental in enabling companies to share a specific set of documents with their entire team, enhancing collaboration and efficiency. One example of this would be internal guidelines such as HR processes. M-Files is an effective system for the management of such documents, with version control enabled to ensure seamless control over all versions of files.
Should you require clarification on the implementation and functionality of the permission, or wish to confirm that documents are always shared in accordance with the permission set in M-Files, please do not hesitate to get in touch. Please refer to the graphic below for a clear explanation of how the permission evaluation and settings work.
It is imperative to ensure that the restricted access settings are always adhered to.
Please find below an example illustrating which internal users have access to M-Files documents. As we can see, this document is available in the M-Files Teams tab.
Next, we will examine a restricted document to understand how permissions are evaluated and set in the M-Files Teams tab.
Functions
In this chapter we will take a look at the benefits and functions, which I will explain based on screenshots of an existing implementation.
The image on the left shows a Teams client. If you look at the orange circle, you will see the Teams channel. If you look closely at the star, you will see the M-Files tab.
Here you can see how to create documents in the same way as you would in M-Files itself.
Microsoft Teams offers a consistent user experience comparable to that of the M-Files client. The functionality is restricted to the view that is made available in Teams. As demonstrated, the system enables full functionality and provides a preview option.
The following list provides an overview of the tasks that can be performed outside the Teams client, provided that the necessary access rights in M-Files have been granted, of course.
- Share an object only with M-Files users
- Stop sharing objects with M-Files users
- Delete a tab
- Share an object with everyone
- Stop sharing an object with everyone
- Share with current channel members
It is also possible to enable collaboration. This will allow users within Microsoft Teams to upload, download or work with the documents available in the Teams tab. Please be aware that these users will consume M-Files concurrent licenses. Should you wish to learn more about this topic, I would be happy to write a dedicated blog post on the subject.
Conclusion
This is an alternative and efficient method of working with a defined set of documents stored in M-Files directly from a place you use most frequently on a daily basis. If you use Teams and are looking for an alternative and easy way to access your M-Files documents, we recommend that you consider the M-Files Teams integration.
Should you require a demonstration or wish to discuss an integration project, please do not hesitate to contact us. I would be pleased to support you in this matter further.
The article The M-Files Microsoft Teams Integration first appeared on dbi Blog.
True Cache on Oracle AI Database 26ai: Quickstart
Oracle Database True Cache is an in-memory, read-only cache designed to reduce latency and offload read activity from the primary database. It’s comparable to an Active Data Guard standby instance, but it runs mostly in memory.
This feature debuted in Oracle Database 23ai, which has since been rebranded as Oracle AI Database 26ai.
There are several free ways to try this release of Oracle Database and its features, for instance:
• Oracle LiveLabs
• OCI Free Tier trial credits
• The Oracle Database 23ai Free VirtualBox Appliance
In this blog, we’ll walk through a simple True Cache setup using two local VirtualBox VMs. The goal is not deep architecture analysis, but rather:
• to introduce practically what True Cache is
• to show one free, hands-on way to try Oracle AI Database 26ai
We will build two Oracle Linux VMs, install Oracle AI Database 26 on them, create an Oracle Database on the first VM, and its True Cache related instance on the second.
Let’s see how it goes:
1. Download an Oracle Linux Full ISO from: https://yum.oracle.com/oracle-linux-isos.html
In this example, we use the latest Oracle Linux 9 update:
OracleLinux-R9-U6-x86_64-dvd.iso
2. Create a new VM in VirtualBox. Choose the correct OS type and give the VM a name:
3. Assign resources (memory, CPU, disk), then click Finish:
4. Before booting the VM, open Settings > Storage and attach the Oracle Linux Full ISO to the Optical Drive:
5. Start the VM and install Oracle Linux. The installation is standard — just make sure the network interface is connected.
6. After the OS is installed on the first VM, clone it. We’ll use the clone as the second machine (one will run the primary database, the other will run True Cache).
7. Update hostnames and /etc/hosts on both machines so they can resolve each other. Both VMs are attached to a Bridged Adapter in VirtualBox so they’re on the same LAN and can communicate.
-- Machine 1, For the Primary Database
[root@orahost1 ~]# hostname
orahost1
[root@orahost1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.20.10.3 orahost1.localdomain orahost1
172.20.10.4 orahost2.localdomain orahost2
-- Machine 2, For the True Cache Instance
[root@orahost2 ~]# hostname
orahost2
[root@orahost2 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.20.10.3 orahost1.localdomain orahost1
172.20.10.4 orahost2.localdomain orahost2
And at last, let’s make sure port 1521 is open for the SQL*Net communication:
-- (TODO on both machines)
[root@orahost1 ~]# firewall-cmd --zone=public --add-port=1521/tcp --permanent
success
[root@orahost1 ~]# firewall-cmd --reload
success
[root@orahost1 ~]# firewall-cmd --zone=public --list-ports
1521/tcp

2. Installing Oracle AI Database 26 binaries
Reference: https://www.oracle.com/database/free/get-started/ (choose the “Oracle Database Free Installation Guide” corresponding to your Linux version)
We begin by installing the preinstall package, which takes care of all the pre-installation tasks, including the creation of the needed users and groups.
-- TODO on both VMs : dnf -y install oracle-ai-database-preinstall-26ai
[root@orahost1 ~]# dnf -y install oracle-ai-database-preinstall-26ai
Last metadata expiration check: 0:19:59 ago on Mon 27 Oct 2025 01:11:08 PM CET.
Dependencies resolved.
================================================================================================
 Package                             Architecture  Version                          Repository         Size
================================================================================================
Installing:
 oracle-ai-database-preinstall-26ai  x86_64        1.0-1.el8                        ol8_appstream      29 k
Installing dependencies:
 compat-openssl10                    x86_64        1:1.0.2o-4.el8_10.1              ol8_appstream     1.1 M
 glibc-devel                         x86_64        2.28-251.0.2.el8                 ol8_baseos_latest  89 k
 ksh                                 x86_64        20120801-269.0.1.el8_10          ol8_appstream     924 k
 libxcrypt-devel                     x86_64        4.1.1-6.el8                      ol8_baseos_latest  25 k
 lm_sensors-libs                     x86_64        3.4.0-23.20180522git70f7e08.el8  ol8_baseos_latest  59 k
 make                                x86_64        1:4.2.1-11.el8                   ol8_baseos_latest 498 k
 sysstat                             x86_64        11.7.3-13.0.2.el8_10             ol8_appstream     427 k

Transaction Summary
================================================================================================
Install  8 Packages

Total download size: 3.1 M
Installed size: 9.3 M
Downloading Packages:
(1/8): libxcrypt-devel-4.1.1-6.el8.x86_64.rpm                        37 kB/s |  25 kB  00:00
(2/8): lm_sensors-libs-3.4.0-23.20180522git70f7e08.el8.x86_64.rpm    81 kB/s |  59 kB  00:00
(3/8): glibc-devel-2.28-251.0.2.el8.x86_64.rpm                      118 kB/s |  89 kB  00:00
(4/8): make-4.2.1-11.el8.x86_64.rpm                                 1.6 MB/s | 498 kB  00:00
(5/8): oracle-ai-database-preinstall-26ai-1.0-1.el8.x86_64.rpm      363 kB/s |  29 kB  00:00
(6/8): ksh-20120801-269.0.1.el8_10.x86_64.rpm                       2.6 MB/s | 924 kB  00:00
(7/8): sysstat-11.7.3-13.0.2.el8_10.x86_64.rpm                      2.4 MB/s | 427 kB  00:00
(8/8): compat-openssl10-1.0.2o-4.el8_10.1.x86_64.rpm                2.2 MB/s | 1.1 MB  00:00
------------------------------------------------------------------------------------------------
Total                                                               2.5 MB/s | 3.1 MB  00:01
Oracle Linux 8 BaseOS Latest (x86_64)                               2.8 MB/s | 3.1 kB  00:00
Importing GPG key 0xAD986DA3:
 Userid     : "Oracle OSS group (Open Source Software group) "
 Fingerprint: 76FD 3DB1 3AB6 7410 B89D B10E 8256 2EA9 AD98 6DA3
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                        1/1
  Installing       : libxcrypt-devel-4.1.1-6.el8.x86_64                                     1/8
  Installing       : glibc-devel-2.28-251.0.2.el8.x86_64                                    2/8
  Running scriptlet: glibc-devel-2.28-251.0.2.el8.x86_64                                    2/8
  Installing       : make-1:4.2.1-11.el8.x86_64                                             3/8
  Running scriptlet: make-1:4.2.1-11.el8.x86_64                                             3/8
  Installing       : compat-openssl10-1:1.0.2o-4.el8_10.1.x86_64                            4/8
  Running scriptlet: compat-openssl10-1:1.0.2o-4.el8_10.1.x86_64                            4/8
  Installing       : ksh-20120801-269.0.1.el8_10.x86_64                                     5/8
  Running scriptlet: ksh-20120801-269.0.1.el8_10.x86_64                                     5/8
  Installing       : lm_sensors-libs-3.4.0-23.20180522git70f7e08.el8.x86_64                 6/8
  Running scriptlet: lm_sensors-libs-3.4.0-23.20180522git70f7e08.el8.x86_64                 6/8
  Installing       : sysstat-11.7.3-13.0.2.el8_10.x86_64                                    7/8
  Running scriptlet: sysstat-11.7.3-13.0.2.el8_10.x86_64                                    7/8
  Installing       : oracle-ai-database-preinstall-26ai-1.0-1.el8.x86_64                    8/8
  Running scriptlet: oracle-ai-database-preinstall-26ai-1.0-1.el8.x86_64                    8/8
  Verifying        : glibc-devel-2.28-251.0.2.el8.x86_64                                    1/8
  Verifying        : libxcrypt-devel-4.1.1-6.el8.x86_64                                     2/8
  Verifying        : lm_sensors-libs-3.4.0-23.20180522git70f7e08.el8.x86_64                 3/8
  Verifying        : make-1:4.2.1-11.el8.x86_64                                             4/8
  Verifying        : compat-openssl10-1:1.0.2o-4.el8_10.1.x86_64                            5/8
  Verifying        : ksh-20120801-269.0.1.el8_10.x86_64                                     6/8
  Verifying        : oracle-ai-database-preinstall-26ai-1.0-1.el8.x86_64                    7/8
  Verifying        : sysstat-11.7.3-13.0.2.el8_10.x86_64                                    8/8

Installed:
  compat-openssl10-1:1.0.2o-4.el8_10.1.x86_64
  glibc-devel-2.28-251.0.2.el8.x86_64
  ksh-20120801-269.0.1.el8_10.x86_64
  libxcrypt-devel-4.1.1-6.el8.x86_64
  lm_sensors-libs-3.4.0-23.20180522git70f7e08.el8.x86_64
  make-1:4.2.1-11.el8.x86_64
  oracle-ai-database-preinstall-26ai-1.0-1.el8.x86_64
  sysstat-11.7.3-13.0.2.el8_10.x86_64

Complete!
Next we download and copy the latest free Oracle AI Database 26ai rpm corresponding to the OS version.
From https://www.oracle.com/database/free/get-started/
For me, that would be: oracle-ai-database-free-26ai-23.26.0-1.el9.x86_64.rpm
-- Both Machines , dnf -y install oracle-ai-database-free-26ai-23.26.0-1.el9.x86_64.rpm
[root@orahost1 software]# ls -l
total 1398208
-rw-r--r--. 1 root root 1431761556 Oct 27 14:10 oracle-ai-database-free-26ai-23.26.0-1.el9.x86_64.rpm
[root@orahost1 software]# dnf -y install oracle-ai-database-free-26ai-23.26.0-1.el9.x86_64.rpm
Last metadata expiration check: 1:06:41 ago on Mon 27 Oct 2025 01:11:08 PM CET.
Dependencies resolved.
================================================================================================
 Package                       Architecture  Version    Repository    Size
================================================================================================
Installing:
 oracle-ai-database-free-26ai  x86_64        23.26.0-1  @commandline  1.3 G

Transaction Summary
================================================================================================
Install  1 Package

Total size: 1.3 G
Installed size: 3.6 G
Downloading Packages:
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                        1/1
  Running scriptlet: oracle-ai-database-free-26ai-23.26.0-1.x86_64                          1/1
  Installing       : oracle-ai-database-free-26ai-23.26.0-1.x86_64                          1/1
  Running scriptlet: oracle-ai-database-free-26ai-23.26.0-1.x86_64                          1/1
[INFO] Executing post installation scripts...
[INFO] Oracle home installed successfully and ready to be configured.
 To configure Oracle AI Database Free, optionally modify the parameters in '/etc/sysconfig/oracle-free-26ai.conf' and then run '/etc/init.d/oracle-free-26ai configure' as root.
  Verifying        : oracle-ai-database-free-26ai-23.26.0-1.x86_64                          1/1

Installed:
  oracle-ai-database-free-26ai-23.26.0-1.x86_64

Complete!

3. Create and Configure the Primary Database
On Machine 1 (the future primary database host), still as root, run the configuration script. This creates:
• a CDB with one PDB
• a local listener
-- Only on Machine 1
[root@orahost1 software]# /etc/init.d/oracle-free-26ai configure
Specify a password to be used for database accounts. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. Note that the same password will be used for SYS, SYSTEM and PDBADMIN accounts:
Confirm the password:
Configuring Oracle Listener.
Listener configuration succeeded.
Configuring Oracle AI Database FREE.
Enter SYS user password:
*************
Enter SYSTEM user password:
***********
Enter PDBADMIN User Password:
**************
Prepare for db operation
7% complete
Copying database files
29% complete
Creating and starting Oracle instance
30% complete
33% complete
36% complete
39% complete
43% complete
Completing Database Creation
47% complete
49% complete
50% complete
Creating Pluggable Databases
54% complete
71% complete
Executing Post Configuration Actions
93% complete
Running Custom Scripts
100% complete
Database creation complete. For details check the logfiles at:
/opt/oracle/cfgtoollogs/dbca/FREE.
Database Information:
Global Database Name:FREE
System Identifier(SID):FREE
Look at the log file "/opt/oracle/cfgtoollogs/dbca/FREE/FREE.log" for further details.
Connect to Oracle AI Database using one of the connect strings:
Pluggable database: orahost1/FREEPDB1
Multitenant container database: orahost1
Now we have:
• a multitenant FREE database
• a local listener listening on port 1521
[root@orahost1 software]# su - oracle
[oracle@orahost1 ~]$ . oraenv
ORACLE_SID = [oracle] ? FREE
The Oracle base has been set to /opt/oracle
[oracle@orahost1 ~]$ sqlplus / as sysdba
Connected to:
Oracle AI Database 26ai Free Release 23.26.0.0.0 - Develop, Learn, and Run for Free
Version 23.26.0.0.0
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 FREEPDB1 READ WRITE NO
SQL> exit
Disconnected from Oracle AI Database 26ai Free Release 23.26.0.0.0 - Develop, Learn, and Run for Free
Version 23.26.0.0.0
[oracle@orahost1 ~]$ lsnrctl status
LSNRCTL for Linux: Version 23.26.0.0.0 - Production on 27-OCT-2025 14:37:34
Copyright (c) 1991, 2025, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=orahost1.localdomain)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 23.26.0.0.0 - Production
Start Date 27-OCT-2025 14:26:40
Uptime 0 days 0 hr. 10 min. 54 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Default Service FREE
Listener Parameter File /opt/oracle/product/26ai/dbhomeFree/network/admin/listener.ora
Listener Log File /opt/oracle/diag/tnslsnr/orahost1/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=orahost1.localdomain)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "4224ed483625a7e9e063050a14accd4f" has 1 instance(s).
Instance "FREE", status READY, has 1 handler(s) for this service...
Service "FREE" has 1 instance(s).
Instance "FREE", status READY, has 1 handler(s) for this service...
Service "FREEXDB" has 1 instance(s).
Instance "FREE", status READY, has 1 handler(s) for this service...
Service "freepdb1" has 1 instance(s).
Instance "FREE", status READY, has 1 handler(s) for this service...
The command completed successfully
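Before creating the True Cache instance, it can be worth a quick sanity check that the second host reaches the primary’s listener over SQL*Net. This is just a sketch reusing the connect strings shown above:

```shell
# From orahost2: confirm the listener answers, then test an actual logon.
# tnsping only probes the listener; the sqlplus call also validates credentials.
tnsping orahost1
sqlplus -L sys@"orahost1:1521/FREE" as sysdba
```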
4. Create the True Cache Instance
Next, we create and configure the True Cache instance on Machine 2. We only need to copy the password file from the primary to the True Cache host first:
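The copy itself can be done with any file transfer tool; for instance, a hedged scp sketch (assuming the password file sits in the default $ORACLE_HOME/dbs location on the primary):

```shell
# Run on orahost2: fetch the primary's password file into /opt/oracle/tmp.
scp oracle@orahost1:/opt/oracle/product/26ai/dbhomeFree/dbs/orapwFREE /opt/oracle/tmp/orapwFREE
```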
[oracle@orahost2 bin]$ ll /opt/oracle/tmp/orapwFREE
-rw-r-----. 1 oracle oinstall 2048 Sep 17 16:56 /opt/oracle/tmp/orapwFREE
Then we run dbca with the -createTrueCache option:
[oracle@orahost2 bin]$ ./dbca -silent -createTrueCache -gdbName FREE -sourceDBConnectionString orahost1:1521/FREE -passwordFileFromSourceDB /opt/oracle/tmp/orapwFREE -dbUniqueName FREE_TC -sid FREE
Enter Remote DB SYS user password:
Session ID of the current execution is: 2
-----------------
Running Extract_password_file_from_blob_file job
Completed Extract_password_file_from_blob_file job
25% complete
-----------------
Running Create_static_listener job
Completed Create_static_listener job
38% complete
-----------------
Running Register_listener job
Completed Register_listener job
50% complete
-----------------
Running Extract_tde_wallet_from_blob_file job
Skipping. Job is detected as not applicable.
54% complete
-----------------
Running Setup_required_directories job
Skipping. Job is detected as not applicable.
57% complete
-----------------
Running Create_pfile job
Completed Create_pfile job
61% complete
-----------------
Running Start_nomount_instance job
Completed Start_nomount_instance job
64% complete
-----------------
Running Create_TDE_wallet job
Skipping. Job is detected as not applicable.
68% complete
-----------------
Running Create_truecache_instance job
Completed Create_truecache_instance job
71% complete
-----------------
Running Add_oratab_entry job
Completed Add_oratab_entry job
75% complete
-----------------
Running Reopen_wallet job
Skipping. Job is detected as not applicable.
100% complete
Look at the log file "/opt/oracle/cfgtoollogs/dbca/FREE_TC/FREE_TC0.log" for further details.
The instance is created. Let’s confirm its state:
SQL> select open_mode,database_role from v$database;

OPEN_MODE            DATABASE_ROLE
-------------------- ----------------
READ ONLY WITH APPLY TRUE CACHE

SQL> select name from v$database;

NAME
---------
FREE

SQL> select db_unique_name from v$database;

DB_UNIQUE_NAME
------------------------------
FREE_TC
As you can see, it looks very similar to a Data Guard standby — except there are no datafiles.
SQL> select name from v$controlfile;
NAME
--------------------------------------------------------------------------------
/opt/oracle/oradata/FREE_TC/controlfile/o1_mf_nhz3od6n_.ctl
SQL> show parameter spfile
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string /opt/oracle/product/23ai/dbhom
eFree/dbs/spfileFREE.ora
SQL> select name from v$datafile;
no rows selected
SQL> select name from v$tempfile;
NAME
--------------------------------------------------------------------------------
/opt/oracle/oradata/S_TEMP_##TC##_FREE#TC_8192_1_3_201_1
/opt/oracle/oradata/S_TEMP_##TC##_FREE#TC_8192_2_3_202_1
/opt/oracle/oradata/S_TEMP_##TC##_FREE#TC_8192_3_3_203_1
5. Configure a True Cache Application Service
Applications can use True Cache in two main ways:
Option 1: The application maintains two physical connections:
• One to the primary database
• One to the True Cache instance
Each connection uses its own database service. The app chooses the connection based on whether it needs to read or write. This works with any language and any existing client drivers.
Option 2: The application maintains one logical connection (to the primary database service). The JDBC Thin driver (starting with Oracle AI Database 26ai) automatically manages physical connections to both the primary and True Cache. This option is for Java applications.
For more details, see:
https://docs.oracle.com/en/database/oracle/oracle-database/26/odbtc/methods-connecting-true-cache.html
In this example, we’ll configure an application service for a hypothetical sales application that queries data in the FREEPDB1 PDB.
Step 1: On the primary database, create and start a service called SALES:
-- On the primary
SQL> alter session set container=FREEPDB1;
Session altered.
SQL> BEGIN
DBMS_SERVICE.CREATE_SERVICE('SALES', 'SALES');
DBMS_SERVICE.START_SERVICE('SALES');
END;
/
Now the service is visible in the primary listener:
[oracle@orahost1 ~]$ lsnrctl status | grep SALES
Service "SALES" has 1 instance(s).
Step 2: From the primary host, run dbca again to configure the related True Cache service (sales_tc) on the True Cache instance:
[oracle@orahost1 ~]$ $ORACLE_HOME/bin/dbca -configureDatabase -configureTrueCacheInstanceService -sourceDB FREE -trueCacheConnectString orahost2:1521/FREE_TC -trueCacheServiceName sales_tc -serviceName sales -pdbName FREEPDB1 -silent
Session ID of the current execution is: 1
-----------------
Running Initialization job
Enter SYS user password:
Completed Initialization job
33% complete
-----------------
Running Validate_true_cache_instance_connection job
Completed Validate_true_cache_instance_connection job
37% complete
-----------------
Running Validate_dataguard job
Skipping. Job is detected as not applicable.
41% complete
-----------------
Running Validate_db_version job
Completed Validate_db_version job
44% complete
-----------------
Running Validate_true_cache_instance job
Completed Validate_true_cache_instance job
48% complete
-----------------
Running Validate_archive_log_mode job
Completed Validate_archive_log_mode job
52% complete
-----------------
Running Validate_pdb job
Completed Validate_pdb job
56% complete
-----------------
Running Validate_primary_db_service job
Completed Validate_primary_db_service job
59% complete
-----------------
Running Validate_true_cache_db_service job
Completed Validate_true_cache_db_service job
63% complete
-----------------
Running Validate_true_cache_instance_open_mode job
Completed Validate_true_cache_instance_open_mode job
67% complete
-----------------
Running Create_truecache_service job
Completed Create_truecache_service job
73% complete
-----------------
Running Add_network_entry job
Completed Add_network_entry job
80% complete
-----------------
Running Modify_primary_service job
Completed Modify_primary_service job
87% complete
-----------------
Running Start_truecache_service job
Completed Start_truecache_service job
93% complete
-----------------
Running Enable_service_registration job
Completed Enable_service_registration job
100% complete
Look at the log file "/opt/oracle/cfgtoollogs/dbca/FREE/FREE0.log" for further details.
The service has been created. Let’s check on the True Cache machine:
[oracle@orahost2 bin]$ lsnrctl status | grep sales_tc
Service "sales_tc" has 1 instance(s).
[oracle@orahost2 bin]$ sqlplus / as sysdba
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 FREEPDB1 READ ONLY NO
SQL> alter session set container=freepdb1;
Session altered.
SQL> select name from v$active_services;
NAME
----------------------------------------------------------------
sales_tc
FREEPDB1
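As a last smoke test, the two services can be exercised following the Option 1 pattern described earlier: the read-write connection targets the SALES service on the primary, while read-only traffic goes to sales_tc on the True Cache host. The app_user account below is hypothetical, just to illustrate the connect strings:

```shell
# Read-write work goes to the primary service...
sqlplus app_user@"orahost1:1521/SALES"
# ...while read-only queries go to the True Cache service.
sqlplus app_user@"orahost2:1521/sales_tc"
```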
Conclusion
• It’s fast and cheap to build a small lab for Oracle AI Database 26ai and start testing new features on your laptop.
• True Cache is one of those features: it’s easy to set up, designed for offloading read-only workload, and included with Enterprise Edition licensing (unlike Active Data Guard, which requires an extra option). It’s definitely worth evaluating for applications that mostly read and rarely or never need to update data.
The article True Cache on Oracle AI Database 26ai: Quickstart first appeared on dbi Blog.
Smart Automation with M-Files: Real Use Cases
In the world of information management, time is money, and manual processes are the enemy of efficiency.
The time of simple document management systems is over. Now, we need tools that help workers optimize their time and make their lives easier.

This is where M-Files’ automation capabilities really shine, offering a significant advantage in the realm of information management.
By combining metadata-driven logic, workflows, and integrations, M-Files enables organizations to automate routine tasks, reduce human error, and focus on what really matters.
However, it is often not so obvious where we can achieve this gain.
So let’s explore how smart automation works in M-Files, looking at real-world use cases that deliver measurable time savings, which may give you ideas for your own business.
What Is Smart Automation in M-Files?
Smart automation in M-Files refers to the use of:
- Metadata to drive decisions and routing
- Workflows to manage the lifecycles of objects (not only documents)
- Automatic permissions to control access dynamically during the life of the object
- Notifications and tasks to keep people informed and accountable
- Integrations with external systems (e.g., Microsoft Teams, ERP, CRM, email, …)
The result: a system that thinks for you! It automates repetitive steps, ensures total consistency across departments, and reduces silos.
Here are some real use cases
Automated Invoice Approval
Challenge: Manual invoice approvals are slow, error-prone, and often get stuck in email chains.
Solution: M-Files automatically routes invoices based on metadata like amount, vendor, or department. Approvers receive tasks and reminders, and the system logs every step for audit purposes.
Time Saved: Up to 50% reduction in processing time.
Challenge: Tracking contract versions, deadlines, and approvals manually is risky and inefficient.
Solution: M-Files automates the entire contract lifecycle, from drafting and review to approval and renewal. Metadata triggers reminders before expiration and ensures the right people are involved at each stage.
Time Saved: Weeks shaved off contract turnaround times.
Challenge: On-boarding new employees involves multiple documents, departments, and compliance steps.
Solution: M-Files initiates on-boarding workflows as soon as a new hire is created. It assigns tasks to HR, IT, and managers, ensures documents are signed, and stores everything in a compliant structure.
Time Saved: Hours per new hire, with improved consistency.
Challenge: Managing SOPs, CAPAs, and audit trails manually is time-consuming and error-prone.
Solution: M-Files automates document versioning, review cycles, and approval workflows. It ensures only the latest approved version is accessible and maintains a full audit trail.
Time Saved: Significant reduction in audit preparation time.
Challenge: Important emails get lost in inboxes or saved in the wrong place.
Solution: M-Files integrates with Outlook to automatically classify and store emails based on metadata—linking them to the right project, client, or case.
Time Saved: Minutes per email, multiplied across teams.
The secret behind M-Files automation is metadata. Instead of relying on folder structures or manual tagging, M-Files uses metadata to understand what a document is and what should happen next. This enables:
- Context-aware workflows
- Dynamic access control
- Smart search and retrieval
- Automated compliance enforcement
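To make the metadata-driven idea concrete, here is a minimal sketch in Python of how metadata can decide what happens next, as in the invoice example above. This is purely illustrative and not the M-Files API; the function, property names, and thresholds are hypothetical:

```python
# Hypothetical sketch of metadata-driven routing (NOT the M-Files API).
# The document's metadata, not its folder location, decides who approves it.

def route_invoice(metadata: dict) -> str:
    """Pick an approver based on invoice metadata (amount, department)."""
    amount = metadata.get("amount", 0)
    department = metadata.get("department", "general")
    if amount > 10_000:
        return "cfo"            # large invoices escalate to the CFO
    if department == "it":
        return "it-manager"     # department-specific approval path
    return "team-lead"          # default approval path

print(route_invoice({"amount": 25_000, "department": "sales"}))  # cfo
print(route_invoice({"amount": 500, "department": "it"}))        # it-manager
```

In a real workflow engine, the same metadata would also drive access rights, reminders, and the audit trail, without anyone maintaining folder structures.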
The main point is also how M-Files delivers information. Since documents are classified by their properties (metadata), it is easy to adapt the access method to the user’s profile, while avoiding duplicates.
For instance, a sales representative would want direct access to a contract, whereas a project manager would prefer to view it as an appendix to their project without having to navigate through a whole tree structure.
Smart automation in M-Files isn’t just about saving time, it’s about enabling people to focus on high-value work. Whether you’re in finance, legal, HR, or quality management, M-Files can help you eliminate bottlenecks, reduce risk, and accelerate your business processes.
I hope these examples have given you some ideas and helped you think about ways to improve information management.
If you’re still relying on manual steps and email chains, it might be time to explore what smart automation can do for you. Of course, we will be pleased to assist you in that transition.
More information here.
The article Smart Automation with M-Files: Real Use Cases first appeared on the dbi Blog.
PostgreSQL 19: Two nice little improvements: log_autoanalyze_min_duration and search_path in the psql prompt
Two nice little improvements have been committed for PostgreSQL 19. The first one is about logging the duration of automatic analyze while the second one is about displaying the current search_path in psql’s prompt.
Let's start with the improvement for psql. As you probably know, the default prompt in psql looks like this:
postgres@:/home/postgres/ [pgdev] psql
psql (19devel)
Type "help" for help.
postgres=#
While I do not have an issue with the default prompt, you maybe want to see more information. An example of what you might do is this:
postgres=# \set PROMPT1 '%M:%> %n@%/%R%#%x '
[local]:5432 postgres@postgres=#
Now you immediately see that this is a connection over a socket on port 5432, and you’re connected as the “postgres” user to the “postgres” database. If you want to make this permanent, add it to your “.psqlrc” file.
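For example, the ".psqlrc" file would then contain just this line (the prompt string is the same one set interactively above):

```
\set PROMPT1 '%M:%> %n@%/%R%#%x '
```

psql reads this file at startup, so every new session gets the customized prompt automatically.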
The new prompting option which will come with PostgreSQL 19 is “%S”, and this will give you the search_path:
postgres=# show search_path;
search_path
-----------------
"$user", public
(1 row)
postgres=# \set PROMPT1 '%/%R%x%..%S..# '
postgres=.."$user", public..# set search_path = 'xxxxxxx';
SET
postgres=..xxxxxxx..#
Nice. You can find all the other prompting options in the documentation of psql, the commit is here.
The second improvement is about logging the duration of automatic analyze. Before PostgreSQL 19 we only had log_autovacuum_min_duration. This logs all actions of autovacuum that cross the specified threshold, which of course includes the automatic analyze as well, but usually it is autovacuum itself taking most of the time. This is now separated, and there is a new parameter called "log_autoanalyze_min_duration". You can easily test this with the following snippet:
postgres=# create table t ( a int , b text );
CREATE TABLE
postgres=# insert into t select i, i::text from generate_series(1,1000000) i;
INSERT 0 1000000
postgres=# alter system set log_autoanalyze_min_duration = '1ms';
ALTER SYSTEM
postgres=# select pg_reload_conf();
pg_reload_conf
----------------
t
(1 row)
postgres=# insert into t select i, i::text from generate_series(1,1000000) i;
INSERT 0 1000000
Looking at the log file there is now this:
2025-10-29 08:05:52.744 CET - 1 - 3454 - - @ - 771 LOG:  automatic analyze of table "postgres.public.t"
avg read rate: 0.033 MB/s, avg write rate: 0.033 MB/s
buffer usage: 10992 hits, 1 reads, 1 dirtied
WAL usage: 5 records, 1 full page images, 10021 bytes, 0 buffers full
system usage: CPU: user: 0.10 s, system: 0.01 s, elapsed: 0.23 s
Also nice, the commit is here.
The article PostgreSQL 19: Two nice little improvements: log_autoanalyze_min_duration and search_path in the psql prompt first appeared on the dbi Blog.
M-Files connected to Microsoft Copilot
If you are a Microsoft 365 subscriber who already owns a Microsoft Copilot licence, you may wish to consider connecting your M-Files vault to benefit from the capabilities of Microsoft Copilot.
M-Files offers a connector that can meet these needs. In this blog, we will examine the functionality it will provide and the steps required to enable it.
Requirements
In order to successfully install and enable the M-Files connector for Microsoft Copilot, it is essential to meet the following requirements.
- License for the M-Files Connector for Copilot (the connector is included in the M-Files Business Platform)
- M-Files Server version 24.9 or later
- TLS 1.2 or later is enabled
- Administrative rights in the M-Files vault
- Administrative rights in the Microsoft Entra ID tenant
- End-users must have
- An Entra ID
- A license including Microsoft Copilot
- Matching email address in Entra ID and M-Files Manage
- A maximum of 30 connections per tenant (important to understand: this limit covers all connections in the tenant, not only M-Files)
- A maximum of 50 million items per tenant
- A maximum of 500 GB in connection size
- A maximum of 128 properties per schema
- A maximum of 4 MB of text content for one item
In this use case, we make documents available to Microsoft Copilot. These can subsequently be utilised by Copilot, for instance in the context of self-developed Copilot agents. Please note that these self-developed agents are not disclosed to the public; they remain within the security perimeter of your Microsoft 365 tenant.
We use the M-Files property "Copilot" to enable the content. All documents where it is set to "Yes" are enabled for Microsoft Copilot.
Configurations
This section exclusively covers the implemented configuration settings. For comprehensive instructions on configuration, please refer to the official M-Files documentation.
In order to limit the scope of the documents uploaded to Microsoft Copilot, the following configurations were applied. These configurations can be set in the M-Files Admin tool under the M-Files Copilot Connector.
- Object type: “Document“
- Additional Conditions: “Copilot = Yes“
In this showcase, I will demonstrate the functionality from the perspectives of both M-Files Vault and Microsoft Copilot. The demonstration will be observed from the viewpoints of two users, each with different access rights in both worlds.
List of the user accounts to demonstrate the use case:
Microsoft 365 Administrator
This user has no access to the M-Files Vault
M-Files Demo User
The user is part of the Microsoft tenant
To illustrate the use case, I will use a document named “EN-CNTNT-eBook-SRGCM11924.pdf“. The document is stored in the M-Files Vault and belongs to the object type “Documents” and class “Other Documents“. The “Other Documents” class includes a property called “Copilot“, which is used to enable the document for Microsoft Copilot, as previously outlined.
If you look at the screenshot on the right, you can see the M-Files Vault. This screenshot shows the document object, and the property “Copilot” that is set to “yes“.
If you set your document up like the screenshot above, it will automatically be uploaded to Microsoft Copilot, and you can then use it from there. The screenshot below shows what it will look like in Microsoft Copilot, based on a search with the configured M-Files vertical.
The Microsoft 365 Administrator obtained the following results when searching in Copilot. In the blue circle, you can see the configured vertical for M-Files. He has an account in the tenant that is configured in the M-Files vault, but he has no account in, or access to, the vault itself.
This is what the M-Files Demo user can see (the user with the cat profile picture).
This user has access to the M-Files vault and is part of the Microsoft tenant that is set up in the vault.
The screenshot below presents a side-by-side comparison of the results for additional clarification.
As can be seen in the pictures above, only users with access to the M-Files Vault can access and work with the content in Microsoft Copilot.
ConclusionThe M-Files Connector for Microsoft Copilot is an ideal solution for businesses that already own or plan to purchase a Microsoft 365 subscription as part of their IT strategy. Should you be willing to accept the limitations and consent to the movement of your data outside of the security boundaries of your M-Files vault, then this will be a possible option.
As I outlined in a previous blog post on M-Files and Microsoft, Microsoft Copilot capabilities will be made available in the near future for M-Files vaults that use SharePoint Embedded. Once enabled, Microsoft Copilot can directly access the data stored in the M-Files vault in the SharePoint Embedded container. This resolves the security-boundary issue, as the data no longer leaves the company's security area. Further details about this option will be provided in one of my subsequent blogs, following the official release planned for November 2025.
Should you have any further questions or wish to discuss your options, we would be delighted to provide support and assistance. Please do not hesitate to get in touch.
The article M-Files connected to Microsoft Copilot first appeared on the dbi Blog.
pgconf.eu 2025 – RECAP
I was fortunate to be able to attend pgconf.eu 2025.
This year's event took place in Riga and once again brought together key members of the community, contributors, committers, sponsors, and users from across the world.
I would summarize this year's event with three main topics: AI/LLMs, PostgreSQL 18, and monitoring.
Compared to last year, the formula changed a bit for the community events day on Tuesday, where for the first time different "Summits" were organized. For full details on the event and the schedule, as well as the presentation slides of each talk, see: Schedule — PostgreSQL Conference Europe 2025
I had the chance to be chosen as a speaker for the AI Summit, which was quite interesting for me. In total there were 13 short talks (10 minutes each) on various topics related to PostgreSQL and AI/LLMs. It was dense, with a lot of interesting implementation ideas; you can find the details and slides here: PGConf.EU 2025 PostgreSQL AI Summit – PostgreSQL wiki. AI/LLMs are the hot topic of the moment, and naturally they came up often during this event, in the talks and in the discussions. You can find the PDF of my presentation here. I explained a business-case implementation of a BI self-service agentic RAG that finds relevant fields for a target KPI and produces data marts as output.
Since the talks were short, there was room at the end for a debate between the audience and the speakers. The discussion, nicely moderated by the organizers, was interesting because it exposed the same strong feelings people have in general about AI/LLMs: a blend of distrust and incomplete understanding of what it is about and how it could help organizations. This, in itself, shows that the PostgreSQL community has the same difficulty separating technical challenges from organizational/human challenges. My view here is that we don't really have technical challenges; they are almost irrelevant to most arguments. The real questions are about human relations and about understanding the value that a DBA, for example, brings to the organization. To me, installing and configuring PostgreSQL has no benefits in terms of personal growth, so automating it is quite natural, and adding AI/LLMs on top is nice to have but not fundamentally different from an Ansible playbook. For a junior DBA, however, this is an additional abstraction that can be dangerous, because it provides tools whose consequences users can't fully grasp. This outlines that the main issue in integrating AI/LLM workflows is more a governance/C-level management issue than a technical one, and it can't become the latest excuse for adding to the technological debt.
Jay Miller from Aiven explained how easily you can end up exposing PII through LLMs and MCPs. This is a really relevant topic, knowing that more and more organizations are facing issues like shadow IT. He was also quite the show host and fun to listen to; I strongly recommend watching the recording once it is released.
This year's conference came just after the PostgreSQL 18 release, one of the versions that brought major improvements and that initiates changes for releases to come. I was quite enthusiastic to listen to Melanie Plageman on how she worked on the freezing improvements in this release. I have to say, usually when I attend an advanced internals talk, I am more confused afterwards than before. But here, Melanie did an amazing job of covering a complex technical topic without losing the audience.
Gülçin Yıldırım Jelínek, on her side, explained what's new in PG18 regarding constraints, such as NOT ENFORCED and NOT NULL, and how to use them. Raj Verma, COO of Cybertec, explained during a sponsor talk why compliance matters, how to minimize the risks, and how PostgreSQL helps us be PCI DSS, GDPR, nLPD, or HIPAA compliant.
Another interesting talk I was happy to attend was by Floor Drees and Gabriele Bartolini. They explained how they went about bringing the CloudNativePG project into the CNCF.
Monitoring
This leads me to another important topic. I wasn't looking for it, but it became a bit of a main subject for me, as a DBA who has been interested in performance tuning over the years. Monitoring on PostgreSQL was covered by several talks, such as Luigi Nardi with his idea of workload fingerprinting in their DBtune tool. Additionally, Lukas Fittl presented pg_stat_plans, an extension which aims at tracking execution plans over time. This is definitely something I am going to try, and I will push for its implementation in the core extensions, if not the core code itself.
The reason for that is obvious to me: PostgreSQL is becoming more and more central to enterprise organizations, and apart from subjects like TDE, monitoring is going to become a key aspect of automation, CloudNativePG, and AI/LLM workflows. Making PostgreSQL better and easier to monitor at its core will allow leveraging it at all these levels. Cloud companies realize that already, hence their involvement in similar projects.
In the end, this year was once again an occasion for me to think about many relevant topics and to exchange with PostgreSQL hackers as well as users from around the world. I came back home with my head full of ideas to investigate.
Additionally, after the conference, the videos of each talk will be uploaded to the pgconf.eu YouTube channel: PostgreSQL Europe. In the meantime, you can already check out previous amazing talks and this year's pgday Paris.
So once again, the PostgreSQL flag was flying high!

The article pgconf.eu 2025 – RECAP first appeared on the dbi Blog.
Alfresco – When a JDBC driver breaks after years of stability
A customer recently contacted us regarding an issue with an old Alfresco Enterprise 4.2.2 environment. The system had been running stably for years, with a nightly restart of the Tomcat/Alfresco service to help recycle the memory and fix long-standing issues in this version. Earlier this week, one of these nightly restarts suddenly failed and since then, Alfresco would completely refuse to start. Unfortunately, that was on Production, so it was rather important! Yes, it is a very old version of Alfresco (on an old OS as well, obviously), but the customer doesn’t want to upgrade or patch it, so it was left as is since its installation by another company that isn’t around anymore.
The error
When reviewing the Tomcat logs, the startup sequence looked normal until Alfresco attempted to initialize Hibernate and connect to its MySQL database. Here is the relevant part of the log with the error:
oct. 22, 2025 5:45:37 AM org.apache.catalina.core.AprLifecycleListener lifecycleEvent
INFOS: Loaded APR based Apache Tomcat Native library 1.1.33 using APR version 1.4.6.
oct. 22, 2025 5:45:37 AM org.apache.catalina.core.AprLifecycleListener lifecycleEvent
INFOS: APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
oct. 22, 2025 5:45:37 AM org.apache.catalina.core.AprLifecycleListener initializeSSL
INFOS: OpenSSL successfully initialized (OpenSSL 0.9.8y 5 Feb 2013)
oct. 22, 2025 5:45:37 AM org.apache.coyote.AbstractProtocol init
INFOS: Initializing ProtocolHandler ["http-apr-8080"]
oct. 22, 2025 5:45:37 AM org.apache.coyote.AbstractProtocol init
INFOS: Initializing ProtocolHandler ["ajp-apr-8009"]
oct. 22, 2025 5:45:37 AM org.apache.coyote.AbstractProtocol init
INFOS: Initializing ProtocolHandler ["http-bio-8443"]
oct. 22, 2025 5:45:37 AM org.apache.catalina.startup.Catalina load
INFOS: Initialization processed in 995 ms
...
2025-10-22 05:46:15,118 INFO [alfresco.repo.admin] [localhost-startStop-1] Using database URL 'jdbc:mysql://localhost:3306/alfresco?autoReconnect=true' with user 'alfresco'.
2025-10-22 05:46:15,491 INFO [alfresco.repo.admin] [localhost-startStop-1] Connected to database MySQL version 5.6.28-log
2025-10-22 05:46:21,836 INFO [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Starting 'sysAdmin' subsystem, ID: [sysAdmin, default]
2025-10-22 05:46:21,863 INFO [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Startup of 'sysAdmin' subsystem, ID: [sysAdmin, default] complete
2025-10-22 05:46:21,902 INFO [domain.schema.SchemaBootstrap] [localhost-startStop-1] Ignoring script patch (post-Hibernate): patch.db-V4.2-metadata-query-indexes
2025-10-22 05:46:31,210 WARN [hibernate.cfg.SettingsFactory] [localhost-startStop-1] Could not obtain connection metadata
org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (java.lang.NullPointerException)
at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1549)
at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1388)
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
at org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider.getConnection(LocalDataSourceConnectionProvider.java:81)
at org.hibernate.cfg.SettingsFactory.buildSettings(SettingsFactory.java:84)
at org.hibernate.cfg.Configuration.buildSettings(Configuration.java:2079)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1304)
at org.springframework.orm.hibernate3.LocalSessionFactoryBean.newSessionFactory(LocalSessionFactoryBean.java:860)
at org.springframework.orm.hibernate3.LocalSessionFactoryBean.buildSessionFactory(LocalSessionFactoryBean.java:779)
at org.springframework.orm.hibernate3.AbstractSessionFactoryBean.afterPropertiesSet(AbstractSessionFactoryBean.java:211)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1477)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1417)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:190)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:563)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425)
at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:276)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:197)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47)
at org.alfresco.web.app.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:63)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:5068)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5584)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:899)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:875)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:652)
at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:1092)
at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1984)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: java.lang.NullPointerException
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1073)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:987)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:982)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:927)
at com.mysql.jdbc.ConnectionImpl.buildCollationMapping(ConnectionImpl.java:1023)
at com.mysql.jdbc.ConnectionImpl.initializePropsFromServer(ConnectionImpl.java:3451)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2443)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2213)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:797)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:389)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:305)
at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
at org.apache.commons.dbcp.BasicDataSource.validateConnectionFactory(BasicDataSource.java:1556)
at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1545)
... 37 more
Caused by: java.lang.NullPointerException
at java.util.TreeMap.put(TreeMap.java:556)
at com.mysql.jdbc.Util.resultSetToMap(Util.java:506)
at com.mysql.jdbc.ConnectionImpl.buildCollationMapping(ConnectionImpl.java:964)
... 53 more
2025-10-22 05:46:31,597 INFO [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Stopping 'sysAdmin' subsystem, ID: [sysAdmin, default]
2025-10-22 05:46:31,597 INFO [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Stopped 'sysAdmin' subsystem, ID: [sysAdmin, default]
2025-10-22 05:46:31,607 ERROR [web.context.ContextLoader] [localhost-startStop-1] Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'extranetDialectResourceLoader' defined in class path resource [alfresco/extension/context/typo3-context.xml]: Cannot create inner bean 'org.springframework.beans.factory.config.PropertyPathFactoryBean#70397d85' of type [org.springframework.beans.factory.config.PropertyPathFactoryBean] while setting bean property 'dialectClass'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.beans.factory.config.PropertyPathFactoryBean#70397d85' defined in class path resource [alfresco/extension/context/typo3-context.xml]: Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'extranetDialect': FactoryBean threw exception on object creation; nested exception is org.hibernate.exception.GenericJDBCException: Cannot open connection
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:281)
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:120)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1325)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1086)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:517)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:190)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:580)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425)
at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:276)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:197)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47)
at org.alfresco.web.app.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:63)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:5068)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5584)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:899)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:875)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:652)
at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:1092)
at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1984)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.beans.factory.config.PropertyPathFactoryBean#70397d85' defined in class path resource [alfresco/extension/context/typo3-context.xml]: Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'extranetDialect': FactoryBean threw exception on object creation; nested exception is org.hibernate.exception.GenericJDBCException: Cannot open connection
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:527)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:270)
... 29 more
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'extranetDialect': FactoryBean threw exception on object creation; nested exception is org.hibernate.exception.GenericJDBCException: Cannot open connection
at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:149)
at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:102)
at org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1429)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:245)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:190)
at org.springframework.beans.factory.config.PropertyPathFactoryBean.setBeanFactory(PropertyPathFactoryBean.java:186)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeAwareMethods(AbstractAutowireCapableBeanFactory.java:1439)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1408)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
... 31 more
Caused by: org.hibernate.exception.GenericJDBCException: Cannot open connection
at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:103)
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:91)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:29)
at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:426)
at org.hibernate.jdbc.ConnectionManager.getConnection(ConnectionManager.java:144)
at org.hibernate.jdbc.BorrowedConnectionProxy.invoke(BorrowedConnectionProxy.java:150)
at com.sun.proxy.$Proxy5.setAutoCommit(Unknown Source)
at org.alfresco.hibernate.DialectFactoryBean.getObject(DialectFactoryBean.java:67)
at org.alfresco.hibernate.DialectFactoryBean.getObject(DialectFactoryBean.java:39)
at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:142)
... 39 more
Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (java.lang.NullPointerException)
at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1549)
at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1388)
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
at org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider.getConnection(LocalDataSourceConnectionProvider.java:81)
at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:423)
... 45 more
Caused by: java.sql.SQLException: java.lang.NullPointerException
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1073)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:987)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:982)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:927)
at com.mysql.jdbc.ConnectionImpl.buildCollationMapping(ConnectionImpl.java:1023)
at com.mysql.jdbc.ConnectionImpl.initializePropsFromServer(ConnectionImpl.java:3451)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2443)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2213)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:797)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
at sun.reflect.GeneratedConstructorAccessor103.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:389)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:305)
at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
at org.apache.commons.dbcp.BasicDataSource.validateConnectionFactory(BasicDataSource.java:1556)
at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1545)
... 49 more
Caused by: java.lang.NullPointerException
at java.util.TreeMap.put(TreeMap.java:556)
at com.mysql.jdbc.Util.resultSetToMap(Util.java:506)
at com.mysql.jdbc.ConnectionImpl.buildCollationMapping(ConnectionImpl.java:964)
... 64 more
oct. 22, 2025 5:46:31 AM org.apache.catalina.core.StandardContext listenerStart
GRAVE: Exception lors de l'envoi de l'évènement contexte initialisé (context initialized) à l'instance de classe d'écoute (listener) org.alfresco.web.app.ContextLoaderListener
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'extranetDialectResourceLoader' defined in class path resource [alfresco/extension/context/typo3-context.xml]: Cannot create inner bean 'org.springframework.beans.factory.config.PropertyPathFactoryBean#70397d85' of type [org.springframework.beans.factory.config.PropertyPathFactoryBean] while setting bean property 'dialectClass'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.beans.factory.config.PropertyPathFactoryBean#70397d85' defined in class path resource [alfresco/extension/context/typo3-context.xml]: Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'extranetDialect': FactoryBean threw exception on object creation; nested exception is org.hibernate.exception.GenericJDBCException: Cannot open connection
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:281)
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:120)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1325)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1086)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:517)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291)
...
The key clue here is the NullPointerException thrown from buildCollationMapping(), which probably indicates a problem during the JDBC driver's initialization of the MySQL collation data.
DB & JDBC Connector Details
Given that the error appeared to be linked to the DB, I checked that it was properly running and responding, while also retrieving its version:
mysql> SELECT VERSION();
+------------+
| VERSION() |
+------------+
| 5.6.28-log |
+------------+
1 row in set (0.01 sec)
Everything looked fine, and the collation list was consistent with no apparent missing data and no “null” values:
mysql> SHOW COLLATION;
+---------------------+----------+-----+---------+----------+---------+
| Collation | Charset | Id | Default | Compiled | Sortlen |
+---------------------+----------+-----+---------+----------+---------+
| big5_chinese_ci | big5 | 1 | Yes | Yes | 1 |
| big5_bin | big5 | 84 | | Yes | 1 |
| dec8_swedish_ci | dec8 | 3 | Yes | Yes | 1 |
| dec8_bin | dec8 | 69 | | Yes | 1 |
...
Another important thing to check is the JDBC Connector used by Tomcat to connect to the database (Tomcat doesn't use the mysql client binary to connect, as I did above):
[alfresco@alf01 ~]$ cd $CATALINA_HOME
[alfresco@alf01 tomcat]$
[alfresco@alf01 tomcat]$ ls -l lib/mysql-*
-rw-r--r--. 1 alfresco alfresco 802721 Jan 21 2015 lib/mysql-connector-java-5.1.20-bin.jar
[alfresco@alf01 tomcat]$
The MySQL JDBC Connector version 5.1.20 was released in May 2012. Looking into known MySQL Connector/J bugs, it appeared that several issues affected this version, some triggered on long-running MySQL instances and some related to buildCollationMapping. The MySQL DB of this environment had been running for the past 2.5 years without any restart. Therefore, the first thing we suggested was simply to restart the DB. Unfortunately, it didn't help.
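As a side note, a quick way to confirm how long the server has been up is the Uptime status variable, which is reported in seconds. A minimal sketch (the mysql connection details are hypothetical, adapt them to your environment; the conversion uses a hardcoded example value):

```shell
# How long has MySQL been running? (connection details hypothetical)
# mysql -h localhost -u alfresco -p -N -e "SHOW GLOBAL STATUS LIKE 'Uptime';"
# The Uptime status is in seconds; convert it to days for readability:
secs=78840000                # example value: roughly 2.5 years
days=$(( secs / 86400 ))     # 86400 seconds per day
echo "${days} days"          # -> 912 days
```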
Patching the JDBC Connector
Since patching/upgrading Alfresco and the DB was out of the question, the only easy thing we could try next was to patch the JDBC Connector. Looking at the MySQL download website for that component (c.f. here), the latest minor version available is 5.1.49, released in April 2020. Therefore, I downloaded and extracted it, and then stopped the (not-really) running Tomcat process:
[alfresco@alf01 tomcat]$ wget "https://cdn.mysql.com/archives/mysql-connector-java-5.1/mysql-connector-java-5.1.49.zip"
--2025-10-22 09:43:47-- https://cdn.mysql.com/archives/mysql-connector-java-5.1/mysql-connector-java-5.1.49.zip
Resolving cdn.mysql.com... 2.19.75.100, 2001:918:ffcb:1a6::1d68, 2001:918:ffcb:181::1d68
Connecting to cdn.mysql.com|2.19.75.100|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3722067 (3.5M) [application/zip]
Saving to: “mysql-connector-java-5.1.49.zip”
100%[============================================================>] 3,722,067 19.1M/s in 0.2s
2025-10-22 09:43:48 (19.1 MB/s) - “mysql-connector-java-5.1.49.zip” saved [3722067/3722067]
[alfresco@alf01 tomcat]$
[alfresco@alf01 tomcat]$ unzip mysql-connector-java-5.1.49.zip
Archive: mysql-connector-java-5.1.49.zip
creating: mysql-connector-java-5.1.49/
creating: mysql-connector-java-5.1.49/src/
creating: mysql-connector-java-5.1.49/src/com/
...
inflating: mysql-connector-java-5.1.49/src/testsuite/ssl-test-certs/server-cert.pem
inflating: mysql-connector-java-5.1.49/src/testsuite/ssl-test-certs/server-key.pem
[alfresco@alf01 tomcat]$
[alfresco@alf01 tomcat]$ ../alfresco.sh stop tomcat
**
** Alfresco has been stopped successfully
**
[alfresco@alf01 tomcat]$
The final step was to back up the current and new versions of the JAR. Then I replaced the file in the Tomcat lib folder before restarting the process:
[alfresco@alf01 tomcat]$ ls -l lib/mysql-*
-rw-r--r--. 1 alfresco alfresco 802721 Jan 21 2015 lib/mysql-connector-java-5.1.20-bin.jar
[alfresco@alf01 tomcat]$
[alfresco@alf01 tomcat]$ mv lib/mysql-connector-java-5.1.20-bin.jar ./
[alfresco@alf01 tomcat]$ mv mysql-connector-java-5.1.49/mysql-connector-java-5.1.49.jar ./
[alfresco@alf01 tomcat]$ cp -p mysql-connector-java-5.1.49.jar lib/
[alfresco@alf01 tomcat]$ rm -rf mysql-connector-java-5.1.49.zip mysql-connector-java-5.1.49
[alfresco@alf01 tomcat]$
[alfresco@alf01 tomcat]$ ls -l lib/mysql-*
-rw-r--r--. 1 alfresco alfresco 1006904 Apr 20 2020 lib/mysql-connector-java-5.1.49.jar
[alfresco@alf01 tomcat]$
[alfresco@alf01 tomcat]$ ../alfresco.sh start tomcat
**
** Alfresco has been started
**
[alfresco@alf01 tomcat]$
Once done, the Tomcat logs looked much better: there were no more errors related to the database connection. Alfresco was able to finish its startup, and users were able to log in to Alfresco Share again:
oct. 22, 2025 8:27:35 AM org.apache.catalina.core.AprLifecycleListener lifecycleEvent
INFOS: Loaded APR based Apache Tomcat Native library 1.1.33 using APR version 1.4.6.
oct. 22, 2025 8:27:35 AM org.apache.catalina.core.AprLifecycleListener lifecycleEvent
INFOS: APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
oct. 22, 2025 8:27:35 AM org.apache.catalina.core.AprLifecycleListener initializeSSL
INFOS: OpenSSL successfully initialized (OpenSSL 0.9.8y 5 Feb 2013)
oct. 22, 2025 8:27:35 AM org.apache.coyote.AbstractProtocol init
INFOS: Initializing ProtocolHandler ["http-apr-8080"]
oct. 22, 2025 8:27:35 AM org.apache.coyote.AbstractProtocol init
INFOS: Initializing ProtocolHandler ["ajp-apr-8009"]
oct. 22, 2025 8:27:35 AM org.apache.coyote.AbstractProtocol init
INFOS: Initializing ProtocolHandler ["http-bio-8443"]
oct. 22, 2025 8:27:35 AM org.apache.catalina.startup.Catalina load
INFOS: Initialization processed in 989 ms
...
2025-10-22 08:28:16,988 INFO [alfresco.repo.admin] [localhost-startStop-1] Using database URL 'jdbc:mysql://localhost:3306/alfresco?autoReconnect=true' with user 'alfresco'.
2025-10-22 08:28:17,300 INFO [alfresco.repo.admin] [localhost-startStop-1] Connected to database MySQL version 5.6.28-log
2025-10-22 08:28:23,797 INFO [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Starting 'sysAdmin' subsystem, ID: [sysAdmin, default]
2025-10-22 08:28:23,822 INFO [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Startup of 'sysAdmin' subsystem, ID: [sysAdmin, default] complete
2025-10-22 08:28:23,859 INFO [domain.schema.SchemaBootstrap] [localhost-startStop-1] Ignoring script patch (post-Hibernate): patch.db-V4.2-metadata-query-indexes
Wed Oct 22 08:28:33 CEST 2025 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Wed Oct 22 08:28:33 CEST 2025 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
SETTING APPLICATION CONTEXT
2025-10-22 08:28:33,976 INFO [extensions.webscripts.TemplateProcessorRegistry] [localhost-startStop-1] Registered template processor Repository Template Processor for extension ftl
2025-10-22 08:28:33,982 INFO [extensions.webscripts.ScriptProcessorRegistry] [localhost-startStop-1] Registered script processor Repository Script Processor for extension js
2025-10-22 08:28:40,396 INFO [domain.schema.SchemaBootstrap] [localhost-startStop-1] Connecting to database: jdbc:mysql://localhost:3306/alfresco?autoReconnect=true, UserName=alfresco@localhost, MySQL Connector Java
2025-10-22 08:28:40,396 INFO [domain.schema.SchemaBootstrap] [localhost-startStop-1] Schema managed by database dialect org.hibernate.dialect.MySQLInnoDBDialect.
2025-10-22 08:28:41,104 INFO [domain.schema.SchemaBootstrap] [localhost-startStop-1] No changes were made to the schema.
2025-10-22 08:28:42,093 INFO [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Starting 'Authentication' subsystem, ID: [Authentication, managed, alfinst]
2025-10-22 08:28:42,545 INFO [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Startup of 'Authentication' subsystem, ID: [Authentication, managed, alfinst] complete
...
oct. 22, 2025 8:30:09 AM org.apache.coyote.AbstractProtocol start
INFOS: Starting ProtocolHandler ["http-apr-8080"]
oct. 22, 2025 8:30:09 AM org.apache.coyote.AbstractProtocol start
INFOS: Starting ProtocolHandler ["ajp-apr-8009"]
oct. 22, 2025 8:30:09 AM org.apache.coyote.AbstractProtocol start
INFOS: Starting ProtocolHandler ["http-bio-8443"]
oct. 22, 2025 8:30:09 AM org.apache.catalina.startup.Catalina start
INFOS: Server startup in 153447 ms
2025-10-22 08:30:10,121 INFO [web.site.EditionInterceptor] [ajp-apr-8009-exec-7] Successfully retrieved license information from Alfresco.
2025-10-22 08:30:39,780 INFO [web.scripts.ImapServerStatus] [ajp-apr-8009-exec-3] Successfully retrieved IMAP server status from Alfresco: disabled
2025-10-22 08:30:40,107 INFO [web.scripts.SyncModeConfig] [ajp-apr-8009-exec-3] Successfully retrieved Sync Mode configuration from Alfresco: ON_PREMISE
This issue was pretty interesting, as it shows that even if you do not change anything on a system that has been running for more than 10 years (same DB/Alfresco/Tomcat/Java/JDBC Connector versions) and that restarts daily, you can still face bugs at any time… I can't check all the details of those bugs while writing this blog, as the MySQL bugs website is horribly slow and crashing today, but this clearly shows that even when maintaining old platforms that can't be upgraded, it's still worth keeping small components like JDBC connectors up to date.
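A trivial check along those lines is to compare the deployed driver version against the latest release of its branch. A minimal sketch with hardcoded values for illustration (on a live system the deployed version could be read from the jar file name, as shown earlier):

```shell
# Compare the deployed Connector/J version against the latest 5.1.x release.
# On a live system the deployed version could be read from the jar name, e.g.:
#   ls $CATALINA_HOME/lib/mysql-connector-java-*        (path assumed)
deployed="5.1.20"   # hardcoded for illustration
latest="5.1.49"
# sort -V does a version-aware sort; if the deployed version sorts first
# and differs from the latest, the driver is behind:
if [ "$deployed" != "$latest" ] \
   && [ "$(printf '%s\n' "$deployed" "$latest" | sort -V | head -n1)" = "$deployed" ]; then
  echo "outdated"
else
  echo "up to date"
fi
```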
The post Alfresco – When a JDBC driver breaks after years of stability appeared first on the dbi Blog.
Dctm – D2-DAR installation fails because of workflow trackers
In a Documentum upgrade project from 20.x to 23.4 earlier this year, I faced an issue that prevented the installation/upgrade of D2. The problem was located inside the D2-DAR, while it was trying to update the workflow trackers: it failed because of an ORA-01427 error. The exact error displayed was this one:
[emc.installer] dmbasic output : [DM_SERVER_E_EXEC_SQL]error: "The execsql method failed due to a database error: ORA-01427: single-row subquery returns more than one row"
Before that, I had successfully upgraded 7 or 8 environments from 20.x to 23.4 for that customer, and afterwards I did around 20 more. However, this issue happened only once, so I'm not sure what exactly was wrong, and the Application/Business Teams couldn't find what really caused the initial issue (inside the DB) either… In any case, to start my investigation, I quickly compared the generated logs for the D2-DAR installation on a working vs a non-working Repository.
Working Repository logs:
###
## Working repo:
###
[dmadmin@cs-0 DARsInternal]$ cat dar-deploy-D2-DAR.dar-REPO1-20250703-133024.log
[INFO] ******************************************************
[INFO] * Headless Composer
[INFO] * Version: 23.4.0000.0120
[INFO] * Java version: 17.0.13 (64bit)
[INFO] * Java home: $JAVA_HOME
[INFO] * Set storage type: false
[INFO] *
[INFO] * DAR file: $DM_HOME/install/DARsInternal/D2-DAR.dar
[INFO] * Project name: D2-DAR
[INFO] * Built by Composer: 23.4.0000.0027
[INFO] *
...
[INFO] dmbasic output : Finished creating required indexes on D2 types.
[INFO] dmbasic output : Copied workflow_id in to d2c_workflow_tracker object
[INFO] dmbasic output : Copied process_id in to d2c_workflow_tracker object
[INFO] dmbasic output : Copied supervisor_name in to d2c_workflow_tracker object
[INFO] dmbasic output : Adding d2_digital_sign_tracker_user user to the group d2_digital_signature_tracker
[INFO] dmbasic output : Fetched the group d2_digital_signature_tracker
[INFO] dmbasic output : Check for 'Public searches' folder to migrate.
[INFO] dmbasic output : Folder '/Resources/D2/[MIGRATE] Public searches' not found. Do not migrate.
[INFO] dmbasic output : Disconnected from the server.
[INFO] Finished executing post-install script Thu Jul 03 13:32:13 UTC 2025
[INFO] Project 'D2-DAR' was successfully installed.
[dmadmin@cs-0 DARsInternal]$
Non-working Repository logs:
###
## Non working repo:
###
[dmadmin@cs-0 DARsInternal]$ cat dar-deploy-D2-DAR.dar-REPO2-20250704-144219.log
[INFO] ******************************************************
[INFO] * Headless Composer
[INFO] * Version: 23.4.0000.0120
[INFO] * Java version: 17.0.13 (64bit)
[INFO] * Java home: $JAVA_HOME
[INFO] * Set storage type: false
[INFO] *
[INFO] * DAR file: $DM_HOME/install/DARsInternal/D2-DAR.dar
[INFO] * Project name: D2-DAR
[INFO] * Built by Composer: 23.4.0000.0027
[INFO] *
...
[INFO] dmbasic output : Finished creating required indexes on D2 types.
[INFO] dmbasic output : Copied workflow_id in to d2c_workflow_tracker object
[INFO] dmbasic output : [DM_SERVER_E_EXEC_SQL]error: "The execsql method failed due to a database error: ORA-01427: single-row subquery returns more than one row"
[INFO] dmbasic output :
[INFO] dmbasic output : [DM_TYPE_MGR_E_INDEX_ALREADY_EXISTS]error: "The index being created on type d2_recently_used_profile already exists. The existing index is represented by dmi_index object with id 1f012345800005a0."
[INFO] dmbasic output :
[INFO] dmbasic output : [DM_TYPE_MGR_E_INDEX_ALREADY_EXISTS]error: "The index being created on type d2_recently_used_profile already exists. The existing index is represented by dmi_index object with id 1f0123458000059f."
[INFO] dmbasic output :
[INFO] dmbasic output :
[INFO] dmbasic output : Exiting with dmExit(-1)
[ERROR] Procedure execution failed with dmbasic exit value : 255
[INFO] D2-DAR install failed.
[ERROR] Unable to install dar file $DM_HOME/install/DARsInternal/D2-DAR.dar
com.emc.ide.installer.PostInstallException: Error running post-install procedure "PostInstall". Please contact the procedure owner to verify if it is functioning properly.
at internal.com.emc.ide.installer.DarInstaller.postInstall(DarInstaller.java:1574)
at internal.com.emc.ide.installer.DarInstaller.doInstall(DarInstaller.java:669)
...
Caused by: com.emc.ide.external.dfc.procedurerunner.ProcedureRunnerException: Procedure execution failed with dmbasic exit value : 255
at com.emc.ide.external.dfc.procedurerunner.ProcedureRunnerUtils.executeDmBasic(ProcedureRunnerUtils.java:285)
at com.emc.ide.external.dfc.procedurerunner.ProcedureRunner.execute(ProcedureRunner.java:55)
...
[ERROR] Failed to install DAR
Unable to install dar file $DM_HOME/install/DARsInternal/D2-DAR.dar
at com.emc.ant.installer.api.InstallerAntTask.installDar(InstallerAntTask.java:273)
at com.emc.ant.installer.api.InstallerAntTask.execute(InstallerAntTask.java:135)
...
Caused by: com.emc.ide.installer.PostInstallException: Error running post-install procedure "PostInstall". Please contact the procedure owner to verify if it is functioning properly.
at internal.com.emc.ide.installer.DarInstaller.postInstall(DarInstaller.java:1574)
at internal.com.emc.ide.installer.DarInstaller.doInstall(DarInstaller.java:669)
...
Caused by: com.emc.ide.external.dfc.procedurerunner.ProcedureRunnerException: Procedure execution failed with dmbasic exit value : 255
at com.emc.ide.external.dfc.procedurerunner.ProcedureRunnerUtils.executeDmBasic(ProcedureRunnerUtils.java:285)
at com.emc.ide.external.dfc.procedurerunner.ProcedureRunner.execute(ProcedureRunner.java:55)
at internal.com.emc.ide.installer.DarInstaller.postInstall(DarInstaller.java:1570)
... 42 more
As often with DAR installation failures, there isn't a lot of information. It's clear that the installation failed because there is apparently some wrong data inside the DB (a query that should return only one row returns more), but there aren't many more details than that. By comparing the two logs, you can see that the issue is located between these two lines:
[INFO] dmbasic output : Copied workflow_id in to d2c_workflow_tracker object
[INFO] dmbasic output : Copied process_id in to d2c_workflow_tracker object
Therefore, finding the root cause essentially means finding what is done between these two lines. A DAR (Documentum Archive) is essentially a zip, so you can just extract it and look at its content to see what is being done. In this case (D2 23.4), I found the relevant source code inside the file named “D2-DAR/bin/content/55/-1569930955/runnableContent.crtext“. Here is an extract of that file:
Print "Copied workflow_id in to d2c_workflow_tracker object"
sQuery = "update d2c_workflow_tracker_s set process_id = " & _
"(select child_id from dm_relation_s where dm_relation_s.parent_id = " & _
"d2c_workflow_tracker_s.r_object_id and dm_relation_s.relation_name = " & _
"'D2_WF_TRACKER_TEMPLATE') where (process_id IS NULL or process_id = '0000000000000000') " & _
" and exists (select child_id from dm_relation_s where dm_relation_s.parent_id = " & _
"d2c_workflow_tracker_s.r_object_id and dm_relation_s.relation_name = 'D2_WF_TRACKER_TEMPLATE')"
bRet = dmAPIExec("execsql," & sess & "," & sQuery)
if (not bRet) then
Call Err_Handle(sess, "0")
Print "Exiting with dmExit(-1)"
dmExit(-1)
end if
Print "Copied process_id in to d2c_workflow_tracker object"
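To locate this snippet yourself, the approach is simply to extract the DAR and grep its post-install scripts for the last message that reached the log before the error. A minimal sketch (the unzip paths are assumptions about your environment; the grep is then simulated end-to-end on a stand-in file):

```shell
# Real commands would be (paths assumed from this environment):
#   unzip -oq "$DM_HOME/install/DARsInternal/D2-DAR.dar" -d /tmp/d2-dar
#   grep -rl "Copied workflow_id" /tmp/d2-dar/bin/content
# Simulated below on a stand-in file so the approach is visible end-to-end:
mkdir -p /tmp/dar-demo/bin/content/55
printf 'Print "Copied workflow_id in to d2c_workflow_tracker object"\n' \
  > /tmp/dar-demo/bin/content/55/runnableContent.crtext
# -r: recurse, -l: print only the matching file names
grep -rl "Copied workflow_id" /tmp/dar-demo/bin/content
```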
Therefore, the query that is causing the issue is the one referenced above as “sQuery”, i.e. this one, transformed into plain SQL:
update d2c_workflow_tracker_s
set process_id=(select child_id from dm_relation_s
where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id
and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE')
where (process_id IS NULL or process_id='0000000000000000')
and exists (select child_id from dm_relation_s
where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id
and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE');
And indeed, executing this query produces the same result:
SQL> update d2c_workflow_tracker_s
set process_id=(select child_id from dm_relation_s
where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id
and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE')
where (process_id IS NULL or process_id='0000000000000000')
and exists (select child_id from dm_relation_s
where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id
and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE');
update d2c_workflow_tracker_s set process_id=(select child_id from dm_relation_s where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE') where (process_id IS NULL or process_id='0000000000000000') and exists (select child_id from dm_relation_s where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE')
*
ERROR at line 1:
ORA-01427: single-row subquery returns more than one row
Unfortunately, because of the way the query is formed (mapping each d2c_workflow_tracker_s.process_id one-by-one from dm_relation_s.child_id), you cannot just execute the sub-query on its own (whatever is between the parentheses):
SQL> select child_id from dm_relation_s
where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id
and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE';
select child_id from dm_relation_s where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE'
*
ERROR at line 1:
ORA-00904: "D2C_WORKFLOW_TRACKER_S"."R_OBJECT_ID": invalid identifier
Even if you “correct” it (by adding the second table to the FROM clause), it will just display something that isn't directly usable:
SQL> select child_id from dm_relation_s, d2c_workflow_tracker_s
where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id
and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE';
CHILD_ID
----------------
4b01234580044d0f
4b01234580057910
4b01234580002572
...
4b01234580057910
4b01234580057910
4b01234580311d91
1406 rows selected.
What to do then? Well, you need to go further. The above query returns 1406 rows (the number of dm_process being tracked, or to be more accurate, the number of relations between a Workflow Tracker (parent_id) and its dm_process (child_id)). Assuming the error is legitimate, that would mean there are fewer Workflow Trackers than that number. Therefore, after a bit of reflection, I executed the following query to list all Workflow Trackers with the count of relations each of them has. Under normal conditions, each Workflow Tracker should have a single relation (meaning a single dm_process, a 1-to-1 mapping), but that wasn't the case:
SQL> select d2c_workflow_tracker_s.r_object_id, count(dm_relation_s.parent_id)
from d2c_workflow_tracker_s, dm_relation_s
where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id
and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE'
group by d2c_workflow_tracker_s.r_object_id
order by 2;
R_OBJECT_ID COUNT(DM_RELATION_S.PARENT_ID)
---------------- ------------------------------
09012345800df7eb 1
09012345800df7ed 1
...
0901234580313e31 1
090123458004fdde 2
090123458005cdf5 2
090123458005fb44 2
1403 rows selected.
There are only 1403 Workflow Trackers, which means there are 3 “additional” relations (or 3 dm_process assigned to an already-used Workflow Tracker). To get more details on the Workflows and their Workflow Trackers, these kinds of queries can be executed:
SQL> select parent_id, child_id from dm_relation_s
where parent_id IN ('090123458004fdde','090123458005cdf5','090123458005fb44')
and relation_name='D2_WF_TRACKER_TEMPLATE'
order by 1, 2;
PARENT_ID CHILD_ID
---------------- ----------------
090123458004fdde 4b01234580002575
090123458004fdde 4b01234580057910
090123458005cdf5 4b01234580057910
090123458005cdf5 4b01234580057910
090123458005fb44 4b01234580057910
090123458005fb44 4b01234580057910
SQL> select r_object_id, object_name, r_creator_name from dm_sysobject_sp
where r_object_id IN ('090123458004fdde','090123458005cdf5','090123458005fb44');
R_OBJECT_ID OBJECT_NAME R_CREATOR_NAME
---------------- ---------------------------------------------------------------------- -----------------
090123458004fdde Start Import Approval October 29, 2021 at 10:34:45 AM UTC OQSOP01
090123458005cdf5 Start Ready For Approval November 11, 2021 at 3:55:04 PM UTC OQAuthor01
090123458005fb44 Start In Review November 16, 2021 at 12:01:32 PM UTC OQSOP01
SQL> select r_object_id, object_name from dm_sysobject_sp
where r_object_id IN ('4b01234580002575','4b01234580057910');
R_OBJECT_ID OBJECT_NAME
---------------- ----------------------
4b01234580002575 WF_Import
4b01234580057910 WF_Review_Approve
SQL> select * from dm_relation_sp
where parent_id='090123458005cdf5'
and relation_name='D2_WF_TRACKER_TEMPLATE';
R_OBJECT_ID RELATION_NAME PARENT_ID CHILD_ID CHILD_LABEL PERMANENT_LINK ORDER_NO EFFECTIVE EXPIRATIO DESCRIPTION I_PARTITION I_IS_REPLICA I_VSTAMP
---------------- -------------------------------- ---------------- ---------------- -------------------------------- -------------- ---------- --------- --------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------- ------------ ----------
370123458000d92a D2_WF_TRACKER_TEMPLATE 090123458005cdf5 4b01234580057910 0 0 01-JAN-01 01-JAN-01 0 0 0
370123458000cd4f D2_WF_TRACKER_TEMPLATE 090123458005cdf5 4b01234580057910 0 0 01-JAN-01 01-JAN-01 0 0 0
Clearly, there is a Data/Business problem. Two of the three Workflow Trackers have an exact duplicate relation for the same dm_process while the third one tracks two different dm_process, which is even more surprising.
To be able to proceed, I noted all the IDs, informed the Application/Business teams, and then did manually what the SQL command from the D2-DAR was supposed to do: for each affected tracker, I took one of the two values and assigned it to the process_id of the Workflow Tracker:
SQL> update d2c_workflow_tracker_s
set process_id='4b01234580002575'
where (process_id IS NULL or process_id='0000000000000000')
and r_object_id='090123458004fdde';
1 row updated.
SQL> update d2c_workflow_tracker_s
set process_id='4b01234580057910'
where (process_id IS NULL or process_id='0000000000000000')
and r_object_id='090123458005fb44';
1 row updated.
SQL> update d2c_workflow_tracker_s
set process_id='4b01234580057910'
where (process_id IS NULL or process_id='0000000000000000')
and r_object_id='090123458005cdf5';
1 row updated.
After that, I re-executed the query that was previously failing, and this time it completed properly for all the remaining Workflow Trackers:
SQL> update d2c_workflow_tracker_s
set process_id=(select child_id from dm_relation_s
where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id
and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE')
where (process_id IS NULL or process_id='0000000000000000')
and exists (select child_id from dm_relation_s
where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id
and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE');
1400 rows updated.
I was then able to restart the upgrade pipeline from scratch, and it completed properly. As previously stated, the Application/Business teams still haven't found what caused these 3 rows, but it's only a DEV environment, so I can only assume it was some human error at some point, like a wrong DQL that duplicated some values or something similar. In any case, what is interesting for this blog is the investigation process: getting to the bottom of things by searching inside the D2 DAR and analyzing the DB content.
The post Dctm – D2-DAR installation fails because of workflow trackers appeared first on the dbi Blog.


