Yann Neuhaus


ORA-44001 when setting up GoldenGate privileges on a CDB

Thu, 2025-11-13 02:00

I was recently setting up GoldenGate for a client when I hit an ORA-44001 error. I definitely wasn’t the first one to come across this while playing with grants on GoldenGate users, but nowhere could I find the exact reason for the issue. Not a single question or comment on the matter offered a solution.

The problem occurs when running the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE procedure described in the documentation. An example given by the documentation is the following:

EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(GRANTEE => 'c##ggadmin', CONTAINER => 'ALL');

And the main complaint mentioned regarding this command was the following ORA-44001 error:

SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'c##ggadmin', container=>'ALL');
*
ERROR at line 1:
ORA-44001: invalid schema
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3652
ORA-06512: at "SYS.DBMS_ASSERT", line 410
ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 50
ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 3082
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3632
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3812
ORA-06512: at "SYS.DBMS_GOLDENGATE_AUTH", line 63
ORA-06512: at line 2

The solution is in fact quite simple, but I decided to investigate a bit further and play with the multitenant architecture. In this blog, I will use an Oracle 19c CDB with a single pluggable database named PDB1.

Do you have read-only PDBs in your CDB?

For me, this was the only thing that mattered when encountering this error. On a CDB with tens of PDBs, some may be open in read-only mode, whether to keep templates aside or to temporarily restrict access to a specific PDB. Let’s try to replicate the error.

First example: PDB in read-write, grant operation succeeds

If you first try to grant the admin privileges with a PDB in read-write, it succeeds:

SQL> alter pluggable database pdb1 open read write;

Pluggable database altered.

SQL> create user c##oggadmin identified by ogg;

User created.

SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'c##oggadmin', container=>'ALL');

PL/SQL procedure successfully completed.
Second example: PDB in read-only before the user creation, grant operation fails with ORA-44001

If you first put the PDB in read-only mode and then create the common user, the user cannot be created in that PDB, so it doesn’t exist there, and you get ORA-44001 when granting the privileges.

SQL> drop user c##oggadmin;

User dropped.

SQL> alter pluggable database pdb1 close immediate;

Pluggable database altered.

SQL> alter pluggable database pdb1 open read only;

Pluggable database altered.

SQL> create user c##oggadmin identified by ogg;

User created.

SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'c##oggadmin', container=>'ALL');
*
ERROR at line 1:
ORA-44001: invalid schema
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3652
ORA-06512: at "SYS.DBMS_ASSERT", line 410
ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 50
ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 3082
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3632
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3812
ORA-06512: at "SYS.DBMS_GOLDENGATE_AUTH", line 63
ORA-06512: at line 2
Third example: PDB in read-only after the user creation, grant operation fails with ORA-16000

Where this gets tricky is the order in which you run the commands. If you create the user before putting the PDB in read-only mode, you get a different error, because the user actually exists in the PDB:

SQL> drop user c##oggadmin;

User dropped.

SQL> alter pluggable database pdb1 close immediate;

Pluggable database altered.

SQL> alter pluggable database pdb1 open read write;

Pluggable database altered.

SQL> create user c##oggadmin identified by ogg;

User created.

SQL> alter pluggable database pdb1 close immediate;

Pluggable database altered.

SQL> alter pluggable database pdb1 open read only;

Pluggable database altered.

SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'c##oggadmin', container=>'ALL');
*
ERROR at line 1:
ORA-16000: database or pluggable database open for read-only access
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3652
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 93
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 84
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 123
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3635
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3812
ORA-06512: at "SYS.DBMS_GOLDENGATE_AUTH", line 63
ORA-06512: at line 2

As often with Oracle, the error messages can be misleading. The third example clearly points to the issue, while the second one is tricky to debug (even though it is completely valid).
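
When in doubt, a quick check of the open mode of every PDB points you in the right direction (plain SQL against V$PDBS, nothing specific to this environment):

-- Any PDB open in READ ONLY mode is a candidate for breaking
-- GRANT_ADMIN_PRIVILEGE with CONTAINER => 'ALL'
SELECT name, open_mode FROM v$pdbs ORDER BY name;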

Should I create a GoldenGate user at the CDB level?

Depending on your replication configuration, you might need to create a common user instead of one user in each PDB. For instance, this is strictly required when setting up a downstream extract. However, in general, it might be a bad idea to create a common C##GGADMIN user and grant it privileges with CONTAINER => 'ALL', because you might not want such a privileged user to exist in all your PDBs.
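
If you go for one user per PDB instead, a minimal sketch (user and PDB names are placeholders) is to create a local user inside the PDB, open in read-write, and grant the privileges with CONTAINER => 'CURRENT':

-- Run inside the target PDB, not in CDB$ROOT; names are placeholders
ALTER SESSION SET CONTAINER = PDB1;
CREATE USER oggadmin IDENTIFIED BY ogg;
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'oggadmin', container => 'CURRENT');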


Alfresco – Solr search result inconsistencies

Wed, 2025-11-12 04:38

We recently encountered an issue at a customer’s site: their Alfresco environment was behaving strangely.
Sometimes searches returned the expected results, and sometimes they did not.

The context

The environment is composed of two Alfresco 7 nodes in a cluster and two Solr 6.6 nodes behind a load balancer (active-active).

Sometimes the customer is unable to find a document they created recently.

Investigation steps

Since we have load balancing in place, the first step is to confirm that everything is okay on the two nodes.

  • I checked that Alfresco is running as expected. Nothing out of the ordinary: the processes are there, there are no errors in the log files, and everything is green in the admin console.
  • Then, I checked the alfresco-global.properties on both nodes to ensure the configuration is the same (you never know). I also checked how Alfresco connects to Solr and confirmed that the load-balanced URL is being used.
  • At this point, it is almost certain that the problem is with Solr. We will start by checking the administration console. Because we have load balancing, we must connect to each node individually and cannot use the URL in alfresco-global.properties.
  • At first glance, everything seems fine, but a closer inspection of the Core Admin panel reveals a difference of several thousand “NumDocs” between the two nodes. These values can legitimately differ a little, since they reflect internal Solr index state, but a discrepancy this large is suspicious.
  • How can this assumption be verified? Go to any core and run a query that lists everything (cm:name:*). On the first node, the query returned an error. On the second node, it returned results as expected.
  • Moving to the node where the query fails, the logs show errors like:
2025-11-10 15:33:32.466 ERROR (searcherExecutor-137-thread-1-processing-x:alfresco-3) [   x:alfresco-3] o.a.s.c.SolrCore null:org.alfresco.service.cmr.dictionary.DictionaryException10100009 d_dictionary.model.err.parse.failure
        at org.alfresco.repo.dictionary.M2Model.createModel(M2Model.java:113)
        at org.alfresco.repo.dictionary.M2Model.createModel(M2Model.java:99)
        at org.alfresco.solr.tracker.ModelTracker.loadPersistedModels(ModelTracker.java:181)
        at org.alfresco.solr.tracker.ModelTracker.<init>(ModelTracker.java:142)
        at org.alfresco.solr.lifecycle.SolrCoreLoadListener.createModelTracker(SolrCoreLoadListener.java:341)
        at org.alfresco.solr.lifecycle.SolrCoreLoadListener.newSearcher(SolrCoreLoadListener.java:139)
        at org.apache.solr.core.SolrCore.lambda$getSearcher$15(SolrCore.java:2249)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.jibx.runtime.JiBXException: Error accessing document
        at org.jibx.runtime.impl.XMLPullReaderFactory$XMLPullReader.next(XMLPullReaderFactory.java:293)
        at org.jibx.runtime.impl.UnmarshallingContext.toStart(UnmarshallingContext.java:446)
        at org.jibx.runtime.impl.UnmarshallingContext.unmarshalElement(UnmarshallingContext.java:2750)
        at org.jibx.runtime.impl.UnmarshallingContext.unmarshalDocument(UnmarshallingContext.java:2900)
        at org.alfresco.repo.dictionary.M2Model.createModel(M2Model.java:108)
        ... 11 more
Caused by: java.io.EOFException: input contained no data
        at org.xmlpull.mxp1.MXParser.fillBuf(MXParser.java:3003)
        at org.xmlpull.mxp1.MXParser.more(MXParser.java:3046)
        at org.xmlpull.mxp1.MXParser.parseProlog(MXParser.java:1410)
        at org.xmlpull.mxp1.MXParser.nextImpl(MXParser.java:1395)
        at org.xmlpull.mxp1.MXParser.next(MXParser.java:1093)
        at org.jibx.runtime.impl.XMLPullReaderFactory$XMLPullReader.next(XMLPullReaderFactory.java:291)
        ... 15 more
  • It looks like the problem is related to a model definition. We need to check whether the models are still there in ../solr_data/models. The models are still in place, but one of them is 0 KB.
  • So we need to delete the empty file and restart Solr to force the model to be reimported (see the sketch below).
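
As a rough illustration (the paths depend on your Solr setup, so adapt them), the cleanup boils down to:

# Locate any zero-byte model file under the Solr data directory (path is environment-specific)
find /path/to/solr_data/models -type f -size 0
# Remove the empty model file, then restart Solr so the model is reimported from Alfresco
rm /path/to/solr_data/models/the-empty-model.xml
systemctl restart solr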

After taking these actions, the model file was reimported and the errors disappeared from the logs. In the admin console, we can see NumDocs increasing again, and re-running the query returns results.


PostgreSQL 19: Logical replication of sequences

Wed, 2025-11-12 02:50

Logical replication in PostgreSQL has gained a lot of features and performance improvements over the last releases. It was introduced in PostgreSQL 10 back in 2017, building on logical decoding, which arrived in PostgreSQL 9.4 and is the basis for logical replication. Today logical replication is really mature and, from my point of view, only two major features are missing: DDL replication and the replication of sequences. The latter becomes possible with the upcoming PostgreSQL 19 next year, and this is what this post is about.

Before we can see how this works we need a logical replication setup. An easy method to set this up is to create a physical replica and then transform that into a logical replica using pg_createsubscriber:

postgres@:/home/postgres/ [pgdev] psql -c "create table t ( a int primary key generated always as identity, b text)"
CREATE TABLE
postgres@:/home/postgres/ [pgdev] psql -c "insert into t (b) values ('aaaa')"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] psql -c "insert into t (b) values ('bbbb')"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] pg_basebackup --pgdata=/var/tmp/dummy --write-recovery-conf --checkpoint=fast
postgres@:/home/postgres/ [pgdev] echo "port=8888" >> /var/tmp/dummy/postgresql.auto.conf 
postgres@:/home/postgres/ [pgdev] pg_createsubscriber --all --pgdata=/var/tmp/dummy --subscriber-port=8888 --publisher-server="host=localhost port=5432"
postgres@:/home/postgres/ [pgdev] pg_ctl --pgdata=/var/tmp/dummy start
2025-11-11 13:21:20.818 CET - 1 - 9669 -  - @ - 0LOG:  redirecting log output to logging collector process
2025-11-11 13:21:20.818 CET - 2 - 9669 -  - @ - 0HINT:  Future log output will appear in directory "pg_log".
2025-11-11 13:21:21.250 CET - 1 - 9684 -  - @ - 0LOG:  redirecting log output to logging collector process
2025-11-11 13:21:21.250 CET - 2 - 9684 -  - @ - 0HINT:  Future log output will appear in directory "pg_log".

Once this is done we have a logical replica and the data is synchronized:

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select * from t"
 a |  b   
---+------
 1 | aaaa
 2 | bbbb
(2 rows)

A quick check that replication is working:

postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "insert into t (b) values('cccc');"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select * from t"
 a |  b   
---+------
 1 | aaaa
 2 | bbbb
 3 | cccc
(3 rows)

The “generated always as identity” clause we used above to create the table automatically created a sequence for us:

postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "\x" -c "select * from pg_sequences;"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname    | public
sequencename  | t_a_seq
sequenceowner | postgres
data_type     | integer
start_value   | 1
min_value     | 1
max_value     | 2147483647
increment_by  | 1
cycle         | f
cache_size    | 1
last_value    | 3

Checking the same sequence on the replica clearly shows that the sequence is not synchronized (last_value is still 2):

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "\x" -c "select * from pg_sequences;"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname    | public
sequencename  | t_a_seq
sequenceowner | postgres
data_type     | integer
start_value   | 1
min_value     | 1
max_value     | 2147483647
increment_by  | 1
cycle         | f
cache_size    | 1
last_value    | 2

The reason is that the existing publication does not replicate sequences (puballsequences is false):

postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "\x" -c "select * from pg_publication;"
Expanded display is on.
-[ RECORD 1 ]---+-------------------------------
oid             | 16397
pubname         | pg_createsubscriber_5_f58e9acd
pubowner        | 10
puballtables    | t
puballsequences | f
pubinsert       | t
pubupdate       | t
pubdelete       | t
pubtruncate     | t
pubviaroot      | f
pubgencols      | n

As there currently is no way to enable sequence synchronization for an existing publication, we can either drop and re-create it, or add an additional publication and subscription just for the sequences:

postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "create publication pubseq for all sequences;"
CREATE PUBLICATION
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "\x" -c "select * from pg_publication;"
Expanded display is on.
-[ RECORD 1 ]---+-------------------------------
oid             | 16397
pubname         | pg_createsubscriber_5_f58e9acd
pubowner        | 10
puballtables    | t
puballsequences | f
pubinsert       | t
pubupdate       | t
pubdelete       | t
pubtruncate     | t
pubviaroot      | f
pubgencols      | n
-[ RECORD 2 ]---+-------------------------------
oid             | 16398
pubname         | pubseq
pubowner        | 10
puballtables    | f
puballsequences | t
pubinsert       | t
pubupdate       | t
pubdelete       | t
pubtruncate     | t
pubviaroot      | f
pubgencols      | n

The new publication has sequence replication enabled, and the subscription to consume it can be created like this:

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "create subscription subseq connection 'host=localhost port=5432' publication pubseq"
CREATE SUBSCRIPTION

Now the sequence is visible in pg_subscription_rel on the subscriber:

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select * from pg_subscription_rel;"
 srsubid | srrelid | srsubstate |  srsublsn  
---------+---------+------------+------------
   24589 |   16385 | r          | 
   24590 |   16384 | r          | 0/04004780   -- sequence
(2 rows)
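
pg_subscription_rel only lists OIDs, so a small join against pg_class (standard catalogs, nothing new in PostgreSQL 19) makes it obvious which entry is the table and which one is the sequence:

-- On the subscriber: resolve the OIDs and show the relation kind ('r' = table, 'S' = sequence)
select r.srsubid, r.srrelid::regclass as object, c.relkind, r.srsubstate, r.srsublsn
  from pg_subscription_rel r
  join pg_class c on c.oid = r.srrelid;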

State “r” means ready, so the sequence should have been synchronized, and indeed it has:

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "\x" -c "select * from pg_sequences"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname    | public
sequencename  | t_a_seq
sequenceowner | postgres
data_type     | integer
start_value   | 1
min_value     | 1
max_value     | 2147483647
increment_by  | 1
cycle         | f
cache_size    | 1
last_value    | 3

postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "\x" -c "select * from pg_sequences"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname    | public
sequencename  | t_a_seq
sequenceowner | postgres
data_type     | integer
start_value   | 1
min_value     | 1
max_value     | 2147483647
increment_by  | 1
cycle         | f
cache_size    | 1
last_value    | 3

Adding new rows to the table also increases the last_value of the sequence on the publisher. One might expect this to be propagated to the subscriber as well:

postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "insert into t (b) values ('eeee')"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "insert into t (b) values ('ffff')"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "\x" -c "select * from pg_sequences"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname    | public
sequencename  | t_a_seq
sequenceowner | postgres
data_type     | integer
start_value   | 1
min_value     | 1
max_value     | 2147483647
increment_by  | 1
cycle         | f
cache_size    | 1
last_value    | 6

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "\x" -c "select * from pg_sequences"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname    | public
sequencename  | t_a_seq
sequenceowner | postgres
data_type     | integer
start_value   | 1
min_value     | 1
max_value     | 2147483647
increment_by  | 1
cycle         | f
cache_size    | 1
last_value    | 4

… but this does not happen automatically. To get the sequences synchronized, you need to refresh the subscription:

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "\x" -c "alter subscription subseq refresh sequences"
Expanded display is on.
ALTER SUBSCRIPTION
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "\x" -c "select * from pg_sequences"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname    | public
sequencename  | t_a_seq
sequenceowner | postgres
data_type     | integer
start_value   | 1
min_value     | 1
max_value     | 2147483647
increment_by  | 1
cycle         | f
cache_size    | 1
last_value    | 6

Great, this reduces the work needed to fix sequences quite a bit and is really helpful. As usual, thanks to all involved.


PostgreSQL 19: The “WAIT FOR” command

Tue, 2025-11-11 03:32

When you use replication without synchronous replication, there is always a window during which data written on the primary is not yet visible on the replica. This is known as “replication lag” and can be monitored using the pg_stat_replication catalog view. A recent commit to PostgreSQL 19 implements a way to wait for data to become visible on the replica without switching to synchronous replication, and this is what the “WAIT FOR” command is for.

Before we can see how that works we need a replica, because when you try to execute this command on a primary you’ll get this:

postgres=# select version();
                                        version                                        
---------------------------------------------------------------------------------------
 PostgreSQL 19devel on x86_64-linux, compiled by gcc-15.1.1, 64-bit
(1 row)

postgres=# WAIT FOR LSN '0/306EE20';
ERROR:  recovery is not in progress
HINT:  Waiting for the replay LSN can only be executed during recovery.
postgres=# 

So, let’s create a replica and start it up:

postgres@:/home/postgres/ [pgdev] mkdir /var/tmp/dummy
postgres@:/home/postgres/ [pgdev] pg_basebackup --pgdata=/var/tmp/dummy --write-recovery-conf --checkpoint=fast
postgres@:/home/postgres/ [pgdev] echo "port=8888" >> /var/tmp/dummy/postgresql.auto.conf 
postgres@:/home/postgres/ [pgdev] chmod 700 /var/tmp/dummy
postgres@:/home/postgres/ [pgdev] pg_ctl --pgdata=/var/tmp/dummy start
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select pg_is_in_recovery()"
 pg_is_in_recovery 
-------------------
 t
(1 row)

As nothing is happening on the primary right now, data on the primary and the replica is exactly the same:

postgres=# select usename,sent_lsn,write_lsn,flush_lsn,replay_lsn from pg_stat_replication;
 usename  |  sent_lsn  | write_lsn  | flush_lsn  | replay_lsn 
----------+------------+------------+------------+------------
 postgres | 0/03000060 | 0/03000060 | 0/03000060 | 0/03000060

To see how “WAIT FOR” behaves, we need to cheat a little and pause WAL replay on the replica (we could also cut the network between the primary and the replica):

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select * from pg_wal_replay_pause();" 
 pg_wal_replay_pause 
---------------------
 
(1 row)
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select * from pg_is_wal_replay_paused();" 
 pg_is_wal_replay_paused 
-------------------------
 t
(1 row)

On the primary, create a table and get the current LSN:

postgres@:/home/postgres/ [pgdev] psql -c "create table t(a int)"
CREATE TABLE
postgres@:/home/postgres/ [pgdev] psql -c "insert into t values(1)"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] psql -c "select pg_current_wal_insert_lsn();"
 pg_current_wal_insert_lsn 
---------------------------
 0/03018CA8
(1 row)

As WAL replay on the replica is paused, the “WAIT FOR” command will now block:

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "WAIT FOR LSN '0/03018CA8'" 

This is the intended behavior, as we want to make sure that we can see all the data up to this LSN. Once we resume WAL replay on the replica, the “WAIT FOR” command returns success because all the data has reached the replica:

postgres@:/home/postgres/ [DEV] psql -p 8888 -c "select * from pg_wal_replay_resume();"
 pg_wal_replay_resume 
----------------------
 
(1 row)

… the other session will unblock:

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "WAIT FOR LSN '0/03018CA8'" 
 status  
---------
 success
(1 row)

So, starting with PostgreSQL 19 next year, applications have a way to make sure that a replica has replayed all data up to a specific LSN, by blocking until the data is there.
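
A typical read-your-writes pattern could then look like the following sketch (ports and table as in the examples above, error handling omitted):

# On the primary: perform the write, then capture the current WAL insert LSN
psql -p 5432 -c "insert into t values (2)"
LSN=$(psql -p 5432 -At -c "select pg_current_wal_insert_lsn()")
# On the replica: block until that LSN has been replayed, then read consistently
psql -p 8888 -c "WAIT FOR LSN '${LSN}'"
psql -p 8888 -c "select * from t"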


Setting up TLS encryption and authentication in MongoDB

Mon, 2025-11-10 02:00

When securing a MongoDB deployment, protecting sensitive data is paramount. MongoDB supports encryption throughout the lifecycle of the data, with three primary types of data encryption:

  • Encryption in transit
  • Encryption at rest
  • Encryption in use


Among these, encryption in transit is fundamental: it protects data as it moves between your application and the database. In MongoDB, this is achieved through TLS (Transport Layer Security), which ensures that communication remains private and secure. You have two options when it comes to using TLS for your database:

  • Using TLS for encryption only.
  • Using TLS both for encryption and authentication to the database.

Setting up TLS for encryption
Create a Certificate Authority

We’ll first create a Certificate Authority. These certificates will be self-signed, which is fine for testing, but you shouldn’t use self-signed certificates in a production environment! On Linux, use openssl to generate the certificates.

openssl req -newkey rsa:4096 -nodes -x509 -days 365 -keyout ca.key -out ca.pem -subj "/C=CH/ST=ZH/L=Zurich/O=dbi/OU=MongoDBA/CN=vm.domain.com"

Here is a description of some important parameters of the command:

  • -newkey rsa:4096: Generates a new private key and a certificate request using RSA with a 4096-bit key size.
  • -nodes: Skips password encryption of the private key. Without it, OpenSSL would prompt you to set a passphrase.
  • -x509: Generates a self-signed certificate. x509 is supported by MongoDB.
  • -days 365: Validity of the certificate in days.
  • -keyout ca.key: Filename for the private key.
  • -out ca.pem: Filename for the certificate.
  • -subj "...": Provides the subject’s Distinguished Name (DN). If you don’t specify it, OpenSSL will prompt for each field.
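
To double-check what was generated, you can inspect the CA certificate with standard openssl options (nothing MongoDB-specific here):

# Show the subject and validity period of the freshly created CA certificate
openssl x509 -in ca.pem -noout -subject -dates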
Create a Server Certificate

Then, we’ll create the server certificate for the MongoDB instance. In the openssl-server.cnf file below, change the req_distinguished_name fields to match what you used while creating the Certificate Authority, and replace vm.domain.com with the name of your machine.

If you only have an IP and no DNS entry for your VM, use IP.1 instead of DNS.1 in the alt_names section.

cat > openssl-server.cnf <<EOF
[ req ]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no

[ req_distinguished_name ]
C = CH
ST = ZH
L = Zurich
O = dbi
OU = MongoDBA
CN = myVM

[ v3_req ]
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = myVM
EOF

Then, generate the certificate with these commands:

openssl req -newkey rsa:4096 -nodes -keyout mongodb-server.key -out mongodb-server.csr -config openssl-server.cnf

openssl x509 -req -in mongodb-server.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out mongodb-server.crt -days 365 -extensions v3_req -extfile openssl-server.cnf

cat mongodb-server.key mongodb-server.crt > mongodb-server.pem
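
Optionally, verify that the server certificate really chains back to the CA before handing it to MongoDB:

# Should print "mongodb-server.crt: OK" if the certificate was signed by our CA
openssl verify -CAfile ca.pem mongodb-server.crt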
Create a Client Certificate

Finally, we’ll create a client certificate. The process is the same, with a few tweaks:

  • OU should be different from the one in the server certificate. This is not mandatory for encrypted communication, but it will be required for authentication if you decide to enable it.
  • CN should also be different.
  • extendedKeyUsage should be set to clientAuth instead of serverAuth.
cat > openssl-client.cnf <<EOF
[ req ]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no

[ req_distinguished_name ]
C = CH
ST = ZH
L = Zurich
O = dbi
OU = MongoDBAClient
CN = userApp

[ v3_req ]
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
EOF

The certificate creation process is the same.

openssl req -newkey rsa:4096 -nodes -keyout mongodb-client.key -out mongodb-client.csr -config openssl-client.cnf

openssl x509 -req -in mongodb-client.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out mongodb-client.crt -days 365 -extensions v3_req -extfile openssl-client.cnf

cat mongodb-client.key mongodb-client.crt > mongodb-client.pem
MongoDB Configuration Change

Make sure to set permissions correctly for your certificates.

chmod 600 ca.pem mongodb-server.pem mongodb-client.pem
chown mongod: ca.pem mongodb-server.pem mongodb-client.pem

Now, you can change your MongoDB configuration file to include the certificates. Simply add the net.tls part to your mongod.conf file.

net:
  bindIp: yourIP
  port: 27017
  tls:
    mode: requireTLS
    certificateKeyFile: /path/to/mongodb-server.pem
    CAFile: /path/to/ca.pem

You can now restart your MongoDB instance with systemctl restart mongod (or whatever you’re using), and then test the connection from your client. Of course, the port specified in the net.port field of your configuration file shouldn’t be blocked by your firewall.

> mongosh --host myVM --port 27017 --tls --tlsCertificateKeyFile mongodb-client.pem --tlsCAFile ca.pem
Current Mongosh Log ID:	682c9641bbe4593252ee7c8c
Connecting to:		mongodb://vmIP:27017/?directConnection=true&tls=true&tlsCertificateKeyFile=Fclient.pem&tlsCAFile=ca.pem&appName=mongosh+2.5.1
Using MongoDB:		8.0.9
Using Mongosh:		2.5.1

For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/

test>

You’re now connected to your MongoDB instance through TLS! And if you don’t use the certificate, the requireTLS mode prevents the connection from being established and generates error messages like this in your logs:

{"t":{"$date":"2025-05-21T04:43:55.277+00:00"},"s":"I",  "c":"EXECUTOR", "id":22988,   "ctx":"conn52","msg":"Error receiving request from client. Ending connection from remote","attr":{"error":{"code":141,"codeName":"SSLHandshakeFailed","errmsg":"The server is configured to only allow SSL connections"},"remote":"IP:50100","connectionId":52}}

If you want to learn more about MongoDB logs, I wrote a blog on this topic: MongoDB Log Analysis: A Comprehensive Guide.

Setting up TLS authentication

Now that you’re connected, we will set up authentication so that you connect to MongoDB as a specific user. Using the already established connection, create a user in the $external database. Each client certificate that you create can be mapped to one MongoDB user. First, retrieve the username that you will use:

> openssl x509 -in mongodb-client.pem -inform PEM -subject -nameopt RFC2253 | grep subject
subject=CN=userApp,OU=MongoDBA,O=dbi,L=Zurich,ST=ZH,C=CH

And then create the user in the $external database, using the existing MongoDB connection:

test> db.getSiblingDB("$external").runCommand({
  createUser: "CN=userApp,OU=MongoDBA,O=dbi,L=Zurich,ST=ZH,C=CH",
  roles: [
    { role: "userAdminAnyDatabase", db: "admin" }
  ]
});

To check that everything works as intended, you can try to list the collections in the admin database. For the moment, there is no error because authorization is not yet enforced, so you have all the rights.

test> use admin
switched to db admin
admin> show collections;
system.users
system.version

You can now edit the MongoDB configuration file, adding net.tls.allowConnectionsWithoutCertificates set to false and the security.authorization flag set to enabled. The mongod.conf file should look like this:

net:
  bindIp: X.X.X.X
  port: XXXXX
  tls:
    mode: requireTLS
    certificateKeyFile: /path/to/mongodb-server.pem
    CAFile: /path/to/ca.pem
    allowConnectionsWithoutCertificates: false

security:
  authorization: enabled

After restarting with systemctl restart mongod, you can now connect again. If you use the same command as before, you will connect without being authenticated, and get the error below whenever you try to do anything:

MongoServerError[Unauthorized]: Command listCollections requires authentication

So you should now connect with this command:

mongosh --host vmIP --port 27017 --tls --tlsCertificateKeyFile mongodb-client.pem --tlsCAFile ca.pem --authenticationDatabase '$external' --authenticationMechanism MONGODB-X509

If you want to show the admin collections, you will now get an error, because your user only has the userAdminAnyDatabase role granted (this role was chosen during the user creation, see above).

admin> show collections
MongoServerError[Unauthorized]: not authorized on admin to execute command { listCollections: 1, filter: {}, cursor: {}, nameOnly: true, authorizedCollections: false, lsid: { id: UUID("9c48b4c4-7702-49ce-a97c-52763b2ad6b3") }, $db: "admin" }

But that’s fine: you can grant yourself more roles (readWriteAnyDatabase, for instance) and create new users if you want.
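
For example, granting an additional role to the X.509 user could look like the sketch below (the role choice is just an illustration):

admin> db.getSiblingDB("$external").runCommand({
  grantRolesToUser: "CN=userApp,OU=MongoDBA,O=dbi,L=Zurich,ST=ZH,C=CH",
  roles: [ { role: "readWriteAnyDatabase", db: "admin" } ]
});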

The communication between the client and the server is now fully secured. Congratulations!

MongoServerError[BadValue]

Side note: if you ever encounter this error:

MongoServerError[BadValue]: Cannot create an x.509 user with a subjectname that would be recognized as an internal cluster member

… make sure to follow the RFC 2253 standard. For instance, you could get this error if one of the fields is too long. Also, as a reminder, the client certificate should have a different Distinguished Name (DN) than the server certificate (see the documentation for more information).


Securing an Existing Unsecure GoldenGate Installation

Fri, 2025-11-07 01:00

You might have an existing unsecure GoldenGate installation that you would like to secure, whether for security reasons or because you would like to separate the installation from its securing process. After searching everywhere in the Oracle documentation for how to proceed, I decided to try, investigate, and eventually even ask Oracle directly. Here is the answer.

For a TL;DR version of the answer, please go to the end of the blog, but in the meantime, here is my reasoning.

Setup differences between a secure and unsecure GoldenGate installation
Installation differences

From an installation perspective, the difference between a secure and unsecure installation is narrow. I talked earlier about graphic and silent GoldenGate installations, and for the silent installation, the following response file parameters are the only ones involved in this security aspect:

# SECTION C - SECURITY MANAGER
SECURITY_ENABLED=false

# SECTION H - SECURITY
TLS_1_2_ENABLED=false
TLS_1_3_ENABLED=false
FIPS_ENABLED=false
SERVER_CERTIFICATE=
SERVER_CERTIFICATE_KEY_FILE=
SERVER_CA_CERTIFICATES_FILE=
CLIENT_CERTIFICATE=
CLIENT_CERTIFICATE_KEY_FILE=
CLIENT_CA_CERTIFICATES_FILE

*_ENABLED parameters are just flags that should be set to true to secure the installation (at least for SECURITY_ENABLED and one TLS parameter), and then you need to provide the certificate files (client and server, three for each).

To summarize, there is not much you have to do to configure a secure GoldenGate setup. So it shouldn’t be that difficult to enable these security features after installation: one flag, and a few certificates.
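
As a rough sketch (the certificate paths are placeholders), the same response file sections for a secure installation would look like this:

# SECTION C - SECURITY MANAGER
SECURITY_ENABLED=true

# SECTION H - SECURITY
TLS_1_2_ENABLED=false
TLS_1_3_ENABLED=true
FIPS_ENABLED=false
SERVER_CERTIFICATE=/path/to/ssl/server.pem
SERVER_CERTIFICATE_KEY_FILE=/path/to/ssl/server.key
SERVER_CA_CERTIFICATES_FILE=/path/to/ssl/ca.pem
CLIENT_CERTIFICATE=/path/to/ssl/client.pem
CLIENT_CERTIFICATE_KEY_FILE=/path/to/ssl/client.key
CLIENT_CA_CERTIFICATES_FILE=/path/to/ssl/ca.pem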

Configuration differences

From a configuration perspective, there are not many differences either. Looking at the deploymentConfiguration.dat file for both secure and unsecure service managers, the only difference lies in the SecurityManager.config.securityDetails section. After removing what is identical, here are the differences:

# Secure installation
            "securityDetails": {
                "network": {
                    "common": {
                        "fipsEnabled": false,
                    },
                    "inbound": {
                        "authMode": "clientOptional_server",
                        "cipherSuites": [
                            "TLS_AES_256_GCM_SHA384",
                            "TLS_AES_128_GCM_SHA256",
                            "TLS_CHACHA20_POLY1305_SHA256"
                        ],
                        "protocolVersion": "TLS_ALL"
                    },
                    "outbound": {
                        "authMode": "clientOptional_server",
                    }
                }
            },

# Unsecure installation
            "securityDetails": {
                "network": {
                    "common": {
                        "fipsEnabled": false,
                    },
                    "inbound": {
                        "authMode": "clientOptional_server",
                        "cipherSuites": "^((?!anon|RC4|NULL|3DES).)*$",
                    },
                    "outbound": {
                        "authMode": "client_server",
                    }
                }
            },

Basically, securityDetails.outbound.authMode is set to clientOptional_server on the secure side and client_server on the unsecure side. The unsecure configuration also has a different securityDetails.inbound.cipherSuites parameter and is missing the securityDetails.inbound.protocolVersion parameter.

But nothing in the configuration points to the wallet files, located in $OGG_ETC_HOME/ssl. So, how do you add them here?

Can you secure an unsecure GoldenGate installation?

When connecting to an unsecure GoldenGate service manager, you still have the ability to add and manage certificates from the UI, the same way you would on a secure installation:

Certificates Management tab on an unsecured GoldenGate service manager UI

Unfortunately, just adding the certificates from the UI doesn’t make your installation secure. Even after modifying the deploymentConfiguration.dat files (the last missing piece of the configuration, as described above), it doesn’t work. You will only end up with a broken installation, even if you do the same for all your deployments and restart everything.

Is there really no way to secure an already existing GoldenGate installation?

Unfortunately not, at this point. This was confirmed earlier this week on the MOSC forums by Gopal Gaur, Senior Principal Software Engineer working on GoldenGate at Oracle.

You can not convert non secure deployment into secure deployment, you will need a new service manager that supports sever side SSL/TLS.

You can not convert non secure deployment into secure deployment at this stage, we have an opened enhancement for this.

To wrap up: bad news, it is not possible to secure an existing GoldenGate installation; good news, Oracle is apparently working on it. In the meantime, just re-install GoldenGate.


GoldenGate 23ai Installation: Graphic and Silent Mode Comparison for Automation

Mon, 2025-11-03 02:00

Automating Oracle installations can sometimes be daunting, given the long list of parameters available. We’ll compare both graphic and silent installations of GoldenGate 23ai, focusing on building minimalist response files for automation purposes.

Prerequisites for GoldenGate installation

You can set up GoldenGate in two different ways:

  • From the base archive, available on eDelivery (V1042871-01.zip for Linux x86-64, for instance)
  • From the patched archive, updated quarterly and available on Oracle Support. At the time of writing, GoldenGate 23.9 is the latest version available (23.10, now called 23.26, was announced but not released yet). See MOS Document 3093376.1 on the subject, or 2193391.1 for general patching information on GoldenGate. Patch 38139663 is the fully patched installation (we will use this one in the blog), while patch 38139662 is the patch-only archive, applied on top of an existing GoldenGate installation.

For the purpose of this installation, we will use the oracle-database-preinstall rpm, even though we don’t need everything it provides. If you plan on installing GoldenGate on an existing Oracle database server, Oracle recommends using a separate user; we will keep oracle here.

[root@vmogg ~] dnf install -y oracle-database-preinstall-23ai
[root@vmogg ~] mkdir -p /u01/stage
[root@vmogg ~] chown oracle:oinstall -R /u01

With the oracle user created through the rpm installation, unzip the GoldenGate source file into a stage area:

[root@vmogg ~] su - oracle
[oracle@vmogg ~] cd /u01/stage
[oracle@vmogg stage] unzip -oq p38139663_23902507OGGRU_Linux-x86-64.zip -d /u01/stage/
Installing GoldenGate with the OUI (graphic installation)
Installing GoldenGate binaries

Running the graphic installation of GoldenGate is not any different from what you would do with an Oracle database installation.

After setting up the X11 display (out of the scope of this blog), first set the OGG_HOME variable to the location of the GoldenGate installation and then run the installer:

[oracle@vmogg ~]$ export OGG_HOME=/u01/app/oracle/product/ogg23ai
[oracle@vmogg ~]$ /u01/stage/fbo_ggs_Linux_x64_Oracle_services_shiphome/Disk1/runInstaller

Bug: depending on the display options you have, you might have a color mismatch on the GoldenGate installation window, most of it appearing black (see this image). If this happens, run the following command before launching the installation: export _JAVA_OPTIONS="-Dsun.java2d.xrender=false"

Just click Next on the first step. Starting from GoldenGate 23ai, Classic Architecture was desupported, so you don’t have to worry anymore about which architecture to choose. The Microservices Architecture is the only choice now.

Fill in the software location for your installation of GoldenGate. This will match the OGG_HOME environment variable. If the variable is set prior to launching the runInstaller, the software location is filled automatically.

Step 3 is just a summary of the installation. You can save the response file at this stage and use it later to standardize your installations with the silent installation described below. Then, click on Install.

After a few seconds, the installation is complete, and you can exit the installer.

If it’s your first Oracle-related installation on this server, you might have to run the /u01/app/oraInventory/orainstRoot.sh script as root when prompted to do so.

[root@vmogg ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

The binary installation is complete.

Installing the Service Manager and the First Deployment

Once the binaries are installed, with the same oracle X11 terminal, run the oggca.sh script located in the $OGG_HOME/bin directory:

[oracle@vmogg ~]$ export OGG_HOME=/u01/app/oracle/product/ogg23ai
[oracle@vmogg ~]$ $OGG_HOME/bin/oggca.sh

On the first step (see below), you will have:

  • Software Home, which contains the GoldenGate binaries, also called $OGG_HOME
  • Deployment Home, filled with the location of the service manager directory (and not the GoldenGate deployment, thank you Oracle for this one…).
  • Port (default is 7809) of the service manager. This will be the main point of entry for the web UI.
  • Register as a service/system daemon, if you want GoldenGate to be a service on your server.
  • Integrate with XAG, for a GoldenGate RAC installation (out of the scope of this blog).
  • Enable Security, with the associated certificates and key. You can leave this unchecked if you just want to test the GoldenGate installation process.
  • We leave the rest unchecked.
Step 1 out of 7 of GoldenGate graphic installation with oggca.sh

Next, fill in the credentials for the service manager. Enabling Strong Password Policy will force you to enter a secure password.

Step 2 out of 7 of GoldenGate graphic installation with oggca.sh

In previous versions of GoldenGate, you could first set up the service manager and wait before configuring your first deployment. It is now mandatory to set up the first deployment:

  • Deployment Name: ogg_test_01 for this installation. It is not just cosmetic: you will refer to this name when connecting, both in the adminclient and in the web UI.
  • Deployment Home: Path to the deployment home. Logs, trail files and configuration will sit there.
  • Ports: Four ports need to be filled here. I would recommend using the default ports for the first deployment (7810, 7811, 7812 and 7813), or ports following the service manager port (7809). For a second deployment, you can continue with the following ports (7814, 7815, 7816, 7817), or keep the same units digit (7820, 7821, 7822, 7823) for a better understanding of your GoldenGate infrastructure.
  • Remote Metrics for the Deployment: out of the scope of this blog, not needed for a basic GoldenGate installation.
  • Security: If you secured your service manager earlier in the previous step, you should secure your deployment here, providing keys and certificates.
  • Replication Options: TNS_ADMIN could already be filled, otherwise just specify its path. GoldenGate will look for TNS entries here. You should also fill in the replication schema name.
Step 3 out of 7 of GoldenGate graphic installation with oggca.sh

Next, fill in the credentials for the deployment. They can be different from the service manager credentials, or you can check the box to reuse the same credentials.

Step 4 out of 7 of GoldenGate graphic installation with oggca.sh

In the summary screen, review your configuration, and save the response file for later if required. Click on Finish to start the installation.

Step 5 out of 7 of GoldenGate graphic installation with oggca.sh

The installation should take a few seconds:

Step 6 out of 7 of GoldenGate graphic installation with oggca.sh

That’s it, you have successfully installed GoldenGate 23ai! See the web UI section below for how to connect to your GoldenGate environment.

Installing GoldenGate with the CLI (silent installation)
Installing GoldenGate binaries

To perform the GoldenGate installation in silent mode, you can either use a response file containing the arguments needed for the installation or pass these arguments on the command line.

For the GoldenGate binaries installation, create an oggcore_23ai.rsp file, changing SOFTWARE_LOCATION, INVENTORY_LOCATION and UNIX_GROUP_NAME as needed:

[oracle@vmogg ~]$ cat oggcore_23ai.rsp
oracle.install.responseFileVersion=/oracle/install/rspfmt_ogginstall_response_schema_v23_1_0
INSTALL_OPTION=ORA23ai
SOFTWARE_LOCATION=/u01/app/oracle/product/ogg23ai
INVENTORY_LOCATION=/u01/app/oraInventory
UNIX_GROUP_NAME=oinstall

Then, run the installer with the -silent and -responseFile options:

[oracle@vmogg ~]$ /u01/stage/fbo_ggs_Linux_x64_Oracle_services_shiphome/Disk1/runInstaller -silent -responseFile /home/oracle/oggcore_23ai.rsp
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 17094 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4095 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2025-09-24_09-53-23AM. Please wait ...
You can find the log of this install session at:
 /u01/app/oraInventory/logs/installActions2025-09-24_09-53-23AM.log

As a root user, run the following script(s):
	1. /u01/app/oraInventory/orainstRoot.sh



Successfully Setup Software.
The installation of Oracle GoldenGate Services was successful.
Please check '/u01/app/oraInventory/logs/silentInstall2025-09-24_09-53-23AM.log' for more details.

Same thing as with the graphic installation: if it’s the first time you run an Oracle-related installation on this server, run the orainstRoot.sh script as root:

[root@vmogg ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

Installing the Service Manager and the First Deployment

Once the binaries are installed, run the oggca.sh script with the response file corresponding to the service manager and deployment that you want to create. The content of the response file oggca.rsp should be adapted to your needs, but I included a full example in the appendix below.

[oracle@vmogg ~]$ /u01/app/oracle/product/ogg23ai/bin/oggca.sh -silent -responseFile /home/oracle/oggca.rsp

As part of the process of registering Service Manager as a system daemon, the following steps will be performed:
	1- The deployment will be stopped before registering the Service Manager as a daemon.
	2- A new popup window will show the details of the script used to register the Service Manager as a daemon.
	3- After the register script is executed, the Service Manager daemon will be started in the background and the deployment will be automatically restarted.

Click "OK" to continue.

In order to register Service Manager as a system service/daemon, as a "root" user, execute the following script:
	(1). /u01/app/oracle/product/ogg23ai_SM/bin/registerServiceManager.sh

To execute the configuration scripts:
  1.Open a terminal window
  2.Login as "root"
  3.Run the script


Successfully Setup Software.

If you asked for the creation of a service, then run the following command as root:

[root@vmogg ~]# /u01/app/oracle/product/ogg23ai_SM/bin/registerServiceManager.sh
Copyright (c) 2017, 2024, Oracle and/or its affiliates. All rights reserved.
----------------------------------------------------
     Oracle GoldenGate Install As Service Script
----------------------------------------------------
OGG_HOME=/u01/app/oracle/product/ogg23ai
OGG_CONF_HOME=/u01/app/oracle/product/ogg23ai_SM/etc/conf
OGG_VAR_HOME=/u01/app/oracle/product/ogg23ai_SM/var
OGG_USER=oracle
Running OracleGoldenGateInstall.sh...
Created symlink /etc/systemd/system/multi-user.target.wants/OracleGoldenGate.service → /etc/systemd/system/OracleGoldenGate.service.
Successfully Setup Software.

Warning: If you plan on automating a GoldenGate installation and setup, make sure the response files can only be read by the oracle user, and clean them up afterwards if they contain plain-text passwords.
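
A minimal sketch of that cleanup, using the file name and password keys from this blog:

# Restrict access to the response file and blank out any password entries once the setup is done
chmod 600 /home/oracle/oggca.rsp
sed -i -E 's/^([A-Z_]*PASSWORD)=.*/\1=/' /home/oracle/oggca.rsp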

Adding or Removing a Deployment

To add a deployment to or remove one from an existing service manager, you can either do it graphically with oggca.sh or in silent mode. For silent mode, below is a minimal response file example to add a new deployment (everything that is already configured, like the service manager properties, has been removed). Of course, the deployment name, paths, and ports must be different from any existing deployment. And if your deployment is secured, fill in SECTION H - SECURITY the same way you did for the first installation, and set SECURITY_ENABLED=true in SECTION C - SERVICE MANAGER.

Adding a deployment
oracle.install.responseFileVersion=/oracle/install/rspfmt_oggca_response_schema_v23_1_0

# SECTION A - GENERAL
CONFIGURATION_OPTION=ADD
DEPLOYMENT_NAME=ogg_test_02

# SECTION B - ADMINISTRATOR ACCOUNT
ADMINISTRATOR_USER=ogg
ADMINISTRATOR_PASSWORD=ogg_password
DEPLOYMENT_ADMINISTRATOR_USER=ogg
DEPLOYMENT_ADMINISTRATOR_PASSWORD=ogg_password

# SECTION C - SERVICE MANAGER
HOST_SERVICEMANAGER=your_host
PORT_SERVICEMANAGER=7809
SECURITY_ENABLED=false
STRONG_PWD_POLICY_ENABLED=false

# SECTION E - SOFTWARE HOME
OGG_SOFTWARE_HOME=/u01/app/oracle/product/ogg23ai

# SECTION F - DEPLOYMENT DIRECTORIES
OGG_DEPLOYMENT_HOME=/u01/app/oracle/product/ogg_test_02
OGG_ETC_HOME=/u01/app/oracle/product/ogg_test_02/etc
OGG_CONF_HOME=/u01/app/oracle/product/ogg_test_02/etc/conf
OGG_SSL_HOME=/u01/app/oracle/product/ogg_test_02/etc/ssl
OGG_VAR_HOME=/u01/app/oracle/product/ogg_test_02/var
OGG_DATA_HOME=/u01/app/oracle/product/ogg_test_02/var/lib/data
OGG_ARCHIVE_HOME=/u01/app/oracle/product/ogg_test_02/var/lib/archive

# SECTION G - ENVIRONMENT VARIABLES
ENV_LD_LIBRARY_PATH=${OGG_HOME}/lib/instantclient:${OGG_HOME}/lib
ENV_TNS_ADMIN=/u01/app/oracle/network/admin
ENV_STREAMS_POOL_SIZE=
ENV_USER_VARS=

# SECTION H - SECURITY
TLS_1_2_ENABLED=false
TLS_1_3_ENABLED=false
FIPS_ENABLED=false
SERVER_CERTIFICATE=
SERVER_CERTIFICATE_KEY_FILE=
SERVER_CA_CERTIFICATES_FILE=
CLIENT_CERTIFICATE=
CLIENT_CERTIFICATE_KEY_FILE=
CLIENT_CA_CERTIFICATES_FILE=

# SECTION I - SERVICES
ADMINISTRATION_SERVER_ENABLED=true
PORT_ADMINSRVR=7820
DISTRIBUTION_SERVER_ENABLED=true
PORT_DISTSRVR=7821
NON_SECURE_DISTSRVR_CONNECTS_TO_SECURE_RCVRSRVR=false
RECEIVER_SERVER_ENABLED=true
PORT_RCVRSRVR=7822
METRICS_SERVER_ENABLED=true
METRICS_SERVER_IS_CRITICAL=false
PORT_PMSRVR=7823
PMSRVR_DATASTORE_TYPE=BDB
PMSRVR_DATASTORE_HOME=
ENABLE_DEPLOYMENT_REMOTE_METRICS=false
DEPLOYMENT_REMOTE_METRICS_LISTENING_HOST=
DEPLOYMENT_REMOTE_METRICS_LISTENING_PORT=0

# SECTION J - REPLICATION OPTIONS
OGG_SCHEMA=OGGADMIN
Removing a deployment

Same thing for the removal of an existing deployment, where the minimal response file is even simpler. You just need the deployment name and service manager information.

oracle.install.responseFileVersion=/oracle/install/rspfmt_oggca_response_schema_v23_1_0

# SECTION A - GENERAL
CONFIGURATION_OPTION=REMOVE
DEPLOYMENT_NAME=ogg_test_02

# SECTION B - ADMINISTRATOR ACCOUNT
ADMINISTRATOR_USER=ogg
ADMINISTRATOR_PASSWORD=ogg_password
DEPLOYMENT_ADMINISTRATOR_USER=ogg
DEPLOYMENT_ADMINISTRATOR_PASSWORD=ogg_password

# SECTION C - SERVICE MANAGER
HOST_SERVICEMANAGER=your_host
PORT_SERVICEMANAGER=7809
SECURITY_ENABLED=false

# SECTION H - SECURITY
TLS_1_2_ENABLED=false
TLS_1_3_ENABLED=false
FIPS_ENABLED=false
SERVER_CERTIFICATE=
SERVER_CERTIFICATE_KEY_FILE=
SERVER_CA_CERTIFICATES_FILE=
CLIENT_CERTIFICATE=
CLIENT_CERTIFICATE_KEY_FILE=
CLIENT_CA_CERTIFICATES_FILE=

# SECTION K - REMOVE DEPLOYMENT OPTIONS
REMOVE_DEPLOYMENT_FROM_DISK=true
Accessing the Web UI

Whether you installed GoldenGate graphically or silently, you will now be able to connect to the Web UI. Except for the design, it is pretty much the same thing as the Microservices Architecture of GoldenGate 19c and 21c. Connect to the hostname of your GoldenGate installation, on the service manager port: http://hostname:port, or https://hostname:port if the installation is secured.

With the example of this blog:

  • 7809 — Service manager
  • 7810 — Administration service of the first deployment you created, for managing extracts and replicats
  • 7811 — Distribution service, to send trail files to other GoldenGate deployments.
  • 7812 — Receiver service, to receive trail files.
  • 7813 — Performance metrics service, for extraction and replication analysis.
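
A quick way to check from the shell that the service manager answers on its port (adjust the hostname, the port, and http/https to your setup):

# Expect an HTTP status code (200 or a redirect to the login page) if the service manager is up
curl -s -o /dev/null -w "%{http_code}\n" http://vmogg:7809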

Log in either with the service manager credentials, or with the deployment credentials:

Service Manager Web UI

The service manager web UI allows you to stop and start deployment services and to manage users and certificates. You will rarely use it, especially since deployment creation and removal are done through oggca.sh anyway. If you have many deployments, it can still be useful as a quick way to log in to them.

Deployment Web UI

The deployment web UI, however, is used throughout the whole lifecycle of your GoldenGate replications. This is where you manage extracts, replicats, distribution paths, and more.

NB for newcomers: you don’t have to bookmark every service of a deployment. Once logged in to the administration service of a deployment, you can jump between services through the UI by clicking on them in the top bar.

You now have a full GoldenGate 23ai installation, and can start configuring your first replication!

Appendix: oggca.rsp example

Here is an example of a response file to create both the service manager and the first deployment of a GoldenGate installation. At the end, I included the full file with Oracle’s annotations.

oracle.install.responseFileVersion=/oracle/install/rspfmt_oggca_response_schema_v23_1_0

# SECTION A - GENERAL
CONFIGURATION_OPTION=ADD
DEPLOYMENT_NAME=ogg_test_01

# SECTION B - ADMINISTRATOR ACCOUNT
ADMINISTRATOR_USER=ogg
ADMINISTRATOR_PASSWORD=ogg_password
DEPLOYMENT_ADMINISTRATOR_USER=ogg
DEPLOYMENT_ADMINISTRATOR_PASSWORD=ogg_password

# SECTION C - SERVICE MANAGER
SERVICEMANAGER_DEPLOYMENT_HOME=/u01/app/oracle/product/ogg23ai_SM
SERVICEMANAGER_ETC_HOME=/u01/app/oracle/product/ogg23ai_SM/etc
SERVICEMANAGER_CONF_HOME=/u01/app/oracle/product/ogg23ai_SM/etc/conf
SERVICEMANAGER_SSL_HOME=/u01/app/oracle/product/ogg23ai_SM/etc/ssl
SERVICEMANAGER_VAR_HOME=/u01/app/oracle/product/ogg23ai_SM/var
SERVICEMANAGER_DATA_HOME=/u01/app/oracle/product/ogg23ai_SM/var/lib/data
SERVICEMANAGER_ARCHIVE_HOME=/u01/app/oracle/product/ogg23ai_SM/var/lib/archive
HOST_SERVICEMANAGER=your_host
PORT_SERVICEMANAGER=7809
SECURITY_ENABLED=false
STRONG_PWD_POLICY_ENABLED=false
CREATE_NEW_SERVICEMANAGER=true
REGISTER_SERVICEMANAGER_AS_A_SERVICE=true
INTEGRATE_SERVICEMANAGER_WITH_XAG=false
EXISTING_SERVICEMANAGER_IS_XAG_ENABLED=false
ENABLE_SERVICE_MANAGER_REMOTE_METRICS=false
SERVICE_MANAGER_REMOTE_METRICS_LISTENING_HOST=
SERVICE_MANAGER_REMOTE_METRICS_LISTENING_PORT=0
PLUGIN_SERVICE_ENABLED=false

# SECTION D - CONFIGURATION SERVICE
CONFIGURATION_SERVICE_ENABLED=false
CONFIGURATION_SERVICE_BACKEND_TYPE=FILESYSTEM
CONFIGURATION_SERVICE_BACKEND_CONNECTION_STRING=
CONFIGURATION_SERVICE_BACKEND_USERNAME=
CONFIGURATION_SERVICE_BACKEND_PASSWORD=
CONFIGURATION_SERVICE_BACKEND_TABLE_NAME=

# SECTION E - SOFTWARE HOME
OGG_SOFTWARE_HOME=/u01/app/oracle/product/ogg23ai

# SECTION F - DEPLOYMENT DIRECTORIES
OGG_DEPLOYMENT_HOME=/u01/app/oracle/product/ogg_test_01
OGG_ETC_HOME=/u01/app/oracle/product/ogg_test_01/etc
OGG_CONF_HOME=/u01/app/oracle/product/ogg_test_01/etc/conf
OGG_SSL_HOME=/u01/app/oracle/product/ogg_test_01/etc/ssl
OGG_VAR_HOME=/u01/app/oracle/product/ogg_test_01/var
OGG_DATA_HOME=/u01/app/oracle/product/ogg_test_01/var/lib/data
OGG_ARCHIVE_HOME=/u01/app/oracle/product/ogg_test_01/var/lib/archive

# SECTION G - ENVIRONMENT VARIABLES
ENV_LD_LIBRARY_PATH=${OGG_HOME}/lib/instantclient:${OGG_HOME}/lib
ENV_TNS_ADMIN=/u01/app/oracle/network/admin
ENV_STREAMS_POOL_SIZE=
ENV_USER_VARS=

# SECTION H - SECURITY
TLS_1_2_ENABLED=false
TLS_1_3_ENABLED=true
FIPS_ENABLED=false
SERVER_CERTIFICATE=
SERVER_CERTIFICATE_KEY_FILE=
SERVER_CA_CERTIFICATES_FILE=
CLIENT_CERTIFICATE=
CLIENT_CERTIFICATE_KEY_FILE=
CLIENT_CA_CERTIFICATES_FILE=

# SECTION I - SERVICES
ADMINISTRATION_SERVER_ENABLED=true
PORT_ADMINSRVR=7810
DISTRIBUTION_SERVER_ENABLED=true
PORT_DISTSRVR=7811
NON_SECURE_DISTSRVR_CONNECTS_TO_SECURE_RCVRSRVR=false
RECEIVER_SERVER_ENABLED=true
PORT_RCVRSRVR=7812
METRICS_SERVER_ENABLED=true
METRICS_SERVER_IS_CRITICAL=false
PORT_PMSRVR=7813
PMSRVR_DATASTORE_TYPE=BDB
PMSRVR_DATASTORE_HOME=
ENABLE_DEPLOYMENT_REMOTE_METRICS=false
DEPLOYMENT_REMOTE_METRICS_LISTENING_HOST=
DEPLOYMENT_REMOTE_METRICS_LISTENING_PORT=0

# SECTION J - REPLICATION OPTIONS
OGG_SCHEMA=OGGADMIN

# SECTION K - REMOVE DEPLOYMENT OPTIONS
REMOVE_DEPLOYMENT_FROM_DISK=
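With this response file saved, the whole deployment creation can be scripted in silent mode. A minimal sketch, assuming the file was saved as /home/oracle/oggca_ogg_test_01.rsp and that the software home matches section E (depending on your options, for instance the service registration, oggca may also ask you to run an additional script as root):

# Run the GoldenGate Configuration Assistant without the GUI
export OGG_HOME=/u01/app/oracle/product/ogg23ai
$OGG_HOME/bin/oggca.sh -silent -responseFile /home/oracle/oggca_ogg_test_01.rsp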

Full file:

################################################################################
## Copyright(c) Oracle Corporation 2016, 2024. All rights reserved.           ##
##                                                                            ##
## Specify values for the variables listed below to customize your            ##
## installation.                                                              ##
##                                                                            ##
## Each variable is associated with a comment. The comments can help to       ##
## populate the variables with the appropriate values.                        ##
##                                                                            ##
## IMPORTANT NOTE: This file should be secured to have read permission only   ##
## by the Oracle user or an administrator who owns this configuration to      ##
## protect any sensitive input values.                                        ##
##                                                                            ##
################################################################################

#-------------------------------------------------------------------------------
# Do not change the following system generated value. 
#-------------------------------------------------------------------------------
oracle.install.responseFileVersion=/oracle/install/rspfmt_oggca_response_schema_v23_1_0


################################################################################
##                                                                            ##
## Oracle GoldenGate deployment configuration options and details             ##
##                                                                            ##
################################################################################

################################################################################
##                                                                            ##
## Instructions to fill out this response file                                ##
## -------------------------------------------                                ##
## Fill out section A, B, and C for general deployment information            ##
## Additionally:                                                              ##  
## Fill out sections D, E, F, G, H, I, and J for adding a deployment          ##
## Fill out section K for removing a deployment                               ##
##                                                                            ##
################################################################################

################################################################################
#                                                                              #
#                          SECTION A - GENERAL                                 #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------
# Specify the configuration option.
# Specify: 
# - ADD    : for adding a new GoldenGate deployment.
# - REMOVE : for removing an existing GoldenGate deployment. 
#-------------------------------------------------------------------------------
CONFIGURATION_OPTION=ADD

#-------------------------------------------------------------------------------
# Specify the name for the new or existing deployment.
#-------------------------------------------------------------------------------
DEPLOYMENT_NAME=ogg_test_01


################################################################################
#                                                                              #
#                       SECTION B - ADMINISTRATOR ACCOUNT                      #
#                                                                              #
# * If creating a new Service Manager, set the Administrator Account username  #
#   and password.                                                              #
#                                                                              #
# * If reusing an existing Service Manager:                                    #
#     * Enter the credentials for the Administrator Account in                 #
#       the existing Service Manager.                                          #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------
# Specify the administrator account username for the Service Manager.
#-------------------------------------------------------------------------------
ADMINISTRATOR_USER=ogg

#-------------------------------------------------------------------------------
# Specify the administrator account password for the Service Manager.
#-------------------------------------------------------------------------------
ADMINISTRATOR_PASSWORD=ogg_password

#-------------------------------------------------------------------------------
# Optionally, specify a different administrator account username for the deployment,
# or leave blanks to use the same Service Manager administrator credentials.
#-------------------------------------------------------------------------------
DEPLOYMENT_ADMINISTRATOR_USER=ogg

#-------------------------------------------------------------------------------
# If creating a different administrator account username for the deployment, 
# specify the password for it.
#-------------------------------------------------------------------------------
DEPLOYMENT_ADMINISTRATOR_PASSWORD=ogg_password


################################################################################
#                                                                              #
#                       SECTION C - SERVICE MANAGER                            #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------
# Specify the location for the Service Manager deployment.
# This is only needed if the Service Manager deployment doesn't exist already.
#-------------------------------------------------------------------------------
SERVICEMANAGER_DEPLOYMENT_HOME=/u01/app/oracle/product/ogg23ai_SM

#-------------------------------------------------------------------------------
# Optionally, specify a custom location for the Service Manager deployment ETC_HOME.
#-------------------------------------------------------------------------------
SERVICEMANAGER_ETC_HOME=/u01/app/oracle/product/ogg23ai_SM/etc

#-------------------------------------------------------------------------------
# Optionally, specify a custom location for the Service Manager deployment CONF_HOME.
#-------------------------------------------------------------------------------
SERVICEMANAGER_CONF_HOME=/u01/app/oracle/product/ogg23ai_SM/etc/conf

#-------------------------------------------------------------------------------
# Optionally, specify a custom location for the Service Manager deployment SSL_HOME.
#-------------------------------------------------------------------------------
SERVICEMANAGER_SSL_HOME=/u01/app/oracle/product/ogg23ai_SM/etc/ssl

#-------------------------------------------------------------------------------
# Optionally, specify a custom location for the Service Manager deployment VAR_HOME.
#-------------------------------------------------------------------------------
SERVICEMANAGER_VAR_HOME=/u01/app/oracle/product/ogg23ai_SM/var

#-------------------------------------------------------------------------------
# Optionally, specify a custom location for the Service Manager deployment DATA_HOME.
#-------------------------------------------------------------------------------
SERVICEMANAGER_DATA_HOME=/u01/app/oracle/product/ogg23ai_SM/var/lib/data

#-------------------------------------------------------------------------------
# Optionally, specify a custom location for the Service Manager deployment ARCHIVE_HOME.
#-------------------------------------------------------------------------------
SERVICEMANAGER_ARCHIVE_HOME=/u01/app/oracle/product/ogg23ai_SM/var/lib/archive

#-------------------------------------------------------------------------------
# Specify the host for the Service Manager.
#-------------------------------------------------------------------------------
HOST_SERVICEMANAGER=your_host

#-------------------------------------------------------------------------------
# Specify the port for the Service Manager.
#-------------------------------------------------------------------------------
PORT_SERVICEMANAGER=7809

#-------------------------------------------------------------------------------
# Specify if SSL / TLS is or will be enabled for the deployment.
# Specify true if SSL / TLS is or will be enabled, false otherwise.
#-------------------------------------------------------------------------------
SECURITY_ENABLED=false

#-------------------------------------------------------------------------------
# Specify if the deployment should enforce a strong password policy.
# Specify true to enable strong password policy management.
#-------------------------------------------------------------------------------
STRONG_PWD_POLICY_ENABLED=false

#-------------------------------------------------------------------------------
# Specify if a new Service Manager should be created. 
# Specify true if a new Service Manager should be created, false otherwise.
#
# This option is only needed when CONFIGURATION_OPTION is ADD.
#-------------------------------------------------------------------------------
CREATE_NEW_SERVICEMANAGER=true

#-------------------------------------------------------------------------------
# Specify if Service Manager should be registered as a service/daemon. This option is mutually exclusive with the 'INTEGRATE_SERVICEMANAGER_WITH_XAG' option.
# Specify true if Service Manager should be registered as a service, false otherwise.
#
# This option is only needed when CONFIGURATION_OPTION is ADD.
# This option does not apply to Windows platform.
#-------------------------------------------------------------------------------
REGISTER_SERVICEMANAGER_AS_A_SERVICE=true
#-------------------------------------------------------------------------------
# Specify if Service Manager should be integrated with XAG. This option is mutually exclusive with the 'REGISTER_SERVICEMANAGER_AS_A_SERVICE' option.
# Specify true if Service Manager should be integrated with XAG, false otherwise.
#
# This option is only needed when CONFIGURATION_OPTION is ADD.
# This option is only supported for Oracle databases.
#-------------------------------------------------------------------------------
INTEGRATE_SERVICEMANAGER_WITH_XAG=false

#-------------------------------------------------------------------------------
# If using an existing Service Manager, specify if it is integrated with XAG.
# Specify true if the existing Service Manager is integrated with XAG, false otherwise.
#
# This option is only needed when CONFIGURATION_OPTION is ADD.
# This option is only supported for Oracle databases.
#-------------------------------------------------------------------------------
EXISTING_SERVICEMANAGER_IS_XAG_ENABLED=false

#-------------------------------------------------------------------------------
# Specify if Remote Metrics using StatsD protocol will be enabled for the Service Manager
# Specify true if Remote Metrics for the Service Manager will be enabled, false otherwise
#-------------------------------------------------------------------------------
ENABLE_SERVICE_MANAGER_REMOTE_METRICS=false

#-------------------------------------------------------------------------------
# If Remote Metrics for the Service Manager will be enabled, specify the listening host
#-------------------------------------------------------------------------------
SERVICE_MANAGER_REMOTE_METRICS_LISTENING_HOST=

#-------------------------------------------------------------------------------
# If Remote Metrics for the Service Manager will be enabled, specify the listening port for that server
#-------------------------------------------------------------------------------
SERVICE_MANAGER_REMOTE_METRICS_LISTENING_PORT=0

#-------------------------------------------------------------------------------
# Specify if the Plugin Service for the Service Manager will be enabled.
# Specify true if the Plugin Service will be enabled, false otherwise.
#-------------------------------------------------------------------------------
PLUGIN_SERVICE_ENABLED=false
###############################################################################
#                                                                             #  
#                    SECTION D - CONFIGURATION SERVICE                        #
#                                                                             #
###############################################################################

#-------------------------------------------------------------------------------
# Specify if the Configuration Service will be enabled.
# Specify true if the Configuration Service will be enabled, false otherwise.
#-------------------------------------------------------------------------------
CONFIGURATION_SERVICE_ENABLED=false

#-------------------------------------------------------------------------------
# Specify the Configuration Service backend type.
# Specify:
# - FILESYSTEM
# - ORACLE_DATABASE
#
# This is only needed if the Configuration Service will be enabled
#-------------------------------------------------------------------------------
CONFIGURATION_SERVICE_BACKEND_TYPE=FILESYSTEM

#-------------------------------------------------------------------------------
# Specify the Configuration Service connection string for the database backend
#
# This is only needed if:
#     * The Configuration Service will be enabled
#     * CONFIGURATION_SERVICE_BACKEND_TYPE is ORACLE_DATABASE
#-------------------------------------------------------------------------------
CONFIGURATION_SERVICE_BACKEND_CONNECTION_STRING=

#-------------------------------------------------------------------------------
# Specify the Configuration Service username for the database backend
#
# This is only needed if: 
#     * The Configuration Service will be enabled
#     * CONFIGURATION_SERVICE_BACKEND_TYPE is ORACLE_DATABASE
#-------------------------------------------------------------------------------
CONFIGURATION_SERVICE_BACKEND_USERNAME=

#-------------------------------------------------------------------------------
# Specify the Configuration Service password for the database backend
#
# This is only needed if: 
#     * The Configuration Service will be enabled
#     * CONFIGURATION_SERVICE_BACKEND_TYPE is ORACLE_DATABASE
#-------------------------------------------------------------------------------
CONFIGURATION_SERVICE_BACKEND_PASSWORD=

#-------------------------------------------------------------------------------
# Specify the Configuration Service table name for the database backend
#
# This is only needed if: 
#     * The Configuration Service will be enabled
#     * CONFIGURATION_SERVICE_BACKEND_TYPE is ORACLE_DATABASE
#-------------------------------------------------------------------------------
CONFIGURATION_SERVICE_BACKEND_TABLE_NAME=
###############################################################################
#                                                                             #
#                       SECTION E - SOFTWARE HOME                             #
#                                                                             #
###############################################################################

#-------------------------------------------------------------------------------
# Specify the existing OGG software home location.
#-------------------------------------------------------------------------------
OGG_SOFTWARE_HOME=/u01/app/oracle/product/OGG_23.9.0.25.07


###############################################################################
#                                                                             #
#                       SECTION F - DEPLOYMENT DIRECTORIES                    #
#                                                                             #
###############################################################################

#-------------------------------------------------------------------------------
# Specify the location of the new or existing OGG deployment.
#-------------------------------------------------------------------------------
OGG_DEPLOYMENT_HOME=/u01/app/oracle/product/ogg_test_01

#-------------------------------------------------------------------------------
# Specify the location for OGG_ETC_HOME.
#-------------------------------------------------------------------------------
OGG_ETC_HOME=/u01/app/oracle/product/ogg_test_01/etc

#-------------------------------------------------------------------------------
# Specify the location for OGG_CONF_HOME.
#-------------------------------------------------------------------------------
OGG_CONF_HOME=/u01/app/oracle/product/ogg_test_01/etc/conf

#-------------------------------------------------------------------------------
# Specify the location for OGG_SSL_HOME.
#-------------------------------------------------------------------------------
OGG_SSL_HOME=/u01/app/oracle/product/ogg_test_01/etc/ssl

#-------------------------------------------------------------------------------
# Specify the location for OGG_VAR_HOME.
#-------------------------------------------------------------------------------
OGG_VAR_HOME=/u01/app/oracle/product/ogg_test_01/var

#-------------------------------------------------------------------------------
# Specify the location for OGG_DATA_HOME.
#-------------------------------------------------------------------------------
OGG_DATA_HOME=/u01/app/oracle/product/ogg_test_01/var/lib/data

#-------------------------------------------------------------------------------
# Specify the location for OGG_ARCHIVE_HOME.
#-------------------------------------------------------------------------------
OGG_ARCHIVE_HOME=/u01/app/oracle/product/ogg_test_01/var/lib/archive

###############################################################################
#                                                                             #
#                       SECTION G - ENVIRONMENT VARIABLES                     #
#                                                                             #
###############################################################################

#-------------------------------------------------------------------------------
# Specify the value for the LD_LIBRARY_PATH environment variable.
#-------------------------------------------------------------------------------
ENV_LD_LIBRARY_PATH=${OGG_HOME}/lib/instantclient:${OGG_HOME}/lib

#-------------------------------------------------------------------------------
# Specify the value for the TNS_ADMIN environment variable.
# This environment variable is only for Oracle Databases.
#-------------------------------------------------------------------------------
ENV_TNS_ADMIN=/u01/app/oracle/network/admin

#-------------------------------------------------------------------------------
# This option is only needed when Sharding will be enabled.
# Specify the value for the STREAMS_POOL_SIZE environment variable.
# This environment variable is only for Oracle Databases.
#-------------------------------------------------------------------------------
ENV_STREAMS_POOL_SIZE=

#-------------------------------------------------------------------------------
# Specify any additional environment variables to be set in the deployment.
#-------------------------------------------------------------------------------
ENV_USER_VARS=


###############################################################################
#                                                                             #
#                           SECTION H - SECURITY                              #
#           This section is only needed if Security will be enabled           #
#                                                                             #
###############################################################################

# ------------------------------------------------------------------------------
# If security will be enabled, specify if TLS v1.2 will be enabled.
# Specify true if TLS v1.2 will be enabled, false otherwise.
#-------------------------------------------------------------------------------
TLS_1_2_ENABLED=false

# ------------------------------------------------------------------------------
# If security will be enabled, specify if TLS v1.3 will be enabled. 
# Specify true if TLS v1.3 will be enabled, false otherwise.
#-------------------------------------------------------------------------------
TLS_1_3_ENABLED=true

#-------------------------------------------------------------------------------
# Specify if FIPS will be enabled.
#-------------------------------------------------------------------------------
FIPS_ENABLED=false

#-------------------------------------------------------------------------------
# If SSL / TLS will be enabled, specify the server certificate 
#-------------------------------------------------------------------------------
SERVER_CERTIFICATE=

#-------------------------------------------------------------------------------
# If importing a server certificate, specify the private key file in PKCS#8 format
# The private key file must not be encrypted
#-------------------------------------------------------------------------------
SERVER_CERTIFICATE_KEY_FILE=

#-------------------------------------------------------------------------------
# If importing a server certificate, optionally specify the CA certificates file
#-------------------------------------------------------------------------------
SERVER_CA_CERTIFICATES_FILE=

#-------------------------------------------------------------------------------
# If SSL / TLS will be enabled, optionally specify the client certificate.
#-------------------------------------------------------------------------------
CLIENT_CERTIFICATE=

#-------------------------------------------------------------------------------
# If importing a client certificate, specify the private key file in PKCS#8 format
# The private key file must not be encrypted
#-------------------------------------------------------------------------------
CLIENT_CERTIFICATE_KEY_FILE=

#-------------------------------------------------------------------------------
# If importing a client certificate, optionally specify the CA certificates file
#-------------------------------------------------------------------------------
CLIENT_CA_CERTIFICATES_FILE=


###############################################################################
#                                                                             #
#                           SECTION I - SERVICES                              #
#                                                                             #
###############################################################################

#-------------------------------------------------------------------------------
# Specify if the Administration server will be enabled.
# Specify true if the Administration server will be enabled, false otherwise.
#-------------------------------------------------------------------------------
ADMINISTRATION_SERVER_ENABLED=true

#-------------------------------------------------------------------------------
# Required only if the Administration server will be enabled. 
# Specify the port for Administration Server.
#-------------------------------------------------------------------------------
PORT_ADMINSRVR=7810

#-------------------------------------------------------------------------------
# Specify if the Distribution server will be enabled.
# Specify true if the Distribution server will be enabled, false otherwise.
#-------------------------------------------------------------------------------
DISTRIBUTION_SERVER_ENABLED=true

#-------------------------------------------------------------------------------
# Required only if the Distribution server will be enabled. 
# Specify the port for Distribution Server.
#-------------------------------------------------------------------------------
PORT_DISTSRVR=7811

#-------------------------------------------------------------------------------
# If security is disabled, specify if this non-secure deployment will be used
# to send trail data to a secure deployment.
#-------------------------------------------------------------------------------
NON_SECURE_DISTSRVR_CONNECTS_TO_SECURE_RCVRSRVR=false

#-------------------------------------------------------------------------------
# Specify if the Receiver server will be enabled.
# Specify true if the Receiver server will be enabled, false otherwise.
#-------------------------------------------------------------------------------
RECEIVER_SERVER_ENABLED=true

#-------------------------------------------------------------------------------
# Required only if the Receiver server will be enabled. 
# Specify the port for Receiver Server.
#-------------------------------------------------------------------------------
PORT_RCVRSRVR=7812

#-------------------------------------------------------------------------------
# Specify if Performance Metrics server will be enabled.
# Specify true if Performance Metrics server will be enabled, false otherwise.
#-------------------------------------------------------------------------------
METRICS_SERVER_ENABLED=true
#-------------------------------------------------------------------------------
# Specify if Performance Metrics server is a critical service.
# Specify true if Performance Metrics server is a critical service, false otherwise.
#
# This is optional and only takes effect when Performance Metrics server will be enabled.
# Also, this option should only be set when the Service Manager is integrated with XAG.
# The default value is false.
#
# This option is only supported for Oracle databases.
#-------------------------------------------------------------------------------
METRICS_SERVER_IS_CRITICAL=false

#-------------------------------------------------------------------------------
# Specify the port for Performance Metrics server (TCP).
#
# This option is only needed when Performance Metrics server will be enabled.
#-------------------------------------------------------------------------------
PORT_PMSRVR=7813

#-------------------------------------------------------------------------------
# Specify the DataStore type for Performance Metrics server.
# Valid values are: BDB, LMDB
#
# This option is only needed when Performance Metrics server will be enabled.
#-------------------------------------------------------------------------------
PMSRVR_DATASTORE_TYPE=BDB

#-------------------------------------------------------------------------------
# Specify the DataStore home location for Performance Metrics server.
# This is optional and only takes effect when Performance Metrics server will be enabled.
#-------------------------------------------------------------------------------
PMSRVR_DATASTORE_HOME=

#-------------------------------------------------------------------------------
# Specify if Remote Metrics using StatsD protocol will be enabled for the Deployment
# Specify true if Remote Metrics for the deployment will be enabled, false otherwise
#-------------------------------------------------------------------------------
ENABLE_DEPLOYMENT_REMOTE_METRICS=false

#-------------------------------------------------------------------------------
# If Remote Metrics for the deployment will be enabled, specify the listening host
#-------------------------------------------------------------------------------
DEPLOYMENT_REMOTE_METRICS_LISTENING_HOST=

#-------------------------------------------------------------------------------
# If Remote Metrics for the deployment will be enabled, specify the listening port for that server
#-------------------------------------------------------------------------------
DEPLOYMENT_REMOTE_METRICS_LISTENING_PORT=0


###############################################################################
#                                                                             #
#                       SECTION J - REPLICATION OPTIONS                       #
#                                                                             #
###############################################################################

#-------------------------------------------------------------------------------
# Specify the value for the GoldenGate schema.
#-------------------------------------------------------------------------------
OGG_SCHEMA=OGGADMIN


###############################################################################
#                                                                             #
#                  SECTION K - REMOVE DEPLOYMENT OPTIONS                      #
#                                                                             #
###############################################################################

#-------------------------------------------------------------------------------
# Specify if the deployment files should be removed from disk.
# Specify true if the deployment files should be removed, false otherwise.
#-------------------------------------------------------------------------------
REMOVE_DEPLOYMENT_FROM_DISK=
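Once the silent run has completed, a quick scripted sanity check is possible against the Service Manager REST API. A sketch, assuming the host, port and administrator credentials from the example above (plain HTTP, since SECURITY_ENABLED=false):

# List the deployments registered in the Service Manager
curl -s -u ogg:ogg_password http://your_host:7809/services/v2/deployments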

The article GoldenGate 23ai Installation: Graphic and Silent Mode Comparison for Automation first appeared on the dbi Blog.

Dctm – Configure the memory of an IDS Target on Windows

Fri, 2025-10-31 16:05

The Documentum Interactive Delivery Services (IDS) doesn’t seem to be very popular, at least in my experience, as I haven’t seen it used in many environments. However, when I did, I always had to increase the IDS memory, as the default JVM sizing was never enough to process an actual production load.

Memory configuration on Linux

On Linux, it is very easy to do for both the IDS Source and the IDS Target, as what is running is simply a Java process. Therefore, you can simply update the shell scripts created by the installer or create your own. On the IDS Source, you can modify the “USER_MEM_ARGS” line inside the file “$CATALINA_HOME/bin/startWEBCACHE.sh”. On the IDS Target, you can create your own start script, which could contain something like:

#!/bin/bash

# IDS Agent Configuration
agent_port="2787"
jvm_mem="-Xms8g -Xmx8g -XX:MaxMetaspaceSize=512m -XX:+UseG1GC -XX:+UseStringDeduplication -XX:G1HeapRegionSize=2m"
jvm_params="-Djava.net.preferIPv4Stack=true -Djava.security.egd=file:///dev/./urandom -Djava.awt.headless=true -Dfile.encoding=UTF-8"

# IDS Agent start command
$JAVA_HOME/bin/java ${jvm_mem} ${jvm_params} -cp "$IDS_HOME/lib/*" \
                    com.documentum.webcache.transfer.MigAgent \
                    $IDS_HOME/admin/config/${agent_port}/agent.ini
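After (re)starting the agent with such a script, a quick way to check that the new heap settings were really picked up is to look at the running Java process. A small sketch, assuming a single MigAgent process on the host and a JDK providing jcmd:

# Show the JVM options of the running IDS Target agent
ps -ef | grep "[M]igAgent"
# Or print the effective heap flags of that process
jcmd $(pgrep -f transfer.MigAgent) VM.flags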
Previous memory configuration on Windows

But what about Windows? In older IDS versions like 16.7.4 (compatible with Documentum 16.7.1 to 21.1), it was possible to increase the memory by following these steps:

  • Open the Registry Editor (regedit)
  • Navigate to: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\%IDS_SERVICE_NAME%\Env
  • Double-click on the “Values” registry key
  • Edit the “jvmoptions” definition with the required sizing, e.g.: jvmoptions=-Xms8g -Xmx8g -Dfile.encoding=UTF-8
  • Restart the Windows Service of the same name

Note: %IDS_SERVICE_NAME% is the IDS Service name. It depends on the IDS version, the port and the communication type (secure or not), but it can be something like “OpenTextDocumentumIDS_secure_2787”.

That wasn’t documented initially, but after some discussions with OpenText years ago (or was it Dell at the time? I don’t recall), it finally made its way into the documentation, in the “User Guide”, under a section (beautifully) named “If the OutOfMemory error appears on the target”. In any case, that was working properly.

How it is supposed to work for IDS 16.7.5…

The issue, and the reason for this blog, is IDS 16.7.5 (compatible with Documentum 23.4 to 24.4; what happened to the versions between 21.1 and 23.4? :D). If you look at the documentation (cf. IDS User Guide, section 5.13.2), it still mentions applying the exact same steps as the ones I described above for 16.7.4. But that doesn’t work for IDS 16.7.5, because there is no such “Env” folder in the Registry anymore (it’s replaced by a “Parameters” one) and therefore no such key as “Values” either. The Service definition was changed in IDS 16.7.5 and, unfortunately, that change brought a bug with it.

As a side note, with IDS 16.7.5, a couple of BAT scripts are created when you configure a Target Agent. The first one can be used to create the Windows Service (cf. %IDS_HOME%\admin\config\%PORT%\InstallTargetServices.bat) while the second one removes it. The BAT script that creates the Windows Service contains its full definition:

@echo off
echo Target Service Installer.
...
set SERVICE_NAME=OpenTextDocumentumIDS_secure_2787
set PR_DISPLAYNAME=!SERVICE_NAME!
set PR_DESCRIPTION=OpenText Documentum Interactive Delivery Services Target Background Service
set TARGET_HOME=D:\ids\target
set AGENTINI_PATH=D:\ids\target\admin\config\2787\agent.ini
set PR_INSTALL=!TARGET_HOME!\product\bin\TargetService.exe
set PR_SERVICEUSER=LocalSystem
...
REM Path to java installation
set PR_JVM=!JAVA_HOME!\bin\server\jvm.dll
set PR_CLASSPATH=!TARGET_HOME!\lib\*
...
REM JVM configuration
set PR_JVMMS=8192m
set PR_JVMMX=8192m
set PR_JVMSS=4000m
set PR_JVMOPTIONS=-Dfile.encoding=UTF-8
REM Install service
...

Before opening a ticket with OpenText, I tried to modify the “PR_JVMMS” and “PR_JVMMX” lines to the needed values and then re-create the service using the provided scripts. That’s how it is supposed to work, technically/normally… But it had no impact at all. I also tried to set the JVM size by modifying the “PR_JVMOPTIONS” line from “-Dfile.encoding=UTF-8” to “-Xms8g -Xmx8g -Dfile.encoding=UTF-8” (a similar approach to what used to work in 16.7.4 and lower). That also failed, and it was even worse: the IDS couldn’t start anymore, complaining about some file not found.

Workaround until the patch is available

This is a bug in IDS 16.7.5, as OpenText confirmed after some discussions. An upcoming patch should fix it. For now, however, it is possible to configure the IDS Target memory by following these steps:

  • Open the Registry Editor (regedit.exe)
  • Navigate to: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Apache Software Foundation\Procrun 2.0\%IDS_SERVICE_NAME%\Parameters\Java
  • Double-click on the “JvmMs” registry key
  • Make sure to set the “Base” to “Decimal” (right hand side) and edit the “Value data” with the required sizing, e.g.: 8192
  • Repeat steps 3+4 for any other keys like “JvmMx” and “JvmSs”
  • Restart the Windows Service of the same name

If you are hit by the bug, which should be the case with IDS 16.7.5, all 3 values will be 0 by default. To verify that the modification was successful after the service restart, you can open the Resource Monitor (resmon.exe) and look for the PID of the IDS (TargetService.exe) in the Memory tab (cf. Commit, Working Set, Shareable, Private memory).

The article Dctm – Configure the memory of an IDS Target on Windows first appeared on the dbi Blog.

The M-Files Microsoft Teams Integration

Thu, 2025-10-30 11:46

This blog focuses on the integration of M-Files with Microsoft Teams. Please find below a brief overview of the installation and configuration requirements. The primary focus will be on the benefits it will bring to the user and on demonstrating how to work with the M-Files Teams integration.

Requirements
  • M-Files May 2024 Update or later
  • M-Files Plugin for Microsoft Teams, version 24.3.2.0
  • Make sure that you have enough concurrent licenses in the vault, because people without their own M-Files access consume concurrent licenses
  • The add-in requires a few Microsoft Graph API permissions
  • Microsoft Entra ID is configured
  • An encrypted gRPC connection must be enabled

This list details the most important requirements. For a full list of requirements, please refer to the official M-Files documentation.

Please also be advised that external users of Microsoft Word Online can only open documents; they are not permitted to save files in this program.

Installation and configuration

To benefit from the full functionality, it is recommended to install both the add-in and the M-Files plugin for Microsoft Teams.

Should you elect to install only the add-in, please be advised of the limitations detailed below.

  • Users cannot share M-Files content with people outside the organization
  • You must configure the Federated Authentication settings manually
  • In future updates, the add-in will become a requirement

Add-in configuration

The configuration and installation of the add-in is a two-step activity: one part is done in M-Files Admin, the other in the Microsoft Teams admin center. The detailed configuration steps are documented in the M-Files documentation.

Plugin configuration

For the Plugin for Microsoft Teams, you must install the M-Files vault application, create and configure a scope for federated authentication, and configure the plugin. As for the add-in, it is recommended to follow the official M-Files documentation.

Benefits for non M-Files users

The primary advantage of utilising the Microsoft Teams integration is the ability to share M-Files content with users who do not possess an M-Files account. This feature is instrumental in enabling companies to share a specific set of documents with their entire team, enhancing collaboration and efficiency. One example of this would be internal guidelines such as HR processes. M-Files is an effective system for the management of such documents, with version control enabled to ensure seamless control over all versions of files.

Should you require clarification on how the permissions are implemented and evaluated, or wish to confirm that documents are always shared in accordance with the permissions set in M-Files, please do not hesitate to get in touch. The graphic below gives a clear explanation of how the permission evaluation and settings work.
It is imperative that the restricted access settings are always adhered to.

Please find below an example illustrating which internal users have access to M-Files documents. As we can see, this document is available in the M-Files Teams tab.

Next, we will examine a restricted document to understand how permissions are evaluated and set in the M-Files Teams tab.

Functions

In this chapter, we will have a look at the benefits and functions, which I will explain based on screenshots of an existing implementation.

The image on the left shows a Teams client. If you look at the orange circle, you will see the Teams channel. If you look closely at the star, you will see the M-Files tab.

Here you can see how to create documents, in the same way as you would in M-Files itself.

Microsoft Teams offers a consistent user experience comparable to that of the M-Files client. The functionality is restricted to the view that is made available in Teams. As demonstrated, the system enables full functionality and provides a preview option.

The following list provides an overview of the tasks that can be performed outside the Teams client, provided that the necessary access rights in M-Files have been granted, of course.

  • Share an object only with M-Files users
  • Stop sharing objects with M-Files users
  • Delete a tab 
  • Share an object with everyone
  • Stop sharing an object with everyone
  • Share with current channel members

It is also possible to enable collaboration. This will allow users within Microsoft Teams to upload, download or work with the documents available in the Teams tab. Please be aware that these users will consume M-Files concurrent licenses. Should you wish to learn more about this topic, I would be happy to write a dedicated blog post on the subject.

Conclusion

This is an alternative and efficient method of working with a defined set of documents stored in M-Files, directly from a place you use most frequently on a daily basis. If you use Teams and are looking for an alternative and easy way to access your M-Files documents, we recommend that you consider the M-Files Teams integration.
Should you require a demonstration or wish to discuss an integration project, please do not hesitate to contact us. I would be pleased to support you in this matter further.

The article The M-Files Microsoft Teams Integration first appeared on the dbi Blog.

True Cache on Oracle AI Database 26ai: Quickstart

Thu, 2025-10-30 10:43
Introduction

Oracle Database True Cache is an in-memory, read-only cache designed to reduce latency and offload read activity from the primary database. It’s comparable to an Active Data Guard standby instance, but it runs mostly in memory.

This feature debuted in Oracle Database 23ai, which has since been rebranded as Oracle AI Database 26ai.

There are several free ways to try this release of Oracle Database and its features, for instance:
• Oracle LiveLabs
• OCI Free Tier trial credits
• The Oracle Database 23ai Free VirtualBox Appliance

In this blog, we’ll walk through a simple True Cache setup using two local VirtualBox VMs. The goal is not deep architecture analysis, but rather:
• to introduce practically what True Cache is
• to show one free, hands-on way to try Oracle AI Database 26ai

We will build two Oracle Linux VMs, install Oracle AI Database 26ai on both, create an Oracle database on the first VM, and its associated True Cache instance on the second.
Let’s see how it goes:

1. Build the VirtualBox VMs

1. Download an Oracle Linux Full ISO from: https://yum.oracle.com/oracle-linux-isos.html
In this example, we use the latest Oracle Linux 9 update:
OracleLinux-R9-U6-x86_64-dvd.iso

2. Create a new VM in VirtualBox. Choose the correct OS type and give the VM a name:

3. Assign resources (memory, CPU, disk), then click Finish:

4. Before booting the VM, open Settings > Storage and attach the Oracle Linux Full ISO to the Optical Drive:

5. Start the VM and install Oracle Linux. The installation is standard — just make sure the network interface is connected.

6. After the OS is installed on the first VM, clone it. We’ll use the clone as the second machine (one will run the primary database, the other will run True Cache). A scripted VBoxManage alternative to the cloning and network setup is sketched at the end of this section.

7. Update hostnames and /etc/hosts on both machines so they can resolve each other. Both VMs are attached to a Bridged Adapter in VirtualBox so they’re on the same LAN and can communicate.

-- Machine 1, For the Primary Database
[root@orahost1 ~]# hostname
orahost1
[root@orahost1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.20.10.3   orahost1.localdomain   orahost1
172.20.10.4   orahost2.localdomain   orahost2
-- Machine 2, For the True Cache Instance
[root@orahost2 ~]# hostname
orahost2
[root@orahost2 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.20.10.3   orahost1.localdomain   orahost1
172.20.10.4   orahost2.localdomain   orahost2
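Before going further, a quick check that both machines can resolve and reach each other doesn’t hurt:

# From orahost1
ping -c 2 orahost2
# From orahost2
ping -c 2 orahost1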

And lastly, let’s make sure that port 1521 is open for SQL*Net communication:

-- (TODO on both machines)
[root@orahost1 ~]# firewall-cmd --zone=public --add-port=1521/tcp --permanent
success
[root@orahost1 ~]# firewall-cmd --reload
success
[root@orahost1 ~]# firewall-cmd --zone=public --list-ports
1521/tcp
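As mentioned in step 6, the cloning of the first VM and the bridged network attachment referred to in step 7 can also be scripted with VBoxManage instead of going through the GUI. A minimal sketch; the bridged adapter name (enp0s3) and the sizing are assumptions to adapt to your host:

# Clone the first, already installed VM to get the second machine
VBoxManage clonevm orahost1 --name orahost2 --register
# Put both VMs on the same bridged network so they can reach each other
VBoxManage modifyvm orahost1 --nic1 bridged --bridgeadapter1 enp0s3
VBoxManage modifyvm orahost2 --nic1 bridged --bridgeadapter1 enp0s3
# Optionally resize the clone, then start both machines headless
VBoxManage modifyvm orahost2 --memory 8192 --cpus 2
VBoxManage startvm orahost1 --type headless
VBoxManage startvm orahost2 --type headless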
2. Installing Oracle AI Database 26ai binaries

Reference: https://www.oracle.com/database/free/get-started/ (choose the “Oracle Database Free Installation Guide” corresponding to your Linux version)

We begin by installing the preinstall package, which takes care of all the pre-installation tasks, including the creation of the needed users and groups:

-- TODO on both VMs : dnf -y install oracle-ai-database-preinstall-26ai

[root@orahost1 ~]# dnf -y install oracle-ai-database-preinstall-26ai
Last metadata expiration check: 0:19:59 ago on Mon 27 Oct 2025 01:11:08 PM CET.
Dependencies resolved.
=============================================================================================================================================================================================
 Package                                                  Architecture                 Version                                                 Repository                               Size
=============================================================================================================================================================================================
Installing:
 oracle-ai-database-preinstall-26ai                       x86_64                       1.0-1.el8                                               ol8_appstream                            29 k
Installing dependencies:
 compat-openssl10                                         x86_64                       1:1.0.2o-4.el8_10.1                                     ol8_appstream                           1.1 M
 glibc-devel                                              x86_64                       2.28-251.0.2.el8                                        ol8_baseos_latest                        89 k
 ksh                                                      x86_64                       20120801-269.0.1.el8_10                                 ol8_appstream                           924 k
 libxcrypt-devel                                          x86_64                       4.1.1-6.el8                                             ol8_baseos_latest                        25 k
 lm_sensors-libs                                          x86_64                       3.4.0-23.20180522git70f7e08.el8                         ol8_baseos_latest                        59 k
 make                                                     x86_64                       1:4.2.1-11.el8                                          ol8_baseos_latest                       498 k
 sysstat                                                  x86_64                       11.7.3-13.0.2.el8_10                                    ol8_appstream                           427 k

Transaction Summary
=============================================================================================================================================================================================
Install  8 Packages

Total download size: 3.1 M
Installed size: 9.3 M
Downloading Packages:
(1/8): libxcrypt-devel-4.1.1-6.el8.x86_64.rpm                                                                                                                 37 kB/s |  25 kB     00:00
(2/8): lm_sensors-libs-3.4.0-23.20180522git70f7e08.el8.x86_64.rpm                                                                                             81 kB/s |  59 kB     00:00
(3/8): glibc-devel-2.28-251.0.2.el8.x86_64.rpm                                                                                                               118 kB/s |  89 kB     00:00
(4/8): make-4.2.1-11.el8.x86_64.rpm                                                                                                                          1.6 MB/s | 498 kB     00:00
(5/8): oracle-ai-database-preinstall-26ai-1.0-1.el8.x86_64.rpm                                                                                               363 kB/s |  29 kB     00:00
(6/8): ksh-20120801-269.0.1.el8_10.x86_64.rpm                                                                                                                2.6 MB/s | 924 kB     00:00
(7/8): sysstat-11.7.3-13.0.2.el8_10.x86_64.rpm                                                                                                               2.4 MB/s | 427 kB     00:00
(8/8): compat-openssl10-1.0.2o-4.el8_10.1.x86_64.rpm                                                                                                         2.2 MB/s | 1.1 MB     00:00
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                        2.5 MB/s | 3.1 MB     00:01
Oracle Linux 8 BaseOS Latest (x86_64)                                                                                                                        2.8 MB/s | 3.1 kB     00:00
Importing GPG key 0xAD986DA3:
 Userid     : "Oracle OSS group (Open Source Software group) "
 Fingerprint: 76FD 3DB1 3AB6 7410 B89D B10E 8256 2EA9 AD98 6DA3
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                     1/1
  Installing       : libxcrypt-devel-4.1.1-6.el8.x86_64                                                                                                                                  1/8
  Installing       : glibc-devel-2.28-251.0.2.el8.x86_64                                                                                                                                 2/8
  Running scriptlet: glibc-devel-2.28-251.0.2.el8.x86_64                                                                                                                                 2/8
  Installing       : make-1:4.2.1-11.el8.x86_64                                                                                                                                          3/8
  Running scriptlet: make-1:4.2.1-11.el8.x86_64                                                                                                                                          3/8
  Installing       : compat-openssl10-1:1.0.2o-4.el8_10.1.x86_64                                                                                                                         4/8
  Running scriptlet: compat-openssl10-1:1.0.2o-4.el8_10.1.x86_64                                                                                                                         4/8
  Installing       : ksh-20120801-269.0.1.el8_10.x86_64                                                                                                                                  5/8
  Running scriptlet: ksh-20120801-269.0.1.el8_10.x86_64                                                                                                                                  5/8
  Installing       : lm_sensors-libs-3.4.0-23.20180522git70f7e08.el8.x86_64                                                                                                              6/8
  Running scriptlet: lm_sensors-libs-3.4.0-23.20180522git70f7e08.el8.x86_64                                                                                                              6/8
  Installing       : sysstat-11.7.3-13.0.2.el8_10.x86_64                                                                                                                                 7/8
  Running scriptlet: sysstat-11.7.3-13.0.2.el8_10.x86_64                                                                                                                                 7/8
  Installing       : oracle-ai-database-preinstall-26ai-1.0-1.el8.x86_64                                                                                                                 8/8
  Running scriptlet: oracle-ai-database-preinstall-26ai-1.0-1.el8.x86_64                                                                                                                 8/8
  Verifying        : glibc-devel-2.28-251.0.2.el8.x86_64                                                                                                                                 1/8
  Verifying        : libxcrypt-devel-4.1.1-6.el8.x86_64                                                                                                                                  2/8
  Verifying        : lm_sensors-libs-3.4.0-23.20180522git70f7e08.el8.x86_64                                                                                                              3/8
  Verifying        : make-1:4.2.1-11.el8.x86_64                                                                                                                                          4/8
  Verifying        : compat-openssl10-1:1.0.2o-4.el8_10.1.x86_64                                                                                                                         5/8
  Verifying        : ksh-20120801-269.0.1.el8_10.x86_64                                                                                                                                  6/8
  Verifying        : oracle-ai-database-preinstall-26ai-1.0-1.el8.x86_64                                                                                                                 7/8
  Verifying        : sysstat-11.7.3-13.0.2.el8_10.x86_64                                                                                                                                 8/8

Installed:
  compat-openssl10-1:1.0.2o-4.el8_10.1.x86_64              glibc-devel-2.28-251.0.2.el8.x86_64   ksh-20120801-269.0.1.el8_10.x86_64                    libxcrypt-devel-4.1.1-6.el8.x86_64
  lm_sensors-libs-3.4.0-23.20180522git70f7e08.el8.x86_64   make-1:4.2.1-11.el8.x86_64            oracle-ai-database-preinstall-26ai-1.0-1.el8.x86_64   sysstat-11.7.3-13.0.2.el8_10.x86_64

Complete!
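As a quick sanity check, the preinstall package creates the oracle user and its groups, and drops resource-limit settings under /etc/security/limits.d/, which you can verify before going on:

# The oracle user and its groups (oinstall, dba, ...) now exist
id oracle
# Resource limits added by the preinstall package
ls /etc/security/limits.d/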

Next, we download and copy the latest free Oracle AI Database 26ai RPM corresponding to the OS version, from https://www.oracle.com/database/free/get-started/.
For me, that is: oracle-ai-database-free-26ai-23.26.0-1.el9.x86_64.rpm

-- Both Machines , dnf -y install oracle-ai-database-free-26ai-23.26.0-1.el9.x86_64.rpm

[root@orahost1 software]# ls -l
total 1398208
-rw-r--r--. 1 root root 1431761556 Oct 27 14:10 oracle-ai-database-free-26ai-23.26.0-1.el9.x86_64.rpm

[root@orahost1 software]# dnf -y install oracle-ai-database-free-26ai-23.26.0-1.el9.x86_64.rpm
Last metadata expiration check: 1:06:41 ago on Mon 27 Oct 2025 01:11:08 PM CET.
Dependencies resolved.
=============================================================================================================================================================================================
 Package                                                    Architecture                         Version                                    Repository                                  Size
=============================================================================================================================================================================================
Installing:
 oracle-ai-database-free-26ai                               x86_64                               23.26.0-1                                  @commandline                               1.3 G

Transaction Summary
=============================================================================================================================================================================================
Install  1 Package

Total size: 1.3 G
Installed size: 3.6 G
Downloading Packages:
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                     1/1
  Running scriptlet: oracle-ai-database-free-26ai-23.26.0-1.x86_64                                                                                                                       1/1
  Installing       : oracle-ai-database-free-26ai-23.26.0-1.x86_64                                                                                                                       1/1
  Running scriptlet: oracle-ai-database-free-26ai-23.26.0-1.x86_64                                                                                                                       1/1
[INFO] Executing post installation scripts...
[INFO] Oracle home installed successfully and ready to be configured.
To configure Oracle AI Database Free, optionally modify the parameters in '/etc/sysconfig/oracle-free-26ai.conf' and then run '/etc/init.d/oracle-free-26ai configure' as root.

  Verifying        : oracle-ai-database-free-26ai-23.26.0-1.x86_64                                                                                                                       1/1

Installed:
  oracle-ai-database-free-26ai-23.26.0-1.x86_64

Complete!
3. Create and Configure the Primary Database

On Machine 1 (the future primary database host), still as root, run the configuration script. This creates:
• a CDB with one PDB
• a local listener

 
-- Only on Machine 1
[root@orahost1 software]# /etc/init.d/oracle-free-26ai configure
Specify a password to be used for database accounts. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. Note that the same password will be used for SYS, SYSTEM and PDBADMIN accounts:
Confirm the password:
Configuring Oracle Listener.
Listener configuration succeeded.
Configuring Oracle AI Database FREE.
Enter SYS user password:
*************
Enter SYSTEM user password:
***********
Enter PDBADMIN User Password:
**************
Prepare for db operation
7% complete
Copying database files
29% complete
Creating and starting Oracle instance
30% complete
33% complete
36% complete
39% complete
43% complete
Completing Database Creation
47% complete
49% complete
50% complete
Creating Pluggable Databases
54% complete
71% complete
Executing Post Configuration Actions
93% complete
Running Custom Scripts
100% complete
Database creation complete. For details check the logfiles at:
 /opt/oracle/cfgtoollogs/dbca/FREE.
Database Information:
Global Database Name:FREE
System Identifier(SID):FREE
Look at the log file "/opt/oracle/cfgtoollogs/dbca/FREE/FREE.log" for further details.

Connect to Oracle AI Database using one of the connect strings:
     Pluggable database: orahost1/FREEPDB1
     Multitenant container database: orahost1
 

Now we have:
• a multitenant FREE database
• a local listener listening on port 1521

[root@orahost1 software]# su - oracle
[oracle@orahost1 ~]$ . oraenv
ORACLE_SID = [oracle] ? FREE
The Oracle base has been set to /opt/oracle
[oracle@orahost1 ~]$ sqlplus / as sysdba

Connected to:
Oracle AI Database 26ai Free Release 23.26.0.0.0 - Develop, Learn, and Run for Free
Version 23.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 FREEPDB1                       READ WRITE NO
SQL> exit
Disconnected from Oracle AI Database 26ai Free Release 23.26.0.0.0 - Develop, Learn, and Run for Free
Version 23.26.0.0.0
[oracle@orahost1 ~]$ lsnrctl status

LSNRCTL for Linux: Version 23.26.0.0.0 - Production on 27-OCT-2025 14:37:34

Copyright (c) 1991, 2025, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=orahost1.localdomain)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 23.26.0.0.0 - Production
Start Date                27-OCT-2025 14:26:40
Uptime                    0 days 0 hr. 10 min. 54 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Default Service           FREE
Listener Parameter File   /opt/oracle/product/26ai/dbhomeFree/network/admin/listener.ora
Listener Log File         /opt/oracle/diag/tnslsnr/orahost1/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=orahost1.localdomain)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "4224ed483625a7e9e063050a14accd4f" has 1 instance(s).
  Instance "FREE", status READY, has 1 handler(s) for this service...
Service "FREE" has 1 instance(s).
  Instance "FREE", status READY, has 1 handler(s) for this service...
Service "FREEXDB" has 1 instance(s).
  Instance "FREE", status READY, has 1 handler(s) for this service...
Service "freepdb1" has 1 instance(s).
  Instance "FREE", status READY, has 1 handler(s) for this service...
The command completed successfully
 
4. Create the True Cache Instance

Next, we create and configure the True Cache instance on Machine 2. We only need to copy the password file from the primary to the True Cache host first:

 
[oracle@orahost2 bin]$ ll /opt/oracle/tmp/orapwFREE
-rw-r-----. 1 oracle oinstall 2048 Sep 17 16:56 /opt/oracle/tmp/orapwFREE
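
For reference, a minimal sketch of that copy from the primary, assuming the password file sits in the default $ORACLE_HOME/dbs location on orahost1 (adjust the source path if yours differs):

[oracle@orahost1 ~]$ scp $ORACLE_HOME/dbs/orapwFREE oracle@orahost2:/opt/oracle/tmp/orapwFREE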

Then we run dbca with the -createTrueCache option:

 
[oracle@orahost2 bin]$ ./dbca -silent -createTrueCache -gdbName FREE -sourceDBConnectionString  orahost1:1521/FREE -passwordFileFromSourceDB /opt/oracle/tmp/orapwFREE -dbUniqueName FREE_TC -sid FREE
Enter Remote DB SYS user password:

Session ID of the current execution is: 2
-----------------
Running Extract_password_file_from_blob_file job
Completed Extract_password_file_from_blob_file job
25% complete
-----------------
Running Create_static_listener job
Completed Create_static_listener job
38% complete
-----------------
Running Register_listener job
Completed Register_listener job
50% complete
-----------------
Running Extract_tde_wallet_from_blob_file job
Skipping. Job is detected as not applicable.
54% complete
-----------------
Running Setup_required_directories job
Skipping. Job is detected as not applicable.
57% complete
-----------------
Running Create_pfile job
Completed Create_pfile job
61% complete
-----------------
Running Start_nomount_instance job
Completed Start_nomount_instance job
64% complete
-----------------
Running Create_TDE_wallet job
Skipping. Job is detected as not applicable.
68% complete
-----------------
Running Create_truecache_instance job
Completed Create_truecache_instance job
71% complete
-----------------
Running Add_oratab_entry job
Completed Add_oratab_entry job
75% complete
-----------------
Running Reopen_wallet job
Skipping. Job is detected as not applicable.
100% complete
Look at the log file "/opt/oracle/cfgtoollogs/dbca/FREE_TC/FREE_TC0.log" for further details.
 

The instance is created. Let’s confirm its state:

SQL> select open_mode,database_role from v$database;

OPEN_MODE            DATABASE_ROLE
-------------------- ----------------
READ ONLY WITH APPLY TRUE CACHE

SQL> select name from v$database;

NAME
---------
FREE

SQL> select db_unique_name from v$database;

DB_UNIQUE_NAME
------------------------------
FREE_TC
 

As you can see, it looks very similar to a Data Guard standby — except there are no datafiles.

 
SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
/opt/oracle/oradata/FREE_TC/controlfile/o1_mf_nhz3od6n_.ctl

SQL> show parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      /opt/oracle/product/23ai/dbhom
                                                 eFree/dbs/spfileFREE.ora
SQL> select name from v$datafile;

no rows selected

SQL> select name from v$tempfile;

NAME
--------------------------------------------------------------------------------
/opt/oracle/oradata/S_TEMP_##TC##_FREE#TC_8192_1_3_201_1
/opt/oracle/oradata/S_TEMP_##TC##_FREE#TC_8192_2_3_202_1
/opt/oracle/oradata/S_TEMP_##TC##_FREE#TC_8192_3_3_203_1
 
5. Configure a True Cache Application Service

Applications can use True Cache in two main ways:

Option 1: The application maintains two physical connections:
• One to the primary database
• One to the True Cache instance

Each connection uses its own database service. The app chooses the connection based on whether it needs to read or write. This works with any language and any existing client drivers.
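
As an illustration of Option 1, here is a minimal sketch of the two connect strings such an application could use once the services are in place (SALES and sales_tc are created in the next steps; app_user is a hypothetical application account):

-- read-write connection, primary database service
sqlplus app_user@//orahost1:1521/SALES

-- read-only connection, True Cache service
sqlplus app_user@//orahost2:1521/sales_tc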

Option 2: The application maintains one logical connection (to the primary database service). The JDBC Thin driver (starting with Oracle AI Database 26ai) automatically manages physical connections to both the primary and True Cache. This option is for Java applications.

For more details, see:
https://docs.oracle.com/en/database/oracle/oracle-database/26/odbtc/methods-connecting-true-cache.html

In this example, we’ll configure an application service for a hypothetical sales application that queries data in the FREEPDB1 PDB.

Step 1: On the primary database, create and start a service called SALES:

 
-- On the primary 
SQL> alter session set container=FREEPDB1;

Session altered.

SQL> BEGIN
   DBMS_SERVICE.CREATE_SERVICE('SALES', 'SALES');
   DBMS_SERVICE.START_SERVICE('SALES');
END;
/  

Now the service is visible in the primary listener:

 
[oracle@orahost1 ~]$ lsnrctl status | grep SALES 
Service "SALES" has 1 instance(s). 

Step 2: From the primary host, run dbca again to configure the related True Cache service (sales_tc) on the True Cache instance:

 
[oracle@orahost1 ~]$ $ORACLE_HOME/bin/dbca -configureDatabase -configureTrueCacheInstanceService -sourceDB FREE -trueCacheConnectString orahost2:1521/FREE_TC -trueCacheServiceName sales_tc -serviceName sales -pdbName FREEPDB1 -silent
Session ID of the current execution is: 1
-----------------
Running Initialization job
Enter SYS user password:

Completed Initialization job
33% complete
-----------------
Running Validate_true_cache_instance_connection job
Completed Validate_true_cache_instance_connection job
37% complete
-----------------
Running Validate_dataguard job
Skipping. Job is detected as not applicable.
41% complete
-----------------
Running Validate_db_version job
Completed Validate_db_version job
44% complete
-----------------
Running Validate_true_cache_instance job
Completed Validate_true_cache_instance job
48% complete
-----------------
Running Validate_archive_log_mode job
Completed Validate_archive_log_mode job
52% complete
-----------------
Running Validate_pdb job
Completed Validate_pdb job
56% complete
-----------------
Running Validate_primary_db_service job
Completed Validate_primary_db_service job
59% complete
-----------------
Running Validate_true_cache_db_service job
Completed Validate_true_cache_db_service job
63% complete
-----------------
Running Validate_true_cache_instance_open_mode job
Completed Validate_true_cache_instance_open_mode job
67% complete
-----------------
Running Create_truecache_service job
Completed Create_truecache_service job
73% complete
-----------------
Running Add_network_entry job
Completed Add_network_entry job
80% complete
-----------------
Running Modify_primary_service job
Completed Modify_primary_service job
87% complete
-----------------
Running Start_truecache_service job
Completed Start_truecache_service job
93% complete
-----------------
Running Enable_service_registration job
Completed Enable_service_registration job
100% complete
Look at the log file "/opt/oracle/cfgtoollogs/dbca/FREE/FREE0.log" for further details.
 

The service has been created. We can check on the True Cache machine:

[oracle@orahost2 bin]$ lsnrctl status | grep sales_tc
Service "sales_tc" has 1 instance(s).
[oracle@orahost2 bin]$ sqlplus / as sysdba

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 FREEPDB1                       READ ONLY  NO
SQL> alter session set container=freepdb1;

Session altered.

SQL> select name from v$active_services;

NAME
----------------------------------------------------------------
sales_tc
FREEPDB1
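
As a last sanity check, you can confirm from a client session which service and container it actually landed on; a minimal sketch using the PDBADMIN account created during the initial configuration (any account with CREATE SESSION in FREEPDB1 would do):

[oracle@orahost2 ~]$ sqlplus pdbadmin@//orahost2:1521/sales_tc

SQL> -- expect the sales_tc service and the FREEPDB1 container here
SQL> select sys_context('userenv','service_name'), sys_context('userenv','con_name') from dual;
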
Conclusion

• It’s fast and cheap to build a small lab for Oracle AI Database 26ai and start testing new features on your laptop.

• True Cache is one of those features: it’s easy to set up, designed for offloading read-only workloads, and included with Enterprise Edition licensing (unlike Active Data Guard, which requires an extra option). It’s definitely worth evaluating for applications that mostly read and rarely or never need to update data.

L’article True Cache on Oracle AI Database 26ai: Quickstart est apparu en premier sur dbi Blog.

Smart Automation with M-Files: Real Use Cases

Thu, 2025-10-30 10:30

In the world of information management, time is money, and manual processes are the enemy of efficiency.

The time of simple document management systems is over. Now, we need tools that help workers optimize their time and make their lives easier.

M-Files smart automation

This is where M-Files’ automation capabilities really shine, offering a significant advantage in the realm of information management.

By combining metadata-driven logic, workflows, and integrations, M-Files enables organizations to automate routine tasks, reduce human error, and focus on what really matters.

However, it is often not so obvious where we can achieve this gain.

So let’s explore how smart automation works in M-Files, looking at real-world use cases that deliver measurable time savings, which may give you ideas for your own business.

What Is Smart Automation in M-Files?

Smart automation in M-Files refers to the use of:

  • Metadata to drive decisions and routing
  • Workflows to manage the lifecycle of objects (not only documents)
  • Automatic permissions to control access dynamically during the life of the object
  • Notifications and tasks to keep people informed and accountable
  • Integrations with external systems (e.g., Ms Teams, ERP, CRM, email,…)

The result: a system that thinks for you! It automates repetitive steps, ensures total consistency across departments, and reduces silos.

Here are some real use cases.

Automated Invoice Approval

Challenge: Manual invoice approvals are slow, error-prone, and often get stuck in email chains.
Solution: M-Files automatically routes invoices based on metadata like amount, vendor, or department. Approvers receive tasks and reminders, and the system logs every step for audit purposes.
Time Saved: Up to 50% reduction in processing time.

Contract Lifecycle Management

Challenge: Tracking contract versions, deadlines, and approvals manually is risky and inefficient.
Solution: M-Files automates the entire contract lifecycle, from drafting and review to approval and renewal. Metadata triggers reminders before expiration and ensures the right people are involved at each stage.
Time Saved: Weeks shaved off contract turnaround times.

HR On-boarding Automation

Challenge: On-boarding new employees involves multiple documents, departments, and compliance steps.
Solution: M-Files initiates on-boarding workflows as soon as a new hire is created. It assigns tasks to HR, IT, and managers, ensures documents are signed, and stores everything in a compliant structure.
Time Saved: Hours per new hire, with improved consistency.

Quality Document Control (QMS)

Challenge: Managing SOPs, CAPAs, and audit trails manually is time-consuming and error-prone.
Solution: M-Files automates document versioning, review cycles, and approval workflows. It ensures only the latest approved version is accessible and maintains a full audit trail.
Time Saved: Significant reduction in audit preparation time.

Email Filing and Classification

Challenge: Important emails get lost in inboxes or saved in the wrong place.
Solution: M-Files integrates with Outlook to automatically classify and store emails based on metadata—linking them to the right project, client, or case.
Time Saved: Minutes per email, multiplied across teams.

Why It Works

The secret behind M-Files automation is metadata. Instead of relying on folder structures or manual tagging, M-Files uses metadata to understand what a document is and what should happen next. This enables:

  • Context-aware workflows
  • Dynamic access control
  • Smart search and retrieval
  • Automated compliance enforcement

Another key point is how M-Files delivers information. Since documents are classified by their properties (metadata), it is easy to adapt the access method to the user’s profile, while avoiding duplicates.
For instance, a sales representative would want direct access to a contract, whereas a project manager would prefer to view it as an appendix to their project without having to navigate through a whole tree structure.

In a nutshell

Smart automation in M-Files isn’t just about saving time, it’s about enabling people to focus on high-value work. Whether you’re in finance, legal, HR, or quality management, M-Files can help you eliminate bottlenecks, reduce risk, and accelerate your business processes.

I hope these examples have given you some ideas and helped you think about ways to improve information management.

If you’re still relying on manual steps and email chains, it might be time to explore what smart automation can do for you. Of course, we will be pleased to assist you in that transition.

More information here.

L’article Smart Automation with M-Files: Real Use Cases est apparu en premier sur dbi Blog.

PostgreSQL 19: Two nice little improvements: log_autoanalyze_min_duration and search_path in the psql prompt

Wed, 2025-10-29 02:10

Two nice little improvements have been committed for PostgreSQL 19. The first one is about logging the duration of automatic analyze while the second one is about displaying the current search_path in psql’s prompt.

Let’s start with the improvement for psql. As you probably know, the default prompt in psql looks like this:

postgres@:/home/postgres/ [pgdev] psql
psql (19devel)
Type "help" for help.

postgres=# 

While I do not have an issue with the default prompt, you may want to see more information. An example of what you might do is this:

postgres=# \set PROMPT1 '%M:%> %n@%/%R%#%x '
[local]:5432 postgres@postgres=# 

Now you immediately see that this is a connection over a socket on port 5432, and you’re connected as the “postgres” user to the “postgres” database. If you want to make this permanent, add it to your “.psqlrc” file.
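
For example, making it permanent is just a matter of putting that same line into your ~/.psqlrc:

\set PROMPT1 '%M:%> %n@%/%R%#%x '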

The new prompting option which will come with PostgreSQL 19 is “%S”, and this will give you the search_path:

postgres=# show search_path;
   search_path   
-----------------
 "$user", public
(1 row)

postgres=# \set PROMPT1 '%/%R%x%..%S..# '
postgres=.."$user", public..# set search_path = 'xxxxxxx';
SET
postgres=..xxxxxxx..# 

Nice. You can find all the other prompting options in the documentation of psql, the commit is here.

The second improvement is about logging the duration of automatic analyze. Before PostgreSQL 19 we only had log_autovacuum_min_duration. This logs all autovacuum actions if they cross the specified threshold, which of course includes auto analyze as well, but usually it is autovacuum taking most of the time. This is now separated, and there is a new parameter called “log_autoanalyze_min_duration”. You can easily test this with the following snippet:

postgres=# create table t ( a int , b text );
CREATE TABLE
postgres=# insert into t select i, i::text from generate_series(1,1000000) i;
INSERT 0 1000000
postgres=# alter system set log_autoanalyze_min_duration = '1ms';
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

postgres=# insert into t select i, i::text from generate_series(1,1000000) i;
INSERT 0 1000000

Looking at the log file there is now this:

2025-10-29 08:05:52.744 CET - 1 - 3454 -  - @ - 771LOG:  automatic analyze of table "postgres.public.t"
        avg read rate: 0.033 MB/s, avg write rate: 0.033 MB/s
        buffer usage: 10992 hits, 1 reads, 1 dirtied
        WAL usage: 5 records, 1 full page images, 10021 bytes, 0 buffers full
        system usage: CPU: user: 0.10 s, system: 0.01 s, elapsed: 0.23 s

Also nice, the commit is here.

L’article PostgreSQL 19: Two nice little improvements: log_autoanalyze_min_duration and search_path in the psql prompt est apparu en premier sur dbi Blog.

M-Files connected to Microsoft Copilot

Mon, 2025-10-27 01:22

Should you be a Microsoft 365 subscriber who already owns a Microsoft Copilot licence, you may wish to consider connecting your M-Files Vault to benefit from the capabilities of Microsoft Copilot.

M-Files offers a connector that can meet these needs. In this blog, we will examine the functionality it will provide and the steps required to enable it.

Requirements

In order to successfully install and enable the M-Files connector for Microsoft Copilot, it is essential to meet the following requirements.

  • License for the M-Files Connector for Copilot (the connector is included in the M-Files Business Platform)
  • M-Files Server version 24.9 or later
  • TLS 1.2 or later is enabled
  • Administrative rights in the M-Files vault
  • Administrative rights in the Microsoft Entra ID tenant
  • End-users must have
    • An Entra ID
    • A license including Microsoft Copilot
    • Matching email address in Entra ID and M-Files Manage
Limitations
  • A maximum of 30 connections per tenant (important to understand: this is the total number of all connections, not only M-Files ones)
  • A maximum of 50 million items per tenant
  • A maximum of 500 GB in connection size
  • A maximum of 128 properties per schema
  • A maximum of 4 MB of text content for one item
Use Case description

In this use case, we make documents available to Microsoft Copilot. These can subsequently be used by Copilot, for instance in the context of self-developed Copilot Agents. Please note that such self-developed agents are not exposed to the public: they remain within the security perimeter of your Microsoft 365 tenant.

We use the M-Files property “Copilot” to enable the content. All documents with this property set to “Yes” are enabled for Microsoft Copilot.

Configurations

This section exclusively covers the implemented configuration settings. For comprehensive instructions on configuration, please refer to the official M-Files documentation.

In order to limit the scope of the documents uploaded to Microsoft Copilot, the following configurations were applied. These configurations can be set in the M-Files Admin tool under the M-Files Copilot Connector.

  • Object type: “Document”
  • Additional Conditions: “Copilot = Yes”
Use Case results

In this showcase, I will demonstrate the functionality from the perspectives of both M-Files Vault and Microsoft Copilot. The demonstration will be observed from the viewpoints of two users, each with different access rights in both worlds.

List of the user accounts to demonstrate the use case:

Microsoft 365 Administrator
This user has no access to the M-Files Vault

M-Files Demo User
The user is part of the Microsoft tenant

To illustrate the use case, I will use a document named “EN-CNTNT-eBook-SRGCM11924.pdf”. The document is stored in the M-Files Vault and belongs to the object type “Documents” and class “Other Documents”. The “Other Documents” class includes a property called “Copilot”, which is used to enable the document for Microsoft Copilot, as previously outlined.

If you look at the screenshot on the right, you can see the M-Files Vault. This screenshot shows the document object, and the property “Copilot” that is set to “yes“.

If you set your document up like the screenshot above, it will automatically be uploaded to Microsoft Copilot, and you can then use it from there. The screenshot below shows what it looks like in Microsoft Copilot, based on a search with the configured M-Files vertical.

The Microsoft 365 Administrator obtained the following results when searching with Copilot. In the blue circle, you can see the configured vertical for M-Files. He has an account in the tenant that is configured in the M-Files Vault, but he has no account in, or access to, the Vault itself.

This is what the M-Files Demo user (the user with the cat profile picture) can see.
The user has access to the M-Files Vault and is part of the Microsoft tenant that is set up in the Vault.


The screenshot below presents a side-by-side comparison of the results for additional clarification.

As can be seen in the pictures above, only users with access to the M-Files Vault can access and work with the content in Microsoft Copilot.

Conclusion

The M-Files Connector for Microsoft Copilot is an ideal solution for businesses that already own or plan to purchase a Microsoft 365 subscription as part of their IT strategy. Should you be willing to accept the limitations and consent to the movement of your data outside of the security boundaries of your M-Files vault, then this will be a possible option.

As I outlined in a previous blog post on M-Files and Microsoft, Microsoft Copilot capabilities will, in the near future, be made available in an M-Files Vault that uses SharePoint Embedded. Once enabled, the Microsoft collaboration tools can access the data stored in the M-Files vault directly in the SharePoint Embedded container. This will resolve the security boundary issue, as the data will not leave the company security area. Further details about this option will be provided in one of my subsequent blogs, following the official release planned for November 2025.

Should you have any further questions or wish to discuss your options, we would be delighted to provide support and assistance. Please do not hesitate to get in touch.

L’article M-Files connected to Microsoft Copilot est apparu en premier sur dbi Blog.

pgconf.eu 2025 – RECAP

Sun, 2025-10-26 13:30

I was fortunate to be able to attend pgconf.eu 2025.
This year’s event took place in Riga and once again brought together key members of the community, contributors, committers, sponsors and users from across the world.
I would summarize this year’s event with three main topics: AI/LLMs, PG18 and Monitoring.

AI/LLMs


Compared to last year, the formula changed a bit for the Community Events Day on Tuesday, where for the first time different “Summits” were organized. If you want full details on the event and the schedule, as well as the presentation slides of each talk, you can find them here: Schedule — PostgreSQL Conference Europe 2025
I had the chance to be chosen as a speaker for the AI Summit, which was quite interesting for me. In total there were 13 short talks (10 min) on various topics related to PostgreSQL and AI/LLMs; it was dense, with a lot of interesting implementation ideas – you can find the details and slides here: PGConf.EU 2025 PostgreSQL AI Summit – PostgreSQL wiki. AI/LLMs are the hot topic of the moment, and naturally they came up often during this event, in the talks and in the discussions. You can find the pdf of my presentation here. I explained a business case implementation of a self-service BI agentic RAG that finds the relevant fields for a target KPI and produces data marts as output. Since the talks were short, there was time for a debate at the end between the audience and the speakers. The discussion, nicely moderated by the organizers, was interesting because it exposed the same strong opinions people have in general about AI/LLMs: a blend of distrust and of not fully understanding what it is about or how it could help organizations. Which, in itself, shows that the PostgreSQL community has the same difficulty explaining technical challenges versus organizational/human ones.
My view here is that we don’t really have technical challenges (they are almost irrelevant to most arguments) but rather challenges of human relations and of understanding what value a DBA, for example, brings to the organization. To me, installing and configuring PostgreSQL brings no benefits in terms of personal growth, so automating it is quite natural, and adding AI/LLMs on top is “nice to have” but not fundamentally different from an Ansible playbook. But for the junior DBA, this is an additional abstraction that can be dangerous, because it provides tools whose consequences users cannot fully grasp. This outlines that the main issue with integrating AI/LLM workflows is more a governance/C-level management issue than a technical one, and it cannot be the last excuse for adding to the technical debt.
Jay Miller from Aiven explained how you can fail and end up exposing PII through LLMs and MCPs. This is a really relevant topic, knowing that more and more organizations are facing issues like shadow IT. He was also quite the show host and fun to listen to. I strongly recommend watching the recording when it is released.

PG18

This year’s conference came just after the PostgreSQL 18 release, a version that brought major improvements and initiates changes for future releases to come. I was quite enthusiastic to listen to Melanie Plageman on how she worked on the improvements to freezing in this release. I have to say, usually when I go to an advanced internals talk, I am more confused after than before. But here, Melanie did an amazing job of talking about a technically complex topic without losing the audience.
Gülçin Yıldırım Jelínek, on her side, explained what’s new in PG18 about constraints, like NOT ENFORCED and NOT NULL, and how to use them. Raj Verma, COO of Cybertec, explained during a sponsor talk why compliance matters, how to minimize the risks, and how PostgreSQL helps us be PCI DSS, GDPR, nLPD or HIPAA compliant.
Another interesting talk I was happy to attend was from Floor Drees and Gabriele Bartolini: they explained how they went about bringing the CloudNativePG project into the CNCF.

Monitoring

This leads me to another important topic. I wasn’t looking for it, but it became a bit of a main subject for me over the years as a DBA interested in performance tuning. Monitoring on PostgreSQL was covered by several talks, like Luigi Nardi and his idea of workload fingerprinting with their DBtune tool. Additionally, Lukas Fittl presented pg_stat_plans, an extension which aims at tracking execution plans over time. This is definitely something I am going to try, and I will push for its inclusion in the core extensions if not the core code itself.
The reason for that is obvious to me: PostgreSQL is becoming more and more central to enterprise organizations, and apart from subjects like TDE, monitoring is going to become a key aspect of automation, CloudNativePG and AI/LLM workflows. Making PostgreSQL easier to monitor, and better monitored at its core, will allow leveraging it at all these levels. Cloud companies realize that already, hence their involvement in similar projects.

In the end, this year was once again the occasion for me to think about many relevant topics and exchange with PostgreSQL hackers as well as users from around the world. I came back home with my head full of ideas to investigate.

Additionally, after the conference the videos of each talk will be uploaded to the pgconf Europe YouTube channel: PostgreSQL Europe, but you can already check previous amazing talks and this year’s pgday Paris.

So once again the PostgreSQL flag was flying high!

L’article pgconf.eu 2025 – RECAP est apparu en premier sur dbi Blog.

Alfresco – When a JDBC driver breaks after years of stability

Sun, 2025-10-26 05:06

A customer recently contacted us regarding an issue with an old Alfresco Enterprise 4.2.2 environment. The system had been running stably for years, with a nightly restart of the Tomcat/Alfresco service to help recycle the memory and work around long-standing issues in this version. Earlier this week, one of these nightly restarts suddenly failed and, since then, Alfresco completely refused to start. Unfortunately, that was on Production, so it was rather important! Yes, it is a very old version of Alfresco (on an old OS as well, obviously), but the customer doesn’t want to upgrade or patch it, so it has been left as is since its installation by another company that isn’t around anymore.

The error

When reviewing the Tomcat logs, the startup sequence looked normal until Alfresco attempted to initialize Hibernate and connect to its MySQL database. Here is the relevant part of the log with the error:

oct. 22, 2025 5:45:37 AM org.apache.catalina.core.AprLifecycleListener lifecycleEvent
INFOS: Loaded APR based Apache Tomcat Native library 1.1.33 using APR version 1.4.6.
oct. 22, 2025 5:45:37 AM org.apache.catalina.core.AprLifecycleListener lifecycleEvent
INFOS: APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
oct. 22, 2025 5:45:37 AM org.apache.catalina.core.AprLifecycleListener initializeSSL
INFOS: OpenSSL successfully initialized (OpenSSL 0.9.8y 5 Feb 2013)
oct. 22, 2025 5:45:37 AM org.apache.coyote.AbstractProtocol init
INFOS: Initializing ProtocolHandler ["http-apr-8080"]
oct. 22, 2025 5:45:37 AM org.apache.coyote.AbstractProtocol init
INFOS: Initializing ProtocolHandler ["ajp-apr-8009"]
oct. 22, 2025 5:45:37 AM org.apache.coyote.AbstractProtocol init
INFOS: Initializing ProtocolHandler ["http-bio-8443"]
oct. 22, 2025 5:45:37 AM org.apache.catalina.startup.Catalina load
INFOS: Initialization processed in 995 ms
...
2025-10-22 05:46:15,118  INFO  [alfresco.repo.admin] [localhost-startStop-1] Using database URL 'jdbc:mysql://localhost:3306/alfresco?autoReconnect=true' with user 'alfresco'.
2025-10-22 05:46:15,491  INFO  [alfresco.repo.admin] [localhost-startStop-1] Connected to database MySQL version 5.6.28-log
2025-10-22 05:46:21,836  INFO  [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Starting 'sysAdmin' subsystem, ID: [sysAdmin, default]
2025-10-22 05:46:21,863  INFO  [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Startup of 'sysAdmin' subsystem, ID: [sysAdmin, default] complete
2025-10-22 05:46:21,902  INFO  [domain.schema.SchemaBootstrap] [localhost-startStop-1] Ignoring script patch (post-Hibernate): patch.db-V4.2-metadata-query-indexes
2025-10-22 05:46:31,210  WARN  [hibernate.cfg.SettingsFactory] [localhost-startStop-1] Could not obtain connection metadata
org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (java.lang.NullPointerException)
        at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1549)
        at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1388)
        at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
        at org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider.getConnection(LocalDataSourceConnectionProvider.java:81)
        at org.hibernate.cfg.SettingsFactory.buildSettings(SettingsFactory.java:84)
        at org.hibernate.cfg.Configuration.buildSettings(Configuration.java:2079)
        at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1304)
        at org.springframework.orm.hibernate3.LocalSessionFactoryBean.newSessionFactory(LocalSessionFactoryBean.java:860)
        at org.springframework.orm.hibernate3.LocalSessionFactoryBean.buildSessionFactory(LocalSessionFactoryBean.java:779)
        at org.springframework.orm.hibernate3.AbstractSessionFactoryBean.afterPropertiesSet(AbstractSessionFactoryBean.java:211)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1477)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1417)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
        at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291)
        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:190)
        at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:563)
        at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895)
        at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425)
        at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:276)
        at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:197)
        at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47)
        at org.alfresco.web.app.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:63)
        at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:5068)
        at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5584)
        at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147)
        at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:899)
        at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:875)
        at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:652)
        at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:1092)
        at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1984)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: java.lang.NullPointerException
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1073)
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:987)
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:982)
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:927)
        at com.mysql.jdbc.ConnectionImpl.buildCollationMapping(ConnectionImpl.java:1023)
        at com.mysql.jdbc.ConnectionImpl.initializePropsFromServer(ConnectionImpl.java:3451)
        at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2443)
        at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2213)
        at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:797)
        at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
        at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:389)
        at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:305)
        at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
        at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
        at org.apache.commons.dbcp.BasicDataSource.validateConnectionFactory(BasicDataSource.java:1556)
        at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1545)
        ... 37 more
Caused by: java.lang.NullPointerException
        at java.util.TreeMap.put(TreeMap.java:556)
        at com.mysql.jdbc.Util.resultSetToMap(Util.java:506)
        at com.mysql.jdbc.ConnectionImpl.buildCollationMapping(ConnectionImpl.java:964)
        ... 53 more
2025-10-22 05:46:31,597  INFO  [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Stopping 'sysAdmin' subsystem, ID: [sysAdmin, default]
2025-10-22 05:46:31,597  INFO  [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Stopped 'sysAdmin' subsystem, ID: [sysAdmin, default]
2025-10-22 05:46:31,607  ERROR [web.context.ContextLoader] [localhost-startStop-1] Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'extranetDialectResourceLoader' defined in class path resource [alfresco/extension/context/typo3-context.xml]: Cannot create inner bean 'org.springframework.beans.factory.config.PropertyPathFactoryBean#70397d85' of type [org.springframework.beans.factory.config.PropertyPathFactoryBean] while setting bean property 'dialectClass'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.beans.factory.config.PropertyPathFactoryBean#70397d85' defined in class path resource [alfresco/extension/context/typo3-context.xml]: Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'extranetDialect': FactoryBean threw exception on object creation; nested exception is org.hibernate.exception.GenericJDBCException: Cannot open connection
        at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:281)
        at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:120)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1325)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1086)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:517)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
        at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291)
        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:190)
        at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:580)
        at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895)
        at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425)
        at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:276)
        at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:197)
        at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47)
        at org.alfresco.web.app.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:63)
        at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:5068)
        at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5584)
        at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147)
        at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:899)
        at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:875)
        at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:652)
        at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:1092)
        at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1984)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.beans.factory.config.PropertyPathFactoryBean#70397d85' defined in class path resource [alfresco/extension/context/typo3-context.xml]: Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'extranetDialect': FactoryBean threw exception on object creation; nested exception is org.hibernate.exception.GenericJDBCException: Cannot open connection
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:527)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
        at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:270)
        ... 29 more
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'extranetDialect': FactoryBean threw exception on object creation; nested exception is org.hibernate.exception.GenericJDBCException: Cannot open connection
        at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:149)
        at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:102)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1429)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:245)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:190)
        at org.springframework.beans.factory.config.PropertyPathFactoryBean.setBeanFactory(PropertyPathFactoryBean.java:186)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeAwareMethods(AbstractAutowireCapableBeanFactory.java:1439)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1408)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
        ... 31 more
Caused by: org.hibernate.exception.GenericJDBCException: Cannot open connection
        at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:103)
        at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:91)
        at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43)
        at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:29)
        at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:426)
        at org.hibernate.jdbc.ConnectionManager.getConnection(ConnectionManager.java:144)
        at org.hibernate.jdbc.BorrowedConnectionProxy.invoke(BorrowedConnectionProxy.java:150)
        at com.sun.proxy.$Proxy5.setAutoCommit(Unknown Source)
        at org.alfresco.hibernate.DialectFactoryBean.getObject(DialectFactoryBean.java:67)
        at org.alfresco.hibernate.DialectFactoryBean.getObject(DialectFactoryBean.java:39)
        at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:142)
        ... 39 more
Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (java.lang.NullPointerException)
        at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1549)
        at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1388)
        at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
        at org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider.getConnection(LocalDataSourceConnectionProvider.java:81)
        at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:423)
        ... 45 more
Caused by: java.sql.SQLException: java.lang.NullPointerException
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1073)
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:987)
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:982)
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:927)
        at com.mysql.jdbc.ConnectionImpl.buildCollationMapping(ConnectionImpl.java:1023)
        at com.mysql.jdbc.ConnectionImpl.initializePropsFromServer(ConnectionImpl.java:3451)
        at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2443)
        at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2213)
        at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:797)
        at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
        at sun.reflect.GeneratedConstructorAccessor103.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
        at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:389)
        at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:305)
        at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
        at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
        at org.apache.commons.dbcp.BasicDataSource.validateConnectionFactory(BasicDataSource.java:1556)
        at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1545)
        ... 49 more
Caused by: java.lang.NullPointerException
        at java.util.TreeMap.put(TreeMap.java:556)
        at com.mysql.jdbc.Util.resultSetToMap(Util.java:506)
        at com.mysql.jdbc.ConnectionImpl.buildCollationMapping(ConnectionImpl.java:964)
        ... 64 more
oct. 22, 2025 5:46:31 AM org.apache.catalina.core.StandardContext listenerStart
GRAVE: Exception lors de l'envoi de l'évènement contexte initialisé (context initialized) à l'instance de classe d'écoute (listener) org.alfresco.web.app.ContextLoaderListener
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'extranetDialectResourceLoader' defined in class path resource [alfresco/extension/context/typo3-context.xml]: Cannot create inner bean 'org.springframework.beans.factory.config.PropertyPathFactoryBean#70397d85' of type [org.springframework.beans.factory.config.PropertyPathFactoryBean] while setting bean property 'dialectClass'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.beans.factory.config.PropertyPathFactoryBean#70397d85' defined in class path resource [alfresco/extension/context/typo3-context.xml]: Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'extranetDialect': FactoryBean threw exception on object creation; nested exception is org.hibernate.exception.GenericJDBCException: Cannot open connection
        at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:281)
        at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:120)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1325)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1086)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:517)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
        at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291)
        ...

The key clue here appears to be the NullPointerException from the buildCollationMapping(), which probably indicates a problem during the JDBC driver’s initialization of MySQL collation data.

DB & JDBC Connector Details

Given that the error appeared to be linked to the DB, I checked that it was properly running and responding, while also retrieving its version:

mysql> SELECT VERSION();
+------------+
| VERSION()  |
+------------+
| 5.6.28-log |
+------------+

1 row in set (0.01 sec)

Everything looked fine, and the collation list was consistent with no apparent missing data and no “null” values:

mysql> SHOW COLLATION;
+---------------------+----------+-----+---------+----------+---------+
| Collation           | Charset  | Id  | Default | Compiled | Sortlen |
+---------------------+----------+-----+---------+----------+---------+
| big5_chinese_ci     | big5     |   1 | Yes     | Yes      |       1 |
| big5_bin            | big5     |  84 |         | Yes      |       1 |
| dec8_swedish_ci     | dec8     |   3 | Yes     | Yes      |       1 |
| dec8_bin            | dec8     |  69 |         | Yes      |       1 |
...

Another important thing to check is the JDBC Connector used by Tomcat to connect to the database (as it’s not using the mysql binaries to connect, as I did above):

[alfresco@alf01 ~]$ cd $CATALINA_HOME
[alfresco@alf01 tomcat]$
[alfresco@alf01 tomcat]$ ls -l lib/mysql-*
-rw-r--r--. 1 alfresco alfresco  802721 Jan 21  2015 lib/mysql-connector-java-5.1.20-bin.jar
[alfresco@alf01 tomcat]$

The MySQL JDBC Connector version 5.1.20 was released in May 2012. Looking into known MySQL Connector/J bugs, it appeared that several problems affected it, some triggered by long-running MySQL instances and some related to buildCollationMapping, etc… The MySQL DB of this environment had been running for the past 2.5 years without any restart. Therefore, the first thing we suggested was simply to restart the DB. Unfortunately, it didn’t help.
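
For the record, the restart itself was nothing fancy; a minimal sketch, assuming MySQL runs as a classic init service named mysqld on this old host (the exact service name depends on how MySQL was installed):

[root@alf01 ~]# service mysqld restart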

Patching the JDBC Connector

Since patching/upgrading Alfresco and the DB was out of the question, the only easy thing we could try, as a next step, was to patch the JDBC Connector. Looking at the MySQL download website for that component (cf. here), the latest minor version available is 5.1.49, released in April 2020. Therefore, I downloaded and extracted it, and then stopped the (not-really) running Tomcat process:

[alfresco@alf01 tomcat]$ wget "https://cdn.mysql.com/archives/mysql-connector-java-5.1/mysql-connector-java-5.1.49.zip"
--2025-10-22 09:43:47--  https://cdn.mysql.com/archives/mysql-connector-java-5.1/mysql-connector-java-5.1.49.zip
Resolving cdn.mysql.com... 2.19.75.100, 2001:918:ffcb:1a6::1d68, 2001:918:ffcb:181::1d68
Connecting to cdn.mysql.com|2.19.75.100|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3722067 (3.5M) [application/zip]
Saving to: “mysql-connector-java-5.1.49.zip”

100%[============================================================>] 3,722,067   19.1M/s   in 0.2s

2025-10-22 09:43:48 (19.1 MB/s) - “mysql-connector-java-5.1.49.zip” saved [3722067/3722067]

[alfresco@alf01 tomcat]$
[alfresco@alf01 tomcat]$ unzip mysql-connector-java-5.1.49.zip
Archive:  mysql-connector-java-5.1.49.zip
   creating: mysql-connector-java-5.1.49/
   creating: mysql-connector-java-5.1.49/src/
   creating: mysql-connector-java-5.1.49/src/com/
   ...
   inflating: mysql-connector-java-5.1.49/src/testsuite/ssl-test-certs/server-cert.pem
   inflating: mysql-connector-java-5.1.49/src/testsuite/ssl-test-certs/server-key.pem
[alfresco@alf01 tomcat]$
[alfresco@alf01 tomcat]$ ../alfresco.sh stop tomcat
  **
  ** Alfresco has been stopped successfully
  **
[alfresco@alf01 tomcat]$

The final step was to keep a backup copy of both the current and the new versions of the JAR. Then I replaced the file in the Tomcat lib folder before restarting the process:

[alfresco@alf01 tomcat]$ ls -l lib/mysql-*
-rw-r--r--. 1 alfresco alfresco  802721 Jan 21  2015 lib/mysql-connector-java-5.1.20-bin.jar
[alfresco@alf01 tomcat]$
[alfresco@alf01 tomcat]$ mv lib/mysql-connector-java-5.1.20-bin.jar ./
[alfresco@alf01 tomcat]$ mv mysql-connector-java-5.1.49/mysql-connector-java-5.1.49.jar ./
[alfresco@alf01 tomcat]$ cp -p mysql-connector-java-5.1.49.jar lib/
[alfresco@alf01 tomcat]$ rm -rf mysql-connector-java-5.1.49.zip mysql-connector-java-5.1.49
[alfresco@alf01 tomcat]$
[alfresco@alf01 tomcat]$ ls -l lib/mysql-*
-rw-r--r--. 1 alfresco alfresco 1006904 Apr 20  2020 lib/mysql-connector-java-5.1.49.jar
[alfresco@alf01 tomcat]$
[alfresco@alf01 tomcat]$ ../alfresco.sh start tomcat
  **
  ** Alfresco has been started
  **
[alfresco@alf01 tomcat]$

Once done, the Tomcat logs looked much better: there were no database connection errors anymore, Alfresco was able to finish its startup, and users were able to log in to Alfresco Share again:

oct. 22, 2025 8:27:35 AM org.apache.catalina.core.AprLifecycleListener lifecycleEvent
INFOS: Loaded APR based Apache Tomcat Native library 1.1.33 using APR version 1.4.6.
oct. 22, 2025 8:27:35 AM org.apache.catalina.core.AprLifecycleListener lifecycleEvent
INFOS: APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
oct. 22, 2025 8:27:35 AM org.apache.catalina.core.AprLifecycleListener initializeSSL
INFOS: OpenSSL successfully initialized (OpenSSL 0.9.8y 5 Feb 2013)
oct. 22, 2025 8:27:35 AM org.apache.coyote.AbstractProtocol init
INFOS: Initializing ProtocolHandler ["http-apr-8080"]
oct. 22, 2025 8:27:35 AM org.apache.coyote.AbstractProtocol init
INFOS: Initializing ProtocolHandler ["ajp-apr-8009"]
oct. 22, 2025 8:27:35 AM org.apache.coyote.AbstractProtocol init
INFOS: Initializing ProtocolHandler ["http-bio-8443"]
oct. 22, 2025 8:27:35 AM org.apache.catalina.startup.Catalina load
INFOS: Initialization processed in 989 ms
...
2025-10-22 08:28:16,988  INFO  [alfresco.repo.admin] [localhost-startStop-1] Using database URL 'jdbc:mysql://localhost:3306/alfresco?autoReconnect=true' with user 'alfresco'.
2025-10-22 08:28:17,300  INFO  [alfresco.repo.admin] [localhost-startStop-1] Connected to database MySQL version 5.6.28-log
2025-10-22 08:28:23,797  INFO  [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Starting 'sysAdmin' subsystem, ID: [sysAdmin, default]
2025-10-22 08:28:23,822  INFO  [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Startup of 'sysAdmin' subsystem, ID: [sysAdmin, default] complete
2025-10-22 08:28:23,859  INFO  [domain.schema.SchemaBootstrap] [localhost-startStop-1] Ignoring script patch (post-Hibernate): patch.db-V4.2-metadata-query-indexes
Wed Oct 22 08:28:33 CEST 2025 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Wed Oct 22 08:28:33 CEST 2025 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.

SETTING APPLICATION CONTEXT

2025-10-22 08:28:33,976  INFO  [extensions.webscripts.TemplateProcessorRegistry] [localhost-startStop-1] Registered template processor Repository Template Processor for extension ftl
2025-10-22 08:28:33,982  INFO  [extensions.webscripts.ScriptProcessorRegistry] [localhost-startStop-1] Registered script processor Repository Script Processor for extension js
2025-10-22 08:28:40,396  INFO  [domain.schema.SchemaBootstrap] [localhost-startStop-1] Connecting to database: jdbc:mysql://localhost:3306/alfresco?autoReconnect=true, UserName=alfresco@localhost, MySQL Connector Java
2025-10-22 08:28:40,396  INFO  [domain.schema.SchemaBootstrap] [localhost-startStop-1] Schema managed by database dialect org.hibernate.dialect.MySQLInnoDBDialect.
2025-10-22 08:28:41,104  INFO  [domain.schema.SchemaBootstrap] [localhost-startStop-1] No changes were made to the schema.
2025-10-22 08:28:42,093  INFO  [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Starting 'Authentication' subsystem, ID: [Authentication, managed, alfinst]
2025-10-22 08:28:42,545  INFO  [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Startup of 'Authentication' subsystem, ID: [Authentication, managed, alfinst] complete
...
oct. 22, 2025 8:30:09 AM org.apache.coyote.AbstractProtocol start
INFOS: Starting ProtocolHandler ["http-apr-8080"]
oct. 22, 2025 8:30:09 AM org.apache.coyote.AbstractProtocol start
INFOS: Starting ProtocolHandler ["ajp-apr-8009"]
oct. 22, 2025 8:30:09 AM org.apache.coyote.AbstractProtocol start
INFOS: Starting ProtocolHandler ["http-bio-8443"]
oct. 22, 2025 8:30:09 AM org.apache.catalina.startup.Catalina start
INFOS: Server startup in 153447 ms
2025-10-22 08:30:10,121  INFO  [web.site.EditionInterceptor] [ajp-apr-8009-exec-7] Successfully retrieved license information from Alfresco.
2025-10-22 08:30:39,780  INFO  [web.scripts.ImapServerStatus] [ajp-apr-8009-exec-3] Successfully retrieved IMAP server status from Alfresco: disabled
2025-10-22 08:30:40,107  INFO  [web.scripts.SyncModeConfig] [ajp-apr-8009-exec-3] Successfully retrieved Sync Mode configuration from Alfresco: ON_PREMISE

This issue was pretty interesting, as it shows that even on a system that has not changed for more than 10 years (same DB/Alfresco/Tomcat/Java/JDBC Connector versions) and that restarts daily, you can still face bugs at any time… I can’t check all the details of these bugs while writing this blog, as the MySQL bugs website is horribly slow and crashing today, but it clearly shows that even if you’re maintaining old platforms that can’t be upgraded, it’s still worth keeping small components like JDBC connectors up to date.

L’article Alfresco – When a JDBC driver breaks after years of stability est apparu en premier sur dbi Blog.

Dctm – D2-DAR installation fails because of workflow trackers

Sat, 2025-10-25 02:11

In a Documentum upgrade project from 20.x to 23.4 earlier this year, I faced an issue which prevented the installation/upgrade of D2. The problem was located inside the D2-DAR while it was trying to update the workflow trackers, which failed because of an ORA-01427 error. The exact error displayed was this one:

[emc.installer] dmbasic output : [DM_SERVER_E_EXEC_SQL]error:  "The execsql method failed due to a database error: ORA-01427: single-row subquery returns more than one row"

Before that happened, I had successfully upgraded 7 or 8 environments from 20.x to 23.4 for that customer, and after that, I did around 20 more. However, this issue only happened once, so I’m not sure what exactly was wrong, and the Application/Business Teams couldn’t find what really caused the initial issue (inside the DB) either… In any case, to start my investigation, I quickly compared the generated logs of the D2-DAR installation on a working vs a non-working Repository.

Working Repository logs:

###
## Working repo:
###
[dmadmin@cs-0 DARsInternal]$ cat dar-deploy-D2-DAR.dar-REPO1-20250703-133024.log
[INFO]  ******************************************************
[INFO]  * Headless Composer
[INFO]  * Version:        23.4.0000.0120
[INFO]  * Java version:   17.0.13 (64bit)
[INFO]  * Java home:      $JAVA_HOME
[INFO]  * Set storage type: false
[INFO]  *
[INFO]  * DAR file:       $DM_HOME/install/DARsInternal/D2-DAR.dar
[INFO]  * Project name:   D2-DAR
[INFO]  * Built by Composer: 23.4.0000.0027
[INFO]  *
...
[INFO]  dmbasic output : Finished creating required indexes on D2 types.
[INFO]  dmbasic output : Copied workflow_id in to d2c_workflow_tracker object
[INFO]  dmbasic output : Copied process_id in to d2c_workflow_tracker object
[INFO]  dmbasic output : Copied supervisor_name in to d2c_workflow_tracker object
[INFO]  dmbasic output : Adding d2_digital_sign_tracker_user user to the group d2_digital_signature_tracker
[INFO]  dmbasic output : Fetched the group d2_digital_signature_tracker
[INFO]  dmbasic output : Check for 'Public searches' folder to migrate.
[INFO]  dmbasic output : Folder '/Resources/D2/[MIGRATE] Public searches' not found. Do not migrate.
[INFO]  dmbasic output : Disconnected from the server.
[INFO]  Finished executing post-install script Thu Jul 03 13:32:13 UTC 2025
[INFO]  Project 'D2-DAR' was successfully installed.
[dmadmin@cs-0 DARsInternal]$

Non-working Repository logs:

###
## Non working repo:
###
[dmadmin@cs-0 DARsInternal]$ cat dar-deploy-D2-DAR.dar-REPO2-20250704-144219.log
[INFO]  ******************************************************
[INFO]  * Headless Composer
[INFO]  * Version:        23.4.0000.0120
[INFO]  * Java version:   17.0.13 (64bit)
[INFO]  * Java home:      $JAVA_HOME
[INFO]  * Set storage type: false
[INFO]  *
[INFO]  * DAR file:       $DM_HOME/install/DARsInternal/D2-DAR.dar
[INFO]  * Project name:   D2-DAR
[INFO]  * Built by Composer: 23.4.0000.0027
[INFO]  *
...
[INFO]  dmbasic output : Finished creating required indexes on D2 types.
[INFO]  dmbasic output : Copied workflow_id in to d2c_workflow_tracker object
[INFO]  dmbasic output : [DM_SERVER_E_EXEC_SQL]error:  "The execsql method failed due to a database error: ORA-01427: single-row subquery returns more than one row"
[INFO]  dmbasic output :
[INFO]  dmbasic output : [DM_TYPE_MGR_E_INDEX_ALREADY_EXISTS]error:  "The index being created on type d2_recently_used_profile already exists. The existing index is represented by dmi_index object with id 1f012345800005a0."
[INFO]  dmbasic output :
[INFO]  dmbasic output : [DM_TYPE_MGR_E_INDEX_ALREADY_EXISTS]error:  "The index being created on type d2_recently_used_profile already exists. The existing index is represented by dmi_index object with id 1f0123458000059f."
[INFO]  dmbasic output :
[INFO]  dmbasic output :
[INFO]  dmbasic output : Exiting with dmExit(-1)
[ERROR]  Procedure execution failed with dmbasic exit value : 255
[INFO]  D2-DAR install failed.
[ERROR]  Unable to install dar file $DM_HOME/install/DARsInternal/D2-DAR.dar
com.emc.ide.installer.PostInstallException: Error running post-install procedure "PostInstall". Please contact the procedure owner to verify if it is functioning properly.
        at internal.com.emc.ide.installer.DarInstaller.postInstall(DarInstaller.java:1574)
        at internal.com.emc.ide.installer.DarInstaller.doInstall(DarInstaller.java:669)
        ...
Caused by: com.emc.ide.external.dfc.procedurerunner.ProcedureRunnerException: Procedure execution failed with dmbasic exit value : 255
        at com.emc.ide.external.dfc.procedurerunner.ProcedureRunnerUtils.executeDmBasic(ProcedureRunnerUtils.java:285)
        at com.emc.ide.external.dfc.procedurerunner.ProcedureRunner.execute(ProcedureRunner.java:55)
        ...
[ERROR]  Failed to install DAR
Unable to install dar file $DM_HOME/install/DARsInternal/D2-DAR.dar
        at com.emc.ant.installer.api.InstallerAntTask.installDar(InstallerAntTask.java:273)
        at com.emc.ant.installer.api.InstallerAntTask.execute(InstallerAntTask.java:135)
        ...
Caused by: com.emc.ide.installer.PostInstallException: Error running post-install procedure "PostInstall". Please contact the procedure owner to verify if it is functioning properly.
        at internal.com.emc.ide.installer.DarInstaller.postInstall(DarInstaller.java:1574)
        at internal.com.emc.ide.installer.DarInstaller.doInstall(DarInstaller.java:669)
        ...
Caused by: com.emc.ide.external.dfc.procedurerunner.ProcedureRunnerException: Procedure execution failed with dmbasic exit value : 255
        at com.emc.ide.external.dfc.procedurerunner.ProcedureRunnerUtils.executeDmBasic(ProcedureRunnerUtils.java:285)
        at com.emc.ide.external.dfc.procedurerunner.ProcedureRunner.execute(ProcedureRunner.java:55)
        at internal.com.emc.ide.installer.DarInstaller.postInstall(DarInstaller.java:1570)
        ... 42 more

As often with DAR installation failures, there isn’t a lot of information. It’s clear that the installation failed because of some bad data inside the DB (a query that should return only 1 row is returning more), but there aren’t many more details except that. By comparing the 2 logs, you can see that the issue is located between these 2 lines:

[INFO]  dmbasic output : Copied workflow_id in to d2c_workflow_tracker object
[INFO]  dmbasic output : Copied process_id in to d2c_workflow_tracker object

Therefore, finding the root cause means finding what is executed between these 2 lines. A DAR is essentially a zip (Documentum Archive), so you can just extract it and look at its content. In this case (D2 23.4), I could find the relevant source code inside the file named “D2-DAR/bin/content/55/-1569930955/runnableContent.crtext”. Here is an extract of that file:

    Print "Copied workflow_id in to d2c_workflow_tracker object"

    sQuery = "update d2c_workflow_tracker_s set process_id = " & _
             "(select child_id from dm_relation_s where dm_relation_s.parent_id = " & _
             "d2c_workflow_tracker_s.r_object_id and dm_relation_s.relation_name = " & _
             "'D2_WF_TRACKER_TEMPLATE') where (process_id IS NULL or process_id = '0000000000000000') " & _
             " and exists (select child_id from dm_relation_s where dm_relation_s.parent_id = " & _
             "d2c_workflow_tracker_s.r_object_id  and dm_relation_s.relation_name = 'D2_WF_TRACKER_TEMPLATE')"

    bRet = dmAPIExec("execsql," & sess & "," & sQuery)

    if (not bRet) then
      Call Err_Handle(sess, "0")
      Print "Exiting with dmExit(-1)"
      dmExit(-1)
    end if

    Print "Copied process_id in to d2c_workflow_tracker object"

Therefore, the query that is causing the issue is the one referenced above as “sQuery”, i.e. this one, transformed into plain SQL:

update d2c_workflow_tracker_s 
  set process_id=(select child_id from dm_relation_s 
    where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id 
      and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE') 
  where (process_id IS NULL or process_id='0000000000000000') 
    and exists (select child_id from dm_relation_s 
      where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id 
        and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE');

And indeed, executing this query produces the same result:

SQL> update d2c_workflow_tracker_s 
       set process_id=(select child_id from dm_relation_s 
         where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id 
           and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE') 
       where (process_id IS NULL or process_id='0000000000000000') 
         and exists (select child_id from dm_relation_s 
           where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id 
             and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE');
update d2c_workflow_tracker_s set process_id=(select child_id from dm_relation_s where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE') where (process_id IS NULL or process_id='0000000000000000') and exists (select child_id from dm_relation_s where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE')
                                                *
ERROR at line 1:
ORA-01427: single-row subquery returns more than one row

Unfortunately, because of the way the query is formed (mapping each d2c_workflow_tracker_s.process_id 1-by-1 from the dm_relation_s.child_id), you cannot just execute the sub-query (whatever is present between the parentheses):

SQL> select child_id from dm_relation_s 
       where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id 
         and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE';
select child_id from dm_relation_s where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE'
                                                                   *
ERROR at line 1:
ORA-00904: "D2C_WORKFLOW_TRACKER_S"."R_OBJECT_ID": invalid identifier

Even if you “correct” it (by adding the 2nd table name in the “from”), then it will just display something that isn’t directly usable:

SQL> select child_id from dm_relation_s, d2c_workflow_tracker_s 
       where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id 
         and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE';

CHILD_ID
----------------
4b01234580044d0f
4b01234580057910
4b01234580002572
...
4b01234580057910
4b01234580057910
4b01234580311d91

1406 rows selected.

What to do then? Well, you need to go further. The above query returns 1406 rows (the number of dm_process being tracked or, to be more accurate, the number of relations between a Workflow Tracker (parent_id) and its dm_process (child_id)). Assuming the error is legit, that would mean there are probably fewer Workflow Trackers than that number. Therefore, after a bit of reflection, I executed the following query to find all Workflow Trackers and the count of relations each of them has. Under normal conditions, each Workflow Tracker should have a single relation (meaning a single dm_process, a 1-to-1 mapping), but that wasn’t the case:

SQL> select d2c_workflow_tracker_s.r_object_id, count(dm_relation_s.parent_id) 
       from d2c_workflow_tracker_s, dm_relation_s 
       where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id 
         and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE' 
       group by d2c_workflow_tracker_s.r_object_id 
       order by 2;

R_OBJECT_ID      COUNT(DM_RELATION_S.PARENT_ID)
---------------- ------------------------------
09012345800df7eb                              1
09012345800df7ed                              1
...
0901234580313e31                              1
090123458004fdde                              2
090123458005cdf5                              2
090123458005fb44                              2

1403 rows selected.

There are only 1403 Workflow Trackers, which means that there are 3 “additional” relations (or 3 dm_process which are assigned to an already-used Workflow Tracker). To get more details on the Workflows and their Workflow Trackers, these kinds of queries can be executed:

SQL> select parent_id, child_id from dm_relation_s 
       where parent_id IN ('090123458004fdde','090123458005cdf5','090123458005fb44') 
         and relation_name='D2_WF_TRACKER_TEMPLATE' 
       order by 1, 2;

PARENT_ID        CHILD_ID
---------------- ----------------
090123458004fdde 4b01234580002575
090123458004fdde 4b01234580057910
090123458005cdf5 4b01234580057910
090123458005cdf5 4b01234580057910
090123458005fb44 4b01234580057910
090123458005fb44 4b01234580057910



SQL> select r_object_id, object_name, r_creator_name from dm_sysobject_sp 
       where r_object_id IN ('090123458004fdde','090123458005cdf5','090123458005fb44');

R_OBJECT_ID      OBJECT_NAME                                                            R_CREATOR_NAME
---------------- ---------------------------------------------------------------------- -----------------
090123458004fdde Start Import Approval October 29, 2021 at 10:34:45 AM UTC              OQSOP01
090123458005cdf5 Start Ready For Approval November 11, 2021 at 3:55:04 PM UTC           OQAuthor01
090123458005fb44 Start In Review November 16, 2021 at 12:01:32 PM UTC                   OQSOP01



SQL> select r_object_id, object_name from dm_sysobject_sp 
       where r_object_id IN ('4b01234580002575','4b01234580057910');

R_OBJECT_ID      OBJECT_NAME
---------------- ----------------------
4b01234580002575 WF_Import
4b01234580057910 WF_Review_Approve



SQL> select * from dm_relation_sp 
       where parent_id='090123458005cdf5' 
         and relation_name='D2_WF_TRACKER_TEMPLATE';

R_OBJECT_ID      RELATION_NAME                    PARENT_ID        CHILD_ID         CHILD_LABEL                      PERMANENT_LINK   ORDER_NO EFFECTIVE EXPIRATIO DESCRIPTION                 I_PARTITION I_IS_REPLICA   I_VSTAMP
---------------- -------------------------------- ---------------- ---------------- -------------------------------- -------------- ---------- --------- --------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------- ------------ ----------
370123458000d92a D2_WF_TRACKER_TEMPLATE           090123458005cdf5 4b01234580057910                                               0          0 01-JAN-01 01-JAN-01                               0             0          0
370123458000cd4f D2_WF_TRACKER_TEMPLATE           090123458005cdf5 4b01234580057910                                               0          0 01-JAN-01 01-JAN-01                               0             0          0

Clearly, there is a Data/Business problem. Two of the three Workflow Trackers have an exact duplicate relation for the same dm_process while the third one tracks two different dm_process, which is even more surprising.
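If you face a similar situation, a more generic query (a hedged sketch based on the same tables as above) can help separate exact duplicate relations from trackers that are genuinely linked to two different dm_process objects:

SQL> select parent_id, child_id, count(*) "DUPLICATES"
       from dm_relation_s
       where relation_name='D2_WF_TRACKER_TEMPLATE'
       group by parent_id, child_id
       having count(*) > 1
       order by 3 desc;

Any row returned by this query is a duplicated dm_relation object, which is a good candidate for cleanup, to be validated with the Application/Business teams first.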

To be able to proceed, I noted all the IDs and informed the Application/Business teams, and then I did what the SQL command from the D2-DAR was supposed to do. Therefore, for each impacted Workflow Tracker, I took one of its 2 child_id values and assigned it to the process_id:

SQL> update d2c_workflow_tracker_s 
       set process_id='4b01234580002575' 
       where (process_id IS NULL or process_id='0000000000000000') 
         and r_object_id='090123458004fdde';

1 row updated.



SQL> update d2c_workflow_tracker_s 
       set process_id='4b01234580057910' 
       where (process_id IS NULL or process_id='0000000000000000') 
         and r_object_id='090123458005fb44';

1 row updated.



SQL> update d2c_workflow_tracker_s 
       set process_id='4b01234580057910' 
       where (process_id IS NULL or process_id='0000000000000000') 
         and r_object_id='090123458005cdf5';

1 row updated.

After that, I re-triggered the query which was previously failing and this time it worked properly for all the remaining Workflow Trackers:

SQL> update d2c_workflow_tracker_s 
       set process_id=(select child_id from dm_relation_s 
         where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id 
           and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE') 
       where (process_id IS NULL or process_id='0000000000000000') 
         and exists (select child_id from dm_relation_s 
           where dm_relation_s.parent_id=d2c_workflow_tracker_s.r_object_id 
             and dm_relation_s.relation_name='D2_WF_TRACKER_TEMPLATE');

1400 rows updated.
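Before restarting the pipeline, a quick way to double-check that nothing was left behind is to count the Workflow Trackers that still have no process_id (a hedged sketch reusing the same columns as above; anything returned here either has no D2_WF_TRACKER_TEMPLATE relation at all or still needs a manual fix):

SQL> select count(*) "STILL_EMPTY"
       from d2c_workflow_tracker_s
       where process_id IS NULL or process_id='0000000000000000';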

I was then able to start the upgrade pipeline from scratch again, and this time it completed properly. As previously stated, the Application/Business teams still haven’t found what caused these 3 rows, but it’s only a DEV environment, so I can only assume it was some human error at some point, like a wrong DQL being executed that duplicated some values, or something similar. In any case, what is interesting for this blog is the investigation process and the way to get to the bottom of things by searching inside the D2 DAR and analyzing the DB content.

L’article Dctm – D2-DAR installation fails because of workflow trackers est apparu en premier sur dbi Blog.

SQL Server: Do you know how to use SERVERPROPERTY()  function in a useful way?

Wed, 2025-10-22 04:04

The SERVERPROPERTY() function is very convenient for quickly getting information about the instance and the server. Sometimes it’s good to come back to basics…

How to begin? Read the documentation of course (here), but you will see a lot of information…

After reading the documentation, how do you get the first useful information?

The first information is about the installed SQL Server: the server name, instance name, build and version, with also some advanced information like the last update date, the KB or CU.

Here is the first query:

SELECT
  SERVERPROPERTY('MachineName') AS [Server_Name],
  SERVERPROPERTY('InstanceName') AS [Instance_Name],
  SERVERPROPERTY('Edition') AS [Edition],
  SERVERPROPERTY('ProductVersion') AS [Product_Version],
  SERVERPROPERTY('ProductLevel') AS [Product_Level],
  SERVERPROPERTY('ProductBuild') AS [Product_Build],
  SERVERPROPERTY('ProductMajorVersion') AS [Advanced_Product_Major_Version],
  SERVERPROPERTY('ProductMinorVersion') AS [Advanced_Product_Minor_Version],
  SERVERPROPERTY('ProductUpdateReference') AS [Advanced_KB_Version],
  SERVERPROPERTY('ProductUpdateLevel') AS [Advanced_Update_Level],
  SERVERPROPERTY('ResourceLastUpdateDateTime') AS [Advanced_Last_Update_Date];

Result of a test:

I ran all queries on my SQL Server container under Visual Studio Code, with SQL Server 2025 CTP 2.1.

You can find more info about it here (dbi Blog).

Next, some other useful information is whether the instance has features installed, like PolyBase or Full-Text Search, but also some configuration like FILESTREAM or HA.

Here is the query:

SELECT
  SERVERPROPERTY('MachineName') AS [Server_Name],
  SERVERPROPERTY('InstanceName') AS [Instance_Name],
  SERVERPROPERTY('IsPolyBaseInstalled') AS [Is_Polybase_Installed],
  SERVERPROPERTY('IsFullTextInstalled') AS [Is_Full-Text_Search_Installed],
  SERVERPROPERTY('IsAdvancedAnalyticsInstalled') AS [Is_Advanced_Analytics_Installed],
  SERVERPROPERTY('FilestreamConfiguredLevel') AS [Is_Filestream_Enabled],
  SERVERPROPERTY('IsBigDataCluster') AS [Is_BigData_Cluster_Enabled],
  SERVERPROPERTY('IsClustered') AS [Is_Clustered],
  SERVERPROPERTY('IsHadrEnabled') AS [Is_HADR_Enabled],
  SERVERPROPERTY('IsTempDbMetadataMemoryOptimized') AS [Is_TempDB_for_Memory_Optimized_Tables_Enabled],
  SERVERPROPERTY('IsXTPSupported') AS [Is_IN_Memory_OLTP_Supported],
  SERVERPROPERTY('IsExternalGovernanceEnabled') AS [Is_External_Governance_Enabled];

Result of a test:

You can notice that the last property is the External Governance one. This is linked to the Microsoft Purview access policies.

The next useful information is about security.

Here is the query:

SELECT
  SERVERPROPERTY('MachineName') AS [Server_Name],
  SERVERPROPERTY('InstanceName') AS [Instance_Name],
  SERVERPROPERTY('IsIntegratedSecurityOnly') AS [Is_Integrated_Security_Only],
  SERVERPROPERTY('IsSingleUser') AS [Is_Single_User],
  SERVERPROPERTY('IsExternalAuthenticationOnly') AS [Is_External_Authentication_Only];

Result of a test:

The External Authentication property concerns Microsoft Entra-only authentication for Azure SQL Database & Azure SQL Managed Instance.

The last information package is about the collation / character set, with this query:

SELECT 
  SERVERPROPERTY('MachineName') AS [Server_Name],
  SERVERPROPERTY('InstanceName') AS [Instance_Name], 
  SERVERPROPERTY('Collation') AS [Collation], 
  SERVERPROPERTY('LCID') AS [Windows_Locale_Identifier], 
  SERVERPROPERTY('SqlCharSetName') AS [SQL_Character_Set_Name], 
  SERVERPROPERTY('SqlSortOrderName') AS [SQL_Sort_Order_Name];

Result of a test:

To conclude, you now have 4 useful queries in this blog to find information at the server level, plus the combined sketch just below.
Don’t hesitate to add your comments or tell me about other useful properties that you use!
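If you want everything at a glance, here is a minimal sketch combining a selection of the properties shown above into a single one-row overview (adjust the list to your needs):

-- One-row overview: version, HA, authentication mode and collation
SELECT
  SERVERPROPERTY('MachineName') AS [Server_Name],
  SERVERPROPERTY('InstanceName') AS [Instance_Name],
  SERVERPROPERTY('Edition') AS [Edition],
  SERVERPROPERTY('ProductVersion') AS [Product_Version],
  SERVERPROPERTY('IsHadrEnabled') AS [Is_HADR_Enabled],
  SERVERPROPERTY('IsIntegratedSecurityOnly') AS [Is_Integrated_Security_Only],
  SERVERPROPERTY('Collation') AS [Collation];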

L’article SQL Server: Do you know how to use SERVERPROPERTY()  function in a useful way? est apparu en premier sur dbi Blog.

Alfresco – Documents not searchable when uploaded via REST-API?

Thu, 2025-10-16 13:29

It’s a pretty common (and supposedly well-known) issue in Alfresco, but even to this day, it still happens that enterprises contact us because of documents that cannot be found. The installation in question is a docker-compose deployment of Alfresco Community 23.4, managed by the customer directly. Only some slight customizations were done, but it’s a pretty out-of-the-box setup for the search part. There is an interface that uploads some documents (using a custom type) into a specific Site using the REST-API, but each of these documents is “apparently” not searchable in Alfresco Share. When downloading and re-uploading them manually through Share (with either the default or the custom type), the customer said that the documents could indeed be found. So, what is going on here?

As mentioned, this is a pretty common problem. No later than last week, there were again discussions around that topic on the Alfresco Discord channel, and yesterday this customer was facing that exact same thing. Therefore, I think it’s high time for me to write a blog on that topic, to share a bit of experience and hopefully reduce the number of people that are impacted, probably even without knowing it. I will use a local development environment to demonstrate the issue and show how to fix it.

The Problem

So, what is the issue? In short, it’s linked to Solr6 cross-locale support, i.e. a language problem. When using Solr6 with Alfresco, by default, the cross-locale configuration is commented out, which means it is effectively disabled. When someone or something uploads a document to Alfresco, it is assigned a “sys:locale” property that is the language used by the client that created said document. If it’s a web browser, it would be the language configured in your browser. If it’s a command line request, it would be the OS session language, etc… Then, when users try to search for that document, the locale of the user is also assigned to the search request, and if it doesn’t match the initial locale (the one used to create the document), then you will probably never be able to find anything.
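If you want to see how this looks in an existing repository, the stored locale of each node can also be queried directly in the Alfresco database; this is only a hedged sketch, the table and column names (alf_node.locale_id, alf_locale.locale_str) come from recent Alfresco schemas and should be verified on your version:

-- number of nodes per stored default locale
SELECT l.locale_str, COUNT(*) AS nodes
FROM alf_node n
JOIN alf_locale l ON l.id = n.locale_id
GROUP BY l.locale_str
ORDER BY nodes DESC;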

Unfortunately, Solr6 for Alfresco assumes by default that everybody speaks and uses only one language, which is definitely not true for a lot of large-scale or international enterprises, or even for smaller ones located in countries with multiple official languages, like Switzerland.

Environment Preparation

In the environments that I install, I always configure the cross-locale support. So, first of all, I will need to revert to the default Solr6 configuration. More details on that will be in the solution section below. With a default Solr6 installation/index, I will create a custom type with a single property that I will use to perform the searches (it’s not mandatory, you can use the default type/properties too). You can also use Full-Text Search, but it’s probably easier for me to showcase the problem using a very specific property.

Therefore, I created a custom type (dbi:demo_doc) and a property (dbi:demo_prop) using the model manager. “dbi:demo_prop” is a simple text property that is mandatory, indexed as “Free Text” (c.f. the documentation here for more details) and I didn’t set any constraints for this example but it could have had one. I will just use values such as “NB-[0-9]{4}” (so NB-0000, NB-0001, etc.):

Showcase – Web Browser in English

Now that I have a cleaned and out-of-the-box Solr6 index and my property to play with, let’s start with creating a first test document in a first browser with an English display. Assigning this document to my custom type “dbi:demo_doc” and using the value “NB-0001” for my property “dbi:demo_prop“:

As you can see above, the “sys:locale” has been set to “en_US” (coming from my browser settings) and please also note the DBID 1519 for later.

Then on that same browser, the document is properly searchable, there is 1 result, so far so good:

Note: I’m specifically not adding an “=” at the beginning of the search term so that it goes to Solr. If you have TMDQ (Transactional MetaData Query) enabled and if you use an exact match for a property, then it will normally use the database to answer the search, which would defeat the purpose of this blog…

Showcase – Web Browser in French

Now let’s proceed with opening a second browser, that is configured with French display this time and doing the exact same thing with a new document:

As you can see above, the “sys:locale” has been set to “fr” this time, for an identical document, simply because it was created from another browser with a different display language (which you can confirm on the above screenshots as everything is written in French). Same as before, please also note the DBID 1523 for later.

Then on that same browser, the document itself is also searchable, but in normal cases, we would have expected 2 results, no…? There is only one here:

This is the issue I’m talking about. Because the cross-locale is disabled, the document imported with a different locale isn’t found when you are searching for some text on content or properties.

It’s pretty obvious from the above already, but if you really want to confirm that the documents are indexed inside Solr, you can run something like “q=dbi:demo_prop:'NB-0001'” on the Solr UI, and it will give you 2 results: 2 documents with DBID 1519 and 1523. If you remember or look at the above screenshots, those are the values of the “sys:node-dbid” property of the 2 documents. Therefore, both are indexed in Solr:

How to Solve

So now, how do we fix that? Fortunately, it’s not very complex, but depending on your repository size, it might take some time, as it requires enabling the cross-locale support and then performing a complete reindex… Since I’m using a nearly empty development environment, it will be very fast for me.

On the Solr host, let’s start with checking the cross-locale configuration:

[solr@solr01 ~]$ cd $SOLR_HOME/solrhome/conf/
[solr@solr01 conf]$
[solr@solr01 conf]$ grep "cross.locale.datatype" shared.properties
#alfresco.cross.locale.datatype.0={http://www.alfresco.org/model/dictionary/1.0}text
#alfresco.cross.locale.datatype.1={http://www.alfresco.org/model/dictionary/1.0}content
#alfresco.cross.locale.datatype.2={http://www.alfresco.org/model/dictionary/1.0}mltext
[solr@solr01 conf]$

As you can see, the 3 lines for the cross-locale configuration of datatypes are commented out by default (by the way, it’s also indicated in the documentation here). Therefore, let’s simply uncomment these and then remove the index so that Solr6 performs a full reindex:

[solr@solr01 conf]$ sed -i 's,^#\(alfresco.cross.locale.datatype\),\1,' shared.properties
[solr@solr01 conf]$
[solr@solr01 conf]$ grep "cross.locale.datatype" shared.properties
alfresco.cross.locale.datatype.0={http://www.alfresco.org/model/dictionary/1.0}text
alfresco.cross.locale.datatype.1={http://www.alfresco.org/model/dictionary/1.0}content
alfresco.cross.locale.datatype.2={http://www.alfresco.org/model/dictionary/1.0}mltext
[solr@solr01 conf]$
[solr@solr01 conf]$ sudo systemctl stop solr
[solr@solr01 conf]$
[solr@solr01 conf]$ ls -l $SOLR_DATA_HOME/
total 4
drwxr-x---. 4 alfresco alfresco   37 Oct 16 12:53 index
drwxr-x---. 2 alfresco alfresco 4096 Oct 16 12:54 models
[solr@solr01 conf]$
[solr@solr01 conf]$ rm -rf $SOLR_DATA_HOME/*
[solr@solr01 conf]$
[solr@solr01 conf]$ sudo systemctl start solr
[solr@solr01 conf]$

Once the documents are indexed, you can go back to the English or French browsers and execute the previous search, and voila, 2 results:

You can now create new documents with any locale, and it should be transparent from a search perspective. If you would like to see which locale is being sent during Search requests, you can enable DEBUG logs for “org.alfresco.repo.search.impl.solr.SolrQueryHTTPClient” and something like that should appear:

2025-10-16 15:35:29,732  DEBUG [impl.solr.SolrQueryHTTPClient] [http-apr-8080-exec-39]    with: {"tenants":[""],"locales":["en_US"],"defaultNamespace":"http://www.alfresco.org/model/content/1.0","textAttributes":[],"defaultFTSOperator":"AND","defaultFTSFieldOperator":"AND","anyDenyDenies":true,"query":"dbi:demo_prop:'NB-0001' ","templates":[{"template":"%(cm:name cm:title cm:description ia:whatEvent ia:descriptionEvent lnk:title lnk:description TEXT TAG)","name":"keywords"}],"allAttributes":[],"queryConsistency":"DEFAULT","authorities":["GROUP_EVERYONE","ROLE_ADMINISTRATOR","ROLE_AUTHENTICATED","admin"]}
2025-10-16 15:35:29,732  DEBUG [impl.solr.SolrQueryHTTPClient] [http-apr-8080-exec-39] Got: 2 in 47 ms

Happy searching!

L’article Alfresco – Documents not searchable when uploaded via REST-API? est apparu en premier sur dbi Blog.

User Adoption Strategies for M-Files: Lessons Learned

Thu, 2025-10-16 08:42

Implementing M-Files is never just a technical project. It’s a journey through an organization’s operations that we take with each customer.

As consultants, we often find that the success of an M-Files deployment hinges not on the software itself, but on how well users embrace it. After working with organizations across industries, here are some key lessons and strategies that consistently drive user adoption.

Starting point: Why

Before introducing M-Files, it’s crucial to articulate why the change is happening. Users need to understand how M-Files will make their work easier, whether it’s faster document retrieval, reduced email clutter, or improved compliance. Tailor the message to each department’s pain points and goals.

Lesson: Adoption improves when users see M-Files as a solution to their problems, not just another IT tool.

Involve Champions from Day One

Identify and empower internal key users, people who are respected, tech-savvy, and enthusiastic. These champions can provide feedback during configuration, test workflows, and help train their peers. Their support creates a ripple effect across teams.

Lesson: Peer influence is more powerful than top-down decisions that team members simply have to follow.

Configure for Simplicity, Not Just Capability

M-Files is incredibly flexible, and we can accomplish a lot with it. However, a poorly designed vault can turn that flexibility into its biggest flaw.

Avoid overwhelming users with too many metadata fields, complex workflows, or unfamiliar terminology. Start with a clean, intuitive structure and evolve it based on feedback.

Lesson: A well-designed vault should feel natural and frictionless to users.

Integrate with Familiar Tools

Users are more likely to adopt M-Files when it fits into their existing habits. Integrating with Microsoft Teams, Outlook, or SharePoint can make M-Files feel like an extension of their daily workspace rather than a separate system.

Lesson: Seamless integration optimizes employee engagement and reduces resistance.

Provide Hands-On Training and Support

Training should be role-specific and scenario-based. Instead of generic demonstrations, show users how M-Files can help them with their daily tasks, such as managing contracts, onboarding employees, and tracking project documents. Follow up with cheat sheets, videos, and a support channel.

Lesson: Ongoing support is key to sustained adoption.

Celebrate Quick Wins

Rome wasn’t built in a day. Highlight early successes, such as reducing the time spent searching for documents or passing an audit, in internal newsletters or at team meetings. Recognition creates momentum and validates efforts.

Lesson: There’s no need to wait until the end of the project to celebrate! Celebrating small victories boosts morale and paves the way for long-term transformation.

Ultimately

However brilliant a solution may be, choosing it is not an end in itself. On the contrary, it is only the beginning of the journey.
User adoption is not just about training. It is also based on trust, relevance, and experience. At dbi services, we focus on the human aspect. We can transform M-Files from a basic system into a valuable solution.

L’article User Adoption Strategies for M-Files: Lessons Learned est apparu en premier sur dbi Blog.

You still need reliable and tested backups for your Oracle databases!

Thu, 2025-10-16 07:09
Introduction

These days, I was reviewing and updating the “Oracle Backup and Recovery Workshop” from dbi services. It’s all about RMAN, a powerful tool that always impresses my students. But I must admit that this workshop is less popular than before. Today, backing up an Oracle database is not a hot topic anymore. First, nothing is really new regarding RMAN compared to a few years ago. Hardware reliability has also increased significantly, and data centers are more secure than ever, both making failures less likely to happen. Another point is that Disaster Recovery has become a standard in most projects, meaning that for a lot of failures, enabling the standby database will solve the problem in a minute or two. Finally, Oracle-embedded flashback technologies bring you faster solutions than restoring a backup when the database is not broken (these technologies are also part of the workshop). That said, it doesn’t mean that you don’t need a strong backup strategy. This is still your very last protection for dramatic scenarios. And you need to test it from time to time. Let’s review what’s important and how to check and test your backup strategy.

Why you still need a reliable backup at any moment?

Let’s remind the main reasons for having a reliable backup:

  • you may lose everything on your primary and secondary sites
  • you may have logically corrupted data from a software error or a human error, also replicated to your standby database
  • you may want to recover partial data that flashback technologies cannot bring back
  • you may have a security breach widely altering your data (corrupted storage, ransomware, and so on)

For sure, this is rather unlikely to happen, but as a DBA, you’re supposed to have a plan for any kind of disaster. And a solid RMAN backup is your most trustworthy ally.

Why you probably won’t use a RMAN restore in most cases?

You probably have a Disaster Recovery (DR) setup, meaning 2 sites with database replication using Data Guard (for Enterprise Edition) or Dbvisit Standby (for Standard Edition 2). In most cases, if you need a quick solution for bringing back your database to life after a crash, you will failover to the DR database. It’s a couple of minutes, and if you test your DR database on a regular basis (I mean doing a switchover at least once a year), this operation will succeed without any doubt. A failover is adequate when your primary server is not responding or when your primary database has corrupted files. Doing a failover is always the best thing to do if you want minimum data loss.

If you actually need to lose data, I mean if there is a data corruption due to an error (incorrect SQL commands, application bug), you will probably use the Oracle-embedded flashback technologies. Flashback relies on multiple different technologies for different purposes. If you drop a table, you can bring it back from the recycle bin (FLASHBACK TABLE TO BEFORE DROP); if you delete rows from a table, you can do a query in time (AS OF TIMESTAMP), for example. If the whole database needs to go back in time, and if you’re using Enterprise Edition, you will use FLASHBACK DATABASE. FLASHBACK DATABASE, once enabled, is a mechanism that generates flashback logs, a kind of “INCR -1” automatic backup. With this feature, you can go back to any point in time from a couple of minutes to several hours ago. It’s much faster than an equivalent RMAN restore. And you can do multiple flashback operations to find the desired Point In Time.
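To put some syntax on the features just mentioned, here is a minimal sketch (the table name emp is hypothetical; FLASHBACK DATABASE requires Enterprise Edition, flashback logging enabled and the database in MOUNT state):

-- bring back a dropped table from the recycle bin
FLASHBACK TABLE emp TO BEFORE DROP;

-- query a table as it was 15 minutes ago, e.g. after an unwanted DELETE
SELECT * FROM emp AS OF TIMESTAMP SYSTIMESTAMP - INTERVAL '15' MINUTE;

-- rewind the whole database to a previous point in time
FLASHBACK DATABASE TO TIMESTAMP SYSTIMESTAMP - INTERVAL '1' HOUR;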

Restoring a backup will be done when nothing else is possible, just because it’s slower to put back older datafiles on disk from backupsets, and recover them.

Probably the most important: make sure RPO and RTO are aligned with the company strategy

As a DBA, you must ensure that these metrics, defined by the company, are met:

  • Recovery Point Objective: how far back in time you must be able to go regarding data
  • Recovery Time Objective: the maximum time allowed to restore the database to any Point In Time within the RPO

You should consider these RPO/RTO as a contract between the DBA and the management. The DBA will require resources, like adequate infrastructure, disk capacity and licenses to reach these objectives.

Regarding RPO, it’s mainly a matter of storage capacity. For RTO, it involves considering database edition, number of licenses/cores and storage bandwidth.

You must test your backup on a regular basis, just to ensure that you still respect the RTO with an amount of data that increases month after month.
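Before any real test, a rough back-of-the-envelope estimate already tells you if the RTO looks realistic. Here is a hedged sketch that divides the total datafile size by the average read throughput observed in recent backups (same view as used below), keeping in mind that restore throughput is not necessarily equal to backup throughput:

select round(df.total_mb/1024) "DB size GB",
       round(bk.read_mb_s,1) "Avg backup read MB/s",
       round(df.total_mb/bk.read_mb_s/60) "Rough full restore (min)"
  from (select sum(bytes)/1024/1024 total_mb from v$datafile) df,
       (select avg(input_bytes/1024/1024/elapsed_seconds) read_mb_s
          from v$rman_backup_job_details
         where start_time >= sysdate-30 and input_type like 'DB%') bk;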

Too often, RPO and RTO are defined by the DBA himself. In other words, the DBA does what’s possible with the resources he has. This is definitely not the best approach.

Check and test your backups

First, you should never trust your backup! Not because it’s not reliable, but because the backup is done while the database is open, meaning that backups are not consistent. A restore is only possible if a complete set of backups (full + incremental + archivelogs) is available, together with an adequate controlfile. Green lights from your backup monitoring tool or a “successful” message at the end of your backup log are simply not enough. The only trustworthy backup strategy is the one you test on a regular basis.

These are the checks and tests I would do to make sure my backup is OK.

Check RMAN views

With this statement, you can get a picture of the backup strategy and see at a glance whether it works fine.

set lines 180
set pages 300
col min for 999.9
col status for a20
alter session set NLS_DATE_FORMAT="DD/MM-HH24:MI";
select start_time "Start",
       round(input_bytes/1024/1024,1) "Source MB",
       round(output_bytes/1024/1024) "Backup MB",
       input_type "Type",
       status "Status",
       round(elapsed_seconds/60,1) "Min",
       round(input_bytes/1024/1024/elapsed_seconds,1) "read MB/s",
       round(output_bytes/1024/1024/elapsed_seconds,1) "write MB/s",
       round(compression_ratio,1) RATIO
  from v$rman_backup_job_details
 where start_time >= SYSDATE-7
 order by 1 desc;

Start           Source MB  Backup MB Type          Status          Min  read MB/s write MB/s      RATIO
-------------- ---------- ---------- ------------- ------------ ------ ---------- ---------- ----------
25/08-15:10:06       1443        380 ARCHIVELOG    COMPLETED        .7       36.1        9.5        3.8
25/08-14:10:06        123         50 ARCHIVELOG    COMPLETED        .1       20.6        8.3        2.5
25/08-13:10:05         33         27 ARCHIVELOG    COMPLETED        .1        6.5        5.4        1.2
25/08-12:10:05         32         27 ARCHIVELOG    COMPLETED        .1          8        6.8        1.2
25/08-11:10:05         46         31 ARCHIVELOG    COMPLETED        .1        7.7        5.1        1.5
25/08-10:10:05         44         31 ARCHIVELOG    COMPLETED        .1        7.4        5.1        1.4
25/08-09:10:04         31         27 ARCHIVELOG    COMPLETED        .1        5.1        4.5        1.1
25/08-08:10:05         32         27 ARCHIVELOG    COMPLETED        .1          8        6.8        1.2
25/08-07:10:05         30         26 ARCHIVELOG    COMPLETED        .1        4.9        4.4        1.1
25/08-06:10:05         30         26 ARCHIVELOG    COMPLETED        .1        5.9        5.3        1.1
25/08-05:10:04         30         27 ARCHIVELOG    COMPLETED        .1          5        4.4        1.1
25/08-04:10:05         32         27 ARCHIVELOG    COMPLETED        .1          8        6.8        1.2
25/08-03:10:05         38         29 ARCHIVELOG    COMPLETED        .1        6.3        4.9        1.3
25/08-02:10:05         30         27 ARCHIVELOG    COMPLETED        .1          5        4.4        1.1
25/08-01:10:05         30         26 ARCHIVELOG    COMPLETED        .1        5.9        5.3        1.1
25/08-00:10:05         39         30 ARCHIVELOG    COMPLETED        .1        7.8          6        1.3
24/08-22:30:06     133858      17371 DB INCR       COMPLETED      26.3       84.8         11        7.7
24/08-22:10:05       1238        376 ARCHIVELOG    COMPLETED        .5       41.3       12.5        3.3
24/08-21:10:05         29         26 ARCHIVELOG    COMPLETED        .1        7.2        6.6        1.1
24/08-20:10:05         29         26 ARCHIVELOG    COMPLETED        .1        4.1        3.8        1.1
24/08-19:10:05         34         28 ARCHIVELOG    COMPLETED        .1        5.6        4.6        1.2
24/08-18:10:05         39         29 ARCHIVELOG    COMPLETED        .1        7.8        5.8        1.3
24/08-17:10:05       1037        204 ARCHIVELOG    COMPLETED        .5       35.8          7        5.1
24/08-16:10:05         42         31 ARCHIVELOG    COMPLETED        .1          6        4.4        1.4
...

In this example, everything is fine (Status=COMPLETED): a full backup (INCR level 0) is done every night, an archivelog backup is done every hour, and compression is enabled. Here, read MB/s and write MB/s give you estimated values. If you use incremental backups (INCR level 1), you can identify them with the “Backup MB” column: the figure should be much lower than the one of an INCR level 0.
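If you want to distinguish full, incremental and archivelog backups more explicitly than with the “Backup MB” column, a hedged sketch against v$backup_set can help (backup_type: D = full datafile, I = incremental, L = archived logs):

select backup_type, incremental_level, count(*) "SETS",
       min(start_time) "OLDEST", max(completion_time) "NEWEST"
  from v$backup_set
 where start_time >= sysdate-7
 group by backup_type, incremental_level
 order by 1, 2;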

Having a catalog is definitely more convenient, as you can get an overview of all your databases from a single statement:

set lines 180
set pages 300
col min for 999.9
col status for a20
alter session set NLS_DATE_FORMAT="DD/MM-HH24:MI:SS";
select db_name,
       start_time "Start",
       round(input_bytes/1024/1024,1) "Source MB",
       round(output_bytes/1024/1024,1) "Backup MB",
       input_type "Type",
       status "Status",
       round(elapsed_seconds/60,1) "Min",
       round(input_bytes/1024/1024/elapsed_seconds,1) "read MB/s",
       round(output_bytes/1024/1024/elapsed_seconds,1) "write MB/s",
       round(compression_ratio,1) RATIO
  from rc_rman_backup_job_details
 where start_time >= SYSDATE-7
 order by 1, 2 desc;
...

But this query is not enough to make sure that you can restore your database.

Check if retention is the one you expect

It’s not rare that a DBA is surprised when he checks the backup retention and discovers a much lower value than expected. Retention can be changed in a minute, often to solve storage capacity issues. This could have dramatic consequences if you miss the RPO your boss signed up for. But most probably, nobody will know about that…

rman target /
show RETENTION POLICY ;

RMAN configuration parameters for database with db_unique_name ROP_S1 are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;

With this retention, you should be able to restore the database to a consistent state between now and 7 days ago. It implies keeping database backupsets for more than 7 days, because you need the FULL or INCR 0 backup taken before those 7 days to reach the oldest Point In Time, and of course all the INCR 1 and archivelog backups done after this FULL or INCR 0.

But checking the retention is not enough to make sure you have everything needed for a restore.

Check if backups are known by RMAN

When deleting backups, this retention can be overridden if needed, so there is no guarantee that all backups within the retention are still in the catalog. You should be able to find database backups (FULL or INCR 0) but also archivelog and INCR 1 backups within this retention:

LIST BACKUP OF DATABASE COMPLETED BETWEEN 'sysdate -7' AND 'sysdate';
...

LIST BACKUP OF ARCHIVELOG COMPLETED BETWEEN 'sysdate -7' AND 'sysdate';
...

This is pretty verbose, but you will find the various backupsets and what’s inside them.

Unfortunately, this is still not enough to make sure you can restore your database.

Do a crosscheck to validate that backup pieces are available

LIST BACKUP only looks at backup references in the catalog; it doesn’t guarantee that the backupset files (backup pieces) are still available. Therefore, a crosscheck is needed to match the references with the files on disk:

CROSSCHECK BACKUPSET;

You can do a crosscheck on only a subset of backups, for example the most recent ones:

CROSSCHECK BACKUPSET COMPLETED AFTER 'sysdate -1';
...

Or backups between 2 dates:

CROSSCHECK BACKUPSET COMPLETED BETWEEN 'sysdate -7' AND 'sysdate -6';
...

All backupsets must be AVAILABLE. If not, they are marked EXPIRED, meaning they are not on disk anymore, which is not a normal situation.
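To list exactly what the crosscheck flagged, you can also query the controlfile directly; a hedged sketch (in v$backup_piece, the status is A for available, X for expired, D for deleted):

select status, handle, completion_time
  from v$backup_piece
 where status <> 'A'
 order by completion_time;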

Is it enough to make sure you can restore? Not at all: it doesn’t prove that the restore is possible.

Do a restore database validate with and without a Point In Time

Without any downtime or additional disk usage, you can do a RESTORE DATABASE VALIDATE. It will read the backupsets from the backup location and validate that they are OK to restore the datafiles.

run {
ALLOCATE CHANNEL C1 DEVICE TYPE DISK;
ALLOCATE CHANNEL C2 DEVICE TYPE DISK;
ALLOCATE CHANNEL C3 DEVICE TYPE DISK;
RESTORE DATABASE VALIDATE;
}

This RESTORE DATABASE VALIDATE is also possible with a Point In Time. RMAN is then able to simulate a full restore of the adequate backupsets within the retention.

run {
SET UNTIL TIME "TO_DATE('10-OCT-2025 14:00:00','DD-MON-YYYY HH24:MI:SS')";
ALLOCATE CHANNEL C1 DEVICE TYPE DISK;
ALLOCATE CHANNEL C2 DEVICE TYPE DISK;
ALLOCATE CHANNEL C3 DEVICE TYPE DISK;
RESTORE DATABASE VALIDATE;
}

Note that it restores the datafiles to /dev/null, meaning nowhere. This is also why a RECOVER DATABASE VALIDATE does not exist, and it’s the main limitation: you know you will be able to restore the database, but you might still not be able to recover the datafiles and open it.

This test is nice, but not good enough. The recover is mandatory to open the database.

Do a restore AND recover on another server with or without a Point In Time

The best test would be a complete restore/recover on another server. Considering this server has the same system settings, the same Oracle version and the same filesystems, it’s rather easy. First, identify the latest controlfile and spfile autobackup, then restore the spfile and the controlfile from this backupset. RMAN is able to start an instance without an spfile, for the sole purpose of restoring the spfile:

su - oracle
echo "BMC3PRD:/u02/app/oracle/product/19.0.0.0/dbhome_1:Y" >> /etc/oratab

ls -lrt /backup/BMC3PRD/ | tail -n 1 
-rw-r-----. 1 oracle oinstall 19136512 Sep  4 09:20 o1_mf_s_1156169494_lr8gd9h4_.bkp

. oraenv <<< BMC3PRD

rman target / 

startup nomount;
startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u02/app/oracle/product/19.0.0.0/dbhome_1/dbs/initBMC3PRD.ora'

starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started

Total System Global Area    1070428224 bytes

Fixed Size                     8904768 bytes
Variable Size                281018368 bytes
Database Buffers             771751936 bytes
Redo Buffers                   8753152 bytes


restore spfile from '/backup/BMC3PRD/o1_mf_s_1156169494_lr8gd9h4_.bkp';
Starting restore at 04-SEP-25
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=433 device type=DISK
channel ORA_DISK_1: restoring spfile from AUTOBACKUP /backup/BMC3PRD/o1_mf_s_1156169494_lr8gd9h4_.bkp
channel ORA_DISK_1: SPFILE restore from AUTOBACKUP complete
Finished restore at 04-SEP-25
 
startup force nomount;

restore controlfile from '/backup/BMC3PRD/o1_mf_s_1156169494_lr8gd9h4_.bkp';

Starting restore at 04-SEP-25
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=26 device type=DISK
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:05
output file name=+RECO/BMC3PRD_CR/CONTROLFILE/current.486.1210937913
Finished restore at 04-SEP-25

startup force mount;
run {
ALLOCATE CHANNEL C1 DEVICE TYPE DISK;
ALLOCATE CHANNEL C2 DEVICE TYPE DISK;
ALLOCATE CHANNEL C3 DEVICE TYPE DISK;
ALLOCATE CHANNEL C4 DEVICE TYPE DISK;
RESTORE DATABASE ;
RECOVER DATABASE ;
ALTER DATABASE OPEN RESETLOGS;
}

...

Testing a restore/recover at a random Point In Time within your retention window is also quite easy: identify the spfile and controlfile autobackup that immediately follows your Point In Time target, and add an UNTIL clause to your restore/recover block:

rman target / 

startup nomount;
startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u02/app/oracle/product/19.0.0.0/dbhome_1/dbs/initBMC3PRD.ora'

starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started

Total System Global Area    1070428224 bytes

Fixed Size                     8904768 bytes
Variable Size                281018368 bytes
Database Buffers             771751936 bytes
Redo Buffers                   8753152 bytes


restore spfile from '/backup/BMC3PRD/o1_mf_s_1156169494_124efa11_.bkp';
...

startup force nomount;

restore controlfile from '/backup/BMC3PRD/o1_mf_s_1156169494_124efa11_.bkp';
...

startup force mount;
run {
SET UNTIL TIME "TO_DATE('10-SEP-2025 09:27:42','DD-MON-YYYY HH24:MI:SS')";
ALLOCATE CHANNEL C1 DEVICE TYPE DISK;
ALLOCATE CHANNEL C2 DEVICE TYPE DISK;
ALLOCATE CHANNEL C3 DEVICE TYPE DISK;
ALLOCATE CHANNEL C4 DEVICE TYPE DISK;
RESTORE DATABASE ;
RECOVER DATABASE ;
ALTER DATABASE OPEN RESETLOGS;
}
...

This time, the test is enough to make sure your backups are OK, because you just performed the exact same procedure you would follow in a real disaster.

Duplicate from backup with or without a Point In Time

You may not want to restore the database with the same name as your production database, for obvious reasons. You can then restore it with a different name thanks to the DUPLICATE command. This command normally duplicates a target (source) to an auxiliary database (destination), but you can use it without any target, only pointing to a folder containing the backupsets. Starting the instance without a spfile is still possible with RMAN, but you'll need to connect using the target keyword first:

echo "RESTO:/u02/app/oracle/product/19.0.0.0/dbhome_1:Y" >> /etc/oratab

. oraenv <<< RESTO

rman target /
startup nomount;
exit;

rman auxiliary /
run {
ALLOCATE AUXILIARY CHANNEL C1 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL C2 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL C3 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL C4 DEVICE TYPE DISK;
DUPLICATE DATABASE TO RESTO BACKUP LOCATION '/backup/BMC3PRD/' NOFILENAMECHECK;
}

It also works with a Point In Time:

rman auxiliary /
run {
SET UNTIL TIME "TO_DATE('10-OCT-2025 16:30:00','DD-MON-YYYY HH24:MI:SS')";
ALLOCATE AUXILIARY CHANNEL C1 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL C2 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL C3 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL C4 DEVICE TYPE DISK;
DUPLICATE DATABASE TO RESTO BACKUP LOCATION '/backup/BMC3PRD/' NOFILENAMECHECK;
}

All these commands are fine when using OMF, and I would recommend using OMF everywhere. If you don't use OMF, you will need to take care of file names and use path conversions within the RMAN commands, as sketched below.
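
For illustration only (the directories below are hypothetical), a non-OMF duplicate would look something like this, with DB_FILE_NAME_CONVERT remapping the datafile paths and the LOGFILE clause defining the new redo logs:

run {
ALLOCATE AUXILIARY CHANNEL C1 DEVICE TYPE DISK;
DUPLICATE DATABASE TO RESTO
  BACKUP LOCATION '/backup/BMC3PRD/'
  NOFILENAMECHECK
  DB_FILE_NAME_CONVERT ('/u02/oradata/BMC3PRD/','/u02/oradata/RESTO/')
  LOGFILE GROUP 1 ('/u02/oradata/RESTO/redo01.log') SIZE 200M,
          GROUP 2 ('/u02/oradata/RESTO/redo02.log') SIZE 200M;
}

Tempfiles follow DB_FILE_NAME_CONVERT as well; the controlfiles land wherever the control_files parameter of the auxiliary instance points.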

Please, never do a DUPLICATE on a server where a database with the same name as the source database exists: at some point, RMAN restores the controlfile under the source database name before switching to the new name, and this can be hazardous…
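
Before launching the DUPLICATE, a quick sanity check on the destination server costs nothing (plain shell; the instance name and oratab path are simply the ones used in the examples above):

ps -ef | grep -i pmon_BMC3PRD    # no running instance should carry the source name
grep -i "^BMC3PRD:" /etc/oratab  # and no oratab entry should reference it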

This test is as good as the previous one: it can definitely validate your backups as if you were restoring your production database.

Conclusion

I would summarize this blog post with three points:

  • you still need a strong backup strategy: do not underestimate how important your backups still are
  • RPO and RTO are decided by your company; the DBA asks for the resources needed to reach these targets, and must warn if the available resources cannot guarantee them
  • do regular and extensive checks of your backups, and simulate a disaster scenario at least once a year: this validates your RPO/RTO, proves the reliability of your backup strategy, and keeps you trained, since you normally won't do restore/recover operations very often (see the sketch after this list)
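
As an example of what such a regular check can look like, here is a hypothetical RMAN cmdfile and the call you could schedule (file names and paths are placeholders, not taken from this setup):

# /home/oracle/scripts/validate_backups.rman -- hypothetical cmdfile
CROSSCHECK BACKUPSET;
RESTORE DATABASE VALIDATE;
RESTORE ARCHIVELOG FROM TIME 'sysdate -1' VALIDATE;

rman target / cmdfile=/home/oracle/scripts/validate_backups.rman log=/home/oracle/scripts/validate_backups.log

The full restore/recover or duplicate on another server remains the yearly exercise; a script like this only covers the lightweight checks in between.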
