Yann Neuhaus


Errorhandling in Ansible

Thu, 2025-09-18 08:54
Errors in tasks – abort or ignore?

By default, if a task in a playbook fails, execution of the playbook stops for that host.

- name: PLAY1
  hosts: localhost
  gather_facts: no
  tasks:
    - name: let this shell-command fail
      ansible.builtin.shell: exit 1

    - name: let this shell-command complete
      ansible.builtin.shell: exit 0

As you can see, the 2nd task is not executed. If you want to continue in such a case, the ignore_errors parameter is your friend:

    - name: let this shell-command fail
      ansible.builtin.shell: exit 1
      ignore_errors: true
Custom error conditions

By default, Ansible evaluates the exit code of the module; in the case of the shell module, that is the exit code of the last command.

But for some commands that is not adequate. Example: the Oracle command-line tool sqlplus, used to submit SQL commands, returns an exit code of 0 as long as it can connect to the database instance; the exit code is not related to the result of your SQL commands. Error messages in Oracle are prefixed with ORA-.

So, if you want to check for application errors, you have to implement it yourself. For that, you can use the failed_when option.

    - name: let this shell-command fail
      ansible.builtin.shell: |
        . ~/.profile
        echo "select count(*) from all_users;" | sqlplus / as sysdba
      register: number_of_users
      failed_when: "'ORA-' in number_of_users.stdout"

Caution: in this case the exit code of the shell is no longer evaluated. To also catch a failing sqlplus call (e.g., sqlplus cannot connect to the database, or the sqlplus binary is not found), you have to add this (default) condition:

      failed_when: "number_of_users.rc != 0 or 'ORA-' in number_of_users.stdout"

But caution! Not all modules will have an rc field.
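
If a module might not return rc at all, one way to guard the expression is to fall back to a default value (a sketch, not from the original post):

      failed_when: "(number_of_users.rc | default(0)) != 0 or 'ORA-' in number_of_users.stdout"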

Tuning: Perform several checks at once

Conceptually, Ansible is not the fastest tool: for each task, it usually logs in to the remote server via SSH. If you have to run several checks in the shell then, instead of running each in a separate task, you can run all these check commands in one shell task and evaluate the result afterwards.

    - name: run many check commands
      ansible.builtin.shell: |
        mount | grep ' on /u00 '  #check if /u00 is mounted
        rpm -q ksh 2>&1  #check if ksh is installed
        exit 0 # do not fail if rpm exit != 0; we will do our own errorhandling
      register: checks

    - name: fail if no /u00 is mounted
      fail:
        msg: "/u00 is not mounted"
      when: "' on /u00 ' not in checks.stdout"

    - name: No ksh found, try to install it
      yum:
        name: ksh
        state: present
      when: "'package ksh is not installed' in checks.stdout"

If you only want to throw an error, you can do it directly in the shell task:

      failed_when: "' on /u00 ' not in checks.stdout or 'package ksh is not installed' in checks.stdout"

But if you parse the output afterwards, you can run tasks to fix the error.

Sometimes it is difficult to parse the output if several commands return the same output, e.g. “OK”.
If your check commands always return exactly 1 line, then you can parse the output by position: the output of the 3rd command is in checks.stdout_lines[2].
In the above example that will not work, because grep returns either the matching line (exit code 0, found) or nothing (exit code 1, not found), so the number of output lines varies. So, expand it as: mount | grep ' on /u00 ' || echo error

Print error messages more readably

Do not make the task itself fail: the result is usually unreadable because all information is printed on one line.

Instead, use ignore_errors: true and failed_when: false in the task. Do the error handling in a separate task with a customized error message. To print the multiline list of stdout_lines, use debug:; otherwise you can directly use ansible.builtin.fail: with a customized message:

    - name: force sqlplus to throw error because of missing environment
      ansible.builtin.shell: |
        /u00/app/oracle/product/19.7.0/bin/sqlplus -L / as sysdba @myscript.sql 2>&1
      register: checks
      ignore_errors: true
      failed_when: false

    - name: Check for errors of task before
      debug: var=checks.stdout_lines
      failed_when: checks.rc != 0 or 'ORA-' in checks.stdout
      when: checks.rc != 0 or 'ORA-' in checks.stdout
Re-run failed hosts

As an example for this scenario: a customer of mine does offline backups of the development databases, and I have to run an Ansible playbook against all databases. If the playbook runs against a host during the backup, it fails because the database is down. But after some minutes the database is restarted.

What can we do now?

Wait until the database is up again. That is possible, see the example of ansible.builtin.wait_for in my blog post Parallel execution of Ansible roles. But for this scenario it is a waste of time: the database might be stopped for another reason (not for backup) and will not be restarted within the next few minutes.

Try again after a while. My playbook for all hosts (parallel, forks=5) takes about 1 hour. The idea is to remember the hosts with a stopped database and to continue with the next host. After the play has finished for all hosts, restart the play for the remembered hosts.

  • The 1st play runs against all database hosts:
  • gets the status of the databases on the host
  • assigns the database instances to an is_running and a not_open list
  • includes the role to run against the running databases
  • dynamically adds the host to the group re_run_if_not_open if there are not_open databases (a minimal sketch follows this list)
  • The next play only runs for the re_run_if_not_open group:
  • includes the role to run against the (hopefully now running) databases
  • If a database is then still down, we assume it is stopped permanently.
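
Here is a minimal sketch of that approach; the group name db_hosts, the role name my_db_role and the not_open variable are placeholders, not taken from the original playbook:

- name: PLAY against all database hosts
  hosts: db_hosts
  tasks:
    # ... gather the status, build the is_running / not_open lists, include the role ...
    - name: remember this host for a re-run if some databases were not open
      ansible.builtin.add_host:
        name: "{{ inventory_hostname }}"
        groups: re_run_if_not_open
      when: not_open | length > 0

- name: PLAY only for the remembered hosts
  hosts: re_run_if_not_open
  tasks:
    - name: include the role again for the databases that were down
      ansible.builtin.include_role:
        name: my_db_role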


AWS Summit Zurich 2025

Mon, 2025-09-15 03:12

Last Thursday I was able to attend the AWS Summit Zurich 2025 edition, so it's time for a little recap of what I saw and what was presented during the event.

Amid the countless sessions about AI, I could also find some on other topics. I started my journey with a session on security best practices.

Then came the keynote, where I could see a demo of an automated delivery robot shaped like a dog that can climb stairs. During the keynote, it was announced that Oracle Exadata would be available in the Zurich region starting 2026.

Among the other sessions I attended, I can spotlight one on the IAM service, where the speakers deep-dived into how to manage the service more accurately.

I also attended a session addressing the sovereignty topic, where the impact of the CLOUD Act and the data protection subject were explained.

I also went to a network-focused presentation, where the speaker showed us the new features of AWS Cloud WAN and how it works as a backbone network for hybrid solutions. The biggest news was that we can now connect AWS Direct Connect directly to AWS Cloud WAN without the need for a transit gateway.

They also introduced us to VPC Lattice, a fully managed application networking service that you use to connect, secure, and monitor the services and resources of your application.

Finally, I followed a session on cloud cost optimization, where we were shown some dashboards that can be used to optimize cloud costs.

That’s it for this year. Yet another summit where I’m able to learn new stuff. I hope I can go again next year and still learn new skills.


PostgreSQL CDC to JDBC Sink – minimal event driven architecture

Sun, 2025-09-14 04:53
Introduction

“Make it simple.”
When you have to create a solution or design an architecture, you should always take the simplest path; field knowledge and experience with patterns that did not scale show that this is often the better solution. It also means that you should only add the complexity that is really necessary.
So, following this principle, I looked for the simplest event-driven design that I could find.
And here it is:
PG Source → Flink CDC → JDBC Sink → PG Data Mart


What each piece does :

  • PostgreSQL (source)
    Emits a change stream from the WAL using logical replication (wal_level=logical).
    A publication defines which tables are replicated; a replication slot (one active reader per slot) holds WAL until the consumer confirms it.
  • Flink CDC source (Postgres)
    A Flink table that reads from the publication/slot and turns WAL into a changelog stream (insert/update/delete).
    Key options you’re using:
    • scan.incremental.snapshot.enabled=true – non-blocking initial load
    • slot.name=... – binds the job to a specific slot
    • primary key in the table schema – lets downstream sinks do upserts
  • Flink runtime
    Runs a streaming job that:
    1. Initial incremental snapshot: splits the table, bulk reads current rows, remembers an LSN.
    2. Catch-up + stream: replays WAL from that LSN and then tails new changes.
    3. Checkpoints: the slot LSN + operator state are stored on your S3/MinIO path, so restarts resume exactly from the last acknowledged point.
  • JDBC sink (Postgres data mart)
    Another Flink table. With a PRIMARY KEY defined, the connector performs UPSERT/DELETE semantics (e.g., INSERT ... ON CONFLICT DO UPDATE in Postgres; see the sketch after this list).
    It writes in batches, flushes on checkpoints, and retries on transient errors.
  • PostgreSQL (data mart)
    Receives the normalized upsert/delete stream and ends up with a 1:1 “current state” of the source tables (ideal for BI).
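
For illustration, this is the kind of statement the JDBC sink ends up issuing against Postgres for an upsert; a sketch based on the lab's orders table, with the column list assumed:

INSERT INTO datamart.orders (order_id, customer_id, status, order_ts)
VALUES ($1, $2, $3, $4)
ON CONFLICT (order_id)
DO UPDATE SET customer_id = EXCLUDED.customer_id,
              status      = EXCLUDED.status,
              order_ts    = EXCLUDED.order_ts;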
Why is this design useful?
  • Near-real-time replication with transforms: you can filter, project, cleanse, deduplicate, join reference data, and even aggregate in Flink SQL before hitting the data mart—something native logical replication can’t do.
  • Upserts keep the mart tidy: the JDBC sink writes the current state keyed by your PKs (perfect for reporting).
  • Resilient: checkpoints + WAL offsets → automatic catch-up after failures/restarts.
  • DB-friendly: WAL-based CDC has low OLTP impact compared to heavy ETL pulls.
LAB DEMO

Architecture of the LAB :

server1 (SRC) → PostgreSQL 17.6 (source) + Flink (jobs run here) : IP 172.19.0.4

server2 (SINKS) → PostgreSQL 17.6 (data mart) : IP 172.20.0.4

First we need data to transfer. Here is a sample database that you can create yourself; additionally, we also set up the PostgreSQL instance:

# Execute on server1 only
sudo -u postgres psql -c "ALTER SYSTEM SET wal_level='logical';"

# Increase the number of concurrent replication connections allowed.
sudo -u postgres psql -c "ALTER SYSTEM SET max_wal_senders=10;"

# Increase the number of replication slots the server can support.
sudo -u postgres psql -c "ALTER SYSTEM SET max_replication_slots=10;"

# On both servers, modify the Host-Based Authentication file to allow connections from each other.

echo "host all all 127.0.0.1/32 trust" | sudo tee -a /etc/postgresql/17/main/pg_hba.conf
echo "host all all 172.19.0.4/32 scram-sha-256" | sudo tee -a /etc/postgresql/17/main/pg_hba.conf
echo "host all all 172.20.0.4/32 scram-sha-256" | sudo tee -a /etc/postgresql/17/main/pg_hba.conf


# Restart the PostgreSQL service to apply all configuration changes.
sudo systemctl restart postgresql


sudo -u postgres createdb logistics_src

sudo su - postgres

# Execute a multi-statement SQL block to define and seed the schema.
psql -U postgres -d logistics_src <<'SQL'
CREATE SCHEMA logistics;

CREATE TABLE logistics.customers (
  customer_id   bigserial PRIMARY KEY,
  name          text NOT NULL,
  city          text,
  email         text UNIQUE
);

CREATE TABLE logistics.products (
  product_id    bigserial PRIMARY KEY,
  sku           text UNIQUE NOT NULL,
  name          text NOT NULL,
  list_price    numeric(12,2) NOT NULL
);

CREATE TABLE logistics.orders (
  order_id      bigserial PRIMARY KEY,
  customer_id   bigint NOT NULL REFERENCES logistics.customers(customer_id),
  status        text NOT NULL DEFAULT 'NEW',
  order_ts      timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE logistics.order_items (
  order_id      bigint NOT NULL REFERENCES logistics.orders(order_id),
  product_id    bigint NOT NULL REFERENCES logistics.products(product_id),
  qty           int NOT NULL,
  unit_price    numeric(12,2) NOT NULL,
  PRIMARY KEY(order_id, product_id)
);

CREATE TABLE logistics.inventory (
  product_id    bigint PRIMARY KEY REFERENCES logistics.products(product_id),
  on_hand       int NOT NULL DEFAULT 0
);

CREATE TABLE logistics.shipments (
  shipment_id   bigserial PRIMARY KEY,
  order_id      bigint NOT NULL REFERENCES logistics.orders(order_id),
  carrier       text,
  shipped_at    timestamptz,
  status        text
);

-- Seed initial data
INSERT INTO logistics.customers(name,city,email)
SELECT 'Customer '||g, 'City '|| (g%10), 'c'||g||'@example.com'
FROM generate_series(1,200) g;

INSERT INTO logistics.products(sku,name,list_price)
SELECT 'SKU-'||g, 'Product '||g, (random()*90+10)::numeric(12,2)
FROM generate_series(1,500) g;

INSERT INTO logistics.inventory(product_id,on_hand)
SELECT product_id, (random()*100)::int
FROM logistics.products;

-- Create 100 orders
WITH o AS (
  INSERT INTO logistics.orders(customer_id,status)
  SELECT (floor(random()*200)+1)::int, 'NEW'    -- customers 1..200
  FROM generate_series(1,100)
  RETURNING order_id
)
-- For each order, choose 2 distinct products and insert items
INSERT INTO logistics.order_items(order_id, product_id, qty, unit_price)
SELECT o.order_id,
       p.product_id,
       (floor(random()*5)+1)::int AS qty,        -- qty 1..5
       p.list_price
FROM o
CROSS JOIN LATERAL (
  SELECT pr.product_id, pr.list_price
  FROM logistics.products pr
  ORDER BY random()
  LIMIT 2
) AS p;

SQL


psql -U postgres -d logistics_src -c "ALTER ROLE postgres IN DATABASE logistics_src
  SET search_path = logistics, public;"

postgres@LAB-CDC-SRC:~$ psql -U postgres -d logistics_src -c "ALTER ROLE postgres IN DATABASE logistics_src
  SET search_path = logistics, public;"
ALTER ROLE
postgres@LAB-CDC-SRC:~$ psql
psql (17.6 (Ubuntu 17.6-1.pgdg24.04+1))
Type "help" for help.

postgres=# \l
                                                   List of databases
     Name      |  Owner   | Encoding | Locale Provider | Collate |  Ctype  | Locale | ICU Rules |   Access privileges
---------------+----------+----------+-----------------+---------+---------+--------+-----------+-----------------------
 logistics_src | postgres | UTF8     | libc            | C.UTF-8 | C.UTF-8 |        |           |
 postgres      | postgres | UTF8     | libc            | C.UTF-8 | C.UTF-8 |        |           |
 template0     | postgres | UTF8     | libc            | C.UTF-8 | C.UTF-8 |        |           | =c/postgres          +
               |          |          |                 |         |         |        |           | postgres=CTc/postgres
 template1     | postgres | UTF8     | libc            | C.UTF-8 | C.UTF-8 |        |           | =c/postgres          +
               |          |          |                 |         |         |        |           | postgres=CTc/postgres
(4 rows)

postgres=# \c logistics_src
You are now connected to database "logistics_src" as user "postgres".
logistics_src=# \dt
             List of relations
  Schema   |    Name     | Type  |  Owner
-----------+-------------+-------+----------
 logistics | customers   | table | postgres
 logistics | inventory   | table | postgres
 logistics | order_items | table | postgres
 logistics | orders      | table | postgres
 logistics | products    | table | postgres
 logistics | shipments   | table | postgres
(6 rows)


logistics_src=# SELECT count(*) FROM logistics.orders;
SELECT count(*) FROM logistics.order_items;
 count
-------
   700
(1 row)

 count
-------
  1100
(1 row)


## Here I created a slot per table on my source 
postgres=# SELECT slot_name, active, confirmed_flush_lsn FROM pg_replication_slots;
SELECT pubname FROM pg_publication;
       slot_name        | active | confirmed_flush_lsn
------------------------+--------+---------------------
 flink_order_items_slot | f      | 0/20368F0
 flink_orders_slot      | f      | 0/20368F0
(2 rows)

 pubname
---------
(0 rows)

postgres=#
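
For reference, this is how such a per-table slot can be created manually with the pgoutput plugin (a sketch; depending on its configuration the Flink CDC connector can also create the slot itself):

SELECT pg_create_logical_replication_slot('flink_orders_slot', 'pgoutput');
SELECT pg_create_logical_replication_slot('flink_order_items_slot', 'pgoutput');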

On my target data mart I created the empty structure :
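
A minimal sketch of what that structure could look like, assuming it mirrors the source key columns plus the extra discount and channel columns visible in the query output further below (the exact DDL used in the lab is not shown):

CREATE SCHEMA datamart;

CREATE TABLE datamart.orders (
  order_id     bigint PRIMARY KEY,
  customer_id  bigint NOT NULL,
  status       text NOT NULL,
  order_ts     timestamptz NOT NULL,
  channel      text
);

CREATE TABLE datamart.order_items (
  order_id     bigint NOT NULL,
  product_id   bigint NOT NULL,
  qty          int NOT NULL,
  unit_price   numeric(12,2) NOT NULL,
  discount     numeric(12,2) NOT NULL DEFAULT 0.00,
  PRIMARY KEY (order_id, product_id)
);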

logistics_dm=#
SELECT count(*) FROM datamart.orders;
SELECT count(*) FROM datamart.order_items;
 count
-------
     0
(1 row)

 count
-------
     0
(1 row)

logistics_dm=#

Then we can start Flink :

adrien@LAB-CDC-SRC:~/flink-1.20.2$ ./bin/start-cluster.sh
Starting cluster.
Starting standalonesession daemon on host LAB-CDC-SRC.
Starting taskexecutor daemon on host LAB-CDC-SRC.

Verify that your Flink UI is up by checking the URL http://127.0.0.1:8081

In my case I created a custom config file to handle some other aspects of my LAB, like the Hudi and S3 parts, which you can skip. The important points in the config are related to the task manager and the memory settings for the scheduler:

adrien@LAB-CDC-SRC:~/flink-1.20.2$ cat conf/flink-conf.yaml
jobmanager.rpc.address: localhost
# Web/API
rest.address: 0.0.0.0

# Memory (required by 1.20 to be set explicitly)
jobmanager.memory.process.size: 1200m
taskmanager.memory.process.size: 1600m
taskmanager.numberOfTaskSlots: 8

# Allow multiple small jobs at once
parallelism.default: 1     # so each job can start with 1 slot by default
jobmanager.scheduler: adaptive

# Checkpoint/savepoint locations (use MinIO so they survive restarts)
state.checkpoints.dir: s3a://flink/checkpoints
state.savepoints.dir:   s3a://flink/savepoints
execution.checkpointing.interval: 10 s
execution.checkpointing.mode: EXACTLY_ONCE
execution.checkpointing.externalized-checkpoint-retention: RETAIN_ON_CANCELLATION

# Optional resilience
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 10
restart-strategy.fixed-delay.delay: 5 s



# Prefer parent loader (helps with some connector deps)
classloader.resolve-order: parent-first
classloader.check-leaked-classloader: false

# S3A to MinIO (Hadoop FS)
s3.endpoint: http://172.20.0.4:9000
s3.path.style.access: true
s3.access.key: admin
s3.secret.key: adminadmin
s3.connection.ssl.enabled: false
  #s3.impl: org.apache.hadoop.fs.s3a.S3AFileSystem
  #s3.aws.credentials.provider: org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider


# optional, handy for labs
execution.checkpointing.interval: 10 s
env.java.opts: --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED -Djdk.attach.allowAttachSelf=true
adrien@LAB-CDC-SRC:~/flink-1.20.2$

Then we start the SQL client:

adrien@LAB-CDC-SRC:~/flink-1.20.2$ ./bin/sql-client.sh

                                   ▒▓██▓██▒
                               ▓████▒▒█▓▒▓███▓▒
                            ▓███▓░░        ▒▒▒▓██▒  ▒
                          ░██▒   ▒▒▓▓█▓▓▒░      ▒████
                          ██▒         ░▒▓███▒    ▒█▒█▒
                            ░▓█            ███   ▓░▒██
                              ▓█       ▒▒▒▒▒▓██▓░▒░▓▓█
                            █░ █   ▒▒░       ███▓▓█ ▒█▒▒▒
                            ████░   ▒▓█▓      ██▒▒▒ ▓███▒
                         ░▒█▓▓██       ▓█▒    ▓█▒▓██▓ ░█░
                   ▓░▒▓████▒ ██         ▒█    █▓░▒█▒░▒█▒
                  ███▓░██▓  ▓█           █   █▓ ▒▓█▓▓█▒
                ░██▓  ░█░            █  █▒ ▒█████▓▒ ██▓░▒
               ███░ ░ █░          ▓ ░█ █████▒░░    ░█░▓  ▓░
              ██▓█ ▒▒▓▒          ▓███████▓░       ▒█▒ ▒▓ ▓██▓
           ▒██▓ ▓█ █▓█       ░▒█████▓▓▒░         ██▒▒  █ ▒  ▓█▒
           ▓█▓  ▓█ ██▓ ░▓▓▓▓▓▓▓▒              ▒██▓           ░█▒
           ▓█    █ ▓███▓▒░              ░▓▓▓███▓          ░▒░ ▓█
           ██▓    ██▒    ░▒▓▓███▓▓▓▓▓██████▓▒            ▓███  █
          ▓███▒ ███   ░▓▓▒░░   ░▓████▓░                  ░▒▓▒  █▓
          █▓▒▒▓▓██  ░▒▒░░░▒▒▒▒▓██▓░                            █▓
          ██ ▓░▒█   ▓▓▓▓▒░░  ▒█▓       ▒▓▓██▓    ▓▒          ▒▒▓
          ▓█▓ ▓▒█  █▓░  ░▒▓▓██▒            ░▓█▒   ▒▒▒░▒▒▓█████▒
           ██░ ▓█▒█▒  ▒▓▓▒  ▓█                █░      ░░░░   ░█▒
           ▓█   ▒█▓   ░     █░                ▒█              █▓
            █▓   ██         █░                 ▓▓        ▒█▓▓▓▒█░
             █▓ ░▓██░       ▓▒                  ▓█▓▒░░░▒▓█░    ▒█
              ██   ▓█▓░      ▒                    ░▒█▒██▒      ▓▓
               ▓█▒   ▒█▓▒░                         ▒▒ █▒█▓▒▒░░▒██
                ░██▒    ▒▓▓▒                     ▓██▓▒█▒ ░▓▓▓▓▒█▓
                  ░▓██▒                          ▓░  ▒█▓█  ░░▒▒▒
                      ▒▓▓▓▓▓▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒░░▓▓  ▓░▒█░

    ______ _ _       _       _____  ____  _         _____ _ _            _  BETA
   |  ____| (_)     | |     / ____|/ __ \| |       / ____| (_)          | |
   | |__  | |_ _ __ | | __ | (___ | |  | | |      | |    | |_  ___ _ __ | |_
   |  __| | | | '_ \| |/ /  \___ \| |  | | |      | |    | | |/ _ \ '_ \| __|
   | |    | | | | | |   <   ____) | |__| | |____  | |____| | |  __/ | | | |_
   |_|    |_|_|_| |_|_|\_\ |_____/ \___\_\______|  \_____|_|_|\___|_| |_|\__|

        Welcome! Enter 'HELP;' to list all available commands. 'QUIT;' to exit.

Command history file path: /home/adrien/.flink-sql-history





-- Set the checkpointing interval for the SQL client session.
-- While configured globally in flink-conf.yaml, setting it here ensures it applies.
SET 'execution.checkpointing.interval' = '10s';

-- =============================================================================
-- DEFINE CDC SOURCE TABLES
-- =============================================================================

-- This table definition maps to the 'logistics.orders' table in PostgreSQL.
-- The 'postgres-cdc' connector is used to stream changes.
SET 'execution.checkpointing.interval' = '10s';

CREATE TABLE src_orders (
  order_id BIGINT,
  customer_id BIGINT,
  status STRING,
  order_ts TIMESTAMP(3),
  PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
  'connector' = 'postgres-cdc',
  'hostname' = '172.19.0.4',
  'port' = '5432',
  'username' = 'postgres',
  'password' = 'your_postgres_password',
  'database-name' = 'logistics_src',
  'schema-name' = 'logistics',
  'table-name'  = 'orders',
  'slot.name'   = 'flink_orders_slot',
  'decoding.plugin.name' = 'pgoutput',
  'scan.incremental.snapshot.enabled' = 'true'
);

CREATE TABLE src_order_items (
  order_id BIGINT,
  product_id BIGINT,
  qty INT,
  unit_price DECIMAL(12,2),
  PRIMARY KEY (order_id, product_id) NOT ENFORCED
) WITH (
  'connector' = 'postgres-cdc',
  'hostname' = '172.19.0.4',
  'port' = '5432',
  'username' = 'postgres',
  'password' = 'your_postgres_password',
  'database-name' = 'logistics_src',
  'schema-name' = 'logistics',
  'table-name'  = 'order_items',
  'slot.name'   = 'flink_order_items_slot',
  'decoding.plugin.name' = 'pgoutput',
  'scan.incremental.snapshot.enabled' = 'true'
);

CREATE TABLE dm_orders (
  order_id BIGINT,
  customer_id BIGINT,
  status STRING,
  order_ts TIMESTAMP(3),
  PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:postgresql://172.20.0.4:5432/logistics_dm',
  'table-name' = 'datamart.orders',
  'username' = 'postgres',
  'password' = 'your_postgres_password',
  'driver'    = 'org.postgresql.Driver'
);

CREATE TABLE dm_order_items (
  order_id BIGINT,
  product_id BIGINT,
  qty INT,
  unit_price DECIMAL(12,2),
  PRIMARY KEY (order_id, product_id) NOT ENFORCED
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:postgresql://172.20.0.4:5432/logistics_dm',
  'table-name' = 'datamart.order_items',
  'username' = 'postgres',
  'password' = 'your_postgres_password',
  'driver'    = 'org.postgresql.Driver'
);




INSERT INTO dm_orders          SELECT * FROM src_orders;
INSERT INTO dm_order_items     SELECT * FROM src_order_items;

So here we just declared the source tables and the data mart tables; note the connector type.
Once the tables are declared in Flink, you can start the pipeline with the INSERT INTO statements.
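
The two INSERT statements above are submitted as two separate Flink jobs (one replication slot each). If you prefer a single job fanning out to both sinks, Flink SQL also supports statement sets; a sketch reusing the tables defined above:

EXECUTE STATEMENT SET
BEGIN
  INSERT INTO dm_orders       SELECT * FROM src_orders;
  INSERT INTO dm_order_items  SELECT * FROM src_order_items;
END;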

At the end you should get something like this in the sql_client.sh :

Flink SQL> [INFO] Submitting SQL update statement to the cluster...
[INFO] SQL update statement has been successfully submitted to the cluster:
Job ID: 1e776162f4e23447ed5a9546ff464a61


Flink SQL> [INFO] Submitting SQL update statement to the cluster...
[INFO] SQL update statement has been successfully submitted to the cluster:
Job ID: 14ea0afbf7e2fda44efc968c189ee480

And in the Flink UI, the two jobs show up as running.

Then verify that on the target data mart the tables are now populated :

postgres=# \c logistics_dm
You are now connected to database "logistics_dm" as user "postgres".
logistics_dm=#
SELECT count(*) FROM datamart.orders;
SELECT count(*) FROM datamart.order_items;
 count
-------
   700
(1 row)

 count
-------
  1100
(1 row)

logistics_dm=# select * from order_items limit 5;
 order_id | product_id | qty | unit_price | discount
----------+------------+-----+------------+----------
      224 |        223 |   4 |      15.44 |     0.00
       96 |         94 |   1 |      53.91 |     0.00
      689 |        143 |   3 |      70.71 |     0.00
      290 |        223 |   3 |      15.44 |     0.00
       93 |        191 |   2 |      95.85 |     0.00
(5 rows)

logistics_dm=# select * from orders limit 5;
 order_id | customer_id | status |           order_ts            | channel
----------+-------------+--------+-------------------------------+---------
      310 |         163 | NEW    | 2025-09-09 14:36:47.247234+00 |
      305 |          69 | NEW    | 2025-09-09 14:36:47.247234+00 |
      304 |          14 | NEW    | 2025-09-09 14:36:47.247234+00 |
      303 |         122 | NEW    | 2025-09-09 14:36:47.247234+00 |
      302 |           9 | NEW    | 2025-09-09 14:36:47.247234+00 |
(5 rows)

And this is it, we did the setup of the minimal event-driven architecture.

End-to-end flow (what happens when you “INSERT INTO dm_* SELECT * FROM src_*”)
  1. Flink CDC attaches to the slot and (if configured) creates/uses the publication.
  2. It takes a consistent snapshot of orders/order_items, while buffering concurrent WAL changes.
  3. Once snapshot is done, it emits the buffered changes and continues streaming new WAL records.
  4. The JDBC sink receives a relational changelog:
    • INSERT → INSERT … ON CONFLICT DO UPDATE (upsert)
    • UPDATE → treated as upsert
    • DELETE → DELETE … WHERE pk = ?
  5. Checkpoints coordinate the CDC source offsets with the sink flush so recovery doesn’t duplicate effects. With upserts, the pipeline is effectively-once even if a retry happens.
Operational knobs that matter
  • One slot per independent reader (or a single job fan-out via Statement Set).
  • Checkpointing (you’ve set it):
    execution.checkpointing.interval=10 s + S3 paths for state.checkpoints.dir/state.savepoints.dir.
  • Target DDL: create the DM tables up front with PRIMARY KEY to enable upserts.
  • Throughput: increase job/connector parallelism, adjust the JDBC sink batch size/interval if needed (defaults are usually fine for this lab; see the sketch after this list).
  • DDL changes on the source: Postgres logical decoding does not emit DDL → when schemas change, redeploy the Flink table DDL (and adjust the target tables). Use Liquibase/Flyway to manage that cleanly.
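
For reference, the batching-related options of the Flink JDBC connector can be set explicitly in the sink table definition; a sketch based on dm_orders with illustrative values (the lab relies on the defaults):

CREATE TABLE dm_orders (
  order_id BIGINT,
  customer_id BIGINT,
  status STRING,
  order_ts TIMESTAMP(3),
  PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:postgresql://172.20.0.4:5432/logistics_dm',
  'table-name' = 'datamart.orders',
  'username' = 'postgres',
  'password' = 'your_postgres_password',
  'driver'    = 'org.postgresql.Driver',
  'sink.buffer-flush.max-rows' = '500',
  'sink.buffer-flush.interval' = '1s',
  'sink.max-retries' = '3'
);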
Limitations to keep in mind
  • PK required for clean upserts/deletes. Without it, the JDBC sink would just append.
  • DDL: manual coordination (expand/contract) is required.
  • Slots: one active consumer per slot; multiple slots can increase WAL retention if a reader lags or is stopped.
  • Backpressure: a slow data mart will throttle the job; watch Flink backpressure metrics and tune JDBC batching/parallelism.
Pros of this design
  • Transform while you move: filter, project, join, enrich, aggregate, de-duplicate, derive SCD logic—before data lands in the target.
  • Fan-out: the same CDC stream can drive many targets (multiple PG DBs, ClickHouse, Elasticsearch, etc.).
  • Decoupling & safety: backpressure, retries, checkpointed state, and rate-limiting protect the source and the target; you can shape load to the DM.
  • Schema mediation: implement expand/contract, rename mapping, default values—logical replication needs the same schema on both sides.
  • Non-PG targets: works when your sink isn’t Postgres (or not only Postgres).
  • Observability: Flink UI and metrics on lag, checkpoints and failures; couple it with already proven Prometheus exporters.
Cons
  • More moving parts: Flink cluster + connectors to operate.
  • Latency a bit higher: seconds (checkpoint cadence, batching) instead of sub-second WAL apply.
  • DDL isn’t automatic: Postgres logical decoding doesn’t emit DDL; you still manage schema changes (expand/contract + redeploy).
  • Throughput tuning: JDBC upserts can bottleneck a single DB; you tune parallelism and flush/batch settings.
When logical replication is better
  • You just need a near-real-time 1:1 copy PG→PG, no transforms, same schema, and lowest latency with minimal ops.
  • You’re okay with subscriber-side work (indexes/views) and whole-table subscription (no row-level filtering).
When should you “upgrade” with a Hudi sink (data lake)

Add Hudi when you need things a database replica can’t give you:

  • History & replayability: keep the raw truth (Bronze) cheaply; rebuild downstream tables any time.
  • Upserts/deletes at scale on files: CDC-friendly MOR/COW tables.
  • Time travel & incremental pulls: audits, backtests, point-in-time reprocessing.
  • Many heterogeneous consumers: BI + ML + ad-hoc engines (Trino/Spark/Presto) without re-extracting from OLTP.
  • Big volumes: storage and compute scale independently; you can run compaction/clustering off-peak.

Trade-off: more infra (S3/MinIO + compaction), higher “cold” query latency than a hot DB, and you still materialize serving tables for apps.

What I don’t like with this design :
  • Java based: I don't like having to handle Java issues, runtimes and versions. I am not a developer, and these kinds of things should be packaged in a way that makes them run fast and easily. This open-source version and setup is not that user-friendly and requires deep knowledge of the tool, but realistically not that much more than any other tool that would do similar things. Additionally, if you are going to run this in production there is a high likelihood that you are going to use Kubernetes, which solves those issues and offers scalability.
  • Version dependencies: I found out the hard way that the latest versions of all the packages used are not necessarily compatible with each other. In the open-source world, some projects sometimes need to catch up in development. Here I needed to use Flink 1.20.2 to have CDC working with Hudi, for example, and because I wanted to use both the JDBC connector and the Hudi sink I had to downgrade the entire stack. So be careful: Flink 1.20 is the LTS version, so this is fine, but if you want to use the latest features of the current stable version you might want to check compatibility first.
  • Cloud: this operational complexity of setup and management is handled for you in the cloud, hence the strong argument to go for it, apart from the obvious data sensitivity issue; but that last part is more a labelling and classification matter than really an argument about Flink usage. If your company is not taking advantage of cloud solutions because it is afraid of data loss, that is more an indicator of a lack of maturity in that area than of a technical limitation.

Additional notes: this setup is not meant for production; it is a simple showcase for lab purposes that you can easily reproduce. There is no persistence here (if you restart the Flink processes, your pipelines are lost). If you want a production-ready setup, refer to the official documentation and probably look at Kubernetes installations: Introduction | Apache Flink CDC.


Simplify trainings follow up with M-Files

Wed, 2025-09-10 09:58

In today’s organizations, things are continuously evolving. We have to become more agile, but we also need more control to ensure compliance with ISO certifications and the like.

In my last blog post, I discussed controlled documents, especially procedures. It’s good to write a procedure, but it’s better to ensure the audience is aware of it.

Looking closely

For that, M-Files also provides a solution with its Training Management module (part of the Compliance Kit).

Compliance Kit

If you have read my previous post, then you already know what this is about. For those who haven't, here is a brief overview: the Compliance Kit is an additional vault application that extends M-Files capabilities with modules like Version Control, Electronic Signature, Extended PDF Renditions, and more. Today we will explore the Training module.

All these modules are especially valuable in regulated industries such as pharmaceuticals, finance, and manufacturing, where traceability and accountability are essential.

Training Management Module: added value

This module allows organizations to manage employee training records directly within M-Files.

So after some configuration to fit your needs you will get:

  • Assignment of training: Automatically assign identified training based on roles, departments, or compliance requirements.
Training assignment
  • Tracking and reporting: a supervisor can easily check completion, overdue tasks, etc.
  • Version control: A new training request is triggered as soon as a new controlled document is released to ensure that people are always up to date.
New document to read
  • Integration with workflows: as usual with M-Files, it is possible to integrate training completion with other business processes to create strong links, avoid silos, and improve the user experience.
So, why is it great?

Admittedly, the training module is not a pure LMS solution. However, it has the advantage of allowing you to monitor training progress and ensure that employees are aware of the latest available information thanks to M-Files, without adding any other software.

M-Files and the Compliance Kit represent a strategic opportunity to deliver high-value solutions that align with both operational efficiency and regulatory compliance. The Training Management module, in particular, bridges the gap between document control and human resource development, making it a cornerstone for digital transformation in compliance-heavy sectors.

Whether you are deploying M-Files for the first time or optimizing an existing configuration, please do not hesitate to contact us for personalized support.


Reimage your ODA from scratch

Tue, 2025-09-09 12:45
Introduction

Oracle Database Appliance is still very popular today. Provisioning is fast: from unboxing to a first running database is done in one day. And everything is standardized.

Patching is not that straightforward. First, you can only apply the latest patch on top of one of the 4 previous releases, and if you come from an older release, you may have 2, 3 or more patches to apply to reach the current version. It can take hours, even days.

The question one could ask is whether it wouldn't be more efficient to do a full ODA reimaging instead of patching. Let's try to find out.

Environment and upgrade plan

This is a “real” environment composed of 4x ODA X8-2M: 2x ODAs for production, 2x ODAs for test/development. These servers were provisioned in 2020 with the first 19c release available on ODA: 19.6. They were updated twice in the past years to reach 19.20 in 2024. These 2 upgrades were not that easy for various reasons, and the systems are not that clean in my opinion.

According to MOS note 2757884.1, these ODAs are supported until August 2027, meaning that the last patch for these servers will be provided at that time. My client wants to keep them for at least 2 more years, so it's definitely worth applying the patch.

I could have applied patch 19.24 and then patch 19.27 on these ODAs, but as the first jump requires a Data Preserving Reprovisioning (meaning an OS reimaging) and as my ODAs are not that clean, I discussed it with my client and we chose to reimage the servers from scratch.

Note that the production ODAs we will focus on are running 15 Enterprise Edition databases, and that a Data Guard configuration exists for each of them.

The plan is to switch all primaries to one ODA, reimage the other one, wait 2 weeks to see if everything is fine, then switch over to the freshly reimaged ODA and reimage the other one the same way. As I don't want to patch my databases now, I will deploy a 19.20 DB home and migrate to the latest version later.

Note that when I started this reimaging, the current patch was 19.27. In the meantime, 19.28 has become available.

Before starting: check ODA’s health

Let's first check if the ODA I plan to reimage is healthy. You should normally monitor hardware alerts on your ODAs, and possibly configure Automatic Service Request (ASR) to address hardware issues as soon as possible.

odaadmcli show server | grep Problem
	Open Problems            : 0
	Open Problems Report     : System is healthy

No problem on this ODA. I wouldn’t do anything on an ODA that is not healthy: fix up hardware troubles before patching or reimaging an ODA.

Check if standby databases are synced and disable archive transport

Apart from DBTEST, which is a primary (I always have a standalone test database on my ODAs), every database must be a Physical Standby database. The expected downtime for such a reimaging is several hours or days depending on the number of databases, so checking that the sync is OK is good practice. I also disable transport to the standby databases and step the Data Guard configurations down to Maximum Performance:

vi check_role.sql
conn / as sysdba
set pages 0
select name||' is here '|| database_role from v$database;
exit;

for a in `ps -ef | grep pmon  | grep -v grep | grep -v ASM | grep -v APX | cut -c 58- | sort`; do  . oraenv <<< $a 1> /dev/null ; sqlplus -s /nolog @check_role; done
BPAPRD is here PHYSICAL STANDBY

DBTEST is here PRIMARY

BMC3PRD is here PHYSICAL STANDBY

MLOGAR is here PHYSICAL STANDBY

NIMSHMP is here PHYSICAL STANDBY

NOVOTEC is here PHYSICAL STANDBY

NTRAMPRD is here PHYSICAL STANDBY

NWTASKP is here PHYSICAL STANDBY

OPERA is here PHYSICAL STANDBY

PBYPRD is here PHYSICAL STANDBY

PRAXPRD is here PHYSICAL STANDBY

RCAT is here PHYSICAL STANDBY

SMARTPRD is here PHYSICAL STANDBY

SMKPRD is here PHYSICAL STANDBY

SPORT1 is here PHYSICAL STANDBY

SPORT2 is here PHYSICAL STANDBY


for a in `ps -ef | grep pmon  | grep -v grep | grep -v ASM | grep -v APX | grep -v DBTEST | cut -c 58- | sort`; do echo $a ; . oraenv <<< $a 1> /dev/null ; dgmgrl / "show configuration lag" | grep Lag; done
BPAPRD
             Transport Lag:	 0 seconds (computed 1 second ago)
             Apply Lag:          0 seconds (computed 1 second ago)
BMC3PRD
                  Transport Lag:      0 seconds (computed 0 seconds ago)
                  Apply Lag:          0 seconds (computed 0 seconds ago)
MLOGAR
                   Transport Lag:      0 seconds (computed 1 second ago)
                   Apply Lag:          0 seconds (computed 1 second ago)
NIMSHMP
                  Transport Lag:      0 seconds (computed 0 seconds ago)
                  Apply Lag:          0 seconds (computed 0 seconds ago)
NOVOTEC
                  Transport Lag:      0 seconds (computed 0 seconds ago)
                  Apply Lag:          0 seconds (computed 0 seconds ago)
NTRAMPRD
                    Transport Lag:	0 seconds (computed 0 seconds ago)
                    Apply Lag:          0 seconds (computed 0 seconds ago)
NWTASKP
                  Transport Lag:      0 seconds (computed 0 seconds ago)
                  Apply Lag:          0 seconds (computed 0 seconds ago)
OPERA
             Transport Lag:	 0 seconds (computed 0 seconds ago)
             Apply Lag:          0 seconds (computed 0 seconds ago)
PBYPRD
                  Transport Lag:      0 seconds (computed 0 seconds ago)
                  Apply Lag:          0 seconds (computed 0 seconds ago)
PRAXPRD
             Transport Lag:	 0 seconds (computed 1 second ago)
             Apply Lag:          0 seconds (computed 1 second ago)
RCAT
              Transport Lag:	  0 seconds (computed 1 second ago)
              Apply Lag:          0 seconds (computed 1 second ago)
 
SMARTPRD
             Transport Lag:	 0 seconds (computed 0 seconds ago)
             Apply Lag:          0 seconds (computed 0 seconds ago)
SMKPRD
             Transport Lag:	 0 seconds (computed 1 second ago)
             Apply Lag:          0 seconds (computed 1 second ago)
SPORT1
             Transport Lag:	 0 seconds (computed 0 seconds ago)
             Apply Lag:          0 seconds (computed 0 seconds ago)
SPORT2
             Transport Lag:	 0 seconds (computed 0 seconds ago)
             Apply Lag:          0 seconds (computed 0 seconds ago)

for a in `ps -ef | grep pmon  | grep -v grep | grep -v ASM | grep -v APX | cut -c 58- | sort`; do echo $a ; . oraenv <<< $a 1> /dev/null ; dgmgrl / "edit configuration set protection mode as MaxPerformance;" | grep Succeeded; dgmgrl / "EDIT DATABASE ${a}_IN SET STATE='TRANSPORT-OFF';" | grep Succeed ; done
BPAPRD
Succeeded.
Succeeded.
DBTEST
BMC3PRD
Succeeded.
Succeeded.
MLOGAR
Succeeded.
Succeeded
NIMSHMP
Succeeded.
Succeeded.
NOVOTEC
Succeeded.
Succeeded.
NTRAMPRD
Succeeded.
Succeeded
NWTASKP
Succeeded.
Succeeded.
OPERA
Succeeded.
Succeeded.
PBYPRD
Succeeded.
Succeeded.
PRAXPRD
Succeeded.
Succeeded.
RCAT
Succeeded.
Succeeded.
SMARTPRD
Succeeded.
Succeeded.
SMKPRD
Succeeded.
Succeeded.
SPORT1
Succeeded.
Succeeded.
SPORT2
Succeeded.
Succeeded.
Backup important files

As you will erase everything on this ODA, back up everything that needs to be backed up to a remote filesystem (/backup here). Usually, I use this script:

cat /backup/Patch19.27/dbi_prepatch_backup.sh
# Backup important files before patching
export BKPPATH=/backup/Patch19.27/backup_ODA_`hostname`_`date +"%Y%m%d_%H%M"`
echo "Backing up to " $BKPPATH
mkdir -p $BKPPATH
odacli list-databases > $BKPPATH/list-databases.txt
ps -ef | grep pmon | grep -v ASM | grep -v APX | grep -v grep | cut -c 58- | sort > $BKPPATH/running-instances.txt
odacli list-dbhomes > $BKPPATH/list-dbhomes.txt
odacli list-dbsystems > $BKPPATH/list-dbsystems.txt
odacli list-vms > $BKPPATH/list-vms.txt
crontab -u oracle -l  > $BKPPATH/crontab-oracle.txt
crontab -u grid -l  > $BKPPATH/crontab-grid.txt
crontab -l  > $BKPPATH/crontab-root.txt

cat /etc/fstab >  $BKPPATH/fstab.txt
cat /etc/oratab >  $BKPPATH/oratab.txt
cat /etc/sysconfig/network >  $BKPPATH/etc-sysconfig-network.txt
cat /etc/hosts  >  $BKPPATH/hosts
cat /etc/resolv.conf  >  $BKPPATH/resolv.conf
cat /etc/sysctl.conf  >  $BKPPATH/sysctl.conf

cp /etc/krb5.conf  $BKPPATH/
cp /etc/krb5.keytab  $BKPPATH/
mkdir $BKPPATH/network-scripts
cp  /etc/sysconfig/network-scripts/ifcfg*  $BKPPATH/network-scripts/
odacli describe-system > $BKPPATH/describe-system.txt
odacli  describe-component >  $BKPPATH/describe-component.txt
HISTFILE=~/.bash_history
set -o history
history > $BKPPATH/history-root.txt
cp /home/oracle/.bash_history $BKPPATH/history-oracle.txt
df -h >  $BKPPATH/filesystems-status.txt

for a in `odacli list-dbhomes -j | grep dbHomeLocation | awk -F '"' '{print $4}' | sort` ; do mkdir -p $BKPPATH/$a/network/admin/ ; cp $a/network/admin/tnsnames.ora $BKPPATH/$a/network/admin/; cp $a/network/admin/sqlnet.ora $BKPPATH/$a/network/admin/; done
for a in `odacli list-dbhomes -j | grep dbHomeLocation | awk -F '"' '{print $4}' | sort` ; do mkdir -p $BKPPATH/$a/owm/ ; cp -r $a/owm/* $BKPPATH/$a/owm/; done
cp `ps -ef | grep -v grep | grep LISTENER | awk -F ' ' '{print $8}' | awk -F 'bin' '{print $1}'`network/admin/listener.ora $BKPPATH/gridhome-listener.ora
cp `ps -ef | grep -v grep | grep LISTENER | awk -F ' ' '{print $8}' | awk -F 'bin' '{print $1}'`/network/admin/sqlnet.ora $BKPPATH/gridhome-sqlnet.ora

tar czf $BKPPATH/u01-app-oracle-admin.tgz /u01/app/oracle/admin/
tar czf $BKPPATH/u01-app-odaorabase-oracle-admin.tgz /u01/app/odaorabase/oracle/admin/
tar czf $BKPPATH/u01-app-oracle-local.tgz /u01/app/oracle/local/
tar czf $BKPPATH/home.tgz /home/
cp /etc/passwd $BKPPATH/
cp /etc/group $BKPPATH/

echo "End"
echo "Backup files size:"
du -hs  $BKPPATH
echo "Backup files content:"
ls -lrt  $BKPPATH

sh /backup/Patch19.27/dbi_prepatch_backup.sh
...

This backup script is probably not exhaustive: take some time to make one suitable for your needs.

Let’s also take a backup of the DB homes, in my case I only have one:

tar czf /backup/Patch19.27/odaprdcr_dbhome.tgz /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4

ls -lrth /backup/Patch19.27/odaprdcr_dbhome.tgz

-rw-r--r-- 1 root root 5.9G Sep  3 10:50 /backup/Patch19.27/odaprdcr_dbhome.tgz
Backup standby spfiles and controlfiles

As this ODA is only running standby databases, I don't need to back them up: I can use the backups from the primaries, assuming a reliable backup strategy has been deployed on the production databases. Two files differ between primary and standby databases: the spfile and the controlfile. So let's back up these 2 files for all databases; it only takes a couple of minutes:

su - oracle

mkdir /backup/Patch19.27/ctl_spf

cat /backup/Patch19.27/bck_ctl_spf.rcv

backup current controlfile format '/backup/Patch19.27/ctl_spf/ctl_%d_%T.bck';
backup spfile format '/backup/Patch19.27/ctl_spf/spf_%d_%T.bck';
exit;

for a in `ps -ef | grep pmon  | grep -v grep | grep -v ASM | grep -v APX | grep -v DBTEST | cut -c 58- | sort`; do echo $a ; . oraenv <<< $a 1> /dev/null ; rman target / cmdfile="/backup/Patch19.27/bck_ctl_spf.rcv" | grep Succeeded; done

ls -lrt /backup/Patch19.27/ctl_spf

total 33592
-rw-r----- 1 oracle asmadmin 46104576 Sep  3 10:33 ctl_BPAPRD_20250903.bck
-rw-r----- 1 oracle asmadmin   114688 Sep  3 10:33 spf_BPAPRD_20250903.bck
-rw-r----- 1 oracle asmadmin 21233664 Sep  3 10:33 ctl_BMC3PRD_20250903.bck
-rw-r----- 1 oracle asmadmin   114688 Sep  3 10:33 spf_BMC3PRD_20250903.bck
-rw-r----- 1 oracle asmadmin 27656192 Sep  3 10:33 ctl_MLOGAR_20250903.bck
-rw-r----- 1 oracle asmadmin   114688 Sep  3 10:34 spf_MLOGAR_20250903.bck
-rw-r----- 1 oracle asmadmin 30965760 Sep  3 10:34 ctl_NIMSHMP_20250903.bck
-rw-r----- 1 oracle asmadmin   114688 Sep  3 10:34 spf_NIMSHMP_20250903.bck
-rw-r----- 1 oracle asmadmin 27394048 Sep  3 10:34 ctl_NOVOTEC_20250903.bck
-rw-r----- 1 oracle asmadmin   114688 Sep  3 10:34 spf_NOVOTEC_20250903.bck
-rw-r----- 1 oracle asmadmin 77955072 Sep  3 10:34 ctl_NTRAMPRD_20250903.bck
-rw-r----- 1 oracle asmadmin   114688 Sep  3 10:34 spf_NTRAMPRD_20250903.bck
-rw-r----- 1 oracle asmadmin 29294592 Sep  3 10:34 ctl_NWTASKP_20250903.bck
-rw-r----- 1 oracle asmadmin   114688 Sep  3 10:34 spf_NWTASKP_20250903.bck
-rw-r----- 1 oracle asmadmin 66387968 Sep  3 10:35 ctl_OPERA_20250903.bck
-rw-r----- 1 oracle asmadmin   114688 Sep  3 10:35 spf_OPERA_20250903.bck
-rw-r----- 1 oracle asmadmin 20905984 Sep  3 10:35 ctl_PBYPRD_20250903.bck
-rw-r----- 1 oracle asmadmin   114688 Sep  3 10:35 spf_PBYPRD_20250903.bck
-rw-r----- 1 oracle asmadmin 28868608 Sep  3 10:35 ctl_PRAXPRD_20250903.bck
-rw-r----- 1 oracle asmadmin   114688 Sep  3 10:35 spf_PRAXPRD_20250903.bck
-rw-r----- 1 oracle asmadmin 14352384 Sep  3 10:35 ctl_RCAT_20250903.bck
-rw-r----- 1 oracle asmadmin   114688 Sep  3 10:35 spf_RCAT_20250903.bck
-rw-r----- 1 oracle asmadmin 35028992 Sep  3 10:35 ctl_SMARTPRD_20250903.bck
-rw-r----- 1 oracle asmadmin   114688 Sep  3 10:35 spf_SMARTPRD_20250903.bck
-rw-r----- 1 oracle asmadmin 21594112 Sep  3 10:35 ctl_SMKPRD_20250903.bck
-rw-r----- 1 oracle asmadmin   114688 Sep  3 10:36 spf_SMKPRD_20250903.bck
-rw-r----- 1 oracle asmadmin 36798464 Sep  3 10:36 ctl_SPORT1_20250903.bck
-rw-r----- 1 oracle asmadmin   114688 Sep  3 10:36 spf_SPORT1_20250903.bck
-rw-r----- 1 oracle asmadmin 20840448 Sep  3 10:36 ctl_SPORT2_20250903.bck
-rw-r----- 1 oracle asmadmin   114688 Sep  3 10:36 spf_SPORT2_20250903.bck

The restore will be easier with these files.
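
For reference, a sketch of how one of these standbys could later be restored from those files once the server has been redeployed (file names taken from the listing above; this is not part of the original procedure, and startup nomount may first require a temporary pfile):

rman target /

RMAN> startup nomount;
RMAN> restore spfile from '/backup/Patch19.27/ctl_spf/spf_BPAPRD_20250903.bck';
RMAN> shutdown immediate;
RMAN> startup nomount;
RMAN> restore controlfile from '/backup/Patch19.27/ctl_spf/ctl_BPAPRD_20250903.bck';
RMAN> alter database mount;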

Prepare the json file for ODA deployment

Before doing the cleanup, prepare the json file you will use for redeploying your ODA.

You can use the one used for initial deployment, or eventually create a new one.

You will need these things for the json:

  • which edition you use: Standard Edition or Enterprise Edition
  • hostname, IP, netmask and gateway of your server
  • hostname, IP, netmask and gateway of your ILOM
  • user and group ids if you don’t use the default ones
  • ratio between DATA and RECO for disk partitioning (you may change from what was configured before)
  • DNS and NTP IPs
  • default’s network interface (usually btbond1)
  • DBTEST parameters if you create a test database during provisioning (I would recommend doing so)

You will find most of these parameters in the backup you’ve done with the script.

Here is my json file:

{
    "instance": {
        "instanceBaseName": "odaprdcr",
        "dbEdition": "EE",
        "objectStoreCredentials": null,
        "name": "odaprdcr",
        "systemPassword": "*********",
        "timeZone": "Europe/Zurich",
        "domainName": "dbi-services.ch",
        "ntpServers" : ["10.8.122.132","10.8.122.133"],
        "dnsServers" : ["10.8.122.132","10.8.122.233"],
        "isMultiUserAccessEnabled": "false",
        "isRoleSeparated": true,
        "osUserGroup": {
            "users": [
                {
                    "userName": "oracle",
                    "userRole": "oracleUser",
                    "userId": 1000
                },
                {
                    "userName": "grid",
                    "userRole": "gridUser",
                    "userId": 1001
                }
            ],
            "groups": [
                {
                    "groupName": "oinstall",
                    "groupRole": "oinstall",
                    "groupId": 1001
                },
                {
                    "groupName": "dbaoper",
                    "groupRole": "dbaoper",
                    "groupId": 1002
                },
                {
                    "groupName": "dba",
                    "groupRole": "dba",
                    "groupId": 1000
                },
                {
                    "groupName": "asmadmin",
                    "groupRole": "asmadmin",
                    "groupId": 1004
                },
                {
                    "groupName": "asmoper",
                    "groupRole": "asmoper",
                    "groupId": 1005
                },
                {
                    "groupName": "asmdba",
                    "groupRole": "asmdba",
                    "groupId": 1006
                }
            ]
        }
    },
    "nodes": [
        {
            "nodeNumber": "0",
            "nodeName": "odaprdcr",
            "network": [
                {
                    "ipAddress" : "10.192.100.103",
                    "subNetMask" : "255.255.255.0",
                    "gateway" : "10.192.100.254",
                    "nicName": "btbond1",
                    "networkType": [
                        "Public"
                    ],
                    "isDefaultNetwork": true
                }
            ],
            "ilom": {
                "ilomName": "odaprdcr-ilom",
                "ipAddress":"10.192.200.103",
                "subNetMask":"255.255.255.0",
                "gateway":"10.192.200.254"
            }
        }
    ],
    "grid": {
        "vip": [],
        "diskGroup": [
            {
                "diskGroupName": "DATA",
                "diskPercentage": 85,
                "redundancy": "NORMAL"
            },
            {
                "diskGroupName": "RECO",
                "diskPercentage": 15,
                "redundancy": "NORMAL"
            }
        ],
        "language": "en",
        "scan": null
    },
    "database": {
        "dbName": "DBTEST",
        "dbCharacterSet": {
            "characterSet": "AL32UTF8",
            "nlsCharacterset": "AL16UTF16",
            "dbTerritory": "AMERICA",
            "dbLanguage": "AMERICAN"
        },
        "dbRedundancy": null,
        "dbRole": "PRIMARY",
        "adminPassword": "*********",
        "dbEdition": "EE",
        "databaseUniqueName": "DBTEST",
        "dbClass": "OLTP",
        "dbVersion": "19.27.0.0.250415",
        "dbHomeId": null,
        "instanceOnly": false,
        "isCdb": true,
        "pdBName": "PDB1",
        "dbShape": "odb1",
        "pdbAdminuserName": "pdbadmin",
        "enableTDE": false,
        "dbType": "SI",
        "dbStorage": "ASM",
        "dbConsoleEnable": false,
        "dbOnFlashStorage": false,
        "backupConfigId": null,
        "rmanBkupPassword": null,
        "tdePassword": null
    },
    "acr": {
        "acrEnable": false
    },
    "sysOraHomeStorage": {
        "diskgroup": "DATA",
        "volumeSize": 80
    }
}
Cleanup the data disks

Cleaning up the data disk headers is mandatory before reimaging; otherwise provisioning will fail. Make sure you're on the correct ODA, as you can't undo this action:

/opt/oracle/oak/onecmd/cleanup.pl
INFO: Log file is /opt/oracle/oak/log/odaprdcr/cleanup/cleanup_2025-09-03_10-53-04.log
INFO: Log file is /opt/oracle/oak/log/odaprdcr/cleanup/dcsemu_diag_precleanup_2025-09-03_10-53-04.log

INFO: *******************************************************************
INFO: ** Starting process to cleanup provisioned host odaprdcr         **
INFO: *******************************************************************
INFO: Default mode being used to cleanup a provisioned system.
INFO: It will change all ASM disk status from MEMBER to FORMER
Do you want to continue (yes/no) : yes
INFO: nodes will be rebooted
Do you want to continue (yes/no) : yes
INFO: /u01/app/19.20.0.0/oracle/bin/crsctl.bin

INFO: *************************************
INFO: ** Checking for GI bits presence
INFO: *************************************
INFO: GI bits /u01/app/19.20.0.0/oracle found on system under /u01/app directory...

INFO: ** Disabling AFD filtering
SUCCESS: AFD filtering disabled on all devices
INFO: ** Saving disk partition info
INFO: *************************************
INFO: ** Executing Stop Clusterware commands
INFO: *************************************
INFO: *************************************
INFO: ** Remove Oracle Stack
…

INFO: Rebooting the system via <reboot>...
INFO: Executing <reboot>

INFO: Cleanup was successful
INFO: Log file is /opt/oracle/oak/log/odaprdcr/cleanup/cleanup_2025-09-03_10-53-04.log

WARNING: After system reboot, please re-run "odacli update-repository" for GI/DB clones,
WARNING: before running "odacli create-appliance".
Reimage with the latest OS

ODA patch file 30403643 includes an ISO image you will use to reimage your ODA. From the ILOM, start the remote console, connect this ISO file, set the next boot device to CDROM and do a power cycle of the server. Reimaging an ODA takes about 50 minutes and doesn't need any input. Keep the remote console open for the next operation.

Configure firstnet

Once your ODA runs on a brand-new OS, you will need to do the initial network setup through the remote console. So log into the console with root/welcome1, type configure-firstnet and provide the IP address, netmask and gateway of your server. The LACP mode will depend on your switch configuration; if you didn't use it before, keep it disabled (on an ODA deployed in 2020, LACP mode was not even available). You normally won't use DHCP, nor VLANs, as the VLAN is usually tagged on the switch ports your ODA is connected to.

configure-firstnet
...

Once configured, you can reach your server through the network and copy the needed files for the next steps.

Patch the microcodes

Reimaging your ODA doesn't mean your microcodes are updated; only the software part is new. Therefore, the first task after reimaging is patching these microcodes.

Let’s copy and register the patch file on the ODA and apply the updates (patch 37817290 for 19.27):

cd /opt/dbi
unzip p37817290_1927000_Linux-x86-64.zip
odacli update-repository -f /opt/dbi/oda-sm-19.27.0.0.0-250601-server.zip

odacli describe-job -i "1e2dd483-2237-47b9-9c60-5395fa720fd2"
Job details
----------------------------------------------------------------
                     ID:  1e2dd483-2237-47b9-9c60-5395fa720fd2
            Description:  Repository Update
                 Status:  Success
                Created:  September 03, 2025 09:53:24 UTC
                Message:  /opt/dbi/oda-sm-19.27.0.0.0-250601-server.zip

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- -------
Unzip bundle                             September 03, 2025 09:53:39 UTC          September 03, 2025 09:53:54 UTC          Success

odacli describe-component
System Version
--------------
19.27.0.0.0

System Node Name
----------------
oak

Local System Version
--------------------
19.27.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                      19.27.0.0.0          up-to-date

DCSCONTROLLER                            19.27.0.0.0          up-to-date

DCSCLI                                   19.27.0.0.0          up-to-date

DCSAGENT                                 19.27.0.0.0          up-to-date

DCSADMIN                                 19.27.0.0.0          up-to-date

OS                                       8.10                 up-to-date

ILOM                                     5.1.1.23.r151750     5.1.4.25.r160118

BIOS                                     52100600             52140100

LOCAL CONTROLLER FIRMWARE {
     [c8, c9]                            214.2.271.9          up-to-date
}

SHARED CONTROLLER FIRMWARE {
     [c0, c1, c2, c3, c4, c5]            VDV1RL06             up-to-date
}

LOCAL DISK FIRMWARE {
     [c6d0, c6d1]                        XC311102             up-to-date
}

HMP                                      2.4.10.1.600         up-to-date

odacli create-prepatchreport -s -v 19.27.0.0.0
 
sleep 120 ; odacli describe-prepatchreport -i 8010302b-feca-4f23-92bc-6878aa586713
Prepatch Report
------------------------------------------------------------------------
                 Job ID:  8010302b-feca-4f23-92bc-6878aa586713
            Description:  Patch pre-checks for [OS, ILOM, SERVER] to 19.27.0.0.0
                 Status:  SUCCESS
                Created:  September 3, 2025 9:54:56 AM UTC
                 Result:  All pre-checks succeeded

Node Name
---------------
oak

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__OS__
Validate supported versions     Success   Validated minimum supported versions.
Validate patching tag           Success   Validated patching tag: 19.27.0.0.0.
Is patch location available     Success   Patch location is available.
Verify OS patch                 Success   There are no packages available for
                                          an update
Validate command execution      Success   Skipped command execution verfication
                                          - Instance is not provisioned

__ILOM__
Validate ILOM server reachable  Success   Successfully connected with ILOM
                                          server using public IP and USB
                                          interconnect
Validate supported versions     Success   Validated minimum supported versions.
Validate patching tag           Success   Validated patching tag: 19.27.0.0.0.
Is patch location available     Success   Patch location is available.
Checking Ilom patch Version     Success   Successfully verified the versions
Patch location validation       Success   Successfully validated location
Validate command execution      Success   Skipped command execution verfication
                                          - Instance is not provisioned

__SERVER__
Validate local patching         Success   Successfully validated server local
                                          patching
Validate command execution      Success   Skipped command execution verfication
                                          - Instance is not provisioned

odacli update-server -v 19.27.0.0.0

sleep 900 ; odacli describe-job -i 3a4ce5a7-6305-483d-a7c3-e6328d8a831e
Job details
----------------------------------------------------------------
                     ID:  3a4ce5a7-6305-483d-a7c3-e6328d8a831e
            Description:  Server Patching to 19.27.0.0.0
                 Status:  Success
                Created:  September 03, 2025 09:57:34 UTC
                Message:

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- -------
Validate ILOM server reachable           September 03, 2025 09:57:36 UTC          September 03, 2025 09:57:37 UTC          Success
Stop DCS Admin                           September 03, 2025 09:57:38 UTC          September 03, 2025 09:57:39 UTC          Success
Generate mTLS certificates               September 03, 2025 09:57:39 UTC          September 03, 2025 09:57:40 UTC          Success
Exporting Public Keys                    September 03, 2025 09:57:40 UTC          September 03, 2025 09:57:41 UTC          Success
Creating Trust Store                     September 03, 2025 09:57:41 UTC          September 03, 2025 09:57:43 UTC          Success
Restart DCS Admin                        September 03, 2025 09:57:43 UTC          September 03, 2025 09:58:03 UTC          Success
Update config files                      September 03, 2025 09:57:43 UTC          September 03, 2025 09:57:43 UTC          Success
Deactivate Unit[dnf-makecache.timer]     September 03, 2025 09:58:04 UTC          September 03, 2025 09:58:04 UTC          Success
Deactivate Unit[kdump.service]           September 03, 2025 09:58:04 UTC          September 03, 2025 09:58:05 UTC          Success
Modify BM udev rules                     September 03, 2025 09:58:05 UTC          September 03, 2025 09:58:25 UTC          Success
Creating repositories using yum          September 03, 2025 09:58:25 UTC          September 03, 2025 09:58:27 UTC          Success
Updating YumPluginVersionLock rpm        September 03, 2025 09:58:27 UTC          September 03, 2025 09:58:29 UTC          Success
Applying OS Patches                      September 03, 2025 09:58:29 UTC          September 03, 2025 09:59:03 UTC          Success
Creating repositories using yum          September 03, 2025 09:59:03 UTC          September 03, 2025 09:59:03 UTC          Success
Applying HMP Patches                     September 03, 2025 09:59:04 UTC          September 03, 2025 09:59:06 UTC          Success
Applying Firmware local Disk Patches     September 03, 2025 09:59:07 UTC          September 03, 2025 09:59:10 UTC          Success
Oda-hw-mgmt upgrade                      September 03, 2025 09:59:07 UTC          September 03, 2025 09:59:07 UTC          Success
Patch location validation                September 03, 2025 09:59:07 UTC          September 03, 2025 09:59:07 UTC          Success
Applying Firmware local Controller Patch September 03, 2025 09:59:10 UTC          September 03, 2025 09:59:15 UTC          Success
Checking Ilom patch Version              September 03, 2025 09:59:15 UTC          September 03, 2025 09:59:15 UTC          Success
Patch location validation                September 03, 2025 09:59:15 UTC          September 03, 2025 09:59:15 UTC          Success
Save password in Wallet                  September 03, 2025 09:59:15 UTC          September 03, 2025 09:59:16 UTC          Success
Apply Ilom patch                         September 03, 2025 09:59:16 UTC          September 03, 2025 10:09:24 UTC          Success
Disabling IPMI v2                        September 03, 2025 09:59:16 UTC          September 03, 2025 09:59:16 UTC          Success
Cleanup JRE Home                         September 03, 2025 10:09:24 UTC          September 03, 2025 10:09:24 UTC          Success
Copying Flash Bios to Temp location      September 03, 2025 10:09:24 UTC          September 03, 2025 10:09:24 UTC          Success
Generating and saving BOM                September 03, 2025 10:09:24 UTC          September 03, 2025 10:09:27 UTC          Success
Update System version                    September 03, 2025 10:09:24 UTC          September 03, 2025 10:09:24 UTC          Success
PreRebootNode Actions                    September 03, 2025 10:09:27 UTC          September 03, 2025 10:09:27 UTC          Success
Reboot Node                              September 03, 2025 10:09:27 UTC          September 03, 2025 10:09:27 UTC          Success

The system will reboot once patched. Let’s check the versions:

odacli describe-component
System Version
--------------
19.27.0.0.0

System Node Name
----------------
oak

Local System Version
--------------------
19.27.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                      19.27.0.0.0          up-to-date

DCSCONTROLLER                            19.27.0.0.0          up-to-date

DCSCLI                                   19.27.0.0.0          up-to-date

DCSAGENT                                 19.27.0.0.0          up-to-date

DCSADMIN                                 19.27.0.0.0          up-to-date

OS                                       8.10                 up-to-date

ILOM                                     5.1.4.25.r160118     up-to-date

BIOS                                     52140100             up-to-date

LOCAL CONTROLLER FIRMWARE {
     [c8, c9]                            214.2.271.9          up-to-date
}

SHARED CONTROLLER FIRMWARE {
     [c0, c1, c2, c3, c4, c5]            VDV1RL06             up-to-date
}

LOCAL DISK FIRMWARE {
     [c6d0, c6d1]                        XC311102             up-to-date
}

HMP                                      2.4.10.1.600         up-to-date

No need to apply the storage patch here, but if needed, apply it with:

odacli update-storage -v 19.27.0.0.0
Provision the appliance

Now it’s time to put the Oracle binaries back on this system. First register the GI and DB clones, then run create-appliance with the json file you prepared earlier:

cd /opt/dbi
unzip p30403662_1927000_Linux-x86-64.zip 
unzip p30403673_1927000_Linux-x86-64.zip

odacli update-repository -f /opt/dbi/odacli-dcs-19.27.0.0.0-250527-GI-19.27.0.0.zip
sleep 90 ; odacli describe-job -i c8f3559f-7f93-4df1-ba99-7511111f27ce

Job details
----------------------------------------------------------------
                     ID:  c8f3559f-7f93-4df1-ba99-7511111f27ce
            Description:  Repository Update
                 Status:  Success
                Created:  September 03, 2025 10:19:04 UTC
                Message:  /opt/dbi/odacli-dcs-19.27.0.0.0-250527-GI-19.27.0.0.zip

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- -------
Unzip bundle                             September 03, 2025 10:19:04 UTC          September 03, 2025 10:20:08 UTC          Success

odacli update-repository -f /opt/dbi/odacli-dcs-19.27.0.0.0-250527-DB-19.27.0.0.zip
sleep 90 ; odacli describe-job -i 96187596-16f6-413c-8905-9d85c6d3e106

Job details
----------------------------------------------------------------
                     ID:  96187596-16f6-413c-8905-9d85c6d3e106
            Description:  Repository Update
                 Status:  Success
                Created:  September 03, 2025 10:20:50 UTC
                Message:  /opt/dbi/odacli-dcs-19.27.0.0.0-250527-DB-19.27.0.0.zip

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- -------
Unzip bundle                             September 03, 2025 10:20:50 UTC          September 03, 2025 10:21:46 UTC          Success
 

odacli create-appliance -r /opt/dbi/deploy_odaprdcr.txt

Enter an initial password for Web Console account (oda-admin): **********
Confirm the password for Web Console account (oda-admin): **********
User 'oda-admin' created successfully...

odacli describe-job -i 22fec8d6-d9ae-4576-8f18-d8e1a0b00370
Job details
----------------------------------------------------------------
                     ID:  22fec8d6-d9ae-4576-8f18-d8e1a0b00370
            Description:  Provisioning service creation
                 Status:  Success
                Created:  September 03, 2025 12:45:37 CEST
                Message:

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- -------
Validate absence of Interconnect         September 03, 2025 12:45:40 CEST         September 03, 2025 12:45:40 CEST         Success
network configuration file
Restart network interface pubnet         September 03, 2025 12:45:43 CEST         September 03, 2025 12:45:49 CEST         Success
Setting up Network                       September 03, 2025 12:45:43 CEST         September 03, 2025 12:45:43 CEST         Success
Setting up Vlan                          September 03, 2025 12:45:50 CEST         September 03, 2025 12:45:51 CEST         Success
Restart network interface priv0.100      September 03, 2025 12:45:51 CEST         September 03, 2025 12:45:52 CEST         Success
Restart network interface privasm        September 03, 2025 12:45:52 CEST         September 03, 2025 12:45:53 CEST         Success
Restart network interface privasm        September 03, 2025 12:45:53 CEST         September 03, 2025 12:45:59 CEST         Success
Setting up Network                       September 03, 2025 12:45:53 CEST         September 03, 2025 12:45:53 CEST         Success
Network update                           September 03, 2025 12:45:59 CEST         September 03, 2025 12:46:13 CEST         Success
Restart network interface btbond1        September 03, 2025 12:45:59 CEST         September 03, 2025 12:46:03 CEST         Success
Setting up Network                       September 03, 2025 12:45:59 CEST         September 03, 2025 12:45:59 CEST         Success
Updating network                         September 03, 2025 12:45:59 CEST         September 03, 2025 12:46:13 CEST         Success
Restart network interface btbond1        September 03, 2025 12:46:03 CEST         September 03, 2025 12:46:07 CEST         Success
Restart network interface pubnet         September 03, 2025 12:46:07 CEST         September 03, 2025 12:46:13 CEST         Success
OS usergroup 'asmadmin' creation         September 03, 2025 12:46:13 CEST         September 03, 2025 12:46:13 CEST         Success
OS usergroup 'asmdba' creation           September 03, 2025 12:46:13 CEST         September 03, 2025 12:46:13 CEST         Success
OS usergroup 'asmoper' creation          September 03, 2025 12:46:13 CEST         September 03, 2025 12:46:13 CEST         Success
OS usergroup 'dba' creation              September 03, 2025 12:46:13 CEST         September 03, 2025 12:46:14 CEST         Success
Validate availability of pubnet          September 03, 2025 12:46:13 CEST         September 03, 2025 12:46:13 CEST         Success
OS user 'grid' creation                  September 03, 2025 12:46:14 CEST         September 03, 2025 12:46:15 CEST         Success
OS usergroup 'dbaoper' creation          September 03, 2025 12:46:14 CEST         September 03, 2025 12:46:14 CEST         Success
OS usergroup 'oinstall' creation         September 03, 2025 12:46:14 CEST         September 03, 2025 12:46:14 CEST         Success
OS user 'oracle' creation                September 03, 2025 12:46:15 CEST         September 03, 2025 12:46:16 CEST         Success
Add SYSNAME in Env                       September 03, 2025 12:46:16 CEST         September 03, 2025 12:46:16 CEST         Success
Backup Config name validation            September 03, 2025 12:46:16 CEST         September 03, 2025 12:46:16 CEST         Success
Backup config metadata persist           September 03, 2025 12:46:16 CEST         September 03, 2025 12:46:16 CEST         Success
Default backup policy creation           September 03, 2025 12:46:16 CEST         September 03, 2025 12:46:16 CEST         Success
Grant permission to RHP files            September 03, 2025 12:46:16 CEST         September 03, 2025 12:46:16 CEST         Success
Install oracle-ahf                       September 03, 2025 12:46:16 CEST         September 03, 2025 12:47:23 CEST         Success
Stop DCS Admin                           September 03, 2025 12:48:25 CEST         September 03, 2025 12:48:26 CEST         Success
Generate mTLS certificates               September 03, 2025 12:48:26 CEST         September 03, 2025 12:48:27 CEST         Success
 
Exporting Public Keys                    September 03, 2025 12:48:27 CEST         September 03, 2025 12:48:28 CEST         Success
Creating Trust Store                     September 03, 2025 12:48:29 CEST         September 03, 2025 12:48:31 CEST         Success
Restart DCS Admin                        September 03, 2025 12:48:31 CEST         September 03, 2025 12:48:52 CEST         Success
Update config files                      September 03, 2025 12:48:31 CEST         September 03, 2025 12:48:31 CEST         Success
Creating GI home directories             September 03, 2025 12:48:52 CEST         September 03, 2025 12:48:52 CEST         Success
Extract GI clone                         September 03, 2025 12:48:52 CEST         September 03, 2025 12:50:11 CEST         Success
Grid home creation                       September 03, 2025 12:48:52 CEST         September 03, 2025 12:50:13 CEST         Success
Storage discovery                        September 03, 2025 12:50:13 CEST         September 03, 2025 12:51:33 CEST         Success
Creating wallet for Root User            September 03, 2025 12:51:33 CEST         September 03, 2025 12:51:36 CEST         Success
Creating wallet for ASM Client           September 03, 2025 12:51:36 CEST         September 03, 2025 12:51:39 CEST         Success
Grid stack creation                      September 03, 2025 12:51:39 CEST         September 03, 2025 13:00:38 CEST         Success
Provisioning GI with RHP                 September 03, 2025 12:51:39 CEST         September 03, 2025 12:57:27 CEST         Success
Set CRS ping target                      September 03, 2025 12:57:28 CEST         September 03, 2025 12:57:29 CEST         Success
Updating GI home version                 September 03, 2025 12:57:29 CEST         September 03, 2025 12:57:34 CEST         Success
Restarting Clusterware                   September 03, 2025 12:57:35 CEST         September 03, 2025 13:00:38 CEST         Success
Post cluster OAKD configuration          September 03, 2025 13:00:38 CEST         September 03, 2025 13:01:33 CEST         Success
Disk group 'RECO' creation               September 03, 2025 13:01:41 CEST         September 03, 2025 13:01:52 CEST         Success
Setting ACL for disk groups              September 03, 2025 13:01:52 CEST         September 03, 2025 13:01:55 CEST         Success
Modify DB file attributes                September 03, 2025 13:01:55 CEST         September 03, 2025 13:02:03 CEST         Success
Register Scan and Vips to Public Network September 03, 2025 13:02:03 CEST         September 03, 2025 13:02:05 CEST         Success
Configure export clones resource         September 03, 2025 13:03:24 CEST         September 03, 2025 13:03:24 CEST         Success
Volume 'commonstore' creation            September 03, 2025 13:03:24 CEST         September 03, 2025 13:03:41 CEST         Success
ACFS File system 'DATA' creation         September 03, 2025 13:03:41 CEST         September 03, 2025 13:04:03 CEST         Success
Creating ACFS database home              September 03, 2025 13:04:04 CEST         September 03, 2025 13:05:22 CEST         Success
Database home creation                   September 03, 2025 13:04:04 CEST         September 03, 2025 13:07:55 CEST         Success
Validating dbHome available space        September 03, 2025 13:05:23 CEST         September 03, 2025 13:05:23 CEST         Success
Create required directories              September 03, 2025 13:05:24 CEST         September 03, 2025 13:05:24 CEST         Success
Creating DbHome Directory                September 03, 2025 13:05:24 CEST         September 03, 2025 13:05:24 CEST         Success
Extract DB clone                         September 03, 2025 13:05:24 CEST         September 03, 2025 13:06:18 CEST         Success
ProvDbHome by using RHP                  September 03, 2025 13:06:18 CEST         September 03, 2025 13:07:33 CEST         Success
Enable DB options                        September 03, 2025 13:07:33 CEST         September 03, 2025 13:07:47 CEST         Success
Creating wallet for DB Client            September 03, 2025 13:07:51 CEST         September 03, 2025 13:07:55 CEST         Success
Database Creation by RHP                 September 03, 2025 13:07:56 CEST         September 03, 2025 13:20:28 CEST         Success
Provisioning service creation            September 03, 2025 13:07:56 CEST         September 03, 2025 13:23:47 CEST         Success
Change permission for xdb wallet files   September 03, 2025 13:20:28 CEST         September 03, 2025 13:20:29 CEST         Success
Add Startup Trigger to Open all PDBS     September 03, 2025 13:20:29 CEST         September 03, 2025 13:20:30 CEST         Success
Place SnapshotCtrlFile in sharedLoc      September 03, 2025 13:20:30 CEST         September 03, 2025 13:20:33 CEST         Success
SqlPatch upgrade                         September 03, 2025 13:22:08 CEST         September 03, 2025 13:22:30 CEST         Success
Running dbms_stats init_package          September 03, 2025 13:22:30 CEST         September 03, 2025 13:22:31 CEST         Success
Set log_archive_dest for Database        September 03, 2025 13:22:31 CEST         September 03, 2025 13:22:34 CEST         Success
Updating the Database version            September 03, 2025 13:22:34 CEST         September 03, 2025 13:22:38 CEST         Success
Users tablespace creation                September 03, 2025 13:23:47 CEST         September 03, 2025 13:23:50 CEST         Success
Clear all listeners from Database        September 03, 2025 13:23:50 CEST         September 03, 2025 13:23:50 CEST         Success
Clear local and remote listeners from    September 03, 2025 13:23:52 CEST         September 03, 2025 13:23:53 CEST         Success
Database
Copy Pwfile to Shared Storage            September 03, 2025 13:23:53 CEST         September 03, 2025 13:23:59 CEST         Success
Persist new agent state entry            September 03, 2025 13:24:02 CEST         September 03, 2025 13:24:02 CEST         Success
Persist new agent state entry            September 03, 2025 13:24:02 CEST         September 03, 2025 13:24:02 CEST         Success
Provisioning service creation            September 03, 2025 13:24:02 CEST         September 03, 2025 13:24:02 CEST         Success
Restart DCS Agent                        September 03, 2025 13:24:02 CEST         September 03, 2025 13:24:03 CEST         Success

Creating the appliance usually takes 40 to 50 minutes.

Check core reduction

Reimaging an ODA will keep the core reduction, but verify that this is still the case with these commands:

odacli list-cpucores
Node  Cores  Modified                           Job Status
----- ------ ---------------------------------- ---------------
0     4      September 3, 2025 9:46:11 AM CEST  CONFIGURED

lscpu | grep Core
Core(s) per socket:  2

Here I have 4 enabled cores, meaning 2 processor licenses.
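If you ever need to adjust this value, it can be done with odacli, for example for 4 cores (a sketch only, double check the licensing implications before running it):

odacli update-cpucore -c 4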

Setup older DB home

As I will bring back my Data Guard setup with 19.20 databases, I will need a 19.20 DB home. Let’s register the clone file and create the DB home. An increase of the volume hosting the diagnostic dest (ORACLE_BASE) is probably needed, as 10GB is way too small for me:

odacli list-dbhome-storages
=============================================================================================================
ID                                     Node Description          Disk Group Volume      Size(GB)   Status
-------------------------------------- ---- -------------------- ---------- ----------- ---------- ----------
bef3ae8a-96d3-44d4-b179-1e5dbdd2a939   0    ORACLE_HOME          DATA       orahome_sh  80         CONFIGURED
e972cc17-ff92-4203-b23f-55fab02a7cc0   0    ORACLE_BASE          DATA       odabase_n0  10         CONFIGURED
=============================================================================================================

odacli modify-dbhome-storage -i e972cc17-ff92-4203-b23f-55fab02a7cc0 -s 50

sleep 10 ; odacli list-dbhome-storages
=============================================================================================================
ID                                     Node Description          Disk Group Volume      Size(GB)   Status
-------------------------------------- ---- -------------------- ---------- ----------- ---------- ----------
bef3ae8a-96d3-44d4-b179-1e5dbdd2a939   0    ORACLE_HOME          DATA       orahome_sh  80         CONFIGURED
e972cc17-ff92-4203-b23f-55fab02a7cc0   0    ORACLE_BASE          DATA       odabase_n0  50         CONFIGURED
=============================================================================================================


cd /opt/dbi
unzip p30403662_1920000_Linux-x86-64.zip 
odacli update-repository -f /opt/dbi/odacli-dcs-19.20.0.0.0-230720-DB-19.20.0.0.zip

sleep 30 ; odacli describe-job -i 3ee5db4d-d350-47b4-8e24-18949d244ab7
Job details
----------------------------------------------------------------
                     ID:  3ee5db4d-d350-47b4-8e24-18949d244ab7
            Description:  Repository Update
                 Status:  Success
                Created:  September 03, 2025 13:47:23 CEST
                Message:  /opt/dbi/odacli-dcs-19.20.0.0.0-230720-DB-19.20.0.0.zip

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- -------
Unzip bundle                             September 03, 2025 13:47:23 CEST         September 03, 2025 13:47:55 CEST         Success

odacli create-dbhome -v 19.20.0.0.230718

sleep 240 ; odacli describe-job -i 3123544b-60c5-418a-82b5-c110dd68aafa
Job details
----------------------------------------------------------------
                     ID:  3123544b-60c5-418a-82b5-c110dd68aafa
            Description:  Database Home OraDB19000_home2 creation with version :19.20.0.0.230718
                 Status:  Success
                Created:  September 03, 2025 13:53:21 CEST
                Message:  Create Database Home

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- -------
Setting up SSH equivalence               September 03, 2025 13:53:32 CEST         September 03, 2025 13:53:34 CEST         Success
Setting up SSH equivalence               September 03, 2025 13:53:34 CEST         September 03, 2025 13:53:35 CEST         Success
Creating ACFS database home              September 03, 2025 13:53:35 CEST         September 03, 2025 13:53:35 CEST         Success
Validating dbHome available space        September 03, 2025 13:53:35 CEST         September 03, 2025 13:53:35 CEST         Success
Create required directories              September 03, 2025 13:53:36 CEST         September 03, 2025 13:53:36 CEST         Success
Creating DbHome Directory                September 03, 2025 13:53:36 CEST         September 03, 2025 13:53:36 CEST         Success
Extract DB clone                         September 03, 2025 13:53:36 CEST         September 03, 2025 13:54:23 CEST         Success
ProvDbHome by using RHP                  September 03, 2025 13:54:23 CEST         September 03, 2025 13:55:39 CEST         Success
Enable DB options                        September 03, 2025 13:55:39 CEST         September 03, 2025 13:55:54 CEST         Success
Creating wallet for DB Client            September 03, 2025 13:55:57 CEST         September 03, 2025 13:55:58 CEST         Success

odacli list-dbhomes
ID                                       Name                 DB Version           DB Edition Home Location                                            Status
---------------------------------------- -------------------- -------------------- ---------- -------------------------------------------------------- ----------
19ee71be-07ae-4373-9bb6-31db8195e36c     OraDB19000_home1     19.27.0.0.250415     EE         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1     CONFIGURED
cd2dc372-3f95-40f1-917c-ee0793c26664     OraDB19000_home2     19.20.0.0.230718     EE         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2     CONFIGURED
Create the databases

Your ODA registry needs to know the databases, so I will create them before doing the restore. I don’t use the -io (instance only) option of odacli for database creation, as the srvctl entries would be missing. So let’s create a new standalone database and then remove its files:

odacli create-database -cs AL32UTF8 -no-c -u BMC3PRD_CR -dh 'cd2dc372-3f95-40f1-917c-ee0793c26664' -n BMC3PRD -s odb1s -l AMERICAN -dt AMERICA -no-co -r asm

sleep 660 ; odacli describe-job -i 40b80359-0607-4330-ae6e-055d284a780e
Job details
----------------------------------------------------------------
                     ID:  40b80359-0607-4330-ae6e-055d284a780e
            Description:  Database service creation with DB name: BMC3PRD
                 Status:  Success
                Created:  September 03, 2025 15:11:48 CEST
                Message:

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- -------
Database Creation by RHP                 September 03, 2025 15:11:50 CEST         September 03, 2025 15:19:24 CEST         Success
Database Service creation                September 03, 2025 15:11:50 CEST         September 03, 2025 15:22:19 CEST         Success
Setting up SSH equivalence               September 03, 2025 15:11:50 CEST         September 03, 2025 15:11:50 CEST         Success
Setting up SSH equivalence               September 03, 2025 15:11:50 CEST         September 03, 2025 15:11:50 CEST         Success
Change permission for xdb wallet files   September 03, 2025 15:19:24 CEST         September 03, 2025 15:19:25 CEST         Success
Place SnapshotCtrlFile in sharedLoc      September 03, 2025 15:19:25 CEST         September 03, 2025 15:19:28 CEST         Success
SqlPatch upgrade                         September 03, 2025 15:20:43 CEST         September 03, 2025 15:20:50 CEST         Success
Running dbms_stats init_package          September 03, 2025 15:20:50 CEST         September 03, 2025 15:20:51 CEST         Success
Set log_archive_dest for Database        September 03, 2025 15:20:51 CEST         September 03, 2025 15:20:54 CEST         Success
Updating the Database version            September 03, 2025 15:20:54 CEST         September 03, 2025 15:20:58 CEST         Success
Create Users tablespace                  September 03, 2025 15:22:19 CEST         September 03, 2025 15:22:20 CEST         Success
Clear all listeners from Database        September 03, 2025 15:22:20 CEST         September 03, 2025 15:22:21 CEST         Success
Copy Pwfile to Shared Storage            September 03, 2025 15:22:23 CEST         September 03, 2025 15:22:27 CEST         Success

su - oracle
. oraenv <<< BMC3PRD
sqlplus / as sysdba
set pages 0
set feedback off
spool /tmp/rmfiles.sh
select 'asmcmd rm -fr '||file_name from dba_data_files;
select 'asmcmd rm -fr '||file_name from dba_temp_files;
select 'asmcmd rm -fr '||member from v$logfile;
select 'asmcmd rm -fr '||name from v$controlfile;
select 'asmcmd rm -fr '||value from v$parameter where name='spfile';
spool off
shutdown abort;
exit
exit
su - grid
sh /tmp/rmfiles.sh
exit

Do the same for the other databases. Don’t try to create your databases with multiple odacli jobs running at the same time: this is something the ODA cannot do in parallel; creation will be serialized and the jobs will wait for the current one to complete.
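If in doubt, you can list the jobs before submitting the next creation and check that nothing is still running:

odacli list-jobs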

Restore the databases

I’m used to moving the spfile from ASM to the filesystem, so it’s the exact same path as on the other server. At the same time, I change the role of the database, as odacli created it as a primary:

su - oracle
. oraenv <<< BMC3PRD
srvctl modify database -db BMC3PRD_CR -spfile /u01/app/oracle/admin/BMC3PRD/pfile/spfileBMC3PRD.ora -startoption mount -role PHYSICAL_STANDBY

Now let’s restore the spfile and the controlfile. RMAN is able to start an instance without any spfile in order to recover the spfile, which is very convenient:

rman target / 

startup nomount;
restore spfile from '/backup/Patch19.27/ctl_spf/spf_BMC3PRD_20250903.bck';
Starting restore at 04-SEP-25
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=433 device type=DISK
channel ORA_DISK_1: restoring spfile from AUTOBACKUP /backup/Patch19.27/ctl_spf/spf_BMC3PRD_20250903.bck
channel ORA_DISK_1: SPFILE restore from AUTOBACKUP complete
Finished restore at 04-SEP-25
 
startup force nomount;

restore controlfile from '/backup/Patch19.27/ctl_spf/ctl_BMC3PRD_20250903.bck';

Starting restore at 04-SEP-25
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=26 device type=DISK
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:05
output file name=+RECO/BMC3PRD_CR/CONTROLFILE/current.486.1210937913
Finished restore at 04-SEP-25

Now I can start restoring the database. Choose the parallel degree carefully: as your ODA has nothing else to do, I would recommend using as many channels as you can for big databases, for example between 4 and 8 channels for an ODA with 4 enabled cores:

startup force mount;
run {
allocate channel c1 device type disk;
allocate channel c2 device type disk;
allocate channel c3 device type disk;
allocate channel c4 device type disk;
restore database ;
}


…
channel c4: restoring datafile 00004 to +DATA/BMC3PRD/DATAFILE/undotbs1.291.1042201603
channel c4: restoring datafile 00005 to +DATA/BMC3PRD/DATAFILE/users.294.1042202001
channel c4: reading from backup piece /backup/BMC3PRD/backupset/20250901_000002_inc0_BMC3PRD_45931446_s36914_p1.bck
channel c3: piece handle=/backup/BMC3PRD/backupset/20250901_000002_inc0_BMC3PRD_45931446_s36908_p2.bck tag=INC0_20250901_000002
channel c3: restored backup piece 2
channel c3: restore complete, elapsed time: 00:00:38
channel c1: piece handle=/backup/BMC3PRD/backupset/20250901_000002_inc0_BMC3PRD_45931446_s36911_p2.bck tag=INC0_20250901_000002
channel c1: restored backup piece 2
channel c1: restore complete, elapsed time: 00:00:42
channel c4: piece handle=/backup/BMC3PRD/backupset/20250901_000002_inc0_BMC3PRD_45931446_s36914_p1.bck tag=INC0_20250901_000002
channel c4: restored backup piece 1
channel c4: restore complete, elapsed time: 00:00:35
channel c2: piece handle=/backup/BMC3PRD/backupset/20250901_000002_inc0_BMC3PRD_45931446_s36911_p1.bck tag=INC0_20250901_000002
channel c2: restored backup piece 1
channel c2: restore complete, elapsed time: 00:00:54
Finished restore at 04-SEP-25
released channel: c1
released channel: c2
released channel: c3
released channel: c4

exit
Add TNS entries for Data Guard and a static entry for the listener

Your database is now restored; you’ll need to configure communication with the primary. Let’s add the following TNS entries:

vi $ORACLE_HOME/network/admin/tnsnames.ora
…

BMC3PRD_CR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = odaprdcr)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = BMC3PRD_CR)
    )
  )

BMC3PRD_IN =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = odaprdin)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = BMC3PRD_IN)
    )
  )

You will also need the static entry for your database in the listener.ora file:

su - grid
vi $ORACLE_HOME/network/admin/listener.ora
SID_LIST_LISTENER=
(SID_LIST=
  (SID_DESC=
    (GLOBAL_DBNAME = BMC3PRD_CR)
    (ORACLE_HOME = /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2)
    (SID_NAME = BMC3PRD)
  )
)
exit
Copy the password file and modify its location

Pick up the password file from the primary server and relocate it to the filesystem with srvctl:

scp oracle@odaprdin:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2/dbs/orapwBMC3PRD /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2/dbs/

srvctl modify database -db BMC3PRD_CR -pwfile
srvctl stop database -db BMC3PRD_CR ; sleep 10 ; srvctl start database -db BMC3PRD_CR -o mount

You can use a symbolic link in the dbs subfolder if you want to put this password file alongside your spfile.
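A minimal sketch, assuming you want to keep the password file in the same pfile directory as the spfile (the paths are the ones used above for this database):

su - oracle
cd /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2/dbs
# move the password file next to the spfile and link it back into dbs
mv orapwBMC3PRD /u01/app/oracle/admin/BMC3PRD/pfile/
ln -s /u01/app/oracle/admin/BMC3PRD/pfile/orapwBMC3PRD orapwBMC3PRD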

Recover the database

Restored datafiles are probably several hours old, so let’s do a recover on the standby from the primary service before configuring Data Guard:

rman target sys
recover database from service 'BMC3PRD_IN';
…
channel ORA_DISK_1: using network backup set from service BMC3PRD_IN
destination for restore of datafile 00007: +DATA/BMC3PRD_CR/DATAFILE/dm_mc3prd_docbase.477.1210938063
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service BMC3PRD_IN
destination for restore of datafile 00008: +DATA/BMC3PRD_CR/DATAFILE/dm_mc3prd_docbase.478.1210938063
channel ORA_DISK_1: restore complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service BMC3PRD_IN
destination for restore of datafile 00009: +DATA/BMC3PRD_CR/DATAFILE/dm_mc3prd_docbase.479.1210938067
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service BMC3PRD_IN
destination for restore of datafile 00010: +DATA/BMC3PRD_CR/DATAFILE/dm_mc3prd_docbase.480.1210938165
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07

starting media recovery

media recovery complete, elapsed time: 00:00:00
Finished recover at 04-SEP-25

exit

If archivelog files are missing or if recovery is not possible, it’s not necessarily a problem: Data Guard should be able to resolve a small or even a larger archive gap.

Rebuild Data Guard configuration

Instead of adding my standby database back to the existing Data Guard configuration, I will recreate the configuration. It’s a matter of seconds:

dgmgrl sys
*****
Remove configuration;
create configuration DTG as primary database is 'BMC3PRD_IN' connect identifier is 'BMC3PRD_IN';
add database 'BMC3PRD_CR' as connect identifier is 'BMC3PRD_CR';
enable configuration;
edit database 'BMC3PRD_CR' set property LogXptMode='SYNC';
edit database 'BMC3PRD_IN' set property LogXptMode='SYNC';
edit database 'BMC3PRD_CR' set property StandbyFileManagement='AUTO';
edit database 'BMC3PRD_IN' set property StandbyFileManagement='AUTO';
EDIT DATABASE 'BMC3PRD_CR' SET PROPERTY 'ArchiveLagTarget'=1200; 
EDIT DATABASE 'BMC3PRD_IN' SET PROPERTY 'ArchiveLagTarget'=1200; 
edit configuration set protection mode as maxavailability;

Depending on the lag between your primary and this standby, it can take several minutes to bring both databases back in sync:

show configuration lag;
Configuration - dtg

  Protection Mode: MaxAvailability
  Members:
  BMC3PRD_IN - Primary database
    BMC3PRD_CR - Physical standby database
                  Transport Lag:      0 seconds (computed 1 second ago)
                  Apply Lag:          0 seconds (computed 1 second ago)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 6 seconds ago)

This database is OK; let’s continue with the next ones.

Additional settings

If you made additional settings on your ODA, you’ll have to configure them again. These could include:

  • monitoring
  • additional network interfaces
  • additional services
  • mount points and shares
  • firewall rules
  • Kerberos authentication
  • compatibility modes in the sqlnet.ora file
Conclusion

With several multi-TB databases to restore, it took me less than 3 days per ODA for this operation. And these ODAs are now clean and ready for their second life.

Reimaging an ODA from scratch is a good solution as long as a Data Guard setup allows the databases to run on another ODA for several days. With good preparation, the job is not that big, and there is much less troubleshooting compared to patching. This is something to consider.

The article Reimage your ODA from scratch appeared first on dbi Blog.

Handling of documents including meta data with Chronoscan and AI

Sun, 2025-08-31 10:12

Chronoscan is a well-known application to scan, split and extract metadata from documents. This is possible with traditional functionalities like Intelli-Tag from Chronoscan, which requires additional, complex and sometimes time-consuming configuration.

It also offers an AI implementation, which makes things much less complex and reduces implementation time considerably. In the latest version that I reviewed and tested, I discovered the possibility of using a local AI implementation. This means that AI servers which are not under your own control can be eliminated. As you can imagine, this is a game changer, as it puts your data back under your own governance, which is a major achievement in terms of security requirements.

Introduction to the use case

In this blog, I will present a possible implementation based on the handling of invoices. We will import them, split them and extract the metadata. The next step is straightforward: we will store them in an M-Files vault, including the PDF and the metadata we defined.

The diagram below shows the technical requirements at a high level and provides a comprehensive overview of the necessary infrastructure and software.

The first step in the process is to add the documents to the designated hot folder. This will initiate the automatic document handling process.

It is possible to move a document manually from an email or directly from a scanner, and add it to the hot folder.

How the document import works

Chronoscan regularly checks the hot folder for new documents. Should a valid document be identified, the system will initiate the import, analysis and export processes. In this example, we will follow the steps outlined in the diagram below when importing the document.

Finally, the import to M-Files

In the final stage of the process, the document and its extracted metadata are exported to the M-Files vault. As the image below shows, the document has been correctly assigned to the relevant document class, including the defined metadata.

Should you be seeking a comprehensive automated solution for the management of both electronic and paper-based documentation, with subsequent processing according to your business’s specific workflow, we invite you to contact us to discuss your particular requirements.

We can then work together to define a solution and shape a project. As a first step, we can create a proof of concept to confirm that the solution will meet your needs, make your daily work easier and save you resources.

The article Handling of documents including meta data with Chronoscan and AI appeared first on dbi Blog.

Unit Testing PostgreSQL with pgTAP

Fri, 2025-08-29 11:23
Introduction

Unit testing is a fundamental practice in software development, ensuring that individual components function correctly. When working with PostgreSQL, testing database logic—such as functions, triggers, and constraints—is crucial for maintaining data integrity and reliability. One powerful tool for this purpose is pgTAP.
pgTAP is a PostgreSQL extension that provides a set of TAP (Test Anything Protocol) functions for writing unit tests directly in SQL. It allows developers to test database functions, schemas, constraints, and much more in an automated and repeatable way.

Installing pgTAP

Before using pgTAP, you need to install it on your PostgreSQL instance. You can install it from source as follows:

10:18:56 postgres@ws-pgdev:/home/postgres/ [sw] wget https://api.pgxn.org/dist/pgtap/1.3.3/pgtap-1.3.3.zip .
--2025-04-02 10:19:53--  https://api.pgxn.org/dist/pgtap/1.3.3/pgtap-1.3.3.zip
Resolving api.pgxn.org (api.pgxn.org)... 88.198.49.178
Connecting to api.pgxn.org (api.pgxn.org)|88.198.49.178|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 329966 (322K) [application/zip]
Saving to: ‘pgtap-1.3.3.zip’

pgtap-1.3.3.zip                         100%[============================================================================>] 322.23K  --.-KB/s    in 0.1s

2025-04-02 10:19:54 (3.18 MB/s) - ‘pgtap-1.3.3.zip’ saved [329966/329966]

--2025-04-02 10:19:54--  http://./
Resolving . (.)... failed: No address associated with hostname.
wget: unable to resolve host address ‘.’
FINISHED --2025-04-02 10:19:54--
Total wall clock time: 0.4s
Downloaded: 1 files, 322K in 0.1s (3.18 MB/s)

10:19:54 postgres@ws-pgdev:/home/postgres/ [sw] unzip pgtap-1.3.3.zip
Archive:  pgtap-1.3.3.zip
b941782fada240afdb7057065eb3261a21e8512c
   creating: pgtap-1.3.3/
  inflating: pgtap-1.3.3/Changes
...
...

10:20:11 postgres@ws-pgdev:/home/postgres/ [sw] cd pgtap-1.3.3/
11:11:58 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] make

GNUmake running against Postgres version 17.0, with pg_config located at /u01/app/postgres/product/17/db_0/bin

Makefile:186: To use pg_prove, TAP::Parser::SourceHandler::pgTAP Perl module
Makefile:187: must be installed from CPAN. To do so, simply run:
Makefile:188: cpan TAP::Parser::SourceHandler::pgTAP
cp sql/pgtap--0.95.0--0.96.0.sql.in sql/pgtap--0.95.0--0.96.0.sql
cp sql/pgtap--0.96.0--0.97.0.sql.in sql/pgtap--0.96.0--0.97.0.sql
cp sql/pgtap--0.97.0--0.98.0.sql.in sql/pgtap--0.97.0--0.98.0.sql
cp sql/pgtap--0.98.0--0.99.0.sql.in sql/pgtap--0.98.0--0.99.0.sql
cp sql/pgtap--0.99.0--1.0.0.sql.in sql/pgtap--0.99.0--1.0.0.sql
cp sql/pgtap.sql.in sql/pgtap.sql
sed -e 's,MODULE_PATHNAME,$libdir/pgtap,g' -e 's,__OS__,linux,g' -e 's,__VERSION__,1.3,g' sql/pgtap.sql > sql/pgtap.tmp
mv sql/pgtap.tmp sql/pgtap.sql
'/usr/bin/perl' -e 'for (grep { /^CREATE /} reverse <>) { chomp; s/CREATE (OR REPLACE )?/DROP /; s/DROP (FUNCTION|VIEW|TYPE) /DROP $1 IF EXISTS /; s/ (DEFAULT|=)[ ]+[a-zA-Z0-9]+//g; print "$_;\n" }' sql/pgtap.sql > sql/uninstall_pgtap.sql
cp sql/pgtap.sql.in sql/pgtap-static.sql.tmp

*** Patching pgtap-static.sql with compat/install-9.6.patch
patching file sql/pgtap-static.sql.tmp

*** Patching pgtap-static.sql with compat/install-9.4.patch
patching file sql/pgtap-static.sql.tmp

*** Patching pgtap-static.sql with compat/install-9.2.patch
patching file sql/pgtap-static.sql.tmp

*** Patching pgtap-static.sql with compat/install-9.1.patch
patching file sql/pgtap-static.sql.tmp
sed -e 's#MODULE_PATHNAME#$libdir/pgtap#g' -e 's#__OS__#linux#g' -e 's#__VERSION__#1.3#g' sql/pgtap-static.sql.tmp > sql/pgtap-static.sql
'/usr/bin/perl' compat/gencore 0 sql/pgtap-static.sql > sql/pgtap-core.sql
'/usr/bin/perl' compat/gencore 1 sql/pgtap-static.sql > sql/pgtap-schema.sql
cp sql/pgtap.sql sql/pgtap--1.3.3.sql
cp sql/pgtap-core.sql sql/pgtap-core--1.3.3.sql
cp sql/pgtap-schema.sql sql/pgtap-schema--1.3.3.sql

11:12:02 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] make install

GNUmake running against Postgres version 17.0, with pg_config located at /u01/app/postgres/product/17/db_0/bin

Makefile:186: To use pg_prove, TAP::Parser::SourceHandler::pgTAP Perl module
Makefile:187: must be installed from CPAN. To do so, simply run:
Makefile:188: cpan TAP::Parser::SourceHandler::pgTAP
mkdir -p '/u01/app/postgres/product/17/db_0/share/extension'
mkdir -p '/u01/app/postgres/product/17/db_0/share/extension'
mkdir -p '/u01/app/postgres/product/17/db_0/share/doc/extension'
/bin/sh /u01/app/postgres/product/17/db_0/lib/pgxs/src/makefiles/../../config/install-sh -c -m 644 .//pgtap.control '/u01/app/postgres/product/17/db_0/share/extension/'
/bin/sh /u01/app/postgres/product/17/db_0/lib/pgxs/src/makefiles/../../config/install-sh -c -m 644 .//sql/pgtap--0.90.0--0.91.0.sql .//sql/pgtap--0.91.0--0.92.0.sql .//sql/pgtap--0.92.0--0.93.0.sql .//sql/pgtap--0.93.0--0.94.0.sql .//sql/pgtap--0.94.0--0.95.0.sql .//sql/pgtap--0.95.0--0.96.0.sql .//sql/pgtap--0.96.0--0.97.0.sql .//sql/pgtap--0.97.0--0.98.0.sql .//sql/pgtap--0.98.0--0.99.0.sql .//sql/pgtap--0.99.0--1.0.0.sql .//sql/pgtap--1.0.0--1.1.0.sql .//sql/pgtap--1.1.0--1.2.0.sql .//sql/pgtap--1.2.0--1.3.0.sql .//sql/pgtap--1.3.0--1.3.1.sql .//sql/pgtap--1.3.1--1.3.2.sql .//sql/pgtap--1.3.2--1.3.3.sql .//sql/pgtap--1.3.3.sql .//sql/pgtap--unpackaged--0.91.0.sql .//sql/pgtap-core--1.3.3.sql .//sql/pgtap-core.sql .//sql/pgtap-schema--1.3.3.sql .//sql/pgtap-schema.sql .//sql/pgtap.sql .//sql/uninstall_pgtap.sql  '/u01/app/postgres/product/17/db_0/share/extension/'
/bin/sh /u01/app/postgres/product/17/db_0/lib/pgxs/src/makefiles/../../config/install-sh -c -m 644 .//doc/pgtap.mmd '/u01/app/postgres/product/17/db_0/share/doc/extension/'

As mentioned in the output of the previous command, we need to run some commands to be able to use pg_prove, which we are going to use later:

11:14:23 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] cpan TAP::Parser::SourceHandler::pgTAP
Loading internal logger. Log::Log4perl recommended for better logging

CPAN.pm requires configuration, but most of it can be done automatically.
If you answer 'no' below, you will enter an interactive dialog for each
configuration option instead.

Would you like to configure as much as possible automatically? [yes] yes

Warning: You do not have write permission for Perl library directories.

To install modules, you need to configure a local Perl library directory or
escalate your privileges.  CPAN can help you by bootstrapping the local::lib
module or by configuring itself to use 'sudo' (if available).  You may also
resolve this problem manually if you need to customize your setup.

What approach do you want?  (Choose 'local::lib', 'sudo' or 'manual')
 [local::lib] sudo
Fetching with HTTP::Tiny:
https://cpan.org/authors/01mailrc.txt.gz
Reading '/home/postgres/.cpan/sources/authors/01mailrc.txt.gz'
............................................................................DONE
Fetching with HTTP::Tiny:
https://cpan.org/modules/02packages.details.txt.gz
Reading '/home/postgres/.cpan/sources/modules/02packages.details.txt.gz'
  Database was generated on Wed, 02 Apr 2025 08:29:02 GMT
..............
  New CPAN.pm version (v2.38) available.
  [Currently running version is v2.33]
  You might want to try
    install CPAN
    reload cpan
  to both upgrade CPAN.pm and run the new version without leaving
  the current session.


..............................................................DONE
Fetching with HTTP::Tiny:
https://cpan.org/modules/03modlist.data.gz
Reading '/home/postgres/.cpan/sources/modules/03modlist.data.gz'
DONE
Writing /home/postgres/.cpan/Metadata
Running install for module 'TAP::Parser::SourceHandler::pgTAP'
Fetching with HTTP::Tiny:
https://cpan.org/authors/id/D/DW/DWHEELER/TAP-Parser-SourceHandler-pgTAP-3.37.tar.gz
Fetching with HTTP::Tiny:
https://cpan.org/authors/id/D/DW/DWHEELER/CHECKSUMS
Checksum for /home/postgres/.cpan/sources/authors/id/D/DW/DWHEELER/TAP-Parser-SourceHandler-pgTAP-3.37.tar.gz ok
'YAML' not installed, will not store persistent state
Configuring D/DW/DWHEELER/TAP-Parser-SourceHandler-pgTAP-3.37.tar.gz with Build.PL
Created MYMETA.yml and MYMETA.json
Creating new 'Build' script for 'TAP-Parser-SourceHandler-pgTAP' version '3.37'
  DWHEELER/TAP-Parser-SourceHandler-pgTAP-3.37.tar.gz
  /usr/bin/perl Build.PL --installdirs site -- OK
Running Build for D/DW/DWHEELER/TAP-Parser-SourceHandler-pgTAP-3.37.tar.gz
Building TAP-Parser-SourceHandler-pgTAP
  DWHEELER/TAP-Parser-SourceHandler-pgTAP-3.37.tar.gz
  ./Build -- OK
Running Build test for DWHEELER/TAP-Parser-SourceHandler-pgTAP-3.37.tar.gz
t/source_handler.t .. ok
All tests successful.
Files=1, Tests=47,  1 wallclock secs ( 0.03 usr  0.00 sys +  0.08 cusr  0.12 csys =  0.23 CPU)
Result: PASS
  DWHEELER/TAP-Parser-SourceHandler-pgTAP-3.37.tar.gz
  ./Build test -- OK
Running Build install for DWHEELER/TAP-Parser-SourceHandler-pgTAP-3.37.tar.gz
Building TAP-Parser-SourceHandler-pgTAP
Installing /usr/local/man/man1/pg_prove.1p
Installing /usr/local/man/man1/pg_tapgen.1p
Installing /usr/local/share/perl/5.36.0/TAP/Parser/SourceHandler/pgTAP.pm
Installing /usr/local/man/man3/TAP::Parser::SourceHandler::pgTAP.3pm
Installing /usr/local/bin/pg_tapgen
Installing /usr/local/bin/pg_prove
  DWHEELER/TAP-Parser-SourceHandler-pgTAP-3.37.tar.gz
  sudo ./Build install  -- OK
11:15:03 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] mkdir -p '/u01/app/postgres/product/17/db_0/share/extension'
11:15:05 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] mkdir -p '/u01/app/postgres/product/17/db_0/share/extension'
mkdir -p '/u01/app/postgres/product/17/db_0/share/doc/extension'
11:15:10 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] /bin/sh /u01/app/postgres/product/17/db_0/lib/pgxs/src/makefiles/../../config/install-sh -c -m 644 .//pgtap.control '/u01/app/postgres/product/17/db_0/share/extension/'
11:15:18 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] /bin/sh /u01/app/postgres/product/17/db_0/lib/pgxs/src/makefiles/../../config/install-sh -c -m 644 .//sql/pgtap--0.90.0--0.91.0.sql .//sql/pgtap--0.91.0--0.92.0.sql .//sql/pgtap--0.92.0--0.93.0.sql .//sql/pgtap--0.93.0--0.94.0.sql .//sql/pgtap--0.94.0--0.95.0.sql .//sql/pgtap--0.95.0--0.96.0.sql .//sql/pgtap--0.96.0--0.97.0.sql .//sql/pgtap--0.97.0--0.98.0.sql .//sql/pgtap--0.98.0--0.99.0.sql .//sql/pgtap--0.99.0--1.0.0.sql .//sql/pgtap--1.0.0--1.1.0.sql .//sql/pgtap--1.1.0--1.2.0.sql .//sql/pgtap--1.2.0--1.3.0.sql .//sql/pgtap--1.3.0--1.3.1.sql .//sql/pgtap--1.3.1--1.3.2.sql .//sql/pgtap--1.3.2--1.3.3.sql .//sql/pgtap--1.3.3.sql .//sql/pgtap--unpackaged--0.91.0.sql .//sql/pgtap-core--1.3.3.sql .//sql/pgtap-core.sql .//sql/pgtap-schema--1.3.3.sql .//sql/pgtap-schema.sql .//sql/pgtap.sql .//sql/uninstall_pgtap.sql  '/u01/app/postgres/product/17/db_0/share/extension/'
11:15:33 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] /bin/sh /u01/app/postgres/product/17/db_0/lib/pgxs/src/makefiles/../../config/install-sh -c -m 644 .//doc/pgtap.mmd '/u01/app/postgres/product/17/db_0/share/doc/extension/'
11:15:37 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] pg_prove
No tests named and 't' directory not found at /usr/share/perl/5.36/App/Prove.pm line 522.

11:15:42 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] pg_prove --version
pg_prove 3.37

You can check if pgTAP was installed properly using the following command:

10:24:09 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] make installcheck

GNUmake running against Postgres version 17.0, with pg_config located at /u01/app/postgres/product/17/db_0/bin

Makefile:186: To use pg_prove, TAP::Parser::SourceHandler::pgTAP Perl module
Makefile:187: must be installed from CPAN. To do so, simply run:
Makefile:188: cpan TAP::Parser::SourceHandler::pgTAP
Using 89 parallel test connections
Rebuilding test/build/all_tests
Schedule changed to test/build/parallel.sch
cp `cat test/build/which_schedule` test/build/run.sch
echo "# +++ regress install-check in  +++" && /u01/app/postgres/product/17/db_0/lib/pgxs/src/makefiles/../../src/test/regress/pg_regress --inputdir=./ --bindir='/u01/app/postgres/product/17/db_0/bin'    --inputdir=test --max-connections=89 --schedule test/schedule/main.sch   --schedule test/build/run.sch
# +++ regress install-check in  +++
# using postmaster on Unix socket, port 5432
ok 1         - build                                     369 ms
...
ok 4         - hastap                                   1309 ms
# parallel group (35 tests):  matching istap do_tap moretap util performs_ok performs_within todotap check cmpok pg73 runjusttests roletap throwtap trigger usergroup enumtap policy runtests runnotests proctap fktap privs inheritance partitions valueset functap resultset aretap extension ownership ruletap pktap index unique
ok 5         + aretap                                   5911 ms
ok 6         + check                                    1558 ms
...
ok 39        + valueset                                 3784 ms
1..39
# All 39 tests passed.

Once installed, enable it in your database:

10:25:06 postgres@ws-pgdev:/home/postgres/pgtap-1.3.3/ [sw] psql
psql (17.0 dbi services build)
Type "help" for help.

postgres=# \c hybrid
You are now connected to database "hybrid" as user "postgres".
hybrid=# CREATE EXTENSION pgtap;
CREATE EXTENSION
hybrid=# \dx
                                                        List of installed extensions
        Name        | Version |    Schema    |                                          Description
--------------------+---------+--------------+-----------------------------------------------------------------------------------------------
 btree_gist         | 1.7     | training_app | support for indexing common datatypes in GiST
 orafce             | 4.14    | training_app | Functions and operators that emulate a subset of functions and packages from the Oracle RDBMS
 pg_stat_statements | 1.11    | public       | track planning and execution statistics of all SQL statements executed
 pg_trgm            | 1.6     | training_app | text similarity measurement and index searching based on trigrams
 pgcrypto           | 1.3     | training_app | cryptographic functions
 pgtap              | 1.3.3   | training_app | Unit testing for PostgreSQL
 plperl             | 1.0     | pg_catalog   | PL/Perl procedural language
 plpgsql            | 1.0     | pg_catalog   | PL/pgSQL procedural language
(8 rows)
Writing Your First pgTAP Tests

pgTAP provides a wide range of assertions for testing various database objects. Let’s go through some examples.

1. Testing a Function

Assume we have a function that verifies a password based on a specific pattern:

CREATE OR REPLACE FUNCTION training_app.f_password_verify (pv_password TEXT) RETURNS BOOLEAN
AS $$
  SELECT pv_password ~ '^(?=.{10,}$)(?=.*[a-z])(?=.*[A-Z])(?=.*[0-9])(?=.*\W).*$';
$$ LANGUAGE sql;

hybrid=# \df f_password_verify
                                List of functions
    Schema    |       Name        | Result data type | Argument data types | Type
--------------+-------------------+------------------+---------------------+------
 training_app | f_password_verify | boolean          | pv_password text    | func
(1 row)
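
Before writing the tests, you can sanity-check the function directly in psql. A quick example based on the pattern above (expected results shown as comments):

-- Quick manual checks of the password verification function
SELECT training_app.f_password_verify('ValidPass1!');    -- expected: t (long enough, lower, upper, digit, special)
SELECT training_app.f_password_verify('short1!A');       -- expected: f (fewer than 10 characters)
SELECT training_app.f_password_verify('NoNumberPass!');  -- expected: f (no digit)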

To test this function with pgTAP:

hybrid=# SELECT plan(4);
 plan
------
 1..4
(1 row)

hybrid=# SELECT ok(training_app.f_password_verify('ValidPass1!'), 'Valid password should return true');
                    ok
------------------------------------------
 ok 1 - Valid password should return true
(1 row)

hybrid=# SELECT ok(NOT training_app.f_password_verify('short1!'), 'Too short password should return false');
                      ok
-----------------------------------------------
 ok 2 - Too short password should return false
(1 row)

hybrid=# SELECT ok(NOT training_app.f_password_verify('NoNumberPass!'), 'Password without a number should return false');
                          ok
------------------------------------------------------
 ok 3 - Password without a number should return false
(1 row)

hybrid=# SELECT ok(NOT training_app.f_password_verify('NoSpecialChar1'), 'Password without special character should return false');
                              ok
---------------------------------------------------------------
 ok 4 - Password without special character should return false
(1 row)

hybrid=# SELECT * FROM finish();
 finish
--------
(0 rows)

2. Testing Table Constraints

Consider the users table with the following schema:

hybrid=# \d users
                         Table "training_app.users"
  Column  |   Type    | Collation | Nullable |           Default
----------+-----------+-----------+----------+------------------------------
 id       | integer   |           | not null | generated always as identity
 username | text      |           | not null |
 password | text      |           | not null |
 created  | date      |           |          | now()
 validity | tstzrange |           |          |
Indexes:
    "users_pkey" PRIMARY KEY, btree (id)
    "i_username_trgm" gin (username gin_trgm_ops)
    "i_users_username" btree (username)
    "i_users_username_btree_partial" btree (created) WHERE created >= '2024-11-07'::date AND created < '2024-11-08'::date
Check constraints:
    "user_check_username" CHECK (username ~* '[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}'::text)
    "user_check_username_length" CHECK (length(username) <= 72)
Referenced by:
    TABLE "user_training" CONSTRAINT "fk_user_training_users" FOREIGN KEY (user_id) REFERENCES users(id)
    TABLE "users_history" CONSTRAINT "fk_users_history_user_id_users_id" FOREIGN KEY (user_id) REFERENCES users(id)
Policies:
    POLICY "policy_current_month" FOR SELECT
      TO role_app_read_only
      USING (((EXTRACT(month FROM created))::integer = (EXTRACT(month FROM now()))::integer))
Triggers:
    t_log_user_history BEFORE UPDATE ON users FOR EACH ROW EXECUTE FUNCTION tf_user_history()

create or replace function training_app.tf_user_history() returns trigger as
$$
begin
   insert into training_app.users_history ( user_id, username, password, created, validity)
                                  values ( old.id, old.username, old.password, old.created, old.validity);
   return new;
end;                            
$$ language plpgsql;

create trigger t_log_user_history
   before update on training_app.users
   for each row
   execute procedure training_app.tf_user_history();

To test the constraints, create a test file test_users_constraints.sql:

10:57:07 postgres@ws-pgdev:/u01/app/postgres/local/dmk/tests/ [sw] touch test_users_constraints.sql
10:57:37 postgres@ws-pgdev:/u01/app/postgres/local/dmk/tests/ [sw] cat test_users_constraints.sql
BEGIN;
SELECT plan(2);

-- Test CHECK constraint on username format
SELECT throws_like(
  $$INSERT INTO training_app.users (username, password) VALUES ('invalid_user', 'Password1!')$$,
  'new row for relation "users" violates check constraint "user_check_username"',
  'Invalid username should fail CHECK constraint'
);

-- Test CHECK constraint on username length
SELECT throws_like(
  $$INSERT INTO training_app.users (username, password) VALUES (repeat('a', 73), 'Password1!')$$,
  'new row for relation "users" violates check constraint "user_check_username"',
  'Username exceeding 72 characters should fail CHECK constraint'
);

SELECT * FROM finish();
ROLLBACK;
Running Tests

You can execute pgTAP tests using pg_prove, a command-line tool for running TAP tests. We are now going to test it with the file we just created, test_users_constraints.sql.

Run it with pg_prove:

11:45:14 postgres@ws-pgdev:/u01/app/postgres/local/dmk/tests/ [sw] pg_prove -d hybrid -U postgres -p 5432 test_users_constraints.sql
test_users_constraints.sql .. ok
All tests successful.
Files=1, Tests=2,  0 wallclock secs ( 0.05 usr  0.01 sys +  0.00 cusr  0.01 csys =  0.07 CPU)
Result: PASS
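
pg_prove also accepts several test files at once, which becomes handy as the number of tests grows. A minimal sketch, assuming all test files live in the directory used above:

pg_prove -d hybrid -U postgres -p 5432 *.sql
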
3. Testing Triggers

To verify that our trigger correctly logs changes to the users table, we check:

  • That the recorded historical data correctly reflects the old values before the update.
  • That an update on users triggers an insert into users_history.
13:43:55 postgres@ws-pgdev:/u01/app/postgres/local/dmk/tests/ [sw] cat test_trigger.sql
BEGIN;
SELECT plan(2);

-- Insert a test user with a valid email as username
INSERT INTO training_app.users (username, password) VALUES ('testuser@example.com', 'TestPassword123!');

-- Update the user's username (this should activate the trigger)
UPDATE training_app.users SET username = 'updateduser@example.com' WHERE username = 'testuser@example.com';

-- Check if the corresponding row is added to the users_history table
SELECT ok(
    (SELECT COUNT(*) FROM training_app.users_history WHERE user_id = (SELECT id FROM training_app.users WHERE username = 'updateduser@example.com')) > 0,
    'User history should be logged in users_history after update'
);

-- Check if the values in users_history match the old values (before the update)
SELECT is(
    (SELECT username FROM training_app.users_history WHERE user_id = (SELECT id FROM training_app.users WHERE username = 'updateduser@example.com') ORDER BY created DESC LIMIT 1),
    'testuser@example.com',
    'Username in user history should match the old (pre-update) value'
);

SELECT * FROM finish();
ROLLBACK;

Execute the test using:

13:58:38 postgres@ws-pgdev:/u01/app/postgres/local/dmk/tests/ [sw] pg_prove -d hybrid -U postgres -p 5432 test_trigger.sql
test_trigger.sql .. ok
All tests successful.
Files=1, Tests=2,  0 wallclock secs ( 0.04 usr  0.01 sys +  0.01 cusr  0.01 csys =  0.07 CPU)
Result: PASS
Benefits of Using pgTAP
  • Automated Testing: Helps maintain database integrity by catching errors early.
  • SQL-Based: No need for external scripting languages; tests are written in SQL.
  • Integration with CI/CD: Works with CI/CD pipelines to ensure database quality.
  • Comprehensive Assertions: Supports functions, constraints, indexes, views, and more.
Conclusion

pgTAP is a powerful tool for unit testing PostgreSQL databases. By incorporating it into your workflow, you can ensure that your database logic remains robust and reliable over time. Whether you’re testing functions, constraints, or triggers, pgTAP provides a structured and repeatable approach to database testing. You can find more information about pgTAP on the official website.
Do you use pgTAP in your projects? Let me know in the comments how it has helped you!

L’article Unit Testing PostgreSQL with pgTAP est apparu en premier sur dbi Blog.

Dctm – Oracle DB permissions on recent versions

Wed, 2025-08-27 13:16

If you have been running Documentum on Oracle in the past, you might have noticed that recent versions of Dctm require more database permissions to be able to perform an install/upgrade. This blog will just be a quick one to show which DB permissions I used to request for older versions of Documentum, as well as what happens if you don’t have the needed permissions on the latest versions. So, let’s get right into it.

First of all, when working on big environments/customers, it is pretty rare to get full access to the Database. Therefore, I’ve always created the tablespace and repository owner accounts upfront, with the needed permissions, before starting a new Documentum installation that would then use this dedicated account. If you aren’t in this case, then I guess this blog isn’t very relevant, since the Documentum installer can create the account with the needed permissions by itself (as you provide it with the DB SYSADMIN account). Please also note that using the Dctm installer will grant unlimited tablespace usage to the repository owner, which DBAs don’t really like (and it’s not really required if you do things properly…).

For as long as I can recall, these have been the permissions that I requested/granted on the repository owner account (<REPO_NAME> being the dm_dbo account name) for Oracle Databases:

GRANT CONNECT TO <REPO_NAME>;
GRANT RESOURCE TO <REPO_NAME>;
GRANT CREATE SESSION TO <REPO_NAME>;
GRANT ALTER SESSION TO <REPO_NAME>;
GRANT CREATE TABLE TO <REPO_NAME>;
GRANT CREATE CLUSTER TO <REPO_NAME>;
GRANT CREATE SYNONYM TO <REPO_NAME>;
GRANT CREATE VIEW TO <REPO_NAME>;
GRANT CREATE SEQUENCE TO <REPO_NAME>;
GRANT CREATE DATABASE LINK TO <REPO_NAME>;
GRANT CREATE PROCEDURE TO <REPO_NAME>;
GRANT CREATE TRIGGER TO <REPO_NAME>;
GRANT CREATE MATERIALIZED VIEW TO <REPO_NAME>;
GRANT CREATE JOB TO <REPO_NAME>;

The documentation does not really give you many details about the exact requirements, and the content of the doc itself hasn’t changed much (at all?) either. However, with the above list, you usually had enough permissions to install/run/upgrade Documentum as well as perform usual administrative tasks (like creating a database link to populate some custom tables with external data (ODBC?) or managing some archiving/cleanup).

With recent versions of Documentum though, if you try to perform an upgrade, for example from 20.x to 23.x, with only the above DB permissions, then the upgrade (or OOTB installation) will fail with the Repository not being able to start. The Repository logs would show something like:

[dmadmin@cs-0 ~]$ cat $DOCUMENTUM/dba/log/<REPO_NAME>.log

    OpenText Documentum Content Server (version 23.4.0000.0143  Linux64.Oracle)
    Copyright (c) 2023. OpenText Corporation
    All rights reserved.

2024-06-25T14:06:31.324462      13781[13781]    0000000000000000        [DM_SERVER_I_START_SERVER]info:  "Docbase <REPO_NAME> attempting to open"

2024-06-25T14:06:31.324558      13781[13781]    0000000000000000        [DM_SERVER_I_START_KEY_STORAGE_MODE]info:  "Docbase <REPO_NAME> is using database for cryptographic key storage"

2024-06-25T14:06:31.324581      13781[13781]    0000000000000000        [DM_SERVER_I_START_SERVER]info:  "Docbase <REPO_NAME> process identity: user(dmadmin)"

2024-06-25T14:06:31.701344      13781[13781]    0000000000000000        Database Session is shutting down due to (Pq)
2024-06-25T14:06:31.701461      13781[13781]    0000000000000000        [DM_SERVER_I_START_ID]info:  "Docbase <REPO_NAME> was not successfully opened. Docbase id: (1000001)"

[dmadmin@cs-0 ~]$

The above message “Database Session is shutting down due to (XX)” can be caused by missing/wrong DB details (like the username), but it can also be caused by missing permissions on the DB Repository owner account. The problem is that this can happen even if you made sure to execute the “$DM_HOME/bin/dmdbtest” utility just before the Repository install/upgrade. This utility is (or was) rather good at testing the DB connection and making sure that you have enough permissions. However, in this specific case, it doesn’t detect the issue and just completes successfully:

[dmadmin@cs-0 ~]$ ### Execution with initial DB permissions
[dmadmin@cs-0 ~]$ $DM_HOME/bin/dmdbtest -S<TNS_ALIAS> -U<USER_NAME> -P<PASSWORD>
Database successfully opened.
Test table successfully created.
Test view successfully created.
Test index successfully created.
Insert into table successfully done.
Index successfully dropped.
View successfully dropped.
Database case sensitivity test successfully past.
Table successfully dropped.
[dmadmin@cs-0 ~]$

Starting with Dctm 22.x (it was apparently around that time), some additional permissions are definitely required, but this utility either cannot check that, or it simply wasn’t updated for it (I let you form your own opinion on that matter ;)). To be able to install/upgrade your environment, you will now need these additional permissions (the first one apparently being the most important):

GRANT SELECT_CATALOG_ROLE TO <REPO_NAME>;
GRANT CREATE TYPE TO <REPO_NAME>;
GRANT CREATE INDEXTYPE TO <REPO_NAME>;
GRANT CREATE ANY INDEX TO <REPO_NAME>;

After a successful grant, if you would like to check the permissions of the DB account, you can execute these commands:

  • SELECT USERNAME, GRANTED_ROLE FROM USER_ROLE_PRIVS;
    • Should display the SELECT_CATALOG_ROLE role
  • SELECT USERNAME, PRIVILEGE FROM USER_SYS_PRIVS;
    • Should display the other three privileges

Note: yes, the documentation has been mentioning the Select Catalog Role as being required for a very long time, and it has been part of the Dctm binaries since Aug-2001 (24y ago…), but I don’t think it was really required so far: I have installed dozens (hundreds?) of Dctm environments without that role on Oracle (it’s a sensitive one) and never had any issue before, as far as I recall. But in recent versions, something changed, and we now need more permissions for certain.

In any case, after adding the missing permissions to the Repository owner account on the Database, if you trigger the install/upgrade again, it should work properly:

[dmadmin@cs-0 ~]$ cat $DOCUMENTUM/dba/log/<REPO_NAME>.log

    OpenText Documentum Content Server (version 23.4.0000.0143  Linux64.Oracle)
    Copyright (c) 2023. OpenText Corporation
    All rights reserved.

2024-06-25T15:20:28.223973      6372[6372]      0000000000000000        [DM_SERVER_I_START_SERVER]info:  "Docbase <REPO_NAME> attempting to open"

2024-06-25T15:20:28.224045      6372[6372]      0000000000000000        [DM_SERVER_I_START_KEY_STORAGE_MODE]info:  "Docbase <REPO_NAME> is using database for cryptographic key storage"

2024-06-25T15:20:28.224709      6372[6372]      0000000000000000        [DM_SERVER_I_START_SERVER]info:  "Docbase <REPO_NAME> process identity: user(dmadmin)"

2024-06-25T15:20:29.048972      6372[6372]      0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Post Upgrade Processing."

2024-06-25T15:20:29.050037      6372[6372]      0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Base Types."
...

L’article Dctm – Oracle DB permissions on recent versions est apparu en premier sur dbi Blog.

Automating Business with M-Files

Tue, 2025-08-26 11:20

There is no debate: an Enterprise Content Management system is a must-have nowadays.

Of course, nowadays it’s more than just a place to store documents. We need intelligent systems that help us manage content, improve collaboration and ensure compliance.

M-Files excels in this area, particularly with its Advanced Workflow and Version Control module. Let’s take a look at why!

Why are workflows essential?

Many companies manage documents using only a basic ‘draft, final version’ lifecycle. This approach does not address issues of version confusion, which ultimately slows down activities and introduces risks.

M-Files transforms this with workflow automation, turning document lifecycles into structured, rule-based processes.

Advanced Workflow and Version Control

M-Files offers the Compliance Kit, a suite of tools that extends the basic capabilities of the solution, including “Advanced Workflow and Version Control”.

This package includes several features, such as:

  • Change Management: to add a change request object to the Vault
  • Periodic Tasks: to manage recurring assignments
  • Advanced Notifications: to extend the notification capabilities
  • Training Management: to manage learning requirements and records
  • Hierarchy Manager: to trigger actions from related objects

But, for me, one of the most important is Version Control (and Version Control Lite), which provides the foundation for controlled documents.

What does this mean in practice?

Version control allows you to distinguish between working copies and published versions.

It means that documents being edited (working copies) follow their own workflow, with dedicated steps, transitions, approvals, permissions and visibility.

When the document reaches its final state, Version Control can automatically publish it as a final document.

A basic implementation can be something like:

Working copy workflow

Then, when the workflow reaches the “Effective” state, it generates an output document with its own specific workflow:

Released version
The benefits in real life

The creation and editing process happens in the background. Regular users cannot access or use documents that are not in their final state.

The last published document remains accessible and usable even if a new version is initiated.

Once a new version is published, the old version is moved to “superseded,” and the associated action can be archiving, deleting, or hiding it from normal users.

There are numerous typical use cases, but the most well-known are:

  • Template management
  • Procedures (SOPs)
  • Contracts
The Future of Document Management

M-Files offers a compelling toolkit for modernizing document management. The Advanced Workflow module adds structure and automation, and Version Control Lite provides clarity and control without unnecessary complexity.

These modules can be tailored to fit the unique business processes of any organization, whether it’s a mid-sized firm or a regulated enterprise, delivering real value with minimal disruption.

For any question feel free to contact us!

L’article Automating Business with M-Files est apparu en premier sur dbi Blog.

SQL-Server 2025: Vector Indexes & Semantic Search Performance

Wed, 2025-08-20 10:53

When performing search operations on large datasets, performance is a key consideration for ensuring that an application remains efficient and user-friendly. This consideration equally applies when working with embedded data. That’s why in this blog post we’ll explore the different possibilities for vector search and show how to leverage the latest features of SQL Server 2025 Preview to accelerate your semantic search workloads.

Exact and Approximate Search:

First, let’s talk about the different functions available in SQL-Server 2025 Preview and the concepts behind them.

If you read my last blog post, you may remember that we used the VECTOR_DISTANCE() function to search for black pants I might wear during my summer vacation. This function calculates the Exact Nearest Neighbors (ENN) for the vector of the search text. In practice, it calculates the vector distance between the given vector and all other vectors, then sorts the results and returns the top (K-) Nearest Neighbors (NN). In the context of OLTP systems and B-tree indexes, you can compare this process to an index scan operation with additional sorting and computation overhead. As you can already anticipate, calculating the Exact Nearest Neighbors can be quite resource-intensive depending on the amount of data. In the Microsoft documentation, the general recommendation is to perform an exact search when you have fewer than 50’000 vectors – noting that you can also prefilter the number of vectors using predicates.

When working with large vector datasets the VECTOR_SEARCH() function comes into play. Instead of scanning the entire dataset and calculating the distance for each vector, the VECTOR_SEARCH() function approximates the nearest neighbors (ANN) for a given vector using vector index structures. While ANN requires far fewer resources than exact K-NN, it comes with a trade-off in terms of accuracy.

This trade-off in accuracy is measured by the recall value. Recall represents the overlap between the approximate neighbors found and the true K-NN set. For example, if we search for the 10 nearest neighbors of a vector and ANN returns 7 results that are also in the exact K-NN set, the recall is 0.7 (7 / 10 = 0.7). The closer the recall is to 1, the more accurate the ANN results are.

In SQL-Server 2025 Preview, a vector index is required to perform ANN searches with the VECTOR_SEARCH() function. SQL-Server 2025 Preview uses the DiskANN algorithm to create these graph-based vector indexes. DiskANN is designed to leverage SSDs and minimal memory resources for storing and handling graph indexes, while still maintaining high accuracy with a recall of around 0.95.

Note: While the VECTOR_DISTANCE  function is available in SQL-Server 2025 preview and Azure SQL, the VECTOR_SEARCH function is currently only available in SQL-Server 2025 Preview.

You can find more information in Microsoft’s documentation.

Let’s take a practical look at the features:

First of all we have to enable some trace flags to be able to use the preview features:

	DBCC TRACEON(466, 474, 13981, -1);
	DBCC TRACESTATUS;

Then I’m creating a vector index on the “embedding” column of my “dbo.products” table. I’m using the DiskANN algorithm, which is currently the only one supported. Furthermore, I’m using the “cosine” metric for the similarity calculation. You can also use “Euclidean Distance” or “Dot Product” as similarity metrics.

CREATE VECTOR INDEX vec_idx ON [dbo].[products]([embedding]) 
WITH (METRIC = 'cosine', TYPE = 'diskann');
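
If you want to double-check that the index was created, a quick look at sys.indexes is enough (a minimal sketch; the type description reported for vector indexes may vary between preview builds):

-- List the indexes defined on dbo.products, including the new vector index
SELECT name, type_desc
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.products');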

Then we are ready to go and we can test the VECTOR_SEARCH function with the query below:

--Test Approximate Search

DECLARE @SemanticSearchText Nvarchar(max) = 'Im looking for black pants for men which I can wear during my summer vacation.'
DECLARE @qv VECTOR(1536) = AI_GENERATE_EMBEDDINGS(@SemanticSearchText USE MODEL OpenAITextEmbedding3Small);

SELECT
		product_id,
		product_name, 
		Description,
		price,
		s.season,
		g.gender
		FROM
			VECTOR_SEARCH(
				TABLE = [dbo].[products] AS p, 
				COLUMN = [embedding], 
				SIMILAR_TO = @qv, 
				METRIC = 'cosine', 
				TOP_N = 10
			) AS vs
		inner join dbo.season s on p.season_id = s.season_id
		inner join dbo.gender g on p.gender_id = g.gender_id
	ORDER BY vs.distance

As you can see, we get a pretty accurate result. Let’s compare the results of the exact search using the VECTOR_DISTANCE function and the approximate search using the VECTOR_SEARCH function with the code below:

DECLARE @SemanticSearchText Nvarchar(max) = 'Im looking for black pants for men which I can wear during my summer vacation.'
  
    DECLARE @qv VECTOR(1536) = AI_GENERATE_EMBEDDINGS(@SemanticSearchText USE MODEL OpenAITextEmbedding3Small);
 
 --Exact Search

    SELECT TOP(10)
		product_id as ENN_product_id,
		product_name as ENN_product_name, 
		Description as ENN_description,
		price as ENN_price,
		s.season as ENN_season,
		g.gender as ENN_gender,
		VECTOR_DISTANCE('cosine', @qv, embedding) AS distance 
	FROM dbo.products p
		inner join dbo.season s on p.season_id = s.season_id
		inner join dbo.gender g on p.gender_id = g.gender_id
		ORDER BY   distance;

--Approximate Search

	SELECT
		product_id as ANN_product_id,
		product_name as ANN_product_name, 
		Description as ANN_Description,
		price as ANN_price,
		s.season as ANN_season,
		g.gender as ANN_gender
		FROM
			VECTOR_SEARCH(
				TABLE = [dbo].[products] AS p, 
				COLUMN = [embedding], 
				SIMILAR_TO = @qv, 
				METRIC = 'cosine', 
				TOP_N = 10
			) AS vs
		inner join dbo.season s on p.season_id = s.season_id
		inner join dbo.gender g on p.gender_id = g.gender_id
	ORDER BY vs.distance

As you can see, we are getting a fairly accurate ANN result. This corresponds to a recall of 0.8, since only the products with IDs 1292 and 2167 are missing from the exact search result set:

Of course, in this example we only have 3’000 products in our database, so the savings in resource consumption when using ANN are not very significant. However, when processing much larger datasets, using ANN becomes highly relevant.

Limitations with Vector indexes – in SQL-Server 2025 Preview

It is interesting to know that tables with a vector index are read-only in the preview edition, so you can’t manipulate data at the moment:

	update dbo.products
	set price = price * 0.8
	where product_id = 13

In addition, to support a vector index, the table must have a clustered index defined on a single column that is a primary key with the integer data type. In the preview edition, vector indexes also do not support partitioning and are not replicated to subscribers when using replication. According to Microsoft, these limitations apply to the current preview edition of SQL Server 2025. I’m excited to see whether they will be removed in the GA release.
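
As a reminder of these requirements, here is a minimal sketch (hypothetical table name) of a layout that can carry a vector index in the preview: a single-column integer primary key backed by the clustered index, plus the VECTOR column to be indexed:

-- Hypothetical example: single-column INT primary key with a clustered index,
-- as currently required for tables carrying a vector index in the preview
CREATE TABLE dbo.products_demo
(
    product_id   INT IDENTITY(1,1) NOT NULL,
    product_name NVARCHAR(200)     NOT NULL,
    embedding    VECTOR(1536)      NULL,
    CONSTRAINT pk_products_demo PRIMARY KEY CLUSTERED (product_id)
);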

Let me know your thoughts in the comment section below and thanks for reading!

Hocine

L’article SQL-Server 2025: Vector Indexes & Semantic Search Performance est apparu en premier sur dbi Blog.

ZDM 21.5 Physical Online Migration failing during TEMP tablespace encryption

Tue, 2025-08-05 07:05

The ZDM 21.5 version brings some new functions, like now automatically encrypting the TEMP, SYSTEM, SYSAUX and UNDO tablespaces. There is no need to do it manually any more after the ZDM migration; ZDM will take care of it on its own. Small problem: this also introduces a new bug, because the temporary database needs to be restarted to drop the previous TEMP tablespace. Let’s see how to resolve the problem and move forward with the ZDM migration.

Problem description

After the switchover is run by ZDM, the TEMP, SYSTEM, SYSAUX and UNDO tablespaces are now encrypted by ZDM during the ZDM_POST_DATABASE_OPEN_TGT step. In the case described here, I was doing a dry run and not the final migration, so I manually ran a failover to the target environment to avoid any impact and downtime on the source database. Thus, after having paused the ZDM migration once the Data Guard configuration was set up (ZDM_CONFIGURE_DG_SRC step), I manually ran the failover and then moved forward with the ZDM migration using the -skip SWITCHOVER option. I have explained this in one of my previous ZDM blogs if you want more details.

I’m running the same kind of migration as with the 21.4 version: a non-CDB as source, including conversion to a PDB during the ZDM migration.

So, after the failover had been run manually, I resumed the ZDM migration until the end.

[exauser@zdm-host ~]$ zdmcli resume job -jobid 580 -skip SWITCHOVER
zdm-host.domain.com: Audit ID: 6963

As we can see, the ZDM job failed on the ZDM_POST_DATABASE_OPEN_TGT step.

[exauser@zdm-host ~]$ zdmcli query job -jobid 580
zdm-host.domain.com: Audit ID: 6966
Job ID: 580
User: exauser
Client: zdm-host
Job Type: "MIGRATE"
Scheduled job command: "zdmcli migrate database -sourcesid SRCSID -rsp /home/exauser/migration/zdm_SRCSID_physical_online.rsp -sourcenode dbsource-host -srcauth dbuser -srcarg1 user:oracle -srcarg2 identity_file:/home/exauser/.ssh/id_rsa -targetnode exacc-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/exauser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo -tdekeystorepasswd -tgttdekeystorepasswd -pauseafter ZDM_COPYFILES"
Scheduled job execution start time: 2025-06-11T17:09:40+02. Equivalent local time: 2025-06-11 17:09:40
Current status: FAILED
Result file path: "/u01/app/oracle/chkbase/scheduled/job-580-2025-06-11-17:10:06.log"
Metrics file path: "/u01/app/oracle/chkbase/scheduled/job-580-2025-06-11-17:10:06.json"
Job execution start time: 2025-06-11 17:10:07
Job execution end time: 2025-06-12 10:04:40
Job execution elapsed time: 2 hours 52 minutes 48 seconds
ZDM_GET_SRC_INFO .............. COMPLETED
ZDM_GET_TGT_INFO .............. COMPLETED
ZDM_PRECHECKS_SRC ............. COMPLETED
ZDM_PRECHECKS_TGT ............. COMPLETED
ZDM_SETUP_SRC ................. COMPLETED
ZDM_SETUP_TGT ................. COMPLETED
ZDM_PREUSERACTIONS ............ COMPLETED
ZDM_PREUSERACTIONS_TGT ........ COMPLETED
ZDM_VALIDATE_SRC .............. COMPLETED
ZDM_VALIDATE_TGT .............. COMPLETED
ZDM_DISCOVER_SRC .............. COMPLETED
ZDM_COPYFILES ................. COMPLETED
ZDM_PREPARE_TGT ............... COMPLETED
ZDM_SETUP_TDE_TGT ............. COMPLETED
ZDM_RESTORE_TGT ............... COMPLETED
ZDM_RECOVER_TGT ............... COMPLETED
ZDM_FINALIZE_TGT .............. COMPLETED
ZDM_CONFIGURE_DG_SRC .......... COMPLETED
ZDM_SWITCHOVER_SRC ............ COMPLETED
ZDM_SWITCHOVER_TGT ............ COMPLETED
ZDM_POST_DATABASE_OPEN_TGT .... FAILED
ZDM_DATAPATCH_TGT ............. PENDING
ZDM_NONCDBTOPDB_PRECHECK ...... PENDING
ZDM_NONCDBTOPDB_CONVERSION .... PENDING
ZDM_POST_MIGRATE_TGT .......... PENDING
ZDM_POSTUSERACTIONS ........... PENDING
ZDM_POSTUSERACTIONS_TGT ....... PENDING
ZDM_CLEANUP_SRC ............... PENDING
ZDM_CLEANUP_TGT ............... PENDING
[exauser@zdm-host ~]$

The ZDM log file shows that the problem comes from the TEMP tablespace encryption.

[exauser@zdm-host ~]$ tail -f /u01/app/oracle/chkbase/scheduled/job-580-2025-06-11-17:10:06.log
####################################################################
zdm-host: 2025-06-12T08:01:48.268Z : Resuming zero downtime migrate operation ...
zdm-host: 2025-06-12T08:01:52.847Z : Skipping phase ZDM_PRECHECKS_SRC on resume
zdm-host: 2025-06-12T08:01:53.255Z : Skipping phase ZDM_PRECHECKS_TGT on resume
zdm-host: 2025-06-12T08:01:53.273Z : Executing phase ZDM_SETUP_SRC
zdm-host: 2025-06-12T08:01:53.274Z : Setting up ZDM on the source node dbsource-host ...
dbsource-host: 2025-06-12T08:02:23.476Z : TNS aliases successfully setup on the source node source-hostname...
dbsource-host: 2025-06-12T08:02:34.101Z : successfully registered ZDM with the database SRCSID
zdm-host: 2025-06-12T08:02:34.106Z : Execution of phase ZDM_SETUP_SRC completed
zdm-host: 2025-06-12T08:02:34.123Z : Executing phase ZDM_SETUP_TGT
zdm-host: 2025-06-12T08:02:34.123Z : Setting up ZDM on the target node exacc-cl01n1 ...
exacc-cl01n1: 2025-06-12T08:03:00.739Z : TNS aliases successfully setup on the target node exacc-cl01n1...
zdm-host: 2025-06-12T08:03:00.742Z : Execution of phase ZDM_SETUP_TGT completed
zdm-host: 2025-06-12T08:03:00.754Z : Skipping phase ZDM_VALIDATE_SRC on resume
zdm-host: 2025-06-12T08:03:00.765Z : Skipping phase ZDM_VALIDATE_TGT on resume
zdm-host: 2025-06-12T08:03:00.779Z : Skipping phase ZDM_DISCOVER_SRC on resume
zdm-host: 2025-06-12T08:03:00.789Z : Skipping phase ZDM_COPYFILES on resume
zdm-host: 2025-06-12T08:03:00.802Z : Skipping phase ZDM_PREPARE_TGT on resume
zdm-host: 2025-06-12T08:03:00.813Z : Skipping phase ZDM_SETUP_TDE_TGT on resume
zdm-host: 2025-06-12T08:03:00.823Z : Skipping phase ZDM_RESTORE_TGT on resume
zdm-host: 2025-06-12T08:03:00.833Z : Skipping phase ZDM_RECOVER_TGT on resume
zdm-host: 2025-06-12T08:03:00.843Z : Skipping phase ZDM_FINALIZE_TGT on resume
zdm-host: 2025-06-12T08:03:00.844Z : resuming job execution from phase "ZDM_CONFIGURE_DG_SRC"
zdm-host: 2025-06-12T08:03:00.859Z : Skipping phase ZDM_CONFIGURE_DG_SRC on resume
zdm-host: 2025-06-12T08:03:00.880Z : Executing phase ZDM_SWITCHOVER_SRC
zdm-host: 2025-06-12T08:03:00.881Z : Switching database null on the source node dbsource-host to standby role ...
dbsource-host: 2025-06-12T08:03:11.403Z : Switchover actions in the source environment executed successfully
zdm-host: 2025-06-12T08:03:11.407Z : Execution of phase ZDM_SWITCHOVER_SRC completed
####################################################################
zdm-host: 2025-06-12T08:03:11.445Z : Executing phase ZDM_SWITCHOVER_TGT
zdm-host: 2025-06-12T08:03:11.445Z : Switching database SRCSID_CHZ1 on the target node exacc-cl01n1 to primary role ...
exacc-cl01n1: 2025-06-12T08:03:23.137Z : Switchover actions in the target environment executed successfully
zdm-host: 2025-06-12T08:03:23.241Z : Execution of phase ZDM_SWITCHOVER_TGT completed
####################################################################
zdm-host: 2025-06-12T08:03:23.279Z : Executing phase ZDM_POST_DATABASE_OPEN_TGT
zdm-host: 2025-06-12T08:03:23.280Z : Executing post database open actions at the target ...
exacc-cl01n1: 2025-06-12T08:03:34.667Z : Restoring pluggable database state ...
exacc-cl01n1: 2025-06-12T08:03:35.368Z : Creating SSO for keystore location /u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/TDE/tde/ ...
exacc-cl01n1: 2025-06-12T08:03:35.569Z : Restoring RMAN configuration ...
exacc-cl01n1: 2025-06-12T08:03:40.374Z : Encrypting unencrypted temp tablespaces...
####################################################################
zdm-host: 2025-06-12T08:04:40.845Z : Oracle ZDM ONLINE PHYSICAL migration failed
zdm-host: 2025-06-12T08:04:40.845Z : Failed at phase: ZDM_POST_DATABASE_OPEN_TGT
zdm-host: 2025-06-12T08:04:40.846Z : Job duration: 2 minutes and 56 seconds
PRGZ-3727 : Re-creation of temporary tablespace "TEMP" as encrypted for database "PDB001P" failed.

Troubleshooting

Let’s investigate the issue.

The ZDM step log, named zdm_post_database_open_tgt, shows exactly the issue and the commands that are expected to be run.

oracle@exacc-cl01n1:/u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/log/ [PDB001P1 (CDB$ROOT)] view zdm_post_database_open_tgt_7924.log
...
...
...
[jobid-580][2025-06-12T08:03:41Z][mZDM_Queries.pm:655]:[DEBUG] Will be running following sql statements as user: oracle:

        export ORACLE_HOME=/u02/app/oracle/product/19.0.0.0/dbhome_1,
        export ORACLE_SID=PDB001P1,
        /u02/app/oracle/product/19.0.0.0/dbhome_1/bin/sqlplus -L / as sys ******

        set pagesize 0 feedback off verify off heading off echo off tab off linesize 32767 trimspool on trimout on wrap off
CREATE BIGFILE TEMPORARY TABLESPACE TEMP_ENCRYPTED TEMPFILE ENCRYPTION ENCRYPT
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP_ENCRYPTED
DROP TABLESPACE TEMP INCLUDING CONTENTS
ALTER TABLESPACE TEMP_ENCRYPTED RENAME TO TEMP

[jobid-580][2025-06-12T08:03:41Z][mZDM_Utils.pm:3438]:[DEBUG] run_as_user2: Running /*******@/u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/out/sql/tmpsql_2025-06-12-08-03-41_7924.sql '
[jobid-580][2025-06-12T08:04:41Z][mZDM_Utils.pm:3460]:[DEBUG] Remove /tmp/xE6e9fcUzC
[jobid-580][2025-06-12T08:04:41Z][mZDM_Utils.pm:3468]:[DEBUG] /bin/su successfully executed

[jobid-580][2025-06-12T08:04:41Z][mZDM_Queries.pm:678]:[DEBUG] Output is :
 SQL*Plus: Release 19.0.0.0.0 - Production on Thu Jun 12 08:03:41 2025
 Version 19.26.0.0.0

 Copyright (c) 1982, 2024, Oracle.  All rights reserved.


 Connected to:
 Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
 Version 19.26.0.0.0

 DROP TABLESPACE TEMP INCLUDING CONTENTS
 *
 ERROR at line 1:
 ORA-60100: dropping temporary tablespace with tablespace ID number (tsn) 3 is blocked due to sort segments


 ALTER TABLESPACE TEMP_ENCRYPTED RENAME TO TEMP
 *
 ERROR at line 1:
 ORA-02154: a tablespace with the name 'TEMP' is found


 Disconnected from Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
 Version 19.26.0.0.0

[jobid-580][2025-06-12T08:04:41Z][mZDM_Queries.pm:11791]:[ERROR] SQL Error recreating temp tablespace TEMP as encrypted for database 'PDB001P'
[jobid-580][2025-06-12T08:04:41Z][mZDM_Utils.pm:4632]:[ERROR] Reporting  error :

The TEMP tablespace cannot be dropped; the database needs to be restarted.
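
Before restarting, you can confirm that sort segments are indeed still allocated in the old TEMP tablespace. A quick check, as a sketch, run as SYSDBA in the temporary database:

-- Check for sort segments still allocated in the TEMP tablespace
SELECT tablespace, segtype, COUNT(*) AS segments
FROM   v$tempseg_usage
GROUP  BY tablespace, segtype;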

Resolution

Let’s first create a pfile for the temporary non-CDB database that ZDM created for the migration. Reminder: this temporary database uses the name of the final PDB as its instance name and db_unique_name, and of course the same DB_NAME as our source database, which is mandatory when using primary and standby databases. I will create a pfile just in case…

I created a pfile from memory:

oracle@exacc-cl01n1:/u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/log/ [PDB001P1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Jun 12 10:14:20 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> create pfile='/u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/log/init_PDB001P1.ora.bck.before.restart' from memory;

File created.

And a pfile from spfile:

SQL> show parameter spfile

NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
spfile                               string                            /u02/app/oracle/product/19.0.0
                                                                       .0/dbhome_1/dbs/spfileAVQPZ_AP
                                                                       P_001P1.ora

SQL> create pfile='/u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/log/init_PDB001P1.ora.bck.before.restart.from.spfile' from spfile='/u02/app/oracle/product/19.0.0.0/dbhome_1/dbs/spfilePDB001P1.ora';

File created.

SQL>

I restarted the temporary non-CDB:

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 8577884400 bytes
Fixed Size                 13693168 bytes
Variable Size            3439329280 bytes
Database Buffers         5100273664 bytes
Redo Buffers               24588288 bytes
Database mounted.
ORA-28374: typed master key not found in wallet

The startup fails with ORA-28374. It would be the same if I started with the pfile, so some parameter was missing from the spfile.

Confirmed, there is no wallet_root instance parameter configured.

SQL> show parameter wallet

NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
ssl_wallet                           string
wallet_root                          string

And confirmed it was not in the pfile.

SQL> !grep -i wallet /u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/log/init_PDB001P1.ora.bck.before.restart

SQL>

We can easily find the directory where ZDM copied the wallet…

oracle@exacc-cl01n1:/u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/log/ [PDB001P1 (CDB$ROOT)] ls -ltrh /u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/TDE/tde
total 92K
-rw-r----- 1 oracle oinstall 2.5K Jun 27  2024 ewallet_2024062706422567.p12
-rw-r----- 1 oracle oinstall    0 Jun 27  2024 ewallet.p12.lck
-rw-r----- 1 oracle oinstall    0 Jun 27  2024 cwallet.sso.lck
-rw-r----- 1 oracle oinstall 4.0K Sep 19  2024 ewallet_2024091909561990_pre-refresh-AVQZZ_APP_001T.p12
-rw-r----- 1 oracle oinstall 5.2K Jan 16 10:54 ewallet_2025011609540081_pre-refresh-AVQZZ_APP_001T-20250116.p12
-rw-r----- 1 oracle oinstall 6.4K Feb 27 09:28 ewallet_2025022708284959_pre-refresh-AVQZZ_APP_001T-20250227.p12
-rw-r----- 1 oracle oinstall 7.7K Mar 31 14:10 ewallet_2025033112100549_pre-refresh-AVQZZ_APP_001T-20250331.p12
-rw-r----- 1 oracle oinstall 8.9K Jun 11 13:58 ewallet_2025061111581822_chg_password_to_prod.p12
-rw-r----- 1 oracle oinstall  11K Jun 11 14:00 ewallet.p12
-rw-r----- 1 oracle oinstall 8.9K Jun 11 14:00 ewallet_2025061112003957_pre-dryrun-SRCSID-to-prodAVAPCl.p12
-rw-r----- 1 oracle oinstall  11K Jun 11 14:00 cwallet.sso.back
-rw------- 1 oracle asmdba    11K Jun 12 10:03 cwallet.sso
oracle@exacc-cl01n1:/u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/log/ [PDB001P1 (CDB$ROOT)]

Let’s configure it!

oracle@exacc-cl01n1:/u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/log/ [PDB001P1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Jun 12 12:34:45 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> alter system set WALLET_ROOT='/u02/app/oracle/zdm/zdm_SRCSID_CHZ1_580/zdm/TDE' scope=spfile;

System altered.

Let’s restart the database again.

SQL> shutdown immediate
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 8577884400 bytes
Fixed Size                 13693168 bytes
Variable Size            3439329280 bytes
Database Buffers         5100273664 bytes
Redo Buffers               24588288 bytes
Database mounted.
ORA-28365: wallet is not open

Wallet is recognised but not opened.

Confirmed:

SQL> set lines 300
SQL> col WRL_PARAMETER for a30
SQL> col WRL_TYPE for a30
SQL> col WALLET_TYPE for a30
SQL> col status for a40
SQL> select WRL_PARAMETER, WRL_TYPE,WALLET_TYPE, status from V$ENCRYPTION_WALLET;

WRL_PARAMETER                  WRL_TYPE                       WALLET_TYPE                    STATUS
------------------------------ ------------------------------ ------------------------------ ----------------------------------------
                               FILE                           UNKNOWN                        NOT_AVAILABLE


The tde_configuration parameter is not configured.

SQL> show parameter tde_configuration

NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
tde_configuration                    string

Let’s configure it.

SQL> alter system set tde_configuration='keystore_configuration=FILE' scope=both;

System altered.

Checking, we can see that the wallet is now opened.

SQL> select WRL_PARAMETER, WRL_TYPE,WALLET_TYPE, status from V$ENCRYPTION_WALLET;

WRL_PARAMETER                    WRL_TYPE                       WALLET_TYPE                    STATUS
------------------------------   ------------------------------ ------------------------------ ----------------------------------------
/u02/app/oracle/zdm/zdm_SRCSID_C FILE                           AUTOLOGIN                      OPEN
HZ1_580/zdm/TDE/tde/

And we can open the database now.

SQL> alter database open;

Database altered.

SQL>

And we can complete the remaining commands for the TEMP tablespace encryption.

SQL> DROP TABLESPACE TEMP INCLUDING CONTENTS;

Tablespace dropped.

SQL> ALTER TABLESPACE TEMP_ENCRYPTED RENAME TO TEMP;

Tablespace altered.

SQL>

We can now resume the ZDM job. As the TEMP tablespace is now encrypted, ZDM will move forward with the next actions of the ZDM_POST_DATABASE_OPEN_TGT phase. This is good, because updating the ZDM XML metadata file to mark the phase as successful would have resulted in missing steps and actions.

Resume ZDM job again:

[exauser@zdm-host ~]$ zdmcli resume job -jobid 580 -skip SWITCHOVER
zdm-host.domain.com: Audit ID: 6969

And we can see that the ZDM migration is now completed successfully.

[exauser@zdm-host ~]$ zdmcli query job -jobid 580
zdm-host.domain.com: Audit ID: 6970
Job ID: 580
User: exauser
Client: zdm-host
Job Type: "MIGRATE"
Scheduled job command: "zdmcli migrate database -sourcesid SRCSID -rsp /home/exauser/migration/zdm_SRCSID_physical_online.rsp -sourcenode dbsource-host -srcauth dbuser -srcarg1 user:oracle -srcarg2 identity_file:/home/exauser/.ssh/id_rsa -targetnode exacc-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/exauser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo -tdekeystorepasswd -tgttdekeystorepasswd -pauseafter ZDM_COPYFILES"
Scheduled job execution start time: 2025-06-11T17:09:40+02. Equivalent local time: 2025-06-11 17:09:40
Current status: SUCCEEDED
Result file path: "/u01/app/oracle/chkbase/scheduled/job-580-2025-06-11-17:10:06.log"
Metrics file path: "/u01/app/oracle/chkbase/scheduled/job-580-2025-06-11-17:10:06.json"
Job execution start time: 2025-06-11 17:10:07
Job execution end time: 2025-06-12 16:02:54
Job execution elapsed time: 4 hours 37 minutes 44 seconds
ZDM_GET_SRC_INFO .............. COMPLETED
ZDM_GET_TGT_INFO .............. COMPLETED
ZDM_PRECHECKS_SRC ............. COMPLETED
ZDM_PRECHECKS_TGT ............. COMPLETED
ZDM_SETUP_SRC ................. COMPLETED
ZDM_SETUP_TGT ................. COMPLETED
ZDM_PREUSERACTIONS ............ COMPLETED
ZDM_PREUSERACTIONS_TGT ........ COMPLETED
ZDM_VALIDATE_SRC .............. COMPLETED
ZDM_VALIDATE_TGT .............. COMPLETED
ZDM_DISCOVER_SRC .............. COMPLETED
ZDM_COPYFILES ................. COMPLETED
ZDM_PREPARE_TGT ............... COMPLETED
ZDM_SETUP_TDE_TGT ............. COMPLETED
ZDM_RESTORE_TGT ............... COMPLETED
ZDM_RECOVER_TGT ............... COMPLETED
ZDM_FINALIZE_TGT .............. COMPLETED
ZDM_CONFIGURE_DG_SRC .......... COMPLETED
ZDM_SWITCHOVER_SRC ............ COMPLETED
ZDM_SWITCHOVER_TGT ............ COMPLETED
ZDM_POST_DATABASE_OPEN_TGT .... COMPLETED
ZDM_DATAPATCH_TGT ............. COMPLETED
ZDM_NONCDBTOPDB_PRECHECK ...... COMPLETED
ZDM_NONCDBTOPDB_CONVERSION .... COMPLETED
ZDM_POST_MIGRATE_TGT .......... COMPLETED
ZDM_POSTUSERACTIONS ........... COMPLETED
ZDM_POSTUSERACTIONS_TGT ....... COMPLETED
ZDM_CLEANUP_SRC ............... COMPLETED
ZDM_CLEANUP_TGT ............... COMPLETED
[exauser@zdm-host ~]$

In the migrated PDB we can see that all tablespaces are encrypted!

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 PDB001P                 READ WRITE NO

SQL> select count(*) from  cdb_tablespaces a, (select x.con_id, y.ENCRYPTIONALG, x.name from V$TABLESPACE x,  V$ENCRYPTED_TABLESPACES y
  2    where x.ts#=y.ts# and x.con_id=y.con_id) b where a.con_id=b.con_id(+) and a.tablespace_name=b.name(+) and b.ENCRYPTIONALG='AES128';

  COUNT(*)
----------
      2803

SQL> select count(*) from  cdb_tablespaces a, (select x.con_id, y.ENCRYPTIONALG, x.name from V$TABLESPACE x,  V$ENCRYPTED_TABLESPACES y
  2    where x.ts#=y.ts# and x.con_id=y.con_id) b where a.con_id=b.con_id(+) and a.tablespace_name=b.name(+) and b.ENCRYPTIONALG is null;

  COUNT(*)
----------
         0

SQL> select a.con_id, a.tablespace_name, nvl(b.ENCRYPTIONALG,'NOT ENCRYPTED') from  cdb_tablespaces a, (select x.con_id, y.ENCRYPTIONALG, x.name from V$TABLESPACE x,  V$ENCRYPTED_TABLESPACES y
  2    where x.ts#=y.ts# and x.con_id=y.con_id) b where a.con_id=b.con_id(+) and a.tablespace_name=b.name(+) and b.ENCRYPTIONALG is null;

no rows selected

SQL>

To wrap up…

As explained in my previous blog, the ZDM 21.5 version brings some corrections and new functionalities, but also a new bug that I could easily mitigate.

L’article ZDM 21.5 Physical Online Migration failing during TEMP tablespace encryption est apparu en premier sur dbi Blog.

Oracle ZDM 21.5 new features

Tue, 2025-08-05 04:10

I have already written several articles about my experience migrating databases with Oracle Zero Downtime Migration. At that time, the version available was 21.4. The current version is now 21.5. Recently, I had the opportunity to patch our customer’s ZDM environment to 21.5 and to run new migrations with this new version. In this blog, I would like to share what new features the 21.5 version brings and also what I could observe running the same kind of migration with the new version.

New 21.5 features for Physical Migration
Inflight Upgrades

We know that until 21.4, the physical migration workflow required source and target databases to be in the same database major release. This is of course due to Data Guard constraints.

With 21.5, the Data Guard automation evolves into a migration methodology. There is no database release constraint any more. There is of course no change in the way Data Guard works, but the ZDM Physical Migration workflow now integrates and takes into account in-flight upgrades. It is now possible to have a source database in version 11.2.0.4 or 12c migrated with the physical online workflow to 19c, or even from a 19c source to 23ai.

In case the source is a multitenant database, the temporary target database will be created in the same version as the source database. After the switchover to this temporary database, ZDM will upgrade it using Oracle tools.

In case the source is a non-multitenant database, ZDM will perform the migration using a non-CDB temporary database and run a switchover to it, as I explained in my previous articles. Then ZDM will use AutoUpgrade to upgrade the database to the desired version and convert it to multitenant in the target CDB.

Cloud Native Disaster Recovery automation

Until 21.4, we had to manually create the Data Guard configuration in the cloud once the migration was performed. Version 21.5 now integrates a function to create the Data Guard configuration in the target environment with ZDM once the migration is done. Customers can then benefit from a Cloud Native DR architecture, having a standby database in the cloud.

New 21.5 features for Logical Migration
Oracle Autonomous Database as source

Oracle Autonomous Database is now supported as a source database for migration. Migration from Autonomous to Autonomous is now possible. This allows Autonomous Database moves across tiers, serverless and dedicated Exadata Infrastructure.

GoldenGate Enhancements

There are several enhancements for logical online migration, now supporting the following GoldenGate functions:

  • Integrated and non-integrated replication mode
  • Audit trail import
  • Large Transaction split
  • Pre-checks for ggadmin
  • Feature groups, Constraint Handling, Concurrent Migrations
  • User-specified GoldenGate Schema
Data Pump Enhancements

There are also further enhancements integrating the following Data Pump functions:

  • Dump file retention and reuse
  • Advanced Queue Objects for post-import reload
Others…

There are some other new enhancements for the logical workflow, like:

  • OCI File Storage Service (FSS) as data transfer medium with Oracle Autonomous Database as target
  • Automated refresh of Materialized Views if specified.
Offline Hybrid Migration

Version 21.5 brings a new migration method, the Offline Hybrid Migration using:

  • RMAN transportable tablespaces for Data Migration
  • Data Pump Export/Import for metadata

This method requires NFS as backup location and allows cross-endian and cross-version migration.

The supported targets are:

  • Oracle Base Database Services (BaseDB)
  • Oracle Exadata Database Service on Dedicated Infrastructure (ExaDB-D)
  • Oracle Exadata Database Service on Dedicated Infrastructure (ExaDB-D) on Oracle Database@Azure
  • Oracle Exadata database Service on Cloud@Customer (ExaDB-C@C)
  • Oracle Exadata On-premises
My testing feedback

Running the same migration method, I could see the following enhancements from 21.4 to 21.5:

  • Bug corrections, like the switchover issue (described in a previous blog)
  • TEMP, SYSTEM, SYSAUX and UNDO tablespaces are now automatically encrypted by ZDM. No need to do it manually after the migration.

On the other side, I found a new bug during the TEMP tablespace encryption. The problem is that the temporary database needs to be restarted to complete the deletion of the previous TEMP tablespace. I could easily resolve the problem manually and complete the ZDM migration. I will describe this workaround in a further blog.

L’article Oracle ZDM 21.5 new features est apparu en premier sur dbi Blog.

Customer case study – Partition switching and transactional replication

Thu, 2025-07-31 14:02

The client’s environment includes two SQL Server instances (I1 and I2). One of the databases, D1 (on I1), is replicated to I2 via transactional replication. Additionally, the tables in this database are partitioned. The largest table (which is the one of interest) is approximately 350 GB in size and contains around 800 million rows. This table represents transactions.

The client has submitted the following requirements:

  • They want to archive data with minimal impact on data availability (during the archiving process)
  • They want to be able to retain the archived data
  • They want to be able to make the archived data accessible or make it visible again

The following elements also need to be taken into account:

  • Transactional replication is enabled on this table
  • This table is partitioned
What does the client’s environment look like?

The client’s environment consists of the following elements:

  • A SQL Server 2019 Standard Edition instance (Server 1) – Publisher
  • A SQL Server 2019 Standard Edition instance (Server 2) – Subscriber
  • A database composed of several tables, including one table that represents transactions
  • Transactional replication is used to replicate certain tables between the two SQL Server instances

The table we are interested in consists of several columns and is partitioned by quarter (based on the Transaction Date column).

What are the possible solutions?

One possible solution for archiving data (while keeping it easily accessible) is partition switching. It is indeed possible to “move” data from one partition to another. To do this, an identical table must be created. In our case, it must have the same number of partitions.

We will therefore need to:

  • Create a “copy” or “archive” table that will contain the archived data from the transactions table
  • Since we are using transactional replication, the ALTER SWITCH command must be accounted for and replicated in such a way that the subscriber also executes the ALTER SWITCH command

The command looks like this:

ALTER TABLE transactions SWITCH PARTITION 2 TO transactions_archive PARTITION 2;

There are many advantages:

  • Data “movement” using the ALTER SWITCH command is almost instantaneous, even when millions of rows are involved
  • The command is simple to implement
  • A rollback is possible, meaning data can be moved from one partition to another, or from one table to another. The “movement” can occur in both directions:
    • From the transactions table to the archive_transactions table
    • From the archive_transactions table back to the transactions table
  • Data remains easily accessible since it is only moved to another table
What does it look like?

The data from the selected partition is “migrated” to the same partition in the destination table.
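
As an illustration, here is a minimal sketch (table names and partition number taken from the example above) to check the row counts per partition after a switch, and to reverse the switch if needed:

-- Row counts per partition for both tables, before/after the switch
SELECT t.name AS table_name, p.partition_number, p.rows
FROM sys.partitions p
JOIN sys.tables t ON t.object_id = p.object_id
WHERE t.name IN ('transactions', 'transactions_archive')
  AND p.index_id IN (0, 1)   -- heap or clustered index only
ORDER BY t.name, p.partition_number;

-- Rollback: move the data back from the archive table to the source table
ALTER TABLE transactions_archive SWITCH PARTITION 2 TO transactions PARTITION 2;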

Problems that arise:

By default, it is not possible to perform partition switches when transactional replication is being used. You can verify this using the following command:

exec sp_helppublication @publication = N'Publication_MyTransactions';

It is therefore necessary to allow the publication to handle “ALTER SWITCH” commands so that the partition switch is replicated to the subscriber. This is done using the following command:

exec sp_changepublication
	@publication = N'Publication_MyTransactions',
	@property = N'allow_partition_switch',
	@value = N'true'
go

exec sp_changepublication
	@publication = N'Publication_MyTransactions',
	@property = N'replicate_partition_switch',
	@value = N'true'
go

In our client’s case, the indexes were not aligned. In other words, they were not partitioned. When this happens, it is not possible to perform a partition switch. More generally, the source and destination tables must have exactly the same structure.

From there, we have two issues:

  • The unaligned indexes need to be rebuilt (and partitioned)
  • The new table (transactions_archive) needs to be added to the publication

Once these two actions are completed, a new snapshot must be generated. When this occurs, all articles are taken into account, and the snapshot captures all of them. The issue is that our client’s publication includes several dozen tables with a total volume exceeding 1 TB.

To include only the modified articles (in our specific case), the transactions table must be removed from the publication, the necessary changes must be made, and then the table should be added back (along with the transactions_archive table). Some publication properties must also be modified. This is done using the commands below:

exec sp_helppublication @publication = N'Publication_MyTransactions'
go

exec sp_changepublication 
	@publication = N'Publication_MyTransactions', 
	@property = N'allow_anonymous', 
	@value = 'false'
go

exec sp_changepublication 
	@publication = N'Publication_MyTransactions', 
	@property = N'immediate_sync', 
	@value = 'false'
go

Here are the different steps:

  • Enable partition switching since we are using transactional replication
  • Generate a snapshot only for the modified articles
  • Remove the transactions article from replication
  • Rebuild the non-partitioned indexes on the transactions table
  • Create the transactions_archive table with the same structure as the transactions table
  • Add the transactions and transactions_archive articles to the publication
    • When adding them, consider the clustered and nonclustered indexes as well as the partitioning
  • Start the snapshot agent
  • Execute a partition switch and verify the result
  • Plan for a rollback of the allow_anonymous and immediate_sync properties
Implementation

We previously mentioned that the transactions table already existed and was being replicated (via transactional replication).

We therefore have our database (MyTransactions) with several filegroups (to simplify, we are only creating a few filegroups that will contain the data for 2024):

use master
go

declare @NumPart int = 1, @NumPartStr varchar(2), @SQLCmd nvarchar(max) = ''
while (@NumPart <= 4)
begin
	set @NumPartStr = RIGHT('0'+CAST(@NumPart as varchar(2)), 2)
	set @SQLCmd = @SQLCmd+'alter database MyTransactions add filegroup FG2024Q'+@NumPartStr+'; '
	set @SQLCmd = @SQLCmd+'alter database MyTransactions add file (name = F2024Q'+@NumPartStr+', filename = ''D:\Data\F2024Q'+@NumPartStr+'.mdf'') to filegroup FG2024Q'+@NumPartStr+';'
	set @NumPart = @NumPart+1
end
print @SQLCmd
exec(@SQLCmd)

We create our partition function:

use MyTransactions 
go

create partition function FunctionPartition_Transactions(DATE)
as range right for values ('20240101','20240401','20240701','20241001')
go

We create our partition scheme:

use MyTransactions 
go

create partition scheme SchemePartition_Transactions as partition FunctionPartition_Transactions
to ([Primary], FG2024Q01, FG2024Q02, FG2024Q03, FG2024Q04)
go

We then create our transactions table:

use MyTransactions 
go

create table transactions
(
	transactions_id int identity(1,1) not null,
	transactions_label nvarchar(50) not null,
	transactions_amount int not null,
	transactions_date date not null
) on SchemePartition_Transactions(transactions_date)

alter table transactions add constraint pk_transactions primary key clustered(transactions_id, transactions_date)
go

create nonclustered index ix_transactions_label_date on transactions(transactions_label) include (transactions_date) on [PRIMARY]
go

We notice that the clustered index is partitioned:

However, the nonclustered index ix_transactions_label_date is not partitioned. Indeed, it was not created using the clause: ON SchemePartition_Transactions(transactions_date).
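One quick way to verify index alignment is to check which data space each index of the table is created on; a hedged sketch (aligned indexes point to a partition scheme, non-aligned ones to a regular filegroup):

select 
	i.name as index_name, 
	ds.name as data_space_name, 
	ds.type_desc as data_space_type
		from sys.indexes i
			join sys.data_spaces ds
				on ds.data_space_id = i.data_space_id
	where i.object_id = object_id('transactions')
	order by i.index_id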

The table looks something like this:

We also mentioned that this table was replicated:

We now need to create our archive table, which will be an identical copy of the source table (transactions):

create table transactions_archive
(
	transactions_id int identity(1,1) not null,
	transactions_label nvarchar(50) not null,
	transactions_amount int not null,
	transactions_date date not null
) on SchemePartition_Transactions(transactions_date)

alter table transactions_archive add constraint pk_transactions_archive primary key clustered(transactions_id, transactions_date)
go

create nonclustered index ix_transactions_label_date on transactions_archive(transactions_label) include (transactions_date) on SchemePartition_Transactions(transactions_date)
go

This table is partitioned, as are its indexes (clustered and nonclustered). However, this is not the case for the transactions table, and this will cause issues when executing ALTER SWITCH commands to perform partition switches.

We then check what the partitioning of our tables looks like:

select 
	OBJECT_NAME(p.object_id) as obj_name, 
	f.name, 
	p.partition_number, 
	p.rows
		from sys.system_internals_allocation_units a
			join sys.partitions p
				on p.partition_id = a.container_id
			join sys.filegroups f on a.filegroup_id = f.data_space_id
	
	where p.object_id in (object_id('transactions'), object_id('transactions_archive'))
	order by obj_name, p.index_id, p.partition_number

We now need to perform the following operations:

  • Ensure that the nonclustered index ix_transactions_label_date on the transactions table is partitioned. To do this, it must be recreated. However, this table is replicated, and explicit index creation and deletion commands are not replicated. This is why we need to modify certain publication properties (allow_anonymous and immediate_sync), remove the article from the publication, partition the nonclustered index, and then add the article back to the publication.

Reference: https://learn.microsoft.com/en-us/previous-versions/sql/sql-server-2012/ms151870(v=sql.110)?redirectedfrom=MSDN

We have the following steps:

  • Remove the transactions article from the publication
  • Modify the following properties: allow_anonymous and immediate_sync
  • (Re)create and partition the nonclustered index
  • Modify certain replication properties to allow SQL commands like ALTER SWITCH
  • Add the transactions and transactions_archive articles to the publication

First, we remove the transactions article from the publication (note that in our client’s case, several dozen tables were replicated).
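In T-SQL, this could look like the sketch below (publication, article and subscriber names are assumptions from this demo; existing subscriptions on the article have to be dropped before the article itself):

exec sp_dropsubscription 
	@publication = N'Publication_MyTransactions', 
	@article = N'transactions', 
	@subscriber = N'all'
go

exec sp_droparticle 
	@publication = N'Publication_MyTransactions', 
	@article = N'transactions', 
	@force_invalidate_snapshot = 1
go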

Second, we only want to generate a snapshot for the modified objects (transactions and transactions_archive).

exec sp_changepublication 
	@publication = N'Publication_MyTransactions', 
	@property = N'allow_anonymous', 
	@value = 'false'
go

exec sp_changepublication 
	@publication = N'Publication_MyTransactions', 
	@property = N'immediate_sync', 
	@value = 'false'
go

We (re)create the nonclustered index on the transactions table in order to partition it:

drop index ix_transactions_label_date on [dbo].[transactions]
go

create nonclustered index ix_transactions_label_date on transactions(transactions_label) on SchemePartition_Transactions(transactions_date)
go

Our index is now partitioned:

We then need to modify certain publication properties to allow ALTER SWITCH commands:

exec sp_helppublication @publication = N'Publication_MyTransactions'
go

exec sp_changepublication
	@publication = N'Publication_MyTransactions',
	@property = N'allow_partition_switch',
	@value = N'true'
go

exec sp_changepublication
	@publication = N'Publication_MyTransactions',
	@property = N'replicate_partition_switch',
	@value = N'true'
go

exec sp_helppublication @publication = N'Publication_MyTransactions'
go

The article can be added easily via (for example) the graphical interface:

We take partitioning into account in the following way:

It is very important to modify the properties of only these two articles and not those of already published articles (if any). Modifying the properties of already published articles would invalidate the existing snapshot and force SQL Server to generate a snapshot of all articles.

The above modification was made to the two articles (transactions and transactions_archive).
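For reference, adding an article can also be scripted with sp_addarticle instead of the GUI; a minimal sketch for the archive table (names are those of this demo, and the partitioning-related options selected in the GUI above must be reflected in the article's @schema_option, see the sp_addarticle documentation for the exact bits):

exec sp_addarticle 
	@publication = N'Publication_MyTransactions', 
	@article = N'transactions_archive', 
	@source_owner = N'dbo', 
	@source_object = N'transactions_archive', 
	@destination_table = N'transactions_archive'
go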

Once these steps are completed, we need to generate a snapshot via the corresponding job:

The corresponding objects are taken into account:

Once these operations are completed, we can see that our objects have been replicated and that the elements below have been taken into account:

  • Partitioned nonclustered index
  • Partition scheme
  • Partition function
Replicated partition switch

We now want to perform partition switches and verify that these commands are replicated.

Here is the corresponding code to run on the publisher (I am moving my data from partition 2 of my transactions table to partition 2 of my transactions_archive table):

alter table transactions switch partition 2 to transactions_archive partition 2

Before (from the publisher):

After (from the publisher):

From the subscriber:

Here is the related code:

:connect TR-Pub-1\INSTANCE2019

use MyTransactions
go

alter table transactions switch partition 2 to transactions_archive partition 2
go

select 
	OBJECT_NAME(p.object_id) as obj_name, 
	f.name, 
	p.partition_number, 
	p.rows
		from sys.system_internals_allocation_units a
			join sys.partitions p
				on p.partition_id = a.container_id
			join sys.filegroups f on a.filegroup_id = f.data_space_id
	
	where p.object_id in (object_id('transactions'), object_id('transactions_archive'))
	order by obj_name, p.index_id, p.partition_number
go

:connect TR-Sub-1\INSTANCE2019

use MyTransactions
go

select 
	OBJECT_NAME(p.object_id) as obj_name, 
	f.name, 
	p.partition_number, 
	p.rows
		from sys.system_internals_allocation_units a
			join sys.partitions p
				on p.partition_id = a.container_id
			join sys.filegroups f on a.filegroup_id = f.data_space_id
	
	where p.object_id in (object_id('transactions'), object_id('transactions_archive'))
	order by obj_name, p.index_id, p.partition_number
go

We then need to roll back the modification of the following properties: allow_anonymous and immediate_sync.

exec sp_changepublication 
	@publication = N'Publication_MyTransactions', 
	@property = N'immediate_sync', 
	@value = 'true'
go

exec sp_changepublication 
	@publication = N'Publication_MyTransactions', 
	@property = N'allow_anonymous', 
	@value = 'true'
go

exec sp_helppublication @publication = N'Publication_MyTransactions'
go

Thank you. Amine Haloui.

The article Customer case study – Partition switching and transactional replication first appeared on dbi Blog.

Automatisation for Oracle ZDM installation and update

Thu, 2025-07-31 01:31

Let’s have a look at how we can install and update the Oracle Zero Downtime Migration tool, and see how we can automatise it with Ansible…

Check ZDM build

Let’s first see how we can check the ZDM version using the ZDM CLI.

[zdm@zdmhost ~]$ zdmcli -v
RHP_PT.ZDM21_LINUX.X64_221207.30

[zdm@zdmhost ~]$ zdmcli -build
version: 21.0.0.0.0
full version: 21.4.0.0.0
patch version: 21.4.5.0.0
label date: 221207.30
ZDM kit build date: Mar 21 2024 22:07:12 UTC
CPAT build version: 24.6.0
[exauser@sva-oelexa501 ~]$

The installed version is 21.4, and we will see how to update it to the latest version, which is 21.5.

Manual installation and update

Manual installation and update are quite easy, as both are done through the zdminstall.sh script. The option passed to the script determines whether it is a first installation or an update: for a first installation the option is setup, and for an update it is update.
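As a reference, a minimal sketch of the manual calls, using the same layout as the playbook below (paths and zip location are assumptions from this setup):

# First installation
./zdminstall.sh setup oraclehome=/u01/app/oracle/product/zdm oraclebase=/u01/app/oracle \
  ziploc=/u01/app/zdm_download_dir/zdm21.5/zdm_home.zip -zdm

# Update of an existing installation
./zdminstall.sh update oraclehome=/u01/app/oracle/product/zdm oraclebase=/u01/app/oracle \
  ziploc=/u01/app/zdm_download_dir/zdm21.5/zdm_home.zip -zdm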

Automatisation of the installation and update of ZDM

The playbook used to automatise the installation and the update, in my case named deploy_zdm.yml, is composed of the following tasks.

We start by defining the playbook and the variables that will be used in the tasks to install or update ZDM to the current latest version, which is 21.5. The zip file to unarchive is stored in the ../oracle_swfiles directory.

---
# Playbook for deploying ZDM host
# Marc Wagner - dbi
# Date : 06.10.2023
# 11.06.2025 : stop service + new 21.5 version
- name: Deploy ZDM
  hosts: "zdmhost"
  vars:
    sfw_folder: "../oracle_swfiles"


    zdm_base: "/u01/app/oracle"
    zdm_install_dir: "zdm21.5"
    zdm_download_dir: "/u01/app/zdm_download_dir"
    zdm_archive: "{{ sfw_folder }}/V1045330-01.zip"

    # ZDM update is an in-place process
    zdm_home: "/u01/app/oracle/product/zdm"
  environment:
    HTTP_PROXY: ""
    HTTPS_PROXY: ""

  tasks:


Then we will have all the tasks performing the installation.

The first task asserts that the deploy_option variable has been provided as an extra argument to the playbook.

- name: Assert extra-var has been passed as argument to playbook
  ansible.builtin.assert:
    that:
      - deploy_option is defined
    quiet: false
    fail_msg: "Please provide --extra-var deploy_option="
    success_msg: "deploy_option={{ deploy_option }}"

The next task will stop the ZDM service and will only run if the following file exists: /u01/app/oracle/product/zdm/bin/zdmservice.

This check ensures the task only runs if the ZDM tool is already installed.

- name: Stop ZDM service
  ansible.builtin.shell: "{{ zdm_home }}/bin/zdmservice stop"
  args:
    executable: "/bin/bash"
    # removes: only run this task if the file exists, i.e. ZDM is already installed
    removes: "/u01/app/oracle/product/zdm/bin/zdmservice"

The next task will install some prerequisites if they are not already installed.

- name: Install ZDM software prerequisites
  become: true
  become_user: root
  ansible.builtin.dnf:
    name: "{{ item }}"
  loop:
    - perl
    - expect
    - libaio
    - glibc-devel
    - unzip
    - libnsl
    - ncurses-compat-libs
    - oraclelinux-developer-release-el8

The next task will create all needed directories, such as the ZDM base, the ZDM home and the directory used to store the installation archive file.

- name: Create directories for ZDM tool
  become: true
  become_user: root
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    owner: exauser
    group: exauser
    mode: '755'
  loop:
    - "{{ zdm_base }}"
    - "{{ zdm_home }}"
    - "{{ zdm_download_dir }}"

The next task will unarchive the installation zip file.

- name: Unarchive ZDM
  ansible.builtin.unarchive:
    src: "{{ zdm_archive }}"
    dest: "{{ zdm_download_dir }}"

And the next task will finally install or update the ZDM tool according to the option given to the playbook.

- name: Install or update ZDM
  ansible.builtin.shell:
    cmd: |
      {{ zdm_download_dir }}/{{ zdm_install_dir }}/zdminstall.sh \
        {{ deploy_option }} oraclehome={{ zdm_home }} oraclebase={{ zdm_base }} \
        ziploc={{ zdm_download_dir }}/{{ zdm_install_dir }}/zdm_home.zip -zdm
  args:
    executable: "/bin/bash"

And with the last task we can finally start the ZDM service.

- name: Start ZDM service
  ansible.builtin.shell: "{{ zdm_home }}/bin/zdmservice start"
  args:
    executable: "/bin/bash"

Run the playbook

As a prerequisite, let’s first check that the appropriate ZDM installation zip file is in the expected Ansible folder.

[myuser@domain.com@admin-host zdm_ansible]$ ls -ltrh ./oracle_swfiles/
total 874M
-rw-r--r--. 1 myuser@domain.com myuser@domain.com 871M Jun 11 10:12 V1045330-01.zip
[myuser@domain.com@admin-host zdm_ansible]$

We can check that the playbook fails as expected if the --extra-vars argument with deploy_option is not provided.

[myuser@domain.com@admin-host zdm_ansible]$ ansible-playbook ./playbooks/deploy_zdm.yml

PLAY [Deploy ZDM] ***********************************************************************************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************************************************************************************************************************************************
[WARNING]: Platform linux on host zdmhost is using the discovered Python interpreter at /usr/bin/python3, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible-
core/2.15/reference_appendices/interpreter_discovery.html for more information.
ok: [zdmhost]

TASK [Assert extra-var has been passed as argument to playbook] *************************************************************************************************************************************************************************************************************************
fatal: [zdmhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'deploy_option' is undefined. 'deploy_option' is undefined\n\nThe error appears to be in '/home/nfs/domain.com/myuser/ExaCCGit/zdm_ansible/playbooks/deploy_zdm.yml': line 25, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n    - name: Assert extra-var has been passed as argument to playbook\n      ^ here\n"}

PLAY RECAP ******************************************************************************************************************************************************************************************************************************************************************************
zdmhost              : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

[myuser@domain.com@admin-host zdm_ansible]$

Then we can run the playbook with the update option.

[myuser@domain.com@admin-host zdm_ansible]$ ansible-playbook ./playbooks/deploy_zdm.yml -e deploy_option="update"

PLAY [Deploy ZDM] ***********************************************************************************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************************************************************************************************************************************************
[WARNING]: Platform linux on host zdmhost is using the discovered Python interpreter at /usr/bin/python3, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible-
core/2.15/reference_appendices/interpreter_discovery.html for more information.
ok: [zdmhost]

TASK [Assert extra-var has been passed as argument to playbook] *************************************************************************************************************************************************************************************************************************
ok: [zdmhost] => {
    "changed": false,
    "msg": "deploy_option=update"
}

TASK [Stop ZDM service] *****************************************************************************************************************************************************************************************************************************************************************
changed: [zdmhost]

TASK [Install ZDM software prerequisites] ***********************************************************************************************************************************************************************************************************************************************
ok: [zdmhost] => (item=perl)
ok: [zdmhost] => (item=expect)
ok: [zdmhost] => (item=libaio)
ok: [zdmhost] => (item=glibc-devel)
ok: [zdmhost] => (item=unzip)
ok: [zdmhost] => (item=libnsl)
ok: [zdmhost] => (item=ncurses-compat-libs)
ok: [zdmhost] => (item=oraclelinux-developer-release-el8)

TASK [Create directories for ZDM tool] **************************************************************************************************************************************************************************************************************************************************
ok: [zdmhost] => (item=/u01/app/oracle)
ok: [zdmhost] => (item=/u01/app/oracle/product/zdm)
ok: [zdmhost] => (item=/u01/app/zdm_download_dir)

TASK [Unarchive ZDM] ********************************************************************************************************************************************************************************************************************************************************************
ok: [zdmhost]

TASK [Install or update ZDM] ************************************************************************************************************************************************************************************************************************************************************
changed: [zdmhost]

TASK [Start ZDM service] ****************************************************************************************************************************************************************************************************************************************************************
changed: [zdmhost]

PLAY RECAP ******************************************************************************************************************************************************************************************************************************************************************************
zdmhost              : ok=8    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Check new ZDM version

And we can check that the new ZDM tool version is 21.5.

[zdm@zdmhost ~]$ zdmcli -v
RHP_PT.ZDM21_LINUX.X64_240219.12

[zdm@zdmhost ~]$ zdmcli -build
version: 21.0.0.0.0
full version: 21.5.0.0.0
patch version: N/A
label date: 240219.12
ZDM kit build date: Sep 10 2024 21:59:18 UTC
CPAT build version: 24.6.0
[zdm@zdmhost ~]$

To wrap up…

Installing and updating the ZDM CLI is quite easy, and writing an Ansible playbook helps automatise the installation and future updates.
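For a first installation on a fresh host, the same playbook would simply be run with the setup option (assuming the same inventory and layout):

ansible-playbook ./playbooks/deploy_zdm.yml -e deploy_option="setup"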

The article Automatisation for Oracle ZDM installation and update first appeared on dbi Blog.

Patching SUSE Multi Linux Manager

Wed, 2025-07-30 04:10

In the last post about SUSE Multi Linux Manager we had a look at how you can schedule OpenSCAP reports using the API. In this post we’ll look into something very basic: How can you patch the server components of SUSE Multi Linux Manager. We speak about components because you need to patch the host (which is a SLE Micro in this case) and the container hosting the application.

Looking at the host operating system we can see this is a SLE Micro 5.5:

suma:~ $ cat /etc/os-release 
NAME="SLE Micro"
VERSION="5.5"
VERSION_ID="5.5"
PRETTY_NAME="SUSE Linux Enterprise Micro 5.5"
ID="sle-micro"
ID_LIKE="suse"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sle-micro:5.5"

As this comes with a read-only root file system, we cannot directly use zypper to patch the system. The tool to use in this case is transactional-update. It still uses zypper in the background, but the updates are installed into a new Btrfs snapshot. With this approach the running system is not touched at all, and the updates only become active once the system is rebooted into the new snapshot (which becomes the default at the next reboot). If something is wrong with the new snapshot, the system can simply be booted from the old one and is back to what it was before patching.
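Rolling back is a one-liner followed by a reboot; a minimal sketch (the snapshot number is just an example, pick it from the snapper list output shown below):

suma:~ $ transactional-update rollback 5
suma:~ $ reboot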

Before we patch the host system let’s have a look at the snapshots we currently have available:

suma:~ $ snapper list
 # | Type   | Pre # | Date                             | User | Used Space | Cleanup | Description           | Userdata     
---+--------+-------+----------------------------------+------+------------+---------+-----------------------+--------------
0  | single |       |                                  | root |            |         | current               |              
1  | single |       | Fri 08 Mar 2024 10:45:41 AM CET  | root |   1.30 GiB | number  | first root filesystem | important=yes
2  | single |       | Mon 07 Jul 2025 12:18:08 PM CEST | root |   1.51 MiB | number  | Snapshot Update of #1 | important=yes
3  | single |       | Mon 07 Jul 2025 12:30:01 PM CEST | root |   1.02 MiB | number  | Snapshot Update of #2 | important=yes
4  | single |       | Tue 08 Jul 2025 05:33:39 AM CEST | root |  39.78 MiB | number  | Snapshot Update of #3 | important=yes
5  | single |       | Wed 16 Jul 2025 09:25:23 AM CEST | root |  45.07 MiB |         | Snapshot Update of #4 |              
6* | single |       | Wed 23 Jul 2025 04:13:09 PM CEST | root |  58.62 MiB |         | Snapshot Update of #5 |         

Let’s patch and compare what we’ll have afterwards:

suma:~ $ zypper ref
Warning: The gpg key signing file 'repomd.xml' has expired.
  Repository:       SLE-Micro-5.5-Updates
  Key Fingerprint:  FEAB 5025 39D8 46DB 2C09 61CA 70AF 9E81 39DB 7C82
  Key Name:         SuSE Package Signing Key <build@suse.de>
  Key Algorithm:    RSA 2048
  Key Created:      Mon 21 Sep 2020 10:21:47 AM CEST
  Key Expires:      Fri 20 Sep 2024 10:21:47 AM CEST (EXPIRED)
  Rpm Name:         gpg-pubkey-39db7c82-5f68629b
Retrieving repository 'SLE-Micro-5.5-Updates' metadata ..............................................................................................................[done]
Building repository 'SLE-Micro-5.5-Updates' cache ...................................................................................................................[done]
Warning: The gpg key signing file 'repomd.xml' has expired.
  Repository:       SUSE-Manager-Server-5.0-Updates
  Key Fingerprint:  FEAB 5025 39D8 46DB 2C09 61CA 70AF 9E81 39DB 7C82
  Key Name:         SuSE Package Signing Key <build@suse.de>
  Key Algorithm:    RSA 2048
  Key Created:      Mon 21 Sep 2020 10:21:47 AM CEST
  Key Expires:      Fri 20 Sep 2024 10:21:47 AM CEST (EXPIRED)
  Rpm Name:         gpg-pubkey-39db7c82-5f68629b
Retrieving repository 'SUSE-Manager-Server-5.0-Updates' metadata ....................................................................................................[done]
Building repository 'SUSE-Manager-Server-5.0-Updates' cache .........................................................................................................[done]
Repository 'SLE-Micro-5.5-Pool' is up to date.                                                                                                                             
Repository 'SUSE-Manager-Server-5.0-Pool' is up to date.                                                                                                                   
All repositories have been refreshed.

suma:~ $ transactional-update 
Checking for newer version.
transactional-update 4.1.9 started
Options: 
Separate /var detected.
2025-07-30 09:42:32 tukit 4.1.9 started
2025-07-30 09:42:32 Options: -c6 open 
2025-07-30 09:42:33 Using snapshot 6 as base for new snapshot 7.
2025-07-30 09:42:33 /var/lib/overlay/6/etc
2025-07-30 09:42:33 Syncing /etc of previous snapshot 5 as base into new snapshot "/.snapshots/7/snapshot"
2025-07-30 09:42:33 SELinux is enabled.
ID: 7
2025-07-30 09:42:36 Transaction completed.
Calling zypper up
2025-07-30 09:42:38 tukit 4.1.9 started
2025-07-30 09:42:38 Options: callext 7 zypper -R {} up -y --auto-agree-with-product-licenses 
2025-07-30 09:42:39 Executing `zypper -R /tmp/transactional-update-JsIr01 up -y --auto-agree-with-product-licenses`:
Refreshing service 'SUSE_Linux_Enterprise_Micro_5.5_x86_64'.
Refreshing service 'SUSE_Manager_Server_Extension_5.0_x86_64'.
Loading repository data...
Reading installed packages...

The following 21 packages are going to be upgraded:
  boost-license1_66_0 libboost_system1_66_0 libboost_thread1_66_0 libpolkit-agent-1-0 libpolkit-gobject-1-0 mgradm mgradm-bash-completion mgrctl mgrctl-bash-completion polkit python3-pyparsing python3-pytz python3-PyYAML python3-requests python3-salt python3-simplejson python3-urllib3 salt salt-minion salt-transactional-update uyuni-storage-setup-server

21 packages to upgrade.

Package download size:    16.8 MiB

Package install size change:
              |      71.4 MiB  required by packages that will be installed
   654.0 KiB  |  -   70.8 MiB  released by packages that will be removed

Backend:  classic_rpmtrans
Continue? [y/n/v/...? shows all options] (y): y

...
2025-07-30 09:44:40 New default snapshot is #7 (/.snapshots/7/snapshot).
2025-07-30 09:44:40 Transaction completed.

Please reboot your machine to activate the changes and avoid data loss.
New default snapshot is #7 (/.snapshots/7/snapshot).
transactional-update finished

As noted above we must reboot the system for the updates to become active. Before we do that, let’s again have a look at the snapshots:

suma:~ $ snapper list
 # | Type   | Pre # | Date                             | User | Used Space | Cleanup | Description           | Userdata     
---+--------+-------+----------------------------------+------+------------+---------+-----------------------+--------------
0  | single |       |                                  | root |            |         | current               |              
1  | single |       | Fri 08 Mar 2024 10:45:41 AM CET  | root |   1.30 GiB | number  | first root filesystem | important=yes
2  | single |       | Mon 07 Jul 2025 12:18:08 PM CEST | root |   1.51 MiB | number  | Snapshot Update of #1 | important=yes
3  | single |       | Mon 07 Jul 2025 12:30:01 PM CEST | root |   1.02 MiB | number  | Snapshot Update of #2 | important=yes
4  | single |       | Tue 08 Jul 2025 05:33:39 AM CEST | root |  39.78 MiB | number  | Snapshot Update of #3 | important=yes
5  | single |       | Wed 16 Jul 2025 09:25:23 AM CEST | root |  45.07 MiB |         | Snapshot Update of #4 |              
6- | single |       | Wed 23 Jul 2025 04:13:09 PM CEST | root |   4.11 MiB |         | Snapshot Update of #5 |              
7+ | single |       | Wed 30 Jul 2025 09:42:32 AM CEST | root |  88.39 MiB |         | Snapshot Update of #6 |      

We got a new snapshot (number 7) which is not yet active, let’s reboot and check again:

suma:~ $ reboot
...
suma:~ $ snapper list
 # | Type   | Pre # | Date                             | User | Used Space | Cleanup | Description           | Userdata     
---+--------+-------+----------------------------------+------+------------+---------+-----------------------+--------------
0  | single |       |                                  | root |            |         | current               |              
1  | single |       | Fri 08 Mar 2024 10:45:41 AM CET  | root |   1.30 GiB | number  | first root filesystem | important=yes
2  | single |       | Mon 07 Jul 2025 12:18:08 PM CEST | root |   1.51 MiB | number  | Snapshot Update of #1 | important=yes
3  | single |       | Mon 07 Jul 2025 12:30:01 PM CEST | root |   1.02 MiB | number  | Snapshot Update of #2 | important=yes
4  | single |       | Tue 08 Jul 2025 05:33:39 AM CEST | root |  39.78 MiB | number  | Snapshot Update of #3 | important=yes
5  | single |       | Wed 16 Jul 2025 09:25:23 AM CEST | root |  45.07 MiB |         | Snapshot Update of #4 |              
6  | single |       | Wed 23 Jul 2025 04:13:09 PM CEST | root |   4.11 MiB |         | Snapshot Update of #5 |              
7* | single |       | Wed 30 Jul 2025 09:42:32 AM CEST | root |  88.39 MiB |         | Snapshot Update of #6 |            

The new snapshot became active and we’re fully patched on the host system.

Now that the host system is fully patched, we can proceed with patching the SUSE Multi Linux Manager application. Before we do that, let’s check what we currently have:

suma:~ $ mgradm inspect
10:40AM INF Welcome to mgradm
10:40AM INF Executing command: inspect
10:40AM INF Computed image name is registry.suse.com/suse/manager/5.0/x86_64/server:5.0.4.1
10:40AM INF Ensure image registry.suse.com/suse/manager/5.0/x86_64/server:5.0.4.1 is available
WARN[0002] Path "/etc/SUSEConnect" from "/etc/containers/mounts.conf" doesn't exist, skipping 
10:40AM INF 
{
  "CurrentPgVersion": "16",
  "ImagePgVersion": "16",
  "DBUser": "spacewalk",
  "DBPassword": "<REDACTED>",
  "DBName": "susemanager",
  "DBPort": 5432,
  "UyuniRelease": "",
  "SuseManagerRelease": "5.0.4.1",
  "Fqdn": "suma.dwe.local"
}

The currently running version is “5.0.4.1”. Patching is quite simple as this just updates the container:

suma:~ $ mgradm upgrade podman
10:41AM INF Welcome to mgradm
10:41AM INF Use of this software implies acceptance of the End User License Agreement.
10:41AM INF Executing command: podman
...
10:41AM INF No changes requested for hub. Keep 0 replicas.
10:41AM INF Computed image name is registry.suse.com/suse/manager/5.0/x86_64/server-hub-xmlrpc-api:5.0.5
10:41AM INF Ensure image registry.suse.com/suse/manager/5.0/x86_64/server-hub-xmlrpc-api:5.0.5 is available
10:42AM INF Cannot find RPM image for registry.suse.com/suse/manager/5.0/x86_64/server-hub-xmlrpc-api:5.0.5

Checking the version again:

suma:~ $ mgradm inspect
10:36AM INF Welcome to mgradm
10:36AM INF Use of this software implies acceptance of the End User License Agreement.
10:36AM INF Executing command: inspect
10:36AM INF Computed image name is registry.suse.com/suse/manager/5.0/x86_64/server:5.0.5
10:36AM INF Ensure image registry.suse.com/suse/manager/5.0/x86_64/server:5.0.5 is available
10:36AM ??? time="2025-07-30T10:36:20+02:00" level=warning msg="Path \"/etc/SUSEConnect\" from \"/etc/containers/mounts.conf\" doesn't exist, skipping"
10:36AM INF 
{
  "CurrentPgVersion": "16",
  "ImagePgVersion": "16",
  "DBUser": "spacewalk",
  "DBPassword": "<REDACTED>",
  "DBName": "susemanager",
  "DBPort": 5432,
  "UyuniRelease": "",
  "SuseManagerRelease": "5.0.5",
  "Fqdn": "suma.dwe.local"
}

Now we are on version “5.0.5” and we’re done with patching the server part. Clients should also be upgraded, especially the Salt client, as SUSE Multi Linux Manager uses Salt to manage the clients. You can either do that manually by using the package manager of the distributions you’re managing, or you can do it from the WebUI:
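Done manually on a SLES client, this could look like the following sketch (package names are assumptions: depending on how the client was bootstrapped it is venv-salt-minion or salt-minion, and other distributions use their own package manager):

client:~ $ sudo zypper refresh
client:~ $ sudo zypper update venv-salt-minion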

That’s it, not hard to do and an easy process to follow.

The article Patching SUSE Multi Linux Manager first appeared on dbi Blog.

What will happen to your sparse clones during a switchover on ExaCC?

Tue, 2025-07-29 09:57

As you might know, on ExaCC it is possible to create sparse clones. This is based on snapshot technology: there is no physical copy, the modified blocks are written to the sparse clone, and in our case these blocks are part of the ASM sparse disk group. And of course, the sparse clone needs a Master Read Only PDB. This article is not intended to explain the whole functioning of this technology; for more information, I would recommend reading the following blog article:

https://blogs.oracle.com/exadata/post/exadata-sparse-clones-and-monitor-sparse-disk-groups-size

The intention of this article is to show what happens to your sparse clones during a switchover. In a multitenant environment, the sparse clone PDB is seen in MOUNTED status on the standby side, but it is not accessible. The sparse clone is created locally, that is to say on the primary side, and it is only accessible on that site: it is storage dependent, and the primary and the standby do not share the same storage. So after a switchover, the sparse clone is not accessible until a switchover back to the initial primary is performed. Let’s see how that works!

Lab description

For the demonstration we will use:

  • an ExaCC cluster named exacc-cl01 (currently the primary side) on which we will have a CDB named SWIT001T_CHZ2
  • an ExaCC cluster named exacc-cl02 (currently the standby side) on which we will have the physical standby CDB named SWIT001T_CHZ3
  • The PDB will be SWTET_APP_001T
  • The Master Read Only PDB will be SWTET_TMR_001T
  • The sparse clone PDB will be SWTET_APP_002T
Master Read Only creation

The commands used to create the Master Read Only on the primary CDB were:

SQL> CREATE PUBLIC DATABASE LINK SWIT001T_e9085be389a239c888e5c0baf4f50405@SWIT001T_CHZ2 CONNECT TO C##USER IDENTIFIED BY ".................." USING 'SWIT001T_CHZ2';

SQL> CREATE PLUGGABLE DATABASE SWTET_TMR_001T FROM SWTET_APP_001T@"SWIT001T_e9085be389a239c888e5c0baf4f50405@SWIT001T_CHZ2"
            KEYSTORE IDENTIFIED BY "..." INCLUDING SHARED KEY
            PARALLEL 8
            SERVICE_NAME_CONVERT=('SWTET_APP_001T','SWTET_TMR_001T')
            REFRESH MODE MANUAL
            STANDBYS=NONE;

SQL> ALTER PLUGGABLE DATABASE SWTET_TMR_001T OPEN READ ONLY INSTANCES=ALL;

SQL> DROP PUBLIC DATABASE LINK SWIT001T_e9085be389a239c888e5c0baf4f50405@SWIT001T_CHZ2;

On the primary side, this looks like:

oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 14:46:17 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ WRITE NO
         4 SWTET_TMR_001T                 READ ONLY  NO
SQL>

On the standby side, this looks like:

oracle@exacc-cl02n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 14:46:48 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ ONLY  NO
         4 SWTET_TMR_001T                 MOUNTED
SQL>

Which is correct, as the Master Read Only SWTET_TMR_001T was created with the STANDBYS=NONE option.

Sparse clone creation

The commands used to create the sparse clone on the primary CDB were:

SQL> CREATE PLUGGABLE DATABASE SWTET_APP_002T FROM SWTET_TMR_001T
                KEYSTORE IDENTIFIED BY "..." INCLUDING SHARED KEY
                PARALLEL 8
                CREATE_FILE_DEST='+SPRC2'
                SERVICE_NAME_CONVERT=('SWTET_TMR_001T','SWTET_APP_002T','SWTET_APP_001T','SWTET_APP_002T')
                SNAPSHOT COPY
                STANDBYS=NONE;

SQL> ALTER PLUGGABLE DATABASE SWTET_APP_002T OPEN INSTANCES=ALL;

On the primary side, this looks like:

oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:03:18 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ WRITE NO
         4 SWTET_TMR_001T                 READ ONLY  NO
         5 SWTET_APP_002T                 READ WRITE NO
SQL>

The sparse clone PDB is opened as expected in READ/WRITE mode.

On the standby side, this looks like:

oracle@exacc-cl02n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:37:11 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ ONLY  NO
         4 SWTET_TMR_001T                 MOUNTED
         5 SWTET_APP_002T                 MOUNTED
SQL>

Which is correct, as we used the STANDBYS=NONE option to create the sparse clone, and in any case the snapshot only exists locally, as it is storage dependent.

Create some data in the sparse clone

Let’s create some data in the sparse clone.

oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:28:39 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ WRITE NO
         4 SWTET_TMR_001T                 READ ONLY  NO
         5 SWTET_APP_002T                 READ WRITE NO

SQL> alter session set container=SWTET_APP_002T;

Session altered.

SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS';

Session altered.

SQL> set lines 300

SQL> create table TEST_SWITCH (id int, name varchar(100), run_exec date);

Table created.

SQL> insert into TEST_SWITCH values (0, 'Before switch', sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from TEST_SWITCH;

        ID NAME                                                                                                 RUN_EXEC
---------- ---------------------------------------------------------------------------------------------------- -------------------
         0 Before switch                                                                                        06/06/2025 15:33:32

SQL>

Switchover to CHZ3

Let’s switch over from CHZ2 to CHZ3.

oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] dgh
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Fri Jun 6 15:36:06 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.

DGMGRL> connect sys@SWIT001T_CHZ2
Password:
Connected to "SWIT001T_CHZ2"
Connected as SYSDBA.

DGMGRL> show configuration lag

Configuration - swit001t_dgconf

  Protection Mode: MaxPerformance
  Members:
  SWIT001T_CHZ2 - Primary database
    SWIT001T_CHZ3 - Physical standby database
                    Transport Lag:      0 seconds (computed 0 seconds ago)
                    Apply Lag:          0 seconds (computed 0 seconds ago)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 46 seconds ago)

DGMGRL> validate database SWIT001T_CHZ3

  Database Role:     Physical standby database
  Primary Database:  SWIT001T_CHZ2

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

  Managed by Clusterware:
    SWIT001T_CHZ2:  YES
    SWIT001T_CHZ3:  YES

  Temporary Tablespace File Information:
    SWIT001T_CHZ2 TEMP Files:  5
    SWIT001T_CHZ3 TEMP Files:  3

DGMGRL> switchover to SWIT001T_CHZ3;
Performing switchover NOW, please wait...
Operation requires a connection to database "SWIT001T_CHZ3"
Connecting ...
Connected to "SWIT001T_CHZ3"
Connected as SYSDBA.
New primary database "SWIT001T_CHZ3" is opening...
Oracle Clusterware is restarting database "SWIT001T_CHZ2" ...
Connected to "SWIT001T_CHZ2"
Switchover succeeded, new primary is "swit001t_chz3"
DGMGRL>

DGMGRL> show configuration lag

Configuration - swit001t_dgconf

  Protection Mode: MaxPerformance
  Members:
  SWIT001T_CHZ3 - Primary database
    SWIT001T_CHZ2 - Physical standby database
                    Transport Lag:      0 seconds (computed 1 second ago)
                    Apply Lag:          0 seconds (computed 1 second ago)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 7 seconds ago)

Create a table in the source PDB on the new primary (cl02)

To check how the sparse clone database reacts after the switchover, let’s add some data in the source PDB on the new primary (CHZ3).

oracle@exacc-cl02n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:40:16 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ WRITE NO
         4 SWTET_TMR_001T                 MOUNTED
         5 SWTET_APP_002T                 MOUNTED

SQL> alter session set container=SWTET_APP_001T;

Session altered.

SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS';

Session altered.

SQL> set lines 300

SQL> create table TEST_ON_STDBY (id int, name varchar(100), run_exec date);

Table created.

SQL> insert into TEST_ON_STDBY values (0, 'Cl08 is primary', sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 SWTET_APP_001T                 READ WRITE NO
		 
SQL> select instance_name, host_name from v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
SWIT001T1        exacc-cl02n1

SQL> select * from TEST_ON_STDBY;

        ID NAME                                                                                                 RUN_EXEC
---------- ---------------------------------------------------------------------------------------------------- -------------------
         0 Cl08 is primary                                                                                      06/06/2025 15:43:19

SQL>

Check the table on the new standby side (cl01)

We can check the data in the source PDB on the standby side, on cluster cl01.

oracle@exacc-cl01n1:~/ [grinf19] SWIT001T1

 ********************************************
 INSTANCE_NAME   : SWIT001T1
 DB_NAME         : SWIT001T
 DB_UNIQUE_NAME  : SWIT001T_CHZ2
 STATUS          : OPEN READ ONLY WITH APPLY
 LOG_MODE        : ARCHIVELOG
 USERS/SESSIONS  : 4/19
 DATABASE_ROLE   : PHYSICAL STANDBY
 FLASHBACK_ON    : YES
 FORCE_LOGGING   : YES
 VERSION         : 19.26.0.0.0
 CDB_ENABLED     : YES
 PDBs            : PDB$SEED  SWTET_APP_001T  SWTET_APP_002T  SWTET_TMR_001T
 ********************************************

 PDB color: pdbname=mount, pdbname=open read-write, pdbname=open read-only
 Statustime: 2025-06-06 15:45:25

oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:45:27 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ ONLY  NO
         4 SWTET_TMR_001T                 MOUNTED
         5 SWTET_APP_002T                 READ ONLY  NO

The Master Read Only PDB is in MOUNTED status; opening it in READ ONLY mode is not possible. This makes sense, as it is the reference for the sparse clone and we created it with the STANDBYS=NONE option.

SQL> alter pluggable database SWTET_TMR_001T open read only instances=all;
alter pluggable database SWTET_TMR_001T open read only instances=all
*
ERROR at line 1:
ORA-01173: data dictionary indicates missing data file from system tablespace

Let’s check whether the source PDB has the newly created data on the new standby side (cl01).

SQL> alter session set container=SWTET_APP_001T;

Session altered.

SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS';

Session altered.

SQL> set lines 300

SQL> select instance_name, host_name from v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
SWIT001T1        exacc-cl01n1

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 SWTET_APP_001T                 READ ONLY  NO

SQL> select * from TEST_ON_STDBY;

        ID NAME                                                                                                 RUN_EXEC
---------- ---------------------------------------------------------------------------------------------------- -------------------
         0 Cl08 is primary                                                                                      06/06/2025 15:43:19

SQL>

Yes, we have the new data.

Run switchover back

Let’s switch back to cl01.

DGMGRL> show configuration lag

Configuration - swit001t_dgconf

  Protection Mode: MaxPerformance
  Members:
  SWIT001T_CHZ3 - Primary database
    SWIT001T_CHZ2 - Physical standby database
                    Transport Lag:      0 seconds (computed 0 seconds ago)
                    Apply Lag:          0 seconds (computed 0 seconds ago)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 11 seconds ago)

DGMGRL> validate database SWIT001T_CHZ2;

  Database Role:     Physical standby database
  Primary Database:  SWIT001T_CHZ3

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

  Managed by Clusterware:
    SWIT001T_CHZ3:  YES
    SWIT001T_CHZ2:  YES

  Temporary Tablespace File Information:
    SWIT001T_CHZ3 TEMP Files:  3
    SWIT001T_CHZ2 TEMP Files:  5

DGMGRL> switchover to SWIT001T_CHZ2;
Performing switchover NOW, please wait...
Operation requires a connection to database "SWIT001T_CHZ2"
Connecting ...
Connected to "SWIT001T_CHZ2"
Connected as SYSDBA.
New primary database "SWIT001T_CHZ2" is opening...
Oracle Clusterware is restarting database "SWIT001T_CHZ3" ...
Connected to "SWIT001T_CHZ3"
Switchover succeeded, new primary is "swit001t_chz2"

DGMGRL> show configuration lag

Configuration - swit001t_dgconf

  Protection Mode: MaxPerformance
  Members:
  SWIT001T_CHZ2 - Primary database
    SWIT001T_CHZ3 - Physical standby database
                    Transport Lag:      0 seconds (computed 5 seconds ago)
                    Apply Lag:          0 seconds (computed 5 seconds ago)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 20 seconds ago)

On the standby side, this would look like:

oracle@exacc-cl02n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:51:33 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ ONLY  NO
         4 SWTET_TMR_001T                 MOUNTED
         5 SWTET_APP_002T                 MOUNTED
SQL>

On the primary side, this would look like:

oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:52:24 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ WRITE NO
         4 SWTET_TMR_001T                 MOUNTED
         5 SWTET_APP_002T                 READ WRITE NO

Reopening the Master Read Only would fail.

SQL> alter pluggable database SWTET_TMR_001T open read only instances=all;
alter pluggable database SWTET_TMR_001T open read only instances=all
*
ERROR at line 1:
ORA-01173: data dictionary indicates missing data file from system tablespace

Add data in source PDB

Adding new data in the source PDB is of course successful.

SQL> alter session set container=SWTET_APP_001T;

Session altered.

SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS';

Session altered.

SQL> set lines 300

SQL> select instance_name, host_name from v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
SWIT001T1        exacc-cl01n1

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 SWTET_APP_001T                 READ WRITE NO

SQL> insert into TEST_ON_STDBY values (1,'primary back to cl07',sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from TEST_ON_STDBY;

        ID NAME                                                                                                 RUN_EXEC
---------- ---------------------------------------------------------------------------------------------------- -------------------
         0 Cl08 is primary                                                                                      06/06/2025 15:43:19
         1 primary back to cl07                                                                                 06/06/2025 15:54:54

SQL>

And the data is also available in the source PDB on the standby side.

oracle@exacc-cl02n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:56:00 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ ONLY  NO
         4 SWTET_TMR_001T                 MOUNTED
         5 SWTET_APP_002T                 MOUNTED

SQL> alter session set container=SWTET_APP_001T;

Session altered.

SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS';

Session altered.

SQL> set lines 300

SQL> select instance_name, host_name from v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
SWIT001T1        exacc-cl02n1

SQL> select * from TEST_ON_STDBY;

        ID NAME                                                                                                 RUN_EXEC
---------- ---------------------------------------------------------------------------------------------------- -------------------
         0 Cl08 is primary                                                                                      06/06/2025 15:43:19
         1 primary back to cl07                                                                                 06/06/2025 15:54:54

SQL>

Add new data in the sparse clone

Let’s add new data in the sparse clone.

oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:57:00 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ WRITE NO
         4 SWTET_TMR_001T                 MOUNTED
         5 SWTET_APP_002T                 READ WRITE NO

SQL> alter session set container=SWTET_APP_002T;

Session altered.

SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS';

Session altered.

SQL> set lines 300

SQL> select instance_name, host_name from v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
SWIT001T1        exacc-cl01n1

SQL> select * from TEST_SWITCH;

        ID NAME                                                                                                 RUN_EXEC
---------- ---------------------------------------------------------------------------------------------------- -------------------
         0 Before switch                                                                                        06/06/2025 15:33:32

SQL> insert into TEST_SWITCH values (1,'After switchover and back cl07', sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from TEST_SWITCH;

        ID NAME                                                                                                 RUN_EXEC
---------- ---------------------------------------------------------------------------------------------------- -------------------
         0 Before switch                                                                                        06/06/2025 15:33:32
         1 After switchover and back cl07                                                                       06/06/2025 15:59:01

SQL>

Adding new data into the sparse clone is possible.
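
By the way, if you want to double-check which PDBs of the CDB are snapshot copies and from which container they were cloned, a query along the following lines should do (just a small sketch, assuming v$pdbs exposes the SNAPSHOT_PARENT_CON_ID column for snapshot-copy PDBs, which is my recollection for 19c):

SQL> select name, open_mode, snapshot_parent_con_id
       from v$pdbs
      where snapshot_parent_con_id is not null;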

Drop the sparse clone

The commands to drop the sparse clone are:

SQL> ALTER PLUGGABLE DATABASE SWTET_APP_002T CLOSE IMMEDIATE INSTANCES=ALL;

SQL> DROP PLUGGABLE DATABASE SWTET_APP_002T INCLUDING DATAFILES;

On the primary side, this now looks like:

oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 17:25:32 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ WRITE NO
         4 SWTET_TMR_001T                 MOUNTED
SQL>

On the standby side, this now looks like:

oracle@exacc-cl02n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 17:26:03 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ ONLY  NO
         4 SWTET_TMR_001T                 MOUNTED
SQL>

Refresh Master Read Only

We can refresh the Master Read Only PDB with the following commands.

SQL> drop pluggable database SWTET_TMR_001T including datafiles;

SQL> CREATE PUBLIC DATABASE LINK SWIT001T_e9085be389a239c888e5c0baf4f50405@SWIT001T_CHZ2 CONNECT TO C##USER IDENTIFIED BY ".................." USING 'SWIT001T_CHZ2';

SQL> CREATE PLUGGABLE DATABASE SWTET_TMR_001T FROM SWTET_APP_001T@"SWIT001T_e9085be389a239c888e5c0baf4f50405@SWIT001T_CHZ2"
            KEYSTORE IDENTIFIED BY "..." INCLUDING SHARED KEY
            PARALLEL 8
            SERVICE_NAME_CONVERT=('SWTET_APP_001T','SWTET_TMR_001T')
            REFRESH MODE MANUAL
            STANDBYS=NONE;

SQL> ALTER PLUGGABLE DATABASE SWTET_TMR_001T OPEN READ ONLY INSTANCES=ALL;

SQL> DROP PUBLIC DATABASE LINK SWIT001T_e9085be389a239c888e5c0baf4f50405@SWIT001T_CHZ2;

Create sparse clone again

And we can create the sparse clone again with the same command.

SQL> CREATE PLUGGABLE DATABASE SWTET_APP_002T FROM SWTET_TMR_001T
                KEYSTORE IDENTIFIED BY "..." INCLUDING SHARED KEY
                PARALLEL 8
                CREATE_FILE_DEST='+SPRC2'
                SERVICE_NAME_CONVERT=('SWTET_TMR_001T','SWTET_APP_002T','SWTET_APP_001T','SWTET_APP_002T')
                SNAPSHOT COPY
                STANDBYS=NONE;

SQL> ALTER PLUGGABLE DATABASE SWTET_APP_002T OPEN INSTANCES=ALL;

On the primary side, this looks like:

oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 17:38:12 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ WRITE NO
         4 SWTET_APP_002T                 READ WRITE NO
         5 SWTET_TMR_001T                 READ ONLY  NO
SQL>

And on the standby side:

oracle@exacc-cl02n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 17:38:38 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ ONLY  NO
         4 SWTET_APP_002T                 MOUNTED
         5 SWTET_TMR_001T                 MOUNTED
SQL>

To wrap up…

Sparse clones work successfully in a Data Guard environment. They are only available locally, and are therefore only usable while the CDB that initially hosts the sparse clone PDB has the primary role. Switching over to the other site is not a problem: the sparse clone is simply unavailable there and becomes available again once you switch back. Finally, we can easily refresh the master read-only PDB and create the sparse clone again.

The article What will happen to your spare clones during a switchover on ExaCC? appeared first on dbi Blog.

dbaascli database move failing with ORA-01691 on ExaCC

Mon, 2025-07-28 10:59

I recently had to move a database from a 19.23 to a 19.26 dbhome on an ExaCC, which also means the database gets patched. The dbaascli database move command failed during the datapatch step with the following error:

DBD::Oracle::st execute failed: ORA-01691: unable to extend lob segment SYS.SYS_LOB0000023009C00008$$ by 1024 in tablespace SYSTEM

In this blog I would like to share my troubleshooting steps with you and how I resolved the problem.

Moving the database to the new dbhome

In order to move/patch the database, I used the dbaascli database move command.

[root@exacc-cl01n1 ~]# dbaascli database move --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_3 --dbname CDBTEST
DBAAS CLI version 25.1.1.0.0
Executing command database move --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_3 --dbname CDBTEST
Job id: 17f6fef5-2143-4bb7-8007-a27cf5232099
Session log: /var/opt/oracle/log/CDBTEST/database/move/dbaastools_2025-05-05_02-47-51-PM_237874.log
Loading PILOT...
Session ID of the current execution is: 15920
Log file location: /var/opt/oracle/log/CDBTEST/database/move/pilot_2025-05-05_02-47-55-PM_238225
-----------------
Running initialization job
Completed initialization job
-----------------
Running validate_user_input job
Completed validate_user_input job
-----------------
Running validate_database job
[WARNING] [DBAAS-70643] Following pluggable databases '{CDBTEST=[PDB_001T]}' do not have services configured.
   ACTION: Make sure to configure the services of pluggable databases so that pluggable databases are started after the database bounce.
Completed validate_database job
-----------------
Running validate_creg_file_existence job
Completed validate_creg_file_existence job
-----------------
Running validate_source_home job
Completed validate_source_home job
-----------------
Running validate_major_version job
Completed validate_major_version job
-----------------
Running validate_oracle_home_type job
Completed validate_oracle_home_type job
-----------------
Running check_target_source_home_not_same job
Completed check_target_source_home_not_same job
-----------------
Running validate_home_existence job
Completed validate_home_existence job
-----------------
Running validate_home_consistency job
Completed validate_home_consistency job
-----------------
Running validate_home_options job
Completed validate_home_options job
-----------------
Running validate_disk_space job
Completed validate_disk_space job
-----------------
Acquiring write lock: cdbtest
Acquiring read lock: _u02_app_oracle_product_19.0.0.0_dbhome_1
Acquiring read lock: _u02_app_oracle_product_19.0.0.0_dbhome_3
Running pre_patch_lock_manager job
Completed pre_patch_lock_manager job
-----------------
Running enable_wallet_root job
Completed enable_wallet_root job
-----------------
Running copy_config_files job
Completed copy_config_files job
-----------------
Running stop_database_instance-exacc-cl01n1 job
Completed stop_database_instance-exacc-cl01n1 job
-----------------
Running update_database_resource-exacc-cl01n1 job
Completed update_database_resource-exacc-cl01n1 job
-----------------
Running start_database_instance-exacc-cl01n1 job
Completed start_database_instance-exacc-cl01n1 job
-----------------
Running exacs_post_patch_node_updation job
Completed exacs_post_patch_node_updation job
-----------------
Running update_dba_directories job
Completed update_dba_directories job
-----------------
Running datapatch_and_recompile_invalid_objects job
Datapatch execution on database 'CDBTEST' is in progress
Execution of datapatch_and_recompile_invalid_objects failed
[FATAL] [DBAAS-60022] Command '/u02/app/oracle/product/19.0.0.0/dbhome_3/OPatch/datapatch -verbose' has failed on nodes [exacc-cl01n1].
*MORE DETAILS*
Result of node:exacc-cl01n1
[Interim patch 28318139 (ORA-31003 ERROR WHEN IMPORTING FULL DATABASE IN PARALLEL):,   Binary registry: Not installed,   PDB PDB_001T: Rolled back successfully on 06-MAR-22 10.57.36.095921 AM,   PDB CDB$ROOT: Not installed,   PDB PDB$SEED: Not installed, Interim patch 28555193 (DBMS_METADATA.GET_DDL CAPTURE INCORRECT STORAGE OPTIONS OF THE XML COLUMN ON GTT):,   Binary registry: Not installed,   PDB PDB_001T: Rolled back successfully on 06-MAR-22 10.57.37.111786 AM,   PDB CDB$ROOT: Not installed,   PDB PDB$SEED: Not installed, Interim patch 30944402 (SELECT FROM MASTER TABLE RUNS SLOW DURING TABLE_DATA EXPORT WHEN THERE ARE MANY SUBPARTITIONS):,   Binary registry: Not installed,   PDB PDB_001T: Rolled back successfully on 19-AUG-23 09.58.11.269998 AM,   PDB CDB$ROOT: Not installed,   PDB PDB$SEED: Not installed, Interim patch 32067171 (OJVM RELEASE UPDATE: 19.10.0.0.210119 (32067171)):,   Binary registry: Not installed,   PDB PDB_001T: Rolled back successfully on 06-MAR-22 10.57.35.074017 AM,   PDB CDB$ROOT: Not installed,   PDB PDB$SEED: Not installed, Interim patch 33192694 (OJVM RELEASE UPDATE: 19.13.0.0.211019 (33192694)):,   Binary registry: Not installed,   PDB PDB_001T: Rolled back successfully on 19-AUG-23 09.58.11.262033 AM,   PDB CDB$ROOT: Not installed,   PDB PDB$SEED: Not installed, Interim patch 33522539 (MERGE ON DATABASE RU 19.13.0.0.0 OF 33280027):,   Binary registry: Not installed,   PDB PDB_001T: Rolled back successfully on 19-AUG-23 09.58.11.271219 AM,   PDB CDB$ROOT: Not installed,   PDB PDB$SEED: Not installed, Interim patch 34786990 (OJVM RELEASE UPDATE: 19.18.0.0.230117 (34786990)):,   Binary registry: Not installed,   PDB PDB_001T: Rolled back successfully on 17-AUG-24 08.59.17.242116 AM,   PDB CDB$ROOT: Not installed,   PDB PDB$SEED: Not installed, Interim patch 34972375 (DATAPUMP BUNDLE PATCH 19.18.0.0.0):,   Binary registry: Not installed,   PDB PDB_001T: Rolled back successfully on 17-AUG-24 08.59.19.038902 AM,   PDB CDB$ROOT: Not installed,   PDB PDB$SEED: Not installed, Interim patch 36199232 (OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232)):,   Binary registry: Not installed,   PDB PDB_001T: Applied successfully on 17-AUG-24 08.59.17.362926 AM,   PDB CDB$ROOT: Applied successfully on 18-APR-24 10.08.56.234640 AM,   PDB PDB$SEED: Applied successfully on 18-APR-24 10.19.33.039940 AM, Interim patch 36420641 (DATAPUMP BUNDLE PATCH 19.23.0.0.0):,   Binary registry: Not installed,   PDB PDB_001T: Applied successfully on 17-AUG-24 09.00.21.159959 AM,   PDB CDB$ROOT: Applied successfully on 02-OCT-24 11.47.34.300679 AM,   PDB PDB$SEED: Applied successfully on 02-OCT-24 11.50.34.101031 AM, Interim patch 37102264 (OJVM RELEASE UPDATE: 19.26.0.0.250121 (37102264)):,   Binary registry: Installed,   PDB PDB_001T: Not installed,   PDB CDB$ROOT: Not installed,   PDB PDB$SEED: Not installed, Interim patch 37470729 (DATAPUMP BUNDLE PATCH 19.26.0.0.0):,   Binary registry: Installed,   PDB PDB_001T: Not installed,   PDB CDB$ROOT: Not installed,   PDB PDB$SEED: Not installed, Current state of release update SQL patches:,   Binary registry:,     19.26.0.0.0 Release_Update 250118124854: Installed,   PDB PDB_001T:,     Applied 19.23.0.0.0 Release_Update 240406004238 successfully on 17-AUG-24 08.59.50.930598 AM,   PDB CDB$ROOT:,     Applied 19.23.0.0.0 Release_Update 240406004238 successfully on 18-APR-24 10.19.15.621942 AM,   PDB PDB$SEED:,     Applied 19.23.0.0.0 Release_Update 240406004238 successfully on 18-APR-24 10.28.06.019862 AM, Adding patches to installation queue and 
performing prereq checks...done, Installation queue:,   For the following PDBs: CDB$ROOT PDB$SEED PDB_001T,     The following interim patches will be rolled back:,       36199232 (OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232)),       26749785 (PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK),       27605010 (DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999),       36420641 (DATAPUMP BUNDLE PATCH 19.23.0.0.0),     Patch 37260974 (Database Release Update : 19.26.0.0.250121 (37260974)):,       Apply from 19.23.0.0.0 Release_Update 240406004238 to 19.26.0.0.0 Release_Update 250118124854,     The following interim patches will be applied:,       37102264 (OJVM RELEASE UPDATE: 19.26.0.0.250121 (37102264)),       26749785 (PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK),       27605010 (DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999),       37470729 (DATAPUMP BUNDLE PATCH 19.26.0.0.0), DBD::Oracle::st execute failed: ORA-01691: unable to extend lob segment SYS.SYS_LOB0000023009C00008$$ by 1024 in tablespace SYSTEM, ORA-06512: at line 2 (DBD ERROR: OCIStmtExecute) [for Statement "BEGIN,            INSERT INTO sys.dba_registry_sqlpatch_ru_info,              (patch_id,,               patch_uid,,               patch_descriptor,,               ru_version,,               ru_build_description,,               ru_build_timestamp,,               patch_directory),            VALUES,              (:patch_id,,               :patch_uid,,               :patch_descriptor,,               :ru_version,,               :ru_build_description,,               TO_TIMESTAMP(:ru_build_timestamp, 'YYMMDDHH24MISS'),,               :patch_directory);,            COMMIT;,          END;" with ParamValues: :patch_descriptor=OCIXMLTypePtr=SCALAR(0x4b4dcb0), :patch_directory='PK.........y4Z�&(.�...........37260974_rollback.sql�.]s.ֺ.��+t��ng�.o����v.�?2�����F..�$ . [���.@.q���dQ.�Tqc��û��z���rv�8.�������.���..����(M�ϳ��´.]%�,u
                                                                                                    �z��.�8���.��.s�f�����,J�Wa.�a6�y�.zp�������,..�?�l���t���..�/fW�(j�}pp�8^��..~w�N.�?�..�.��.(Z���e��.������b���.�������K6E�+�?��A.ln._�.=.�qp����������.������ǫ.qﻋ��.���.�.β �\/x��.M�Q�_�q�ý7.-N�Tory�|���y�&E..y��i��..)�N.�.7��.׌��������oqxz|�|st�X...������G'��.N�_�.?..*�.�..../����.��?|�pWn.�*��;;;.}��.�.�+����..fJ�`., 7+��z�>.?
                                                                                                                                                        �Y�.�.��...�.,...N��9��.��.Y�rp����O��8_���O��%�.E�F�.��t�|��.��.�Q|�L�����y.��[�'�|.���.�.y�ë.�:xy�..pS>��.|U.��r�/j�?=:�\...������.�������ry�����QY�J.�.}N��.�.S%������j�^j.=7T�Z�e.W�z.�?..�>��?�w�.{w9{������/Nj�������f])�.�����.O�*�
��϶!����?�.��&)���I�tzv�x��., �LU.�IV�..7.R7.y��&h{*�3.�mJm}n(, _�...��2�Lݰ�r*ç..�S7ղ@�V��.1>���'�2�.��., ...�9.m)s.�S.ȴ�T��K.�.$!'�..;�Z�.#T�.��(�f.��.�i.pBK��h�fX*ߩ...��i�, �.*d�.�e..6.Oy!*�.:�.MQ4s���.�...�SW��|��...F�OC�,e�..�"/��:5�...', :patch_id='37260974', :patch_uid='26040769', :ru_build_description="Release_Update", :ru_build_timestamp="250118124854", :ru_version="19.26.0.0.0"] at /u02/app/oracle/product/19.0.0.0/dbhome_3/sqlpatch/sqlpatch.pm line 5337., Please refer to MOS Note 1609718.1 and/or the invocation log, /u02/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_258023_2025_05_05_14_50_06/sqlpatch_invocation.log, for information on how to resolve the above errors., SQL Patching tool complete on Mon May  5 14:50:40 2025]
Exit code of the operation:1
Releasing lock: cdbtest
Releasing lock: _u02_app_oracle_product_19.0.0.0_dbhome_1
Releasing lock: _u02_app_oracle_product_19.0.0.0_dbhome_3
*** Executing jobs which need to be run always... ***
-----------------
Running post_patch_lock_manager job
Completed post_patch_lock_manager job
******** PLUGIN EXECUTION FAILED ********
To resume this failed session, run the following command:
dbaascli database move --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_3 --dbname CDBTEST --sessionID 15920 --resume

As we can see, the command failed with the following error:

[FATAL] [DBAAS-60022] Command '/u02/app/oracle/product/19.0.0.0/dbhome_3/OPatch/datapatch -verbose' has failed on nodes [exacc-cl01n1].
7470729 (DATAPUMP BUNDLE PATCH 19.26.0.0.0), DBD::Oracle::st execute failed: ORA-01691: unable to extend lob segment SYS.SYS_LOB0000023009C00008$$ by 1024 in tablespace SYSTEM
Troubleshooting

So let’s troubleshoot!

The database itself has been moved to the new dbhome, and the dbaascli command failed at the datapatch step.

oracle@exacc-cl01n1:/u02/app/oracle/local/dmk/etc/ [rdbms1900] CDBTEST1
2025-05-05_14-56-12::DMK_ENV.pm::read_oratab            ::INFO ==> changed ORACLE_HOME found in oratab: +ASM1 - old=/u01/app/19.0.0.0/grid, new=/u02/app/23.0.0.0/gridhome_1

 **********************************
 INSTANCE_NAME   : CDBTEST1
 DB_NAME         : CDBTEST
 DB_UNIQUE_NAME  : CDBTEST_CHZ2
 STATUS          : OPEN READ WRITE
 LOG_MODE        : ARCHIVELOG
 USERS/SESSIONS  : 2/7
 DATABASE_ROLE   : PRIMARY
 FLASHBACK_ON    : YES
 FORCE_LOGGING   : YES
 VERSION         : 19.26.0.0.0
 CDB_ENABLED     : YES
 PDBs            : PDB_001T  PDB$SEED
 **********************************

 PDB color: pdbname=mount, pdbname=open read-write, pdbname=open read-only
 Statustime: 2025-05-05 14:56:12

oracle@exacc-cl01n1:/u02/app/oracle/local/dmk/etc/ [CDBTEST1 (CDB$ROOT)] echo $ORACLE_HOME
/u02/app/oracle/product/19.0.0.0/dbhome_3

And the PDB is open in read write mode.

oracle@exacc-cl01n1:/u02/app/oracle/local/dmk/etc/ [CDBTEST1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Mon May 5 14:56:31 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 PDB_001T                       READ WRITE NO
SQL>

Let’s try to run datapatch manually; it is expected to fail with the same errors, but it will make it easier to extract the datapatch output.

oracle@exacc-cl01n1:/u02/app/oracle/product/19.0.0.0/dbhome_3/ [CDBTEST1 (CDB$ROOT)] cd OPatch/

oracle@exacc-cl01n1:/u02/app/oracle/product/19.0.0.0/dbhome_3/OPatch/ [CDBTEST1 (CDB$ROOT)] ./datapatch -verbose
SQL Patching tool version 19.26.0.0.0 Production on Mon May  5 14:57:00 2025
Copyright (c) 2012, 2025, Oracle.  All rights reserved.

Log file for this invocation: /u02/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_309940_2025_05_05_14_57_00/sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...done

Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)

Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of interim SQL patches:
Interim patch 26749785 (PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK):
  Binary registry: Not installed
  PDB PDB_001T: Rolled back successfully on 17-AUG-24 08.59.18.040319 AM
  PDB CDB$ROOT: Not installed
  PDB PDB$SEED: Not installed
Interim patch 26749785 (PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK):
  Binary registry: Not installed
  PDB PDB_001T: Applied successfully on 17-AUG-24 08.59.56.646477 AM
  PDB CDB$ROOT: Applied successfully on 02-OCT-24 11.46.11.005940 AM
  PDB PDB$SEED: Applied successfully on 02-OCT-24 11.47.47.794119 AM
...
...
...
DBD::Oracle::st execute failed: ORA-01691: unable to extend lob segment SYS.SYS_LOB0000023009C00008$$ by 1024 in tablespace SYSTEM
ORA-06512: at line 2 (DBD ERROR: OCIStmtExecute) [for Statement "BEGIN
           INSERT INTO sys.dba_registry_sqlpatch_ru_info
             (patch_id,
              patch_uid,
              patch_descriptor,
              ru_version,
              ru_build_description,
              ru_build_timestamp,
              patch_directory)
           VALUES
             (:patch_id,
              :patch_uid,
              :patch_descriptor,
              :ru_version,
              :ru_build_description,
              TO_TIMESTAMP(:ru_build_timestamp, 'YYMMDDHH24MISS'),
              :patch_directory);
           COMMIT;
         END;" with ParamValues: :patch_descriptor=OCIXMLTypePtr=SCALAR(0x4611c98), :patch_directory='PK.........y4Z▒&(.▒...........37260974_rollback.sql▒.]s.ֺ.▒▒+t▒▒ng▒.o▒▒▒▒v.▒͌?2▒▒▒▒▒F..▒$ . [▒▒▒.@.q▒▒▒dQ.▒Tqc▒▒û▒▒z▒▒▒rv▒8.▒▒▒▒▒▒▒.▒▒▒..▒▒▒▒(M▒ϳ▒▒´.]%▒,u
                                                                                                                                                                                                                                                              ▒z▒▒.▒8▒▒▒.▒▒.s▒f▒▒▒▒▒,J▒Wa.▒a6▒y▒.zp▒▒▒▒▒▒▒,..▒?▒l▒▒▒t▒▒▒..▒/fW▒(j▒}pp▒8^▒▒..~w▒N.▒?▒..▒.▒▒.(Z▒▒▒e▒▒.▒▒▒▒▒▒b▒▒▒.▒▒▒▒▒▒▒K6E▒+▒?▒▒A.ln._▒.=.▒ܸqp▒▒▒▒▒▒▒▒▒▒.▒▒▒▒▒▒ǫ.qﻋ▒▒.▒▒▒.▒.β ▒\/x▒.M▒Q▒_▒q▒ý7.-N▒Tory▒|▒▒▒y▒&E..y▒▒i▒▒..)▒N.▒.7▒▒.׌▒▒▒▒▒▒▒▒oqxz|▒|st▒X...▒▒▒▒▒▒G'▒▒.N▒_▒.?..*▒.▒..../▒▒▒▒.▒▒?|▒pWn.▒*▒▒;;;.}▒▒.▒.▒+▒▒▒▒..fJ▒`.
7+▒▒z▒>.?
         ▒Y▒.▒.▒▒...▒.,...N▒▒9▒.▒▒.Y▒rp▒▒ⷷ▒▒O▒8_▒▒▒O▒▒%▒.E▒F▒.▒▒t▒|▒▒.▒▒.▒Q|▒L▒▒▒▒y.▒▒[▒'▒|.▒▒▒.▒.y▒ë.▒:xy▒..pS>▒▒.|U.▒▒r▒/j▒?=:▒\...▒▒▒▒▒▒.▒▒▒▒▒▒▒ry▒▒▒▒▒QY▒J.▒.}N▒▒.▒.S%▒▒▒▒▒▒j▒^j.=7T▒Z▒e.W▒z.▒?..▒>▒▒?▒w▒.{w9{▒▒▒▒▒▒/Nj▒▒▒▒▒▒▒f])▒.▒▒▒▒.O▒*▒
▒▒϶!▒▒▒▒?▒.▒▒&)▒▒▒I▒tzv▒x▒▒.
▒LU.▒IV▒..7.R7.y▒▒&h{*▒3.▒mJm}n(
_▒...▒▒2▒Lݰ▒r*ç..▒S7ղ@▒V▒▒.1>▒▒▒'▒2▒.▒▒.
...▒9.m)s.▒S.ȴ▒T▒▒K.▒.$!'▒..;▒Z▒.#T▒.▒▒(▒f.▒▒.▒i.pBK▒▒h▒fX*ߩ...▒▒i▒
▒.*d▒.▒e..6.Oy!*▒.:▒.MQ4s▒▒▒.▒...▒SW▒▒|▒▒...F▒OC▒,e▒..▒"/▒▒:5▒...', :patch_id="37260974", :patch_uid="26040769", :ru_build_description="Release_Update", :ru_build_timestamp="250118124854", :ru_version="19.26.0.0.0"] at /u02/app/oracle/product/19.0.0.0/dbhome_3/sqlpatch/sqlpatch.pm line 5337.


Please refer to MOS Note 1609718.1 and/or the invocation log
/u02/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_309940_2025_05_05_14_57_00/sqlpatch_invocation.log
for information on how to resolve the above errors.

SQL Patching tool complete on Mon May  5 14:57:34 2025

Let’s check the SYSTEM tablespace usage in the CDB$ROOT.
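
The qdbstbssize.sql script used below is one of our internal helper scripts. If you do not have it, a rough equivalent for permanent tablespaces, based only on the standard dictionary views, could look like this (a minimal sketch, not the actual script):

SQL> select df.tablespace_name,
            round(sum(df.bytes)/1024/1024/1024, 2) as alloc_gb,
            round(nvl(fs.free_bytes, 0)/1024/1024/1024, 2) as free_gb,
            round(sum(decode(df.autoextensible, 'YES', df.maxbytes, df.bytes))/1024/1024/1024, 2) as max_gb
       from dba_data_files df
       left join (select tablespace_name, sum(bytes) as free_bytes
                    from dba_free_space
                   group by tablespace_name) fs
         on fs.tablespace_name = df.tablespace_name
      group by df.tablespace_name, fs.free_bytes
      order by df.tablespace_name;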

oracle@exacc-cl01n1:~/ [CDBTEST1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Mon May 5 15:02:00 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> @qdbstbssize.sql

PL/SQL procedure successfully completed.


                             Nb      Extent Segment    Alloc.      Space        Max. Percent Block
Name                      files Type Mgmnt  Mgmnt    Size (GB)  Free (GB)  Size (GB)  used % size  Log Encrypt Compress
------------------------- ----- ---- ------ ------- ---------- ---------- ---------- ------- ----- --- ------- --------
SYSAUX                        1 DATA LM-SYS AUTO          5.95       4.47   32768.00     .00 8 KB  YES YES     NO
SYSTEM                        1 DATA LM-SYS MANUAL        1.95        .62   32768.00     .00 8 KB  YES YES     NO
TEMP                          1 TEMP LM-UNI MANUAL        1.00       2.31     512.00    -.26 8 KB  NO  YES     NO
UNDOTBS1                      1 UNDO LM-SYS MANUAL        1.95       1.89     512.00     .01 8 KB  YES YES     NO
USERS                         1 DATA LM-SYS AUTO          1.00        .93   32768.00     .00 8 KB  YES YES     NO
                          -----                     ---------- ---------- ----------
TOTAL                         5                          11.85      10.22   99328.00

SQL>

We can see that all is ok. Let’s check the same in the PDB.

SQL> alter session set container=PDB_001T;

Session altered.

SQL> @qdbstbssize.sql
...
...
...
SYSAUX                        1 DATA         LM-SYS AUTO         82.62      74.52   32768.00     .02 8 KB  YES YES     NO
SYSTEM                        1 DATA         LM-SYS MANUAL       24.71        .10   32768.00     .08 8 KB  YES YES     NO
TEMP                          1 TEMP         LM-UNI MANUAL      114.26     342.75   32768.00    -.70 8 KB  NO  YES     NO
UNDO                          1 UNDO         LM-SYS MANUAL      684.57     673.58   32768.00     .03 8 KB  YES YES     NO
USERS                         1 DATA         LM-SYS AUTO         20.00      19.93      20.00     .33 8 KB  YES YES     NO
                          -----                             ---------- ---------- ----------
TOTAL                      2732                                7101.04    1385.79 ##########

2732 rows selected.

SQL>

It’s all good as well. And the pdb$seed?

SQL> alter session set container=PDB$SEED;

Session altered.

SQL> @qdbstbssize.sql

PL/SQL procedure successfully completed.


                             Nb      Extent Segment    Alloc.      Space        Max. Percent Block
Name                      files Type Mgmnt  Mgmnt    Size (GB)  Free (GB)  Size (GB)  used % size  Log Encrypt Compress
------------------------- ----- ---- ------ ------- ---------- ---------- ---------- ------- ----- --- ------- --------
SYSAUX                        1 DATA LM-SYS AUTO           .59        .08   32768.00     .00 8 KB  YES NO      NO
SYSTEM                        1 DATA LM-SYS MANUAL         .59        .00   32768.00     .00 8 KB  YES NO      NO
TEMP                          1 TEMP LM-UNI MANUAL         .18        .00   32768.00     .00 8 KB  NO  NO      NO
UNDOTBS1                      1 UNDO LM-SYS MANUAL         .59        .33     512.00     .05 8 KB  YES NO      NO
                          -----                     ---------- ---------- ----------
TOTAL                         4                           1.95        .41   98816.00

SQL>

All is ok as well.

Let’s check where this SYS_LOB segment belongs: the cdb$root, the PDB, or the pdb$seed?

SQL> alter session set container=PDB$SEED;

Session altered.

SQL> select tablespace_name, table_name from dba_lobs where segment_name='SYS_LOB0000023009C00008$$' and owner='SYS';

TABLESPACE_NAME                TABLE_NAME
------------------------------ --------------------------------------------------------------------------------------------------------------------------------
SYSTEM                         REGISTRY$SQLPATCH_RU_INFO

SQL> alter session set container=cdb$root;

Session altered.

SQL> select tablespace_name, table_name from dba_lobs where segment_name='SYS_LOB0000023009C00008$$' and owner='SYS';

no rows selected

SQL> alter session set container=PDB_001T;

Session altered.

SQL> select tablespace_name, table_name from dba_lobs where segment_name='SYS_LOB0000023009C00008$$' and owner='SYS';

no rows selected

SQL>

So the LOB belongs to the pdb$seed.

From the alert log I can see:

PDB$SEED(2):Pluggable database PDB$SEED opening in read write
PDB$SEED(2):Autotune of undo retention is turned on.
PDB$SEED(2):This instance was first to open pluggable database PDB$SEED (container=2)
PDB$SEED(2):queued attach DA request 0xb2325ed8 for pdb 2, ospid 3457
2025-05-05T15:11:01.061406+02:00
Domain Action Reconfiguration started (domid 2, new da inc 19, cluster inc 2)
Instance 1 is attaching to domain 2
 Global Resource Directory partially frozen for domain action
Domain Action Reconfiguration complete (total time 0.0 secs)
2025-05-05T15:11:01.067485+02:00
PDB$SEED(2):Endian type of dictionary set to little
PDB$SEED(2):Undo initialization recovery: Parallel FPTR failed: start:1139322373 end:1139322379 diff:6 ms (0.0 seconds)
PDB$SEED(2):Undo initialization recovery: err:0 start: 1139322373 end: 1139322392 diff: 19 ms (0.0 seconds)
PDB$SEED(2):[3457] Successfully onlined Undo Tablespace 2.
PDB$SEED(2):Undo initialization online undo segments: err:0 start: 1139322392 end: 1139322638 diff: 246 ms (0.2 seconds)
PDB$SEED(2):Undo initialization finished serial:0 start:1139322373 end:1139322643 diff:270 ms (0.3 seconds)
PDB$SEED(2):Database Characterset for PDB$SEED is AL32UTF8
PDB$SEED(2):SUPLOG: Set PDB SUPLOG SGA at PDB OPEN, old 0x0, new 0x0 (no suplog)
PDB$SEED(2):Opening pdb with no Resource Manager plan active
2025-05-05T15:11:01.806446+02:00
PDB$SEED(2):joxcsys_required_dirobj_exists: directory object exists with required path /u02/app/oracle/product/19.0.0.0/dbhome_3/javavm/admin/, pid 3457 cid 2
Pluggable database PDB$SEED opened read write
2025-05-05T15:11:03.081311+02:00
QPI: opatch file present, opatch
QPI: qopiprep.bat file present
QPI: Cleaning and refreshing metadata..
2025-05-05T15:11:09.187099+02:00
QPI: Cleaning and refreshing metadata..
2025-05-05T15:11:23.765174+02:00
PDB$SEED(2):ORA-1691: unable to extend lobsegment SYS.SYS_LOB0000023009C00008$$ by 1024 in tablespace SYSTEM [PDB$SEED] (ospid 3221)
2025-05-05T15:11:24.482146+02:00
PDB$SEED(2):Pluggable database PDB$SEED closing

So we can see that, once the pdb$seed is opened in read write mode, datapatch fails because the SYS.SYS_LOB0000023009C00008$$ lob segment cannot be extended.
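
Before blaming the storage, it is worth ruling out a SYSTEM datafile that is simply not allowed to grow. A quick sketch of such a check in the pdb$seed (it only needs the pdb$seed to be open read only to query the dictionary):

SQL> alter session set container=PDB$SEED;

SQL> select file_name, round(bytes/1024/1024) as size_mb,
            autoextensible, round(maxbytes/1024/1024) as maxsize_mb
       from dba_data_files
      where tablespace_name = 'SYSTEM';

In our case the earlier tablespace report already showed a 32 TB max size for SYSTEM in the pdb$seed, so autoextend itself was not the limiting factor.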

Let’s look into the ASM to see if there is enough disk space.

ASMCMD> lsdg
State    Type  Rebal  Sector  Logical_Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH  N         512             512   4096  4194304   49072128      56708                0           18902              0             Y  DATAC4/
MOUNTED  HIGH  N         512             512   4096  4194304   16356864   13063776                0         4354592              0             N  RECOC4/
MOUNTED  HIGH  N         512             512   4096  4194304  163568640  163563936                0        54521312              0             N  SPRC4/
ASMCMD>

There is not a lot of usable space left in the DATA Disk Group, certainly not enough, but still some. Let’s try to open the pdb$seed in read write mode and create a table.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO

SQL> shutdown immediate
Pluggable Database closed.

SQL> alter pluggable database open read write;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ WRITE NO

SQL> create table xxx (n number) tablespace system;

Table created.

SQL> insert into xxx values (10);

1 row created.

SQL> commit;

Commit complete.

SQL> drop table xxx purge;

Table dropped.

This is possible. Let’s try to resize the SYSTEM data file.

SQL> alter database datafile 2 resize 2048M;
alter database datafile 2 resize 2048M
*
ERROR at line 1:
ORA-01237: cannot extend datafile 2
ORA-01110: data file 2: '+DATAC4/CDBTEST_CHZ2/165C22D4CFFCA759E0638A534664B46C/DATAFILE/system.615.1181302705'
ORA-17505: ksfdrsz:1 Failed to resize file to size 262144 blocks
ORA-15041: diskgroup "DATAC4" space exhausted

This is not possible, and it confirms that the DATA Disk Group does not have enough available space.

I will first reopen the pdb$seed in read only mode.

SQL> shutdown immediate
Pluggable Database closed.

SQL> alter pluggable database open read only;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
SQL>

The following MOS note explains the problem, namely one or more disks having 0 free MB:

Datapatch failed with ORA-01691: unable to extend lob segment on MGMTDB (Doc ID 2352895.1)

Let’s check, connecting to the +ASM instance.

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
+ASM1

SQL> select group_number, name from v$asm_diskgroup;

GROUP_NUMBER NAME
------------ ------------------------------
           1 DATAC4
           2 RECOC4
           3 SPRC4

SQL> select DISK_NUMBER, FREE_MB from v$asm_disk where group_number=1 and FREE_MB=0;

DISK_NUMBER    FREE_MB
----------- ----------
         48          0

SQL> select count(*) from v$asm_disk where group_number=1 and FREE_MB <> 0;

  COUNT(*)
----------
        95

SQL>

Yes, this is exactly the problem! Disk number 48 has 0 free MB.
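
To see how unevenly the free space was spread over the disks of that disk group, a simple aggregate on v$asm_disk gives a quick overview (just a sketch; group_number 1 corresponds to DATAC4 here):

SQL> select count(*) as disks,
            min(free_mb) as min_free_mb,
            max(free_mb) as max_free_mb,
            round(avg(free_mb)) as avg_free_mb
       from v$asm_disk
      where group_number = 1;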

Resolution

Let’s resolve the problem.

Through the OCI console, I added 1 TB to the ASM storage. The usable space of the disk groups then looked as follows:

ASMCMD> lsdg
State    Type  Rebal  Sector  Logical_Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH  Y         512             512   4096  4194304   50959872    1932164                0          644054              0             Y  DATAC4/
MOUNTED  HIGH  N         512             512   4096  4194304   16356864   13058556                0         4352852              0             N  RECOC4/
MOUNTED  HIGH  N         512             512   4096  4194304  163568640  163563936                0        54521312              0             N  SPRC4/
ASMCMD>

I checked to ensure there are no more disks with 0 free MB.

[grid@exacc-cl01n1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems on Mon May 5 16:28:11 2025
Version 23.7.0.25.01

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 23ai Enterprise Edition Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems
Version 23.7.0.25.01

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
+ASM1

SQL> select DISK_NUMBER, FREE_MB from v$asm_disk where group_number=1 and FREE_MB=0;

no rows selected

SQL> select count(*) from v$asm_disk where group_number=1 and FREE_MB <> 0;

  COUNT(*)
----------
        96

SQL>

I resumed the dbaascli move operation, which this time completed successfully.

[root@exacc-cl01n1 ~]# dbaascli database move --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_3 --dbname CDBTEST --sessionID 15920 --resume
DBAAS CLI version 25.1.1.0.0
Executing command database move --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_3 --dbname CDBTEST --sessionID 15920 --resume
Job id: 7e59d18a-ca1b-4808-a927-c3ce9131b968
Session log: /var/opt/oracle/log/CDBTEST/database/move/dbaastools_2025-05-05_05-52-50-PM_350983.log
Loading PILOT...
Session ID of the current execution is: 15932
Log file location: /var/opt/oracle/log/CDBTEST/database/move/pilot_2025-05-05_05-52-54-PM_351354
-----------------
Running initialization job
Completed initialization job
-----------------
Acquiring write lock: cdbtest
Acquiring read lock: _u02_app_oracle_product_19.0.0.0_dbhome_1
Acquiring read lock: _u02_app_oracle_product_19.0.0.0_dbhome_3
Running datapatch_and_recompile_invalid_objects job
Datapatch execution on database 'CDBTEST' is in progress
Datapatch execution on database 'CDBTEST' is complete
Recompilation of invalid objects on database 'CDBTEST' is in progress
Recompilation of invalid objects on database 'CDBTEST' is complete
Completed datapatch_and_recompile_invalid_objects job
-----------------
Running update_pdb_status job
Completed update_pdb_status job
-----------------
Running post_patch_lock_manager job
Completed post_patch_lock_manager job
Releasing lock: cdbtest
Releasing lock: _u02_app_oracle_product_19.0.0.0_dbhome_1
Releasing lock: _u02_app_oracle_product_19.0.0.0_dbhome_3
-----------------
Running invoke_backup_asst job
Completed invoke_backup_asst job
-----------------
Running post_move_validation job
Completed post_move_validation job
-----------------
Running generate_dbsystem_details job
Acquiring native write lock: global_dbsystem_details_generation
Releasing native lock: global_dbsystem_details_generation
Completed generate_dbsystem_details job

dbaascli execution completed
You have new mail in /var/spool/mail/root

And I checked the database patch versions for the cdb$root and the PDB.

oracle@exacc-cl01n1:~/ [rdbms1900] CDBTEST1
2025-05-05_18-01-21::DMK_ENV.pm::read_oratab            ::INFO ==> changed ORACLE_HOME found in oratab: +ASM1 - old=/u01/app/19.0.0.0/grid, new=/u02/app/23.0.0.0/gridhome_1

 **********************************
 INSTANCE_NAME   : CDBTEST1
 DB_NAME         : CDBTEST
 DB_UNIQUE_NAME  : CDBTEST_CHZ2
 STATUS          : OPEN READ WRITE
 LOG_MODE        : ARCHIVELOG
 USERS/SESSIONS  : 2/7
 DATABASE_ROLE   : PRIMARY
 FLASHBACK_ON    : YES
 FORCE_LOGGING   : YES
 VERSION         : 19.26.0.0.0
 CDB_ENABLED     : YES
 PDBs            : PDB_001T  PDB$SEED
 **********************************

 PDB color: pdbname=mount, pdbname=open read-write, pdbname=open read-only
 Statustime: 2025-05-05 18:01:22

oracle@exacc-cl01n1:~/ [CDBTEST1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Mon May 5 18:01:25 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> set lines 300
SQL> set tab off
SQL> set pages 500

SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS';

Session altered.

SQL> col action_time for a30
SQL> select action_time, patch_id, source_version, target_version, status, description from dba_registry_sqlpatch;

ACTION_TIME                      PATCH_ID SOURCE_VERSION  TARGET_VERSION  STATUS                    DESCRIPTION
------------------------------ ---------- --------------- --------------- ------------------------- ----------------------------------------------------------------------------------------------------
18-APR-24 10.08.56.234640 AM     36199232 19.1.0.0.0      19.23.0.0.0     SUCCESS                   OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232)
18-APR-24 10.19.15.621942 AM     36233263 19.1.0.0.0      19.23.0.0.0     SUCCESS                   Database Release Update : 19.23.0.0.240416 (36233263)
02-OCT-24 11.46.11.005940 AM     26749785 19.23.0.0.0     19.23.0.0.0     SUCCESS                   PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK
02-OCT-24 11.46.13.343508 AM     27605010 19.23.0.0.0     19.23.0.0.0     SUCCESS                   DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999
02-OCT-24 11.47.34.300679 AM     36420641 19.23.0.0.0     19.23.0.0.0     SUCCESS                   DATAPUMP BUNDLE PATCH 19.23.0.0.0
05-MAY-25 05.27.25.226657 PM     36199232 19.23.0.0.0     19.26.0.0.0     SUCCESS                   OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232)
05-MAY-25 05.27.26.115643 PM     37102264 19.23.0.0.0     19.26.0.0.0     SUCCESS                   OJVM RELEASE UPDATE: 19.26.0.0.250121 (37102264)
05-MAY-25 05.27.26.697687 PM     26749785 19.23.0.0.0     19.26.0.0.0     SUCCESS                   PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK
05-MAY-25 05.27.28.734483 PM     27605010 19.23.0.0.0     19.26.0.0.0     SUCCESS                   DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999
05-MAY-25 05.27.28.839820 PM     36420641 19.23.0.0.0     19.26.0.0.0     SUCCESS                   DATAPUMP BUNDLE PATCH 19.23.0.0.0
05-MAY-25 05.28.35.890001 PM     37260974 19.23.0.0.0     19.26.0.0.0     SUCCESS                   Database Release Update : 19.26.0.0.250121 (37260974)
05-MAY-25 05.28.44.283893 PM     26749785 19.23.0.0.0     19.26.0.0.0     SUCCESS                   PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK
05-MAY-25 05.28.46.272241 PM     27605010 19.23.0.0.0     19.26.0.0.0     SUCCESS                   DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999
05-MAY-25 05.29.22.101438 PM     37470729 19.23.0.0.0     19.26.0.0.0     SUCCESS                   DATAPUMP BUNDLE PATCH 19.26.0.0.0

14 rows selected.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 PDB_001T                       READ WRITE NO

SQL> alter session set container=PDB_001T;

Session altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         4 PDB_001T                       READ WRITE NO

SQL> select action_time, patch_id, source_version, target_version, status, description from dba_registry_sqlpatch;

ACTION_TIME                      PATCH_ID SOURCE_VERSI TARGET_VERSI STATUS               DESCRIPTION
------------------------------ ---------- ------------ ------------ -------------------- ----------------------------------------------------------------------------------------------------
19-JUN-21 09.56.16.648813 AM     32067171 19.1.0.0.0   19.1.0.0.0   SUCCESS              OJVM RELEASE UPDATE: 19.10.0.0.210119 (32067171)
19-JUN-21 09.56.16.641919 AM     32218454 19.1.0.0.0   19.10.0.0.0  SUCCESS              Database Release Update : 19.10.0.0.210119 (32218454)
19-JUN-21 09.56.17.529234 AM     28318139 19.1.0.0.0   19.1.0.0.0   SUCCESS              ORA-31003 ERROR WHEN IMPORTING FULL DATABASE IN PARALLEL
19-JUN-21 09.56.18.407598 AM     28555193 19.1.0.0.0   19.1.0.0.0   WITH ERRORS          DBMS_METADATA.GET_DDL CAPTURE INCORRECT STORAGE OPTIONS OF THE XML COLUMN ON GTT
19-JUN-21 10.13.13.165620 AM     28555193 19.10.0.0.0  19.10.0.0.0  SUCCESS              DBMS_METADATA.GET_DDL CAPTURE INCORRECT STORAGE OPTIONS OF THE XML COLUMN ON GTT
06-MAR-22 10.57.35.074017 AM     32067171 19.13.0.0.0  19.13.0.0.0  SUCCESS              OJVM RELEASE UPDATE: 19.10.0.0.210119 (32067171)
06-MAR-22 10.57.39.351946 AM     33192694 19.10.0.0.0  19.10.0.0.0  SUCCESS              OJVM RELEASE UPDATE: 19.13.0.0.211019 (33192694)
06-MAR-22 10.57.36.095921 AM     28318139 19.13.0.0.0  19.13.0.0.0  SUCCESS              ORA-31003 ERROR WHEN IMPORTING FULL DATABASE IN PARALLEL
06-MAR-22 10.57.37.111786 AM     28555193 19.13.0.0.0  19.13.0.0.0  SUCCESS              DBMS_METADATA.GET_DDL CAPTURE INCORRECT STORAGE OPTIONS OF THE XML COLUMN ON GTT
06-MAR-22 10.57.39.348199 AM     33192793 19.10.0.0.0  19.13.0.0.0  SUCCESS              Database Release Update : 19.13.0.0.211019 (33192793)
06-MAR-22 10.57.40.368792 AM     30944402 19.10.0.0.0  19.10.0.0.0  SUCCESS              SELECT FROM MASTER TABLE RUNS SLOW DURING TABLE_DATA EXPORT WHEN THERE ARE MANY SUBPARTITIONS
06-MAR-22 10.57.41.384551 AM     33522539 19.10.0.0.0  19.10.0.0.0  SUCCESS              MERGE ON DATABASE RU 19.13.0.0.0 OF 33280027
19-AUG-23 09.58.11.262033 AM     33192694 19.18.0.0.0  19.18.0.0.0  SUCCESS              OJVM RELEASE UPDATE: 19.13.0.0.211019 (33192694)
19-AUG-23 09.58.13.172768 AM     34786990 19.13.0.0.0  19.13.0.0.0  SUCCESS              OJVM RELEASE UPDATE: 19.18.0.0.230117 (34786990)
19-AUG-23 09.58.11.269998 AM     30944402 19.18.0.0.0  19.18.0.0.0  SUCCESS              SELECT FROM MASTER TABLE RUNS SLOW DURING TABLE_DATA EXPORT WHEN THERE ARE MANY SUBPARTITIONS
19-AUG-23 09.58.11.271219 AM     33522539 19.18.0.0.0  19.18.0.0.0  SUCCESS              MERGE ON DATABASE RU 19.13.0.0.0 OF 33280027
19-AUG-23 09.58.13.169832 AM     34765931 19.13.0.0.0  19.18.0.0.0  SUCCESS              DATABASE RELEASE UPDATE : 19.18.0.0.230117 (REL-JAN230131) (34765931)
19-AUG-23 09.58.13.340768 AM     26749785 19.13.0.0.0  19.13.0.0.0  SUCCESS              PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK
19-AUG-23 09.58.13.346063 AM     27605010 19.13.0.0.0  19.13.0.0.0  SUCCESS              DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999
19-AUG-23 09.58.14.253669 AM     34972375 19.13.0.0.0  19.13.0.0.0  SUCCESS              DATAPUMP BUNDLE PATCH 19.18.0.0.0
17-AUG-24 08.59.17.242116 AM     34786990 19.18.0.0.0  19.23.0.0.0  SUCCESS              OJVM RELEASE UPDATE: 19.18.0.0.230117 (34786990)
17-AUG-24 08.59.17.362926 AM     36199232 19.18.0.0.0  19.23.0.0.0  SUCCESS              OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232)
17-AUG-24 08.59.18.040319 AM     26749785 19.18.0.0.0  19.23.0.0.0  SUCCESS              PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK
17-AUG-24 08.59.18.971587 AM     27605010 19.18.0.0.0  19.23.0.0.0  SUCCESS              DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999
17-AUG-24 08.59.19.038902 AM     34972375 19.18.0.0.0  19.23.0.0.0  SUCCESS              DATAPUMP BUNDLE PATCH 19.18.0.0.0
17-AUG-24 08.59.50.930598 AM     36233263 19.18.0.0.0  19.23.0.0.0  SUCCESS              Database Release Update : 19.23.0.0.240416 (36233263)
17-AUG-24 08.59.56.646477 AM     26749785 19.18.0.0.0  19.23.0.0.0  SUCCESS              PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK
17-AUG-24 08.59.57.504574 AM     27605010 19.18.0.0.0  19.23.0.0.0  SUCCESS              DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999
17-AUG-24 09.00.21.159959 AM     36420641 19.18.0.0.0  19.23.0.0.0  SUCCESS              DATAPUMP BUNDLE PATCH 19.23.0.0.0
05-MAY-25 05.31.01.733241 PM     36199232 19.23.0.0.0  19.26.0.0.0  SUCCESS              OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232)
05-MAY-25 05.31.03.413696 PM     37102264 19.23.0.0.0  19.26.0.0.0  SUCCESS              OJVM RELEASE UPDATE: 19.26.0.0.250121 (37102264)
05-MAY-25 05.31.04.821299 PM     26749785 19.23.0.0.0  19.26.0.0.0  SUCCESS              PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK
05-MAY-25 05.31.06.205197 PM     27605010 19.23.0.0.0  19.26.0.0.0  SUCCESS              DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999
05-MAY-25 05.31.06.379357 PM     36420641 19.23.0.0.0  19.26.0.0.0  SUCCESS              DATAPUMP BUNDLE PATCH 19.23.0.0.0
05-MAY-25 05.32.25.947558 PM     37260974 19.23.0.0.0  19.26.0.0.0  SUCCESS              Database Release Update : 19.26.0.0.250121 (37260974)
05-MAY-25 05.32.32.616612 PM     26749785 19.23.0.0.0  19.26.0.0.0  SUCCESS              PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK
05-MAY-25 05.32.33.746247 PM     27605010 19.23.0.0.0  19.26.0.0.0  SUCCESS              DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999
05-MAY-25 05.35.48.324333 PM     37470729 19.23.0.0.0  19.26.0.0.0  SUCCESS              DATAPUMP BUNDLE PATCH 19.26.0.0.0

38 rows selected.

And finally I checked PDB violations.

oracle@exacc-cl01n1:/u02/app/oracle/product/19.0.0.0/dbhome_3/OPatch/ [CDBTEST1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Mon May 5 18:10:56 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 PDB_001T                       READ WRITE NO

SQL> alter session set container=PDB_001T;

Session altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         4 PDB_001T                       READ WRITE NO

SQL> col name for a15
SQL> col message for a120
SQL> col cause for a50
SQL> set lines 300
SQL> col cause for a20
SQL> col type for a10

SQL> select name,cause,type,message,status from PDB_PLUG_IN_VIOLATIONS where status <> 'RESOLVED';

NAME            CAUSE                TYPE       MESSAGE                                                                                                                  STATUS
--------------- -------------------- ---------- ------------------------------------------------------------------------------------------------------------------------ ---------------------------
PDB_001T  OPTION               WARNING    Database option APS mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                              PENDING
PDB_001T  OPTION               WARNING    Database option CONTEXT mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                          PENDING
PDB_001T  OPTION               WARNING    Database option DV mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                               PENDING
PDB_001T  OPTION               WARNING    Database option OLS mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                              PENDING
PDB_001T  OPTION               WARNING    Database option ORDIM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                            PENDING
PDB_001T  OPTION               WARNING    Database option OWM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                              PENDING
PDB_001T  OPTION               WARNING    Database option SDO mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                              PENDING
PDB_001T  OPTION               WARNING    Database option XOQ mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                              PENDING

8 rows selected.

SQL>

Yes, that’s it, all is ok now.

To wrap up…

The database move to a higher-version dbhome on the ExaCC failed at the datapatch step, due to the fact that one of the ASM disks had 0 free MB. Adding space to the ASM storage resolved the issue.

The article dbaascli database move failing with ORA-01691 on ExaCC appeared first on dbi Blog.

What brings the M-Files and Microsoft 365 Strategic Partnership

Sun, 2025-07-27 04:11
Official image of the M-Files and Microsoft 365 strategic partnership

On the 1st of July 2025, M-Files and Microsoft announced their strategic partnership with the headline “M-Files and Microsoft Announce Strategic Partnership Revolutionising Document Management”.
I was wondering what this will mean for us as an M-Files partner and for our customers. What are the new features and benefits? In this blog article I will share the information and my perspective on this fantastic announcement.

One of the hot topics is the fundamental shift in how enterprise content is managed within the M-Files and Microsoft ecosystems.

Main Features

Thanks to the strategic partnership between M-Files and Microsoft, M-Files will now be able to use features such as the ones below. This unlocks the native Microsoft 365 capabilities for M-Files AI-curated content.

  • Microsoft 365 and its API-only service
  • SharePoint Embedded
  • Document co-authoring

This powerful solution is native to Microsoft 365, offering the best of both worlds.

M-Files strengths:

  • M-Files automation, including its strengths in metadata
  • Use one client to access data and features to make work easier

Microsoft’s power:

  • Copilot capabilities
  • Collaboration tools
  • Security integration

Benefits we get out of the strategic partnership
  • Generative AI results based on the M-Files content
  • Be assured that the results are accurate, relevant and can be trusted
  • Ground the fundamental base for the AI journey
  • Native document co-authoring with Microsoft applications
  • Native collaboration with the trusted Microsoft tools
  • M-Files automated content governance and permissions, while secured content remains within the Microsoft security boundary
  • Use of Microsoft Purview to apply compliance and governance policies

Requirements and availability of the solution

Requirements for M-Files and Microsoft 365:

  • M-Files Cloud
  • M-Files Business platform
  • Microsoft 365 subscription

Desktop co-authoring is already enabled for M-Files Cloud customers with a Business Platform subscription. Detailed instructions on how to unlock all the benefits and capabilities will be available in August 2025. Stay tuned — I will share further details as soon as they are available!

Conclusion

As has been noted in previous blogs, the direction of travel is towards incorporating AI in the optimum way, and towards ensuring optimal security boundaries.

This announcement is important because it provides a centralised, scalable environment for managing and supporting your business requirements. Topics include effortlessly finding documents, managing the entire document lifecycle, effective collaboration, gaining actionable insights faster, and ensuring robust governance. Using AI to achieve all this will boost productivity and reduce risk for your business.

If you would like to hear the perspective of a Microsoft architect on this strategic partnership, you can watch this video.

I hope this blog gives you a good idea of the strategic partnership between M-Files and Microsoft. Keep an eye out for more blog posts about M-Files on this channel, and be sure to check out the other blogs that are around the M-Files topic.

Do not hesitate to get in touch with us if you have any questions, or contact me directly if you prefer.

The article What brings the M-Files and Microsoft 365 Strategic Partnership appeared first on dbi Blog.

Dctm – Invalid/Missing XSRF token on D2

Wed, 2025-07-23 13:25

During an upgrade project to Documentum 23.4, I once again faced an interesting behavior from D2. The Documentum Server upgrade went smoothly for all components like the Connection Broker, Repositories, D2, etc… The OTDS was also set up and available/working properly without too much trouble. However, the deployment of the D2 war file was a bit of another story.

As usual, we try to make sure that Documentum and all its components are set up as securely as possible. From a WebServer point of view, that includes a bunch of Best Practices that we add into our deployments / custom images (when using containers), and D2 is no exception. One such thing is, for example, to set up the Tomcat and D2 application to work only with cookies that have the “secure” and “httpOnly” flags. That is done in a few locations, but in recent versions of D2, there are additional parameters to help control this kind of behavior inside the ESAPI.properties file.

Note: there is often confusion about the “httpOnly” flag for cookies, so I think a quick reminder wouldn’t hurt. The “secure” flag means that the cookie can only be sent through HTTPS (except when using localhost), so it’s much harder to get access to it. The “httpOnly” one, contrary to its name, doesn’t mean that the cookie is only for HTTP communications; it means that it cannot be accessed by client-side scripts like JavaScript. Therefore, sensitive cookies should have both flags, so that they go through the network securely and, even once they arrive on the target client’s browser, their access is protected.

Therefore, as a good practice, I went ahead and configured D2 as securely as I could, even before the first deployment, and that included these 4 parameters:

[tomcat@d2-0 war_prep]$ grep -B1 -E "ForceHttpOnly|ForceSecure" WEB-INF/classes/ESAPI.properties
# Force flags on cookies, if you use HttpUtilities to set cookies
HttpUtilities.ForceHttpOnlySession=true
HttpUtilities.ForceSecureSession=true
HttpUtilities.ForceHttpOnlyCookies=true
# Whlie doing a cross site access through https make the below flag to true 
HttpUtilities.ForceSecureCookies=true
[tomcat@d2-0 war_prep]$

Once my D2 WAR file was ready and configured, I tried to deploy it on Tomcat. No errors/issues during the deployment/startup of D2. However, accessing the D2 UI ended up with the pretty, infinitely spinning D2 loading logo. You have probably all seen that happen at some point:

Nothing in the D2 logs (generated through the logback.xml or log4j2.properties), but in the Tomcat logs, I could see the stack trace related to that issue when I accessed the URL a few minutes after Tomcat was fully up and running:

2025-07-08 14:25:56,379 UTC INFO [main] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive [$CATALINA_HOME/webapps/D2/D2.war] has finished in [57,704] ms
2025-07-08 14:25:56,382 UTC INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["https-jsse-nio-8080"]
2025-07-08 14:25:56,400 UTC INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [57846] milliseconds
2025-07-08 14:29:36,966 UTC SEVERE [https-jsse-nio-8080-exec-42] org.apache.catalina.core.ApplicationContext.log Key[type=com.emc.x3.server.services.labels.RpcLabelServiceImpl, annotation=[none]]: An RpcTokenException was thrown while processing this call.
	com.google.gwt.user.client.rpc.RpcTokenException: Invalid RPC token (Missing XSRF token: not on request, client IP=xxx.xxx.xxx.xxx)
		at com.emc.x3.server.D2XsrfProtectedServiceServlet.validateXsrfToken(D2XsrfProtectedServiceServlet.java:33)
		at com.google.gwt.user.server.rpc.AbstractXsrfProtectedServiceServlet.onAfterRequestDeserialized(AbstractXsrfProtectedServiceServlet.java:66)
		at com.emc.x3.server.GuiceRemoteServiceServlet.processCall(GuiceRemoteServiceServlet.java:120)
		at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:373)
		at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
		at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:590)
		at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:658)
		at com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:290)
		at com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:280)
		at com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:184)
		at com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:89)
		at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:85)
		at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61)
		at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)
		at com.emc.x3.portal.server.filters.authc.X3OTDSAuthenticationFilter.executeChain(X3OTDSAuthenticationFilter.java:1106)
		at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)
		at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:154)
		at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)
		at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)
		at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)
		at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:154)
		at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)
		at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:458)
		at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:373)
		at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90)
		at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83)
		at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:387)
		at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:370)
		at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:154)
		at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
		at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:121)
		at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:133)
		at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
		at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
		at com.emc.x3.portal.server.filters.X3SessionTimeoutFilter.doFilter(X3SessionTimeoutFilter.java:52)
		at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
		at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
		at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167)
		at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90)
		at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483)
		at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:115)
		at org.apache.catalina.valves.StuckThreadDetectionValve.invoke(StuckThreadDetectionValve.java:185)
		at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93)
		at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:663)
		at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:731)
		at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
		at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344)
		at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:397)
		at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63)
		at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:905)
		at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1741)
		at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
		at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1190)
		at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
		at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63)
		at java.base/java.lang.Thread.run(Thread.java:840)

I checked that the XSRF token was indeed generated, and it appeared to be present in the request, at least as shown in the browser’s network traces (Developer Tools). It was being generated and forwarded by the browser with both the “secure” and “httpOnly” flags. So, what was the issue then? It took me a bit of time, but I could pinpoint the issue to the ESAPI.properties file, and more specifically to the 4 properties I mentioned above, which control the flags of both cookies and sessions. To be exact, I suspected the “httpOnly” flag on the cookies to be the issue, since it would hide the XSRF_TOKEN from JavaScript on the client side. Keeping the content of the exploded WAR folder the same, I tried to switch this one parameter back to false, which is the default value:

[tomcat@d2-0 ~]$ esapi_file="$CATALINA_HOME/webapps/D2/WEB-INF/classes/ESAPI.properties"
[tomcat@d2-0 ~]$
[tomcat@d2-0 ~]$ grep -B1 -E "ForceHttpOnly|ForceSecure" ${esapi_file}
# Force flags on cookies, if you use HttpUtilities to set cookies
HttpUtilities.ForceHttpOnlySession=true
HttpUtilities.ForceSecureSession=true
HttpUtilities.ForceHttpOnlyCookies=true
# Whlie doing a cross site access through https make the below flag to true
HttpUtilities.ForceSecureCookies=true
[tomcat@d2-0 ~]$
[tomcat@d2-0 ~]$ sed -i 's,\(HttpUtilities.ForceHttpOnlyCookies\)=true,\1=false,' ${esapi_file}
[tomcat@d2-0 ~]$
[tomcat@d2-0 ~]$ grep -B1 -E "ForceHttpOnly|ForceSecure" ${esapi_file}
# Force flags on cookies, if you use HttpUtilities to set cookies
HttpUtilities.ForceHttpOnlySession=true
HttpUtilities.ForceSecureSession=true
HttpUtilities.ForceHttpOnlyCookies=false
# Whlie doing a cross site access through https make the below flag to true
HttpUtilities.ForceSecureCookies=true
[tomcat@d2-0 ~]$

After a restart of Tomcat, the issue was completely gone and the login to D2 through the OTDS was working successfully… Since I could deploy D2-REST, D2-Smartview and D2-Config with all 4 ESAPI.properties parameters set to “true”, I asked OT if it was expected that only D2 has a problem with “HttpUtilities.ForceHttpOnlyCookies=true”. After a few days of exchange, I got the feedback that it’s not documented, but it’s apparently required for D2 NOT to have the “httpOnly” flag because of the XSRF_TOKEN. They will look into creating a KB article for that topic (update: it was created and it’s available here: KB0845279). If you really need to have both flags set, then you have no other choice than switching to the new UI, D2-Smartview.
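
As a side note, if you want to double-check which flags actually end up on the cookies after such a change, without going through the browser’s Developer Tools every time, a small Python sketch like the one below can do it from the command line. It assumes the requests library is available; the URL is a placeholder for your own D2 deployment, and whether any cookies are returned before login depends on your setup:

import requests
import urllib3

urllib3.disable_warnings()  # self-signed certificates are common on test systems

# placeholder URL, adjust to your own D2 deployment
resp = requests.get("https://d2.example.com:8080/D2/", verify=False)

for cookie in resp.cookies:
    flags = []
    if cookie.secure:
        flags.append("Secure")
    # the attribute name matches what the server sends, usually "HttpOnly"
    if cookie.has_nonstandard_attr("HttpOnly"):
        flags.append("HttpOnly")
    print(f"{cookie.name}: {'; '.join(flags) or 'no flags'}")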

The article Dctm – Invalid/Missing XSRF token on D2 appeared first on the dbi Blog.

Scheduling OpenSCAP reports in SUSE Multi-Linux Manager

Wed, 2025-07-23 09:44

As we’ve recently supported some customers on SUSE Multi Linux Manager, I’d like to share something which was not as easy to implement as it appeared to be at first. But first of all, what is SUSE Multi Linux Manager? It is basically a fork of Spacewalk, which was also used as the upstream project by the Red Hat Satellite product. As Spacewalk was discontinued and the project on GitHub was archived, some people decided to fork it and started a new project called Uyuni, and this is now the upstream project for SUSE Multi Linux Manager. One of the great things about Uyuni and SUSE Multi Linux Manager is that they support various Linux distributions such as SUSE and openSUSE distributions, Red Hat, Rocky, Oracle and Alma Linux, Debian, Ubuntu, and also ancient versions of CentOS if you still depend on them.

I am not going into the setup or basic configuration, as you can already find related blogs here and more information in the documentation:

What I want to look at in this post is the automatic scheduling of OpenSCAP scans/reports. When this requirement came up, it seemed pretty easy to do, as you can easily schedule such a scan against a single system. As you can see below, I have a Red Hat 9 system registered to my SUSE Multi Linux Manager server:

What you can easily do out of the box is to manually schedule an OpenSCAP scan:

Once the scan completes, it becomes visible under the “List Scan” tab and you can browse into the details:

Quite easy to do, but still a manual action. As we wanted to have it automated, the obvious choice was to create a “Recurring Action”:

This gives you the option to create and configure a “Recurring Action”:

The issue is that there is no pre-defined “Custom State” which schedules an OpenSCAP scan:

The very same is true for “System Groups”, which you normally would use because otherwise you’d need to schedule that on every single system:

The last option seemed to be something under “Schedule” but this only gives you a list of what you already have:

At this point we were stuck and had to talk to SUSE support, which really was a great experience by the way. It turned out there is no easy, built-in way to do this. A feature request has been logged, but of course there is no guarantee that it will be implemented.

But there is a workaround; not a very beautiful one, but at least it works. SUSE Multi Linux Manager (and Uyuni, of course) comes with an API, and there is a call for triggering an OpenSCAP scan. Using this, a custom state channel can be created which in turn calls the API to trigger the scan:

The “SLS Contents” field actually contains the code (Python in this case) which talks to the API and triggers the scan:

/usr/local/bin/schedule_xccdf_scan.py:
  file.managed:
    - user: root
    - group: root
    - mode: 755
    - contents: |
        #!/usr/bin/python3
        import xmlrpc.client

        client = xmlrpc.client.ServerProxy('https://suma.dwe.local/rpc/api')
        key = client.auth.login('admin', 'xxxx')
        client.system.scap.scheduleXccdfScan(
            key,
            1000010000,
            '/usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml',
            '--profile xccdf_org.ssgproject.content_profile_cis_server_l1'
        )
        client.auth.logout(key)

schedule_xccdf_scan:
  cmd.run:
    - name: /usr/local/bin/schedule_xccdf_scan.py
    - require:
      - file: /usr/local/bin/schedule_xccdf_scan.py

I am not going into the code itself, as it should be easy to understand. The important part is the system ID in line 14. This defines the system you want the scan to run on (you can also provide an array of systems, see the API documentation linked above); a sketch of that variant follows below.
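
If hard-coding a single system ID is too static for your environment, the same idea can be extended to resolve the IDs at runtime and hand a whole list of systems to the scan call. The following is only a sketch based on the documented list/array variants of the API; the server URL and credentials are the same placeholders as above, and the name filter is just an example:

#!/usr/bin/python3
# Sketch: schedule the same XCCDF scan on several systems at once.
import xmlrpc.client

client = xmlrpc.client.ServerProxy('https://suma.dwe.local/rpc/api')
key = client.auth.login('admin', 'xxxx')

# collect the IDs of all registered systems whose profile name starts with "rhel9"
system_ids = [s['id'] for s in client.system.listSystems(key)
              if s['name'].startswith('rhel9')]

if system_ids:
    # scheduleXccdfScan also accepts an array of system IDs instead of a single one
    client.system.scap.scheduleXccdfScan(
        key,
        system_ids,
        '/usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml',
        '--profile xccdf_org.ssgproject.content_profile_cis_server_l1'
    )

client.auth.logout(key)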

As soon as you have this, you can schedule this automatically as a recurring action on either the system itself, or a group of systems in “System Groups”:

Not as easy as it could be, and the systems are still hard-coded in the Python code, but at least we have something that works. Hope that helps.

The article Scheduling OpenSCAP reports in SUSE Multi-Linux Manager appeared first on the dbi Blog.
