Feed aggregator

Stop measuring ECM success like this

Yann Neuhaus - Wed, 2026-05-13 03:15

As I’ve mentioned in some of my previous blog posts, ECM projects don’t fail because of the chosen technology. Most products can meet the necessary requirements. The choice of a solution is driven more by other aspects, such as the effort required for deployment and maintenance, the end-user interface, and the integration capabilities.

As I wrote earlier, an ECM implementation is different from other IT projects. It’s a living system!

For years, organizations have relied on the same familiar KPIs to measure success:

“We migrated one million documents.”

“Two hundred users successfully completed the training.”

And yet, users still can’t find what they need. Decisions are still slow to be made. Content is still seen as a burden rather than an asset.


It’s time to stop measuring ECM success this way.

The problem with traditional KPIs

Number of documents migrated

Migrating documents is a technical milestone, not a business outcome.
A repository full of poorly classified, duplicated, or outdated documents is not a success; it’s digital clutter on a large scale. Migration only answers the question, “Did we move data?” It completely ignores whether people can actually use it.

Even worse, focusing on volume often encourages lift-and-shift strategies that preserve old folder structures and bad habits, the very things that ECM is designed to address.

Number of trained users

Although training metrics can be reassuring, they measure exposure, not adoption.
Completing a training session does not necessarily mean that:

  • Users have changed how they work.
  • They trust the system.
  • They stopped saving content locally or emailing attachments.

In many ECM projects, users are technically “trained” yet still bypass the system because it slows them down instead of helping them.

Why these metrics miss the point

ECM isn’t just about storing documents. It’s about enabling better, faster, and safer work.

If your KPIs don’t reflect this, you may declare success while the business quietly disagrees.

So, what should we measure instead?

Measures that matter

Time-To-Find (TTF)

This is one of the most honest and revealing ECM KPIs.

In organizations that rely on folders, users:

  • Browse folders
  • Guess at locations
  • Open the wrong versions
  • Ask colleagues, “Where is the latest file?”

With a metadata-driven ECM like M-Files, content is found by what it is, not where it’s stored.

Measuring Time to Find:

  • Before ECM
  • After going live
  • And again after optimization.

This gives you a direct line from ECM value to daily productivity.
If users can’t find content faster, the ECM isn’t working, regardless of how many documents were migrated.

Decision speed

This KPI is even more powerful and strategic.

Decision speed is affected by:

  • Content availability
  • Version accuracy
  • Context (related documents, metadata, and workflow state)
  • Trust in information completeness

M-Files accelerates decision-making by:

  • Ensuring users always see the latest version
  • Automatically surfacing related content
  • Embedding documents directly into business processes
  • Applying governance without slowing people down

When ECM is implemented effectively, decision cycles shorten. Approvals happen faster, issues are resolved sooner, and risks are identified earlier.
That’s real business impact.

Why M‑Files changes the KPI conversation

Traditional ECM systems force users to adapt to them. M-Files, however, adapts to the business.

M-Files is:

  • Metadata-driven
  • Process aware
  • Contextual
  • Automation ready

It enables KPIs centered around work outcomes, not IT activities.

Instead of asking:

“How much content did we store?”

You can ask:

  • “How quickly do our teams find information?”
  • “Where are decisions still slow, and why?”
  • “Which processes would improve most with better information?”

A more effective approach to ECM success

Thinking differently isn’t that hard, and yet it makes a huge difference. You just need the courage to take a step back.

Instead of asking “How are my documents stored?”, ask yourself: “What information is available?”

Rather than “Are my users trained?”, ask “Do my users have the tools they need to take action?”

“Is the metadata understood?” matters more than “How is my folder structure organized?”

The question isn’t whether the migration is complete, but whether it sped up decision-making.

To sum things up

If your ECM success story starts and ends with numbers like documents migrated or users trained, then you’re focusing on the effort rather than the actual impact. That’s a real shame because you’re missing the point of such a project.

Modern ECM success is about:

  • Time saved
  • Decisions accelerated
  • Risk reduced
  • Work simplified

With M-Files, these outcomes are not side effects; they are the goal.

Stop measuring ECM success like an IT project. Instead, start measuring it as a business advantage!

We’re here to help you with that transition.


PostgreSQL 19: Dynamically adjust the I/O worker pool

Yann Neuhaus - Wed, 2026-05-13 00:12

When PostgreSQL 18 was released last year, one of the major features was the introduction of the asynchronous I/O subsystem. The main configuration parameter for this was (and still is) io_method, which can be “worker” (the default), “io_uring” or “sync” (the old behavior). If you opted for “worker”, the number of those workers was controlled by “io_workers”, with a default of 3. PostgreSQL 19 will most probably change how those workers are launched: instead of the static “io_workers” value, workers are launched dynamically from a predefined pool.

The configuration parameter “io_workers” is gone, and four new parameters control this instead:

postgres=# \dconfig io_*work*
 List of configuration parameters
         Parameter         | Value 
---------------------------+-------
 io_max_workers            | 8
 io_min_workers            | 2
 io_worker_idle_timeout    | 1min
 io_worker_launch_interval | 100ms
(4 rows)

“io_min_workers” (as the name implies) controls the minimum number of workers that are always running, which is two by default:

postgres@:/home/postgres/ [DEV] ps -ef | grep postgres | grep worker | grep -v grep
postgres    8564    8562  0 06:34 ?        00:00:00 postgres: pgdev: io worker 0
postgres    8565    8562  0 06:34 ?        00:00:00 postgres: pgdev: io worker 1

“io_max_workers” (again, as the name implies) controls the maximum number of worker processes that can be launched for the whole instance.

To see the dynamic startup of workers in action, let’s create a simple table containing twenty million rows:

postgres=# create table t ( a int, b text, c timestamptz );
CREATE TABLE
postgres=# insert into t select i, i::text, now() from generate_series(1,20000000) i;
INSERT 0 20000000

While watching the workers in a separate session:

postgres@:/home/postgres/ [DEV] watch "ps -ef | grep postgres | grep worker | grep -v grep"

Every 2.0s: ps -ef | grep postgres | grep worker | grep -v grep               pgbox.it.dbi-services.com: 06:52:20 AM
                                                                                                       in 0.022s (0)
postgres    8564    8562  0 06:34 ?        00:00:00 postgres: pgdev: io worker 0
postgres    8565    8562  0 06:34 ?        00:00:00 postgres: pgdev: io worker 1

… and doing a count(*) over the whole table in session one:

postgres=# select count(*) from t;
  count   
----------
 20000000
(1 row)

… you’ll notice that an additional worker (io worker 2) shows up in the second session watching the processes (you may have to play a bit with the number of rows, depending on your PostgreSQL configuration):

Every 2.0s: ps -ef | grep postgres | grep worker | grep -v grep               pgbox.it.dbi-services.com: 07:02:40 AM
                                                                                                       in 0.018s (0)
postgres    8564    8562  0 06:34 ?        00:00:02 postgres: pgdev: io worker 0
postgres    8565    8562  0 06:34 ?        00:00:00 postgres: pgdev: io worker 1
postgres   11914    8562  0 07:02 ?        00:00:00 postgres: pgdev: io worker 2

Once this additional worker has been idle for one minute, it disappears and we’re back to two worker processes:

Every 2.0s: ps -ef | grep postgres | grep worker | grep -v grep               pgbox.it.dbi-services.com: 07:04:24 AM
                                                                                                       in 0.020s (0)
postgres    8564    8562  0 06:34 ?        00:00:02 postgres: pgdev: io worker 0
postgres    8565    8562  0 06:34 ?        00:00:00 postgres: pgdev: io worker 1

This is controlled by “io_worker_idle_timeout” and the default is one minute.

The remaining configuration knob is “io_worker_launch_interval”, which is the minimum interval at which additional workers can be launched. This prevents too many workers from being launched at once.
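
As a small sketch of how these knobs could be adjusted on a test instance (assuming the parameter names stay as shown in the \dconfig output above, which may still change before the final release), the pool is resized like any other configuration parameter:

-- Minimal sketch: widen the I/O worker pool on a PostgreSQL 19 development build.
-- Depending on how these GUCs end up being classified, a reload may be enough
-- or a restart of the instance may be required.
ALTER SYSTEM SET io_min_workers = 4;
ALTER SYSTEM SET io_max_workers = 16;
ALTER SYSTEM SET io_worker_idle_timeout = '30s';
SELECT pg_reload_conf();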

This will make tuning the workers easier, compared to PostgreSQL 18. Again, thanks to all involved, the commit is here.


Increase GoldenGate 26ai Log Retention

Yann Neuhaus - Tue, 2026-05-12 01:20

GoldenGate logs are a powerful source of information when debugging or analyzing your deployments. However, some of these logs have a rather low retention period in active deployments. They might then not even be useful for debugging if you send them to your dbi consultants or to Oracle Support for analysis. So how can you increase GoldenGate log retention?

In a previous blog, I presented all the log files available in GoldenGate. Each of them has its own format and characteristics, but they have one common aspect: they can be customized.

Standard GoldenGate logging configuration files are located in the $OGG_HOME/lib/utl/logging directory.

> ll $OGG_HOME/lib/utl/logging
-rw-r-----. 1 oracle oinstall  1066 Nov 17  2018 app-adminsrvr-debug.xml
-rw-r-----. 1 oracle oinstall  1076 Jan 10  2019 app-adminsrvr-events.xml
-rw-r-----. 1 oracle oinstall  1066 Nov 17  2018 app-distsrvr-debug.xml
-rw-r-----. 1 oracle oinstall  1040 Apr 17  2017 app-extract-events.xml
-rw-r-----. 1 oracle oinstall  1066 Nov 17  2018 app-pmsrvr-debug.xml
-rw-r-----. 1 oracle oinstall  1397 Apr  1  2018 app-pmsrvr-default.xml
-rw-r-----. 1 oracle oinstall  1066 Nov 17  2018 app-recvsrvr-debug.xml
-rw-r-----. 1 oracle oinstall  1048 Jan 22  2020 app-replicat-debug509.xml
-rw-r-----. 1 oracle oinstall  1040 Apr 17  2017 app-replicat-events.xml
-rw-r-----. 1 oracle oinstall  1066 Nov 17  2018 app-ServiceManager-debug.xml
-rw-r-----. 1 oracle oinstall  2459 Jan 17  2024 app-ServiceManager-services.xml
-rw-r-----. 1 oracle oinstall  1282 Dec 19 18:44 ogg-AIService.xml
-rw-r-----. 1 oracle oinstall  4946 May 14  2020 ogg-audit.xml
-rw-r-----. 1 oracle oinstall  1582 Jun 28  2023 ogg-ConfigService.xml
-rw-r-----. 1 oracle oinstall  4487 Jan 10  2019 ogg-ggserr.xml
-rw-r-----. 1 oracle oinstall 18162 Jun 26  2020 ogg-loggers.json
-rw-r-----. 1 oracle oinstall  1095 Sep 25  2019 ogg-loggers.xml
-rw-r-----. 1 oracle oinstall  2180 Sep 11  2024 sca-default.xml
-rw-r-----. 1 oracle oinstall  1211 Jan 10  2019 sca-restapi.xml
-rw-r-----. 1 oracle oinstall  1210 Jun  6  2022 sca-stdout.xml

To modify the logging properties of any GoldenGate log file, copy one of these files into your deployment and update it. This means you can have different logging properties for each of your deployments.

Oracle GoldenGate Microservices uses a hierarchical logger framework with Log4j-style appenders, layouts, logger inheritance, and category namespaces. I will not dwell on all the configuration files in this blog, but let’s try to describe the most useful ones.

sca-restapi.xml
<?xml version="1.0"?>
<configuration>

  <!--
   /- ============================================================= -\
   !-   s c a - r e s t a p i . x m l                               -|
   !-                                                               -|
   !-   Logging control file for recording all REST API calls       -|
   !-   to an OGG deployment.                                       -|
   \- ============================================================= -/
  ! -->

  <appender  name="sca-restapi.log"  class="RollingFileAppender">
    <level  value="info"/>
    <param   name="File"             value="restapi.log"/>
    <param   name="MaxFileSize"      value="10MB"/>
    <param   name="MaxBackupIndex"   value="9"/>
    <param   name="BufferedIO"       value="false"/>
    <param   name="Append"           value="true"/>
    <layout class="PatternLayout">
      <param name="Pattern"          value="%d{%Y-%m-%d %H:%M:%S%z} %-5p|%-36.36c| %m%n"/>
    </layout>
  </appender>

  <!--
   !-   M i c r o s e r v i c e s   A r c h i t e c t u r e
  ! -->
  <logger          name="RestAPI">
    <appender-ref  name="sca-restapi.log"/>
    <level        value="info"/>
  </logger>

</configuration>

The sca-restapi.xml file is the logging configuration file for the restapi.log file.

REST API logs are the most verbose of all GoldenGate log files. In production environments where the REST API is called often, you can easily cycle through the 10 log files in a day or even less. If you have enough space, I would strongly recommend increasing the retention and/or the maximum size of a single log file. This way, you ensure that you keep enough logs for debugging and analysis.

To modify the retention, change MaxFileSize to set the maximum log file size and MaxBackupIndex to choose the number of files you want to keep (on top of the active log file).

Let’s copy the file in the deployment home (it could be any deployment, including the Service Manager home).

cd $OGG_DEPLOYMENT_HOME/etc/conf/logging
cp -p $OGG_HOME/lib/utl/logging/sca-restapi.xml .
vim sca-restapi.xml

For instance, to have 5 files of 50 MB each, edit the following lines:

    <param   name="MaxFileSize"      value="50MB"/>
    <param   name="MaxBackupIndex"   value="4"/>

Once this is done, just restart the administration service.

What about the other log files?

If you want to increase the retention or the log file size of any other log file in GoldenGate, just use the following mapping and repeat the same process. If you want to edit the Service Manager logging properties, you should also restart it.

Log file to modify → file to copy from $OGG_HOME/lib/utl/logging:

  • Most standard microservice logs (except restapi.log and ER-events.log) → sca-default.xml
  • ggserr.log → ogg-ggserr.xml
  • ER-events.log → app-extract-events.xml
  • restapi.log → sca-restapi.xml

Warning: If you want to use the same configuration across all your deployments, you could modify the standard files in $OGG_HOME/lib/utl/logging and create links from the deployments to this file. However, make sure you do not lose these changes when patching out-of-place!

Can I increase the retention to more than 10 files ?

Yes, there is no problem having more than 10 log files. Just increase the MaxBackupIndex (9, by default) to the number of log files you want, minus 1. For 20 log files, set MaxBackupIndex to 19.


PostgreSQL 19: pg_waldump can now read from archives

Yann Neuhaus - Sun, 2026-05-10 23:48

When PostgreSQL 18 introduced the ability to verify tar-based (and compressed) backups with pg_verifybackup, there was one limitation: the verification of the WAL files in the tars (or compressed files) had to be skipped (--no-parse-wal), because pg_waldump in that version of PostgreSQL is not able to cope with them (and pg_waldump is used by pg_verifybackup). This will change with PostgreSQL 19 because of this commit: “pg_waldump: Add support for reading WAL from tar archives”.

This is maybe not a feature a lot of people have been waiting for, but it makes two tasks a lot easier:

  • As mentioned above: pg_verifybackup can now read WAL from tar and compressed files and can therefore do WAL verification
  • When you have WAL in a tar or compressed file and you know what you’re looking for, you do not need to manually extract those archives before using pg_waldump

To see this in action, one can create a tar (or compressed) backup with pg_basebackup:

postgres@:/home/postgres/ [pgdev] mkdir /var/tmp/dummy
postgres@:/home/postgres/ [pgdev] pg_basebackup --checkpoint=fast --format=t --pgdata=/var/tmp/dummy
postgres@:/home/postgres/ [pgdev] ls -la /var/tmp/dummy
total 128476
drwxr-xr-x. 1 postgres postgres        66 May 11 06:36 .
drwxrwxrwt. 1 root     root           762 May 11 06:33 ..
-rw-------. 1 postgres postgres    149515 May 11 06:36 backup_manifest
-rw-------. 1 postgres postgres 114619904 May 11 06:36 base.tar
-rw-------. 1 postgres postgres  16778752 May 11 06:36 pg_wal.tar

Looking at the PostgreSQL log file while the backup is running gives us an LSN we can pass to pg_waldump:

2026-05-11 06:36:18.397 CEST - 2 - 1731 -  - @ - 0LOG:  checkpoint complete: fast force wait: wrote 2 buffers (0.0%), wrote 3 SLRU buffers; 0 WAL file(s) added, 1 removed, 0 recycled; write=0.002 s, sync=0.005 s, total=0.019 s; sync files=4, longest=0.003 s, average=0.002 s; distance=16384 kB, estimate=16384 kB; lsn=0/0D000088, redo lsn=0/0D000028

postgres@:/home/postgres/ [pgdev] pg_waldump --path=/var/tmp/dummy/pg_wal.tar -s "0/0D000088" 
rmgr: XLOG        len (rec/tot):    122/   122, tx:          0, lsn: 0/0D000088, prev 0/0D000050, desc: CHECKPOINT_ONLINE redo 0/0D000028; tli 1; prev tli 1; fpw true; wal_level replica; logical decoding false; xid 0:729; oid 16420; multi 1; offset 1; oldest xid 684 in DB 1; oldest multi 1 in DB 1; oldest/newest commit timestamp xid: 0/0; oldest running xid 729; checksums on; online
rmgr: Standby     len (rec/tot):     54/    54, tx:          0, lsn: 0/0D000108, prev 0/0D000088, desc: RUNNING_XACTS nextXid 729 latestCompletedXid 728 oldestRunningXid 729; dbid: 0
rmgr: XLOG        len (rec/tot):     34/    34, tx:          0, lsn: 0/0D000140, prev 0/0D000108, desc: BACKUP_END 0/0D000028
rmgr: XLOG        len (rec/tot):     24/    24, tx:          0, lsn: 0/0D000168, prev 0/0D000140, desc: SWITCH 
pg_waldump: error: could not find WAL "00000001000000000000000E" in archive "pg_wal.tar

This helps pg_verifybackup fully verify a backup (in previous versions you had to use “--no-parse-wal”):

postgres@:/home/postgres/ [pgdev] pg_verifybackup --progress /var/tmp/dummy/
111933/111933 kB (100%) verified
backup successfully verified

As usual, thanks to all involved.


SQL Server 2025 In-Memory: New Cleanup Features & SQLBits 2026 Insights

Yann Neuhaus - Sun, 2026-05-10 14:29

Summer is already around the corner, but it’s not too late for some spring cleaning!
If you manage SQL Server databases with In-Memory tables, you may have already tried to delete a MEMORY_OPTIMIZED_DATA file or FILEGROUP, only to find that SQL Server simply won’t let you.
This limitation has existed since the debut of In-Memory with SQL Server 2014, and the only workaround until now was to recreate the database from scratch.
With SQL Server 2025, Microsoft finally lifts this restriction. In this article, we will analyze the difference in behavior before and after this version.
To conclude, we will draw on the key points presented by Thodoris Katsimanis, DBA Team Technology Manager at Kaizen Gaming, during his session at SQLBits 2026 on In-Memory tables, in order to summarize the challenges and benefits this feature can bring to production.

The Legacy Struggle: In-Memory Limitations from 2014 to 2022

To demonstrate this difference in behavior, we will create a database under SQL Server 2022 with an In-Memory table loaded with data, and then attempt to delete the associated files and FILEGROUP:

DECLARE @DataPath NVARCHAR(512) = '<YOUR_DATA_FOLDER>';
DECLARE @LogPath  NVARCHAR(512) = '<YOUR_LOG_FOLDER>';
DECLARE @SQL      NVARCHAR(MAX);

SET @SQL = N'
CREATE DATABASE TestInMemory
ON PRIMARY (
    NAME = TestInMemory_data,
    FILENAME = ''' + @DataPath + N'TestInMemory.mdf''
),
FILEGROUP XTP_FG CONTAINS MEMORY_OPTIMIZED_DATA (
    NAME = TestInMemory_XTP,
    FILENAME = ''' + @DataPath + N'TestInMemory_XTP''
)
LOG ON (
    NAME = TestInMemory_log,
    FILENAME = ''' + @LogPath + N'TestInMemory_log.ldf''
);';

EXEC sp_executesql @SQL;

USE TestInMemory;
GO

CREATE TABLE dbo.TestTable
(
    ID    INT          NOT NULL,
    Val   NVARCHAR(50) NOT NULL,
    CONSTRAINT PK_TestTable PRIMARY KEY NONCLUSTERED HASH (ID)
        WITH (BUCKET_COUNT = 1024)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

INSERT INTO dbo.TestTable VALUES (1, 'Hello'), (2, 'World');
GO

Once our table is loaded with data (to ensure the table really exists and is not just metadata), we can delete it:

DROP TABLE dbo.TestTable;
GO

SELECT name, type_desc 
FROM sys.tables 
WHERE is_memory_optimized = 1;

The verification clearly shows that no more In-Memory objects exist. We can therefore proceed with the famous cleanup of the files linked to the table we deleted:

ALTER DATABASE TestInMemory 
REMOVE FILE TestInMemory_XTP;
GO

ALTER DATABASE TestInMemory
REMOVE FILEGROUP XTP_FG;
GO

And here is the famous error: there is no way to bypass it and sort things out.
Note: This cleanup challenge specifically affects tables using DURABILITY = SCHEMA_AND_DATA, as they are the only ones where data persists within physical files on disk.

SQL Server 2025: Breaking the In-Memory Cleanup Barrier

SQL Server 2025 does not just lift the restriction: it also introduces a new DMV, sys.dm_db_xtp_undeploy_status, which exposes the precise reason why the deletion is not yet possible.
By querying it at the same stage as our previous example, here is what it returns:

USE TestInMemory;
GO

SELECT
    deployment_state,
    deployment_state_desc,
    undeploy_lsn,
    start_of_log_lsn
FROM sys.dm_db_xtp_undeploy_status;
GO

Now we have a clear reason: the start_of_log_lsn is too old, which prevents SQL Server from releasing the FILEGROUP. To resolve this, the LSNs must be advanced. A FULL backup is first required to initialize the backup chain, followed by a LOG backup to effectively advance the position in the logs:

CHECKPOINT;
GO

BACKUP DATABASE TestInMemory TO DISK = 'NUL';
GO

BACKUP LOG TestInMemory TO DISK = 'NUL';
GO

Once the LOG backup is executed and the LSNs are sufficiently advanced, the files can finally be deleted. The sys.dm_db_xtp_undeploy_status view confirms that the XTP engine is no longer deployed and that the cleanup has been successfully performed.
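
For completeness, here is the same cleanup sequence from the earlier SQL Server 2022 attempt, which now succeeds once the LSNs have advanced far enough (a sketch based on the statements shown above; exact timing depends on your log activity):

USE TestInMemory;
GO

-- The same statements that failed on SQL Server 2022 now succeed on SQL Server 2025
ALTER DATABASE TestInMemory REMOVE FILE TestInMemory_XTP;
GO

ALTER DATABASE TestInMemory REMOVE FILEGROUP XTP_FG;
GO

-- The DMV introduced above can be queried again to confirm the undeploy state
SELECT deployment_state, deployment_state_desc
FROM sys.dm_db_xtp_undeploy_status;
GO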

SQL Server 2025 not only introduces the ability to purge empty files that were linked to an In-Memory object but also the ability to troubleshoot their deletion!

From Milliseconds to Microseconds: Thodoris Katsimanis at SQLBits 2026

To conclude this article, let’s look back at the key points covered by Thodoris Katsimanis during his session at SQLBits 2026, entitled “Revolutionizing Database Performance: Deep Dive into SQL InMemory Technology”.
His context is particularly telling: at Kaizen Gaming, SQL Server databases handle thousands of transactions per second in real time, in an environment where every millisecond has a direct impact on the user experience. It is precisely in this type of workload that In-Memory tables reveal their full potential.

Eliminating Page Contention with Latch-Free Architecture

The presentation exposed limitations of the SQL Server engine: in a high-performance system, latches on disk pages (PAGELATCH_EX) create bottlenecks that can lead to Thread Pool exhaustion. The In-Memory architecture solves this problem at its root via a latch-free structure. By relying on optimistic concurrency control and multi-versioning (MVCC), SQL Server no longer waits for locks. Each row has a Begin-Timestamp and an End-Timestamp, allowing transactions to read the valid version of the data without blocking writes.
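
As a quick, generic way to check whether a workload actually suffers from this kind of page-latch contention before reaching for In-Memory tables (not part of the original session, just a common diagnostic), the accumulated latch waits can be inspected:

-- Accumulated page-latch waits since the last instance restart (or wait-stats clear)
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGELATCH%'
ORDER BY wait_time_ms DESC;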

Maximizing Performance: The Crucial Choice between Hash and BW-Tree

The choice between a Hash index and a Nonclustered index is crucial. The Hash index is perfectly suited for Point Lookups (searches on an exact value): it points directly to the memory address via a hash function. Conversely, the Nonclustered index relies on a BW-Tree structure, which is essential for range scans and sorting, where Hash is of little use. To learn more about indexes for In-Memory tables, check Microsoft’s documentation.

The Critical Impact of BUCKET_COUNT Misconfiguration

As Thodoris points out, the success of a Hash index relies on tuning the BUCKET_COUNT. This parameter defines the number of entry points in the index. If it is too low, the system generates collision chains: multiple values end up in the same bucket, forcing the engine to scan a linked list, which degrades performance. If it is too high, it consumes memory unnecessarily. Thodoris also highlights a crucial observation: using a Nonclustered index for an equality search can consume significantly more memory than a properly sized Hash index.
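
To illustrate that sizing on a hypothetical table (not taken from the session), the general guidance is to set BUCKET_COUNT to roughly one to two times the expected number of distinct key values. A table expected to hold around one million distinct OrderID values could therefore be declared like this:

-- Hypothetical memory-optimized table combining both index types
CREATE TABLE dbo.Orders
(
    OrderID     INT       NOT NULL,
    CustomerID  INT       NOT NULL,
    OrderDate   DATETIME2 NOT NULL,
    -- Hash index for exact-match lookups on OrderID; BUCKET_COUNT sized at ~2x the
    -- expected distinct keys to keep collision chains short without wasting memory
    CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED HASH (OrderID)
        WITH (BUCKET_COUNT = 2000000),
    -- BW-Tree (nonclustered) index for range scans and ordered retrieval by date
    INDEX IX_Orders_OrderDate NONCLUSTERED (OrderDate)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);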

Final Thoughts: Embracing the In-Memory Revolution

SQL Server 2025 finally lifts a limitation that has hampered the lifecycle management of In-Memory databases for over ten years. Being able to cleanly delete associated files and FILEGROUPs, understanding why the engine blocks this operation thanks to sys.dm_db_xtp_undeploy_status, and having a clear procedure to remedy it: this is concrete progress for everyone operating this technology in production.

Thodoris Katsimanis’s session at SQLBits 2026 reminds us that the care given to maintenance and monitoring matters just as much as the initial design. In-Memory tables are not a simple performance lever to be activated and forgotten: they require a mastery of their internal mechanisms, thread management to eliminate contention, and rigorous sizing of the BUCKET_COUNT. As he summarizes: the millisecond is no longer a sufficient unit of measurement. In-Memory OLTP aims for the microsecond, and in hyper-transactional environments, that is precisely what makes the difference.


The metadata trap

Yann Neuhaus - Thu, 2026-05-07 02:30

You know, metadata is the key to a good Enterprise Content Management system.

It is supposed to make work easier, promising order instead of chaos, findability instead of frustration, control instead of clutter. In the world of document management, metadata is often treated as the solution, the moment where unstructured content becomes manageable.

In reality, in many organizations, metadata quietly becomes the problem.

Not because metadata is bad. But because too much structure, designed in the wrong way, can actively destroy productivity.

This is the metadata trap: when structure stops serving work and starts working against it.

The promise of perfect structure

Every ECM project starts the same way. A workshop room with a whiteboard. Some PowerPoint slides with boxes and arrows. Someone asks:

What metadata do we need?

At first, the answers are sensible:

  • Document type
  • Customer
  • Project
  • Status

Then the room warms up. Legal wants contract subtype. Finance wants cost center. Compliance wants retention category. Sales wants region, industry, deal size. IT suggests future-proofing “while we’re at it.”

Before long, a simple document requires:

  • 10 to 15 mandatory fields
  • Complex naming conventions
  • Conditional rules and dependencies

On paper, it’s beautiful.

In practice, it’s an additional burden on every single user, every single day.

The user “tax” nobody calculates

Although metadata only amounts to a few bytes and is therefore seemingly insignificant in the context of a global ECM project, it isn’t free.

Each required field adds:

  • A decision the user must make
  • Context they must understand
  • Time they must spend

Individually, that cost seems trivial. Five extra seconds here, ten seconds there. But multiply that by:

  • Hundreds of users
  • Thousands of documents
  • Years of daily work

What looked like “good governance” becomes a significant productivity drain.

Worse, the people who pay this tax are rarely the people who designed the metadata model. That’s why it is crucial to involve the right people at the very beginning of the project.

What really happens in the real world

When metadata becomes too heavy, users don’t become more disciplined.

They become creative.

They:

  • Select the first value in the list just to proceed
  • Copy metadata from an old document whether it fits or not
  • Create “Miscellaneous” documents whenever possible
  • Store drafts locally and upload them later (maybe)
  • Avoid the system unless absolutely forced

Although the metadata appears to be complete at this stage, it is far from relevant. The result is a highly detailed structure that lacks substance.

The illusion of control

One of the most dangerous assumptions in ECM projects is this:

If we enforce the metadata, people will use it correctly.

They won’t.

Not because they’re lazy.
Because their primary job is not data quality.

A project manager wants to move a project forward.
A lawyer wants to close a contract.
An engineer wants to solve a problem.

Metadata is secondary. If it becomes an obstacle, it will be bypassed consciously or subconsciously.

Heavy structure creates the illusion of control while eroding actual adoption.

Metadata designed for reporting vs. for work

Here’s a useful distinction:

  • Reporting metadata serves management, analytics, and compliance.
  • Operational metadata serves daily work.

The trap appears when reporting needs dominate design decisions.

Fields are added because:

  • “We might need this later”
  • “It could be useful for dashboards”
  • “Compliance asked for it”
  • “Another department uses it”

Very few fields are added because:

“This helps users get their work done faster”

This imbalance is deadly.

If metadata does not help users perform actions such as finding, reusing, automating or making decisions, it will eventually become redundant.

Metadata isn’t a magic bullet.

This may sound heretical in ECM circles, but it’s true.

Metadata itself has no value.
Metadata only becomes valuable when…

  • it is used later
  • by a real process
  • with a clear outcome

Unused metadata is just overhead.

A useful question to ask about every field is:

“What breaks if this metadata is missing or incorrect?”

If the honest answer is “nothing important”, then the field may not be necessary.

The cognitive load problem

There’s also a human factor we often ignore: cognitive load.

Every metadata field requires the user to:

  • Understand the difference between similar options
  • Interpret abstract definitions
  • Predict future use cases

This is exhausting, especially under time pressure.

When systems demand constant classification, users feel policed rather than supported.

The result?

  • Reduced satisfaction
  • Lower trust in the system
  • Gradual disengagement

And no training program can fix that.

Simpler structure, better outcomes

The most successful systems tend to share a few traits:

  • Minimal mandatory metadata
  • Strong defaults and automation
  • Metadata inferred from context whenever possible
  • Progressive disclosure (advanced fields only when needed)

They accept a hard truth:

Incomplete but accurate metadata is better than complete but meaningless metadata.

The goal isn’t perfection. The goal is usefulness at scale.

A practical rule of thumb

Here’s a very simple rule that works shockingly well:

If a user cannot explain why a metadata field is important for their work in a single sentence, then that field is hindering efficiency.

This doesn’t mean removing governance. It means aligning structure with reality.

Metadata should:

  • Reduce friction, not add it
  • Follow work, not dictate it
  • Evolve over time, not fossilize

Escaping the Metadata Trap

Avoiding the metadata trap doesn’t require radical change, but just discipline.

  • Start with the minimum viable structure
  • Observe real usage, not design intent
  • Remove fields that don’t pull their weight
  • Treat metadata models as living systems

Most importantly, listen to users. Not in requirements workshops, but in their daily behavior.

They will always tell you when structure has crossed the line.

Life is full of compromises, and so is the life of an ECM! Structure is powerful. Metadata is necessary. Governance matters. But when structure becomes heavier than the work it supports, productivity collapses quietly and steadily.

The structure should serve the work, not the other way around.

That’s the difference between a system people tolerate and a system they actually use.

Modern ECM systems, such as M-Files, can greatly improve the user experience. There are various mechanisms for doing so:

  • Metadata discovery suggests values for you
  • Automatic values apply basic rules (concatenation, automatic numbering, …)
  • Background calculations perform more complex actions
  • A dynamic user interface that displays or hides properties depending on the lifecycle state and/or the user profile

These capabilities enable users to work efficiently without wasting time filling out endless forms. Meanwhile, governance keeps everything under control, and management continues to receive relevant analytics data.

As always, dbi services (and I) are here to guide you through this complex yet essential process.


Deployment removal failed in GoldenGate Configuration Assistant (INS-85038)

Yann Neuhaus - Thu, 2026-05-07 01:55

Recently, I wrote about deleting a GoldenGate deployment with the REST API, and when investigating this issue, I remembered a limitation of the configuration assistant, which I wanted to talk about. For more information about deployment removal with the configuration assistant, you can read a blog I wrote on the topic (for 23ai, but nothing changed in that regard in 26ai).

Let’s now talk about the INS-85038 error, which is rather generic. You will definitely need more details to investigate, and this blog does not cover all possibilities. Still, I will try to give you solutions, including one that should work in most cases.

As a reminder, there are three steps when deleting a GoldenGate deployment with oggca.sh :

  • Verify the deployment credentials
  • Stop the deployment services
  • Unregister the deployment in the Service Manager

In my case, the problem I noticed is that the configuration assistant waits for the deployment to be stopped. However, it fails after some time with the following error details:

Log of this session available at: /u01/app/oraInventory/logs/OGGCAConfigActions2026-04-22_12-51-46PM
Setup completed with overall status as Succeeded
Setup completed with overall status as Failed
Verification failed. Expected value: (NOT) 'running', actual value: 'running'
Verification (0) failed for property 'response/status'.
Verification failed for REST call to '/services/v2/deployments/ogg_test_blog'
Results for "Stop the deployment":
..."Retrieving the 'ogg_test_blog' deployment details.": SUCCEEDED
..."Stop the 'ogg_test_blog' deployment.": SUCCEEDED
..."Verify the 'ogg_test_blog' deployment is stopped": FAILED
(1) Errors ocurred when trying to stop deployment. Make sure Service Manager is running. Check Service Manager log files for more details.

However, when debugging this issue, I observed a strange behavior: while the configuration assistant complains that the deployment is still running, the deployment was actually stopped when I checked its status a few seconds later.

I decided to analyze the restapi.log file of my deployment. What I discovered was that the configuration assistant was checking the status of the deployment every ten seconds for two minutes. After this period, it fails with the abovementioned error.

What to do after an INS-85038 error?

If you still want to delete this deployment, you have two options:

  • Deleting the deployment with the REST API, with the method described in this blog, using the dedicated endpoint (a rough sketch follows after this list).
  • Restarting the deployment and re-running the configuration assistant. But this time, if you get the same error as before, remember to wait. Once the deployment is completely stopped, click on Skip in the configuration assistant. The assistant will follow with the next step, unregistering the deployment from the Service Manager.
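
A rough sketch of that REST call, assuming the dedicated endpoint is the deployment resource path that also appears in the log output above (refer to the linked blog for the exact procedure and prerequisites, such as the deployment being stopped first):

# Hypothetical host/port placeholders; authenticate as a Service Manager administrator
curl -s -u oggadmin -X DELETE \
     "https://<service-manager-host>:<port>/services/v2/deployments/ogg_test_blog"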

To summarize, if you get an INS-85038 error in the configuration assistant, try to remove the deployment with the REST API. But if the reason for this error is just a deployment that is not stopping soon enough, just use the Skip button to continue removing the deployment when it is finally stopped.


Large Table Extraction to JSON with dots.ocr — No Vision LLM Hallucinations

Andrejus Baranovski - Mon, 2026-05-04 13:35
Sparrow now supports a dedicated table mode for extracting large, complex tables into structured JSON — without Vision LLM hallucinations. 

Vision LLMs struggle with dense tabular data: they hallucinate values, misalign rows, and lose precision at scale. Sparrow's table mode solves this by using dots.ocr to capture the full table structure as HTML, then applying a generic Sparrow template to convert that HTML into clean, structured JSON. 

 

EM-90000 SSL Error With GoldenGate Targets in OEM

Yann Neuhaus - Mon, 2026-05-04 01:12

In this blog, I will explain what needs to be done when registering GoldenGate targets behind an NGINX reverse proxy in Enterprise Manager. More specifically, we will see how to avoid the EM-90000 error related to an SSLHandshakeException.

If you are upgrading to GoldenGate 26ai and migrating from Classic to Microservices Architecture, you must re-discover GoldenGate targets in the Enterprise Manager. The discovery module settings vary from one setup to another, but for a GoldenGate deployment exposed via an NGINX reverse proxy, you should set up the discovery module with the following:

While this could be enough in some GoldenGate setups, it will fail if the certificate chain is not trusted by the OEM agent. In fact, if you run a discovery of your host with the settings mentioned above, no target will be discovered. This is especially tricky, since the Enterprise Manager will tell you “Discover Now – Completed Successfully“.

Open the ogg_so_logs.log.0 file in the agent_inst/sysman/emd directory of your target agent. You will see that the discovery has failed with the following error: INFO: Exception occured when tried with SSL deployment. SSLHandshakeException.

oracle@vmogg:~ [emagent] vim ogg_so_logs.log.0
Apr 19, 2026 8:01:32 AM com.oracle.sysman.goldengate.discovery.GoldenGateDiscovery createDiscovery
INFO: Discovery : Discovering Oracle Goldengate Instances . Discovery parameters values are , Port=443, UserName=ogg, HostName=vmogg, OGG Mode=Microservices, EMStateDir=/u01/app/oracle/agent_24ai/agent_inst
Apr 19, 2026 8:01:32 AM com.oracle.sysman.goldengate.discovery.GoldenGateDiscovery createDiscovery
INFO: Discovery :Target name prefix =oggtest:
Apr 19, 2026 8:01:32 AM com.oracle.sysman.goldengate.discovery.GoldenGateDiscovery createDiscovery
INFO: agentTrustLocation:/u01/app/oracle/agent_24ai/agent_inst/sysman/config/montrust/AgentTrust.jks
Apr 19, 2026 8:01:32 AM com.oracle.sysman.goldengate.discovery.GoldenGateMicroServicesDiscovery getJSONDataFromUrl
INFO: Discovery : Invoking URL request :https://vmogg:443/services/v2/deployments
Apr 19, 2026 8:01:32 AM com.oracle.sysman.goldengate.discovery.GoldenGateMicroServicesDiscovery getJSONDataFromUrlForSSL
INFO: Trying to connect as a ssl connection https://vmogg:443/services/v2/deployments
Apr 19, 2026 8:01:32 AM com.oracle.sysman.goldengate.discovery.GoldenGateMicroServicesDiscovery getJSONDataFromUrlForSSL
INFO: SSLHandshakeException while getting response for URL:https://vmogg:443/services/v2/deployments . javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Apr 19, 2026 8:01:32 AM com.oracle.sysman.goldengate.discovery.GoldenGateMicroServicesDiscovery getJSONDataFromUrl
INFO: Exception occured when tried with SSL deployment. SSLHandshakeException: Failed to connect to ssl Microservices . Retrying via non ssl microservices. Trying with http.
Apr 19, 2026 8:01:32 AM com.oracle.sysman.goldengate.discovery.GoldenGateMicroServicesDiscovery getJSONDataFromUrl
SEVERE: For the Url = http://vmogg:443/services/v2/deployments , HTTP/response error code = 307
Apr 19, 2026 8:01:32 AM com.oracle.sysman.goldengate.discovery.GoldenGateMicroServicesDiscovery generateTargetsXML
SEVERE: Exception getting response from URL:https://vmogg:443/services/v2/deployments - com.oracle.sysman.goldengate.discovery.GoldenGateDiscovery$GGDiscoveryException: Discovery failed : HTTP error code : 307 from URL:http://vmogg:443/services/v2/deployments
Apr 19, 2026 8:01:32 AM com.oracle.sysman.goldengate.discovery.GoldenGateDiscovery main
SEVERE: Exception during Targets discovery: EM-90000 - Target Discovery failed. Internal Error. Please contact System Administrator. - com.oracle.sysman.goldengate.discovery.GoldenGateDiscovery$GGDiscoveryException: EM-90000 - Target Discovery failed. Internal Error. Please contact System Administrator.

The problem here is that the OEM agent does not trust the certificate chain being used in your GoldenGate installation. For the discovery (and the monitoring) to work, you need to register the certificates in the agent’s truststore.

Displaying the content of the truststore

First, let’s have a look at the content of the agent’s truststore. To do so, use the keytool utility shipped with the agent. Here is an example of a default AgentTrust.jks content. When prompted for the keystore password, use the configured truststore password (welcome, by default).

oracle@vmogg:~ [emagent] /u01/app/oracle/agent_24ai/agent_24.1.0.0.0/oracle_common/jdk/bin/keytool -list -keystore /u01/app/oracle/agent_24ai/agent_inst/sysman/config/montrust/AgentTrust.jks
Enter keystore password:
Keystore type: JKS
Keystore provider: SUN

Your keystore contains 9 entries

verisignclass1pca, Oct 20, 2009, trustedCertEntry,
Certificate fingerprint (SHA-256): 13:B8:4A:BA:EC:A3:DE:8C:71:9A:06:7D:E8:CF:18:5F:65:DC:19:E0:3E:BD:92:C2:0B:D3:8C:75:09:7B:E1:13
verisignclass3ca, Oct 20, 2009, trustedCertEntry,
Certificate fingerprint (SHA-256): E7:68:56:34:EF:AC:F6:9A:CE:93:9A:6B:25:5B:7B:4F:AB:EF:42:93:5B:50:A2:65:AC:B5:CB:60:27:E4:4E:70
gtecybertrustglobalca, Oct 20, 2009, trustedCertEntry,
Certificate fingerprint (SHA-256): A5:31:25:18:8D:21:10:AA:96:4B:02:C7:B7:C6:DA:32:03:17:08:94:E5:FB:71:FF:FB:66:67:D5:E6:81:0A:36
entrustsslca, Oct 20, 2009, trustedCertEntry,
Certificate fingerprint (SHA-256): 62:F2:40:27:8C:56:4C:4D:D8:BF:7D:9D:4F:6F:36:6E:A8:94:D2:2F:5F:34:D9:89:A9:83:AC:EC:2F:FF:ED:50
entrust2048ca, Oct 20, 2009, trustedCertEntry,
Certificate fingerprint (SHA-256): D1:C3:39:EA:27:84:EB:87:0F:93:4F:C5:63:4E:4A:A9:AD:55:05:01:64:01:F2:64:65:D3:7A:57:46:63:35:9F
verisignserverca, Oct 20, 2009, trustedCertEntry,
Certificate fingerprint (SHA-256): 29:30:BD:09:A0:71:26:BD:C1:72:88:D4:F2:AD:84:64:5E:C9:48:60:79:07:A9:7B:5E:D0:B0:B0:58:79:EF:69
gtecybertrustca, Oct 20, 2009, trustedCertEntry,
Certificate fingerprint (SHA-256): 52:7B:05:05:27:DF:52:9C:0F:7A:D0:0C:EF:1E:7B:A4:21:78:81:82:61:5C:32:6C:8B:6D:1A:20:61:A0:BD:7C
entrustgsslca, Oct 20, 2009, trustedCertEntry,
Certificate fingerprint (SHA-256): 2F:2F:87:02:A6:ED:EC:B6:46:92:94:BC:A0:40:F6:3B:88:49:42:1F:CE:E1:C3:7D:1C:FB:EE:89:DC:CD:43:83
verisignclass2ca, Oct 20, 2009, trustedCertEntry,
Certificate fingerprint (SHA-256): BD:46:9F:F4:5F:AA:E7:C5:4C:CB:D6:9D:3F:3B:00:22:55:D9:B0:6B:10:B1:D0:FA:38:8B:F9:6B:91:8B:2C:E9

Warning:
uses a 1024-bit RSA key which is considered a security risk. This key size will be disabled in a future update.
uses a 1024-bit RSA key which is considered a security risk. This key size will be disabled in a future update.
uses a 1024-bit RSA key which is considered a security risk. This key size will be disabled in a future update.
uses a 1024-bit RSA key which is considered a security risk. This key size will be disabled in a future update.
uses a 1000-bit RSA key which is considered a security risk and is disabled.
uses a 1024-bit RSA key which is considered a security risk. This key size will be disabled in a future update.
uses a 1024-bit RSA key which is considered a security risk. This key size will be disabled in a future update.
uses a 1024-bit RSA key which is considered a security risk. This key size will be disabled in a future update.

Importing the certificate into the truststore

Let’s add the certificate in the agent’s truststore with the following command.

/u01/app/oracle/agent_24ai/agent_24.1.0.0.0/oracle_common/jdk/bin/keytool -importcert -alias ogg_capath -file /path/to/ogg_certs/RootCA_cert.pem -keystore /u01/app/oracle/agent_24ai/agent_inst/sysman/config/montrust/AgentTrust.jks

If you are not sure which file you should add, keep in mind that it should be the same file you are using in the OGG_CLIENT_TLS_CAPATH environment variable when loading the GoldenGate environment and connecting to your deployments with the adminclient.

oracle@vmogg:~ [emagent] export OGG_CLIENT_TLS_CAPATH=/path/to/ogg_certs/RootCA_cert.pem
oracle@vmogg:~ [emagent] /u01/app/oracle/agent_24ai/agent_24.1.0.0.0/oracle_common/jdk/bin/keytool -importcert -alias ogg_capath -file $OGG_CLIENT_TLS_CAPATH -keystore /u01/app/oracle/agent_24ai/agent_inst/sysman/config/montrust/AgentTrust.jks
Enter keystore password:
Owner: CN=OGG RootCA, O=OGG ROOT CA, C=AU
Issuer: CN=OGG RootCA, O=OGG ROOT CA, C=AU
Serial number: 1a11d06e7d73700aa85e35ffc0ce4a27dcfdaf7d
Valid from: Fri Mar 21 15:22:22 GMT 2026 until: Mon Mar 18 15:22:22 GMT 2036
Certificate fingerprints:
SHA1: 3C:99:5B:3A:D9:A4:64:6D:23:F0:0A:48:16:FA:AF:85:BD:26:E3:C7
SHA256: 8D:6F:1D:67:ED:D9:7B:C0:C8:0E:D1:0E:50:2F:15:25:45:5D:F2:1D:A2:AB:22:C7:2D:AE:05:19:F1:DE:28:31
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 4096-bit RSA key
Version: 3

Extensions:

#1: ObjectId: 2.5.29.35 Criticality=false

AuthorityKeyIdentifier [
KeyIdentifier [
0000: E3 BC BD 71 43 FD 85 B0 48 E1 44 A1 81 04 FA A9 …qC…H.D…..
0010: 1D 1C 45 16 ..E.
]
]

#2: ObjectId: 2.5.29.19 Criticality=true

BasicConstraints:[
CA:true
PathLen:2147483647
]

#3: ObjectId: 2.5.29.14 Criticality=false

SubjectKeyIdentifier [
KeyIdentifier [
0000: E3 BC BD 71 43 FD 85 B0 48 E1 44 A1 81 04 FA A9 …qC…H.D…..
0010: 1D 1C 45 16 ..E.
]
]

Trust this certificate? [no]: yes
Certificate was added to keystore

If you check the content of the keystore again, you will see the new entry under the alias you just added. We will use the -alias ogg_capath option to only display the new alias.

oracle@vmogg:~ [emagent] /u01/app/oracle/agent_24ai/agent_24.1.0.0.0/oracle_common/jdk/bin/keytool -list -keystore /u01/app/oracle/agent_24ai/agent_inst/sysman/config/montrust/AgentTrust.jks -alias ogg_capath
Enter keystore password:
ogg_capath, Apr 19, 2026, trustedCertEntry,
Certificate fingerprint (SHA-256): 8D:6F:1D:67:ED:D9:7B:C0:C8:0E:D1:0E:50:2F:15:25:45:5D:F2:1D:A2:AB:22:C7:2D:AE:05:19:F1:DE:28:31

Without modifying anything in the Enterprise Manager, you can re-run the discovery. This time, the GoldenGate targets will be discovered! To promote the new targets, just click on the number of targets discovered. You can also go to the Setup > Add Target > Auto Discovery Results section to view all discovered targets. Once this is done, the new targets will be monitored.

To summarize, in this NGINX context, OEM GoldenGate discovery fails with EM-90000 due to missing CA certificates in the agent truststore. Importing the proper CA chain into AgentTrust.jks resolves the SSL handshake failure.


AutoUpgrade powered easy download and installation of Oracle AI Database 26ai on Linux

Yann Neuhaus - Fri, 2026-05-01 08:21
With the latest versions of AutoUpgrade, downloading and installing Oracle AI Database 26ai on Linux x86 on-premises has become a very simple process. If you want to give Oracle Database 26ai a try, building a test environment requires minimal effort, as you will see in this short post.

What we need:

  • Internet access
  • Oracle Support account
  • new installation of Oracle Linux 8 or 9, RHEL 8 or 9, SLES 15 with enough space for Oracle binaries
  • Java JDK 8 to JDK 11
  • latest version of AutoUpgrade

Steps to proceed:

  • update the system if not already performed
  • install OpenJDK if java not yet installed on the system
  • install Oracle preinstall package for 26ai
  • create oracle base, inventory and home directories with relevant permissions
  • change to oracle user created during installation of preinstall package for 26ai
  • create AutoUpgrade home with some subdirectories
  • download the latest AutoUpgrade version
  • check version of AutoUpgrade
  • create AutoUpgrade configuration file for software download
  • configure AutoUpgrade keystore to enable access to Oracle Support
  • launch software download
  • create AutoUpgrade configuration file for Oracle Home creation
  • launch Oracle Home creation
  • check the newly installed Oracle AI Database 26ai environment
1) Update the system
sudo dnf update
2) Install OpenJDK if java not yet installed on the system
sudo dnf install -y java-11-openjdk
3) Install Oracle preinstall package for 26ai
sudo dnf install -y oracle-database-preinstall-23ai
4) Create oracle base, inventory and home directories with relevant permissions
sudo mkdir -p /u01/app/oracle
sudo mkdir -p /u01/app/oraInventory
sudo mkdir -p /u01/app/oracle/product/23.26.2/dbhome_1
sudo chown -R oracle:oinstall /u01/app/oracle
sudo chown -R oracle:oinstall /u01/app/oraInventory
5) Change to oracle user created during installation of preinstall package for 26ai
sudo su - oracle
6) Create AutoUpgrade home with some subdirectories
mkdir autoupgrade
mkdir autoupgrade/logs
mkdir autoupgrade/patches
mkdir autoupgrade/keystore
mkdir autoupgrade/etc
cd autoupgrade
7) Download latest version of AutoUpgrade
wget -O /home/oracle/autoupgrade/autoupgrade.jar https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
8) Check version of AutoUpgrade
java -jar autoupgrade.jar -version
9) Create AutoUpgrade configuration file for software download
vi etc/au-download.cfg

global.global_log_dir=/home/oracle/autoupgrade/logs
global.keystore=/home/oracle/autoupgrade/keystore

dl.folder=/home/oracle/autoupgrade/patches
dl.patch=RU,OPATCH
dl.target_version=23
dl.platform=LINUX.X64
10) Configure AutoUpgrade keystore to enable access to Oracle Support
java -jar autoupgrade.jar -config etc/au-download.cfg -patch -load_password
Processing config file ...

Starting AutoUpgrade Patching Password Loader - Type help for available options
Creating new AutoUpgrade Patching keystore - Password required
Enter password:
Enter password again:
AutoUpgrade Patching keystore was successfully created

MOS> add -user <your-mos-account-name>
Enter your secret/Password:
Re-enter your secret/Password:
MOS> save
Convert the AutoUpgrade Patching keystore to auto-login [YES|NO] ? YES
MOS> exit

AutoUpgrade Patching Password Loader finished - Exiting AutoUpgrade Patching
11) Launch software download
java -jar autoupgrade.jar -config etc/au-download.cfg -patch -mode download
AutoUpgrade Patching 26.3.260401 launched with default internal options
Processing config file ...
Loading AutoUpgrade Patching keystore
AutoUpgrade Patching keystore is loaded

Connected to MOS - Searching for specified patches

-----------------------------------------------------
Downloading files to /home/oracle/autoupgrade/patches
-----------------------------------------------------
DATABASE RELEASE UPDATE 23.26.2.0.0
    File: p39099680_230000_Linux-x86-64.zip \ 16%
...

After download has completed, we get the following output:

AutoUpgrade Patching 26.3.260401 launched with default internal options
Processing config file ...
Loading AutoUpgrade Patching keystore
AutoUpgrade Patching keystore is loaded

Connected to MOS - Searching for specified patches

-----------------------------------------------------
Downloading files to /home/oracle/autoupgrade/patches
-----------------------------------------------------
DATABASE RELEASE UPDATE 23.26.2.0.0
    File: p39099680_230000_Linux-x86-64.zip - VALIDATED

OPatch 12.2.0.1.51 for DB 23.0.0.0.0 (Apr 2026)
    File: p6880880_230000_Linux-x86-64.zip - VALIDATED
-----------------------------------------------------

At the time of writing, 23.26.2 had just been released, and the Data Pump Bundle Patch could not yet be downloaded for this Release Update.

12) Create AutoUpgrade configuration file for Oracle Home creation
vi etc/au-create-home.cfg

global.global_log_dir=/home/oracle/autoupgrade/logs

crh.folder=/home/oracle/autoupgrade/patches
crh.patch=RU,OPATCH
crh.target_version=23
crh.platform=LINUX.X64
crh.target_home=/u01/app/oracle/product/23.26.2/dbhome_1
crh.home_settings.edition=ee
crh.home_settings.oracle_base=/u01/app/oracle
crh.home_settings.inventory_location=/u01/app/oraInventory
crh.download=no
13) Launch Oracle Home creation
java -jar autoupgrade.jar -config etc/au-create-home.cfg -patch -mode create_home
AutoUpgrade Patching 26.3.260401 launched with default internal options
Processing config file ...
+-----------------------------------------+
| Starting AutoUpgrade Patching execution |
+-----------------------------------------+
Type 'help' to list console commands
patch>lsj -a
patch> +----+-------------+-------+---------+-------+----------+-------+---------------------+
|Job#|      DB_NAME|  STAGE|OPERATION| STATUS|START_TIME|UPDATED|              MESSAGE|
+----+-------------+-------+---------+-------+----------+-------+---------------------+
| 100|create_home_1|EXTRACT|EXECUTING|RUNNING|  09:29:32|37s ago|Extracting gold image|
+----+-------------+-------+---------+-------+----------+-------+---------------------+
Total jobs 1

patch> Job 100 completed
------------------- Final Summary --------------------
Number of databases            [ 1 ]

Jobs finished                  [1]
Jobs failed                    [0]
Jobs restored                  [0]
Jobs pending                   [0]

# Run the root.sh script as root for the following jobs:
For create_home_1 -> /u01/app/oracle/product/23.26.2/dbhome_1/root.sh

# Run the orainstRoot.sh script as root for the following jobs:
For create_home_1 -> /u01/app/oraInventory/orainstRoot.sh

Please check the summary report at:
/home/oracle/autoupgrade/logs/cfgtoollogs/patch/auto/status/status.html
/home/oracle/autoupgrade/logs/cfgtoollogs/patch/auto/status/status.log

To complete the installation, we need to execute the two scripts mentioned at the end of the output with root privileges.

14) Check the newly installed Oracle AI Database 26ai environment
more /home/oracle/autoupgrade/logs/cfgtoollogs/patch/auto/status/status.log
==========================================
   AutoUpgrade Patching Summary Report
==========================================
[Date]           Fri May 01 09:33:13 GMT 2026
[Number of Jobs] 1
==========================================
[Job ID] 100
==========================================
[DB Name]                create_home_1
[Version Before AutoUpgrade Patching] 23.0.0.0.0
[Version After AutoUpgrade Patching]  23.26.2.0.0
------------------------------------------
[Stage Name]    PENDING
[Status]        SUCCESS
[Start Time]    2026-05-01 09:29:32
[Duration]      0:00:00
[Log Directory] /home/oracle/autoupgrade/logs/create_home_1/100/pending
------------------------------------------
[Stage Name]    PREACTIONS
[Status]        SUCCESS
[Start Time]    2026-05-01 09:29:32
[Duration]      0:00:00
[Log Directory] /home/oracle/autoupgrade/logs/create_home_1/100/preaction
------------------------------------------
[Stage Name]    EXTRACT
[Status]        SUCCESS
[Start Time]    2026-05-01 09:29:32
[Duration]      0:01:17
[Log Directory] /home/oracle/autoupgrade/logs/create_home_1/100/extract
------------------------------------------
[Stage Name]    DBTOOLS
[Status]        SUCCESS
[Start Time]    2026-05-01 09:30:51
[Duration]      0:00:00
[Log Directory] /home/oracle/autoupgrade/logs/create_home_1/100/dbtools
------------------------------------------
[Stage Name]    INSTALL
[Status]        SUCCESS
[Start Time]    2026-05-01 09:30:52
[Duration]      0:02:20
[Log Directory] /home/oracle/autoupgrade/logs/create_home_1/100/install
------------------------------------------
[Stage Name]    OH_PATCHING
[Status]        SUCCESS
[Start Time]    2026-05-01 09:33:13
[Duration]      0:00:00
[Log Directory] /home/oracle/autoupgrade/logs/create_home_1/100/opatch
------------------------------------------
[Stage Name]    OPTIONS
[Status]        SUCCESS
[Start Time]    2026-05-01 09:33:13
[Duration]      0:00:00
[Log Directory] /home/oracle/autoupgrade/logs/create_home_1/100/options
------------------------------------------
[Stage Name]    ROOTSH
[Status]        SUCCESS
[Start Time]    2026-05-01 09:33:13
[Duration]      0:00:00
[Log Directory] /home/oracle/autoupgrade/logs/create_home_1/100/rootsh
------------------------------------------
[Stage Name]    DISPATCH
[Status]        SUCCESS
[Start Time]    2026-05-01 09:33:13
[Duration]      0:00:00
[Log Directory] /home/oracle/autoupgrade/logs/create_home_1/100/dispatch
------------------------------------------
[Stage Name]    POSTACTIONS
[Status]        SUCCESS
[Start Time]    2026-05-01 09:33:13
[Duration]      0:00:00
[Log Directory] /home/oracle/autoupgrade/logs/create_home_1/100/postaction
------------------------------------------
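
To verify the new binaries, first point the shell to the newly created home (a minimal sketch, assuming a bash shell and the target home defined in the configuration file above):

export ORACLE_HOME=/u01/app/oracle/product/23.26.2/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH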

sqlplus -V

SQL*Plus: Release 23.26.2.0.0 - Production
Version 23.26.2.0.0

And that’s it.

The article AutoUpgrade powered easy download and installation of Oracle AI Database 26ai on Linux appeared first on the dbi Blog.

M5 Cross-Endian Platform Migration - KB144840

Tom Kyte - Thu, 2026-04-30 16:24
We are planning to migrate our database from on-premises infrastructure to ODA. The source platform is Solaris SPARC, and the target platform is x86. We intend to use the M5 Cross-Endian Platform Migration approach. However, we have a question regarding whether APEX and ORDS will be migrated as part of this process. On the source system, APEX and ORDS are installed in dedicated tablespaces rather than in SYSTEM or SYSAUX. We would like to verify whether this M5 migration is fully compatible with APEX and ORDS and whether any issues are expected. Both source and target databases are on 19.25, and APEX is 22.1.
Categories: DBA Blogs

On-Premises MCP server

Tom Kyte - Thu, 2026-04-30 16:24
Is there a way to create an on-premises MCP server for Oracle Database without using OCI as a gateway? I want to deploy a real server (VM, container, etc.) rather than running a local instance of SQLcl on a user desktop, so that we can configure the agent framework code (multiple chat bots) to talk to the database.
Categories: DBA Blogs

drop table without purge

Tom Kyte - Thu, 2026-04-30 16:24
I conducted an experiment with the recyclebin parameter. First, I set recyclebin = OFF to observe the behavior of a regular DROP TABLE. As I understand the architecture: there is a tablespace and a segment, for example T1. Information about this segment is stored in the data dictionary. The tablespace also uses a space management mechanism (Segment Space Management AUTO), which tracks free blocks. Therefore, when I drop a table with recyclebin = OFF, the entry about the segment is removed from the data dictionary, and all blocks of the segment are marked as free and can be reused by other objects. I performed this experiment and indeed observed exactly this behavior.

Next, I enabled the parameter recyclebin = ON. In this case, the information about the table is not fully removed. The table is marked as inaccessible and renamed, after which it appears in the RECYCLEBIN view. At the same time, the segment continues to exist.

Then I read the following statement in the documentation: "Unless you specify the PURGE clause, the DROP TABLE statement does not result in space being released back to the tablespace for use by other objects, and the space continues to count toward the user's space quota."

However, in my experiment I observe the following behavior. Suppose initially the tablespace had 500 MB of free space. I created a table and filled it with data totaling 200 MB. As a result: free space = 300 MB, user quota used = 200 MB out of 500 MB. After that, I executed a regular DROP TABLE table_name; and I observed that 200 MB returned to free space (free space became 500 MB again). At the same time: the object still exists in the RECYCLEBIN, the segment size is approximately 200 MB, and the user's quota still shows 200 MB used. I expected the free space to remain unchanged at 300 MB, because the documentation states that the space is not released for use by other objects.

Question: Where is the flaw in my understanding of the logic? Why does the free space in the tablespace increase even though the segment remains in the RECYCLEBIN and the user's quota is still consumed?
Categories: DBA Blogs

Exascale new clone and snapshot capabilities

Yann Neuhaus - Wed, 2026-04-29 08:03
One of the most game-changing capabilities of Exascale is its new clone and snapshot features, compared to the options previously available with ASM / ACFS. And since the business requirements for clones or snapshots to be made available rapidly and conveniently keep rising, Exascale comes to the rescue of IT operations, allowing them to quickly provide storage-efficient copies of production databases, even the largest ones.

The business requirements for getting hold of clones of production databases are nowadays countless:

  • developer agility (needing separate and isolated databases),
  • production-like shared environment providing same model and code but with a smaller footprint,
  • test databases for software update testing,
  • data sharing,
  • point-in-time reporting, what-if scenario modeling or data audit.

For environments with big databases (tens of terabytes and more), Exascale is very well suited to do the job, with the following main features and advantages:

  • source database flexibility: you can now clone read-write and read-only 23/26ai databases, at the PDB and CDB level
  • no upstream dependencies: unlike ASM, Exascale no longer requires a test-master read-only copy
  • storage reclamation is automatically performed by Exascale for extents no longer referenced by a file
  • redirect-on-write techniques enable quick clone creation and maintenance with unprecedented space efficiency
  • and last but not least, a clone/snapshot is independent of its source, which can be dropped without impacting the clone

Before going through the details of how to make clones and snapshots, let’s recall what clones and snapshots are.

Snapshot

A snapshot is a read-only point-in-time copy of a file. Possible sources for snapshot creation are regular files, clones or another snapshot.

Clone

A clone is a read-write point-in-time copy of a file, possibly created from a regular file, another clone or a snapshot.

There are actually two types of clones: thin (space-efficient) or full (a byte-for-byte copy of the source).

Cloning implicitly creates snapshots.

How do we create clones or snapshots of PDBs?

This is where the database kernel and Exascale integration really shines: physical storage operations are encapsulated in single SQL commands, so there is no need to learn tricky new commands and clauses to leverage Exascale cloning and snapshotting capabilities. You use SQL commands you are already familiar with:

CREATE|ALTER PLUGGABLE DATABASE

Creating PDB snapshots

To create a read-only point-in-time reference to a PDB, use:

ALTER PLUGGABLE DATABASE pdb_name SNAPSHOT pdb_snapshot_name;

Remember that snapshots are inconsistent by default (which is precisely what makes quick creation possible); cloning a snapshot therefore triggers recovery based on redo data before the new clone can be opened read-write. The required redo data thus has to be retained or restored from backup.

The DBA_PDB_SNAPSHOTS dictionary view is available to display snapshot information.
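
For example, a minimal query (a sketch only; the exact column names may vary slightly between releases):

SELECT con_id, snapshot_name, snapshot_scn, full_snapshot_path
FROM   dba_pdb_snapshots;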

Creating PDB thin clones

A PDB thin clone is a space-efficient clone of an existing PDB. You need to use the SNAPSHOT COPY clause, as in the following syntax:

CREATE PLUGGABLE DATABASE pdb_name FROM source_pdb_name SNAPSHOT COPY;

Thin cloning a PDB

CREATE PLUGGABLE DATABASE thin_clone_pdb_name FROM source_pdb_name SNAPSHOT COPY;
ALTER PLUGGABLE DATABASE thin_clone_pdb_name OPEN INSTANCES=ALL;

Thin cloning a PDB snapshot

CREATE PLUGGABLE DATABASE thin_clone_pdb_name FROM source_pdb_name USING SNAPSHOT pdb_snapshot_name SNAPSHOT COPY;
ALTER PLUGGABLE DATABASE thin_clone_pdb_name OPEN INSTANCES=ALL;
Creating PDB full clones

As already mentioned, a full clone is a byte-for-byte copy of an existing PDB and therefore takes the same storage space as the source PDB. The command to use is simply:

CREATE PLUGGABLE DATABASE … FROM …

Omitting the SNAPSHOT COPY clause directs the database to create a complete block-for-block copy of the PDB.

CREATE PLUGGABLE DATABASE full_clone_pdb_name FROM source_pdb_name;
ALTER PLUGGABLE DATABASE full_clone_pdb_name OPEN INSTANCES=ALL;

Existing snapshots or thin clones can also be referenced as a source in the preceding commands.

What about cloning an entire container database?

Space-efficient clones of an entire container database are possible thanks to the powerful gDBClone utility written by Ruggero Citton, to whom we also owe other great utilities such as ODABR (consistent and incremental backup and restore for Oracle Database Appliance nodes), KVMBR (backup and restore of KVM domains) and, more recently, DSCM & DSCMREST (deployment of Oracle database containers with efficient storage snapshots).

The gDBClone utility enables:

  • End-to-end thin and full cloning of a CDB with all its PDBs
  • File clone operations
  • RMAN calls to make cloned files consistent
  • Creation of instance-related files (such as the password file, spfile or TDE wallet)
  • GI registration

Here are some simple examples to thin clone and full clone an entire CDB, respectively:

gDBClone.bin snap -sdbname source_db_name -tdbname target_db_name
gDBClone.bin clone -sdbname source_db_name -tdbname target_db_name

More on this powerful tool can be found in the Exascale documentation and in Oracle Support article KB145187.

Performance

Thin cloning a 2.4 TB PDB takes less than 12 seconds, while opening the new thin clone on a two-instance cluster database takes another 11 seconds, for a total elapsed time of under 30 seconds for the entire operation!

Additional details can be found in this Alex Blyth blog post.

Conclusion

By leveraging state-of-the-art redirect-on-write technologies, Exascale is a leap forward and a game-changing technology when it comes to clone and snapshot features and capabilities. It is CI/CD friendly and the ideal tool to address the business requirements of test and qualification environments, letting DBAs provide space-efficient copies of production or standby databases or PDBs very quickly.

Give it a try and see for yourself.

The article Exascale new clone and snapshot capabilities appeared first on the dbi Blog.

Delete a GoldenGate deployment when the password is lost

Yann Neuhaus - Tue, 2026-04-28 01:38

Recently, I was asked how to recreate a GoldenGate deployment if the password for the Security user (the first user created with the deployment) was lost. This is an interesting question that I figured was worth writing about.

As a reminder, you have two main ways of creating and deleting deployments in GoldenGate:

  • Using the configuration assistant oggca.sh
  • Using the REST API and its deployment endpoints

I already covered the creation and removal of deployments with the configuration assistant in a previous blog. I also wrote about deployment creation with the REST API, but not yet about deletion.

Deleting the deployment from oggca.sh won’t work

From the GoldenGate Configuration Assistant (the oggca.sh script), deleting a deployment is straightforward, but as shown below, you will be asked to input not only the Service Manager credentials, but also the deployment credentials!

This means that oggca.sh cannot be used for such a task.

Deleting a deployment with the REST API

To overcome this issue, let’s list the steps taken by oggca.sh when deleting a GoldenGate deployment:

  • Verify the deployment credentials
  • Stop the deployment services
  • Unregister the deployment in the Service Manager

While asking for the deployment’s password might be relevant to avoid deleting the wrong deployment, you do not strictly need it. In fact, you can stop the deployment services from the Service Manager itself. To do so, use the update_deployment endpoint (PATCH /services/{version}/deployments/{deployment}) to change the status of the deployment to stopped. Using the Python client I presented in another blog, I will open a connection to the Service Manager and stop the deployment ogg_test_01.

>>> from oggrestapi import OGGRestAPI
>>> ogg_client = OGGRestAPI(url="http://vmogg:7809", username="ogg", password="***")
>>> ogg_client.update_deployment('ogg_test_01', data={"status":"stopped"})
{'$schema': 'api:standardResponse', 'links': [{'rel': 'canonical', 'href': 'https://vmogg/services/ServiceManager/v2/deployments/ogg_test_01', 'mediaType': 'application/json'}, {'rel': 'self', 'href': 'https://vmogg/services/ServiceManager/v2/deployments/ogg_test_01', 'mediaType': 'application/json'}], 'messages': []}

Once this is done, you can unregister the deployment from the Service Manager with the remove_deployment endpoint (DELETE /services/{version}/deployments/{deployment}).
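
As a minimal sketch, the same call can be issued with plain Python (the URL is an assumption based on the canonical links returned above, the credentials are the ones used with the client, and verify=False only makes sense in a lab with a self-signed certificate):

import requests

# Unregister the stopped deployment via the Service Manager REST API
resp = requests.delete(
    "https://vmogg/services/ServiceManager/v2/deployments/ogg_test_01",
    auth=("ogg", "***"),
    verify=False,
)
print(resp.status_code, resp.text)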

And for this, we never needed the deployment password!

To clean your installation, make sure to remove all files related to the old deployment. More specifically:

  • The deploymentConfiguration.dat of your Service Manager has already been edited; you do not need to clean it manually.
  • You can delete or archive the deployment main directory.
  • The $OGG_SM_HOME/var/run directory should be cleaned of the .dat files named after the deployment: rm $OGG_SM_HOME/var/run/ogg_test_01*

That’s it. You have successfully deleted your deployment, and you can re-create it anytime you want. And this time, remember to store the password!

The article Delete a GoldenGate deployment when the password is lost appeared first on the dbi Blog.

Beyond TDE and TLS: Bridging the Data Security Governance Gap in Lower Environments

Yann Neuhaus - Mon, 2026-04-27 15:21
[Figure: conceptual diagram of a secure data pipeline showing production data passing through a governance engine to anonymized dev and staging environments]

The Multi-Layered Threat: Why One Tool is Never Enough

We’ve all left the key in our bike lock at least once. This simple human oversight makes the heaviest chain irrelevant, and we often see the exact same logic at work in data environments. Most organizations spend months hardening their production core but leave the keys in the locks of the dev and staging systems that sit right next to it.

The numbers back this up. While 91% of organizations are concerned about their exposure across lower environments, a staggering 86% still allow data compliance exceptions in non-production. This gap between concern and action has real consequences: more than half of these organizations have already experienced a breach or audit failure in their testing and development systems (PR Newswire).

Effective security is rarely a single-layer problem. Between the stolen backup that lands in the wrong hands, the analyst running a SELECT on a table they probably shouldn’t see, and the packet quietly crossing an unsecured network segment, the attack surface is wide, and no single mechanism covers it all.

Transport Layer Security (TLS), Transparent Data Encryption (TDE), symmetric encryption, dynamic masking, row-level security, data anonymization: for most RDBMS, the options exist and they work. Most teams already have access to at least one of them. The real challenge isn’t finding a solution; it’s understanding what each one actually protects, where it breaks down, and whether it survives contact with a production environment.

Shadow Environments: The Weakest Link in Your Data Chain

Here is the uncomfortable truth: non-production environments are often where security policies are quietly buried. It starts with a backup restored without encryption, or real customer data seeding a dev database “just for a quick test”.

The fundamental problem is that most protections assume a controlled environment. Encryption can be bypassed by someone with the right credentials. Masking can be misconfigured. Row-level security doesn’t help much when the whole database is sitting on a developer’s laptop.

Technical Trade-offs: Finding Your Strategic Fit

To make this reasoning concrete, the table below maps six core techniques against the operational criteria that define their success. The goal isn’t to pick a favorite tool, but to identify which combination actually addresses your specific vulnerabilities.

Technique                          | Physical File Theft | Read Access (SELECT) | Network Sniffing | Performance Impact | Granularity                    | Applicable in Prod (live data) | Applicable in DEV
TLS                                | ❌                  | ❌                   | ✅               | ✅                 | Data packet                    | ✅                             | ✅
TDE                                | ✅                  | ❌                   | ❌               | ✅                 | Column / Tablespace / Datafile | ✅                             | ⚠
Symmetric encryption (applicative) | ✅                  | ✅                   | ✅               | ❌                 | Field / Value                  | ✅                             | ✅
Dynamic Masking                    | ❌                  | ✅                   | ❌               | ✅                 | Column                         | ✅                             | ✅
Row-level security                 | ❌                  | ✅                   | ❌               | ✅                 | Row                            | ✅                             | ✅
Data anonymization                 | ✅                  | ✅                   | ✅               | ✅                 | Field / Column                 | ❌                             | ✅
  • TLS protects data in motion. The moment a packet leaves a server, TLS ensures anyone intercepting it sees encrypted noise. What it doesn’t do is equally important: it has no opinion about who queries your database or what’s stored on disk. Once the data arrives, TLS’s job is done.
    TLS is now the industry standard for securing data in motion.
    (SQL Server technical blog about TLS here)
  • TDE encrypts the physical files that make up your database (data files, log files, backups), so that anyone who gets their hands on them without the encryption key can’t read them. The performance impact is negligible; in fact, Microsoft, for example, enables TDE by default for all its cloud-based databases.
    (PostgreSQL technical blog about TDE here)
    Deploying TDE in development is a security best practice, but it quickly becomes an operational nightmare for environment refreshes, especially if you want to use distinct certificates to avoid leaking production secrets into lower environments.
  • Symmetric encryption is field-level encryption applied directly in the application layer. Unlike TDE, it survives a legitimate SELECT; even a user with full read access sees ciphertext unless they hold the applicative key. The tradeoff is performance: encrypting and decrypting at scale adds up quickly.
    (MongoDB technical blog about Client-side Field Level Encryption here)
  • Dynamic masking doesn’t encrypt anything. It intercepts query results and replaces sensitive values with masked equivalents based on the user’s role. Fast, lightweight, zero application changes required. The catch: it only controls what’s displayed, not what’s stored. A user with sufficient privileges can bypass it entirely (a minimal sketch follows this list).
    (SQL Server technical blog about dynamic masking here)
  • Row-Level Security enforces access at the row level directly inside the database engine. Users see only the rows they’re allowed to see, regardless of how the query is written. No application changes, no trust placed in the calling layer. The policy lives in the database and applies universally (a sketch follows this list as well).
    (Oracle technical blog about Virtual Private Database here)
  • Data anonymization doesn’t protect sensitive data; it eliminates it. Real values are replaced with realistic but fictional equivalents (synthetic data), permanently and irreversibly. No encryption key to steal, no masking rule to bypass. Whatever leaks simply isn’t sensitive anymore. This is why anonymization is the only control that makes unconditional sense in non-production environments. A stolen backup, a misconfigured SELECT, a sniffed packet: none of it matters if the data was anonymized before it ever reached a staging environment. We covered how to implement it in practice in a previous post.
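
As a minimal sketch of dynamic masking (SQL Server syntax; the table and role names are hypothetical):

-- Mask an e-mail column for non-privileged users; the UNMASK permission bypasses the mask
ALTER TABLE dbo.Customers
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');
GRANT SELECT ON dbo.Customers TO ReportingRole;

And a comparable sketch of row-level security (PostgreSQL syntax, hypothetical table; Oracle achieves the same with VPD policies):

-- Each sales rep only sees the orders assigned to them, however the query is written
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
CREATE POLICY orders_by_rep ON orders
    USING (sales_rep = current_user);
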
Ownership Gaps: The Security No Man’s Land

We are shifting from a technical challenge to a human and organizational one. The security landscape moves so fast that mastering every layer has become an overwhelming struggle.

This complexity is where governance goes to die. Infrastructure teams build the walls, developers write the code, and DBAs manage the house, but the accountability for the data itself often falls through the cracks. The most dangerous gap isn’t a missing feature; it’s the absence of a governance model strong enough to stop the game of hot potato and force a cross-domain ownership of security.

The CISO’s role in this landscape is not to master every technical layer, it is to force the question of ownership into the open. Who signs off on what data enters a non-production environment? Who is accountable when a dev database is restored without encryption? Who audits that masking policies are still effective after a release?

Without explicit answers to these questions, security becomes a game of assumptions. Every team assumes another layer is holding. And the gaps compound silently, until they don’t.

From Handcrafted Scripts to Enterprise Platforms

Every technique in this table can be implemented on a spectrum, from a carefully written script to a fully automated enterprise solution. The right choice depends on your scale, your team, and how much operational overhead you can realistically absorb.

  • TLS certificate deployment: you can generate and rotate certificates manually, instance by instance. Or you can automate the entire lifecycle using Ansible against an internal PKI, in a consistent and auditable way that is invisible to the teams consuming it. The security outcome is identical; the operational cost is not.
  • Data anonymization: a custom script that detects PII columns and replaces values with masked data works well at small scale (a minimal sketch follows this list). The challenge appears when your data spans multiple database engines (SQL Server, Oracle, PostgreSQL, …) and when anonymized values need to remain consistent across foreign keys and referential constraints. Replacing a customer ID in one table while leaving it intact in another isn’t anonymization; it’s a GDPR incident waiting to happen. Solutions like Delphix Continuous Compliance handle cross-DBMS consistency, constraint awareness, and sensitive field detection out of the box, turning a fragile hand-rolled process into a governed, repeatable and auditable one.
  • Dynamic masking and row-level security: defining a handful of policies manually in SSMS is perfectly reasonable for a contained environment. Automating policy deployment across environments and instances is a different challenge entirely. It is a level of scale where ad-hoc scripts quickly become a liability.
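
As a minimal sketch of such a hand-rolled anonymization pass (PostgreSQL syntax, hypothetical customer table; a real implementation must also keep foreign keys consistent, as discussed above):

-- Replace direct identifiers with realistic but fictional values
UPDATE customer
   SET email     = 'user_' || id || '@example.invalid',
       last_name = 'Anonymized-' || id;
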
Conclusion: Moving Beyond Security by Accident

Security is not a one-time project. It is an operational discipline that requires the same rigor in a developer’s sandbox as it does in production, and that rigor has to be enforced by design, not by goodwill.

Most breaches in non-production environments don’t happen because a tool failed. They happen because nobody owned the decision to use it in the first place.

At dbi services, we help organizations move from fragile, handcrafted scripts to governed, auditable architectures across every environment, every database engine, and every team.

Because under GDPR, one incident is all it takes to make ownership everyone’s problem.

The article Beyond TDE and TLS: Bridging the Data Security Governance Gap in Lower Environments appeared first on the dbi Blog.
