Feed aggregator

M5 Cross-Endian Platform Migration - KB144840

Tom Kyte - Thu, 2026-04-30 16:24
We are planning to migrate our database from on-premises infrastructure to ODA. The source platform is Solaris SPARC, and the target platform is x86. We intend to use the M5 Cross-Endian Platform Migration approach. However, we have a question regarding whether APEX and ORDS will be migrated as part of this process. On the source system, APEX and ORDS are installed in dedicated tablespaces rather than in SYSTEM or SYSAUX. We would like to verify whether this M5 migration is fully compatible with APEX and ORDS and whether any issues are expected. Both source and target DBs are on 19.25; APEX is 22.1.
Categories: DBA Blogs

On-Premises MCP server

Tom Kyte - Thu, 2026-04-30 16:24
Is there a way to create an on-premises MCP server for Oracle Database without using OCI as a gateway? I want to deploy a real server (VM/container, etc.) rather than running a local instance of SQLcl on a user desktop; that way we are able to configure the agent framework code (multiple chat bots) to talk to the database.
Categories: DBA Blogs

drop table without purge

Tom Kyte - Thu, 2026-04-30 16:24
I conducted an experiment with the recyclebin parameter. First, I set recyclebin = OFF to observe the behavior of a regular DROP TABLE. As I understand the architecture: there is a tablespace and a segment, for example T1. Information about this segment is stored in the data dictionary. The tablespace also uses a space management mechanism (Segment Space Management AUTO), which tracks free blocks. Therefore, when I drop a table with recyclebin = OFF, the entry about the segment is removed from the data dictionary, and all blocks of the segment are marked as free and can be reused by other objects. I performed this experiment and indeed observed exactly this behavior.

Next, I enabled the parameter recyclebin = ON. In this case, the information about the table is not fully removed. The table is marked as inaccessible and renamed, after which it appears in the RECYCLEBIN view. At the same time, the segment continues to exist.

Then I read the following statement in the documentation: "Unless you specify the PURGE clause, the DROP TABLE statement does not result in space being released back to the tablespace for use by other objects, and the space continues to count toward the user's space quota."

However, in my experiment I observe the following behavior. Suppose initially the tablespace had 500 MB of free space. I created a table and filled it with data totaling 200 MB. As a result: free space = 300 MB, user quota used = 200 MB out of 500 MB. After that, I executed a regular DROP TABLE table_name; and I observed that 200 MB returned to free space (free space became 500 MB again). At the same time: the object still exists in the RECYCLEBIN, the segment size is approximately 200 MB, and the user's quota still shows 200 MB used. I expected the free space to remain unchanged at 300 MB, because the documentation states that the space is not released for use by other objects.

Question: Where is the flaw in my understanding of the logic? Why does the free space in the tablespace increase even though the segment remains in the RECYCLEBIN and the user's quota is still consumed?
Categories: DBA Blogs

Delete a GoldenGate deployment when the password is lost

Yann Neuhaus - Tue, 2026-04-28 01:38

Recently, I was asked how to recreate a GoldenGate deployment if the password for the Security user (the first user created with the deployment) was lost. This is an interesting question that I figured was worth writing about.

As a reminder, you have two main ways of creating and deleting deployments in GoldenGate:

  • Using the configuration assistant oggca.sh
  • Using the REST API and its deployment endpoints

I already covered the creation and removal of deployments with the configuration assistant in a previous blog. I also wrote about deployment creation with the REST API, but not yet about deletion.

Deleting the deployment from oggca.sh won’t work

From the GoldenGate Configuration Assistant (oggca.sh script), deleting a deployment is straightforward, but as shown below, you will be asked to input not only the Service Manager credentials, but also the deployment credentials!

This means that oggca.sh cannot be used for such a task.

Deleting a deployment with the REST API

To overcome this issue, let’s list the steps taken by oggca.sh when deleting a GoldenGate deployment:

  • Verify the deployment credentials
  • Stop the deployment services
  • Unregister the deployment in the Service Manager

While asking for the deployment’s password might be relevant to avoid deleting the wrong deployment, you do not strictly need it. In fact, you can stop the deployment services from the Service Manager itself. To do so, use the update_deployment endpoint (PATCH /services/{version}/deployments/{deployment}) to change the status of the deployment to stopped. Using the Python client I presented in another blog, I will open a connection to the Service Manager and stop the deployment ogg_test_01.

>>> from oggrestapi import OGGRestAPI
>>> ogg_client = OGGRestAPI(url="http://vmogg:7809", username="ogg", password="***")
>>> ogg_client.update_deployment('ogg_test_01', data={"status":"stopped"})
{'$schema': 'api:standardResponse', 'links': [{'rel': 'canonical', 'href': 'https://vmogg/services/ServiceManager/v2/deployments/ogg_test_01', 'mediaType': 'application/json'}, {'rel': 'self', 'href': 'https://vmogg/services/ServiceManager/v2/deployments/ogg_test_01', 'mediaType': 'application/json'}], 'messages': []}

Once this is done, you can unregister the deployment from the Service Manager with the remove_deployment endpoint (DELETE /services/{version}/deployments/{deployment}).
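Continuing the Python session opened above, the call would look roughly like the following. Note that remove_deployment is an assumed method name, chosen to mirror the endpoint; adapt it to whatever wrapper your client actually exposes.

>>> ogg_client.remove_deployment('ogg_test_01')  # assumed wrapper for DELETE /services/{version}/deployments/{deployment}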

And for this, we never needed the deployment password!

To clean your installation, make sure to remove all files related to the old deployment. More specifically:

  • The deploymentConfiguration.dat of your Service Manager has already been edited; you do not need to clean it manually.
  • You can delete or archive the deployment’s main directory.
  • The $OGG_SM_HOME/var/run directory should be cleaned of the .dat files named after the deployment: rm $OGG_SM_HOME/var/run/ogg_test_01*

That’s it. You have successfully deleted your deployment, and you can re-create it anytime you want. And this time, remember to store the password!

The article Delete a GoldenGate deployment when the password is lost first appeared on dbi Blog.

Beyond TDE and TLS: Bridging the Data Security Governance Gap in Lower Environments

Yann Neuhaus - Mon, 2026-04-27 15:21
Conceptual diagram of a secure data pipeline showing production data passing through a governance engine to anonymized dev and staging environments.

The Multi-Layered Threat: Why One Tool is Never Enough

We’ve all left the key in our bike lock at least once. This simple human oversight makes the heaviest chain irrelevant and we often see the exact same logic applied to data environments. Most organizations spend months hardening their production core but leave the keys in the locks of the dev and staging systems that sit right next to it.

The numbers back this up. While 91% of organizations are concerned about their exposure across lower environments, a staggering 86% still allow data compliance exceptions in non-production. This gap between concern and action has real consequences: more than half of these organizations have already experienced a breach or audit failure in their testing and development systems (PR Newswire).

Effective security is rarely a single-layer problem. Between the stolen backup that lands in the wrong hands, the analyst running a SELECT on a table they probably shouldn’t see, and the packet quietly crossing an unsecured network segment, the attack surface is wide, and no single mechanism covers it all.

Transport Layer Security (TLS), Transparent Data Encryption (TDE), symmetric encryption, dynamic masking, row-level security, data anonymization: for most RDBMS, the options exist and they work. Most teams already have access to at least one of them. The real challenge isn’t finding a solution; it’s understanding what each one actually protects, where it breaks down, and whether it survives contact with a production environment.

Shadow Environments: The Weakest Link in Your Data Chain

Here is the uncomfortable truth: non-production environments are often where security policies are quietly buried. It starts with a backup restored without encryption, or real customer data seeding a dev database “just for a quick test”.

The fundamental problem is that most protections assume a controlled environment. Encryption can be bypassed by someone with the right credentials. Masking can be misconfigured. Row-level security doesn’t help much when the whole database is sitting on a developer’s laptop.

Technical Trade-offs: Finding Your Strategic Fit

To make this reasoning concrete, the table below maps six core techniques against the operational criteria that define their success. The goal isn’t to pick a favorite tool, but to identify which combination actually addresses your specific vulnerabilities.

Technique                          | Physical File Theft | Read Access (SELECT) | Network Sniffing | Performance Impact | Granularity                    | Applicable in Prod (live data) | Applicable in DEV
TLS                                | ❌ | ❌ | ✅ | ✅ | Data packet                    | ✅ | ✅
TDE                                | ✅ | ❌ | ❌ | ✅ | Column / Tablespace / Datafile | ✅ | ⚠
Symmetric encryption (applicative) | ✅ | ✅ | ✅ | ❌ | Field / Value                  | ✅ | ✅
Dynamic Masking                    | ❌ | ✅ | ❌ | ✅ | Column                         | ✅ | ✅
Row-level security                 | ❌ | ✅ | ❌ | ✅ | Row                            | ✅ | ✅
Data anonymization                 | ✅ | ✅ | ✅ | ✅ | Field / Column                 | ❌ | ✅
  • TLS protects data in motion. The moment a packet leaves a server, TLS ensures anyone intercepting it sees encrypted noise. What it doesn’t do is equally important: it has no opinion about who queries your database or what’s stored on disk. Once the data arrives, TLS’s job is done.
    TLS is now the industry standard for securing data in motion.
    (SQL Server technical blog about TLS here)
  • TDE encrypts the physical files that make up your database (data files, log files, backups), so that anyone who gets their hands on them without the encryption key can’t read them. The performance overhead is negligible; in fact, Microsoft, for example, enables TDE by default for all its cloud-based databases.
    (PostgreSQL technical blog about TDE here)
    Deploying TDE in development is also a security best practice, but it quickly becomes an operational nightmare for environment refreshes, especially if you want to use distinct certificates to avoid leaking production secrets into lower environments.
  • Symmetric encryption is field-level encryption applied directly in the application layer. Unlike TDE, it survives a legitimate SELECT; even a user with full read access sees ciphertext unless they hold the applicative key. The tradeoff is performance: encrypting and decrypting at scale adds up quickly. A minimal Python sketch of the idea follows this list.
    (MongoDB technical blog about Client-side Field Level Encryption here)
  • Dynamic masking doesn’t encrypt anything. It intercepts query results and replaces sensitive values with masked equivalents based on the user’s role. Fast, lightweight, zero application changes required. The catch: it only controls what’s displayed, not what’s stored. A user with sufficient privileges can bypass it entirely.
    (SQL Server technical blog about dynamic masking here)
  • Row-Level Security enforces access at the row level directly inside the database engine. Users see only the rows they’re allowed to see, regardless of how the query is written. No application changes, no trust placed in the calling layer. The policy lives in the database and applies universally.
    (Oracle technical blog about Virtual Private Database here)
  • Data anonymization doesn’t protect sensitive data, it eliminates it. Real values are replaced with realistic but fictional equivalents (synthetic data), permanently and irreversibly. No encryption key to steal, no masking rule to bypass. Whatever leaks simply isn’t sensitive anymore. This is why anonymization is the only control that makes unconditional sense in non-production environments. A stolen backup, a misconfigured SELECT, a sniffed packet: none of it matters if the data was anonymized before it ever reached a staging environment. We covered how to implement it in practice in a previous post.
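To make the applicative (field-level) encryption bullet concrete, here is a minimal Python sketch. It assumes the cryptography package is available and generates the key in place purely for illustration; a real application would load the key from a vault or KMS and write the ciphertext to the database through its usual driver.

from cryptography.fernet import Fernet

# Illustration only: a real application loads the key from a vault/KMS.
key = Fernet.generate_key()
cipher = Fernet(key)

# The value written to the table is ciphertext, so a plain SELECT
# returns unreadable bytes to anyone who does not hold the key.
stored_value = cipher.encrypt(b"CH93 0076 2011 6238 5295 7")

print(stored_value)                           # what a key-less reader would see
print(cipher.decrypt(stored_value).decode())  # what the application sees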
Ownership Gaps: The Security No Man’s Land

We are shifting from a technical challenge to a human and organizational one. The security landscape moves so fast that mastering every layer has become overwhelming.

This complexity is where governance goes to die. Infrastructure teams build the walls, developers write the code, and DBAs manage the house, but the accountability for the data itself often falls through the cracks. The most dangerous gap isn’t a missing feature; it’s the absence of a governance model strong enough to stop the game of hot potato and force a cross-domain ownership of security.

The CISO’s role in this landscape is not to master every technical layer, it is to force the question of ownership into the open. Who signs off on what data enters a non-production environment? Who is accountable when a dev database is restored without encryption? Who audits that masking policies are still effective after a release?

Without explicit answers to these questions, security becomes a game of assumptions. Every team assumes another layer is holding. And the gaps compound silently, until they don’t.

From Handcrafted Scripts to Enterprise Platforms

Every technique in this table can be implemented on a spectrum, from a carefully written script to a fully automated enterprise solution. The right choice depends on your scale, your team, and how much operational overhead you can realistically absorb.

  • TLS certificate deployment: you can generate and rotate certificates manually, instance by instance. Or you can automate the entire lifecycle using Ansible against an internal PKI, in a consistent, auditable way that is invisible to the teams consuming it. The security outcome is identical; the operational cost is not.
  • Data anonymization: a custom script that detects PII columns and replaces values with masked data works well at small scale. The challenge appears when your data spans multiple database engines (SQL Server, Oracle, PostgreSQL, …) and when anonymized values need to remain consistent across foreign keys and referential constraints. Replacing a customer ID in one table while leaving it intact in another isn’t anonymization; it’s a GDPR incident waiting to happen. Solutions like Delphix Continuous Compliance handle cross-DBMS consistency, constraint awareness, and sensitive field detection out of the box, turning a fragile hand-rolled process into a governed, repeatable and auditable one. A toy sketch of that consistency requirement follows this list.
  • Dynamic masking and row-level security: defining a handful of policies manually in SSMS is perfectly reasonable for a contained environment. Automating policy deployment across environments and instances is a different challenge entirely. It is a level of scale where ad-hoc scripts quickly become a liability.
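As a toy illustration of the consistency requirement mentioned in the anonymization bullet above, the Python sketch below uses a keyed hash so that the same real identifier always maps to the same fake one across tables. The secret, the pseudonymize helper and the column names are made up for the example, and a real pipeline (or a platform like Delphix) would generate proper synthetic data rather than hashes.

import hashlib
import hmac

SECRET = b"kept-outside-the-data"  # made-up secret, for illustration only

def pseudonymize(customer_id: str) -> str:
    # Deterministic mapping: the same real ID always yields the same fake ID,
    # so foreign keys between tables still match after the transformation.
    digest = hmac.new(SECRET, customer_id.encode(), hashlib.sha256).hexdigest()
    return f"CUST_{digest[:12]}"

customers = [{"customer_id": "C-1001", "name": "Alice"}]
orders = [{"order_id": 42, "customer_id": "C-1001"}]

for row in customers + orders:
    row["customer_id"] = pseudonymize(row["customer_id"])

# True: the referential link between the two tables survives the transformation.
print(customers[0]["customer_id"] == orders[0]["customer_id"])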
Conclusion: Moving Beyond Security by Accident

Security is not a one-time project. It is an operational discipline that requires the same rigor in a developer’s sandbox as it does in production, and that rigor has to be enforced by design, not by goodwill.

Most breaches in non-production environments don’t happen because a tool failed. They happen because nobody owned the decision to use it in the first place.

At dbi services, we help organizations move from fragile, handcrafted scripts to governed, auditable architectures across every environment, every database engine, and every team.

Because under GDPR, one incident is all it takes to make ownership everyone’s problem.

The article Beyond TDE and TLS: Bridging the Data Security Governance Gap in Lower Environments first appeared on dbi Blog.

Not enough space for using ODABR? Reduce /u01 and /opt size on your ODA

Yann Neuhaus - Fri, 2026-04-24 01:42
Introduction

The patching of an Oracle Database Appliance needs to be secured, because it is not limited to patching the databases: plenty of other components are embedded in the patch, the goal being to keep everything updated. A rollback may be needed.

For sure, there is no possible rollback for firmware, BIOS and ILOM (it’s unlikely you would need to go back to a previous version), but you may need to roll back when it comes to patching the OS and the GI stack. Unlike DB homes, the OS and GI stack are located on the ODA local disks, and can be protected during patching with LVM snapshots. This is the purpose of ODABR: making LVM snapshots of /, /u01 and /opt before applying the patch. If something goes wrong, you can revert to the previous stable state.

ODABR needs enough space on the local disks for creating the snapshots, and sometimes the available free space is not enough. How do you deal with that?

When to take the snapshots?

Before using ODABR, I would recommend doing a cleanup of the following filesystems: /, /u01 and /opt. You will need 20+% of free space in these filesystems, otherwise the patch prechecks won’t give you the green light. ODABR is based on Copy-On-Write technology, meaning that old versions of the changed blocks will be kept for going back in time if needed. Don’t unzip the patch files after taking the snapshots if the patch files reside on the local disks! Once you’re ready to patch, the minute before registering the patch and applying the DCS components update, you can use ODABR to take the snapshots.

When to release the snapshots?

Once you have successfully applied the system patch and GI patch, and once you have verified that everything runs fine with the new GI binaries, you can safely use ODABR to delete the snapshots. If you need to apply multiple patches, make new snapshots before each jump; you will never need to revert to the oldest version once you are in a stable intermediate version.

Are the ODABR snapshots useful for DB homes?

Absolutely not. DB homes are located under /u01/app/odaorahome, a dedicated ACFS filesystem on the DATA diskgroup. You don’t need snapshot features on this particular filesystem because patching a DB home will create a new home and move the database into this new DB home, the old one being kept as long as you want. You can delete older DB homes months after patching was done. I usually delete old ones as part of the patching job once everything is fine, but I always back up the old DB home with a tar czf just in case. If you need to revert a database to an old DB home, you can do it manually and use an RMAN backup to restore the database, or possibly roll back the datapatch if needed. This is rarely used.

Location of the snapshots

ODABR relies on LVM snapshots, not something really specific to ODA. Snapshots are stored in the remaining space of the Volume Group. Most ODAs were sold with 2x 500GB local disks (SSDs) protected with a software RAID and allocated to a single Volume Group VolGroupSys. When doing the ODA setup, only a part of the storage is allocated to the system volumes as Logical Volumes (LogVolSwap, LogVolOpt, LogVolU01, LogVolRoot), meaning that a comfortable amount of space is available for ODABR:

pvs
  PV           VG          Fmt  Attr PSize   PFree
  /dev/md126p3 VolGroupSys lvm2 a--  446.09g 285.09g

/opt/odabr/odabr backup -snap
...
SUCCESS: 2026-04-22 17:10:33: LVM snapshots backup done successfully

pvs
  PV           VG          Fmt  Attr PSize   PFree
  /dev/md126p3 VolGroupSys lvm2 a--  446.09g 150.09g

ODABR additionally backs up the ASM metadata, because if you revert to a previous set of LVM snapshots with an older GI release, your ASM metadata also needs to be in the corresponding version.

Default snapshot sizes and real needs

The default snapshot size is the size of the Logical Volumes; here is an example for a 30GB /, a 55GB /opt and a 50GB /u01:

/opt/odabr/odabr infosnap

│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│
 odabr - ODA node Backup Restore - Version: 2.0.2-06
 Copyright 2013, 2025, Oracle and/or its affiliates.
 --------------------------------------------------------
 RACPack, Cloud Innovation and Solution Engineering Team
│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│

LVM snap name         status                COW Size              Data%
-------------         ----------            ----------            ------
root_snap             active                30.00 GiB             0.01%
opt_snap              active                55.00 GiB             0.01%
u01_snap              active                50.00 GiB             0.01%

This is quite comfortable: there certainly won’t be 100% of the blocks changed during the patching. If you don’t have these 135GB of free space, you can specify lower values for the snapshots, but make sure to limit modifications on the files residing on these 3 Logical Volumes during the patch:

/opt/odabr/odabr backup -snap -osize 18 -rsize 10 -usize 25

These settings were enough for my latest patches.

The ODA X9-2 series exception

As far as I remember, ODAs have always had comfortable system disk sizes. 500GB is large enough for the OS, odacli and the GI stack plus a couple of files of your own. Since version 19.10, DB homes and the diagnostic destination are located within ACFS volumes, meaning that they don’t take a single MB from the local disks:

df -h | grep -e Filesystem -e u01
Filesystem                          Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolU01    49G   27G   20G  58% /u01
/dev/asm/odabase_n0-225              30G  2.5G   28G   9% /u01/app/odaorabase0
/dev/asm/orahome_sh-225              80G   44G   37G  54% /u01/app/odaorahome

As 19.10 was already available when the X9-2 was released, Oracle decided to decrease the system disks from 500GB to 250GB, meaning that you didn’t have the margin you had with the previous generations. The problem was solved with the X10 series onwards: system disks are back to 500GB, as they were on the X8-2 series.

What I mean is that you’ll most probably be concerned by this space constraint for ODABR if you have an X9-2 series ODA.

How can you deal with insufficient disk space when using ODABR?

Imagine you have this kind of configuration:

pvs
  PV           VG          Fmt  Attr PSize   PFree
  /dev/md126p3 VolGroupSys lvm2 a--  222.56g 2.56g

On this ODA, /opt and /u01 have been extended for some reason, and there is almost no space left in the Volume Group.

If you try to use ODABR to take snapshots, you will get this error:

Available LVM size X is less than required snapshot size Y

2.56GB is way too small for using ODABR: you will need to reduce the size of /u01, /opt or both. First, a cleanup is needed to free up some space. Once done, you’ll be able to reduce the size of /u01 and/or /opt. But these filesystems will need to be unmounted. And it cannot be done without stopping/starting some processes. Actually, pretty much all the processes running on your ODA.

Stopping the processes and reducing the filesystems

Let’s reduce the /u01 and /opt Logical Volumes’ size in this example:

su - root

df -h /u01 /opt

Filesystem                          Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolU01    61G   23G   36G  39% /u01
/dev/mapper/VolGroupSys-LogVolOpt    70G   48G   22G  68% /opt

export ORACLE_HOME=/u01/app/19.30.0.0/grid
$ORACLE_HOME/bin/crsctl stop crs

lvreduce -L 50G /dev/VolGroupSys/LogVolU01 -r
Do you want to unmount "/u01" ? [Y|n] y
umount: /u01: target is busy.
fsadm: Cannot proceed with mounted filesystem "/u01".
  /usr/sbin/fsadm failed: 1
  Filesystem resize failed.

lsof /u01/
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF    NODE NAME
java    23727 root   26r   REG  252,3 16303574 1208968 /u01/app/19.30.0.0/grid/jlib/srvm.jar

kill -9 23727 ; umount /u01
umount: /u01: target is busy.

kill -9 23727 ; umount /u01
-bash: kill: (23727) - No such process

lvreduce -L 50G /dev/VolGroupSys/LogVolU01 -r
fsck from util-linux 2.32.1
/dev/mapper/VolGroupSys-LogVolU01: 76718/4063232 files (7.3% non-contiguous), 6220430/16252928 blocks
resize2fs 1.46.2 (28-Feb-2021)
Resizing the filesystem on /dev/mapper/VolGroupSys-LogVolU01 to 13107200 (4k) blocks.
The filesystem on /dev/mapper/VolGroupSys-LogVolU01 is now 13107200 (4k) blocks long.

  Size of logical volume VolGroupSys/LogVolU01 changed from 62.00 GiB (1984 extents) to 50.00 GiB (1600 extents).
  Logical volume VolGroupSys/LogVolU01 successfully resized.


systemctl stop initdcsagent
systemctl stop initdcscontroller
systemctl stop initdcsadmin
systemctl stop oda-mysql
systemctl stop oracle-ODA_DCS-ODA_DCS0
systemctl stop oracle-tfa
systemctl stop oracle-ohasd


lsof /opt
COMMAND     PID USER   FD   TYPE DEVICE SIZE/OFF    NODE NAME
bash       6697 root  cwd    DIR  252,2     4096 1703937 /opt/dbi
OSWatcher  7799 grid  cwd    DIR  252,2     4096 2235031 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/oswbb
OSWatcher  7799 grid    1w   REG  252,2     4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
OSWatcher  7799 grid    2w   REG  252,2     4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
OSWatcher  7799 grid  255r   REG  252,2    65286 2235047 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/oswbb/OSWatcher.sh
OSWatcher  9903 grid  cwd    DIR  252,2     4096 2235031 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/oswbb
OSWatcher  9903 grid    1w   REG  252,2     4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
OSWatcher  9903 grid    2w   REG  252,2     4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
OSWatcher  9903 grid  255r   REG  252,2     8035 2235046 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/oswbb/OSWatcherFM.sh
sleep     56757 grid  cwd    DIR  252,2     4096 2235031 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/oswbb
sleep     56757 grid    1w   REG  252,2     4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
sleep     56757 grid    2w   REG  252,2     4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
sleep     56835 grid  cwd    DIR  252,2     4096 2235031 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/oswbb
sleep     56835 grid    1w   REG  252,2     4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
sleep     56835 grid    2w   REG  252,2     4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)

kill -9 7799 9903


lvreduce -L 55G /dev/VolGroupSys/LogVolOpt -r
Do you want to unmount "/opt" ? [Y|n] y
fsck from util-linux 2.32.1
/dev/mapper/VolGroupSys-LogVolOpt: Inode 655911 extent tree (at level 1) could be narrower.  IGNORED.
/dev/mapper/VolGroupSys-LogVolOpt: 57911/4587520 files (2.4% non-contiguous), 12785629/18350080 blocks
resize2fs 1.46.2 (28-Feb-2021)
Resizing the filesystem on /dev/mapper/VolGroupSys-LogVolOpt to 14417920 (4k) blocks.
The filesystem on /dev/mapper/VolGroupSys-LogVolOpt is now 14417920 (4k) blocks long.

  Size of logical volume VolGroupSys/LogVolOpt changed from 70.00 GiB (2240 extents) to 55.00 GiB (1760 extents).
  Logical volume VolGroupSys/LogVolOpt successfully resized.

As I don’t want to manually restart the various processes, and as I usually do a sanity reboot before patching, let’s reboot the server. Everything will then be properly restarted afterwards.

reboot
...

df -h /u01 /opt
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolU01   49G   23G   25G  49% /u01
/dev/mapper/VolGroupSys-LogVolOpt   54G   48G  4.0G  93% /opt
Further recommendations

You don’t really need a lot of space on the system disks. Put your patch files and other specific files on an NFS volume shared across your ODAs, and keep the system disks at their default size to keep a comfortable margin for ODABR, especially on the ODA X9-2 series. If needed, allow a few more GB to /u01 and/or /opt, but keep at least 70GB free for ODABR snapshots. Anticipate the patching and test ODABR several days before: it does not cost anything, and snapshots can be removed immediately without any downtime. The worst case is having to patch now and discovering that there is no space left on disk for using ODABR.

Not being able to secure your patch with ODABR is the second NOGO for patching, the first one being a faulty component reported by odaadmcli show server.

Conclusion

If you don’t have enough free space for ODABR snapshots, postpone the patching. ODABR is a mandatory safety net. Reducing the /u01 and /opt filesystems’ size is possible. It will require stopping everything, but as you will patch your ODA, you’ve already allocated a sufficient downtime.

The article Not enough space for using ODABR? Reduce /u01 and /opt size on your ODA first appeared on dbi Blog.

ARRAY PROCESSING

Tom Kyte - Thu, 2026-04-23 05:59
Hi Tom, I have created a TYPE as:

Create or replace type NUM_ARRAY as table of varchar2(1000);

Then I have a procedure which accepts an ARRAY of SQL statements and executes them one at a time. Now this takes a lot of time to process a batch of SQL statements. I was wondering if I can use FORALL to execute them in bulk, as these are all insert/update/delete statements. Here is the procedure:

CREATE OR REPLACE procedure give_me_an_array( p_array in num_array )
as
begin
  for i in 1 .. p_array.count
  loop
    BEGIN
      execute immediate p_array(i);
    EXCEPTION
      WHEN OTHERS THEN NULL;
    END;
    --dbms_output.put_line( p_array(i) );
  end loop;
  commit;
end;

We are using Java as the front-end. Your input will be highly appreciated as I am competing against SQL Server, which is doing this extremely fast without using any arrays. I am in the process of migrating from SQL Server to Oracle and the condition is that Oracle must run faster than SQL Server. Thanks, Nirmal
Categories: DBA Blogs

SQL profile and SQL plan baseline in Standard Edition 2 (SE2)

Tom Kyte - Thu, 2026-04-23 05:59
Hello, If we are using Oracle SE2, there is no Tuning Pack or Diagnostic Pack available, so I don't think it's possible to create a SQL profile with SQL Tuning Advisor. Would it be possible with the SQLTXPLAIN coe_xfr_sql_profile.sql script? I know it's possible to create a SQL plan baseline from the cursor cache, but what if the good plan is not in the cursor cache? Thanks.
Categories: DBA Blogs

Bulk Update of GoldenGate Extract Profiles

Yann Neuhaus - Thu, 2026-04-23 01:30

After writing blogs about Default profile updates and a bug affecting profiles in GoldenGate 26ai, I realized it would be a good idea to present a quick way to update all processes at once with the REST API. Because even if you set a new profile as default with the {"isDefault": true} flag, previously created extracts and replicats will not be updated!

To explain this further, let’s look at an extract setting using the retrieve_extract endpoint. I’m using the Python client I presented in another blog. Nothing I’m talking about here is specific to extracts, and it can be extended to replicats. However, it doesn’t apply to distribution paths.

Extract with no profile assigned

If an extract was created in the most basic configuration, it inherits the Default profile implicitly, but this information is not registered in the extract definition. Below is the configuration of such an extract. To save you time, just remember that there is no mention of any profile setting.

>>> ogg_client.retrieve_extract('EXTSRC')
{'$schema': 'ogg:extract', 'credentials': {'alias': 'cdb01', 'domain': 'OracleGoldenGate'}, 'begin': 'now', 'encryptionProfile': 'LocalWallet', 'targets': [{'name': 'aa', 'path': 'PDB1', 'sizeMB': 2000, 'sequenceLength': 9, 'sequenceLengthFlip': False, 'sequence': 1, 'offset': 1399, 'remote': False}], 'config': ['--- Auto generated Parameter File, do not edit ---', 'EXTRACT EXTSRC', 'USERIDALIAS cdb01 DOMAIN OracleGoldenGate', 'EXTTRAIL PDB1/aa', '--- End of auto generated Parameter File ---', 'SOURCECATALOG PDB1', 'TABLE APP_PDB1.*;'], 'source': 'tranlogs', 'type': 'Integrated', 'registration': {'share': False, 'containers': ['PDB1'], 'csn': 2809889}, 'status': 'running'}
Extract with a profile set

If a profile is assigned to the extract, the managedProcessSettings key is added to the extract configuration. This is true whether you assign a custom profile or re-assign the Default profile after having changed the extract configuration. Below, we can see the difference with the following key/value pair: 'managedProcessSettings': 'ogg:managedProcessSettings:dbiProfile'

{'$schema': 'ogg:extract', 'credentials': {'alias': 'cdb01', 'domain': 'OracleGoldenGate'}, 'begin': 'now', 'encryptionProfile': 'LocalWallet', 'managedProcessSettings': 'ogg:managedProcessSettings:dbiProfile', 'targets': [{'name': 'aa', 'path': 'PDB1', 'sizeMB': 2000, 'sequenceLength': 9, 'sequenceLengthFlip': False, 'sequence': 1, 'offset': 1399, 'remote': False}], 'config': ['--- Auto generated Parameter File, do not edit ---', 'EXTRACT EXTSRC', 'USERIDALIAS cdb01 DOMAIN OracleGoldenGate', 'EXTTRAIL PDB1/aa', '--- End of auto generated Parameter File ---', 'SOURCECATALOG PDB1', 'TABLE APP_PDB1.*;'], 'source': 'tranlogs', 'type': 'Integrated', 'registration': {'share': False, 'containers': ['PDB1'], 'csn': 2809889}, 'status': 'running'}
Extract with a custom configuration

If an extract has a unique custom configuration that is not a profile, the full configuration is added as JSON to the extract definition under the same managedProcessSettings key. Here is an example:

{'$schema': 'ogg:extract', 'credentials': {'alias': 'cdb01', 'domain': 'OracleGoldenGate'}, 'begin': 'now', 'encryptionProfile': 'LocalWallet', 'managedProcessSettings': {'autoStart': {'enabled': True, 'delay': 60}, 'autoRestart': {'enabled': False, 'onSuccess': False, 'delay': 0, 'retries': 9, 'window': 0, 'disableOnFailure': True, 'failures': 0}}, 'targets': [{'name': 'aa', 'path': 'PDB1', 'sizeMB': 2000, 'sequenceLength': 9, 'sequenceLengthFlip': False, 'sequence': 1, 'offset': 1399, 'remote': False}], 'config': ['--- Auto generated Parameter File, do not edit ---', 'EXTRACT EXTSRC', 'USERIDALIAS cdb01 DOMAIN OracleGoldenGate', 'EXTTRAIL PDB1/aa', '--- End of auto generated Parameter File ---', 'SOURCECATALOG PDB1', 'TABLE APP_PDB1.*;'], 'source': 'tranlogs', 'type': 'Integrated', 'registration': {'share': False, 'containers': ['PDB1'], 'csn': 2809889}, 'status': 'running'}
How to modify a single extract’s profile with the REST API?

Modifying the profile of an extract or replicat with the REST API means updating the managedProcessSettings key of the process. For an extract, you would use the update_extract endpoint, with a data payload that only contains the managedProcessSettings key, and its value. It can be the profile name or the full profile definition, as shown above. Here is an example of a single update of an extract:

>>> ogg_client.update_extract(
    extract='EXTSRC',
    data={
        'managedProcessSettings': 'ogg:managedProcessSettings:dbiProfile'
    }
)

And do not worry about entering a wrong name for the profile. If you do this, the API returns the following error:

>>> ogg_client.update_extract(
    extract='EXTSRC',
    data={
        'managedProcessSettings': 'ogg:managedProcessSettings:NonExistingProfile'
    }
)
Exception: ERROR - https://vmogg/services/ogg_test_01/adminsrvr/v2/extracts/EXTSRC: The managed process settings profile 'ogg:managedProcessSettings:NonExistingProfile' does not exist. ; INFO - https://vmogg/services/ogg_test_01/adminsrvr/v2/extracts/EXTSRC: The item type  with name 'ogg:managedProcessSettings:NonExistingProfile' does not exist.
How to modify all extracts and replicats?

Based on the above, modifying all extracts and replicats is easy. You should just iterate over your extracts and replicats and apply the profile that you want. Below is an example script where you can modify the profile_name variable and the connection details to the Administration Service of your deployment. You could even decide to have different profiles assigned to extracts and replicats, if needed.

from oggrestapi import OGGRestAPI

profile_name = 'dbiProfile'

ogg_client = OGGRestAPI(
    url='https://vmogg',
    user='ogg',
    password='ogg',
    reverse_proxy=True,
    deployment='ogg_test_01'
)

for extract in ogg_client.list_extracts():
    extract_name = extract['name']
    print(f"Updating extract {extract_name}")
    ogg_client.update_extract(
        extract=extract_name,
        data={
            'managedProcessSettings': f'ogg:managedProcessSettings:{profile_name}'
        }
    )

for replicat in ogg_client.list_replicats():
    replicat_name = replicat['name']
    print(f"Updating replicat {replicat_name}")
    ogg_client.update_replicat(
        replicat=replicat_name,
        data={
            'managedProcessSettings': f'ogg:managedProcessSettings:{profile_name}'
        }
    )

After running this against the Administration Service, all your extracts and replicats will now follow the same new profile.

The article Bulk Update of GoldenGate Extract Profiles first appeared on dbi Blog.

Losing Java remote console full control during ODA Reimage

Yann Neuhaus - Wed, 2026-04-22 03:18

Recently I had to patch an ODA at one of our customers from 19.20 to 19.26, going through 19.24. As you are aware, since we go from Oracle Linux 7 (19.20) to Oracle Linux 8 (19.24), we need to patch the ODA using Data Preserving Reprovisioning. This includes a reimage of the nodes…

Data Preserving Reprovisioning allows you to reprovision an already deployed Oracle Database Appliance system while keeping the storage and the databases on the appliance, so without any modification to the information stored in ASM (storage). The information of the source system is saved in a server data archive file. The appliance is then reimaged with the Oracle Database Appliance release ISO image, and the saved metadata is used to directly reprovision the system and bring back all the resources such as databases, DB systems, Oracle ASR, and others.

The reimage can only be done using the Java remote console, available with the ILOM. Remember that to attach the ISO to the Java console and use it during the reimage, you need to add it and connect it as described in the next picture.

It is important to see the following red message to ensure that the connection with the ISO is made and that it can be used.

The problem is that after a while, during the reimage process, the Java remote console loses full control of the node, with the consequence of losing the ISO connection and a failure of the reimage process.

This can be seen in the console with the message (View Only) displayed in the bar and the red cross on top of the computer (screen)/mouse icon.

Here:

When it should be:

Here is the whole picture showing the connection problem.

And we can see the message: “Sorry, keyboard and mouse connection has been lost. If storage was being shared it has been lost also”.

A correct connection would be the following one, with the (Full Control) message displayed in the bar and no loss of the keyboard/mouse and storage (ISO) connection.

After several tries, it was always the same problem: after a few minutes, loss of the connection…

I then decided to check the ILOM sessions, connecting to the ILOM through an SSH connection.

login as: root
Keyboard-interactive authentication prompts from server:
| Password:
End of keyboard-interactive prompts from server

Oracle(R) Integrated Lights Out Manager

Version 5.1.1.23 r151958

Copyright (c) 2023, Oracle and/or its affiliates. All rights reserved.

Warning: HTTPS certificate is set to factory default.

Hostname: ODA01-ilo

-> show /SP/sessions

 /SP/sessions
    Targets:
        55342
        55346 (current)

    Properties:

    Commands:
        cd
        delete
        show

I found it surprising to see such high session IDs. I decided to reset the ILOM. This would of course not restart the node.

-> reset /SP
Are you sure you want to reset /SP (y/n)? y
Performing reset on /SP

I then connected again to the ILOM through SSH to check the sessions:

login as: root
Keyboard-interactive authentication prompts from server:
| Password:
End of keyboard-interactive prompts from server

Oracle(R) Integrated Lights Out Manager

Version 5.1.1.23 r151958

Copyright (c) 2023, Oracle and/or its affiliates. All rights reserved.

Warning: HTTPS certificate is set to factory default.

Hostname: ODA01-ilo

-> show /SP/sessions

 /SP/sessions
    Targets:
        1 (current)

    Properties:

    Commands:
        cd
        delete
        show

The session IDs were back to one, and I did not have any connection loss issue anymore. I could successfully reimage the ODA to version 19.24.

The article Losing Java remote console full control during ODA Reimage first appeared on dbi Blog.

Oracle AQ Scaling with PL/SQL Callbacks, DBMS_SCHEDULER Jobs

Tom Kyte - Tue, 2026-04-21 23:56
Hello, I have an Oracle Advanced Queuing queue and would like to be able to process this queue from inside the database, as opposed to using an external app server. However, I am concerned about the scalability of internal solutions. Please assume the following: 1. The queue receives an arbitrary number of messages. 2. Each message results in a PL/SQL procedure being called, which can take an arbitrary amount of time. 3. You want to limit the number of messages that can be processed at once to some value N.

Solution #1: Run N permanent DBMS_SCHEDULER jobs that loop and call DBMS_AQ.DEQUEUE with WAIT_FOREVER. This is good because you can easily cap how many jobs you want to have processing this queue by adjusting the N number of permanent DBMS_SCHEDULER jobs. This is bad because all of these permanent jobs will reduce the available JOB_QUEUE_PROCESSES. It is fine if you only need a handful of jobs to process your queue, but as you scale the number of jobs up, eventually you will degrade the other unrelated jobs that need to run in the system. Does calling DBMS_LOCK.SLEEP or DBMS_AQ.DEQUEUE inside a DBMS_SCHEDULER job free up a JOB_QUEUE_PROCESSES slot while the job is sleeping? My guess is no.

Solution #2: Use a PL/SQL callback, and in the callback, create a one-time DBMS_SCHEDULER job per message, and use a common resource constraint, such that only N scheduler jobs can run at once. For example, if you set a cap of 128 jobs in your resource constraint, and you receive 1000 messages, the PL/SQL callback will create 1000 jobs, but only 128 jobs will be running at once, and the rest will be blocked. The downside here is that you have to create a whole DBMS_SCHEDULER job to process a message. This will increase the time between receiving a message and starting to process it, and just seems like an overall heavy solution. Lightweight jobs won't help because the resource constraints don't work for lightweight jobs. In fact, you might as well not use AQ at all if you go down this route. Instead of writing messages to a queue, which later calls DBMS_SCHEDULER.CREATE_JOB, you could simply call DBMS_SCHEDULER.CREATE_JOB directly with a resource constraint.

Solution #3: Use an external app server. Run N threads, where each thread grabs a connection from a connection pool, loops and calls DBMS_AQ.DEQUEUE with WAIT_FOREVER. This is the best approach because you can set N to cap the number of connections processing messages in parallel easily, and you do not have to block up any slots in JOB_QUEUE_PROCESSES. However, this has downsides. Your app server often has much more downtime than your database due to releases, network partitions, and many other various issues. If your session which is executing a long running PL/SQL procedure is terminated, you cannot assume whether the PL/SQL procedure on the server will complete or be stopped. While this is also true for DBMS_SCHEDULER Jobs that end up getting kil...
Categories: DBA Blogs

PL/SQL Execution Time

Tom Kyte - Tue, 2026-04-21 23:56
We have PL/SQL stored procedures that perform poorly. Using DBA_HIST or any other AWR/ASH metrics, is it possible to determine the runtimes of PL/SQL procedures?
Categories: DBA Blogs

Partitions evicting sequence cached values

Tom Kyte - Tue, 2026-04-21 23:56
Recently we observed that cached sequence values were lost significantly, appearing as large gaps in persisted values. Our system does not expect gapless sequence values; however, we are trying to understand the root cause. gv$rowcache shows high getmisses in histogram, objects, segments, followed by sequences. An SGA dump shows many grows in the shared pool and shrinks in the buffer cache, and gv$db_object_cache shows high loads. We did a trend of sequence value jumps or losses from cache based on gaps found in persisted values across time. I will list the events that may have contributed to the gaps. Please let me know if this is an incorrect hypothesis.

1. Onset of moderately large sequence jumps aligns with migration to a multitenant database.
2. Prior to the multitenant migration, we never had histogram collection as part of stats; it appears a DBA ran stats with histogram collection turned on at the time of the multitenant migration, and sequence gaps were silently occurring.
3. A few months after the multitenant migration, a tablespace rebuild activity followed by stats collection was done. This time the standard stats collection script was run and removed stats from many tables, but not a few core tables that are extensively used in the application. So never-before-seen histogram traffic is still continuing to the dictionary cache.
4. After the multitenant migration, another effort started where many tables started to get partitioned. There were 2 large one-time efforts that created several thousands of partitions, followed by regular scheduled jobs creating a few hundreds of partitions for historical data management.
5. The sequence jumps (and loss from cache) seem to continue for many months, unnoticed, as the application is not affected by gaps.
6. Some of the regular scheduled batch jobs were missed, so there was a large gap where tables were not monthly partitioned properly as expected. At discovery, a one-time catchup activity was performed where around 800 partitions were created.
7. The sequence jump phenomenon exploded uncontrollably and was discovered by a partner system. We pinned the sequence in memory to calm it down.

Here is my draft hypothesis for a root cause, please correct it if it does not hold:

a) The multitenant migration increased traffic to the dictionary cache; sequence metadata is evicted and reloaded constantly, so values have jumps/gaps.
b) As many partitions are created, more traffic arrives at the dictionary cache and pressure increases; sequence jumps occur silently meanwhile.
c) Cumulative traffic to the dictionary cache is increasing every month as hundreds of partitions are added by the monthly job and pressure is getting intolerable; sequence evictions are on the rise.
d) The one-time gap-covering exercise for missed partition creation pushed the traffic over a threshold; sequence jumps skyrocketed and are now visible in the application as a very large gap.

Do these dots connect as a root cause?
Categories: DBA Blogs

PDB Snapshot Copy/Carousel

Tom Kyte - Tue, 2026-04-21 23:56
Hello Chris/Connor, Hope you are doing well. We are using ExaC@C with 19c databases at work and are exploring whether we can use PDB Snapshot Carousel and/or PDB Snapshot Copy feature. It might be just me but I am somewhat confused with the "art of possible" while using ASM. PDB Snapshot Copy Process (Doc ID 2730771.1) appears to suggest that we can use sparse disk group feature on exadata to either create PDB snapshot copy or PDB Snapshot Carousel However, ORA-65227 during pluggable database snapshot (Doc ID 3024542.1) appears to suggest that the feature is simply not supported in 19c and only available from 21c onwards. https://www.dbarj.com.br/en/2021/09/creating-a-snapshot-sparse-clone-from-a-different-release-update/ appears to even provide an example of how PDB Snapshot Copy can be used to patch a 19c database. So are we able to use PDB Snapshot Carousel with ASM Sparse Disk Group in ExaC@C by using Sparse Disk Group feature of ASM only (and not using any file system)? I am confused... Thanks in advance, Narendra
Categories: DBA Blogs

Flashback Schema

Tom Kyte - Tue, 2026-04-21 23:56
Hello Tom, I use Oracle Flashback features a lot. We have several development and test environments here and the combination of flashback database + replay is priceless. My question is this: why don't we have a "flashback schema" feature? I know you can simulate that with PL/SQL, but that's just for tables. A schema is much more than that: PL/SQL code, grants, etc. If you consolidate databases into schemas inside a large machine, you lose the ability to flash them back; to maintain this ability you'll need virtualization (or pluggable databases :)). So, why was it never done? Is it impossible and I fail to see the reason? Thank you for your time.
Categories: DBA Blogs

Fuzzing PL/SQL

Pete Finnigan - Tue, 2026-04-21 23:56
I have not had a chance to write any detailed blogs of late as we have been very busy with paying work and also working on our software products as we released version 2025 of PFCLScan and version 2026 is very....[Read More]

Posted by Pete On 20/04/26 At 09:34 AM

Categories: Security Blogs
