Feed aggregator
Delete a GoldenGate deployment when the password is lost
Recently, I was asked how to recreate a GoldenGate deployment if the password for the Security user (the first user created with the deployment) was lost. This is an interesting question that I figured was worth writing about.
As a reminder, you have two main ways of creating and deleting deployments in GoldenGate:
- Using the configuration assistant oggca.sh
- Using the REST API and its deployment endpoints
I already covered the creation and removal of deployments with the configuration assistant in a previous blog. I also wrote about deployment creation with the REST API, but not yet about deletion.
Deleting the deployment from oggca.sh won’t work
From the GoldenGate Configuration Assistant (oggca.sh script), deleting a deployment is straightforward, but as shown below, you will be asked to input not only the Service Manager credentials, but also the deployment credentials!
This means that oggca.sh cannot be used for such a task.
To overcome this issue, let’s list the steps taken by oggca.sh when deleting a GoldenGate deployment:
- Verify the deployment credentials
- Stop the deployment services
- Unregister the deployment in the Service Manager
While asking for the deployment’s password might be relevant to avoid deleting the wrong deployment, you do not strictly need it. In fact, you can stop the deployment services from the Service Manager itself. To do so, use the update_deployment endpoint (PATCH /services/{version}/deployments/{deployment}) to change the status of the deployment to stopped. Using the Python client I presented in another blog, I will open a connection to the Service Manager and stop the deployment ogg_test_01.
>>> from oggrestapi import OGGRestAPI
>>> ogg_client = OGGRestAPI(url="http://vmogg:7809", username="ogg", password="***")
>>> ogg_client.update_deployment('ogg_test_01', data={"status":"stopped"})
{'$schema': 'api:standardResponse', 'links': [{'rel': 'canonical', 'href': 'https://vmogg/services/ServiceManager/v2/deployments/ogg_test_01', 'mediaType': 'application/json'}, {'rel': 'self', 'href': 'https://vmogg/services/ServiceManager/v2/deployments/ogg_test_01', 'mediaType': 'application/json'}], 'messages': []}
Once this is done, you can unregister the deployment from the Service Manager with the remove_deployment endpoint (DELETE /services/{version}/deployments/{deployment}).
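The DELETE call targets the same URL layout as the links returned in the JSON response above. Here is a minimal sketch; the helper function name is mine, and the commented client call assumes the Python client exposes the endpoint as a `remove_deployment()` method, mirroring `update_deployment()`:

```python
def deployment_endpoint(base_url: str, deployment: str) -> str:
    """Build the Service Manager deployment URL targeted by the DELETE call
    (same layout as the 'canonical' link in the JSON response above)."""
    return f"{base_url}/services/ServiceManager/v2/deployments/{deployment}"

# Hypothetical client call, mirroring update_deployment() shown earlier:
# ogg_client.remove_deployment('ogg_test_01')
print(deployment_endpoint("https://vmogg", "ogg_test_01"))
```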
And for this, we never needed the deployment password!
To clean your installation, make sure to remove all files related to the old deployment. More specifically:
- The deploymentConfiguration.dat of your Service Manager was already edited; you do not need to clean it manually.
- You can delete or archive the deployment’s main directory.
- The $OGG_SM_HOME/var/run directory should be cleaned of the .dat files named after the deployment: rm $OGG_SM_HOME/var/run/ogg_test_01*
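If you prefer to script the var/run cleanup, here is a minimal Python sketch. The function name and return value are mine; point run_dir at your actual $OGG_SM_HOME/var/run directory:

```python
import glob
import os

def clean_deployment_run_files(run_dir: str, deployment: str) -> list:
    """Delete the files named after the deployment (e.g. ogg_test_01*) from
    the Service Manager run directory; return the removed paths."""
    removed = []
    for path in sorted(glob.glob(os.path.join(run_dir, deployment + "*"))):
        os.remove(path)
        removed.append(path)
    return removed

# Equivalent of: rm $OGG_SM_HOME/var/run/ogg_test_01*
# clean_deployment_run_files("<OGG_SM_HOME>/var/run", "ogg_test_01")
```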
That’s it. You have successfully deleted your deployment, and you can re-create it anytime you want. And this time, remember to store the password!
The article Delete a GoldenGate deployment when the password is lost first appeared on dbi Blog.
Beyond TDE and TLS: Bridging the Data Security Governance Gap in Lower Environments
The Multi-Layered Threat: Why One Tool is Never Enough
We’ve all left the key in our bike lock at least once. This simple human oversight makes the heaviest chain irrelevant and we often see the exact same logic applied to data environments. Most organizations spend months hardening their production core but leave the keys in the locks of the dev and staging systems that sit right next to it.
The numbers back this up. While 91% of organizations are concerned about their exposure across lower environments, a staggering 86% still allow data compliance exceptions in non-production. This gap between concern and action has real consequences: more than half of these organizations have already experienced a breach or audit failure in their testing and development systems (PR Newswire).
Effective security is rarely a single-layer problem. Between the stolen backup that lands in the wrong hands, the analyst running a SELECT on a table they probably shouldn’t see, and the packet quietly crossing an unsecured network segment, the attack surface is wide, and no single mechanism covers it all.
Transport Layer Security (TLS), Transparent Data Encryption (TDE), symmetric encryption, dynamic masking, row-level security, data anonymization: for most RDBMS, the options exist and they work. Most teams already have access to at least one of them. The real challenge isn’t finding a solution; it’s understanding what each one actually protects, where it breaks down, and whether it survives contact with a production environment.
Shadow Environments: The Weakest Link in Your Data Chain
Here is the uncomfortable truth: non-production environments are often where security policies are quietly buried. It starts with a backup restored without encryption, or real customer data seeding a dev database “just for a quick test”.
The fundamental problem is that most protections assume a controlled environment. Encryption can be bypassed by someone with the right credentials. Masking can be misconfigured. Row-level security doesn’t help much when the whole database is sitting on a developer’s laptop.
Technical Trade-offs: Finding Your Strategic Fit
To make this reasoning concrete, the table below maps six core techniques against the operational criteria that define their success. The goal isn’t to pick a favorite tool, but to identify which combination actually addresses your specific vulnerabilities.
The table compared each technique on seven criteria: protection against physical file theft, read access (SELECT) and network sniffing; performance impact; granularity; and applicability in production (live data) and in DEV. Each control operates at a different granularity:
- TLS: data packet
- TDE: column, tablespace or datafile
- Symmetric encryption (applicative): field value
- Dynamic masking: column
- Row-level security: row
- Data anonymization: field or column
- TLS protects data in motion. The moment a packet leaves a server, TLS ensures anyone intercepting it sees encrypted noise. What it doesn’t do is equally important: it has no opinion about who queries your database or what’s stored on disk. Once the data arrives, TLS’s job is done. TLS is now the industry standard for securing data in motion. (SQL Server technical blog about TLS here)
- TDE encrypts the physical files that make up your database (data files, log files, backups), so that anyone who gets their hands on them without the encryption key can’t read them. The performance impact is a negligible overhead; in fact, Microsoft, for example, enables TDE by default for all its cloud-based databases. (PostgreSQL technical blog about TDE here) Deploying TDE in development is a security best practice, but it quickly becomes an operational nightmare for environment refreshes, especially if you want to use distinct certificates to avoid leaking production secrets into lower environments.
- Symmetric encryption is field-level encryption applied directly in the application layer. Unlike TDE, it survives a legitimate SELECT: even a user with full read access sees ciphertext unless they hold the applicative key. The trade-off is performance: encrypting and decrypting at scale adds up quickly. (MongoDB technical blog about Client-side Field Level Encryption here)
- Dynamic masking doesn’t encrypt anything. It intercepts query results and replaces sensitive values with masked equivalents based on the user’s role. Fast, lightweight, zero application changes required. The catch: it only controls what’s displayed, not what’s stored. A user with sufficient privileges can bypass it entirely. (SQL Server technical blog about dynamic masking here)
- Row-level security enforces access at the row level directly inside the database engine. Users see only the rows they’re allowed to see, regardless of how the query is written. No application changes, no trust placed in the calling layer. The policy lives in the database and applies universally. (Oracle technical blog about Virtual Private Database here)
- Data anonymization doesn’t protect sensitive data, it eliminates it. Real values are replaced with realistic but fictional equivalents (synthetic data), permanently and irreversibly. No encryption key to steal, no masking rule to bypass. Whatever leaks simply isn’t sensitive anymore. This is why anonymization is the only control that makes unconditional sense in non-production environments. A stolen backup, a misconfigured SELECT, a sniffed packet: none of it matters if the data was anonymized before it ever reached a staging environment. We covered how to implement it in practice in a previous post.
We are shifting from a technical challenge to a human and organizational one. The security landscape moves so fast that the struggle of mastering every layer has become overwhelming.
This complexity is where governance goes to die. Infrastructure teams build the walls, developers write the code, and DBAs manage the house, but the accountability for the data itself often falls through the cracks. The most dangerous gap isn’t a missing feature; it’s the absence of a governance model strong enough to stop the game of hot potato and force a cross-domain ownership of security.
The CISO’s role in this landscape is not to master every technical layer, it is to force the question of ownership into the open. Who signs off on what data enters a non-production environment? Who is accountable when a dev database is restored without encryption? Who audits that masking policies are still effective after a release?
Without explicit answers to these questions, security becomes a game of assumptions. Every team assumes another layer is holding. And the gaps compound silently, until they don’t.
From Handcrafted Scripts to Enterprise Platforms
Every technique in this table can be implemented on a spectrum, from a carefully written script to a fully automated enterprise solution. The right choice depends on your scale, your team, and how much operational overhead you can realistically absorb.
- TLS certificate deployment: you can generate and rotate certificates manually, instance by instance. Or you can automate the entire lifecycle using Ansible against an internal PKI, in a consistent, auditable way that is invisible to the teams consuming it. The security outcome is identical; the operational cost is not.
- Data anonymization: a custom script that detects PII columns and replaces values with masked data works well at small scale. The challenge appears when your data spans multiple database engines (SQL Server, Oracle, PostgreSQL, …) and when anonymized values need to remain consistent across foreign keys and referential constraints. Replacing a customer ID in one table while leaving it intact in another isn’t anonymization, it’s a GDPR incident waiting to happen. Solutions like Delphix Continuous Compliance handle cross-DBMS consistency, constraint awareness, and sensitive field detection out of the box, turning a fragile hand-rolled process into a governed, repeatable and auditable one.
- Dynamic masking and row-level security: defining a handful of policies manually in SSMS is perfectly reasonable for a contained environment. Automating policy deployment across environments and instances is a different challenge entirely. It is a level of scale where ad-hoc scripts quickly become a liability.
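To make the referential-consistency point above concrete, here is a minimal sketch, not taken from any particular product: a deterministic replacement guarantees that the same customer identifier is replaced by the same fictional value in every table that references it (the function name and output format are mine):

```python
import hashlib

def pseudonymize(value: str, salt: str) -> str:
    """Deterministically map a sensitive value to a fictional token.
    The same input always yields the same output, so foreign keys and
    referential constraints stay consistent across tables."""
    digest = hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:10]
    return f"CUST_{digest}"

# The same customer identifier gets the same replacement wherever it appears:
pseudonymize("alice@example.com", salt="refresh-2024")
```

Note that a deterministic mapping like this is pseudonymization rather than full anonymization; dedicated tools add salt management, sensitive-field detection, format preservation and constraint awareness on top of this idea.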
Security is not a one-time project. It is an operational discipline that requires the same rigor in a developer’s sandbox as it does in production, and that rigor has to be enforced by design, not by goodwill.
Most breaches in non-production environments don’t happen because a tool failed. They happen because nobody owned the decision to use it in the first place.
At dbi services, we help organizations move from fragile, handcrafted scripts to governed, auditable architectures across every environment, every database engine, and every team.
Because under GDPR, one incident is all it takes to make ownership everyone’s problem.
The article Beyond TDE and TLS: Bridging the Data Security Governance Gap in Lower Environments first appeared on dbi Blog.
Not enough space for using ODABR? Reduce /u01 and /opt size on your ODA
Patching an Oracle Database Appliance needs to be secured, because it is not limited to patching the databases: plenty of other components are embedded in the patch, the goal being to keep everything updated. A rollback may therefore be needed.
For sure, there is no rollback possible for firmware, BIOS and ILOM (it’s unlikely you would need to go back to a previous version), but you may need to roll back when it comes to patching the OS and the GI stack. Unlike DB homes, the OS and GI stack are located on the ODA’s local disks, and can be protected during patching with LVM snapshots. This is the purpose of ODABR: taking LVM snapshots of /, /u01 and /opt before applying the patch. If something goes wrong, you can revert to the previous stable state.
ODABR needs enough space on the local disks to create the snapshots, and sometimes the available free space is not enough. How do you deal with that?
When to take the snapshots?
Before using ODABR, I would recommend cleaning up the following filesystems: /, /u01 and /opt. You will need 20+% of free space in these filesystems, otherwise the patch prechecks won’t give you the green light. ODABR is based on Copy-On-Write technology, meaning that old versions of the changed blocks are kept for going back in time if needed. Don’t unzip the patch files after taking the snapshots if the patch files reside on the local disks! Once you’re ready to patch, the minute before registering the patch and applying the DCS components update, you can use ODABR to take the snapshots.
When to release the snapshots?
Once you have successfully applied the system patch and GI patch, and verified that everything runs fine with the new GI binaries, you can safely use ODABR to delete the snapshots. If you need to apply multiple patches, take new snapshots before each jump; you will never need to revert to the oldest version once you are on a stable intermediate version.
Are the ODABR snapshots useful for DB homes?
Absolutely not. DB homes are located under /u01/app/odaorahome, a dedicated ACFS filesystem on the DATA diskgroup. You don’t need snapshot features on this filesystem, because patching a DB home creates a new home and moves the database into it, the old one being kept as long as you want. You can delete older DB homes months after the patching was done. I usually delete old ones as part of the patching job once everything is fine, but I always back up the old DB home with a tar czf, just in case. If you need to revert a database to an old DB home, you can do it manually and use an RMAN backup to restore the database, or possibly roll back the datapatch if needed. This is rarely necessary.
Location of the snapshots
ODABR relies on LVM snapshots, which is not something really specific to the ODA. Snapshots are stored in the remaining space of the Volume Group. Most ODAs were sold with 2x 500GB local disks (SSDs) protected with software RAID and allocated to a single Volume Group, VolGroupSys. During the ODA setup, only part of the storage is allocated to the system volumes as Logical Volumes (LogVolSwap, LogVolOpt, LogVolU01, LogVolRoot), meaning that a comfortable amount of space is available for ODABR:
pvs
PV VG Fmt Attr PSize PFree
/dev/md126p3 VolGroupSys lvm2 a-- 446.09g 285.09g
/opt/odabr/odabr backup -snap
...
SUCCESS: 2026-04-22 17:10:33: LVM snapshots backup done successfully
pvs
PV VG Fmt Attr PSize PFree
/dev/md126p3 VolGroupSys lvm2 a-- 446.09g 150.09g
ODABR additionally backs up the ASM metadata, because if you revert to a previous set of LVM snapshots with an older GI release, your ASM metadata also needs to match the corresponding version.
Default snapshot sizes and real needs
The default snapshot size is the size of the Logical Volumes; here is an example for a 30GB /, a 55GB /opt and a 50GB /u01:
/opt/odabr/odabr infosnap
│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│
odabr - ODA node Backup Restore - Version: 2.0.2-06
Copyright 2013, 2025, Oracle and/or its affiliates.
--------------------------------------------------------
RACPack, Cloud Innovation and Solution Engineering Team
│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│
LVM snap name status COW Size Data%
------------- ---------- ---------- ------
root_snap active 30.00 GiB 0.01%
opt_snap active 55.00 GiB 0.01%
u01_snap active 50.00 GiB 0.01%
This is quite comfortable: for sure, nowhere near 100% of the blocks will change during the patching. If you don’t have these 135GB of free space, you can specify lower values for the snapshots, but make sure to limit modifications to the files residing on these 3 Logical Volumes during the patch:
/opt/odabr/odabr backup -snap -osize 18 -rsize 10 -usize 25
These settings were enough for my latest patches.
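The arithmetic behind those flags is simple enough to sanity-check before patching. The helper below is a hypothetical sketch (the names are mine): it compares the requested -rsize/-osize/-usize values against the Volume Group’s free space as reported in the PFree column of pvs:

```python
def snapshots_fit(vg_free_gb: float, rsize: float, osize: float, usize: float) -> bool:
    """Return True if the requested ODABR snapshot sizes (in GB) fit
    within the Volume Group's free space (the PFree column of pvs)."""
    return rsize + osize + usize <= vg_free_gb

# 10 + 18 + 25 = 53 GB fits in the 150 GB of free space shown earlier:
print(snapshots_fit(150.09, rsize=10, osize=18, usize=25))  # True
# ...but not in a Volume Group with only 2.56 GB free:
print(snapshots_fit(2.56, rsize=10, osize=18, usize=25))    # False
```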
The ODA X9-2 series exception
As far as I remember, ODAs have always had comfortable system disk sizes: 500GB is large enough for the OS, odacli and the GI stack, plus a couple of files of your own. Since version 19.10, DB homes and the diagnostic destination are located within ACFS volumes, meaning that they don’t take a single MB from the local disks:
df -h | grep -e Filesystem -e u01
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolU01 49G 27G 20G 58% /u01
/dev/asm/odabase_n0-225 30G 2.5G 28G 9% /u01/app/odaorabase0
/dev/asm/orahome_sh-225 80G 44G 37G 54% /u01/app/odaorahome
As 19.10 was already available when the X9-2 was released, Oracle decided to decrease the system disks from 500GB to 250GB, meaning that you don’t have the margin you had with the previous generations. The problem was solved with the X10 series onwards: system disks are back to 500GB, as they were on the X8-2 series.
In other words, you’ll most probably face space constraints for ODABR if you have an X9-2 series ODA.
How can you deal with insufficient disk space when using ODABR?
Imagine you have this kind of configuration:
pvs
PV VG Fmt Attr PSize PFree
/dev/md126p3 VolGroupSys lvm2 a-- 222.56g 2.56g
On this ODA, /opt and /u01 have been extended for some reason, and there is almost no space left in the Volume Group.
If you try to use ODABR to take snapshots, you will get this error:
Available LVM size X is less than required snapshot size Y
2.56GB is way too small for using ODABR: you will need to reduce the size of /u01, /opt or both. First, a cleanup is needed to free up some space. Once done, you’ll be able to reduce the size of /u01 and/or /opt. But these filesystems will need to be unmounted, and that cannot be done without stopping and starting some processes: actually, pretty much all the processes running on your ODA.
Stopping the processes and reducing the filesystems
Let’s reduce the size of the /u01 and /opt Logical Volumes in this example:
su - root
df -h /u01 /opt
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolU01 61G 23G 36G 39% /u01
/dev/mapper/VolGroupSys-LogVolOpt 70G 48G 22G 68% /opt
export ORACLE_HOME=/u01/app/19.30.0.0/grid
$ORACLE_HOME/bin/crsctl stop crs
lvreduce -L 50G /dev/VolGroupSys/LogVolU01 -r
Do you want to unmount "/u01" ? [Y|n] y
umount: /u01: target is busy.
fsadm: Cannot proceed with mounted filesystem "/u01".
/usr/sbin/fsadm failed: 1
Filesystem resize failed.
lsof /u01/
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 23727 root 26r REG 252,3 16303574 1208968 /u01/app/19.30.0.0/grid/jlib/srvm.jar
kill -9 23727 ; umount /u01
umount: /u01: target is busy.
kill -9 23727 ; umount /u01
-bash: kill: (23727) - No such process
lvreduce -L 50G /dev/VolGroupSys/LogVolU01 -r
fsck from util-linux 2.32.1
/dev/mapper/VolGroupSys-LogVolU01: 76718/4063232 files (7.3% non-contiguous), 6220430/16252928 blocks
resize2fs 1.46.2 (28-Feb-2021)
Resizing the filesystem on /dev/mapper/VolGroupSys-LogVolU01 to 13107200 (4k) blocks.
The filesystem on /dev/mapper/VolGroupSys-LogVolU01 is now 13107200 (4k) blocks long.
Size of logical volume VolGroupSys/LogVolU01 changed from 62.00 GiB (1984 extents) to 50.00 GiB (1600 extents).
Logical volume VolGroupSys/LogVolU01 successfully resized.
systemctl stop initdcsagent
systemctl stop initdcscontroller
systemctl stop initdcsadmin
systemctl stop oda-mysql
systemctl stop oracle-ODA_DCS-ODA_DCS0
systemctl stop oracle-tfa
systemctl stop oracle-ohasd
lsof /opt
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
bash 6697 root cwd DIR 252,2 4096 1703937 /opt/dbi
OSWatcher 7799 grid cwd DIR 252,2 4096 2235031 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/oswbb
OSWatcher 7799 grid 1w REG 252,2 4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
OSWatcher 7799 grid 2w REG 252,2 4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
OSWatcher 7799 grid 255r REG 252,2 65286 2235047 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/oswbb/OSWatcher.sh
OSWatcher 9903 grid cwd DIR 252,2 4096 2235031 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/oswbb
OSWatcher 9903 grid 1w REG 252,2 4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
OSWatcher 9903 grid 2w REG 252,2 4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
OSWatcher 9903 grid 255r REG 252,2 8035 2235046 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/oswbb/OSWatcherFM.sh
sleep 56757 grid cwd DIR 252,2 4096 2235031 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/oswbb
sleep 56757 grid 1w REG 252,2 4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
sleep 56757 grid 2w REG 252,2 4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
sleep 56835 grid cwd DIR 252,2 4096 2235031 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/oswbb
sleep 56835 grid 1w REG 252,2 4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
sleep 56835 grid 2w REG 252,2 4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
kill -9 7799 9903
lvreduce -L 55G /dev/VolGroupSys/LogVolOpt -r
Do you want to unmount "/opt" ? [Y|n] y
fsck from util-linux 2.32.1
/dev/mapper/VolGroupSys-LogVolOpt: Inode 655911 extent tree (at level 1) could be narrower. IGNORED.
/dev/mapper/VolGroupSys-LogVolOpt: 57911/4587520 files (2.4% non-contiguous), 12785629/18350080 blocks
resize2fs 1.46.2 (28-Feb-2021)
Resizing the filesystem on /dev/mapper/VolGroupSys-LogVolOpt to 14417920 (4k) blocks.
The filesystem on /dev/mapper/VolGroupSys-LogVolOpt is now 14417920 (4k) blocks long.
Size of logical volume VolGroupSys/LogVolOpt changed from 70.00 GiB (2240 extents) to 55.00 GiB (1760 extents).
Logical volume VolGroupSys/LogVolOpt successfully resized.
As I don’t want to restart the various processes manually, and as I usually do a sanity reboot before patching anyway, let’s reboot the server. Everything will then be properly started afterwards.
reboot
...
df -h /u01 /opt
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolU01 49G 23G 25G 49% /u01
/dev/mapper/VolGroupSys-LogVolOpt 54G 48G 4.0G 93% /opt
Further recommendations
You don’t really need a lot of space on the system disks. Put your patch files and other specific files on an NFS volume shared across your ODAs, and keep the system disks at their default size to keep a comfortable margin for ODABR, especially on the ODA X9-2 series. If needed, allow a few more GB to /u01 and/or /opt, but keep at least 70GB free for ODABR snapshots. Anticipate the patching and test ODABR several days before: it does not cost anything, and snapshots can be removed immediately without any downtime. The worst case is having to patch now and discovering that there is no space left on the disks for using ODABR.
Not being able to secure your patch with ODABR is the second NOGO for patching, the first one being a faulty component reported by odaadmcli show server.
Conclusion
If you don’t have enough free space for ODABR snapshots, postpone the patching: ODABR is a mandatory safety net. Reducing the size of the /u01 and /opt filesystems is possible. It requires stopping everything, but as you will be patching your ODA anyway, you have already allocated sufficient downtime.
The article Not enough space for using ODABR? Reduce /u01 and /opt size on your ODA first appeared on dbi Blog.
Bulk Update of GoldenGate Extract Profiles
After writing blogs about Default profile updates and a bug affecting profiles in GoldenGate 26ai, I realized it would be a good idea to present a quick way to update all processes at once with the REST API. Because even if you set a new profile as default with the {"isDefault": true} flag, previously created extracts and replicats will not be updated!
To explain this further, let’s look at an extract setting using the retrieve_extract endpoint. I’m using the Python client I presented in another blog. Everything I’m talking about here is not specific to extracts, and can be extended to replicats. However, it doesn’t apply to distribution paths.
If an extract was created in the most basic configuration, it inherits the Default profile implicitly, but this information is not registered in the extract definition. Below is the configuration of such an extract. To save you time, just remember that there is no mention of any profile setting.
>>> ogg_client.retrieve_extract('EXTSRC')
{'$schema': 'ogg:extract', 'credentials': {'alias': 'cdb01', 'domain': 'OracleGoldenGate'}, 'begin': 'now', 'encryptionProfile': 'LocalWallet', 'targets': [{'name': 'aa', 'path': 'PDB1', 'sizeMB': 2000, 'sequenceLength': 9, 'sequenceLengthFlip': False, 'sequence': 1, 'offset': 1399, 'remote': False}], 'config': ['--- Auto generated Parameter File, do not edit ---', 'EXTRACT EXTSRC', 'USERIDALIAS cdb01 DOMAIN OracleGoldenGate', 'EXTTRAIL PDB1/aa', '--- End of auto generated Parameter File ---', 'SOURCECATALOG PDB1', 'TABLE APP_PDB1.*;'], 'source': 'tranlogs', 'type': 'Integrated', 'registration': {'share': False, 'containers': ['PDB1'], 'csn': 2809889}, 'status': 'running'}
Extract with a profile set
If a profile is assigned to the extract, the managedProcessSettings key is added to the extract configuration. This is true whether you assign a custom profile or re-assign the Default profile after having changed the extract configuration. Below, we can see the difference with the following key/value pair: 'managedProcessSettings': 'ogg:managedProcessSettings:dbiProfile'
{'$schema': 'ogg:extract', 'credentials': {'alias': 'cdb01', 'domain': 'OracleGoldenGate'}, 'begin': 'now', 'encryptionProfile': 'LocalWallet', 'managedProcessSettings': 'ogg:managedProcessSettings:dbiProfile', 'targets': [{'name': 'aa', 'path': 'PDB1', 'sizeMB': 2000, 'sequenceLength': 9, 'sequenceLengthFlip': False, 'sequence': 1, 'offset': 1399, 'remote': False}], 'config': ['--- Auto generated Parameter File, do not edit ---', 'EXTRACT EXTSRC', 'USERIDALIAS cdb01 DOMAIN OracleGoldenGate', 'EXTTRAIL PDB1/aa', '--- End of auto generated Parameter File ---', 'SOURCECATALOG PDB1', 'TABLE APP_PDB1.*;'], 'source': 'tranlogs', 'type': 'Integrated', 'registration': {'share': False, 'containers': ['PDB1'], 'csn': 2809889}, 'status': 'running'}
Extract with a custom configuration
If an extract has a unique custom configuration that is not a profile, the full configuration is added as JSON to the extract definition under the same managedProcessSettings key. Here is an example:
{'$schema': 'ogg:extract', 'credentials': {'alias': 'cdb01', 'domain': 'OracleGoldenGate'}, 'begin': 'now', 'encryptionProfile': 'LocalWallet', 'managedProcessSettings': {'autoStart': {'enabled': True, 'delay': 60}, 'autoRestart': {'enabled': False, 'onSuccess': False, 'delay': 0, 'retries': 9, 'window': 0, 'disableOnFailure': True, 'failures': 0}}, 'targets': [{'name': 'aa', 'path': 'PDB1', 'sizeMB': 2000, 'sequenceLength': 9, 'sequenceLengthFlip': False, 'sequence': 1, 'offset': 1399, 'remote': False}], 'config': ['--- Auto generated Parameter File, do not edit ---', 'EXTRACT EXTSRC', 'USERIDALIAS cdb01 DOMAIN OracleGoldenGate', 'EXTTRAIL PDB1/aa', '--- End of auto generated Parameter File ---', 'SOURCECATALOG PDB1', 'TABLE APP_PDB1.*;'], 'source': 'tranlogs', 'type': 'Integrated', 'registration': {'share': False, 'containers': ['PDB1'], 'csn': 2809889}, 'status': 'running'}
How to modify a single extract’s profile with the REST API?
Modifying the profile of an extract or replicat with the REST API means updating the managedProcessSettings key of the process. For an extract, you would use the update_extract endpoint, with a data payload that only contains the managedProcessSettings key, and its value. It can be the profile name or the full profile definition, as shown above. Here is an example of a single update of an extract:
>>> ogg_client.update_extract(
...     extract='EXTSRC',
...     data={
...         'managedProcessSettings': 'ogg:managedProcessSettings:dbiProfile'
...     }
... )
And do not worry about entering a wrong name for the profile. If you do this, the API returns the following error:
>>> ogg_client.update_extract(
...     extract='EXTSRC',
...     data={
...         'managedProcessSettings': 'ogg:managedProcessSettings:NonExistingProfile'
...     }
... )
Exception: ERROR - https://vmogg/services/ogg_test_01/adminsrvr/v2/extracts/EXTSRC: The managed process settings profile 'ogg:managedProcessSettings:NonExistingProfile' does not exist. ; INFO - https://vmogg/services/ogg_test_01/adminsrvr/v2/extracts/EXTSRC: The item type with name 'ogg:managedProcessSettings:NonExistingProfile' does not exist.
How to modify all extracts and replicats?
Based on the above, modifying all extracts and replicats is easy. You should just iterate over your extracts and replicats and apply the profile that you want. Below is an example script where you can modify the profile_name variable and the connection details to the Administration Service of your deployment. You could even decide to have different profiles assigned to extracts and replicats, if needed.
from oggrestapi import OGGRestAPI

profile_name = 'dbiProfile'

ogg_client = OGGRestAPI(
    url='https://vmogg',
    user='ogg',
    password='ogg',
    reverse_proxy=True,
    deployment='ogg_test_01'
)

for extract in ogg_client.list_extracts():
    extract_name = extract['name']
    print(f"Updating extract {extract_name}")
    ogg_client.update_extract(
        extract=extract_name,
        data={
            'managedProcessSettings': f'ogg:managedProcessSettings:{profile_name}'
        }
    )

for replicat in ogg_client.list_replicats():
    replicat_name = replicat['name']
    print(f"Updating replicat {replicat_name}")
    ogg_client.update_replicat(
        replicat=replicat_name,
        data={
            'managedProcessSettings': f'ogg:managedProcessSettings:{profile_name}'
        }
    )
After running this against the Administration Service, all your extracts and replicats will now follow the same new profile.
The article Bulk Update of GoldenGate Extract Profiles first appeared on dbi Blog.
Losing Java remote console full control during ODA reimage
Recently, I had to patch an ODA at one of our customers from 19.20 to 19.26, going through 19.24. As you may know, since this means going from Oracle Linux 7 (19.20) to Oracle Linux 8 (19.24), the ODA needs to be patched using Data Preserving Reprovisioning. This includes a reimage of the nodes…
Data Preserving Reprovisioning allows you to reprovision an already deployed Oracle Database Appliance system while keeping the storage and the databases on the appliance, i.e. without any modification to the information stored in ASM (storage). The information of the source system is saved in server data archive files. The appliance is then reimaged with the Oracle Database Appliance release ISO image, and the saved metadata is used to directly reprovision the system and bring back all the resources such as databases, DB systems, Oracle ASR, and others.
The reimage can only be done using the Java remote console, available with the ILOM. Remember that to attach the ISO to the Java console and use it during the reimage, you need to add it and connect it as described in the next picture.
It is important to see the following red message to ensure that the connection with the iso is made and that it can be used.
The problem is that after a while, during the reimage process, the Java remote console loses full control of the node, with the consequence of a loss of the ISO connection and a failure of the reimage process.
This can be seen in the console with the message (View Only) displayed in the bar and the red cross on top of the computer (screen)/mouse icon.
Here:
When it should be:
Here is the whole picture showing the connection problem.
And we can see the message: “Sorry, keyboard and mouse connection has been lost. \nIf storage was being shared it has been lost also”.
A correct connection would show the (Full Control) message in the bar, with no loss of keyboard/mouse and storage (ISO) connection.
After several tries, it was always the same problem: after a few minutes, the connection was lost…
I then decided to check the ILOM sessions, connecting to the ILOM through SSH.
login as: root
Keyboard-interactive authentication prompts from server:
| Password:
End of keyboard-interactive prompts from server
Oracle(R) Integrated Lights Out Manager
Version 5.1.1.23 r151958
Copyright (c) 2023, Oracle and/or its affiliates. All rights reserved.
Warning: HTTPS certificate is set to factory default.
Hostname: ODA01-ilo
-> show /SP/sessions
/SP/sessions
Targets:
55342
55346 (current)
Properties:
Commands:
cd
delete
show
I found it surprising to see such high session IDs. I decided to reset the ILOM, which would of course not restart the node.
-> reset /SP
Are you sure you want to reset /SP (y/n)? y
Performing reset on /SP
And I connected to the ILOM again through SSH to check the sessions:
login as: root
Keyboard-interactive authentication prompts from server:
| Password:
End of keyboard-interactive prompts from server
Oracle(R) Integrated Lights Out Manager
Version 5.1.1.23 r151958
Copyright (c) 2023, Oracle and/or its affiliates. All rights reserved.
Warning: HTTPS certificate is set to factory default.
Hostname: ODA01-ilo
-> show /SP/sessions
/SP/sessions
Targets:
1 (current)
Properties:
Commands:
cd
delete
show
The session IDs were back to 1, and I no longer had any connection loss issue. I could successfully reimage the ODA to version 19.24.
The article Losing Java remote console full control during ODA reimage first appeared on dbi Blog.