Feed aggregator

ARRAY PROCESSING

Tom Kyte - 5 hours 30 min ago
Hi Tom, I have created a TYPE as:

Create or replace type NUM_ARRAY as table of varchar2(1000);

Then I have a procedure which accepts an array of SQL statements and executes them one at a time. This takes a lot of time to process a batch of SQL statements. I was wondering if I can use FORALL to execute them in bulk, as these are all insert/update/delete statements. Here is the procedure:

CREATE OR REPLACE procedure give_me_an_array( p_array in num_array )
as
begin
    for i in 1 .. p_array.count
    loop
        BEGIN
            execute immediate p_array(i);
        EXCEPTION WHEN OTHERS THEN NULL;
        END;
        --dbms_output.put_line( p_array(i) );
    end loop;
    commit;
end;

We are using Java as the front end. Your input will be highly appreciated, as I am competing against SQL Server, which is doing this extremely fast without using any arrays. I am in the process of migrating from SQL Server to Oracle, and the condition is that Oracle must run faster than SQL Server. Thanks, Nirmal
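The usual cure for this pattern is to bind arrays of values to a single statement (FORALL in PL/SQL, PreparedStatement batching from Java) rather than executing N distinct SQL strings. The round-trip arithmetic behind that advice can be sketched with a stub cursor instead of a real Oracle connection; the class and counts below are illustrative only:

```python
# Conceptual sketch (not Oracle-specific): why one batched call carrying
# an array of binds beats executing N distinct statements one at a time.
# The stub cursor just counts simulated client/server round trips.

class StubCursor:
    def __init__(self):
        self.round_trips = 0
        self.rows = []

    def execute(self, sql, binds):
        # one statement, one round trip
        self.round_trips += 1
        self.rows.append(binds)

    def executemany(self, sql, bind_list):
        # one round trip carries the whole array of binds,
        # like PL/SQL FORALL or JDBC executeBatch
        self.round_trips += 1
        self.rows.extend(bind_list)

data = [(i, f"name_{i}") for i in range(1000)]

slow = StubCursor()
for row in data:                      # statement-at-a-time
    slow.execute("insert into t values (:1, :2)", row)

fast = StubCursor()
fast.executemany("insert into t values (:1, :2)", data)  # array binding

print(slow.round_trips)  # 1000
print(fast.round_trips)  # 1
```

Note that FORALL cannot batch arbitrary EXECUTE IMMEDIATE strings; it needs one statement shape plus collections of bind values, which is why the fix usually starts on the client side.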
Categories: DBA Blogs

SQL profile and SQL plan baseline in Standard Edition 2 (SE2)

Tom Kyte - 5 hours 30 min ago
Hello, If we are using Oracle SE2, there is no Tuning Pack or Diagnostic Pack available, so I don't think it's possible to create a SQL profile with SQL Tuning Advisor. Would it be possible with the SQLTXPLAIN coe_xfr_sql_profile.sql script? I know it's possible to create a SQL plan baseline from the cursor cache, but what if the good plan is not in the cursor cache? Thanks.
Categories: DBA Blogs

Bulk Update of GoldenGate Extract Profiles

Yann Neuhaus - 9 hours 59 min ago

After writing blogs about Default profile updates and a bug affecting profiles in GoldenGate 26ai, I realized it would be a good idea to present a quick way to update all processes at once with the REST API. Because even if you set a new profile as default with the {"isDefault": true} flag, previously created extracts and replicats will not be updated!

To explain this further, let’s look at an extract setting using the retrieve_extract endpoint. I’m using the Python client I presented in another blog. Nothing I’m talking about here is specific to extracts; it can be extended to replicats. However, it doesn’t apply to distribution paths.

Extract with no profile assigned

If an extract was created in the most basic configuration, it inherits the Default profile implicitly, but this information is not registered in the extract definition. Below is the configuration of such an extract. To save you time, just remember that there is no mention of any profile setting.

>>> ogg_client.retrieve_extract('EXTSRC')
{'$schema': 'ogg:extract', 'credentials': {'alias': 'cdb01', 'domain': 'OracleGoldenGate'}, 'begin': 'now', 'encryptionProfile': 'LocalWallet', 'targets': [{'name': 'aa', 'path': 'PDB1', 'sizeMB': 2000, 'sequenceLength': 9, 'sequenceLengthFlip': False, 'sequence': 1, 'offset': 1399, 'remote': False}], 'config': ['--- Auto generated Parameter File, do not edit ---', 'EXTRACT EXTSRC', 'USERIDALIAS cdb01 DOMAIN OracleGoldenGate', 'EXTTRAIL PDB1/aa', '--- End of auto generated Parameter File ---', 'SOURCECATALOG PDB1', 'TABLE APP_PDB1.*;'], 'source': 'tranlogs', 'type': 'Integrated', 'registration': {'share': False, 'containers': ['PDB1'], 'csn': 2809889}, 'status': 'running'}
Extract with a profile set

If a profile is assigned to the extract, the managedProcessSettings key is added to the extract configuration. This is true whether you assign a custom profile or re-assign the Default profile after having changed the extract configuration. Below, we can see the difference with the following key/value pair: 'managedProcessSettings': 'ogg:managedProcessSettings:dbiProfile'

{'$schema': 'ogg:extract', 'credentials': {'alias': 'cdb01', 'domain': 'OracleGoldenGate'}, 'begin': 'now', 'encryptionProfile': 'LocalWallet', 'managedProcessSettings': 'ogg:managedProcessSettings:dbiProfile', 'targets': [{'name': 'aa', 'path': 'PDB1', 'sizeMB': 2000, 'sequenceLength': 9, 'sequenceLengthFlip': False, 'sequence': 1, 'offset': 1399, 'remote': False}], 'config': ['--- Auto generated Parameter File, do not edit ---', 'EXTRACT EXTSRC', 'USERIDALIAS cdb01 DOMAIN OracleGoldenGate', 'EXTTRAIL PDB1/aa', '--- End of auto generated Parameter File ---', 'SOURCECATALOG PDB1', 'TABLE APP_PDB1.*;'], 'source': 'tranlogs', 'type': 'Integrated', 'registration': {'share': False, 'containers': ['PDB1'], 'csn': 2809889}, 'status': 'running'}
Extract with a custom configuration

If an extract has a unique custom configuration that is not a profile, the full configuration is added as JSON to the extract definition under the same managedProcessSettings key. Here is an example:

{'$schema': 'ogg:extract', 'credentials': {'alias': 'cdb01', 'domain': 'OracleGoldenGate'}, 'begin': 'now', 'encryptionProfile': 'LocalWallet', 'managedProcessSettings': {'autoStart': {'enabled': True, 'delay': 60}, 'autoRestart': {'enabled': False, 'onSuccess': False, 'delay': 0, 'retries': 9, 'window': 0, 'disableOnFailure': True, 'failures': 0}}, 'targets': [{'name': 'aa', 'path': 'PDB1', 'sizeMB': 2000, 'sequenceLength': 9, 'sequenceLengthFlip': False, 'sequence': 1, 'offset': 1399, 'remote': False}], 'config': ['--- Auto generated Parameter File, do not edit ---', 'EXTRACT EXTSRC', 'USERIDALIAS cdb01 DOMAIN OracleGoldenGate', 'EXTTRAIL PDB1/aa', '--- End of auto generated Parameter File ---', 'SOURCECATALOG PDB1', 'TABLE APP_PDB1.*;'], 'source': 'tranlogs', 'type': 'Integrated', 'registration': {'share': False, 'containers': ['PDB1'], 'csn': 2809889}, 'status': 'running'}
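Putting the three cases together: managedProcessSettings may be absent, a string reference to a named profile, or an inline settings object. A small sketch that classifies a retrieved definition accordingly (the helper name and labels are mine, not part of any GoldenGate API; the sample dicts are trimmed versions of the outputs above):

```python
# Classify how an extract/replicat definition handles managed process
# settings, based on the three shapes shown above. The function name
# and return labels are illustrative, not part of the GoldenGate API.

def profile_kind(definition: dict) -> str:
    settings = definition.get('managedProcessSettings')
    if settings is None:
        return 'implicit-default'          # no key at all
    if isinstance(settings, str):
        # e.g. 'ogg:managedProcessSettings:dbiProfile'
        return 'named-profile:' + settings.rsplit(':', 1)[-1]
    return 'inline-settings'               # full JSON object embedded

no_profile = {'$schema': 'ogg:extract'}
named = {'$schema': 'ogg:extract',
         'managedProcessSettings': 'ogg:managedProcessSettings:dbiProfile'}
inline = {'$schema': 'ogg:extract',
          'managedProcessSettings': {'autoStart': {'enabled': True, 'delay': 60}}}

print(profile_kind(no_profile))  # implicit-default
print(profile_kind(named))       # named-profile:dbiProfile
print(profile_kind(inline))      # inline-settings
```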
How to modify a single extract’s profile with the REST API?

Modifying the profile of an extract or replicat with the REST API means updating the managedProcessSettings key of the process. For an extract, you would use the update_extract endpoint, with a data payload that only contains the managedProcessSettings key, and its value. It can be the profile name or the full profile definition, as shown above. Here is an example of a single update of an extract:

>>> ogg_client.update_extract(
    extract='EXTSRC',
    data={
        'managedProcessSettings': 'ogg:managedProcessSettings:dbiProfile'
    }
)

And do not worry about entering a wrong profile name. If you do, the API returns the following error:

>>> ogg_client.update_extract(
    extract='EXTSRC',
    data={
        'managedProcessSettings': 'ogg:managedProcessSettings:NonExistingProfile'
    }
)
Exception: ERROR - https://vmogg/services/ogg_test_01/adminsrvr/v2/extracts/EXTSRC: The managed process settings profile 'ogg:managedProcessSettings:NonExistingProfile' does not exist. ; INFO - https://vmogg/services/ogg_test_01/adminsrvr/v2/extracts/EXTSRC: The item type  with name 'ogg:managedProcessSettings:NonExistingProfile' does not exist.
How to modify all extracts and replicats?

Based on the above, modifying all extracts and replicats is easy. You just iterate over your extracts and replicats and apply the profile that you want. Below is an example script where you can modify the profile_name variable and the connection details for the Administration Service of your deployment. You could even assign different profiles to extracts and replicats, if needed.

from oggrestapi import OGGRestAPI

profile_name = 'dbiProfile'

ogg_client = OGGRestAPI(
    url='https://vmogg',
    user='ogg',
    password='ogg',
    reverse_proxy=True,
    deployment='ogg_test_01'
)

for extract in ogg_client.list_extracts():
    extract_name = extract['name']
    print(f"Updating extract {extract_name}")
    ogg_client.update_extract(
        extract=extract_name,
        data={
            'managedProcessSettings': f'ogg:managedProcessSettings:{profile_name}'
        }
    )

for replicat in ogg_client.list_replicats():
    replicat_name = replicat['name']
    print(f"Updating replicat {replicat_name}")
    ogg_client.update_replicat(
        replicat=replicat_name,
        data={
            'managedProcessSettings': f'ogg:managedProcessSettings:{profile_name}'
        }
    )

After running this against the Administration Service, all your extracts and replicats will now follow the same new profile.
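If you want to double-check the result, you can re-read each process and compare the managedProcessSettings value. Below is a sketch of such a check against a stub client; the retrieve_replicat call mirrors the retrieve_extract endpoint used earlier and is assumed to exist in the same Python client:

```python
# Hypothetical post-update check: re-read every process and confirm the
# expected profile reference. StubClient mimics the read endpoints of
# the Python client used above; swap in the real client unchanged.

PROFILE_REF = 'ogg:managedProcessSettings:dbiProfile'

class StubClient:
    def list_extracts(self):
        return [{'name': 'EXTSRC'}]
    def list_replicats(self):
        return [{'name': 'REPTGT'}]
    def retrieve_extract(self, name):
        return {'managedProcessSettings': PROFILE_REF}
    def retrieve_replicat(self, name):
        return {'managedProcessSettings': PROFILE_REF}

def check_profiles(client, expected):
    """Return (kind, name) pairs for processes not using `expected`."""
    mismatches = []
    for ext in client.list_extracts():
        cfg = client.retrieve_extract(ext['name'])
        if cfg.get('managedProcessSettings') != expected:
            mismatches.append(('extract', ext['name']))
    for rep in client.list_replicats():
        cfg = client.retrieve_replicat(rep['name'])
        if cfg.get('managedProcessSettings') != expected:
            mismatches.append(('replicat', rep['name']))
    return mismatches

print(check_profiles(StubClient(), PROFILE_REF))  # []
```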

The article Bulk Update of GoldenGate Extract Profiles first appeared on the dbi Blog.

Losing Java remote console full control during ODA Reimage

Yann Neuhaus - Wed, 2026-04-22 03:18

Recently I had to patch an ODA at one of our customers from 19.20 to 19.26, going through 19.24. As you may know, since we go from Oracle Linux 7 (19.20) to Oracle Linux 8 (19.24), we need to patch the ODA using Data Preserving Reprovisioning. This includes a reimage of the nodes…

Data Preserving Reprovisioning allows you to reprovision an already deployed Oracle Database Appliance system while keeping the storage and the databases on the appliance, so without any modification to the information stored in ASM (storage). The information of the source system is saved in server data archive files. The appliance is then reimaged with the Oracle Database Appliance release ISO image, and the saved metadata is used to directly reprovision the system and bring back all the resources such as databases, DB systems, Oracle ASR, and others.

The reimage can only be done using the Java remote console, available through the ILOM. Remember that to attach the ISO in the Java console and use it during the reimage, you need to add it and connect it as shown in the next picture.

It is important to see the following red message to ensure that the connection with the iso is made and that it can be used.

The problem is that after a while, during the reimage process, the Java remote console lost full control of the node, resulting in a loss of the ISO connection and a failure of the reimage process.

This could be seen in the console with the message (View Only) displayed in the bar and the red cross on top of the computer (screen)/mouse icon.

Here:

When it should be:

Here the whole picture showing the connection problem.

And we can see the message: “Sorry, keyboard and mouse connection has been lost. \nIf storage was being shared it has been lost also”.

A correct connection would be the following one, with the (Full Control) message displayed in the bar and no loss of the keyboard/mouse and storage (ISO) connection.

After several tries, it was always the same problem: after a few minutes, loss of the connection…

I then decided to check the ILOM sessions, connecting to the ILOM over SSH.

login as: root
Keyboard-interactive authentication prompts from server:
| Password:
End of keyboard-interactive prompts from server

Oracle(R) Integrated Lights Out Manager

Version 5.1.1.23 r151958

Copyright (c) 2023, Oracle and/or its affiliates. All rights reserved.

Warning: HTTPS certificate is set to factory default.

Hostname: ODA01-ilo

-> show /SP/sessions

 /SP/sessions
    Targets:
        55342
        55346 (current)

    Properties:

    Commands:
        cd
        delete
        show

I found it surprising to see such high session IDs. I decided to reset the ILOM. This would of course not restart the node.

-> reset /SP
Are you sure you want to reset /SP (y/n)? y
Performing reset on /SP

And connected again to the ILOM through SSH to check the sessions:

login as: root
Keyboard-interactive authentication prompts from server:
| Password:
End of keyboard-interactive prompts from server

Oracle(R) Integrated Lights Out Manager

Version 5.1.1.23 r151958

Copyright (c) 2023, Oracle and/or its affiliates. All rights reserved.

Warning: HTTPS certificate is set to factory default.

Hostname: ODA01-ilo

-> show /SP/sessions

 /SP/sessions
    Targets:
        1 (current)

    Properties:

    Commands:
        cd
        delete
        show

The session IDs were back to one, and I did not have any lost-connection issue anymore. I could successfully reimage the ODA to version 19.24.

The article Losing Java remote console full control during ODA Reimage first appeared on the dbi Blog.

Oracle AQ Scaling with PL/SQL Callbacks, DBMS_SCHEDULER Jobs

Tom Kyte - Tue, 2026-04-21 23:56
Hello, I have an Oracle Advanced Queuing queue and would like to be able to process this queue from inside the database, as opposed to using an external app server. However, I am concerned about the scalability of internal solutions. Please assume the following:

1. The queue receives an arbitrary number of messages.
2. Each message results in a PL/SQL procedure being called, which can take an arbitrary amount of time.
3. You want to limit the number of messages that can be processed at once to some value N.

Solution #1: Run N permanent DBMS_SCHEDULER jobs that loop and call DBMS_AQ.DEQUEUE with WAIT_FOREVER. This is good because you can easily cap how many jobs process this queue by adjusting the N number of permanent DBMS_SCHEDULER jobs. This is bad because all of these permanent jobs will reduce the available JOB_QUEUE_PROCESSES. It is fine if you only need a handful of jobs to process your queue, but as you scale the number of jobs up, eventually you will degrade the other, unrelated jobs that need to run in the system. Does calling DBMS_LOCK.SLEEP or DBMS_AQ.DEQUEUE inside a DBMS_SCHEDULER job free up a JOB_QUEUE_PROCESSES slot while the job is sleeping? My guess is no.

Solution #2: Use a PL/SQL callback, and in the callback, create a one-time DBMS_SCHEDULER job per message, and use a common resource constraint, such that only N scheduler jobs can run at once. For example, if you set a cap of 128 jobs in your resource constraint and you receive 1000 messages, the PL/SQL callback will create 1000 jobs, but only 128 jobs will be running at once, and the rest will be blocked. The downside here is that you have to create a whole DBMS_SCHEDULER job to process a message. This increases the time between receiving a message and starting to process it, and just seems like an overall heavy solution. Lightweight jobs won't help because resource constraints don't work for lightweight jobs. In fact, you might as well not use AQ at all if you go down this route. Instead of writing messages to a queue, which later calls DBMS_SCHEDULER.CREATE_JOB, you could simply call DBMS_SCHEDULER.CREATE_JOB directly with a resource constraint.

Solution #3: Use an external app server. Run N threads, where each thread grabs a connection from a connection pool, loops, and calls DBMS_AQ.DEQUEUE with WAIT_FOREVER. This is the best approach because you can easily set N to cap the number of connections processing messages in parallel, and you do not block up any slots in JOB_QUEUE_PROCESSES. However, this has downsides. Your app server often has much more downtime than your database due to releases, network partitions, and various other issues. If your session which is executing a long-running PL/SQL procedure is terminated, you cannot assume whether the PL/SQL procedure on the server will complete or be stopped. While this is also true for DBMS_SCHEDULER jobs that end up getting kil...
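The worker pattern of Solution #3 is language-agnostic and can be sketched with Python's standard library, using queue.Queue as a stand-in for the AQ queue and a blocking get() as a stand-in for DBMS_AQ.DEQUEUE with WAIT_FOREVER (no Oracle involved):

```python
# Stand-in sketch of Solution #3: N worker threads each loop on a
# blocking dequeue and process messages, so concurrency is capped at N.
import queue
import threading

N_WORKERS = 4
q = queue.Queue()
processed = []
lock = threading.Lock()

def worker():
    while True:
        msg = q.get()          # blocks, like DEQUEUE with WAIT_FOREVER
        if msg is None:        # shutdown sentinel
            q.task_done()
            return
        with lock:
            processed.append(msg)   # stand-in for the PL/SQL handler
        q.task_done()

threads = [threading.Thread(target=worker) for _ in range(N_WORKERS)]
for t in threads:
    t.start()

for i in range(100):           # enqueue 100 messages
    q.put(i)
for _ in threads:              # one sentinel per worker
    q.put(None)
for t in threads:
    t.join()

print(len(processed))  # 100
```

Concurrency stays capped at N_WORKERS regardless of how many messages arrive, which is the property all three solutions are trying to achieve.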
Categories: DBA Blogs

PL/SQL Execution Time

Tom Kyte - Tue, 2026-04-21 23:56
We have PL/SQL stored procedures that perform poorly. Using DBA_HIST or any other AWR/ASH metrics, is it possible to determine the runtimes of PL/SQL procedures?
Categories: DBA Blogs

Partitions evicting sequence cached values

Tom Kyte - Tue, 2026-04-21 23:56
Recently we observed that cached sequence values were being lost significantly, appearing as large gaps in persisted values. Our system does not expect gapless sequence values; however, we are trying to understand the root cause. gv$rowcache shows high getmisses in histogram, objects, segments, followed by sequences. An SGA dump shows many grows in the shared pool and shrinks in the buffer cache, and gv$db_object_cache shows high loads. We did a trend of sequence value jumps or loss from cache based on gaps found in persisted values across time. I will list the events that may be contributory to the gap. Please let me know if this is an incorrect hypothesis.

1. The onset of moderately large sequence jumps aligns with the migration to a multitenant database.
2. Prior to the multitenant migration, we never had histogram collection as part of stats; it appears a DBA ran stats with histogram collection turned on at the time of the multitenant migration, and sequence gaps have been silently occurring.
3. A few months after the multitenant migration, a tablespace rebuild activity followed by stats collection was done. This time the standard stats collection script was run and removed stats from many tables, but not a few core tables that are extensively used in the application. So never-before-seen histogram traffic is still continuing to the dictionary cache.
4. After the multitenant migration, another effort started where many tables began to get partitioned. There were 2 large one-time efforts that created several thousands of partitions, followed by regular scheduled jobs creating a few hundred partitions for historical data management.
5. The sequence jumps (and loss from cache) continued for many months, unnoticed, as the application is not affected by gaps.
6. Some of the regular scheduled batch jobs were missed, so there was a large gap where tables were not monthly partitioned properly as expected. At discovery, a one-time catch-up activity was performed where around 800 partitions were created.
7. The sequence jump phenomenon exploded uncontrollably and was discovered by a partner system. We pinned the sequence in memory to calm it down.

Here is my draft hypothesis for a root cause; please correct it if it does not hold:

a) The multitenant migration increased traffic to the dictionary cache; sequence metadata is constantly evicted and reloaded, so values have jumps/gaps.
b) As many partitions are created, more traffic arrives at the dictionary cache and pressure increases; sequence jumps occur silently meanwhile.
c) Cumulative traffic to the dictionary cache is increasing every month as hundreds of partitions are added by the monthly job; pressure is getting intolerable, and sequence evictions are on the rise.
d) The one-time gap-covering exercise for missed partition creation pushed the traffic over a threshold; sequence jumps skyrocketed and are now visible in the application as a very large gap.

Do these dots connect as a root cause?
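The mechanism behind such gaps can be shown with a toy model: a sequence with CACHE n reserves values in ranges, and whenever the cached range is lost (dictionary-cache eviction, instance restart) the unused remainder becomes a permanent gap. The class below is an illustration of that behaviour, not Oracle's implementation:

```python
# Toy model of sequence caching: NEXTVAL values come from a
# pre-allocated cache of size `cache`; if the cached range is lost
# (dictionary-cache eviction, restart), the unused values become a gap.

class CachedSequence:
    def __init__(self, cache=20):
        self.cache = cache
        self.high_water = 0      # last value reserved persistently
        self.next_val = 1

    def nextval(self):
        if self.next_val > self.high_water:
            # refill: reserve another `cache` values persistently
            self.high_water += self.cache
        v = self.next_val
        self.next_val += 1
        return v

    def evict(self):
        # cached range lost: restart after the reserved high-water mark
        self.next_val = self.high_water + 1

seq = CachedSequence(cache=20)
first = [seq.nextval() for _ in range(5)]   # 1..5
seq.evict()                                  # values 6..20 are lost
after = seq.nextval()

print(first[-1], after)  # 5 21
```

The more often eviction happens, the more of each cached range is wasted, which matches the described correlation between dictionary-cache pressure and gap size.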
Categories: DBA Blogs

PDB Snapshot Copy/Carousel

Tom Kyte - Tue, 2026-04-21 23:56
Hello Chris/Connor, Hope you are doing well. We are using ExaC@C with 19c databases at work and are exploring whether we can use the PDB Snapshot Carousel and/or PDB Snapshot Copy feature. It might be just me, but I am somewhat confused about the "art of the possible" while using ASM. PDB Snapshot Copy Process (Doc ID 2730771.1) appears to suggest that we can use the sparse disk group feature on Exadata to create either a PDB snapshot copy or a PDB Snapshot Carousel. However, ORA-65227 during pluggable database snapshot (Doc ID 3024542.1) appears to suggest that the feature is simply not supported in 19c and only available from 21c onwards. https://www.dbarj.com.br/en/2021/09/creating-a-snapshot-sparse-clone-from-a-different-release-update/ even appears to provide an example of how PDB Snapshot Copy can be used to patch a 19c database. So are we able to use PDB Snapshot Carousel with an ASM sparse disk group in ExaC@C by using the sparse disk group feature of ASM only (and not using any file system)? I am confused... Thanks in advance, Narendra
Categories: DBA Blogs

Flashback Schema

Tom Kyte - Tue, 2026-04-21 23:56
Hello Tom, I use Oracle Flashback features a lot. We have several development and test environments here, and the combination of flashback database + replay is priceless. My question is this: why don't we have a "flashback schema" feature? I know you can simulate that with PL/SQL, but that's just for tables. A schema is much more than that: PL/SQL code, grants, etc. If you consolidate databases into schemas inside a large machine, you lose the ability to flash them back; to maintain this ability you'll need virtualization (or pluggable databases :)). So, why was it never done? Is it impossible, and do I fail to see the reason? Thank you for your time.
Categories: DBA Blogs

Fuzzing PL/SQL

Pete Finnigan - Tue, 2026-04-21 23:56
I have not had a chance to write any detailed blogs of late, as we have been very busy with paying work and also working on our software products, as we released version 2025 of PFCLScan and version 2026 is very....[Read More]

Posted by Pete On 20/04/26 At 09:34 AM

Categories: Security Blogs

The day your ECM System became unusable

Yann Neuhaus - Tue, 2026-04-21 04:00

It started like any normal Tuesday.
No alarms. No warning signs. No urgent emails flooding inboxes before 8 AM.
Just a regular working day, until people tried to open documents.
At first, it was a single complaint.
“Hey, I can’t access the invoice archive.”
Then another.
“The HR files are taking forever to load.”
Then ten more.
“Nothing is opening.”
And within an hour, the entire ECM system, the one everyone depended on every single day, had become nearly unusable.

The situation

This wasn’t just a storage platform.
It was the operational backbone of the company.
Contracts, invoices, compliance records, procedures, employee files, approval workflows, audit trails, everything lived there.
Teams across finance, HR, legal, procurement, and operations relied on it constantly.
If the ECM system slowed down, business slowed down.
If it stopped… business stopped.
And that morning, it stopped.

What went wrong?

The issue had been building quietly for months.
Nobody noticed because the pain came slowly.
A few extra seconds to retrieve a file, a delayed workflow here, a failed search there.
People adapted. They worked around it.
IT assumed it was “normal system aging.”
But underneath, the real problem was growing:

  • No proper retention policy.
  • No archive strategy.
  • No performance monitoring.
  • No governance around content growth.

Years of duplicate files and outdated versions of documents no one needed.
Massive email attachments stored for nothing, and expired records that should have been deleted years ago.
With all of that, the repository had become a digital landfill.
As a result, storage became bloated, indexes became overloaded, and search performance collapsed. Backups also took a dangerous amount of time.
Finally, one morning, the system simply couldn’t keep up anymore.
Not because of one dramatic failure, but rather hundreds of small, ignored ones.

The impact

The finance department could not process payments, the HR department could not retrieve employee documentation, and the legal department could not access signed contracts.
In practical terms, it stalled approvals, delayed customer responses, and froze audit preparation.
Executives wanted answers, users wanted miracles, and IT needed time.
No one had enough of it.
Emergency meetings replaced actual work.
Consultants were called in.
Recovery plans were hastily put together.
Suddenly, budgets appeared for problems that had been ignored for years.
But the real cost wasn’t just technical; it was a loss of trust.
People stopped trusting the system.
Once users lose confidence in your ECM platform, rebuilding that trust is far more difficult than resolving storage issues.

The lesson

Most ECM failures don’t happen overnight.
They happen silently, slowly, and politely.
Then, one day, they become impossible to ignore.
Governance may feel boring until disaster makes it urgent.
Retention policies feel optional until storage becomes a crisis.
Performance reviews feel like “later” work until later arrives.

An ECM system doesn’t fail because it stores too much.
The system fails because no one decides what shouldn't be stored.

The lesson was painful.
You don’t manage an ECM system when it breaks.
You manage it so it never gets there.
By the time everyone notices, it’s already too late.

Unfortunately, this scenario isn’t rare. As consultants at dbi services, we’re often called in after the damage has been done. While our job is to find solutions, it’s still better to act before a crisis hits. That way, we have time to implement strategies without rushing.

The article The day your ECM System became unusable first appeared on the dbi Blog.

Gemma 4 for Structured Data Extraction: Can It Beat Qwen 3.5?

Andrejus Baranovski - Tue, 2026-04-21 02:36
In this video, I put Gemma 4 to the test on a real-world task — extracting structured data from bank statements — and benchmark it head-to-head against Mistral's Ministral and Qwen 3.5.

I run both the MoE and Dense variants of Gemma 4 to see how architecture affects accuracy on financial documents, then compare the results side-by-side.

My takeaway: Gemma 4 holds its own and performs on par with Qwen 3.5 — a strong result for local structured extraction workflows.

 

PostgreSQL 19: Importing statistics from remote servers

Yann Neuhaus - Mon, 2026-04-20 03:15

Usually we do not see many foreign data wrappers being used by our customers. Most of them use the foreign data wrapper for Oracle to fetch data from Oracle systems; some use the foreign data wrapper for files, but that's mostly it. Only one customer (that I am aware of) actually uses the foreign data wrapper for PostgreSQL, which obviously connects PostgreSQL to PostgreSQL. Some foreign data wrappers allow for collecting optimizer statistics on foreign tables, and the foreign data wrappers for Oracle and PostgreSQL are examples of this. These local statistics are better than nothing, but you need to take care that they are up to date, and for that you need a fresh copy of the statistics over the remote data. PostgreSQL 19 will come with a solution for this in the foreign data wrapper for PostgreSQL. Actually, the solution is not in the foreign data wrapper itself but in the underlying framework, and postgres_fdw can use it from version 19 on.

To look at this we need a simple setup, so we initialize two new PostgreSQL 19 clusters and connect them with postgres_fdw:

postgres@:/home/postgres/ [pgdev] initdb --version
initdb (PostgreSQL) 19devel
postgres@:/home/postgres/ [pgdev] initdb --pgdata=/var/tmp/pg1
postgres@:/home/postgres/ [pgdev] initdb --pgdata=/var/tmp/pg2
postgres@:/home/postgres/ [pgdev] echo "port=8888" >> /var/tmp/pg1/postgresql.auto.conf 
postgres@:/home/postgres/ [pgdev] echo "port=8889" >> /var/tmp/pg2/postgresql.auto.conf 
postgres@:/home/postgres/ [pgdev] pg_ctl --pgdata=/var/tmp/pg1/ start
postgres@:/home/postgres/ [pgdev] pg_ctl --pgdata=/var/tmp/pg2/ start
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "create extension postgres_fdw"
CREATE EXTENSION
postgres@:/home/postgres/ [pgdev] psql -p 8889 -c "create table t ( a int, b text, c timestamptz )"
CREATE TABLE
postgres@:/home/postgres/ [pgdev] psql -p 8889 -c "insert into t select i, md5(i::text), now() from generate_series(1,1000000) i"
INSERT 0 1000000
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "create server srv_pg2 foreign data wrapper postgres_fdw options(port '8889', dbname 'postgres')"
CREATE SERVER
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "create user mapping for postgres server srv_pg2 options (user 'postgres', password 'postgres')"
CREATE USER MAPPING
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "create foreign table ft (a int, b text, c timestamptz) server srv_pg2 options (schema_name 'public', table_name 't')"
CREATE FOREIGN TABLE
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select count(*) from ft"
  count  
---------
 1000000
(1 row)

What we have now is one table in the cluster on port 8889, and this table is attached as a foreign table in the cluster on port 8888.

We already have statistics on the source table in the cluster on port 8889:

postgres@:/home/postgres/ [pgdev] psql -p 8889 -c "select reltuples::bigint from pg_class  where relname = 't'"

 reltuples 
-----------
   1000000
(1 row)

… but we do not have any statistics on the foreign table in the cluster on port 8888:

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select reltuples::bigint from pg_class  where relname = 'ft'"

 reltuples 
-----------
        -1

(1 row)

Only after manually analyzing the foreign table do the statistics show up:

postgres@:/home/postgres/ [DEV] psql -p 8888 -c "analyze ft"
ANALYZE
postgres@:/home/postgres/ [DEV] psql -p 8888 -c "select reltuples::bigint from pg_class  where relname = 'ft'"

 reltuples 
-----------
   1000000
(1 row)

The issue that can arise with these local statistics is that they probably become outdated when the source table is modified:

postgres@:/home/postgres/ [pgdev] psql -p 8889 -c "insert into t select i, md5(i::text), now() from generate_series(1000001,2000000) i"
INSERT 0 1000000
postgres@:/home/postgres/ [DEV] psql -p 8889 -c "select reltuples::bigint from pg_class  where relname = 't'"

 reltuples 
-----------
   2000000
(1 row)

postgres@:/home/postgres/ [DEV] psql -p 8888 -c "select reltuples::bigint from pg_class  where relname = 'ft'"

 reltuples 
-----------
   1000000
(1 row)

As you can see, the row counts no longer match. Once the local statistics are gathered, we again have the same picture on both sides:

postgres@:/home/postgres/ [DEV] psql -p 8888 -c "analyze ft"
ANALYZE
postgres@:/home/postgres/ [DEV] psql -p 8888 -c "select reltuples::bigint from pg_class  where relname = 'ft'"

 reltuples 
-----------
   2000000
(1 row)

One way to avoid this issue, even before PostgreSQL 19, is to tell postgres_fdw to run analyze on the remote table and to use those statistics:

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "alter foreign table ft options ( use_remote_estimate 'true' )"

In this case the local statistics will not be used, but of course this comes with the overhead of the additional analyze on the remote side.

From PostgreSQL 19 on, there is another option:

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "alter foreign table ft options ( restore_stats 'true' )"
ALTER FOREIGN TABLE

This option tells postgres_fdw to import the statistics from the remote side and store them locally. If that fails, it falls back to running analyze as above; the commit message explains this nicely:

Add support for importing statistics from remote servers.

Add a new FDW callback routine that allows importing remote statistics
for a foreign table directly to the local server, instead of collecting
statistics locally.  The new callback routine is called at the beginning
of the ANALYZE operation on the table, and if the FDW failed to import
the statistics, the existing callback routine is called on the table to
collect statistics locally.

Also implement this for postgres_fdw.  It is enabled by "restore_stats"
option both at the server and table level.  Currently, it is the user's
responsibility to ensure remote statistics to import are up-to-date, so
the default is false.

As usual, thanks to all involved.

The article PostgreSQL 19: Importing statistics from remote servers first appeared on the dbi Blog.

GoldenGate 26ai Logs Explained

Yann Neuhaus - Mon, 2026-04-20 01:36

GoldenGate logs are a powerful source of information when troubleshooting or analyzing your deployments. However, there is at least one log file per service in the Microservices Architecture. This means that for a GoldenGate setup with multiple deployments, you can quickly have tens of log files to search for information. In that context, knowing where to look is important, especially if you are upgrading from a Classic Architecture where you only had to check the ggserr.log file.

REST API Log Files

The restapi.log files are among the most important logs because of the amount of information they provide. They allow you to completely understand what is happening behind the scenes. It takes time to learn how to analyze them, but I wrote a dedicated article on REST API log analysis to help you through this. There is one restapi.log file per deployment, because REST API calls are always made to a specific deployment. This is also true when using a reverse proxy.

These restapi.log files are also a source of information for other GoldenGate log files. For instance, REST API logs on the AI Service will be cleaned and filtered before ending up in the log dedicated to the AI Service. This is another reason why you should understand the content of these restapi.log files.

Here is a log example from a restapi.log file. Again, for more details, read the blog linked above.

2026-04-09 19:46:36.789+0000 ERROR|RestAPI.adminsrvr | Request #1581: {
    "context": {
        "httpContextKey": 140353514441552,
        "verbId": 7,
        "verb": "PATCH",
        "originalVerb": "PATCH",
        "uri": "/services/v2/extracts/EXTSRC",
        "protocol": "http",
        "headers": {
            "X-OGG-Context": "services/ogg_test_01/adminsrvr",
            "X-OGG-Deployment": "ogg_test_01",
            "X-OGG-Service": "adminsrvr",
            "X-OGG-Version": "v2",
            "X-OGG-Resource": "extracts/EXTSRC",
            "X-Real-IP": "**",
            "X-Forwarded-For": "**",
            "X-Forwarded-Host": "vmogg",
            "X-Forwarded-Proto": "https",
            "X-Forwarded-Server": "oggvm1",
            "X-SSL-Client-Verify": "NONE",
            "Connection": "keep-alive",
            "Host": "vmogg",
            "Content-Length": "75",
            "User-Agent": "python-requests/2.32.3",
            "Accept-Encoding": "gzip, deflate",
            "Accept": "application/json",
            "Content-Type": "application/json",
            "Cookie": "** Masked **",
            "Authorization": "** Masked **",
            "X-OGG-Requestor-Id": "",
            "X-OGG-Feature-List": ""
        },
        "host": "vmogg",
        "securityEnabled": false,
        "authorization": {
            "authUserName": "ogg",
            "authPassword": "** Masked **",
            "authMode": "Basic",
            "authUserRole": "Security"
        },
        "requestId": 4,
        "uriTemplate": "/services/{version}/extracts/{extract}"
    },
    "content": {
        "managedProcessSettings": "ogg:managedProcessSettings:NonExistingProfile"
    },
    "isScaRequest": true,
    "parameters": {
        "uri": {
            "extract": "EXTSRC",
            "version": "v2"
        }
    }
}
Response: {
    "context": {
        "httpContextKey": 140353514441552,
        "requestId": 4,
        "code": "404 Not Found",
        "headers": {
            "Content-Type": "application/json",
            "Set-Cookie": "** Masked **"
        },
        "Content-Type": "application/json",
        "contentType": "application/json"
    },
    "isScaResponse": true,
    "content": {
        "$schema": "api:standardResponse",
        "links": [
            {
                "rel": "canonical",
                "href": "https://vmogg/services/ogg_test_01/adminsrvr/v2/extracts/EXTSRC",
                "mediaType": "application/json"
            },
            {
                "rel": "self",
                "href": "https://vmogg/services/ogg_test_01/adminsrvr/v2/extracts/EXTSRC",
                "mediaType": "application/json"
            }
        ],
        "messages": [
            {
                "$schema": "ogg:message",
                "title": "The managed process settings profile 'ogg:managedProcessSettings:NonExistingProfile' does not exist.",
                "code": "OGG-08122",
                "severity": "ERROR",
                "issued": "2026-04-09T19:46:36Z",
                "type": "https://docs.oracle.com/en/middleware/goldengate/core/23.26/error-messages/"
            },
            {
                "$schema": "ogg:message",
                "title": "The item type  with name 'ogg:managedProcessSettings:NonExistingProfile' does not exist.",
                "code": "OGG-12029",
                "severity": "INFO",
                "issued": "2026-04-09T19:46:36Z",
                "type": "https://docs.oracle.com/en/middleware/goldengate/core/23.26/error-messages/"
            }
        ]
    }
}
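Because every Request/Response pair in restapi.log is structured JSON, the interesting parts can be extracted programmatically once the body is isolated. Here is a minimal Python sketch (the field names are taken from the example above; the log format is not a documented API, so treat the structure as an assumption):

```python
# Minimal sketch: extract ERROR messages from a parsed restapi.log
# response body. Field names follow the example above; the log format
# is not a documented API, so adapt as needed.
import json

sample_response = """
{
    "content": {
        "messages": [
            {"code": "OGG-08122", "severity": "ERROR",
             "title": "The managed process settings profile does not exist."},
            {"code": "OGG-12029", "severity": "INFO",
             "title": "The item type with that name does not exist."}
        ]
    }
}
"""

def extract_errors(response_json: str) -> list[tuple[str, str]]:
    """Return (code, title) pairs for every ERROR-severity message."""
    body = json.loads(response_json)
    return [(m["code"], m["title"])
            for m in body.get("content", {}).get("messages", [])
            if m.get("severity") == "ERROR"]

print(extract_errors(sample_response))
# [('OGG-08122', 'The managed process settings profile does not exist.')]
```

The same filter can be applied to the `messages` array of any response body found in these logs.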
Admin Client Log File

Depending on your configuration, the adminclient log file might not always be in the same location. More specifically, you can have multiple adminclient.log files for a single GoldenGate installation. The reason is that the logs go to the location pointed to by your OGG_VAR_HOME environment variable. The adminclient.log can be located:

  • In your Service Manager var/log directory, if your GoldenGate user has the Service Manager OGG_VAR_HOME set in the environment.
  • In your deployment var/log directory, if you change the OGG_VAR_HOME environment variable when switching between deployments at the OS level. This is not recommended, as it can lead to multiple adminclient.log files and mixed-up logs between deployments.
  • In your GoldenGate binaries, if you misconfigured your environment and are calling the adminclient with an empty OGG_VAR_HOME variable. This is not ideal, as you could lose the log file after patching or upgrading.

There is not much to see in this file with the default logging configuration, but it can be used to troubleshoot connection issues with the adminclient.
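The location rule above can be summarized in a tiny Python sketch (purely illustrative; the `<OGG_HOME>` placeholder and the `log` subdirectory layout are assumptions based on the behaviour described):

```python
# Illustrative sketch of the rule above: adminclient.log lands under
# the directory OGG_VAR_HOME points to, so whatever OGG_VAR_HOME is
# set to when you start the adminclient decides which file gets the
# logs. The "<OGG_HOME>" placeholder stands for the misconfigured case
# where logs end up next to the binaries.
import os

def adminclient_log_path(env: dict) -> str:
    var_home = env.get("OGG_VAR_HOME", "")
    if not var_home:
        # Misconfigured environment: the log ends up under the binaries,
        # where it may be lost after patching or upgrading.
        return "<OGG_HOME>/adminclient.log"
    return os.path.join(var_home, "log", "adminclient.log")

print(adminclient_log_path({"OGG_VAR_HOME": "/u01/ogg/sm/var"}))
# /u01/ogg/sm/var/log/adminclient.log
```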

Service Manager Log Files

Let’s look at the var/log folder of the Service Manager deployment.

oggsm/var/log:
ServiceManager.log
restapi.log
AIService.log

In GoldenGate 26ai, there are three different logs for the Service Manager:

  • ServiceManager.log: In a healthy installation, this log file should only contain entries generated when services start or stop. But if your Administration Server doesn't start, this log is a good starting point for troubleshooting.
2026-04-09T19:20:28.124+0000 INFO | Task 'ogg_test_01:adminsrvr' started with process id 2505 (ogg_test_01:adminsrvr)
2026-04-09T19:20:28.159+0000 INFO | Task 'ServiceManager:AIService' started with process id 2515 (ServiceManager:AIService)
2026-04-09T19:20:28.182+0000 INFO | Task 'ogg_test_01:distsrvr' started with process id 2522 (ogg_test_01:distsrvr)
2026-04-09T19:20:28.215+0000 INFO | Task 'ogg_test_01:recvsrvr' started with process id 2542 (ogg_test_01:recvsrvr)
2026-04-09T19:20:28.234+0000 INFO | Task 'ogg_test_01:pmsrvr' started with process id 2553 (ogg_test_01:pmsrvr)
2026-04-09T19:20:29.243+0000 INFO | Installation topology updated (ServiceManager.Topology)
  • restapi.log: See the chapter above about REST API Log Files.
  • AIService.log: This is where logs from the AI Service are gathered. It mostly consists of a cleaned collection of the REST API logs from calls made against the AI Service. For instance, here is a log you can get when adding an AI Provider to the Service Manager.
2026-04-09 19:32:21.056 DATA |AIService      |AIService.RestApi             |  247 RequestHandlers.cpp      | 00000000-0000-0000-D5F8-000000000006> {"$schema":"api:standardResponse","links":[],"response":{"providers":[{"id":"1","name":"GeminiAiStudio","type":"gemini","baseUrl":"https://generativelanguage.googleapis.com/v1beta/models/gemini-flash-latest:generateContent","authentication":{"type":"api_key","configured":true},"links":[{"rel":"parent","href":"/api/v1/providers","mediaType":"application/json"},{"rel":"canonical","href":"/api/v1/providers/1","mediaType":"application/json"}]}],"metadata":{"fromCache":false,"providerCount":1,"lastUpdatedEpochMicroseconds":1775827941054749}},"messages":[]} (Thread 2)
Deployment Log Files

GoldenGate deployments have more log file types than the Service Manager. In the var/log folder of your deployments, you should have the following files:

ogg_test_01/var/log:
adminsrvr.log
distsrvr.log
ER-events.log
extract.log
ggserr.log
pmsrvr.log
recvsrvr.log
replicat.log
restapi.log
  • adminsrvr.log, distsrvr.log, recvsrvr.log and pmsrvr.log: These files contain all the information, warning, and error messages from actions or events in the main GoldenGate services (Administration, Distribution, Receiver and Performance Metrics Services). They are not as extensive as the restapi.log file, but they provide a really good overview of what is happening inside your deployment.
  • ER-events.log: This log file can be used for monitoring your GoldenGate deployments. All warning and error messages from your extracts and replicats are available here. It is sometimes verbose, but still very useful for monitoring.
2026-04-10T07:11:17.844+0000  ERROR   OGG-01668  Oracle GoldenGate Capture for Oracle, EXTA.prm:  PROCESS ABENDING.
2026-04-10T07:12:24.447+0000  WARNING OGG-06658  Oracle GoldenGate Capture for Oracle, EXTB.prm:  Unable to restore a failed connection to the database after 3 retries.
  • extract.log and replicat.log: In these log files, you can find status information, statistics, and transactions processed by your extracts and replicats. This does not provide fundamentally new information compared to other sources (adminclient, for instance), but it is a useful view when observing all processes at the same time.
2026-04-08T09:13:10.249+0000 INFO | INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, EXTA.prm:  Rolling over remote file PDB1/aa000000321. (main)
2026-04-08T09:14:10.540+0000 INFO | INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, EXTB.prm:  Rolling over remote file PDB1/ab000000645. (main)
2026-04-08T09:14:32.963+0000 INFO | INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, EXTC.prm:  Rolling over remote file PDB1/ac000000128. (main)
  • ggserr.log: This log serves the same purpose as in the old Classic Architecture, containing all information about your deployment. Contrary to what its name suggests, it does not only contain the errors happening in your deployment.
  • restapi.log: See the chapter above about REST API Log Files.
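Because the ER-events.log line format is regular, it lends itself to simple monitoring scripts. Here is a hedged Python sketch that parses the two sample lines shown above (the column layout is inferred from those samples and may differ between GoldenGate versions):

```python
# Sketch of a simple ER-events.log parser for monitoring, based on the
# sample lines above. The column layout is assumed from those samples
# and may differ between GoldenGate versions.
import re

LINE_RE = re.compile(
    r"^(?P<ts>\S+)\s+(?P<severity>ERROR|WARNING)\s+(?P<code>OGG-\d+)\s+(?P<msg>.*)$"
)

def parse_er_events(text: str) -> list[dict]:
    """Return one dict per WARNING/ERROR line, with ts, severity, code, msg."""
    events = []
    for line in text.splitlines():
        m = LINE_RE.match(line)
        if m:
            events.append(m.groupdict())
    return events

sample = """\
2026-04-10T07:11:17.844+0000  ERROR   OGG-01668  Oracle GoldenGate Capture for Oracle, EXTA.prm:  PROCESS ABENDING.
2026-04-10T07:12:24.447+0000  WARNING OGG-06658  Oracle GoldenGate Capture for Oracle, EXTB.prm:  Unable to restore a failed connection to the database after 3 retries.
"""

for ev in parse_er_events(sample):
    print(ev["severity"], ev["code"])
# ERROR OGG-01668
# WARNING OGG-06658
```

Feeding such parsed events into your monitoring tool of choice is then straightforward.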
GoldenGate Logs in the Web UI

In the GoldenGate web UI, you can access, filter, or query some of the logs listed above. From the Service Manager, the Diagnosis Log tab displays the ServiceManager.log file with filtering options.

You can also quickly enable debug logs for the Service Manager, which is usually only needed when Oracle Support requests it. This will generate a ServiceManager-debug.log file in your var/log directory, which can be downloaded from the web UI and sent to Oracle Support or your dbi GoldenGate expert for analysis.

When connected to a deployment in the web UI, you can also view different logs.

  • From the home page, the Critical Events view displays the content of the ER-events.log file.
  • From the Services page, you have access to the Diagnosis Log of each service. They will respectively display the adminsrvr.log, distsrvr.log, recvsrvr.log and pmsrvr.log files. Similar to what I just explained, you can also enable debug logs temporarily if needed.

The article GoldenGate 26ai Logs Explained first appeared on the dbi Blog.

It's Been a Minute...

Chet Justice - Sat, 2026-04-18 20:37

 

Hi.

It's been a while. 

Not sure if this is a one-off or not, but I'm going to give it another go. I miss writing (publicly).

Like, this is super awkward. "What do I say?"

"How are you?"

"Things are good."

"How are you?"

Categories: BI & Warehousing

Create Distribution Paths in NGINX-Secured GoldenGate 26ai

Yann Neuhaus - Fri, 2026-04-17 01:18

Setting up a distribution path is an important step when connecting two GoldenGate deployments. However, there are so many connection options that it's hard to know how to proceed. In this blog, I will show how to connect two deployments secured with NGINX.

Here are the prerequisites before attempting the distribution path creation:

  • Having two servers, each running a GoldenGate Microservices Architecture. I will use the latest version here, 26ai.
  • Having a running extract on the source setup.
  • Port openings: port 443 should be open between the two deployments.

In my environment, I have the following:

  • Source environment: oggvm1, with a deployment named ogg_test_01, a CDB01 with a PDB1 containing an APP_PDB1 schema and a table1 table.
  • Target environment: oggvm2, with the equivalent deployment ogg_test_02, CDB02, PDB2, APP_PDB2 schema, and a table2 table.

With all of this in mind, here are the three steps needed to create a working distribution path between two GoldenGate deployments secured with NGINX:

  • Create a Path Connection
  • Add the certificates used in NGINX to secure the target deployment to the source’s certificate management store.
  • Create and start the distribution path.
Path Connection Creation

To open a connection to the target deployment ogg_test_02, the source deployment needs a path connection. As explained in a previous blog, path connections are aliases of an existing user on a target deployment.

It is recommended to separate roles and not use the administrator account of your target deployment, so let’s first create the user.

  • On ogg_test_02 (target deployment), go to the User Administration tab and add a new user. For the role, Oracle recommends using the Operator role, so there is no reason to use the higher-privileged Administrator or Security roles.
  • On ogg_test_01 (source deployment), create the path connection. Only the Userid and Password fields must match what you just created on the target. The alias is known only on the source side and doesn't have to match the username set on the target deployment. The alias defined here will only appear when choosing a connection during the distribution path creation.
Certificate Management

Because your deployments are secured with NGINX, you have to make sure that the certificates being used by one deployment are recognized by the other deployment. To do so, on the source only, you have to register the root certificate authority of the target deployment.

This can only be done at the Service Manager level. In the Service Manager web UI, go to Certificate Management and add a CA Certificate in the source deployment ogg_test_01. You have two options here.

  • If the CA only needs to be registered on this specific deployment, you can register it with the Local option. This is the more secure option, but it is very often not needed.
  • If the CA registration should be shared with other deployments, you can register it with the Shared option.

Paste the root certificate file used to secure your second deployment. The name used must be unique but will not be used anywhere else in the process.

How should I register a certificate chain?

If the certificate file contains multiple certificates, GoldenGate doesn’t allow you to register them in one go. Instead, you need to break down the file and register each part individually. Again, the name and order in which you register the certificates do not matter, except for management purposes. I described the issue in more detail in a blog about the OGG-30007 error.
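Splitting such a certificate bundle can be scripted instead of done by hand. Here is a minimal Python sketch, assuming a standard PEM layout with BEGIN/END CERTIFICATE markers:

```python
# Minimal sketch: split a PEM bundle into individual certificates so
# each one can be registered separately in Certificate Management.
# Assumes standard "-----BEGIN CERTIFICATE-----" PEM markers.
import re

def split_pem_chain(pem_text: str) -> list[str]:
    return re.findall(
        r"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----",
        pem_text,
        re.DOTALL,
    )

# Hypothetical two-certificate bundle (leaf + root) for illustration.
bundle = """\
-----BEGIN CERTIFICATE-----
MIIB...leaf...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB...root...
-----END CERTIFICATE-----
"""

certs = split_pem_chain(bundle)
print(len(certs))  # 2
```

Each element of `certs` can then be pasted (or POSTed) as its own CA certificate entry.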

Distribution Path Creation

We can now finally create the distribution path. On the source deployment, go to the Distribution Service Paths tab and configure the path as follows:

  • Path information: Specify the path name. If possible, make the target (and the source) visible in the name, to ease management when having multiple distribution paths.
  • Source options: Here, you should only select the extract at the source of this distribution path. The rest will be filled automatically or can stay with the default value.
  • Target options: This is where the configuration is specific to our setup. Because we are using an NGINX-secured deployment on the target, you must click Reverse proxy enabled and choose the wss target protocol.
  • Advanced, Filtering and Managed Options: Nothing here is specific to our setup, but you can of course customize these options as needed.
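For scripted setups, the same path creation can also be done through the Distribution Service REST API. The sketch below only builds a plausible payload; the exact schema must be checked against the REST API reference for your GoldenGate version, and the host names, port, trail names and path name are hypothetical. The one detail that matters for our NGINX setup is the wss:// target URI on port 443:

```python
# Hedged sketch of a distribution-path payload as it could be sent to
# the Distribution Service REST API. The exact schema should be
# verified against the GoldenGate REST API reference for your version;
# hosts, port, trail names and the path name are illustrative. The key
# point for an NGINX-secured target is the wss:// target URI.
import json

def build_dist_path_payload(source_trail: str, target_host: str,
                            target_deployment: str, target_trail: str) -> dict:
    return {
        "name": f"PATH_TO_{target_deployment.upper()}",
        "status": "running",
        "source": {"uri": f"trail://localhost:9002/services/v2/sources?trail={source_trail}"},
        # wss:// on port 443 because the target sits behind NGINX (TLS)
        "target": {"uri": f"wss://{target_host}:443/services/{target_deployment}"
                          f"/v2/targets?trail={target_trail}"},
    }

payload = build_dist_path_payload("aa", "oggvm2", "ogg_test_02", "bb")
print(json.dumps(payload, indent=2))
```

The resulting dict would then be POSTed to the source deployment's Distribution Service with your usual HTTP client.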

And that’s it. Your deployments are now connected, and you should see the trail files on the target once you start the distribution path.

oracle@oggvm2:~/ ll $OGG_DEPLOYMENT_HOME/var/lib/data/PDB1
total 0
-rw-r-----. 1 oracle oinstall 0 Mar 22 07:34 bb000000000
The remote peer submitted a certificate that failed validation

If your distribution path doesn’t start and generates a “certificate that failed validation” error, it means that you incorrectly registered your certificates. Make sure that the target deployment’s certificates are registered on the source deployment’s service manager and not the other way around.

You can also try to use the certificate in an OGG_CLIENT_TLS_CAPATH environment variable on the source and connect with the adminclient to check if it’s working.

The article Create Distribution Paths in NGINX-Secured GoldenGate 26ai first appeared on the dbi Blog.

PostgreSQL 19: Online enabling of data checksums

Yann Neuhaus - Fri, 2026-04-17 01:00

Since PostgreSQL 18, released last year, checksums are enabled by default when a new cluster is initialized. This also means that you either need to explicitly disable them when you upgrade from a previous version of PostgreSQL, or you need to enable them in the old version you want to upgrade from. The reason is that pg_upgrade will complain if the old and new versions of PostgreSQL do not have the same setting.

Enabling and disabling checksums offline has been possible for several PostgreSQL versions using pg_checksums, but as mentioned, this will not work if the cluster is running:

postgres@:/home/postgres/ [181] pg_checksums --version
pg_checksums (PostgreSQL) 18.1 
postgres@:/home/postgres/ [181] pg_checksums --pgdata=$PGDATA
Checksum operation completed
Files scanned:   966
Blocks scanned:  2969
Bad checksums:  0
Data checksum version: 1  -> This means "enabled"
postgres@:/home/postgres/ [181] pg_checksums --pgdata=$PGDATA --disable
pg_checksums: error: cluster must be shut down

Even in PostgreSQL 19 this is still the same: you cannot use pg_checksums to enable or disable checksums while the cluster is running.

What will change in version 19 is that two new functions have been added, one for enabling checksums and one for disabling checksums:

postgres=# \dfS *checksums*
                                                        List of functions
   Schema   |           Name            | Result data type |                     Argument data types                      | Type 
------------+---------------------------+------------------+--------------------------------------------------------------+------
 pg_catalog | pg_disable_data_checksums | void             |                                                              | func
 pg_catalog | pg_enable_data_checksums  | void             | cost_delay integer DEFAULT 0, cost_limit integer DEFAULT 100 | func
(2 rows)

As mentioned in the commit message, this is implemented with background workers. To actually see those processes on the operating system, let's create some data so the workers really have something to do:

postgres=# create table t ( a int, b text, c timestamptz );
CREATE TABLE
postgres=# insert into t select i, md5(i::text), now() from generate_series(1,10000000) i;
INSERT 0 10000000

As this is PostgreSQL 19, checksums are currently enabled:

postgres=# show data_checksums;
 data_checksums 
----------------
 on
(1 row)

To disable that online, pg_disable_data_checksums is the function to use:

postgres=# select * from pg_disable_data_checksums();
 pg_disable_data_checksums 
---------------------------
 
(1 row)

postgres=# show data_checksums;
 data_checksums 
----------------
 off
(1 row)

To enable checksums online, pg_enable_data_checksums is the function to use. If you want to see the background workers, you can grep for them in a second session on the operating system:

-- first session, connected to PostgreSQL
postgres=# select pg_enable_data_checksums();
 pg_enable_data_checksums 
--------------------------
 
(1 row)

postgres=# show data_checksums ;
 data_checksums 
----------------
 on
(1 row)

-- second session, on the OS
postgres@:/home/postgres/postgresql/ [pgdev] watch "ps -ef | grep checksum | grep -v watch"
Every 2.0s: ps -ef | grep checksum | grep -v watch          pgbox.it.dbi-services.com: 09:49:20 AM
postgres    4931    2510  0 09:49 ?        00:00:00 postgres: pgdev: datachecksum launcher
postgres    4932    2510 25 09:49 ?        00:00:00 postgres: pgdev: datachecksum worker
postgres    4964    4962  0 09:49 pts/2    00:00:00 grep checksum

Because enabling checksums comes with some overhead, there is throttling control, as is already the case for autovacuum:

postgres=# select pg_enable_data_checksums(cost_delay=>1,cost_limit=>3000);
 pg_enable_data_checksums 
--------------------------
 
(1 row)

Very nice, thanks to all involved.

The article PostgreSQL 19: Online enabling of data checksums first appeared on the dbi Blog.

SQL macro with different table

Tom Kyte - Thu, 2026-04-16 14:40
Hi, I have a database with thousands of tables containing the same kind of information. I need to write a program to aggregate this information and thought about using a SQL macro.
<code>
-- This is a very simplified concept
create or replace function get_val (p_table_name varchar2)
  return varchar2 SQL_Macro
is
begin
  return 'select col1,col2 from p_table_name';
end;
/

select col1, col2
from get_val(t.table_name), table_list t;
-- table_list contains the list of tables to use
</code>
And it always tells me that the table doesn't exist. The documentation talks about DBMS_TF.TABLE_T, which works if you pass the table as the parameter (and not the table's name). How can I do that? Do I have to write a function returning the rows from the table? Thank you
Categories: DBA Blogs

user_tab_identity_cols.generation_type and BY DEFAULT ON NULL

Tom Kyte - Thu, 2026-04-16 14:40
<code>
SQL> select banner from v$version where rownum=1;

BANNER
--------------------------------------------------------------------------------
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production

SQL> create table t1(id int generated BY DEFAULT ON NULL as identity);

Table created.

SQL> create table t2(id int generated BY DEFAULT as identity);

Table created.

SQL> create table t3(id int generated ALWAYS as identity);

Table created.

SQL> select table_name, generation_type
  2  from user_tab_identity_cols utic
  3  where utic.table_name in ('T1', 'T2', 'T3');

TABLE_NAME  GENERATION
----------  ----------
T1          BY DEFAULT
T2          BY DEFAULT
T3          ALWAYS
</code>
Why doesn't user_tab_identity_cols.generation_type show "BY DEFAULT ON NULL" for T1? The behaviour differs from T2, so where can I see it (besides DBMS_METADATA)?
<code>
SQL> set long 5000 lines 300 pages 5000
SQL> select dbms_metadata.get_ddl('TABLE', table_name) from user_tables where table_name in ('T1', 'T2');

DBMS_METADATA.GET_DDL('TABLE',TABLE_NAME)
--------------------------------------------------------------------------------
CREATE TABLE "YYY"."T1"
   ("ID" NUMBER(*,0) GENERATED BY DEFAULT ON NULL AS IDENTITY MINVALUE 1
    MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 1 CACHE 20
    NOORDER NOCYCLE NOKEEP NOSCALE NOT NULL ENABLE
   ) SEGMENT CREATION DEFERRED
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  TABLESPACE "XXX"

CREATE TABLE "YYY"."T2"
   ("ID" NUMBER(*,0) GENERATED BY DEFAULT AS IDENTITY MINVALUE 1
    MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 1 CACHE 20
    NOORDER NOCYCLE NOKEEP NOSCALE NOT NULL ENABLE
   ) SEGMENT CREATION DEFERRED
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  TABLESPACE "XXX"
</code>
Categories: DBA Blogs

Reducing SS contention with Local Temp Tablespace fails

Tom Kyte - Thu, 2026-04-16 14:40
We use Business Objects against a database set up just for generating reports. This is an Exadata RAC with 2 nodes and ASM storage, and all of the BO sessions log in/connect as the same Oracle user. During our last month-end, which coincided with quarter-end, we saw many sessions with "enq: SS - contention" wait events. We also intermittently saw "buffer busy waits" as they all waited for access to the shared temporary tablespace, as indicated by the P1 value. Searching for answers on how to reduce these wait events led us to local temporary tablespaces. So we set up a local temp tablespace in our development environment...
<code>
CREATE LOCAL TEMPORARY TABLESPACE FOR ALL temp_reporting_local
  TEMPFILE '+DTADVQ1/.../TEMPFILE/temp_reporting_local.dbf' SIZE 10G
  AUTOEXTEND OFF
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 2M;
</code>
Assigned it to REPORT_USER as its default local temp tablespace...
<code>
ALTER USER report_user LOCAL TEMPORARY TABLESPACE temp_reporting_local;

SELECT username, default_tablespace, temporary_tablespace, local_temp_tablespace
FROM dba_users
WHERE username = 'REPORT_USER';

USERNAME     DEFAULT_TABLESPACE  TEMPORARY_TABLESPACE  LOCAL_TEMP_TABLESPACE
REPORT_USER  TBE_REPORT_USER_01  TEMP_REPORTING        TEMP_REPORTING_LOCAL
</code>
Then we ran some large queries while logged in as REPORT_USER. The query fails with the same error message as before: "ORA-01652: unable to extend temp segment by 256 in tablespace TEMP_REPORTING". Monitoring free space, the local temps do not appear to have been used at all.
<code>
SELECT tablespace_name, inst_id,
       tablespace_size/1024/1024 AS total_mb,
       allocated_space/1024/1024 AS allocated_mb,
       free_space/1024/1024 AS free_mb
FROM dba_temp_free_space
WHERE tablespace_name LIKE 'TEMP_REPORTING%';

TABLESPACE_NAME       INST_ID  TOTAL_MB  ALLOCATED_MB  FREE_MB
TEMP_REPORTING                 10240     10240         0 (assumed to be zero at the instant the report died)
TEMP_REPORTING_LOCAL  1        10240     2             10238
TEMP_REPORTING_LOCAL  2        10240     2             10238
</code>
A hash join exceeded the 10GB of shared temp but did not use any of the local temp. So, how can we get these queries to use local temp once shared temp "overflows"? I'm thinking it is because it cannot split the hashed results between the two, which makes me wonder how it will ever use local temp tablespaces. Second question: why did they not set it up to use the local temp first and then overflow into the shared temp, if needed? That seems like a more logical approach if you want to mitigate these wait events.
Categories: DBA Blogs
