Feed aggregator

PostgreSQL 19: pg_plan_advice

Yann Neuhaus - Fri, 2026-03-13 02:16

In our performance tuning workshop, especially when attendees have an Oracle background, one question pops up every time: how can I use optimizer hints in PostgreSQL? Up until today, there were three answers to this.

Well, now we need to update the workshop material, because this was committed for PostgreSQL 19 yesterday. The feature is not called “hints”, but it does exactly that: tell the optimizer what to do, because you (might) know better and you want a specific plan for a given query. Just be aware that this comes with the same issues as listed here.

The new feature comes as an extension so you need to enable it before you can use it. There are three ways to do this:

-- current session
postgres=# load 'pg_plan_advice';
LOAD

-- for all new sessions
postgres=# alter system set session_preload_libraries = 'pg_plan_advice';
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

-- instance wide
postgres=# alter system set shared_preload_libraries = 'pg_plan_advice';
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)
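Whichever variant you pick, you can verify the setting afterwards. Note that session_preload_libraries only affects sessions started after the reload, so reconnect before checking:

```sql
-- In a new session, confirm the library will be loaded
show session_preload_libraries;
```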

To see this in action, let’s create a small demo setup:

postgres=# create table t1 ( a int primary key, b text );
CREATE TABLE
postgres=# create table t2 ( a int, b int references t1(a), v text );
CREATE TABLE
postgres=# insert into t1 select i, md5(i::text) from generate_series(1,1000000) i;
INSERT 0 1000000
postgres=# insert into t2 select i, i, md5(i::text) from generate_series(1,1000000) i;
INSERT 0 1000000
postgres=# insert into t2 select i, 1, md5(i::text) from generate_series(1000000,2000000) i;
INSERT 0 1000001

A simple parent-child relation: every value of the parent table has a single matching child row, except for the value one, which got an additional one million and one matching rows from the second insert.
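To double-check that skew, a quick aggregate on the child table does it (b = 1 shows up 1,000,002 times in total: once from the first insert plus 1,000,001 times from the second):

```sql
-- Top matching values in the child table: everything has count 1
-- except b = 1, which the second insert duplicated massively
select b, count(*)
  from t2
 group by b
 order by count(*) desc
 limit 3;
```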

EXPLAIN comes with a new option to generate the so-called advice string for a given query, e.g.:

postgres=# explain (plan_advice) select * from t1 join t2 on t1.a = t2.b;
                                        QUERY PLAN                                        
------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.43..111805.81 rows=2000001 width=78)
   ->  Seq Scan on t2  (cost=0.00..48038.01 rows=2000001 width=41)
   ->  Memoize  (cost=0.43..0.47 rows=1 width=37)
         Cache Key: t2.b
         Cache Mode: logical
         Estimates: capacity=29629 distinct keys=29629 lookups=2000001 hit percent=98.52%
         ->  Index Scan using t1_pkey on t1  (cost=0.42..0.46 rows=1 width=37)
               Index Cond: (a = t2.b)
 JIT:
   Functions: 8
   Options: Inlining false, Optimization false, Expressions true, Deforming true
 Generated Plan Advice:
   JOIN_ORDER(t2 t1)
   NESTED_LOOP_MEMOIZE(t1)
   SEQ_SCAN(t2)
   INDEX_SCAN(t1 public.t1_pkey)
   NO_GATHER(t1 t2)
(17 rows)

What you see here are advice tags; the full list of those tags is documented in the extension's documentation. First we have the join order, then a nested loop with memoize, a sequential scan on t2, an index scan on the primary key of the parent table, and finally an instruction that neither t1 nor t2 should appear under a Gather node.

This can be given as advice to the optimizer/planner:

postgres=# SET pg_plan_advice.advice = 'JOIN_ORDER(t2 t1) NESTED_LOOP_MEMOIZE(t1) SEQ_SCAN(t2) INDEX_SCAN(t1 public.t1_pkey) NO_GATHER(t1 t2)';
SET
postgres=# explain (plan_advice) select * from t1 join t2 on t1.a = t2.b;
                                        QUERY PLAN                                        
------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.43..111805.81 rows=2000001 width=78)
   ->  Seq Scan on t2  (cost=0.00..48038.01 rows=2000001 width=41)
   ->  Memoize  (cost=0.43..0.47 rows=1 width=37)
         Cache Key: t2.b
         Cache Mode: logical
         Estimates: capacity=29629 distinct keys=29629 lookups=2000001 hit percent=98.52%
         ->  Index Scan using t1_pkey on t1  (cost=0.42..0.46 rows=1 width=37)
               Index Cond: (a = t2.b)
 JIT:
   Functions: 8
   Options: Inlining false, Optimization false, Expressions true, Deforming true
 Supplied Plan Advice:
   SEQ_SCAN(t2) /* matched */
   INDEX_SCAN(t1 public.t1_pkey) /* matched */
   JOIN_ORDER(t2 t1) /* matched */
   NESTED_LOOP_MEMOIZE(t1) /* matched */
   NO_GATHER(t1) /* matched */
   NO_GATHER(t2) /* matched */
 Generated Plan Advice:
   JOIN_ORDER(t2 t1)
   NESTED_LOOP_MEMOIZE(t1)
   SEQ_SCAN(t2)
   INDEX_SCAN(t1 public.t1_pkey)
   NO_GATHER(t1 t2)
(24 rows)

Running the next explain with that advice shows you what you advised the planner to do and what was actually done. In this case all the advice matched and you get the same plan as before.

Once you play with, for example, the join order, the plan will change because you told the planner to do so:

postgres=# SET pg_plan_advice.advice = 'JOIN_ORDER(t1 t2)';
SET
postgres=# explain (plan_advice) select * from t1 join t2 on t1.a = t2.b;
                                    QUERY PLAN                                     
-----------------------------------------------------------------------------------
 Merge Join  (cost=323875.24..390697.00 rows=2000001 width=78)
   Merge Cond: (t1.a = t2.b)
   ->  Index Scan using t1_pkey on t1  (cost=0.42..34317.43 rows=1000000 width=37)
   ->  Materialize  (cost=318880.31..328880.31 rows=2000001 width=41)
         ->  Sort  (cost=318880.31..323880.31 rows=2000001 width=41)
               Sort Key: t2.b
               ->  Seq Scan on t2  (cost=0.00..48038.01 rows=2000001 width=41)
 JIT:
   Functions: 7
   Options: Inlining false, Optimization false, Expressions true, Deforming true
 Supplied Plan Advice:
   JOIN_ORDER(t1 t2) /* matched */
 Generated Plan Advice:
   JOIN_ORDER(t1 t2)
   MERGE_JOIN_MATERIALIZE(t2)
   SEQ_SCAN(t2)
   INDEX_SCAN(t1 public.t1_pkey)
   NO_GATHER(t1 t2)
(18 rows)

Really nice: there is now an official way to influence the planner using advice, but please be aware of the current limitations. Needless to say, you should use this with caution, because you can easily make things slower by advising a plan that is not optimal for a query.
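Also keep in mind that pg_plan_advice.advice is a regular session setting, so once you are done experimenting you can clear it like any other GUC:

```sql
-- Drop the supplied advice so the planner chooses freely again
RESET pg_plan_advice.advice;
```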

Thanks to all involved with this; it is really a great improvement.

The article PostgreSQL 19: pg_plan_advice appeared first on the dbi Blog.

Continuous data capture

Tom Kyte - Thu, 2026-03-12 19:47
I need to propagate changes on a few tables to an external system. It has to be near real time, and on my end the data will ultimately land on Kafka. The solution target is Oracle 19c, and it needs to be installed at multiple clients on premises.

Example problem: let's say I have a table of messages with a column called change_id, populated by a trigger from a sequence on insert and update. Theoretically, I always know which row was changed last. (I might avoid needing to track deletes; not sure yet.) Now I create a procedure to be run as a batch that collects the rows changed since the last run. First it gets max(change_id), then selects all rows between last_run_max_change_id and new_max_change_id, exports the data as JSON to another table, writes new_max_change_id as last_run_max_change_id, and commits. Another process will handle delivery of the JSON to where it needs to be. The problem is that another long-running transaction might have consumed lower sequence numbers but not yet committed when the batch ran, so those change_ids will never be exported. Another problem is that I don't have deleted rows.

Solution 1: GoldenGate replication, OpenLogReplicator, or something similar. I would have to convince all clients to commit to paying the GG licence, create tables as replication targets, export from those tables, and delete from them. The licence and getting all clients on board are difficult, because I need one solution for all of them. Security, stability, and maintenance concerns will also likely make clients want to reject such ideas, and every one of them has to be on board. I have also tried Oracle Streams on another project and had stability issues and ORA-600 errors that were never resolved.

Solution 2: use the SCN instead of change_id. SCNs (ora_rowscn) are not indexed, and selecting by SCN in a where clause on billions of rows is too slow.

Solution 3: Flashback. Have something like SELECT * FROM messages VERSIONS BETWEEN SCN :last_scn AND :current_scn; My concern is that if the program doesn't run for a while, for whatever problem and reason, the flashback data will be lost. I would need a backup solution.

Solution 4: an after insert/update/delete trigger on the messages table that writes to an export table. It handles insert, update, and delete, and nothing will be skipped. I select 10k ids ordered, export to JSON, delete the 10k records, and commit. I'm worried about big transactions having the additional load of writing to the export table, the per-row trigger overhead, the additional context switching, and index contention on the export table, with rows inserted by inserts and updates on the original table at the same time as rows are deleted by the export batch process. Exports will have to be ordered by change_id.

Conclusion: the only stable and data-consistent solution I can think of is solution 4, a trigger. But I'm worried about the overhead. Instead of a trigger, I could check the code and program additional inserts into the export log table, so the total number of updates might be lower, but not by much. A...
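To make solution 4 concrete, here is a minimal, hypothetical sketch of such a trigger and export table. The messages_exp table, messages_exp_seq sequence, and the id column are made-up names for illustration, not from the question:

```sql
-- Hypothetical sketch of "solution 4": a row-level trigger that records
-- every change in an export table, keyed by a sequence so the batch
-- exporter can read and delete in change_id order.
CREATE TABLE messages_exp (
  change_id  NUMBER PRIMARY KEY,
  op         VARCHAR2(1),   -- I / U / D
  msg_id     NUMBER,
  payload    CLOB           -- JSON, built here or at export time
);

CREATE SEQUENCE messages_exp_seq;

CREATE OR REPLACE TRIGGER trg_messages_export
AFTER INSERT OR UPDATE OR DELETE ON messages
FOR EACH ROW
BEGIN
  INSERT INTO messages_exp (change_id, op, msg_id)
  VALUES (messages_exp_seq.NEXTVAL,
          CASE WHEN INSERTING THEN 'I'
               WHEN UPDATING  THEN 'U'
               ELSE 'D' END,
          COALESCE(:NEW.id, :OLD.id));   -- :NEW is null on delete
END;
/
```

Because the export rows commit atomically with the base-table change, and the batch selects whatever is currently present in messages_exp and deletes only what it exported, a late-committing transaction's rows simply show up in a later batch instead of being skipped.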
Categories: DBA Blogs

Question about Concurrent Statistics Gathering and Maintenance Windows

Tom Kyte - Thu, 2026-03-12 19:47
Hello, I have a question regarding concurrent statistics gathering in Oracle 19c. Currently, my global preference is set to OFF:

SELECT DBMS_STATS.GET_PREFS('CONCURRENT') FROM DUAL; -- Result: OFF

I can enable it manually with:

BEGIN DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','ALL'); END; /

I understand that enabling CONCURRENT allows Oracle to gather statistics on multiple tables and (sub)partitions at the same time, potentially reducing the total duration. Oracle uses the Job Scheduler and Advanced Queuing to manage these concurrent jobs.

My question: is there a relationship between the maintenance windows (like WEEKEND_WINDOW or WEEKNIGHT_WINDOW) and concurrent statistics gathering? Specifically: when CONCURRENT is enabled, does Oracle automatically schedule these parallel stats jobs within the maintenance windows? Or is the maintenance window unrelated, and the concurrent gathering runs independently of it? Thank you for your guidance.
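As a starting point for such an investigation, one way to see the maintenance windows and whether the automatic optimizer stats task is enabled in each of them (assuming the usual DBA views are accessible) is:

```sql
-- Maintenance windows and the status of the auto optimizer stats
-- task within each of them
select window_name, autotask_status, optimizer_stats
  from dba_autotask_window_clients;
```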
Categories: DBA Blogs

Oracle View Becomes INVALID Intermittently Without DDL on Base Table

Tom Kyte - Thu, 2026-03-12 19:47
All, I have an Oracle view built on top of a partitioned table. Occasionally the view becomes INVALID, even though there are no DDL changes happening on the base table. The view becomes VALID again automatically when it is accessed or queried, but this behavior is causing issues in production. I'm trying to understand what could be causing the intermittent invalidation.

<code> create view my_view as SELECT dly_fct_id, acct_ref_id, bus_dt FROM my_table WHERE bus_dt = TO_DATE('01/01/2500','MM/DD/YYYY'); </code>

Note: the date above is only a placeholder. In reality this predicate changes dynamically based on the ETL run date.

Base table structure:

<code> CREATE TABLE my_table ( dly_fct_id NUMBER, acct_ref_id NUMBER, bus_dt DATE ) PARTITION BY RANGE (bus_dt) INTERVAL (NUMTODSINTERVAL(1,'DAY')) ( PARTITION p0 VALUES LESS THAN (TO_DATE('01-JAN-1900','DD-MON-YYYY')) ) COMPRESS; </code>

Constraint:

<code> ALTER TABLE my_table ADD CONSTRAINT xpk_my_table PRIMARY KEY (dly_fct_id, acct_ref_id) RELY; </code>

The table actually contains ~50 columns, but I only included the relevant ones here.

Additional details: the view does not become invalid daily, but it happens intermittently. There are no known DDL operations on the table except regular ETL data loads. The view becomes VALID again automatically when it is queried. This is happening in a production environment, so we want to understand the root cause.

Questions: What could cause a view to become INVALID intermittently without explicit DDL on the base table? Could this be related to partition maintenance, statistics gathering, or constraint changes? What system views or logs should we check to identify the root cause? Any guidance on where to investigate would be greatly appreciated.
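As a hypothetical first step in such an investigation (object names taken from the example above), the status and last DDL timestamp of the objects involved can be compared:

```sql
-- When were the view and its base table last touched, and what is
-- the current status of the view?
select object_name, object_type, status, last_ddl_time
  from dba_objects
 where object_name in ('MY_VIEW', 'MY_TABLE');
```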
Categories: DBA Blogs

Instance vs database

Tom Kyte - Thu, 2026-03-12 19:47
Can an instance exist without a database, and can a database exist without an instance?
Categories: DBA Blogs

Fast Large Table Extraction: Sparrow + dots.ocr to JSON

Andrejus Baranovski - Thu, 2026-03-12 07:56
Sparrow provides a table processing mode. It is optimized to handle large tables and comes with a separate template script (new templates can easily be added) to process dots.ocr markdown output into structured JSON with field mapping.

 

USERID Syntax Warning in GoldenGate Migration Utility

Yann Neuhaus - Thu, 2026-03-12 02:41

When running the GoldenGate migration utility, which I presented in a previous blog post, you might encounter the following warning about the USERID syntax:

WARNING: USERID/PASSWORD parameter is no longer supported and will be modified to use USERIDALIAS for the database credentials.

This is not really a surprise, since USERID is an old syntax that should not be used anymore. In fact, it is not even part of the latest versions of GoldenGate. Let’s see a very basic example of a replicat running in the GoldenGate 19c Classic Architecture.

GGSCI (ogg) 1> view params rep
REPLICAT rep
USERIDALIAS dbi_blog
MAP PDB1.APP_PDB1.SOURCE, PDB2.APP_PDB2.TARGET,
COLMAP (
    COL_SOURCE_USER = COL_TARGET_USER,
    COL_SOURCE_USERID = COL_TARGET_USERID
);

Nothing here should be a cause for concern, because the database connection is done with the USERIDALIAS syntax. Yet, when running the migration utility in dryrun mode, I get the following warning:

Migration of Extract E2T Completed Successfully.

Parameter file for REPLICAT REP has the following warnings:
        WARNING: USERID/PASSWORD parameter is no longer supported and will be modified to use USERIDALIAS for the database credentials.

Migrating REPLICAT REP to http://oggma:port.

Parameter File rep.prm Saved Successfully.

Checkpoint File(s) Copied and Converted Successfully.

REPLICAT REP patched.

...

Migration Summary

    Migration of Replicat REP ..............................: Successful
...

It is technically not an error, and the migration utility seems to have no problem migrating this replicat to the new Microservices Architecture. However, a question remains: will USERID be replaced in the process? Of course, we do not want all USERID occurrences to be replaced with USERIDALIAS if they are not connection keywords.

Let’s run the migration utility, for real this time, to see what happens. The output of the migration utility is exactly the same as before. The process is migrated with a warning on the USERID syntax.

WARNING: USERID/PASSWORD parameter is no longer supported and will be modified to use USERIDALIAS for the database credentials.

And if we look at the migrated parameter file:

> grep USERID $OGG_DEPLOYMENT_HOME/etc/conf/ogg/rep.prm
USERIDALIAS dbi_blog
    COL_SOURCE_USER = COL_TARGET_USER,
    COL_SOURCE_USERID = COL_TARGET_USERID

In this specific case, despite the warning on USERID, the migration utility did not change the parameter file. But of course, if you get the warning, you should always check the migrated parameter file before restarting your GoldenGate processes:

> diff $OLD_OGG_HOME/dirprm/rep.prm $OGG_DEPLOYMENT_HOME/etc/conf/ogg/rep.prm

The article USERID Syntax Warning in GoldenGate Migration Utility appeared first on the dbi Blog.

Dctm – IDS Source 16.7.5 config.bin crash during execution

Yann Neuhaus - Tue, 2026-03-10 13:56

Around six months ago, I faced a confusing issue with IDS Source 16.7.5 where the “config.bin” executable always crashed when I tried to run it. The installation of IDS binaries itself completed successfully without any errors. However, the configurator, which is supposed to set up the required objects inside the Repository, consistently crashed.

1. Environment context and IDS upgrade

This Documentum environment had just been upgraded to 23.4. The next step was to upgrade the associated IDS component. The latest version of IDS compatible with recent Documentum versions is 16.7.5.

The execution of the “idsLinuxSuiteSetup.bin” installer properly extracted all binaries and deployed the WebCache application in its Tomcat server. To quickly verify that, you can check the version properties file and try starting/stopping the Tomcat instance of the IDS. On my side, there were no problems with that.

To verify the installed version of IDS and ensure that the configurator was also updated:

[dmadmin@cs-0 ~]$ cd $DM_HOME/webcache
[dmadmin@cs-0 webcache]$
[dmadmin@cs-0 webcache]$ cat version/version.properties
#Please don't remove this values
#Fri Oct 10 09:52:49 UTC 2025
INSTALLER_NAME=IDS
PRODUCT_VERSION=16.7.5
[dmadmin@cs-0 webcache]$
[dmadmin@cs-0 webcache]$ ls -l install/config.bin
-rwxrwxr-x 1 dmadmin dmadmin 54943847 Oct 19  2024 install/config.bin
[dmadmin@cs-0 webcache]$

The above confirms that WebCache was properly updated to version 16.7.5 on October 10. It also confirms that the “config.bin” is fairly recent (Q4 2024), i.e. much more recent than the old 16.7.4 file.

2. Running the IDS configurator in silent mode

My next step was therefore to execute the configurator, in silent mode as always, as I have done for all previous IDS installations and configurations. I have not written a blog about IDS silent installation yet, but I have done so for several other components; for example, you can refer to this post for the latest one published.

The silent properties file for the IDS Source configurator is quite simple, as it only requires the Repository name to configure:

[dmadmin@cs-0 webcache]$ cat ${install_file}
### Silent installation response file for IDS configurator
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Configuration parameters
DOC_BASE_NAME=REPO_01

[dmadmin@cs-0 webcache]$

Initially, I simply executed “config.bin“. Since it crashed and there was absolutely nothing in the logs, I had to run it again with the DEBUG flag enabled:

[dmadmin@cs-0 webcache]$ $DM_HOME/webcache/install/config.bin -DLOG_LEVEL=DEBUG -f ${install_file}
Preparing to install
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...

Launching installer...

Picked up JAVA_TOOL_OPTIONS: -Djdk.util.zip.disableZip64ExtraFieldValidation=true -Djava.locale.providers=COMPAT,SPI --add-exports=java.base/sun.security.provider=ALL-UNNAMED --add-exports=java.base/sun.security.pkcs=ALL-UNNAMED --add-exports=java.base/sun.security.x509=ALL-UNNAMED --add-exports=java.base/sun.security.util=ALL-UNNAMED --add-exports=java.base/sun.security.tools.keytool=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
[dmadmin@cs-0 webcache]$
[dmadmin@cs-0 webcache]$ echo $?
1
[dmadmin@cs-0 webcache]$
3. Investigating the logs

As shown above, the execution failed, as the return code was “1“. With DEBUG enabled and after checking the generated files, I found the following:

[dmadmin@cs-0 webcache]$ find . -type f -mmin -20 -ls
92475248907    380 -rw-rw-r--   1  dmadmin  dmadmin    384222 Oct 10 11:58 ./install/logs/install.log
92470810541      4 -rw-rw-r--   1  dmadmin  dmadmin       219 Oct 10 11:57 ./install/installer.properties
92475252084     12 -rwxrwxrwx   1  dmadmin  dmadmin     10564 Oct 10 11:57 ./install/config_log/OpenText_Documentum_Interactive_Delivery_Services_Configuration_Install_10_10_2025_11_57_42.log
[dmadmin@cs-0 webcache]$
[dmadmin@cs-0 webcache]$ grep -iE "_E_|_F_|ERROR|WARN|FATAL" install/logs/install.log
TYPE ERROR_TYPE 0000000000000000 0 0 0
13:24:12,192 DEBUG [main] com.documentum.install.shared.common.error.DiException - null/dba/config/GR_REPO/webcache.ini (No such file or directory)
13:24:12,193 DEBUG [main] com.documentum.install.shared.common.error.DiException - null
13:24:12,194 DEBUG [main] com.documentum.install.shared.common.error.DiException - null/dba/config/REPO_01/webcache.ini (No such file or directory)
13:24:12,194 DEBUG [main] com.documentum.install.shared.common.error.DiException - null
13:24:12,199 DEBUG [main] com.documentum.install.shared.common.error.DiException - null/dba/config/GR_REPO/webcache.ini (No such file or directory)
13:24:12,199 DEBUG [main] com.documentum.install.shared.common.error.DiException - null
13:24:12,199 DEBUG [main] com.documentum.install.shared.common.error.DiException - null/dba/config/REPO_01/webcache.ini (No such file or directory)
13:24:12,199 DEBUG [main] com.documentum.install.shared.common.error.DiException - null
13:24:12,200 DEBUG [main] com.documentum.install.shared.common.error.DiException - null/dba/config/REPO_01/webcache.ini (No such file or directory)
13:24:12,200 DEBUG [main] com.documentum.install.shared.common.error.DiException - null
TYPE ERROR_TYPE 0000000000000000 0 0 0
[dmadmin@cs-0 webcache]$

The DEBUG logs above might make it look like the “$DOCUMENTUM” environment variable is missing, since it complains about “null/dba/xxx” not being found. However, that is not the issue. I checked all parameters and environment variables, and everything was configured correctly. In addition, Documentum had just been successfully upgraded from version 20.2 to 23.4 from start to finish, which confirmed that there was no problem with the OS or environment configuration. So I checked the second file:

[dmadmin@cs-0 webcache]$ cat install/config_log/OpenText_Documentum_Interactive_Delivery_Services_Configuration_Install_10_10_2025_11_57_42.log

__________________________________________________________________________

Fri Oct 10 11:57:50 UTC 2025

Free Memory: 15947 kB
Total Memory: 49152 kB
...
Summary
-------

Installation: Successful with errors.

8 Successes
0 Warnings
1 NonFatalErrors
0 FatalErrors
...

Custom Action:            com.documentum.install.webcache.CustomActions.DiWAWebcsConfigureDocbase
                          Status: ERROR
                          Additional Notes: ERROR -     class com.documentum.install.webcache.CustomActions.DiWAWebcsConfigureDocbase.install() runtime exception:

...
====================STDERR ENTRIES==================

RepositoryManager: Trying fallback repository location...
8. final log file name=$DM_HOME/webcache/install/config_log/OpenText_Documentum_Interactive_Delivery_Services_Configuration_Install_10_10_2025_11_57_42.log
java.lang.NumberFormatException: For input string: ""
        at java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:67)
        at java.base/java.lang.Integer.parseInt(Integer.java:678)
        at java.base/java.lang.Integer.parseInt(Integer.java:786)
        at com.documentum.install.webcache.CustomActions.DiWAWebcsConfigureDocbase.configureDocbase(DiWAWebcsConfigureDocbase.java:329)
        at com.documentum.install.webcache.CustomActions.DiWAWebcsConfigureDocbase.install(DiWAWebcsConfigureDocbase.java:202)
        at com.zerog.ia.installer.actions.CustomAction.installSelf(Unknown Source)
        at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
        at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
        at com.zerog.ia.installer.GhostDirectory.install(Unknown Source)
        at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
        at com.zerog.ia.installer.Installer.install(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.consoleInstallMain(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.executeApplication(Unknown Source)
        at com.zerog.ia.installer.Main.main(Unknown Source)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:569)
        at com.zerog.lax.LAX.launch(Unknown Source)
        at com.zerog.lax.LAX.main(Unknown Source)
Execute Custom Code
    class com.documentum.install.webcache.CustomActions.DiWAWebcsConfigureDocbase.install() runtime exception:
java.awt.HeadlessException:
No X11 DISPLAY variable was set,
but this program performed an operation which requires it.
        at java.desktop/java.awt.GraphicsEnvironment.checkHeadless(GraphicsEnvironment.java:164)
        at java.desktop/java.awt.Window.<init>(Window.java:553)
        at java.desktop/java.awt.Frame.<init>(Frame.java:428)
        at java.desktop/java.awt.Frame.<init>(Frame.java:393)
        at java.desktop/javax.swing.JFrame.<init>(JFrame.java:180)
        at com.documentum.install.webcache.CustomActions.DiWAWebcsConfigureDocbase.install(DiWAWebcsConfigureDocbase.java:215)
        at com.zerog.ia.installer.actions.CustomAction.installSelf(Unknown Source)
        at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
        at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
        at com.zerog.ia.installer.GhostDirectory.install(Unknown Source)
        at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
        at com.zerog.ia.installer.Installer.install(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.consoleInstallMain(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.executeApplication(Unknown Source)
        at com.zerog.ia.installer.Main.main(Unknown Source)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:569)
        at com.zerog.lax.LAX.launch(Unknown Source)
        at com.zerog.lax.LAX.main(Unknown Source)
Retrying Installables deferred in pass 0
Deferral retries done because:
There were no deferrals in the last pass.
8. final log file name=$DM_HOME/webcache/install/config_log/OpenText_Documentum_Interactive_Delivery_Services_Configuration_Install_10_10_2025_11_57_42.log
====================STDOUT ENTRIES==================
...
[dmadmin@cs-0 webcache]$

That log file appeared to indicate that a certain “Number” value might be missing (“NumberFormatException”). Without access to the IDS source code (and I always avoid decompiling Documentum source files), it was impossible to determine exactly what was missing. There were no additional details in the logs, so in the end I had to reach out to OpenText support to find out what was causing the issue.

4. Root cause: missing value for TOMCAT_PORT_SELECTED

After several back-and-forth exchanges and around 12 days of waiting for a solution, I finally received confirmation that this was a bug in the IDS Source 16.7.5 software. This version is the first one deployed on Tomcat instead of WildFly, so it was somewhat expected that some issues might appear.

When installing the IDS Source binaries, the silent installation properties file requires you to define the port that Tomcat will use. This parameter is “USER_PORT_CHOICE=6677“. You can of course change the port if needed, but 6677 was the default port used with previous IDS versions running on WildFly, so I kept the same value when installing IDS 16.7.5 on Tomcat.

The bug is that even though this value is used correctly during the Tomcat installation step, it is not properly written into the properties file that the configuration process later relies on. The IDS Source “config.bin” looks for the file “$DM_HOME/webcache/scs_tomcat.properties” and reads the port from the “TOMCAT_PORT_SELECTED” parameter.

However, in IDS 16.7.5 this file is not updated during installation. As a result, the port value remains empty, which corresponds to the missing number referenced in the logs and causes the configurator to crash.
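The failure mode is easy to reproduce outside of Documentum. The sketch below works on a throwaway temporary file, not the real scs_tomcat.properties, and shows the empty value the installer leaves behind and what the fix must produce:

```shell
# Simulate the scs_tomcat.properties file as left by the 16.7.5 installer:
# the parameter exists but its value was never written.
tmpfile=$(mktemp)
echo "TOMCAT_PORT_SELECTED=" > "$tmpfile"

# The configurator reads the value after '=' and parses it as an integer,
# so an empty value triggers the NumberFormatException seen in the logs.
port_value=$(grep '^TOMCAT_PORT_SELECTED=' "$tmpfile" | cut -d= -f2)
echo "port before fix: '${port_value}'"

# Apply the same sed fix as on the real file, with the chosen port.
port=6679
sed -i "s,\(TOMCAT_PORT_SELECTED=\).*,\1${port}," "$tmpfile"
grep '^TOMCAT_PORT_SELECTED=' "$tmpfile"

rm -f "$tmpfile"
```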

5. Fix: updating scs_tomcat.properties

The solution is fairly simple: manually update that file and run the configurator again. In my case, I used the HTTPS port 6679, since my Tomcat was already configured for SSL (6677 + 2 = 6679):

[dmadmin@cs-0 webcache]$ port=6679
[dmadmin@cs-0 webcache]$ sed -i "s,\(TOMCAT_PORT_SELECTED=\).*,\1${port}," $DM_HOME/webcache/scs_tomcat.properties
[dmadmin@cs-0 webcache]$
[dmadmin@cs-0 webcache]$
[dmadmin@cs-0 webcache]$ $DM_HOME/webcache/install/config.bin -DLOG_LEVEL=DEBUG -f ${install_file}
Preparing to install
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...

Launching installer...

Picked up JAVA_TOOL_OPTIONS: -Djdk.util.zip.disableZip64ExtraFieldValidation=true -Djava.locale.providers=COMPAT,SPI --add-exports=java.base/sun.security.provider=ALL-UNNAMED --add-exports=java.base/sun.security.pkcs=ALL-UNNAMED --add-exports=java.base/sun.security.x509=ALL-UNNAMED --add-exports=java.base/sun.security.util=ALL-UNNAMED --add-exports=java.base/sun.security.tools.keytool=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
[dmadmin@cs-0 webcache]$
[dmadmin@cs-0 webcache]$ echo $?
0
[dmadmin@cs-0 webcache]$

As you can see above, the return code is now “0“, which indicates a successful execution. The logs generated during this new attempt are much cleaner, and there are no longer any exceptions or errors:

[dmadmin@cs-0 webcache]$ cat install/config_log/OpenText_Documentum_Interactive_Delivery_Services_Configuration_Install_10_22_2025_13_37_46.log
__________________________________________________________________________

Wed Oct 22 01:39:40 UTC 2025

Free Memory: 14800 kB
Total Memory: 49152 kB
...
Summary
-------

Installation: Successful.

9 Successes
0 Warnings
0 NonFatalErrors
0 FatalErrors
...

Custom Action:            com.documentum.install.webcache.CustomActions.DiWAWebcsConfigureDocbase
                          Status: SUCCESSFUL

...
====================STDERR ENTRIES==================

RepositoryManager: Trying fallback repository location...
8. final log file name=$DM_HOME/webcache/install/config_log/OpenText_Documentum_Interactive_Delivery_Services_Configuration_Install_10_22_2025_13_37_46.log
Retrying Installables deferred in pass 0
Deferral retries done because:
There were no deferrals in the last pass.
8. final log file name=$DM_HOME/webcache/install/config_log/OpenText_Documentum_Interactive_Delivery_Services_Configuration_Install_10_22_2025_13_37_46.log
====================STDOUT ENTRIES==================
...
[dmadmin@cs-0 webcache]$

As mentioned earlier, this configurator is responsible for installing components inside the Repository. It creates the required IDS objects or updates them if they already exist. The DAR files were also successfully installed:

[dmadmin@cs-0 webcache]$ iapi $DOCBASE_NAME -Udmadmin -Pxxx << EOC
> ?,c,select r_object_id, r_modify_date, object_name from dmc_dar order by r_modify_date asc;
> EOC

        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2023. OpenText Corporation
        All rights reserved.
        Client Library Release 23.4.0000.0180

Connecting to Server using docbase REPO_01
[DM_SESSION_I_SESSION_START]info:  "Session 011234568027fb88 started for user dmadmin."

Connected to OpenText Documentum Server running Release 23.4.0000.0143  Linux64.Oracle
1> 2>
r_object_id       r_modify_date              object_name
----------------  -------------------------  ---------------------------------
...               ...                        ...
08123456800c99a5  10/22/2025 13:38:32        SCSDocApp
08123456800c99be  10/22/2025 13:38:58        SCSWorkflow
08123456800c99e1  10/22/2025 13:39:29        icmRating

(43 rows affected)
1>
[dmadmin@cs-0 webcache]$
6. Another small bug

However, I later discovered another small bug. The “scs_admin_config.product_version” attribute in the Repository was not updated correctly. The previously installed version was 16.7.4, so it is unclear whether the configurator re-wrote the old value (16.7.4) or simply did not touch it. In any case, the stored product version was incorrect.

This value is used by IDS to verify version compatibility during execution. For example, you can see this version referenced during the End-to-End tests. Therefore, I had to update the value manually. To correct the issue:

[dmadmin@cs-0 webcache]$ iapi $DOCBASE_NAME -Udmadmin -Pxxx << EOC
> ?,c,select product_version from scs_admin_config;
> ?,c,update scs_admin_config object set product_version='16.7.5' where product_version='16.7.4';
> ?,c,select product_version from scs_admin_config;
> exit
> EOC

        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2023. OpenText Corporation
        All rights reserved.
        Client Library Release 23.4.0000.0180

Connecting to Server using docbase REPO_01
[DM_SESSION_I_SESSION_START]info:  "Session 011234568027fd13 started for user dmadmin."

Connected to OpenText Documentum Server running Release 23.4.0000.0143  Linux64.Oracle
Session id is s0
API>
product_version
------------------------
16.7.4
(1 row affected)

API>
objects_updated
---------------
              1
(1 row affected)
[DM_QUERY_I_NUM_UPDATE]info:  "1 objects were affected by your UPDATE statement."

API>
product_version
------------------------
16.7.5
(1 row affected)

API> Bye
[dmadmin@cs-0 webcache]$

OpenText mentioned that both of these bugs should normally be fixed in a future update of the binaries. I have not checked in the last six months, but hopefully the issue has already been resolved. If not, at least you now have the information needed to fix it!

The article "Dctm – IDS Source 16.7.5 config.bin crash during execution" first appeared on the dbi Blog.

Commercial PostgreSQL distributions with TDE (3) Cybertec PostgreSQL EE (1) Setup

Yann Neuhaus - Tue, 2026-03-10 02:40

In the last posts in this series we’ve looked at Fujitsu’s distribution of PostgreSQL (here and here) and EnterpriseDB’s distribution of PostgreSQL (here and here), which both come with support for TDE (Transparent Data Encryption). A third player is Cybertec with its Cybertec PostgreSQL EE distribution, and this is the distribution we’re looking at in this and the next post.

Cybertec provides free access to their repositories, with the limitation of 1GB of data per table. As with Fujitsu, the supported Linux distributions are based on RHEL (8, 9 & 10) and SLES (15 & 16).

Installing Cybertec’s distribution of PostgreSQL is, the same as with Fujitsu and EnterpriseDB, just a matter of attaching the repository and installing the packages. Before doing that, I’ll disable the EnterpriseDB repositories to avoid running into any issues with them when installing another distribution of PostgreSQL:

[root@postgres-tde ~]$ dnf repolist
Updating Subscription Management repositories.
repo id                                         repo name
enterprisedb-enterprise                         enterprisedb-enterprise
enterprisedb-enterprise-noarch                  enterprisedb-enterprise-noarch
enterprisedb-enterprise-source                  enterprisedb-enterprise-source
rhel-9-for-x86_64-appstream-rpms                Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs)
rhel-9-for-x86_64-baseos-rpms                   Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs)
[root@postgres-tde ~]$ dnf config-manager --disable enterprisedb-*
Updating Subscription Management repositories.
[root@postgres-tde ~]$ dnf repolist
Updating Subscription Management repositories.
repo id                                                                                                   repo name
rhel-9-for-x86_64-appstream-rpms                                                                          Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs)
rhel-9-for-x86_64-baseos-rpms                                                                             Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs)
[root@postgres-tde ~]$

Attaching the Cybertec repository for version 18 of PostgreSQL:

[root@postgres-tde ~]$ version=18
[root@postgres-tde ~]$ sudo tee /etc/yum.repos.d/cybertec-pg$version.repo <<EOF
[cybertec_pg$version]
name=CYBERTEC PostgreSQL $version repository for RHEL/CentOS \$releasever - \$basearch
baseurl=https://repository.cybertec.at/public/$version/redhat/\$releasever/\$basearch
gpgkey=https://repository.cybertec.at/assets/cybertec-rpm.asc
enabled=1
[cybertec_common]
name=CYBERTEC common repository for RHEL/CentOS \$releasever - \$basearch
baseurl=https://repository.cybertec.at/public/common/redhat/\$releasever/\$basearch
gpgkey=https://repository.cybertec.at/assets/cybertec-rpm.asc
enabled=1
EOF
[cybertec_pg18]
name=CYBERTEC PostgreSQL 18 repository for RHEL/CentOS $releasever - $basearch
baseurl=https://repository.cybertec.at/public/18/redhat/$releasever/$basearch
gpgkey=https://repository.cybertec.at/assets/cybertec-rpm.asc
enabled=1
[cybertec_common]
name=CYBERTEC common repository for RHEL/CentOS $releasever - $basearch
baseurl=https://repository.cybertec.at/public/common/redhat/$releasever/$basearch
gpgkey=https://repository.cybertec.at/assets/cybertec-rpm.asc
enabled=1
[root@postgres-tde ~]$ dnf repolist
Updating Subscription Management repositories.
repo id                                                                                                 repo name
cybertec_common                                                                                         CYBERTEC common repository for RHEL/CentOS 9 - x86_64
cybertec_pg18                                                                                           CYBERTEC PostgreSQL 18 repository for RHEL/CentOS 9 - x86_64
rhel-9-for-x86_64-appstream-rpms                                                                        Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs)
rhel-9-for-x86_64-baseos-rpms                                                                           Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs)
[root@postgres-tde ~]$
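One detail worth noting in the heredoc above: `$version` is expanded immediately by the calling shell, while `\$releasever` and `\$basearch` are escaped so that the literal `$releasever`/`$basearch` land in the repo file for dnf to expand later (which is why the echoed file content shows `18` but still shows `$releasever`). A minimal, self-contained sketch of the same pattern, using a throwaway file and an illustrative URL:

```shell
# $version is expanded by the shell; \$releasever and \$basearch stay literal in the file
version=18
tee /tmp/demo.repo >/dev/null <<EOF
[demo_pg$version]
baseurl=https://example.com/$version/\$releasever/\$basearch
EOF
cat /tmp/demo.repo
```

Forgetting the backslashes would bake empty strings into `baseurl` (the shell has no `$releasever`), producing a broken repository definition.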

Let’s check what we have available:

[root@postgres-tde ~]$ dnf search postgresql18-ee
Updating Subscription Management repositories.
Last metadata expiration check: 0:00:10 ago on Mon 09 Mar 2026 09:33:05 AM CET.
================================================================================================== Name Exactly Matched: postgresql18-ee ===================================================================================================
postgresql18-ee.x86_64 : PostgreSQL client programs and libraries
================================================================================================= Name & Summary Matched: postgresql18-ee ==================================================================================================
postgresql18-ee-contrib-debuginfo.x86_64 : Debug information for package postgresql18-ee-contrib
postgresql18-ee-debuginfo.x86_64 : Debug information for package postgresql18-ee
postgresql18-ee-devel-debuginfo.x86_64 : Debug information for package postgresql18-ee-devel
postgresql18-ee-ecpg-devel-debuginfo.x86_64 : Debug information for package postgresql18-ee-ecpg-devel
postgresql18-ee-ecpg-libs-debuginfo.x86_64 : Debug information for package postgresql18-ee-ecpg-libs
postgresql18-ee-libs-debuginfo.x86_64 : Debug information for package postgresql18-ee-libs
postgresql18-ee-libs-oauth-debuginfo.x86_64 : Debug information for package postgresql18-ee-libs-oauth
postgresql18-ee-llvmjit-debuginfo.x86_64 : Debug information for package postgresql18-ee-llvmjit
postgresql18-ee-plperl-debuginfo.x86_64 : Debug information for package postgresql18-ee-plperl
postgresql18-ee-plpython3-debuginfo.x86_64 : Debug information for package postgresql18-ee-plpython3
postgresql18-ee-pltcl-debuginfo.x86_64 : Debug information for package postgresql18-ee-pltcl
postgresql18-ee-server-debuginfo.x86_64 : Debug information for package postgresql18-ee-server
====================================================================================================== Name Matched: postgresql18-ee =======================================================================================================
postgresql18-ee-contrib.x86_64 : Contributed source and binaries distributed with PostgreSQL
postgresql18-ee-devel.x86_64 : PostgreSQL development header files and libraries
postgresql18-ee-docs.x86_64 : Extra documentation for PostgreSQL
postgresql18-ee-ecpg-devel.x86_64 : Development files for ECPG (Embedded PostgreSQL for C)
postgresql18-ee-ecpg-libs.x86_64 : Run-time libraries for ECPG programs
postgresql18-ee-libs.x86_64 : The shared libraries required for any PostgreSQL clients
postgresql18-ee-libs-oauth.x86_64 : The shared libraries required for any PostgreSQL clients - OAuth flow
postgresql18-ee-llvmjit.x86_64 : Just-in-time compilation support for PostgreSQL
postgresql18-ee-plperl.x86_64 : The Perl procedural language for PostgreSQL
postgresql18-ee-plpython3.x86_64 : The Python3 procedural language for PostgreSQL
postgresql18-ee-pltcl.x86_64 : The Tcl procedural language for PostgreSQL
postgresql18-ee-server.x86_64 : The programs needed to create and run a PostgreSQL server
postgresql18-ee-test.x86_64 : The test suite distributed with PostgreSQL

These are the usual suspects, so let’s get them installed:

[root@postgres-tde ~]$ dnf install postgresql18-ee-server postgresql18-ee postgresql18-ee-contrib
Updating Subscription Management repositories.
Last metadata expiration check: 0:00:29 ago on Mon 09 Mar 2026 10:30:17 AM CET.
Dependencies resolved.
============================================================================================================================================================================================================================================
 Package                                                       Architecture                                 Version                                                               Repository                                           Size
============================================================================================================================================================================================================================================
Installing:
 postgresql18-ee                                               x86_64                                       18.3-EE~demo.rhel9.cybertec2                                          cybertec_pg18                                       2.0 M
 postgresql18-ee-contrib                                       x86_64                                       18.3-EE~demo.rhel9.cybertec2                                          cybertec_pg18                                       755 k
 postgresql18-ee-server                                        x86_64                                       18.3-EE~demo.rhel9.cybertec2                                          cybertec_pg18                                       7.2 M
Installing dependencies:
 postgresql18-ee-libs                                          x86_64                                       18.3-EE~demo.rhel9.cybertec2                                          cybertec_pg18                                       299 k

Transaction Summary
============================================================================================================================================================================================================================================
Install  4 Packages

Total download size: 10 M
Installed size: 46 M
Is this ok [y/N]: y
Downloading Packages:
(1/4): postgresql18-ee-libs-18.3-EE~demo.rhel9.cybertec2.x86_64.rpm                                                                                                                                         1.4 MB/s | 299 kB     00:00    
(2/4): postgresql18-ee-contrib-18.3-EE~demo.rhel9.cybertec2.x86_64.rpm                                                                                                                                      3.1 MB/s | 755 kB     00:00    
(3/4): postgresql18-ee-18.3-EE~demo.rhel9.cybertec2.x86_64.rpm                                                                                                                                              6.8 MB/s | 2.0 MB     00:00    
(4/4): postgresql18-ee-server-18.3-EE~demo.rhel9.cybertec2.x86_64.rpm                                                                                                                                        13 MB/s | 7.2 MB     00:00    
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                                        13 MB/s |  10 MB     00:00     
CYBERTEC PostgreSQL 18 repository for RHEL/CentOS 9 - x86_64                                                                                                                                                 42 kB/s | 3.1 kB     00:00    
Importing GPG key 0x2D1B5F59:
 Userid     : "Cybertec International (Software Signing Key) <build@cybertec.at>"
 Fingerprint: FCFF 012F 4B39 9019 1352 BB03 AA6F 3CC1 2D1B 5F59
 From       : https://repository.cybertec.at/assets/cybertec-rpm.asc
Is this ok [y/N]: y
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                                                    1/1 
  Installing       : postgresql18-ee-libs-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                           1/4 
  Running scriptlet: postgresql18-ee-libs-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                           1/4 
  Installing       : postgresql18-ee-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                                2/4 
  Running scriptlet: postgresql18-ee-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                                2/4 
  Running scriptlet: postgresql18-ee-server-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                         3/4 
  Installing       : postgresql18-ee-server-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                         3/4 
  Running scriptlet: postgresql18-ee-server-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                         3/4 
  Installing       : postgresql18-ee-contrib-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                        4/4 
  Running scriptlet: postgresql18-ee-contrib-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                        4/4 
  Verifying        : postgresql18-ee-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                                1/4 
  Verifying        : postgresql18-ee-contrib-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                        2/4 
  Verifying        : postgresql18-ee-libs-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                           3/4 
  Verifying        : postgresql18-ee-server-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                         4/4 
Installed products updated.

Installed:
  postgresql18-ee-18.3-EE~demo.rhel9.cybertec2.x86_64  postgresql18-ee-contrib-18.3-EE~demo.rhel9.cybertec2.x86_64  postgresql18-ee-libs-18.3-EE~demo.rhel9.cybertec2.x86_64  postgresql18-ee-server-18.3-EE~demo.rhel9.cybertec2.x86_64 

Complete!

… and that’s it. As with the other posts in this little series, we’ll have a look at how to start the instance and enable TDE in the next post.

The article "Commercial PostgreSQL distributions with TDE (3) Cybertec PostgreSQL EE (1) Setup" first appeared on the dbi Blog.

OGG-08048 after patching GoldenGate: explanations and solutions

Yann Neuhaus - Mon, 2026-03-09 02:57

When patching GoldenGate Classic Architecture, you might encounter an OGG-08048 error when restarting your extracts and replicats.

OGG-08048: Failed to initialize timezone information. Check location of ORACLE_HOME.

What exactly should you do, and how do you avoid this error in the future? In fact, this error is easy to reproduce, which also makes it easy to avoid. Usually, it happens when following the official documentation instructions for patching, which always include a modification of the ORACLE_HOME variable:

export ORACLE_HOME=GoldenGate_Installation_Path
Where does this come from?

This is a good practice to make sure the OPatch utility knows what to patch. However, it might cause issues when restarting. As a rule of thumb, the modified environment should only be used to patch and roll back your installation. You shouldn’t do any management tasks with it!

The most important thing to remember is that you should start the manager with the correct environment variables! If the manager is already started, you might still get the error when starting the extracts. This means that correcting your environment variables after restarting the manager might not solve the issue!

With this explanation, you now understand why rolling back the patch will not solve the OGG-08048. The rollback will work, but you will still not be able to restart the extracts!

What should you do if you have an OGG-08048 error?

If you have an OGG-08048 error when starting GoldenGate processes:

  • If your environment is patched, do not attempt to roll back. Just load your usual GoldenGate environment, restart the manager and attempt to restart the processes.
  • If you already rolled back the patch, you can apply it again. Then, follow the steps described above: load a standard GoldenGate environment, restart the manager and the GoldenGate processes.

And in the future, remember to always use your classic environment to manage your installation before and after applying a GoldenGate patch. To make it safer, I would suggest using separate sessions to avoid any confusion.
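To make this harder to get wrong, the startup wrapper of the management session could sanity-check ORACLE_HOME before touching the manager. The guard below is a hypothetical sketch, assuming the usual RDBMS home layout where the timezone files live under `$ORACLE_HOME/oracore/zoneinfo` — which is exactly what OGG-08048 fails to find when ORACLE_HOME still points at the GoldenGate home:

```shell
# Hypothetical pre-start guard: reject an ORACLE_HOME that lacks timezone data.
check_oracle_home() {
  if [ ! -d "$1/oracore/zoneinfo" ]; then
    echo "ORACLE_HOME '$1' has no oracore/zoneinfo - expect OGG-08048" >&2
    return 1
  fi
}

# Demonstration with throwaway directories instead of real homes:
mkdir -p /tmp/demo_rdbms_home/oracore/zoneinfo /tmp/demo_gg_home
check_oracle_home /tmp/demo_rdbms_home && echo "looks like an RDBMS home"
check_oracle_home /tmp/demo_gg_home    || echo "rejected: not an RDBMS home"
```

Calling such a guard at the top of any script that starts the manager turns a confusing runtime error into an explicit message before anything is started.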

And after patching your GoldenGate classic architecture setup, you should definitely consider upgrading to GoldenGate 26ai, using the migration utility.

The article "OGG-08048 after patching GoldenGate: explanations and solutions" first appeared on the dbi Blog.

DBA technologies.

Tom Kyte - Sun, 2026-03-08 00:32
Hello, I just wanted to ask you a question. Can you please recommend some good online Oracle books, tutorials or courses on DBA technologies like RMAN, GoldenGate, Exadata, Data Guard, Data Pump and RAC? Thanks.
Categories: DBA Blogs

Get hashed password for old user

Tom Kyte - Sun, 2026-03-08 00:32
I normally use "<code>SELECT DBMS_METADATA.GET_DDL('USER', 'User_Name') FROM DUAL;</code>" as the SYSTEM user to get the hashed password, which I use to re-create the user as it was. Unfortunately, this does not work for users that have password_versions = 10G (I am using Oracle 19, but I think this happens with 12 as well). In that case, the hashed password is stored in a column of the USER$ sys table that the SYSTEM user is not allowed to see. However, I observed that I can use export Data Pump to dump the user definition and then import Data Pump to get a SQL file containing the hashed password, even as SYSTEM (but it is a long way around). So how can the SYSTEM user get the hashed password? I would like a simpler way to get a script to re-create a user. Regards, Mauro
Categories: DBA Blogs

Dealing with 100000000+ rows of data.

Tom Kyte - Sun, 2026-03-08 00:32
Well, hello. I just wanted to ask: when you have 100,000,000 rows of data and the best partitioning strategy is partition by range with subpartition by hash (on a DATE column called hired, with the range on the year of hired and the hash on the month of hired), how do we know how many partitions to create in the table to get the best performance when we query the data? Thanks. If you don't understand my question, I can provide the code if you ask. Thanks.
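For illustration only, the scheme described in the question might be sketched in Oracle DDL like this. The table name, yearly range boundaries and the subpartition count are assumptions; note also that Oracle's hash subpartition key must be a column (not an expression like MONTH(hired)), and that hashing on a month would in any case spread rows over at most 12 distinct values:

```sql
-- Hypothetical sketch: range-partition by year of "hired", hash subpartitions for spread.
-- Hash subpartition counts are usually chosen as a power of two.
CREATE TABLE employees_hist (
  emp_id  NUMBER,
  hired   DATE,
  info    VARCHAR2(100)
)
PARTITION BY RANGE (hired)
SUBPARTITION BY HASH (emp_id) SUBPARTITIONS 4
(
  PARTITION p2024 VALUES LESS THAN (DATE '2025-01-01'),
  PARTITION p2025 VALUES LESS THAN (DATE '2026-01-01'),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);
```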
Categories: DBA Blogs
