DBA Blogs
Hello,
I have an Oracle Advanced Queuing queue and would like to be able to process this queue from inside of the database, as opposed to using an external app server. However, I am concerned about the scalability of internal solutions.
Please assume the following:
1. The queue receives an arbitrary number of messages.
2. Each message results in a PL/SQL procedure being called, which can take an arbitrary amount of time.
3. You want to limit the number of messages that can be processed at once to some value N.
---
Solution #1: Run N permanent DBMS_SCHEDULER jobs that loop and call DBMS_AQ.DEQUEUE with WAIT_FOREVER. This is good because you can easily cap how many jobs process this queue by adjusting N, the number of permanent jobs. This is bad because all of these permanent jobs reduce the available JOB_QUEUE_PROCESSES. It is fine if you only need a handful of jobs to process your queue, but as you scale the number of jobs up, you will eventually degrade the other, unrelated jobs that need to run in the system.
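For reference, each worker job's body might look something like this (queue name, payload type, and procedure name are assumptions, not from the original post):
<code>
-- Sketch of one permanent worker job; all names are hypothetical.
declare
  l_opts    dbms_aq.dequeue_options_t;
  l_props   dbms_aq.message_properties_t;
  l_msgid   raw(16);
  l_payload my_payload_t;
begin
  l_opts.wait       := dbms_aq.forever;
  l_opts.navigation := dbms_aq.first_message;
  loop
    dbms_aq.dequeue(
      queue_name         => 'my_queue',
      dequeue_options    => l_opts,
      message_properties => l_props,
      payload            => l_payload,
      msgid              => l_msgid);
    process_message(l_payload);  -- can take an arbitrary amount of time
    commit;                      -- removes the message from the queue
  end loop;
end;
</code>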
Does calling DBMS_LOCK.SLEEP or DBMS_AQ.DEQUEUE inside a DBMS_SCHEDULER job free up a JOB_QUEUE_PROCESSES slot while the job is sleeping? My guess is no.
---
Solution #2: Use a PL/SQL callback, and in the callback, create a one-time DBMS_SCHEDULER job per message, and use a common resource constraint, such that only N scheduler jobs can run at once. For example, if you set a cap of 128 jobs in your resource constraint, and you receive 1000 messages, the PL/SQL callback will create 1000 jobs, but only 128 jobs will be running at once, and the rest will be blocked.
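A sketch of the plumbing this would need (all names are illustrative, and the cap only applies while this plan is the active Resource Manager plan):
<code>
-- Cap concurrently active jobs at 128 via a consumer group,
-- then bind a scheduler job class to that group.
begin
  dbms_resource_manager.create_pending_area;
  dbms_resource_manager.create_consumer_group('AQ_WORKERS', 'AQ callback jobs');
  dbms_resource_manager.create_plan('AQ_PLAN', 'cap AQ callback jobs');
  dbms_resource_manager.create_plan_directive(
    plan                => 'AQ_PLAN',
    group_or_subplan    => 'AQ_WORKERS',
    comment             => 'at most 128 active sessions',
    active_sess_pool_p1 => 128);
  dbms_resource_manager.create_plan_directive(
    plan             => 'AQ_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'required catch-all directive');
  dbms_resource_manager.submit_pending_area;

  dbms_scheduler.create_job_class(
    job_class_name          => 'AQ_JOB_CLASS',
    resource_consumer_group => 'AQ_WORKERS');
end;
/
-- In the callback, one job per message:
begin
  dbms_scheduler.create_job(
    job_name   => dbms_scheduler.generate_job_name('AQ_MSG_'),
    job_type   => 'STORED_PROCEDURE',
    job_action => 'process_message',
    job_class  => 'AQ_JOB_CLASS',
    enabled    => true);
end;
/
</code>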
The downside here is that you have to create a whole dbms_scheduler job to process a message. This will increase the time between receiving a message and starting to process it, and just seems like an overall heavy solution. Lightweight jobs won't help because the resource constraints don't work for lightweight jobs.
In fact, you might as well not use AQ at all if you go down this route. Instead of writing messages to a queue, which later calls DBMS_SCHEDULER.CREATE_JOB, you could simply call DBMS_SCHEDULER.CREATE_JOB directly with a resource constraint.
---
Solution #3: Use an external app server. Run N threads, where each thread grabs a connection from a connection pool, loops, and calls DBMS_AQ.DEQUEUE with WAIT_FOREVER. This is the best approach because you can easily cap the number of connections processing messages in parallel by setting N, and you do not tie up any JOB_QUEUE_PROCESSES slots.
However, this has downsides. Your app server often has much more downtime than your database due to releases, network partitions, and various other issues. If your session executing a long-running PL/SQL procedure is terminated, you cannot assume whether the PL/SQL procedure on the server will complete or be stopped. While this is also true for DBMS_SCHEDULER jobs that end up getting kil...
We have PL/SQL stored procedures that perform poorly. Using DBA_HIST or any other AWR/ASH metrics, is it possible to determine the runtimes of PL/SQL procedures?
Recently we observed that cached sequence values were being lost significantly, appearing as large gaps in the persisted values.
Our system does not require gapless sequence values; however, we are trying to understand the root cause.
gv$rowcache shows high GETMISSES for histogram, object, and segment entries, followed by sequences.
An SGA dump shows many grow operations in the shared pool and shrink operations in the buffer cache, and gv$db_object_cache shows high loads.
We trended the sequence jumps (losses from cache) over time, based on the gaps found in the persisted values.
I will list the events that may have contributed to the gaps. Please let me know if this is an incorrect hypothesis.
1. The onset of moderately large sequence jumps aligns with the migration to a multitenant database.
2. Prior to the multitenant migration we never had histogram collection as part of stats. It appears a DBA ran stats with histogram collection turned on at the time of the migration, and sequence gaps have been silently occurring since.
3. A few months after the multitenant migration, a tablespace rebuild activity followed by stats collection was done. This time the standard stats collection script was run; it removed stats from many tables, but not from a few core tables that are used extensively by the application. So never-before-seen histogram traffic is still flowing into the dictionary cache.
4. After the multitenant migration, another effort started in which many tables were partitioned. There were two large one-time efforts that created several thousand partitions, followed by regularly scheduled jobs creating a few hundred partitions for historical data management.
5. The sequence jumps (and losses from cache) seem to have continued for many months, unnoticed, as the application is not affected by gaps.
6. Some of the regularly scheduled batch jobs were missed, so there was a long stretch where monthly partitions were not created as expected. On discovery, a one-time catch-up activity was performed in which around 800 partitions were created.
7. The sequence jump phenomenon exploded uncontrollably and was discovered by a partner system.
We pinned the sequence in memory to calm it down.
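For reference, the pinning presumably looked something like this ('Q' is the DBMS_SHARED_POOL flag for sequences; the owner and sequence names are placeholders):
<code>
-- Keep the sequence's dictionary entry from being aged out.
begin
  dbms_shared_pool.keep('APP_OWNER.MY_SEQ', 'Q');
end;
/
</code>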
Here is my draft hypothesis for a root cause; please correct it if it does not hold:
a) The multitenant migration increased traffic to the dictionary cache; sequence metadata is evicted and reloaded constantly, so values have jumps/gaps.
b) As many partitions are created, more traffic arrives at the dictionary cache and pressure increases; sequence jumps occur silently meanwhile.
c) Cumulative traffic to the dictionary cache increases every month as hundreds of partitions are added by the monthly job; pressure becomes intolerable and sequence evictions are on the rise.
d) The one-time gap-covering exercise for the missed partition creation pushed the traffic over a threshold; the sequence jumps skyrocketed and became visible in the application as a very large gap.
Do these dots connect as a root cause?
Hello Chris/Connor,
Hope you are doing well.
We are using ExaC@C with 19c databases at work and are exploring whether we can use PDB Snapshot Carousel and/or PDB Snapshot Copy feature.
It might be just me but I am somewhat confused with the "art of possible" while using ASM.
PDB Snapshot Copy Process (Doc ID 2730771.1) appears to suggest that we can use the sparse disk group feature on Exadata to create either a PDB snapshot copy or a PDB Snapshot Carousel.
However, ORA-65227 during pluggable database snapshot (Doc ID 3024542.1) appears to suggest that the feature is simply not supported in 19c and only available from 21c onwards.
https://www.dbarj.com.br/en/2021/09/creating-a-snapshot-sparse-clone-from-a-different-release-update/ appears to even provide an example of how PDB Snapshot Copy can be used to patch a 19c database.
So are we able to use PDB Snapshot Carousel in ExaC@C using only the ASM Sparse Disk Group feature (and not any file system)? I am confused...
Thanks in advance,
Narendra
Hello Tom,
I use Oracle Flashback features a lot. We have several development and test environments here, and the combination of flashback database + replay is priceless. My question is this: why don't we have a "flashback schema" feature? I know you can simulate that with PL/SQL, but that's just for tables. A schema is much more than that: PL/SQL code, grants, etc. If you consolidate databases into schemas inside a large machine, you lose the ability to flash them back; to maintain this ability you'll need virtualization (or pluggable databases :)). So, why was it never done? Is it impossible, and I fail to see the reason?
Thank you for your time.
Hi,
I have a database with thousands of tables containing the same kind of information.
I need to write a program to aggregate this information and thought about using a SQL macro.
<code>
-- This is a very simplified concept
create or replace
function get_val (p_table_name varchar2)
return varchar2 SQL_Macro
is
begin
return 'select col1,col2 from p_table_name';
end;
/
select col1, col2
from table_list t --Table_list contains the list of the tables to take
   , get_val(t.table_name);
</code>
And it always complains that the table doesn't exist.
The documentation talks about DBMS_TF.TABLE_T, which works if you pass the table as the parameter (and not the table's name).
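For comparison, the DBMS_TF.TABLE_T form the documentation describes looks roughly like this (a sketch, with a fixed table passed at parse time):
<code>
-- Works when the caller passes a table, not a table name.
create or replace function get_val (p_tab dbms_tf.table_t)
  return varchar2 sql_macro
is
begin
  return 'select col1, col2 from p_tab';
end;
/
select col1, col2 from get_val(some_table);
</code>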
How can I do that? Do I have to write a function returning the rows from the table?
Thank you
<code>SQL> select banner from v$version where rownum=1;
BANNER
--------------------------------------------------------------------------------
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
SQL> create table t1(id int generated BY DEFAULT ON NULL as identity);
Table created.
SQL> create table t2(id int generated BY DEFAULT as identity);
Table created.
SQL> create table t3(id int generated ALWAYS as identity);
Table created.
SQL> select table_name, generation_type
2 from user_tab_identity_cols utic
3 where utic.table_name in ('T1', 'T2', 'T3');
TABLE_NAME GENERATION
---------- ----------
T1         BY DEFAULT
T2         BY DEFAULT
T3         ALWAYS</code>
Why doesn't user_tab_identity_cols.generation_type show "BY DEFAULT ON NULL" for T1 ?
The behaviour differs from T2, so where can I see it (besides DBMS_METADATA)?
<code>
SQL> set long 5000 lines 300 pages 5000
SQL> select dbms_metadata.get_ddl('TABLE', table_name) from user_tables where table_name in ('T1', 'T2');
DBMS_METADATA.GET_DDL('TABLE',TABLE_NAME)
--------------------------------------------------------------------------------
CREATE TABLE "YYY"."T1"
   ( "ID" NUMBER(*,0) GENERATED BY DEFAULT ON NULL AS IDENTITY MINVALUE 1
     MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 1
     CACHE 20 NOORDER NOCYCLE NOKEEP NOSCALE NOT NULL ENABLE
   ) SEGMENT CREATION DEFERRED
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  TABLESPACE "XXX"

CREATE TABLE "YYY"."T2"
   ( "ID" NUMBER(*,0) GENERATED BY DEFAULT AS IDENTITY MINVALUE 1
     MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 1
     CACHE 20 NOORDER NOCYCLE NOKEEP NOSCALE NOT NULL ENABLE
   ) SEGMENT CREATION DEFERRED
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  TABLESPACE "XXX"
</code>
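One dictionary column worth checking (an assumption on my part, not something shown above) is DEFAULT_ON_NULL in the *_TAB_COLUMNS views, which may distinguish T1 from T2:
<code>
select table_name, column_name, default_on_null
from   user_tab_columns
where  table_name in ('T1', 'T2', 'T3');
</code>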
We use Business Objects against a database setup just for generating reports. This is an Exadata RAC with 2 nodes and ASM storage and all of the BO sessions login/connect to the same oracle user.
During our last month-end, which coincided with quarter-end, we saw many sessions with the "enq: SS - contention" wait event. We also intermittently saw "buffer busy waits" as sessions all waited for access to the shared temporary tablespace, as indicated by the P1 value.
Searching for answers on how to reduce these wait events led us to Local Temporary Tablespaces. So we setup a Local Temp Tablespace in our development environment...
<code>CREATE LOCAL TEMPORARY TABLESPACE FOR ALL temp_reporting_local
TEMPFILE '+DTADVQ1/.../TEMPFILE/temp_reporting_local.dbf'
SIZE 10G AUTOEXTEND OFF
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 2M;
</code>
Assigned it to the REPORT_USER as it's default Local Temp Tablespace...
<code>ALTER USER report_user LOCAL TEMPORARY TABLESPACE temp_reporting_local;
SELECT username, default_tablespace, temporary_tablespace, local_temp_tablespace
FROM DBA_USERS
WHERE username = 'REPORT_USER';
USERNAME DEFAULT_TABLESPACE TEMPORARY_TABLESPACE LOCAL_TEMP_TABLESPACE
REPORT_USER TBE_REPORT_USER_01 TEMP_REPORTING TEMP_REPORTING_LOCAL
</code>
Then ran some large queries while logged in as REPORT_USER. The query fails with same error message as before: "ORA-01652: unable to extend temp segment by 256 in tablespace TEMP_REPORTING".
Monitoring Free Space, the Local Temps do not appear to have been used at all.
<code>SELECT tablespace_name, inst_id,
tablespace_size/1024/1024 AS total_mb,
allocated_space/1024/1024 AS allocated_mb,
free_space/1024/1024 AS free_mb
FROM dba_temp_free_space
WHERE tablespace_name LIKE 'TEMP_REPORTING%';
TABLESPACE_NAME INST_ID TOTAL_MB ALLOCATED_MB FREE_MB
TEMP_REPORTING 10240 10240 0 (assumed to be zero at instant report died)
TEMP_REPORTING_LOCAL 1 10240 2 10238
TEMP_REPORTING_LOCAL 2 10240 2 10238
</code>
A hash join exceeded the 10GB of shared temp but did not use any of the Local temp.
So, how can we get these queries to use Local Temp once Shared Temp "overflows"? I'm thinking it is because it cannot split the hashed results between the two. Which makes me wonder how it will ever use local temp tablespaces.
Second question: why did they not set it up to use the Local Temp first and then overflow into the Shared Temp, if needed? Seems like a more logical approach if you want to mitigate these wait events.
I am trying to setup a database project that has three or so schemas that are named the same for most of the deployed environments but are different for one deployment. For example local development through production would have schema names schema1, schema2, ... but for one set of deployed environments the schemas have been renamed and are out of my control, e.g., dba_schema_db1, dba_schema_db2, ...
What I want to know is if there is a built in way using SQLcl projects to alias the schemas so that they can be configured per environment without too much manual intervention.
Hi. What is the technical reason why the package DBMS_DEBUG_JDWP is not available on the Oracle Autonomous Database? What does it do that makes calling it illegal in PL/SQL?
Thanks,
I have a table with partitions and I would like to find the most efficient way to empty a clob column for an entire partition.
I thought I could use DBMS_REDEFINITION with col_mapping and part_name but I am always getting ORA-42000.
Here are the statements I am using to reproduce the issue.
<code>create table tkvav_part_redefinition (
id number primary key,
num varchar2(10),
ts timestamp,
mynum number,
mylob clob
);
insert into tkvav_part_redefinition values (1, '42' , systimestamp + 59/23,12,'123');
insert into tkvav_part_redefinition values (2, '-9.876', systimestamp + 51/31,34,'234');
insert into tkvav_part_redefinition values (3, '1.2e3' , systimestamp + 61/17,25,'345');
insert into tkvav_part_redefinition values (4, '42' , systimestamp -10 + 59/23,68,'123');
insert into tkvav_part_redefinition values (5, '-9.876', systimestamp -10 + 51/31,69,'234');
insert into tkvav_part_redefinition values (6, '1.2e3' , systimestamp -10 + 61/17,70,'345');
insert into tkvav_part_redefinition values (7, '42' , systimestamp -20 + 59/23,75,'123');
insert into tkvav_part_redefinition values (8, '-9.876', systimestamp -20 + 51/31,76,'234');
insert into tkvav_part_redefinition values (9, '1.2e3' , systimestamp -20 + 61/17,77,'345');
commit;
select rowid,x.* from tkvav_part_redefinition x;
ALTER TABLE tkvav_part_redefinition MODIFY
partition by range (ts) interval (NUMTODSINTERVAL(7, 'DAY'))
( PARTITION P1 VALUES LESS THAN (to_date('20260202', 'yyyymmdd'))
) ONLINE;
select TABLE_NAME, PARTITION_NAME from user_tab_partitions where table_name = 'TKVAV_PART_REDEFINITION';
create table tkvav_part_redefinition_int4 FOR EXCHANGE WITH TABLE tkvav_part_redefinition;
begin
dbms_redefinition.start_redef_table(
uname => user,
orig_table => 'tkvav_part_redefinition',
int_table => 'tkvav_part_redefinition_int4',
col_mapping => q'[
id,
num,
ts,
cast(null as number) mynum,
empty_clob() mylob
]',
options_flag => dbms_redefinition.cons_use_pk,
orderby_cols => null,
part_name => 'SYS_P1438977',
continue_after_errors => false,
copy_vpd_opt => dbms_redefinition.cons_vpd_none,
refresh_dep_mviews => 'N',
enable_rollback => false
);
end;
/</code>
ORA-42000: invalid online redefinition column mapping for table "EP2_ST675"."TKVAV_PART_REDEFINITION"
ORA-06512: at "SYS.DBMS_REDEFINITION", line 116
ORA-06512: at "SYS.DBMS_REDEFINITION", line 4441
ORA-06512: at "SYS.DBMS_REDEFINITION", line 5835
ORA-06512: at line 2
It works without any issues when I set part_name => null.
what am I doing wrong when using part_name ?
Hi Tom!
We have a large load every day from one table to another and I wanted to split it up into chunks and run parallel jobs to speed it up.
I found that doing the split on rowid doesn't work on a partitioned table. First I did it at the partition level, since this is how we load (partition by partition via dynamic SQL), but I should have realized that wouldn't work.
But I was really surprised that the split doesn't work even if I do it on the whole table without partitioning. I have tested it on a few tables we have and it works, as long as it's not a partitioned table.
I am quite confused with this, is there an explanation?
So this sql:
<code>SELECT MIN(r) start_id
,MAX(r) end_id
FROM (SELECT ntile(20) over (ORDER BY rownum) grp
,ROWID r
FROM my_big_table)
GROUP BY grp ;</code>
Gets it into 20 chunks but when testing the first chunk via this sql:
<code>SELECT COUNT(9)
FROM my_big_source_table
WHERE ROWID BETWEEN 'AACGW1AELAAN72EAAB' AND 'AACGW1AH7AADmn/AAY';</code>
it simply fetches the whole table. If I do the same on any other table that isn't partitioned it works fine as with the example in FreeSQL.
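One hedged aside: the documented way to get rowid chunks that respect segment boundaries (and a partitioned table has many segments) is DBMS_PARALLEL_EXECUTE; a sketch with assumed names:
<code>
begin
  dbms_parallel_execute.create_task('COPY_TASK');
  dbms_parallel_execute.create_chunks_by_rowid(
    task_name   => 'COPY_TASK',
    table_owner => user,
    table_name  => 'MY_BIG_TABLE',
    by_row      => true,
    chunk_size  => 100000);
end;
/
-- Each chunk is a (start_rowid, end_rowid) pair that stays
-- within a single segment.
select chunk_id, start_rowid, end_rowid
from   user_parallel_execute_chunks
where  task_name = 'COPY_TASK';
</code>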
DBAs spend a lot of time reviewing reports about the health of their databases. I’ve used an LLM to speed up that process.
I took a daily report about our Oracle databases and used an LLM to generate a short summary that lets a DBA immediately see which databases need attention.
A typical report looks like this for each database:
The full report has over 2,000 lines that must be manually scanned by the on-call DBA each day.
The LLM-generated summary looks like this:
This summary immediately shows which databases need attention. We still manually scan the entire report but having the summary in the body of the email (with the full report attached) lets us see at a quick glance what needs attention and how urgent it is. The summary does not replace the full report; it only highlights the items that are most likely to be important. In our environment we chose 89% full as the point where we start reporting on space issues.
I’m using AWS Bedrock with the Claude Sonnet 4.6 model. Here is the Python function that sends the combined prompt and report to Bedrock and returns the summary:
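A minimal sketch of such a function, using boto3's bedrock-runtime `converse` API (the function name, parameter names, and the injectable-client seam are illustrative, not the original code):

```python
def summarize_report(report_text, prompt, model_id, client=None):
    """Send the prompt plus the report to Bedrock; return the model's reply.

    `client` is injectable for testing; when omitted, a real
    bedrock-runtime client is created (boto3 imported lazily).
    """
    if client is None:
        import boto3  # only needed when talking to the real service
        client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId=model_id,
        messages=[{
            "role": "user",
            "content": [{"text": prompt + "\n\n" + report_text}],
        }],
    )
    # converse() returns the assistant message under output.message.content
    return response["output"]["message"]["content"][0]["text"]
```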
Here is the prompt that precedes the report:
This simple use of an LLM has saved me time by putting a quick summary in the email body while preserving the full report for detailed review.
Bobby
How to troubleshoot this error:
[oracle@localhost ~]$ sudo setenforce 0
[sudo] password for oracle:
[oracle@localhost ~]$ export ORACLE_BASE=/u01/app/oracle
[oracle@localhost ~]$ export ORACLE_HOME=$ORACLE_BASE/product/19.0.0/dbhome_1
[oracle@localhost ~]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@localhost ~]$ export ORACLE_SID=FREE
[oracle@localhost ~]$ export CFGTOOLLOGS=/tmp/dbca_logs
[oracle@localhost ~]$ mkdir -p /tmp/dbca_logs
[oracle@localhost ~]$ chmod 777 /tmp/dbca_logs
[oracle@localhost ~]$ mkdir -p /u01/app/oracle/oradata
[oracle@localhost ~]$ chmod -R 775 /u01/app/oracle
[oracle@localhost ~]$ chown -R oracle:oinstall /u01/app/oracle
[oracle@localhost ~]$ unset DISPLAY
[oracle@localhost ~]$ $ORACLE_HOME/bin/dbca -silent -createDatabase \
> -templateName General_Purpose.dbc \
> -gdbname FREE \
> -sid FREE \
> -createAsContainerDatabase true \
> -numberOfPDBs 1 \
> -pdbName FREEPDB \
> -sysPassword Oracle \
> -systemPassword Oracle \
> -pdbAdminPassword Oracle \
> -databaseType MULTIPURPOSE \
> -memoryMgmtType auto_sga \
> -datafileDestination /u01/app/oracle/oradata \
> -emConfiguration NONE
[FATAL] [INS-00001] Unknown irrecoverable error
CAUSE: No additional information available.
ACTION: Refer to the logs or contact Oracle Support Services
SUMMARY:
- [DBT-00006] The logging directory could not be created.
- [DBT-00006] The logging directory could not be created.
[oracle@localhost ~]$
When I try to read data from a CSV file using an Oracle external table, I get the following error:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
What are the possible causes of this error, and how can I fix it so that the external table can read the CSV file successfully?
I need to propagate changes on a few tables to an external system. I have to do it near realtime and ultimately it will end up on Kafka from my end. The solution target is Oracle 19c and needs to be installed to multiple clients on premises.
Example problem:
Let's say I have a table of messages. I have a column called change_id, which is populated by a trigger from a sequence on insert and update. Theoretically I always know which row was changed last. I might avoid needing to track deletes, not sure yet.
Now I create a procedure to be run as a batch that collects the rows changed since the last run. First it gets the max(change_id), then selects all rows between last_run_max_change_id and new_max_change_id. It exports the data as JSON to another table, writes new_max_change_id as last_run_max_change_id, and commits. Another process will handle delivery of the JSON to where it needs to be.
The problem is that another long-running transaction might have consumed sequence values lower than the max but not yet committed when the batch ran, so those change_ids will never be exported. Another problem is that I don't have deleted rows.
Solution 1: GoldenGate replication, OpenLogReplicator, or something similar. I would have to convince all clients to commit to paying the GG licence, create tables as replication targets, export from those tables, and delete from them. The licence and getting all clients on board are difficult, because I need one solution for all of them. Also, security, stability, and maintenance concerns will likely make clients want to reject such ideas, and all of them have to be on board. I have also tried Oracle Streams before on another project and had stability issues and ORA-600 errors that were never resolved.
Solution 2: use the SCN instead of change_id. SCNs (ORA_ROWSCN) are not indexed, and selecting by SCN in a where clause on billions of rows is too slow.
Solution 3: Flashback. Have something like SELECT * FROM messages VERSIONS BETWEEN SCN :last_scn AND :current_scn; My concern is if the program doesn't run for a while for whatever problem and reason, the flashback will be lost. I would need a backup solution.
Solution 4: a trigger on the messages table that writes to an export table. It handles insert, update, and delete, and nothing will be skipped. I select 10k ids ordered, export to JSON, delete the 10k records, and commit. I'm worried about the additional load big transactions incur writing to the export table, and the trigger overhead for each row. Additional context switching. Index contention on the export table, with rows inserted by inserts and updates on the original table at the same time rows are deleted by the export batch process. Exports will have to be ordered by change_id.
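A sketch of solution 4's trigger (table, column, and sequence names are assumptions):
<code>
-- Export log fed by a row-level trigger; change_seq is hypothetical.
create table msg_export_log (
  change_id number primary key,
  msg_id    number,
  op        char(1)   -- I / U / D
);

create or replace trigger trg_messages_export
after insert or update or delete on messages
for each row
begin
  insert into msg_export_log (change_id, msg_id, op)
  values (change_seq.nextval,
          coalesce(:new.id, :old.id),   -- :new is null on delete
          case when inserting then 'I'
               when updating  then 'U'
               else 'D' end);
end;
/
</code>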
Conclusion: The only stable and data-consistent solution I can think of is solution 4, a trigger. But I'm worried about overhead.
Instead of a trigger, I could check the code and add the extra inserts into the export log table programmatically, so the total number of writes might be lower, but not by much.
A...
Hello,
I have a question regarding concurrent statistics gathering in Oracle 19c. Currently, my global preference is set to OFF:
<code>SELECT DBMS_STATS.GET_PREFS('CONCURRENT') FROM DUAL;
-- Result: OFF</code>
I can enable it manually with:
<code>BEGIN
  DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','ALL');
END;
/</code>
I understand that enabling CONCURRENT allows Oracle to gather statistics on multiple tables and (sub)partitions at the same time, potentially reducing the total duration. Oracle uses the Job Scheduler and Advanced Queuing to manage these concurrent jobs.
My question:
Is there a relationship between the maintenance windows (like WEEKEND_WINDOW or WEEKNIGHT_WINDOW) and concurrent statistics gathering? Specifically:
When CONCURRENT is enabled, does Oracle automatically schedule these parallel stats jobs within the maintenance windows?
Or is the maintenance window unrelated, and the concurrent gathering runs independently of it?
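For reference, the windows in question (and what the auto stats task is bound to) can be inspected with something like:
<code>
select window_name, repeat_interval, duration
from   dba_scheduler_windows;

select client_name, window_group
from   dba_autotask_client
where  client_name = 'auto optimizer stats collection';
</code>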
Thank you for your guidance.
All,
I have an Oracle view built on top of a partitioned table. Occasionally the view becomes INVALID, even though there are no DDL changes happening on the base table. The view becomes VALID again automatically when it is accessed or queried, but this behavior is causing issues in production.
I'm trying to understand what could be causing the intermittent invalidation.
<code>
create view my_view as
SELECT dly_fct_id, acct_ref_id, bus_dt
FROM my_table
WHERE bus_dt = TO_DATE('01/01/2500','MM/DD/YYYY');
</code>
Note: The date above is only a placeholder. In reality this predicate changes dynamically based on the ETL run date.
Base table structure:
<code>
CREATE TABLE my_table
(
dly_fct_id NUMBER,
acct_ref_id NUMBER,
bus_dt DATE
)
PARTITION BY RANGE (bus_dt)
INTERVAL (NUMTODSINTERVAL(1,'DAY'))
(
PARTITION p0 VALUES LESS THAN (TO_DATE('01-JAN-1900','DD-MON-YYYY'))
)
COMPRESS;
</code>
Constraint:
<code>
ALTER TABLE my_table
ADD CONSTRAINT xpk_my_table
PRIMARY KEY (dly_fct_id, acct_ref_id)
RELY;
</code>
The table actually contains ~50 columns, but I only included the relevant ones here.
Additional details:
The view does not become invalid daily, but it happens intermittently.
There are no known DDL operations on the table except regular ETL data loads.
The view becomes VALID again automatically when it is queried.
This is happening in a production environment, so we want to understand the root cause.
Questions:
What could cause a view to become INVALID intermittently without explicit DDL on the base table?
Could this be related to partition maintenance, statistics gathering, or constraint changes?
What system views or logs should we check to identify the root cause?
Any guidance on where to investigate would be greatly appreciated.
Can an instance exist without a database, and can a database exist without an instance?
Hello, I just wanted to ask you a question. Can you please recommend some good online Oracle books, tutorials, or courses for DBA technologies like RMAN, GoldenGate, Exadata, Data Guard, Data Pump, and RAC? Thanks.