Tom Kyte
Question about Merge and UDT
I'm using a UDT that contains a collection I can query as a table, and I want to use that UDT in a MERGE statement. I put the UDT in the USING clause, as in "... USING (SELECT * FROM TABLE(UDT)) ...". How can I then use the UDT in the INSERT part, putting only a single variable in the VALUES clause? For example:
MERGE INTO TABLE1 T1
USING (SELECT * FROM TABLE(UDT)) T2
ON (T1.X = T2.Y)
WHEN NOT MATCHED THEN
INSERT VALUES UDT; -- <-------- here I want to use only the UDT and nothing else; how can I write something like this?
Table1 (T1) and the UDT (T2) have the same structure.
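For reference, the explicit form I know works spells out every column from the USING alias, which is what I would like to avoid (a sketch; my_udt_var and the column names x, y, c1, c2 are placeholders):
<code>
MERGE INTO table1 t1
USING (SELECT * FROM TABLE(my_udt_var)) t2
ON (t1.x = t2.y)
WHEN NOT MATCHED THEN
  INSERT (x, c1, c2)              -- every target column listed explicitly
  VALUES (t2.y, t2.c1, t2.c2);    -- every source column listed explicitly
</code>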
Categories: DBA Blogs
Fetch limited rows for update
Hi Tom,
first of all, thanks for all your support. I wish you, your team and your families a blessed Christmas time.
I'm looking for the best practice for a cursor that fetches just one (or a few) row(s) in my own sort order and locks them -- without locking the whole result set or all rows that match the condition.
<code>
select *
from MyTable
where < MyCondition >
order By < MyOrderby >
fetch first 1 row only
for update skip locked;
</code>
Unfortunately the combination of fetch-first-only and FOR UPDATE causes an ORA-02014 :(
- without fetch-first-only it locks the whole result set
- a ROWNUM condition instead of fetch-first-only does not respect my ORDER BY
- an inner subselect with fetch-first-only does not skip the locked rows, and ends up fetching zero rows if the inner SQL only picks rows that are already locked
<code>
select *
from MyTable
where Rowid in (select Rowid
from MyTable
where < MyCondition >
order By < MyOrderby >
fetch first 1 row only)
for update skip locked;
</code>
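For completeness, the loop-based variant I am also considering scans the candidates in my sort order without locking, then tries to lock each candidate individually with SKIP LOCKED and stops at the first one it actually gets (a sketch; the ID, STATUS and PRIORITY columns are placeholders for my real condition and ordering):
<code>
DECLARE
  v_locked_id MyTable.id%TYPE;
BEGIN
  FOR cand IN (SELECT t.rowid AS rid
               FROM   MyTable t
               WHERE  t.status = 'NEW'       -- placeholder for < MyCondition >
               ORDER  BY t.priority, t.id)   -- placeholder for < MyOrderby >
  LOOP
    BEGIN
      SELECT id
        INTO v_locked_id
        FROM MyTable
       WHERE rowid = cand.rid
         AND status = 'NEW'                  -- re-check the condition under the lock
         FOR UPDATE SKIP LOCKED;
      EXIT;                                  -- locked exactly one row, done
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        NULL;                                -- row locked (or changed) elsewhere, try the next one
    END;
  END LOOP;
END;
/
</code>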
What is your best practice or advice?
Thanks, Norman
Categories: DBA Blogs
Recovery with SQL*PLUS in embedded korn shell script
We have a production DB used for reporting that we are attempting to clone with NetApp SnapMirror technology. The current clone is done the old way: placing the tablespaces of the PROD DB in backup mode, copying the files with the OS "cp" tool to the designated clone DB directories, and then running the necessary DB-layer clone steps. This works fine, but it now takes an extra long time to complete and impacts business functions.
With NetApp SnapMirror technology all steps work up to the DB-layer clone steps. There is an issue with the very last step of recovering the DB after the SnapMirror copy has been performed. The step used is "alter database recover automatic using backup controlfile until cancel", as that seems to be the only option available according to Oracle Doc ID: DATAFILE NEEDS MORE RECOVERY TO BE CONSISTENT ORA-1194 ORA-1547 ORA-1110 (Doc ID 1528788.1) - Scenario 2: Backup Controlfile is Used for recovery.
Doing this recovery interactively via SQL*Plus requires supplying the redo log file path when prompted; the DB then recovers successfully. See below for the output.
SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL ;
ORA-00279: change 75786588505 generated at 12/06/2024 13:41:35 needed for
thread 1
ORA-00289: suggestion : /arch_poct/archive/OFPOC/arch_1186748276_1_18.arc
ORA-00280: change 75786588505 for thread 1 is in sequence #18
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
'/redo_poct/oradata/OFPOC/redo06.log'
Log applied.
Media recovery complete.
SQL> alter database open resetlogs ;
Database altered.
But this clone process has to be done on a daily basis and must be automated. When scripted, there seems to be no way to have SQL*Plus read '/redo_poct/oradata/OFPOC/redo06.log' from a file on its standard input.
Below is the script
#! /usr/bin/ksh
sqlplus "/ as sysdba" < /projects/of/poc/spt/log/setup_sql_redo_20241217.log <<EOF
set echo on
alter database recover automatic using backup controlfile until cancel;
EOF
Where
cat /projects/of/poc/spt/log/setup_sql_redo_20241217.log
'/redo_poct/oradata/OFPOC/redo06.log'
Output
/projects/gen/ksh/working> ./TestRecovery.sh
SQL*Plus: Release 11.2.0.2.0 Production on Tue Dec 17 17:29:00 2024
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> SQL> alter database recover automatic using backup controlfile until cancel
*
ERROR at line 1:
ORA-00279: change 75787505640 generated at 12/16/2024 11:21:36 needed for
thread 1
ORA-00289: suggestion : /arch_poct/archive/OFPOC/arch_1186748276_1_157.arc
ORA-00280: change 75787505640 for thread 1 is in sequence #157
ORA-00278: log file '/arch_poct/archive/OFPOC/arch_1186748276_1_157.arc' no
longer needed for this recovery
ORA-00308: cannot open archived log
'/arch_poct/archive/OFPOC/arch_1186748276_1_15...
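For reference, the non-interactive variant I have been looking at names the online redo log explicitly via the ALTER DATABASE recover_clause instead of answering the prompt. This is only a sketch I have not yet been able to verify in this environment; the log path is the same one used interactively above:
<code>
alter database recover automatic using backup controlfile until cancel;
-- if that stops with ORA-00308 because the needed change is in an online redo log,
-- continue the same recovery session by naming that log explicitly:
alter database recover logfile '/redo_poct/oradata/OFPOC/redo06.log';
alter database open resetlogs;
</code>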
Categories: DBA Blogs
Oracle_Loader External Table using Fixed Width
I use external tables every day to load CSV files, but I'm having great difficulty loading my first FIXED WIDTH file. I've tried several different access-parameter variations; the CREATE TABLE command compiles without error, but when I select from the table I get the following error. Across my iterations I may get a different "KUP-01005: syntax error", but I can't seem to get past this:
Error Message:
<code>select * from rzedelq;</code>
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "badfile": expecting one of: "all, column, convert_error, csv, date_format, enclosed, ignore_chars_after_eor, (, ltrim, lrtrim, ldrtrim, missing, notrim, nullif, number_format, numeric_characters, optionally, rtrim, remove_quotes, reject, terminated, truncate_columns"
KUP-01007: at line 2 column 13
29913. 00000 - "error in executing %s callout"
*Cause: The execution of the specified callout caused an error.
*Action: Examine the error messages take appropriate action.
--
<code>
CREATE TABLE rzedelq
(
rzedelq_type VARCHAR2(1),
rzedelq_dob DATE,
rzedelq_last_name VARCHAR2(30),
rzedelq_first_name VARCHAR2(30),
rzedelq_balance NUMBER(6)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY ext_data
ACCESS PARAMETERS
(
RECORDS FIXED 76 FIELDS
BADFILE 'rzedelq.bad'
LOGFILE 'rzedelq.log'
MISSING FIELD VALUES ARE NULL
(
rzedelq_record_type (01:01) CHAR(1),
rzedelq_dob (02:09) CHAR(8) DATE 'yyyymmdd' NULLIF rzedelq_dob='00000000',
rzedelq_last_name (10:39) CHAR(30),
rzedelq_first_name (40:69) CHAR(30),
rzedelq_balance (70:75) NUMBER(6)
)
)
LOCATION ('rzedelq.dat')
)
REJECT LIMIT UNLIMITED
;
</code>
Sample data file:
<code>
--'rzedelq.dat'--
119811218ANDERSEN AIMEE 366910
219121006COWGER AMANDA 030900
119030707GESLIN ANDREA 150910
319041125HATFIELD CARRIE 055900
119150913MERRELL CONNIE 018920
419761024MESSINGER JASON 010960
119170708PRIMROSE JOHN 030920
519980721REEVES KAILYNN 018930
119690511SAFARIK ROBERT 021980
</code>
--
Any Ideas? Any help?
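For what it's worth, the variant I am trying next moves the FIELDS keyword (and the field list) after the BADFILE/LOGFILE clauses and makes the field names match the column names. This is only a sketch of the ACCESS PARAMETERS section based on my reading of the ORACLE_LOADER syntax, not yet confirmed to work:
<code>
ACCESS PARAMETERS
(
  RECORDS FIXED 76
  BADFILE 'rzedelq.bad'
  LOGFILE 'rzedelq.log'
  FIELDS MISSING FIELD VALUES ARE NULL
  (
    rzedelq_type       (01:01) CHAR(1),
    rzedelq_dob        (02:09) CHAR(8) DATE_FORMAT DATE MASK "yyyymmdd" NULLIF rzedelq_dob = '00000000',
    rzedelq_last_name  (10:39) CHAR(30),
    rzedelq_first_name (40:69) CHAR(30),
    rzedelq_balance    (70:75) CHAR(6)
  )
)
</code>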
Categories: DBA Blogs
Check if a password parameter is passed to a SQL script and process based on that
I am trying to create a SQL script that creates a user: if a password is supplied as a parameter, use it; if no parameter is passed, have the script prompt for the password.
This is what I tried, but I am obviously a bit off, since it asks for the value of &1 immediately if it is not passed.
If I do pass it a password, it gives errors, then prompts for the password and creates the user using that password.
Create_TEST_User.sql
<code>define Password = '&1'
SET VERIFY OFF;
if &Password is NULL
THEN
ACCEPT Password PROMPT 'Enter value for Password: ' HIDE
end if;
CREATE USER "TEST" IDENTIFIED BY "&Password"
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP";
GRANT "CONNECT" TO "TEST"; </code>
@Create_TEST_User.sql testpassword-1234
Error starting at line : 3 File @ /opt/oracle/sql/Create_TEST_User.sql
In command -
if &Password is NULL
Error report -
Unknown Command
Error starting at line : 4 File @ /opt/oracle/sql/Create_TEST_User.sql
In command -
THEN
Error report -
Unknown Command
Enter value for Password: ********
Error starting at line : 6 File @ /opt/oracle/sql/Create_TEST_User.sql
In command -
end if
Error report -
Unknown Command
User "TEST" created.
Categories: DBA Blogs
Datetime.MaxValue timestamp ORA-01841
Hello,
I've been struggling with "strange" behavior when handling "extreme" timestamp values.
Our .NET applications execute stored procedures with a timestamp parameter, and in some cases they pass "31/12/9999 23:59:59.9999999", which is the value of DateTime.MaxValue.
After upgrading to 19.25 we get "ORA-01841: (full) year must be between -4713 and +9999, and not be 0" for this value.
Let's look at my examples.
The following statement returns the same value on Oracle 19.25 and on pre-19.25 versions:
<code>select to_timestamp('12/31/9999 23:59:59.9999999','MM/DD/YYYY HH24:MI:SS.FF') from dual;</code>
31-DEC-99 11.59.59.999999900 PM
Let's create a simple procedure
<code>
CREATE OR REPLACE PROCEDURE testmh( p_data in timestamp )
IS
vdata timestamp(9);
BEGIN
dbms_output.put_line('a');
END;
</code>
and execute it like this:
<code>execute testmh(to_timestamp('12/31/9999 23:59:59.9999999','MM/DD/YYYY HH24:MI:SS.FF'));</code>
on 19.25 I get
ORA-01841: (full) year must be between -4713 and +9999, and not be 0
on < 19.25
PL/SQL procedure successfully completed.
Another test - create the function
<code>
CREATE OR REPLACE FUNCTION testmhf (
p_data TIMESTAMP )
RETURN TIMESTAMP
IS
BEGIN
RETURN p_data;
END testmhf;
</code>
and execute
<code>select testmhf(to_timestamp('12/31/9999 23:59:59.9999999','MM/DD/YYYY HH24:MI:SS.FF')) from dual;</code>
for <19.25 result is
01-JAN-00 12.00.00.000000000 AM
for 19.25 result is
ORA-01841: (full) year must be between -4713 and +9999, and not be 0
A workaround for this problem is to create a subtype and use it as the type of the procedure parameter:
<code>
CREATE OR REPLACE PACKAGE XY
AS
SUBTYPE timestamp_t IS TIMESTAMP(7);
END;
/
CREATE OR REPLACE PROCEDURE testmh_t( p_data in XY.timestamp_t )
IS
BEGIN
insert into zone(zone) values(p_data);
END;
/
execute testmh_t(to_timestamp('12/31/9999 23:59:59.9999999','MM/DD/YYYY HH24:MI:SS.FF'));
</code>
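To check whether the fractional-second precision reduction itself is what pushes the value past year 9999, here is a quick experiment I have been using (the result may differ between 19.25 and earlier versions):
<code>
select cast(
         to_timestamp('12/31/9999 23:59:59.9999999', 'MM/DD/YYYY HH24:MI:SS.FF')
         as timestamp(6)
       ) as rounded_to_ff6
from dual;
</code>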
Do you know any other workaround, or why passing this "extreme" value causes it to be rounded up?
M.
Categories: DBA Blogs
Defragmentation in Oracle RAC Dataguard Environment
Hi Tom
Greetings
We have a highly critical application (requiring zero downtime) running in a 4-node RAC environment, with a standby of the same HA configuration. We are planning to perform index defragmentation with minimal downtime.
We are planning the following steps
1) Disable redo apply and open the standby database
2) Perform the index defragmentation on the standby database
3) After defragmentation, put the physical standby back in MOUNT state
Now the standby is defragmented.
1) Switch the application over to the standby (switch the standby DB to primary and vice versa)
2) Perform the defragmentation on the old primary (now the standby)
3) Switch the application back to the old primary
My question is whether the above is possible, or am I missing any steps here?
Or, when the standby becomes primary and the primary becomes standby, will the defragmentation carry over at all, given that this is block-for-block replication?
Please advise
Categories: DBA Blogs
2 databases got different IO stats on the same disk
I noticed that two databases running on the same disk get different performance for the same query with an identical plan. I ran AWR and saw a big difference in the IO Stats section; e.g. the average wait for Buffer Cache Reads is at the microsecond level on one database but at the millisecond level on the other. What could cause this difference if both are on the same disk (/u01 in my case)? Thanks.
19c on Linux
filesystemio_options = none
=>One active database
<code>
IOStat by Function summary
'Data' columns suffixed with M,G,T,P are in multiples of 1024 other columns suffixed with K,M,G,T,P are in multiples of 1000
ordered by (Data Read + Write) desc
Function Name Reads: Data Reqs per sec Data per sec Writes: Data Reqs per sec Data per sec Waits: Count Avg Time
Direct Reads 2.9T 845.98 832.589M 8M 0.06 .002M 3.1M 65.22ns
Buffer Cache Reads 26.5G 756.04 7.484M 0M 0.00 0M 2.2M 165.58us
Others 519M 7.27 .143M 263M 1.57 .073M 29.4K 75.63us
</code>
=>Another much less active database
<code>IOStat by Function summary
'Data' columns suffixed with M,G,T,P are in multiples of 1024 other columns suffixed with K,M,G,T,P are in multiples of 1000
Function Name Reads: Data Reqs per sec Data per sec Writes: Data Reqs per sec Data per sec Waits: Count Avg Time
Direct Reads 49.2G 14.35 14.154M 0M 0.00 0M 51.1K 1.31us
Others 220M 3.45 .062M 53M 0.95 .015M 14K 24.99us
Buffer Cache Reads 214M 1.05 .06M 0M 0.00 0M 3163 2.56ms</code>
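To compare the same IO-by-function numbers live on both instances (outside AWR), I have been running roughly this query -- a sketch, with column names quoted from memory, so adjust to what V$IOSTAT_FUNCTION actually exposes in 19c:
<code>
select function_name,
       small_read_megabytes,
       large_read_megabytes,
       number_of_waits,
       wait_time
from   v$iostat_function
order  by function_name;
</code>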
Categories: DBA Blogs
Oracle Database 23 Free SYSAUX size causing ORA-12954
Hi Tom,
I'm encountering an issue with my Oracle Database 23 Free. It throws the ORA-12954 error: "The request exceeds the maximum allowed database size of 12 GB." After investigating the database files, I discovered that the SYSAUX tablespace is the main contributor to the size issue.
<code>
/opt/oracle/oradata/FREE/FREEPDB1/sysaux01.dbf 6.1GB
/opt/oracle/oradata/FREE/FREEPDB1/system01.dbf 0.6GB
/opt/oracle/oradata/FREE/FREEPDB1/undotbs01.dbf 0.3GB
/opt/oracle/oradata/FREE/FREEPDB1/users01.dbf 0.3GB
/opt/oracle/oradata/FREE/pdbseed/sysaux01.dbf 0.3GB
/opt/oracle/oradata/FREE/pdbseed/system01.dbf 0.3GB
/opt/oracle/oradata/FREE/pdbseed/undotbs01.dbf 0.1GB
/opt/oracle/oradata/FREE/sysaux01.dbf 6.6GB
/opt/oracle/oradata/FREE/system01.dbf 1.2GB
/opt/oracle/oradata/FREE/undotbs01.dbf 0.6GB
/opt/oracle/oradata/FREE/users01.dbf 0.0GB
</code>
Upon further examination using
<code>
select * from V$SYSAUX_OCCUPANTS;
</code>
it seems that the majority of the space is being utilized by:
- SM/AWR
- SM/OPTSTAT
To address this, I attempted to purge statistics using the following PL/SQL block:
<code>
BEGIN
dbms_stats.purge_stats(dbms_stats.purge_all);
END;
</code>
However, I received the ORA-12954 error again during this process.
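For completeness, the AWR-side cleanup I was planning to try next is of this form (a sketch; the retention/interval values and snapshot IDs are placeholders, and it may of course hit the same ORA-12954 while the database is over the limit):
<code>
-- Shrink AWR retention (values in minutes; placeholders)
BEGIN
  dbms_workload_repository.modify_snapshot_settings(
    retention => 1 * 24 * 60,   -- keep 1 day
    interval  => 60);           -- hourly snapshots
END;
/

-- Then drop an explicit range of old snapshots (IDs are placeholders)
SELECT MIN(snap_id), MAX(snap_id) FROM dba_hist_snapshot;

BEGIN
  dbms_workload_repository.drop_snapshot_range(
    low_snap_id  => 1,
    high_snap_id => 100);
END;
/
</code>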
Do you have any advice on how to resolve this issue and bring the database back within the size limit? Your help would be greatly appreciated!
Best regards,
Christian
Categories: DBA Blogs
difference between 2 dates
I want to get the difference between two dates, say
(10-FEB-2000 - 02-JAN-1999)
and the result has to be displayed in the following format:
1 year, 1 month, 8 days
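For illustration, one common way to get that breakdown uses MONTHS_BETWEEN and ADD_MONTHS (a sketch using the two dates above):
<code>
select trunc(months_between(d2, d1) / 12)                  as years,
       trunc(mod(months_between(d2, d1), 12))              as months,
       d2 - add_months(d1, trunc(months_between(d2, d1)))  as days
from  (select date '1999-01-02' d1, date '2000-02-10' d2 from dual);
</code>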
Categories: DBA Blogs
Create table statement and create index sequence in redo logs / archive logs
Hi Tom,
First of all, thanks for your great work helping so many people with brilliant explanations and examples.
My question is: when I create a table with a primary key and then regenerate the redo statements using LogMiner, ordered by SCN, I see that Oracle creates the unique index on table "test1" first and only then creates the table "test1".
Why and how is Oracle doing that? If I replay these statements on some other database in the same SCN order, it fails because it tries to create a unique index on a table that does not exist.
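For reference, the redo statements were pulled out of LogMiner roughly like this (a sketch; the DBMS_LOGMNR session setup is omitted and the schema name is a placeholder):
<code>
select scn, operation, sql_redo
from   v$logmnr_contents
where  seg_owner = 'MYSCHEMA'
order  by scn;
</code>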
Any help would be highly appreciated.
Environment is Oracle 11.1.7 on AIX.
Many thanks in advance.
Bobby
Categories: DBA Blogs
ORACLE_SID setting for databases in standby configuration
I would kindly like to ask your opinion regarding the ORACLE_SID setting of databases in a standby configuration.
We have different settings across our standby databases, and both configurations work without any complaints.
- ORACLE_SID = db_unique_name (ORACLE_SID on the primary is different from ORACLE_SID on the standby database)
- ORACLE_SID = db_name (ORACLE_SID on the primary is equal to ORACLE_SID on the standby database)
What would be the advantage or downside of one configuration versus the other?
Regards Dejan
Categories: DBA Blogs
Two SUMs from different tables give a weird result
I have three tables: Clients, Invoices and Payments. I am trying to compute the sum of the invoiced amounts and the sum of the payments in the same script.
If I run the sum of the invoiced amounts alone, everything is fine, as below:
<code>
select a.id, a.name,
       sum(b.quantity*b.unitprice)
from clients a, invoices b
where a.id=b.client_id
group by a.id, a.name

ID  NAME     SUM(B.QUANTITY*B.UNITPRICE)
1   Client1  325
2   Client2  150
3   Client3  30
</code>
If I run the sum of the payments alone, the result is also correct:
<code>
select a.id, a.name,
       sum(c.amount)
from clients a, payments c
where a.id=c.client_id
group by a.id, a.name

ID  NAME     SUM(C.AMOUNT)
1   Client1  200
2   Client2  125
3   Client3  30
</code>
But if I try to compute both SUMs in a single select, the result is wrong, and I don't know what I'm doing wrong:
<code>
select a.id, a.name,
       sum(b.quantity*b.unitprice),
       sum(c.amount)
from clients a, invoices b, payments c
where a.id=b.client_id and a.id=c.client_id
group by a.id, a.name

ID  NAME     SUM(B.QUANTITY*B.UNITPRICE)  SUM(C.AMOUNT)
1   Client1  650                          400
2   Client2  300                          250
3   Client3  30                           30
</code>
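For comparison, here is a variant that aggregates each child table separately before joining, so invoice rows are not repeated once per payment row (and vice versa) -- a sketch that should reproduce the numbers from the first two queries:
<code>
select a.id, a.name, b.invoiced, c.paid
from clients a,
     (select client_id, sum(quantity*unitprice) invoiced
        from invoices group by client_id) b,
     (select client_id, sum(amount) paid
        from payments group by client_id) c
where a.id = b.client_id
  and a.id = c.client_id;
</code>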
Categories: DBA Blogs
log_buffer and write throughput
Hello,
We have a batch process that commits after every single insert, each involving just a few bytes. Ideally it would commit in batches (e.g. after every 1000 entities), but it is a legacy application and modifying the code is not possible. Will reducing the log_buffer size improve performance? Right now it is 28 MB. It is a large server with 700+ GB of RAM and the SGA set to 220 GB. My reasoning: since each commit writes only a few bytes, if flushing the redo means flushing the entire log_buffer, then reducing the log_buffer size should improve performance. But I read somewhere that not all of the log_buffer gets flushed, and only minimal data gets written to disk (online redo) on commit. Requesting expert opinion.
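Before changing anything, I have been trying to quantify the commit behaviour with roughly these queries (a sketch; statistic and event names as I recall them):
<code>
-- Redo volume versus number of commits
select name, value
from   v$sysstat
where  name in ('redo size', 'redo writes', 'user commits');

-- Are commits actually waiting on redo writes?
select event, total_waits, time_waited_micro
from   v$system_event
where  event in ('log file sync', 'log file parallel write');
</code>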
Categories: DBA Blogs
BLOB file deleted and LOBSEGMENT is still big.
Hi Ask Tom team,
first of all, sorry for my lack of DBA knowledge.
I have a table with a BLOB column whose total length was over 3 GB from the beginning, and of course the tablespace size is more than 3 GB.
Now I have deleted 3 GB of BLOB data and only around 30 MB is left.
But I cannot shrink the tablespace, because the LOBSEGMENTs are now extended to block_id positions up to 3 GB.
Is there any way to get around this, i.e. to rearrange the segments in the tablespace as they were before they were extended?
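The commands I have been looking at for reclaiming the space are of this form (a sketch; table, column and tablespace names are placeholders, and depending on the version and on whether the LOB is a BasicFile or SecureFile one or the other may not be supported):
<code>
-- Option 1: shrink the LOB segment in place
ALTER TABLE my_blob_table ENABLE ROW MOVEMENT;
ALTER TABLE my_blob_table MODIFY LOB (blob_col) (SHRINK SPACE);

-- Option 2: rebuild the LOB segment, releasing the old extents
ALTER TABLE my_blob_table MOVE LOB (blob_col) STORE AS (TABLESPACE users);
</code>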
Many thanks
Categories: DBA Blogs
Time Dimension - Cardinality querying on correlated columns
We have nagging plan stability issues which are often traced back to bad cardinality estimates from our time dimension table. There are numerous workarounds (hinting, baselines, profiles, etc) but it would be so much easier if the CBO just got it right.
I think our time dimension table is typical. The most granular data is hourly: the table contains one row for every hour (the key column) and has other columns for the day/month/year that the hour rolls up to. This example is simplified but illustrates the point.
<code>
create table tst_timetab (hour date, day date, month date, year date, primary key (hour) );
insert into tst_timetab
select day+offset/24 hour, trunc(day+offset/24, 'dd') day, trunc(day+offset/24, 'mm') month, trunc(day+offset/24, 'yy') year
from (select to_date('1-jan-2020', 'dd-mon-yyyy') day, (level-1) offset from dual connect by level<=5*365*24);
commit;
exec DBMS_STATS.GATHER_TABLE_STATS(sys_context('USERENV', 'CURRENT_USER'), 'tst_timetab', cascade=>true, method_opt => 'FOR ALL COLUMNS SIZE 255 FOR COLUMNS SIZE 255 (day,hour)', no_invalidate=>false );
select *
from tst_timetab
where day = to_date('20-jan-2022', 'dd-mon-yyyy')
and hour >= to_date('20-jan-2022', 'dd-mon-yyyy')
and hour < to_date('21-jan-2022', 'dd-mon-yyyy');
</code>
We would like to see this return a cardinality > 1 (preferably 24).
DAY and HOUR are correlated predicates (hour determines day). For example, DAY and HOUR might both be specified as predicates because other tables in the join are partitioned on HOUR (thus requiring a bind to do partition elimination or other types of predicate pushdowns).
We've tried multi-column extended stats and various histograms to no avail. We also looked into the CREATE DIMENSION statement, but are not sure how it would help us here. Hundreds of queries are written in this manner, so changing them to use a new table (or dimension) would have a lot of impact.
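For reference, the multi-column extended statistics mentioned above were created along these lines (a sketch):
<code>
select dbms_stats.create_extended_stats(sys_context('USERENV', 'CURRENT_USER'), 'TST_TIMETAB', '(DAY,HOUR)') from dual;
exec DBMS_STATS.GATHER_TABLE_STATS(sys_context('USERENV', 'CURRENT_USER'), 'tst_timetab', method_opt => 'FOR ALL COLUMNS SIZE 255', no_invalidate => false);
</code>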
LiveSQL with data and stats included. Many thanks.
https://livesql.oracle.com/ords/livesql/s/colpdvbgiquu7vu9hpw572un4
Categories: DBA Blogs
Loading a large Excel file (up to 2000 columns) into Oracle Database
I have a requirement to load a large Excel file (up to 2000 columns) into an Oracle database. I am aware of the Oracle limitation of a maximum of 1000 columns per table and I am happy to split the data into multiple tables. Could you please help?
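For illustration, the split I have in mind is two (or three) tables sharing a row key, roughly like this (a sketch; table and column names are placeholders):
<code>
create table xls_load_part1 (
  row_id   number primary key,
  col_0001 varchar2(100),
  col_0002 varchar2(100)
  -- ... up to roughly column 1000
);

create table xls_load_part2 (
  row_id   number primary key references xls_load_part1 (row_id),
  col_1001 varchar2(100),
  col_1002 varchar2(100)
  -- ... remaining columns
);
</code>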
Categories: DBA Blogs
LINGUISTIC and BINARY_CI performance problem
Hello,
We use Oracle 19c with c# ADO and EntityFramework.
We have a performance issue when we use ALTER SESSION SET NLS_COMP = LINGUISTIC NLS_SORT = BINARY_CI;
Our system uses this for all queries.
I use this simple table for testing:
<code>CREATE TABLE app."Customer" (
    "Id" NUMBER(19) GENERATED BY DEFAULT ON NULL AS IDENTITY,
    "Name" NVARCHAR2(254) NOT NULL,
    PRIMARY KEY ("Id")
);

BEGIN
    FOR i IN 1..2000000 LOOP
        INSERT INTO app."Customer" ("Name")
        VALUES (
            DBMS_RANDOM.STRING('U', 10)
        );
        IF MOD(i, 10000) = 0 THEN
            COMMIT;
        END IF;
    END LOOP;
    -- Final commit for any remaining rows
    COMMIT;
END;
/

CREATE INDEX app."IX_Customer_Name_NLS" ON app."Customer" (NLSSORT("Name", 'NLS_SORT=BINARY_CI'));
CREATE INDEX app."IX_Customer_Name" ON app."Customer" ("Name");</code>
Now if I select:
0.02ms
<code>SELECT * FROM app."Customer" WHERE "Name" LIKE N'test' || '%'</code>
0.02ms
<code>SELECT * FROM app."Customer" WHERE "Name" LIKE &name || '%'</code>
0.02ms
<code> BEGIN
FOR rec IN (SELECT "Name"
FROM app."Customer"
WHERE "Name" LIKE 'test%'
OFFSET 0 ROWS FETCH NEXT 100 ROWS ONLY)
LOOP
DBMS_OUTPUT.PUT_LINE(rec."Name");
END LOOP;
END;
</code>
0.700ms
<code> DECLARE
v_name NVARCHAR2(254) := N'test';
BEGIN
FOR rec IN (
SELECT "Name"
FROM app."Customer"
WHERE "Name" like v_name || '%'
OFFSET 0 ROWS FETCH NEXT 100 ROWS ONLY
)
LOOP
DBMS_OUTPUT.PUT_LINE(rec."Name");
END LOOP;
END;</code>
ADO+EF uses the last form when sending a request, so the whole system works slowly.
I tried the last (parameterized) query under 4 different NLS_COMP + NLS_SORT combinations:
BINARY + BINARY
LINGUISTIC + BINARY
BINARY + BINARY_CI
LINGUISTIC + BINARY_CI
Results:
<code>Query 1 duration: 14 ms
Query 2 duration: 11 ms
Query 3 duration: 10 ms
Query 4 duration: 557 ms <----LINGUISTIC +BINARY_CI</code>
If I take only 1-5 rows, the result is fast, provided the rows are found. If not, it is slow -- it probably does a full scan.
When I create another DB with max_string_size=EXTENDED and create the same table with "Name" NVARCHAR2(254) COLLATE BINARY_CI NOT NULL, I get this on all queries:
Results:
<code>Query 1 duration: 173 ms
Query 2 duration: 173 ms
Query 3 duration: 171 ms
Query 4 duration: 180 ms</code>
It looks like that is faster than using the session settings, but not as fast as a plain parameterless SQL request.
I want to achieve the speed of a plain SQL request, if that is possible.
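For comparison, a case-insensitive variant that sidesteps NLS_COMP = LINGUISTIC entirely would index and search on UPPER("Name") -- a sketch, not something our EF layer currently generates:
<code>
CREATE INDEX app."IX_Customer_Name_Upper" ON app."Customer" (UPPER("Name"));

DECLARE
    v_name NVARCHAR2(254) := N'test';
BEGIN
    FOR rec IN (
        SELECT "Name"
        FROM app."Customer"
        WHERE UPPER("Name") LIKE UPPER(v_name) || '%'
        OFFSET 0 ROWS FETCH NEXT 100 ROWS ONLY
    )
    LOOP
        DBMS_OUTPUT.PUT_LINE(rec."Name");
    END LOOP;
END;
/
</code>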
Thank you.
Categories: DBA Blogs
Oracle APEX 24.1 Export and Import
How can I export and import Oracle APEX 24.1 applications and pages from a source workspace to a destination workspace that has different IDs and aliases?
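For reference, the import-side adjustment I have seen referenced uses the APEX_APPLICATION_INSTALL package before running the exported f&lt;app_id&gt;.sql in the target workspace (a sketch from memory; the workspace name, alias and file name are placeholders):
<code>
begin
  apex_application_install.set_workspace('TARGET_WS');       -- target workspace name
  apex_application_install.generate_application_id;          -- avoid application ID clashes
  apex_application_install.generate_offset;
  apex_application_install.set_application_alias('MYAPP_V2'); -- new alias if needed
end;
/
-- then run the application export, e.g. @f100.sql
</code>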
Categories: DBA Blogs
How to resolve resmgr:cpu quantum wait event.
Hi Team,
I have tried to find a good explanation for this wait event online, but I have not found anything that gives me a clear and concise answer.
Recently we have started getting the "resmgr:cpu quantum" wait event on our production database. This is a database with 40 CPUs and it runs on Exadata.
I have tried all the scripts I could find online to determine what the problem is, but I cannot see that we have too much happening on our machine at the time of the event. Are there maybe some other events I should look for in conjunction with this one?
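For what it is worth, these are the kinds of checks I have been running to see which resource manager plan is active and which consumer groups are being throttled (a sketch; column names from memory):
<code>
-- Which resource manager plan is currently active (including maintenance-window plans)?
select name from v$rsrc_plan;

-- Which consumer groups are accumulating CPU waits?
select name, cpu_waits, cpu_wait_time
from   v$rsrc_consumer_group
order  by cpu_wait_time desc;
</code>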
Hope you can help.
Ingimundur K. Gudmundsson
Categories: DBA Blogs