Tom Kyte
SQL effectiveness metrics
Dear Ask Tom team members,
Is it possible to measure SQL effectiveness via a computed throw-away row count?
It's clear that the optimizer tries to build the plan to minimize throw-away rows for the next steps in the plan, so it tries to send only relevant rows to the next operation.
I get that a huge throw-away at the beginning is very beneficial for the subsequent steps.
My intent is to find big (in size of queried data and running time) SQLs that do a lot of ineffective work on irrelevant data, in order to find terrible design of tables, partitioning, indexes, and all other data access paths.
So the findings should be something like a step in the plan and a ratio of rowsource_result (rows produced by the plan step) / rowsource_out (rows sent to the next step, i.e. the parent operation).
I know it's possible to see this in the plan statistics, via the actual rows metric of each plan step.
But is there a method to get those metrics for all of a schema's SQLs, in a similar way to the top ten SQLs measured by CPU, I/O, or memory consumption?
I know that such a metric could be misleading; it should be for finding ineffective database design and SQLs that waste a lot of work and resources.
I tried some SQLs based on SQL plan statistics, but I'm not sure if they are precise.
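For what it's worth, the kind of query I tried looks like the sketch below, joining each plan step to its parent (this assumes the cursors are still in the shared pool and were executed with STATISTICS_LEVEL = ALL or the gather_plan_statistics hint, otherwise the row counts are null; the 10,000-row threshold is an arbitrary placeholder):
<code>-- per-step throw-away ratio from cached plan statistics (a sketch)
SELECT s.sql_id,
       s.id               AS step_id,
       s.operation,
       s.object_name,
       s.last_output_rows AS rows_produced,
       p.last_output_rows AS rows_kept_by_parent,
       ROUND(s.last_output_rows / NULLIF(p.last_output_rows, 0), 1) AS throwaway_ratio
FROM   v$sql_plan_statistics_all s
JOIN   v$sql_plan_statistics_all p
       ON  p.sql_id       = s.sql_id
       AND p.child_number = s.child_number
       AND p.id           = s.parent_id
WHERE  s.last_output_rows > 10000
ORDER  BY throwaway_ratio DESC;</code>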
Thank You.
Tomas.
Categories: DBA Blogs
PLS-00201: identifier 'APEX_DATA_EXPORT.T_EXPORT' must be declared
Hello everyone,
I am trying to create a CSV file, attach it to an email, and send the email, but I am receiving the following error: PLS-00201: identifier 'APEX_DATA_EXPORT.T_EXPORT' must be declared
Below is the code I am attempting to run:
<code>declare
l_context apex_exec.t_context;
l_export apex_data_export.t_export;
l_mail_id NUMBER;
l_workspace_id NUMBER;
name varchar2(225);
begin
select LNAME || '_' || FNAME || '_' || MNAME into name
from IN_PROCESSING
where :P23_PERS_ID = PERS_ID;
l_workspace_id := apex_util.find_security_group_id (p_workspace => 'OPERATIONS MANAGEMENT');
apex_util.set_security_group_id (p_security_group_id => l_workspace_id);
l_context := apex_exec.open_query_context (
p_location => apex_exec.c_location_local_db,
p_sql_query => 'select i.EMPLOYEEID,
i.LNAME || '', '' || i.FNAME || '' '' || i.MNAME as NAME,
ASSIGNED_ORG,
PHONE_NUM,
(select p.LNAME || '', '' || p.FNAME || '' '' || p.MNAME from PERSONS p where i.SUPV = p.DISPLAY_NAME) as SUPERVISOR_NAME,
i.OFFICE_SYM,
i.POSITION_TYPE,
i.DUTY_TITLE || '' '' || i.RANK_GRADE as DUTY_TITLE_RANK_GRADE,
CYBERAWARE_DATE,
decode(i.GAIN_TYPE, ''Foreign National'', ''Yes'',NULL) as FOREIGN_NATIONAL,
COMPANY_NAME,
CONTR_NO,
CONTR_EXP_DATE
from IN_PROCESSING i
where PERS_ID = '||:P23_PERS_ID||';',
p_file_name => ''||name||'_test.csv');
l_export := apex_data_export.export (p_context => l_context, p_format => apex_data_export.c_format_csv);
apex_exec.close (l_context);
l_mail_id :=
APEX_MAIL.SEND (p_to => 'me@mail',
p_from => 'csv_test@us.af.mil',
p_body => 'words',
p_body_html => '<h2>Report is attached</h2>',
p_subj => 'Example Report');
APEX_MAIL.ADD_ATTACHMENT (p_mail_id => l_mail_id,
p_attachment => l_export.content_blob,
p_filename => ''||name||'_test.csv',
p_mime_type => 'text/csv');
apex_mail.push_queue;
end;</code>
And below is the error received:
ORA-06550: line 3, column 17:
PLS-00201: identifier 'APEX_DATA_EXPORT.T_EXPORT' must be declared
ORA-06550: line 0, column 0:
PL/SQL: Compilation unit analysis terminated
ORA-06512: at "SYS.DBMS_SQL", line 1721
1. declare
2. l_c...
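In case it is relevant, I still need to verify whether my APEX installation has the APEX_DATA_EXPORT package at all; my understanding is that it was only introduced in APEX 20.2. A minimal check (a sketch):
<code>-- APEX_DATA_EXPORT requires APEX 20.2 or later
SELECT version_no FROM apex_release;
-- confirm the package is visible to the current schema
SELECT owner, object_name FROM all_objects WHERE object_name = 'APEX_DATA_EXPORT';</code>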
Categories: DBA Blogs
Impdp with remap_table ran a full import
Hi Tom,
An impdp command with remap_table in a parfile was run on Oracle Linux, but it ran a full import. We've lost the impdp logs. How could we get information about the impdp command that was run, so we can clarify the issue?
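In case it helps, we thought of looking at the Data Pump job views and any leftover master tables (a sketch; completed jobs normally drop their master table, so this may only show jobs that are still defined or that ended abnormally):
<code>-- Data Pump jobs still known to the database
SELECT owner_name, job_name, operation, job_mode, state
FROM   dba_datapump_jobs;
-- leftover master tables sometimes survive abnormal job ends
SELECT owner, table_name
FROM   dba_tables
WHERE  table_name LIKE 'SYS_IMPORT%';</code>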
Thanks a lot
Regards!
Categories: DBA Blogs
Oracle Scheduler - Lethargic
On my test database, Oracle scheduler seems a bit lethargic.
In my case, I am inserting data into 6 different tables (a million-plus rows in each table). These are done as insert into / select from via stored procedures; I have a package with 6 procedures, and in the package I create 6 jobs to do the insertion. At any given point only 4 jobs are running, and they seem much slower than if I run these 6 procedures from a SQL*Plus command line.
I created a new job class for these jobs, and my job_queue_processes parameter is currently set to 160.
Is there something I can do to make these move faster via the scheduler?
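In case it matters, this is how I have been checking what might be throttling the jobs (a sketch; 'MY_JOB_CLASS' is a placeholder for my job class name):
<code>-- the active resource plan can cap concurrently running scheduler jobs
SELECT name FROM v$rsrc_plan WHERE is_top_plan = 'TRUE';
-- which consumer group the job class maps to
SELECT job_class_name, resource_consumer_group
FROM   dba_scheduler_job_classes
WHERE  job_class_name = 'MY_JOB_CLASS';</code>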
Thanks
BC,
CT,MI
Categories: DBA Blogs
Sub queries not working if db link used
hi Tom,
When a remote table is referred to in a subquery, the query does not retrieve the data and throws an error.
Refer to the SQL below: I am trying to get the latest record, so I order by DATEM in the subquery and take the first record in the outer SQL. Here, OWNERDOCUMENTS, used in the FROM clause, is a synonym created for a remote table.
The SQL below is not working:
<code>select ownercode,img_path_back,img_path_front,DELETED_FLG,useridm,DATEM
FROM ( select ownercode,img_path_back,img_path_front,customer_name,NVL(DELETED_FLG,'N') DELETED_FLG,useridm,DATEM
from OWNERDOCUMENTS where OWNERCODE=NVl('xxxx',OWNERCODE) Order By DATEM DESC ) WHERE ROWNUM=1;</code>
The SQL below is working:
<code>select ownercode,img_path_back,img_path_front,customer_name,NVL(DELETED_FLG,'N') DELETED_FLG,useridm,DATEM
from OWNERDOCUMENTS where OWNERCODE=NVl('xxxx',OWNERCODE) Order By DATEM DESC;
</code>
What could be the issue? Please advise.
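For reference, a rewrite I am considering that avoids ROWNUM on top of an ordered inline view, using an analytic function instead (a sketch, untested against the remote table):
<code>select ownercode, img_path_back, img_path_front, DELETED_FLG, useridm, DATEM
FROM ( select ownercode, img_path_back, img_path_front, customer_name,
              NVL(DELETED_FLG,'N') DELETED_FLG, useridm, DATEM,
              ROW_NUMBER() OVER (ORDER BY DATEM DESC) rn
       from OWNERDOCUMENTS
       where OWNERCODE = NVL('xxxx', OWNERCODE) )
WHERE rn = 1;</code>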
Categories: DBA Blogs
parallel_force_local=true with hint parallel in sql
In our production environment, a 2-node 11.2.0.4 RAC, the parameter parallel_force_local=true is set.
One SQL uses the hint PARALLEL, i.e. SELECT /*+ PARALLEL(t1 8) */ ... In the SQL execution plan we found parallel execution on 2 instances:
Parallel Execution Details (DOP=8 , Servers Allocated=8)
Instances : 2
=========================================================
| Instance | Name | Type | Server# | Elapsed |
| | | | | Time(s) |
=========================================================
| 2 | PX Coordinator | QC | | 0.01 |
| 1 | p000 | Set 1 | 1 | 7.31 |
| 1 | p001 | Set 1 | 2 | 7.28 |
| 1 | p002 | Set 1 | 3 | 7.39 |
| 1 | p003 | Set 1 | 4 | 7.53 |
| 2 | p001 | Set 1 | 6 | 7.43 |
| 2 | p002 | Set 1 | 7 | 7.36 |
| 2 | p003 | Set 1 | 8 | 7.36 |
=========================================================
Does this mean parallel_force_local=true is not usable?
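One thing I plan to check is the parameter on both nodes, in case the setting differs between instances (a sketch):
<code>SELECT inst_id, name, value
FROM   gv$parameter
WHERE  name = 'parallel_force_local';</code>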
Categories: DBA Blogs
Free Space Fragmentation Index - Warning
Good afternoon, Tom.
I've started working for a new company recently and was just assigned a ticket regarding a "Free Space Fragmentation Index" warning for the SYSTEM tablespace of one of our databases (something I never ran into at my previous job). The first thing I did was check to see whether this tablespace is locally managed or not:
<code>SELECT tablespace_name, extent_management
FROM dba_tablespaces
WHERE tablespace_name = 'SYSTEM';
TABLESPACE_NAME EXTENT_MAN
------------------------------ ----------
SYSTEM DICTIONARY
</code>
The ticket mentions an FSFI of 14.14, so I ran the following query to confirm:
<code>SELECT tablespace_name,
       SQRT( MAX( blocks ) / SUM( blocks ) ) *
       ( 100 / SQRT( SQRT( COUNT( blocks ) ) ) ) AS fsfi
FROM   dba_free_space
GROUP BY tablespace_name;
TABLESPACE_NAME FSFI
------------------------------ ----------
SYSTEM 14.1405943</code>
Since the tablespace is not locally managed, I attempted to coalesce it:
<code>ALTER TABLESPACE SYSTEM COALESCE;</code>
But that did nothing to improve the FSFI:
<code>SELECT tablespace_name,
       SQRT( MAX( blocks ) / SUM( blocks ) ) *
       ( 100 / SQRT( SQRT( COUNT( blocks ) ) ) ) AS fsfi
FROM   dba_free_space
GROUP BY tablespace_name;
TABLESPACE_NAME FSFI
------------------------------ ----------
SYSTEM 14.1405943</code>
Am I taking the wrong approach to this problem? Is this even a problem? Any insight on how I could proceed would be greatly appreciated.
Categories: DBA Blogs
How to match cve to oracle patch number
Oracle's Critical Patch Update Advisory publishes the Oracle patch number along with a risk matrix of CVEs.
For some reason, I am unable to find any way to see which CVEs a given patch number addresses.
Am I missing something, or is this not possible?
Categories: DBA Blogs
Good Way to Capture Records of Previous Month from UNIFIED_AUDIT_TRAIL
v19.22
The requirement is to run some queries against the audit trail at the beginning of the month, to capture details about the previous calendar month.
EVENT_TIMESTAMP in UNIFIED_AUDIT_TRAIL is TIMESTAMP(6) (not DATE).
Is this an appropriate WHERE clause to query this view? Or is there a better way when dealing with TIMESTAMP vs DATE?
<code>
/* first day of last month at 00:00:00 */
WHERE CAST( EVENT_TIMESTAMP AS DATE ) >= TRUNC( ADD_MONTHS( SYSDATE, -1 ), 'MM' )
/* first day of current month at 00:00:00 */
AND CAST( EVENT_TIMESTAMP AS DATE ) < TRUNC( SYSDATE, 'MM' )
</code>
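An alternative I considered is casting the boundaries instead of the column, so the predicate stays directly on EVENT_TIMESTAMP (a sketch; wrapping the column itself in CAST could get in the way of any index or partition pruning on it):
<code>
/* first day of last month at 00:00:00 */
WHERE EVENT_TIMESTAMP >= CAST( TRUNC( ADD_MONTHS( SYSDATE, -1 ), 'MM' ) AS TIMESTAMP )
/* first day of current month at 00:00:00 */
AND EVENT_TIMESTAMP < CAST( TRUNC( SYSDATE, 'MM' ) AS TIMESTAMP )
</code>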
Thanks
Categories: DBA Blogs
index fragmentation
Hi Tom,
I have a question about index fragmentation. In the famous white paper "How to Stop Defragmenting and Start Living: The Definitive Word of Fragmentation" by Bhaskar Himatsingka and Juan Loaiza, in section 4.2, they provide a script to tell whether an index can reduce its blevel. The procedure is this:
1.) <code>analyze index index_name validate structure;</code>
2.) <code>select name NAME,
       (br_rows_len*100)/(br_blk_len*br_blks) BRANCH_UTILIZATION,
       ((lf_rows_len - del_lf_rows_len)*100)/(lf_blk_len*lf_blks) LEAF_UTILIZATION,
       decode(sign(ceil(log(br_blk_len/(br_rows_len/br_rows),
                            lf_blk_len/((lf_rows_len - del_lf_rows_len)/(lf_rows - del_lf_rows))))
                   + 1 - height), -1, 'YES', 'NO') CAN_REDUCE_LEVEL
from index_stats;</code>
I have a couple of indexes where, after running the script, CAN_REDUCE_LEVEL shows YES. After I did an "alter index xxx rebuild ...", the flag still shows YES. Why didn't the height reduce after the rebuild?
Categories: DBA Blogs
Extract data from json response
I have a JSON response from an Oracle service like the following:
{
  "Status": null,
  "RequestNumber": null,
  <b>"validationResults":
  [
    {
      "code": "E1000",
      "details": "Validation Error"
    }
  ]</b>
}
Now I can read the Status and RequestNumber keys like below:
<code> obj_rsp :=json.json(content_resp);
v_status :=json.json_ext.get_string(obj_rsp, 'status');
v_req_no :=json.json_ext.get_number(obj_rsp, 'data.requestNumber');
--
/*v_resp_code :=json.json_ext.get_string(obj_rsp, 'validationResults.Code');
v_resp_desc :=json.json_ext.get_string(obj_rsp, 'validationResults.details');*/</code>
I need to get the validationResults code and details.
How could I do it?
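If the database is 12.1 or later, one option is to parse the raw response with the native SQL/JSON functions instead of the PL/JSON getters (a sketch, assuming content_resp holds the raw JSON text and only the first array entry is needed; JSON_TABLE would handle multiple entries):
<code>select json_value(content_resp, '$.validationResults[0].code')    as v_resp_code,
       json_value(content_resp, '$.validationResults[0].details') as v_resp_desc
from dual;</code>
In PL/SQL this becomes a SELECT ... INTO v_resp_code, v_resp_desc FROM dual.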
Categories: DBA Blogs
Optimizing Recursive SQL Query Performance
Problem Statement
I am working with a large dataset (~11-12 million records), where I need to identify key out-of-stock (OOS) event dates for different products (MASTER_ID) across multiple marketplaces (MARKETPLACE_ID).
Each record in the DETAILED_OOS_EVENTS table represents an OOS event on a particular date, and the combination of (MASTER_ID, MARKETPLACE_ID, OOS_DATE) is always unique.
Goal of the Query
I need to:
Find the earliest OOS event (MIN(OOS_DATE)) for each (MASTER_ID, MARKETPLACE_ID).
Recursively find the next OOS event, which occurs at least 7 days after the previous event.
Repeat this process until no more OOS events satisfy the condition.
Issue
I am using a recursive CTE (WITH RECURSIVE), but since the dataset is large (~11-12M rows), the query takes too long to execute. I'm looking for ways to optimize it.
I have provided table structure, sample data and current query as a livesql script.
Script URL: https://livesql.oracle.com/next/library/scripts/recursive-query-for-oos-events-47UhGS
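For reference, an alternative I am also evaluating is a single pass with MATCH_RECOGNIZE instead of recursion (a sketch, assuming the table and columns described above; each match keeps one event as "a" and consumes every following event less than 7 days after it as "b"):
<code>SELECT master_id, marketplace_id, key_date
FROM   detailed_oos_events
MATCH_RECOGNIZE (
  PARTITION BY master_id, marketplace_id
  ORDER BY oos_date
  MEASURES a.oos_date AS key_date
  ONE ROW PER MATCH
  AFTER MATCH SKIP PAST LAST ROW
  PATTERN (a b*)
  DEFINE b AS b.oos_date < a.oos_date + 7
);</code>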
Categories: DBA Blogs
XID differs for ROLLBACK TO SAVEPOINT redo logs in V$LOGMNR_CONTENTS
Oracle Version: Docker Image container-registry.oracle.com/database/enterprise:19.19.0.0
Setup Script (to enable logminer)
<code>
SELECT log_mode FROM v$database;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
SELECT log_mode FROM v$database;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
select supplemental_log_data_min, supplemental_log_data_pk from v$database;
</code>
I'm using LogMiner to get the redo logs, and I noticed this strange behavior:
- Client Side Queries:
<code>SELECT CURRENT_SCN FROM V$DATABASE; -- 2554427
INSERT INTO t1 VALUES (1, 1);
SAVEPOINT s1;
INSERT INTO t1 VALUES (2, 2);
SELECT CURRENT_SCN FROM V$DATABASE; -- 2554481
INSERT INTO t1 VALUES (3, 3);
SELECT CURRENT_SCN FROM V$DATABASE; -- 2554528
ROLLBACK TO s1;
SELECT CURRENT_SCN FROM V$DATABASE; -- 2554565
COMMIT;
SELECT CURRENT_SCN FROM V$DATABASE; -- 2554583
</code>
- To query LogMiner for all the redo logs in this transaction:
<code>
EXEC DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/opt/oracle/oradata/ORCLCDB/redo01.log', OPTIONS => DBMS_LOGMNR.NEW);
EXEC DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/opt/oracle/oradata/ORCLCDB/redo02.log', OPTIONS => DBMS_LOGMNR.ADDFILE);
EXEC DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/opt/oracle/oradata/ORCLCDB/redo03.log', OPTIONS => DBMS_LOGMNR.ADDFILE);
EXEC DBMS_LOGMNR.START_LOGMNR(STARTSCN => 2554427, ENDSCN => 2554565, OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SELECT SCN, XID, XIDUSN, XIDSLT, XIDSQN, SQL_REDO, ROLLBACK FROM V$LOGMNR_CONTENTS WHERE TABLE_NAME = 'T1';
</code>
This gives me:
<code>
SCN XID XIDUSN XIDSLT XIDSQN SQL_REDO ROLLBACK
__________ ___________________ _________ _________ _________ _________________________________________________________________ ___________
2554454 D4001F0014000000 212 31 20 insert into "MY_USER"."T1"("PK","V") values ('1','1'); 0
2554475 D4001F0014000000 212 31 20 insert into "MY_USER"."T1"("PK","V") values ('2','2'); 0
2554524 D4001F0014000000 212 31 20 insert into "MY_USER"."T1"("PK","V") values ('3','3'); 0
2554545 D4001F0014000000 212 31 20 delete from "MY_USER"."T1" where ROWID = 'AAASkaAAHAAAAFcAAC'; 1
2554545 D4001F0014000000 212 31 20 delete from "MY_USER"."T1" where ROWID = 'AAASkaAAHAAAAFcAAB'; 1
</code>
- To query LogMiner for only the ROLLBACK TO s1 statement:
<code>
EXEC DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/opt/oracle/oradata/ORCLCDB/redo01.log', OPTIONS => DBMS_LOGMNR.NEW);
EXEC DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/opt/oracle/oradata/ORCLCDB/redo02.log', OPTIONS => DBMS_LOGMNR.ADDFILE);
EXEC DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/opt/or...
Categories: DBA Blogs
ORA-01843: not a valid month
I'm trying to follow along with the live lab "Build a 'Doctor Appointments Made Easy' App using Oracle APEX" to get to know the new Workflows in Application Express.
In Lab 9, task 6, I run into an error in the Check Appointment Fee activity: ORA-01843: not a valid month.
Since I'm still very new to Application Express, I have no clue where to look for the cause of this error.
When I check the details for the workflow, the following activities are completed:
- Start
- Compute Doctor Availability
- Doctor Available?
- Raise Appointment Request
- Appointment Approved?
- Confirm Appointment
Then this activity is faulted:
- Check Appointment Fee
ORA-01843: not a valid month
Can you provide me some help with fixing this issue?
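In case it points in the right direction: my understanding is that ORA-01843 usually means a string is being converted to a date with a format mask that does not match the data (or the session's NLS defaults are being relied on). A sketch of how to check the session default:
<code>SELECT value
FROM   nls_session_parameters
WHERE  parameter = 'NLS_DATE_FORMAT';</code>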
Categories: DBA Blogs
Regarding the error of qcdlgcd()+75
========= Dump for incident 2526042 (ORA 7445 [qcdlgcd]) ========
[TOC00003]
----- Beginning of Customized Incident Dump(s) -----
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0x4] [PC:0x7FF87F3C490B, qcdlgcd()+75]
The database reported a socket error, and the above error was found in the trace file. I don't know how to solve it.
Categories: DBA Blogs
How to get the optimizer to automatically find the best explain plan for an SQL with an expression included
I was recently asked to find out why an SQL in our web solution took so much time to execute.
- The SQL involved querying vast amounts of data
- Without the time-consuming expression, the SQL took less than a second to execute.
- Our DBA used an "optimizer advisor" tool to make an optimal execution plan. It then ran in less than a second.
The part which took time was (at the top of the query):
- <code>SELECT DECODE(COUNT(*), 0, 0, 1)</code>
combined with (inside the query)
-<code> AND NVL(gu.dstopdate + in_exclude_prior_days, SYSDATE) >= SYSDATE</code>
Question
Is there a way to get the optimizer to automatically find the best explain plan for an SQL with an expression included?
Note
I did find a way to always get an optimal result by simply making the expression run on only the resulting query rows instead of the whole query rowset. But that is beside the point. An answer to the question above would make life a lot simpler for our DBA and, of course, our customers.
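For context, I assume the "optimizer advisor" tool our DBA used works along the lines of the SQL Tuning Advisor; a minimal sketch of that approach (the sql_id and the task name in the report call are placeholders):
<code>DECLARE
  l_task VARCHAR2(64);
BEGIN
  -- create and run a tuning task for the slow statement
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => 'abcd1234efgh5', time_limit => 300);
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
  DBMS_OUTPUT.PUT_LINE(l_task);
END;
/
-- then read the recommendations
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('TASK_NAME_FROM_ABOVE') FROM dual;</code>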
Categories: DBA Blogs
sqlldr not loading multiline columns enclosed in double quotes
I get a .csv log file dropped from a Microsoft server to a Linux server using WinSCP. The REASON column is enclosed in double quotes and includes LF characters. Here is a 2-record snippet of the file:
5826431,5826431,,,New,17255483,,,,2e4cf893-078d-45b3-b548-2dc3c970ff25,5826431,1021715,"The following errors occurred:
Error while cancelling backordered line(s) 10
Sales document overall status is completed. Item cannot be Updated",909652,S2323358
5826431,5826431,,,New,17255481,,,,719e566e-f143-4126-8827-303d3e7d1aab,5826431,1021715,"The following errors occurred:
Error while cancelling backordered line(s) 10
Sales document overall status is completed. Item cannot be Updated",909652,S2323358
Here is the control file I use:
<code>load
infile 'data.csv'
APPEND into table example_table
fields terminated by ',' optionally enclosed by '"' trailing nullcols
(
LOG_COL_1 nullif LOG_COL_1=BLANKS "to_number(:LOG_COL_1)"
,LOG_COL_2
,LOG_COL_3
,LOG_COL_4_TIME nullif LOG_COL_4_TIME=BLANKS "to_date(:LOG_COL_4_TIME,'MM/DD/YYYY HH24:MI:SS')"
,LOG_COL_5
,LOG_COL_6
,LOG_COL_7
,LOG_COL_8
,LOG_COL_9
,LOG_COL_10
,COL_1 nullif COL_1=BLANKS "to_number(:COL_1)"
,COL_2 nullif COL_2=BLANKS "to_number(:COL_2)"
,COL_3
,COL_4
,COL_5
,REASON
,COL_6
,COL_7
)</code>
When I load, just as the ctl file is written, I get this error:
Record 1: Rejected - Error on table example_table, column REASON.
second enclosure string not present.
I have tried adding "str X'4C46'" (and variations to remove the LF code) after the csv file name, and then it says the record is too long.
I have also tried changing the definition of REASON to REASON char(4000) "replace(:REASON,'\n',' ')" enclosed by '"',
also trying variations of '\n' ('LF', 'chr(10)', 'chr(13)||chr(10)', 'CRLF'), etc.
All with no luck.
My Unix machine is RHEL 8. Please help me understand why this isn't working.
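For what it's worth, the variant I am about to try next treats the CRLF pair (the file comes from a Windows server) as the record terminator, so the bare LFs inside REASON stay within their field, and it also raises the field buffer above SQL*Loader's 255-character default (a sketch; the '...' stands for the unchanged columns from the control file above):
<code>load
infile 'data.csv' "str '\r\n'"
APPEND into table example_table
fields terminated by ',' optionally enclosed by '"' trailing nullcols
(
...
,REASON char(4000)
...
)</code>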
Thanks
Categories: DBA Blogs
Encountering error while loading Data with Toad for oracle
I was loading a large data set into a newly created table in Toad for Oracle 16.3 using SQL*Loader, with Excel data, but I got the error "Rejected - Error on table SCH_STRUCT_FIN.OFFSHORE_INV_COMM_MEETING_DATA, column COLLATERAL_FUNDING_REQUIREMENT. Field in data file exceeds maximum length",
and also, for record 258: "Rejected - Error on table SCH_STRUCT_FIN.OFFSHORE_INV_COMM_MEETING_DATA, column TRIGGERING_EVENT_FOR_TRUST_CONVERSION.
no terminator found after TERMINATED and ENCLOSED field."
My VARCHAR2 column is already set to 4000, but I am still getting the same error in SQL*Loader.
Please help with a solution.
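If I understand correctly, "Field in data file exceeds maximum length" refers to SQL*Loader's own per-field buffer (255 characters by default), not the column's VARCHAR2 size, so the control file would need an explicit length on the long column (a sketch; the column name is taken from the error message above):
<code>COLLATERAL_FUNDING_REQUIREMENT char(4000)</code>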
Categories: DBA Blogs
Execution Plan sharing across containers
If a SQL statement in PDB container A is <b>exactly</b> the same SQL statement as in PDB container B, will they share the same execution plan, even though the owner of the tables and the user owning the session (including GTTs) will be different in each container?
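One way to observe what actually happens is to look at the child cursors per container (a sketch; the /* test */ comment is just a marker I would add to the statement to find it, and CON_ID identifies the PDB):
<code>SELECT sql_id, con_id, child_number, plan_hash_value
FROM   v$sql
WHERE  sql_text LIKE 'select /* test */%';</code>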
Categories: DBA Blogs
Inline and Out-of-Line LOB Storage with update
In the Oracle Release 19 document Database SecureFiles and Large Objects Developer's Guide
(https://docs.oracle.com/en/database/oracle/oracle-database/19/adlob/LOB-storage-with-applications.html#GUID-B82B3C24-1FAF-4661-96A0-28241FD2A052)
there is the following description of "Inline and Out-of-Line LOB Storage":
============================== Oracle Docu =========================
13.3 LOB Storage Parameters
For a discussion of SECUREFILE parameters:
13.3.1 Inline and Out-of-Line LOB Storage
If you update a LOB that is stored out-of-line and the resulting LOB is less than approximately 4000 bytes, it is still stored out-of-line.
====================================================================
My question is whether this text is still true for all Oracle 19c releases:
"If you update a LOB that is stored out-of-line and the resulting LOB is less than approximately 4000 bytes, it is still stored out-of-line."
If yes, how can I construct test code to observe this behavior?
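A test skeleton I have in mind (a sketch; the table name is made up, and actually verifying where the LOB lives afterwards would need something like DBMS_SPACE.SPACE_USAGE on the LOB segment or a block dump):
<code>-- SecureFile CLOB allowed to live in-row when small enough
CREATE TABLE lob_test (id NUMBER, c CLOB)
  LOB (c) STORE AS SECUREFILE (ENABLE STORAGE IN ROW);

BEGIN
  -- ~8000 bytes: above the ~4000-byte in-row limit, so stored out-of-line
  -- (built in PL/SQL because SQL VARCHAR2 is capped at 4000 by default)
  INSERT INTO lob_test VALUES (1, TO_CLOB(RPAD('x', 8000, 'x')));
  COMMIT;
  -- shrink the LOB well below 4000 bytes; per the doc it should stay out-of-line
  UPDATE lob_test SET c = TO_CLOB('now a small value');
  COMMIT;
END;
/</code>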
Categories: DBA Blogs