Tom Kyte
Best way to add 60 CLOB columns to an Oracle table around 17TB in size
Hi Team,
This is the first time I am asking a question on AskTom; I am very thankful for the solutions you have provided and for always helping people.
We want to add 60 CLOB columns to a table that is around 17TB in size. This is a data warehouse database, and data loading into this table is completely stopped for now, because the source table in MS SQL Server has 60 additional columns. Kindly suggest the best and most efficient way to add the columns to the table.
The table is partitioned.
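For reference, a minimal sketch of the DDL form (table, column, and tablespace names are placeholders): adding nullable CLOB columns with no DEFAULT does not rewrite existing rows, although an empty LOB segment is created per partition.
<code>
-- Hypothetical names; existing rows simply get NULL in the new columns
ALTER TABLE big_dw_table ADD (
  note_col1 CLOB,
  note_col2 CLOB
)
LOB (note_col1, note_col2) STORE AS SECUREFILE (
  TABLESPACE lob_ts
  ENABLE STORAGE IN ROW
);
</code>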
Regards
Ojas
Categories: DBA Blogs
DBlink not connecting to MS SQL database
Dear Tom
I have Oracle EBS 12.8 with Oracle 12c. I upgraded my database to 19c with the latest patches, and everything is working fine except my DB link to another MS SQL Server. Note that it was working before the upgrade. I dropped the DB link and re-created it, but I have had no luck connecting.
The Oracle gateway installed on my SQL Server is 11g.
When I connect, I get the following error:
<code>ORA-28511: lost RPC connection to heterogeneous remote agent using SID=ORA-28511: lost RPC connection to heterogeneous remote agent using SID=(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=wapps24)(PORT=1521))(CONNECT_DATA=(SID=VestioOracleFinIntegration)))
ORA-02063: preceding line from VESTIOORACLEFININTEGRATION
Process ID: 42099</code>
My DBlink setup is
<code>OWNER PUBLIC
DB_LINK VESTIOORACLEFININTEGRATION.WAFRA.LOCAL
USERNAME OracleFin
HOST VESTIOORACLEFININTEGRATION
CREATED 16-APR-25
HIDDEN NO
SHARD_INTERNAL NO
VALID YES
INTRA_CDB NO</code>
Categories: DBA Blogs
Question: MEMOPTIMIZE - Fast ingest
Hi,
I see the Oracle documentation shows an example insert statement for fast ingest:
<code>INSERT /*+ MEMOPTIMIZE_WRITE */ INTO test_fast_ingest VALUES (1,'test');</code>
Can fast ingest using MEMOPTIMIZE be used for an INSERT ... SELECT statement, such as:
<code>INSERT /*+ MEMOPTIMIZE_WRITE */ INTO test_fast_ingest SELECT col1, col2, col3 FROM another_table;</code>
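For context, fast ingest only applies to tables that have been enabled for it; a minimal setup sketch (the table name follows the question, the DDL itself is an assumption about the environment):
<code>
-- The target table must be marked for fast ingest
CREATE TABLE test_fast_ingest (
  id  NUMBER,
  txt VARCHAR2(30)
) MEMOPTIMIZE FOR WRITE;

-- Single-row insert as shown in the documentation
INSERT /*+ MEMOPTIMIZE_WRITE */ INTO test_fast_ingest VALUES (1, 'test');
</code>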
Thanks,
Girish
Categories: DBA Blogs
Parallel index creation on different table on same tablespace
I'm having trouble with the parallel execution of Oracle SQL queries that create indexes on <b>separate tables (assume each table has a single index)</b>.
The performance boost isn't close to what I was hoping for. I therefore need your help determining what to check on the Oracle DB host in order to troubleshoot the issue.
I found that all the SQL queries have WAIT_CLASS 'USER I/O'. I also checked IOPS through the NetData UI in the parallel index-creation environment; the maximum observed was near ~24%.
Note that we are already aware the PARALLEL hint may help here, but we are looking for further improvement on top of that.
Please let me know whether any tuning is required on the Oracle DB side, or whether we should assume there is no benefit to running index creation concurrently (even on different tables) and can only specify a PARALLEL degree to improve performance.
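For reference, a sketch of a single build using intra-statement parallelism (object names are placeholders); if the waits are USER I/O, running more concurrent builds largely queues on the same storage:
<code>
-- Hypothetical names: each CREATE INDEX can use its own parallel degree
CREATE INDEX app.t1_ix ON app.t1 (col_a)
  TABLESPACE idx_ts
  PARALLEL 8
  NOLOGGING;

-- Reset the attributes afterwards so queries are not parallelized unexpectedly
ALTER INDEX app.t1_ix NOPARALLEL;
ALTER INDEX app.t1_ix LOGGING;
</code>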
Categories: DBA Blogs
Problems with Entity Framework and Oracle.ManagedDataAccess
I created a new project in Visual Studio 2019 with Entity Framework v6.5.1, Oracle.ManagedDataAccess v23.8, and Oracle.ManagedDataAccess.EntityFramework v23.8.
When I add an ADO.NET Entity Data Model, I select EF Designer from the database. It then prompts me for the connection string. I add it, test it, and everything works. When I click Next, a window appears asking me to select the EntityFramework version. The wizard automatically exits and returns me to the first screen to reselect the ADO.NET element. Therefore, it doesn't create the model or the EDMX file.
This doesn't happen when I use Oracle.ManagedDataAccess v19.27 and Oracle.ManagedDataAccess.EntityFramework v19.27.
Can't I update to the latest version? Please help.
Categories: DBA Blogs
Failed to set wallet path to system:
Hi,
While watching the video "Developer Coaching - Oracle Database 23ai - Zero Slides, Zero Marketing, 100% Live Demo", it was mentioned that we can set the Oracle wallet path to system: to access the Windows-level certificates. But when I tried the same using the PL/SQL code below, I couldn't succeed; I'm getting the error "ORA-29248: an unrecognized WRL was used to open a wallet".
Kindly let me know where I went wrong with the code, and how to use system: as mentioned.
Oracle Database: 19.3.0.0.0
Error: ORA-29248: an unrecognized WRL was used to open a wallet
Command Used: utl_http.set_wallet('system:',null);
full code is given below:
<code>
declare
  p_url           varchar2(200) := 'https://www.oracle.com';
  l_http_request  utl_http.req;
  l_http_response utl_http.resp;
  l_text          varchar2(32767);
begin
  utl_http.set_wallet('system:', null);
  l_http_request  := utl_http.begin_request(p_url);
  l_http_response := utl_http.get_response(l_http_request);
  utl_http.read_text(l_http_response, l_text, 32766);
  dbms_output.put_line(substr(l_text, 1, 100));
exception
  when utl_http.end_of_body then
    utl_http.end_response(l_http_response);
end;
</code>
Thanks for your support in advance.
Categories: DBA Blogs
Gather stale statistics ASAP
Hello, I have a table FCM partitioned on the (Q) column BY RANGE INTERVAL (NUMTODSINTERVAL(7, 'DAY')).
The issue is that the "current/active" partition becomes stale several times a day.
The vast majority of DML on the table is INSERTs (basic inserts with VALUES, something like INSERT INTO table VALUES (:1, :2)).
While we do have a maintenance window, it is scheduled at night, so during the day we often have stale statistics for the current/active partition.
Here is the question:
Is there any optimal way (something like a trigger) to gather statistics on the partition AS SOON as it becomes stale?
While it is possible to schedule a manual job to check for stale partitions, that still leaves a window between runs in which the partition can become stale. I don't really like the idea of a job running every 20 minutes, but as a last resort it can also be done.
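For reference, a sketch of gathering statistics for just the active partition on demand (the partition name is hypothetical; interval partitions get system-generated names that would need to be looked up in USER_TAB_PARTITIONS first):
<code>
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => USER,
    tabname     => 'FCM',
    partname    => 'SYS_P1234',   -- hypothetical interval partition name
    granularity => 'PARTITION',
    cascade     => TRUE);
END;
/
</code>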
Categories: DBA Blogs
Fetch last record each for multiple filter criteria
Hi Team
Please find reproducible data in the shared LiveSQL link -> https://livesql.oracle.com/ords/livesql/s/c9r56fcrqizrcrc5aiep4hrpn
The requirement is to fetch the last record by the "t_id" column value for each combination of the "code" and "status" column values.
Looking for options on how to fetch the data below with SQL.
<code>
ID T_ID CODE STATUS
---------- ---------- ------------- ------------
<id_val> t1 Lookup Enable -- fetch last record
<id_val> t1 Lookup Disable -- fetch last record
<id_val> t1 Sync Enable -- fetch last record
<id_val> t1 Sync Disable -- fetch last record
<id_val> t2 Lookup Enable -- fetch last record
<id_val> t2 Lookup Disable -- fetch last record
<id_val> t2 Sync Enable -- fetch last record
<id_val> t2 Sync Disable -- fetch last record
<id_val> t2 Search Enable -- fetch last record
<id_val> t2 Search Disable -- fetch last record
</code>
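One common pattern for this kind of "last row per group" requirement is ROW_NUMBER(); a sketch using the columns from the sample, with a placeholder table name and the assumption that "last" means the highest ID per group:
<code>
SELECT id, t_id, code, status
FROM (
  SELECT t.*,
         ROW_NUMBER() OVER (
           PARTITION BY t_id, code, status
           ORDER BY id DESC) AS rn
  FROM   your_table t          -- hypothetical table name
)
WHERE rn = 1
ORDER BY t_id, code, status;
</code>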
Appreciate all the help provided on AskTom.
Thanks..
Categories: DBA Blogs
Clarification regarding Oracle Advanced Queuing partitioning / parallel processing methods
<b>Use Case:</b> We are implementing partitioning in AQ to enable batch processing with array dequeue while ensuring:
1. Ordering maintained per partition (FIFO).
2. The possibility to array-dequeue on the application side (unless a more performant solution is provided).
<b>Approach & Observation:</b>
1. We partition messages using a sort of partition_id (user-defined in the queue table), assigning each message a hash (e.g., 1-5) at enqueue time.
<code>
PROCEDURE Enqueue_Event_(
partition_id_ NUMBER,
message_content_ JSON_OBJECT_T
) IS
queue_payload_ TEST_PAYLOAD_TYPE;
r_enqueue_options_ DBMS_AQ.ENQUEUE_OPTIONS_T;
r_message_properties_ DBMS_AQ.MESSAGE_PROPERTIES_T;
v_message_handle_ RAW(16);
BEGIN
-- Construct payload with provided partition_id
queue_payload_ := TEST_PAYLOAD_TYPE(partition_id_, message_content_.to_blob());
-- Enqueue message
DBMS_AQ.ENQUEUE(
queue_name => OUT_QUEUE_NAME,
enqueue_options => r_enqueue_options_,
message_properties => r_message_properties_,
payload => queue_payload_,
msgid => v_message_handle_
);
COMMIT;
END Enqueue_Event_;
/
</code>
2. On the consumer side (polling), we set the dequeue options with <code>navigation := DBMS_AQ.FIRST_MESSAGE</code> to fetch the first message for ordering, and <code>deq_condition := 'partition_id = 1'</code> to filter messages per partition:
<code>
PROCEDURE Dequeue_Events_(
partition_id_ NUMBER
) IS
r_dequeue_options_ DBMS_AQ.DEQUEUE_OPTIONS_T;
r_message_properties_ DBMS_AQ.MESSAGE_PROPERTIES_ARRAY_T;
v_message_handle_ DBMS_AQ.MSGID_ARRAY_T;
TYPE payload_tab_t IS TABLE OF TEST_PAYLOAD_TYPE INDEX BY PLS_INTEGER; -- DEQUEUE_ARRAY needs a collection of payloads
queue_payload_ payload_tab_t;
batch_size_ CONSTANT PLS_INTEGER := 10; -- Adjust batch size as needed
BEGIN
r_dequeue_options_.navigation := DBMS_AQ.FIRST_MESSAGE;
r_dequeue_options_.dequeue_mode := DBMS_AQ.REMOVE;
-- Condition to filter by partition_id
r_dequeue_options_.condition := 'tab.partition_id = ' || TO_CHAR(partition_id_);
-- Array dequeue
DBMS_AQ.DEQUEUE_ARRAY(
queue_name => IN_QUEUE_NAME,
dequeue_options => r_dequeue_options_,
message_properties => r_message_properties_,
payload => queue_payload_,
num_msgs => batch_size_,
msgid => v_message_handle_
);
COMMIT;
END Dequeue_Events_;
/
</code>
<b>Questions & Clarification:</b>
Does setting <b>DBMS_AQ.FIRST_MESSAGE</b> override the lack of an ordering guarantee with deq_condition?
Can we reliably expect FIFO ordering per partition even when using deq_condition?
If not, what is the best approach to ensure parallel processing per partition while preserving order?
Is there a more efficient way to implement partitioned parallel dequeuing without sharded queues?
<b>Links:
</b>
deq_condition ordering not guaranteed - https://docs.oracle.com/en/database...
Categories: DBA Blogs
ORA-00604: error occurred at recursive SQL level
Hi,Tom:
The recycle bin is empty, and no DDL statements similar to
<code>drop BIN$MmWdiJOqKt7gY0eQFAotmw==$0</code> were executed.
However, the following error was reported:
Caused by: java.sql.BatchUpdateException: ORA-00604: error occurred at recursive SQL level 1
ORA-38301: can not perform DDL/DML over objects in Recycle Bin
After performing the following operations, the error no longer occurred:
1. I disabled the Space Advisor:
<code>BEGIN
  DBMS_AUTO_TASK_ADMIN.disable(client_name => 'auto space advisor', operation => NULL, window_name => NULL);
END;
/</code>
2. I dropped the table that caused the error and then recreated it.
I don't know the reason for this issue. Could you provide some guidance?
Thank you very much!
Best Regards
Categories: DBA Blogs
Want to import a file of version 24.2.2
How can I import a file of version 24.2.2 into 24.1.7?
Categories: DBA Blogs
List Partition - Range SubPartition
Hello,
Please advise on how to create a list partition with range subpartitions.
Also, for the range subpartitions we would like custom-defined names based on each day, like DEL_AUG_082019.
<code>CREATE TABLE "DBB_USER"."STG_ES_STS"
(
"CS_ID" CHAR(7), "CWIN" NUMBER(9),
"PGM_TYP_CD" CHAR(2),
"ES_STS_ID" NUMBER(9),
"BGN_DT" DATE,
"END_DT" DATE,
"CRT_USR_ID" VARCHAR2(15),
"CRT_DTM" DATE,
"UPD_USR_ID" VARCHAR2(15),
"UPD_DTM" DATE,
"HIST_IND" CHAR(1),
"LAST_RTRV_DT" DATE,
"SRC_CITY_CD" VARCHAR2(2),
CONSTRAINT "STG_XPK_ES_STS_RSN"
PRIMARY KEY ("CS_ID", "CWIN", "PGM_TYP_CD", "ES_STS_ID", "ES_RSN_ID", "SRC_CITY_CD", "OP_TIME") VALIDATE )
TABLESPACE "DBB_USER" PCTFREE 10 INITRANS 1 MAXTRANS 255 STORAGE ( BUFFER_POOL DEFAULT)
PARTITION BY LIST ("SRC_CITY_CD")
(PARTITION "DEL" VALUES ('34') TABLESPACE "DBB_USER" PCTFREE 10 INITRANS 1 MAXTRANS 255 STORAGE ( BUFFER_POOL DEFAULT) ,
PARTITION "MUM" VALUES ('07') TABLESPACE "DBB_USER" PCTFREE 10 INITRANS 1 MAXTRANS 255 STORAGE ( BUFFER_POOL DEFAULT) ,
PARTITION "CAL" VALUES ('57') TABLESPACE "DBB_USER" PCTFREE 10 INITRANS 1 MAXTRANS 255 STORAGE ( BUFFER_POOL DEFAULT) ,
PARTITION "CHN" VALUES ('42') TABLESPACE "DBB_USER" PCTFREE 10 INITRANS 1 MAXTRANS 255 STORAGE ( BUFFER_POOL DEFAULT) ,
PARTITION "UNK" VALUES ('-1') TABLESPACE "DBB_USER" PCTFREE 10 INITRANS 1 MAXTRANS 255 STORAGE ( BUFFER_POOL DEFAULT) )
PARALLEL 4 ENABLE ROW MOVEMENT
</code>
Sample Data:
TABLE_OWNER TABLE_NAME PARTITION_NAME SUBPARTITION_NAME
-------------------- ----------------------------------- -------------------- ------------------------------
DBB_USER STG_ES_STS DEL DEL_AUG_012019
DBB_USER STG_ES_STS DEL DEL_AUG_022019
DBB_USER STG_ES_STS DEL DEL_AUG_032019
DBB_USER STG_ES_STS DEL DEL_AUG_042019
DBB_USER STG_ES_STS DEL DEL_AUG_052019
DBB_USER STG_ES_STS DEL DEL_AUG_092019
DBB_USER STG_ES_STS MUM DEL_AUG_012019
DBB_USER STG_ES_STS MUM DEL_AUG_022019
DBB_USER STG_ES_STS MUM DEL_AUG_032019
DBB_USER STG_ES_STS MUM DEL_AUG_042019
DBB_USER STG_ES_STS MUM DEL_AUG_052019
DBB_USER STG_ES_STS MUM DEL_AUG_092019
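As a starting point, a sketch of the composite list-range form with explicitly named daily subpartitions (the date column CRT_DTM and the boundary dates are assumptions based on the DDL above). Note that subpartition names must be unique across the whole table, so each list partition needs its own name prefix (the MUM rows in the sample data could not actually reuse the DEL_* names):
<code>
CREATE TABLE stg_es_sts_demo (
  src_city_cd VARCHAR2(2),
  crt_dtm     DATE
)
PARTITION BY LIST (src_city_cd)
SUBPARTITION BY RANGE (crt_dtm)
(
  PARTITION del VALUES ('34')
  (
    SUBPARTITION del_aug_012019 VALUES LESS THAN (DATE '2019-08-02'),
    SUBPARTITION del_aug_022019 VALUES LESS THAN (DATE '2019-08-03')
  ),
  PARTITION mum VALUES ('07')
  (
    SUBPARTITION mum_aug_012019 VALUES LESS THAN (DATE '2019-08-02'),
    SUBPARTITION mum_aug_022019 VALUES LESS THAN (DATE '2019-08-03')
  )
);
</code>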
Categories: DBA Blogs
java.sql.SQLException: ORA-08103: object no longer exists
Hi,
I am getting the error ORA-08103: object no longer exists when a select query uses a type object in a procedure.
Below is the code used for the select query:
<code>
procedure upp_rate is
   l_typ_detail_list typ_detail_list;
begin
   select typ_oc_sec_detail(
            sec_sid,
            fed_open_rate,
            eql_avg_rate_5,
            eql_avg_rate_all,
            ast_loan_rate_avg,
            src_date,
            null,
            null,
            shortcode,
            cusip,
            bbg_ticker,
            sedol,
            isin,
            security_type,
            dtc_eligible_derived,
            country_of_quotation,
            dtc_do_chill_ind,
            settlement_agency)
   bulk collect into l_typ_detail_list
   from vw_details;
end upp_rate;
</code>
The collection type is declared as <code>type typ_detail_list is table of typ_detail;</code> and the typ_detail object type is defined in the database.
Please can you help me find out why the select query throws the ORA-08103 error, and a resolution for the same?
Thanks,
Raja
Categories: DBA Blogs
How do I calculate the temp space and undo space required for "ALTER TABLE MOVE ... LOB ..."?
Greetings,
How do I calculate the temp space and undo space required for <code>ALTER TABLE $SCHEMA.$table MOVE LOB ($lob) STORE AS (TABLESPACE $NEW_LOB_TABLESPACE)</code>?
We have very large segments, and currently have a small undo tablespace.
<code>
-- show max single segment size
SELECT tablespace_name, segment_type, MAX(CEIL((bytes / 1024 / 1024) / 1024)) AS max_single_segment_size_gb
FROM dba_segments
WHERE (SEGMENT_TYPE like '%LOB%' or SEGMENT_TYPE = 'TABLE')
AND tablespace_name like 'ELF%'
group by tablespace_name, SEGMENT_TYPE
/
TABLESPACE_NAME SEGMENT_TYPE MAX_SINGLE_SEGMENT_SIZE_GB
------------------------------ ------------------ --------------------------
S1_DATA TABLE 198
S1_LOB LOBSEGMENT 2350
-- Show UNDO Tablespace
SELECT
tablespace_name,
CEIL(bytes / 1024 / 1024 / 1024) AS total_size_gb
FROM dba_data_files
WHERE tablespace_name like 'UNDO%'
ORDER BY tablespace_name, file_name
/
TABLESPACE_NAME TOTAL_SIZE_GB
------------------------------ -------------
UNDO_1 25
UNDO_2 25
UNDO_3 25
UNDO_4 25
</code>
-Tom
Categories: DBA Blogs
Is a primary key required in a cursor when using WHERE CURRENT OF?
I searched extensively but couldn't find clarity on whether the primary key (in this case, a composite of Foo1ID and Foo2ID) should be selected into foo_cursor, even if it is not used in the procedure.
Is it sufficient to rely on some internal mechanism (ROWID?) to uniquely identify and update the records in the loop, or is it better to explicitly select the primary key?
If possible, can you point me to the documentation section for this along with your answer?
Thanks
<code>CREATE TABLE Foo (
Foo1ID VARCHAR2(255),
Foo2ID VARCHAR2(255),
FooCategory VARCHAR2(255),
FooValue NUMBER,
PRIMARY KEY (Foo1ID, Foo2ID)
);
CREATE OR REPLACE PROCEDURE UpdateFooValue(
p_FooCategory Foo.FooCategory%TYPE
) AS
CURSOR foo_cursor IS
-- Should we include Foo1ID, Foo2ID here ?
SELECT Foo1ID, Foo2ID, FooValue FROM Foo
WHERE Foo.FooCategory = p_FooCategory
FOR UPDATE;
v_FooValue Foo.FooValue%TYPE := 0;
BEGIN
FOR foo_rec IN foo_cursor LOOP
IF foo_rec.FooValue > 50 THEN
v_FooValue := 25;
ELSE
v_FooValue := 10;
END IF;
UPDATE Foo SET FooValue = v_FooValue WHERE CURRENT OF foo_cursor;
END LOOP;
END UpdateFooValue;</code>
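For comparison, a sketch of an explicit-ROWID variant (an illustration, not necessarily the recommended form): for a FOR UPDATE cursor on a plain table, WHERE CURRENT OF identifies the row by its ROWID, so the primary key columns are not needed in the select list for row identification.
<code>
CREATE OR REPLACE PROCEDURE UpdateFooValue_Rowid(
    p_FooCategory Foo.FooCategory%TYPE
) AS
    CURSOR foo_cursor IS
        SELECT ROWID AS rid, FooValue FROM Foo
        WHERE Foo.FooCategory = p_FooCategory
        FOR UPDATE;
BEGIN
    FOR foo_rec IN foo_cursor LOOP
        UPDATE Foo
        SET    FooValue = CASE WHEN foo_rec.FooValue > 50 THEN 25 ELSE 10 END
        WHERE  ROWID = foo_rec.rid;   -- same row the cursor just fetched
    END LOOP;
END UpdateFooValue_Rowid;
</code>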
Categories: DBA Blogs
Query block names disappear
Sometimes when I use a QB_NAME hint and later look at the execution plan, there is no record of that query block name.
Why is this the case?
Categories: DBA Blogs
SGA size
Why is the size of the shared pool in the query (54,361,560) not exactly the same as in the init.ora (52,428,800)?
It is slightly larger.
<code>
SVRMGR> select pool, to_char(sum(bytes),'999,999,999') from v$sgastat
        group by pool;
POOL        TO_CHAR(SUM(
----------- ------------
java pool     20,000,768
shared pool   54,361,560
              33,890,464
3 rows selected.

SVRMGR> show parameter shared
NAME                                TYPE    VALUE
----------------------------------- ------- ------------------------------
hi_shared_memory_address            integer 0
shared_memory_address               integer 0
shared_pool_reserved_size           string  2621440
shared_pool_size                    string  52428800
</code>
Thanks.
Categories: DBA Blogs
SQL effectiveness metrics
Dear AskTom team members,
Is it possible to measure SQL effectiveness via a computed throw-away row count?
It's clear that the optimizer tries to build the plan to minimize throw-away rows for the next steps in the plan, so it tries to send only relevant rows to the next operation.
I get that a huge throw-away at the beginning is very beneficial for the following steps.
My intent is to find big SQLs (in size of queried data and running time) that do a lot of ineffective work on irrelevant data, in order to find terrible design of tables, partitioning, indexes, and all data access paths.
So the findings should be something like a step in the plan and a ratio of rowsource_result (rows_sent_from_plan_step) / rowsource_out (rows_sent_to_next_step, i.e., the parent operation).
I know it's possible to see this in the plan statistics, in the actual_rows metric of a plan step.
But is there a method to get those metrics for all of a schema's SQLs, in a similar way to the top-ten SQLs measured by CPU, I/O, or memory consumption?
I know such a metric could be misleading; it is meant for finding ineffective database design and SQLs that waste a lot of work and resources.
I tried some SQLs based on SQL plan statistics, but I'm not sure they are precise.
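One hedged sketch of such a query against the cached plan statistics (V$SQL_PLAN_STATISTICS_ALL; the ratio definition follows the question, and the row counts are only populated for cursors executed with STATISTICS_LEVEL=ALL or the GATHER_PLAN_STATISTICS hint):
<code>
-- Per-step "throw-away" ratio: rows a step produced vs. rows its parent kept
SELECT s.sql_id,
       s.id               AS plan_step,
       s.operation,
       s.last_output_rows AS rows_produced,
       p.last_output_rows AS rows_kept_by_parent,
       ROUND(p.last_output_rows / NULLIF(s.last_output_rows, 0), 4) AS keep_ratio
FROM   v$sql_plan_statistics_all s
JOIN   v$sql_plan_statistics_all p
       ON  p.sql_id          = s.sql_id
       AND p.child_number    = s.child_number
       AND p.plan_hash_value = s.plan_hash_value
       AND p.id              = s.parent_id
WHERE  s.last_output_rows > 0
ORDER  BY s.last_output_rows DESC
FETCH FIRST 20 ROWS ONLY;
</code>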
Thank You.
Tomas.
Categories: DBA Blogs
PLS-00201: identifier 'APEX_DATA_EXPORT.T_EXPORT' must be declared
Hello everyone,
I am trying to create a CSV file, attach it to an email, and send the email, but I am receiving the following error: PLS-00201: identifier 'APEX_DATA_EXPORT.T_EXPORT' must be declared
Below is the code I am attempting to run:
<code>declare
l_context apex_exec.t_context;
l_export apex_data_export.t_export;
l_mail_id NUMBER;
l_workspace_id NUMBER;
name varchar2(225);
begin
select LNAME || '_' || FNAME || '_' || MNAME into name
from IN_PROCESSING
where :P23_PERS_ID = PERS_ID;
l_workspace_id := apex_util.find_security_group_id (p_workspace => 'OPERATIONS MANAGEMENT');
apex_util.set_security_group_id (p_security_group_id => l_workspace_id);
l_context := apex_exec.open_query_context (
p_location => apex_exec.c_location_local_db,
p_sql_query => 'select i.EMPLOYEEID,
i.LNAME || '', '' || i.FNAME || '' '' || i.MNAME as NAME,
ASSIGNED_ORG,
PHONE_NUM,
(select p.LNAME || '', '' || p.FNAME || '' '' || p.MNAME from PERSONS p where i.SUPV = p.DISPLAY_NAME) as SUPERVISOR_NAME,
i.OFFICE_SYM,
i.POSITION_TYPE,
i.DUTY_TITLE || '' '' || i.RANK_GRADE as DUTY_TITLE_RANK_GRADE,
CYBERAWARE_DATE,
decode(i.GAIN_TYPE, ''Foreign National'', ''Yes'',NULL) as FOREIGN_NATIONAL,
COMPANY_NAME,
CONTR_NO,
CONTR_EXP_DATE
from IN_PROCESSING i
where PERS_ID = '||:P23_PERS_ID||';',
p_file_name => ''||name||'_test.csv');
l_export := apex_data_export.export (p_context => l_context, p_format => apex_data_export.c_format_csv);
apex_exec.close (l_context);
l_mail_id :=
APEX_MAIL.SEND (p_to => 'me@mail',
p_from => 'csv_test@us.af.mil',
p_body => 'words',
p_body_html => '<h2>Report is attached</h2>',
p_subj => 'Example Report');
APEX_MAIL.ADD_ATTACHMENT (p_mail_id => l_mail_id,
p_attachment => l_export.content_blob,
p_filename => ''||name||'_test.csv',
p_mime_type => 'text/csv');
apex_mail.push_queue;
end;</code>
And below is the error received:
ORA-06550: line 3, column 17:
PLS-00201: identifier 'APEX_DATA_EXPORT.T_EXPORT' must be declared
ORA-06550: line 0, column 0:
PL/SQL: Compilation unit analysis terminated
ORA-06512: at "SYS.DBMS_SQL", line 1721
1. declare
2. l_c...
Categories: DBA Blogs
Impdp with remap_table ran a full import
Hi Tom,
An impdp command with remap_table in a parfile was run on Oracle Linux, but it performed a full import. We've lost the impdp logs. How can we get information about the impdp command that was run and clarify the issue?
Thanks a lot
Regards!
Categories: DBA Blogs