BULK Collect, CTAS And PL/SQL with COMMIT AFTER 10,000 Records !! [message #184983]
Sat, 29 July 2006 09:30
vx_2004 (Junior Member; Messages: 10; Registered: July 2006)
BULK COLLECT IN PL/SQL ?
We have Oracle 10g, an OLTP database, and its standby database.
I have a query that uses a CREATE TABLE AS SELECT (CTAS) statement. For example:
CREATE TABLE TABLE3 AS
SELECT A.FIELD1,
       A.FIELD2,
       B.FIELD1 AS B_FIELD1   -- alias needed: duplicate column names fail in a CTAS
FROM   TABLE1 A,
       TABLE2 B
WHERE  A.FIELD1 = B.FIELD1;
This query runs VERY fast and creates TABLE3 with 300 million records in less than 5 minutes.
Now my question: since we have a standby DB, this SQL will generate a lot of redo logs. Is that correct?
Since it is a fairly new DB, this query almost killed the standby DB, according to the DBA.
So the DBA wants me to write PL/SQL code that commits after, say, every 10,000 records, which I believe is not a good idea and will definitely run for more than 10 minutes on average.
I suggested using the NOLOGGING option, as in:
CREATE TABLE TABLE3 NOLOGGING AS   -- NOLOGGING goes before AS, not after it
SELECT A.FIELD1,
       A.FIELD2,
       B.FIELD1 AS B_FIELD1
FROM   TABLE1 A,
       TABLE2 B
WHERE  A.FIELD1 = B.FIELD1;
But that will not work, because the standby DB gets updated from the logs.
So does anyone have experience handling things like this in a standby DB environment?
I would really appreciate your thoughts and suggestions to make it work!!
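(As far as I understand, if the DBA has put the primary in FORCE LOGGING mode, which is common with a standby, a NOLOGGING CTAS generates full redo anyway. I believe this can be checked with the following query, assuming access to V$DATABASE:)
---------------------------------------------------------------
-- If this returns YES, NOLOGGING operations still generate
-- full redo, so the standby keeps receiving everything.
SELECT force_logging FROM v$database;
---------------------------------------------------------------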
So my question is: will the DBA's idea of committing work?
Is it a good idea to commit after every 10,000 / 20,000 / 100,000 records,
given that Tom Kyte says a COMMIT should not be issued every 10,000 or 20,000 records?
Can anyone please take a look and let me know whether the following PL/SQL code will work for a table with 300 million rows?
CODE
====
---------------------------------------------------------------
CREATE OR REPLACE PROCEDURE fast_proc (p_array_size IN PLS_INTEGER DEFAULT 100)
IS
    -- Collection type matching the target table's row structure
    TYPE array_t IS TABLE OF TABLE3%ROWTYPE;
    l_data array_t;
    CURSOR c IS
        SELECT A.FIELD1,
               A.FIELD2,
               B.FIELD1 AS B_FIELD1   -- alias avoids a duplicate column name
        FROM   TABLE1 A,
               TABLE2 B
        WHERE  A.FIELD1 = B.FIELD1;
BEGIN
    OPEN c;
    LOOP
        FETCH c BULK COLLECT INTO l_data LIMIT p_array_size;
        -- Bulk-bind the whole fetched batch into the target table
        FORALL i IN 1 .. l_data.COUNT
            INSERT INTO TABLE3 VALUES l_data(i);
        -- Test %NOTFOUND after the FORALL so the last partial batch is inserted
        EXIT WHEN c%NOTFOUND;
    END LOOP;
    CLOSE c;
    COMMIT;   -- single commit at the end (no interim commits here)
END fast_proc;
/
---------------------------------------------------------------
Also, the objective is to load 300 million rows into TABLE3 from the join of two tables (possibly more).
And when there are over 150 columns coming from the join of the two tables, will PL/SQL be able to handle that as an array?
Another question: if I want to COMMIT after, say, every 10,000 records in between the bulk inserts, is that allowed? What is the best way? Something like the sketch below is what I have in mind.
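(An untested sketch of what I mean, reusing the same cursor and array type as above and committing once per fetched batch. Note that fetching across commits can raise ORA-01555 if the needed undo has aged out:)
---------------------------------------------------------------
CREATE OR REPLACE PROCEDURE fast_proc_commit (p_array_size IN PLS_INTEGER DEFAULT 10000)
IS
    TYPE array_t IS TABLE OF TABLE3%ROWTYPE;
    l_data array_t;
    CURSOR c IS
        SELECT A.FIELD1,
               A.FIELD2,
               B.FIELD1 AS B_FIELD1
        FROM   TABLE1 A,
               TABLE2 B
        WHERE  A.FIELD1 = B.FIELD1;
BEGIN
    OPEN c;
    LOOP
        FETCH c BULK COLLECT INTO l_data LIMIT p_array_size;
        FORALL i IN 1 .. l_data.COUNT
            INSERT INTO TABLE3 VALUES l_data(i);
        COMMIT;   -- interim commit: one per batch of p_array_size rows
        EXIT WHEN c%NOTFOUND;
    END LOOP;
    CLOSE c;
END fast_proc_commit;
/
---------------------------------------------------------------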
And we just need to create the table TABLE3 first and use its %ROWTYPE, correct? Along the lines of the example below?
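(That is, something like this to create TABLE3 empty first, so that TABLE3%ROWTYPE exists when the procedure is compiled:)
---------------------------------------------------------------
-- Create TABLE3 with the structure of the join but no rows,
-- so the procedure can reference TABLE3%ROWTYPE.
CREATE TABLE TABLE3 AS
SELECT A.FIELD1,
       A.FIELD2,
       B.FIELD1 AS B_FIELD1
FROM   TABLE1 A,
       TABLE2 B
WHERE  1 = 0;   -- condition is never true: structure only, no data
---------------------------------------------------------------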
I appreciate your feedback in advance!!
Re: BULK Collect, CTAS And PL/SQL with COMMIT AFTER 10,000 Records !! [message #185035 is a reply to message #185016]
Sun, 30 July 2006 07:45
rleishman (Senior Member; Messages: 3728; Registered: October 2005; Location: Melbourne, Australia)
Um, if you've gotta update the standby database then you've gotta update the standby database.
I didn't think CTAS created that much archive log action. I have only a very rudimentary knowledge of archive logs, but I thought it would log the "action" of the CTAS (which it would apply to the standby) rather than the results (the actual block writes themselves). Like I say, I could be wrong. However, if I'm right, then the row-by-row approach will write far more archive logs, so I don't see how the situation is improved.
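One way to settle it would be to measure the session's redo before and after the CTAS. Something like the following, assuming you can query the V$ views:
---------------------------------------------------------------
-- Run this before and after the CTAS in the same session;
-- the difference is the redo (in bytes) that statement generated.
SELECT n.name, s.value
FROM   v$mystat s, v$statname n
WHERE  s.statistic# = n.statistic#
AND    n.name = 'redo size';
---------------------------------------------------------------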
I don't think I've helped, but good luck anyway.
Ross Leishman
Re: BULK Collect, CTAS And PL/SQL with COMMIT AFTER 10,000 Records !! [message #185037 is a reply to message #185035]
Sun, 30 July 2006 07:57
vx_2004 (Junior Member; Messages: 10; Registered: July 2006)
I think the amount of log generated would be the same.
As for the standby DB almost getting killed while I was running my script, I think what happened was this: I did a DROP TABLE statement in one TOAD session and created the table in another session, and because some fields needed to be added, I did this a couple of times that day. So I think Oracle had some sort of problem deciphering the actions needed, and maybe it all got messed up.
The logic given by the DBA is that if you commit after every 10,000 records, those commits also apply faster to the standby DB.
So it is the frequent committing that should help the DB in the background.
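(I suppose we could ask the DBA to watch the apply lag on the standby while the load runs to verify that. Assuming a Data Guard setup where V$DATAGUARD_STATS is available, perhaps something like:)
---------------------------------------------------------------
-- On the standby: how far behind redo transport and apply are.
SELECT name, value, unit
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');
---------------------------------------------------------------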
I wrote a stored procedure to do the bulk inserts, committing after every 10,000 records, and it takes about 15-18 minutes to insert the records into this new table when there are 300,000 records.
So I don't know what the logic is.