9582.69043 gigabytes of physical read total bytes and increasing! [message #500185]
Thu, 17 March 2011 22:49
nabeelkhan
Messages: 73 Registered: May 2005 Location: Kuwait
Member
In EM, under Database Instance: PROD > Top Activity, I see the following:
physical read total bytes               62762844160   8082658164736   4183631872
cell physical IO interconnect bytes     62762844160   8082658164736   4183631872
physical read bytes                     62762844160   8082658164736   4183631872
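The same counters can also be watched directly against the session (an illustrative query, not from EM; :sid is the session's SID):

select sn.name, st.value
  from v$sesstat st
  join v$statname sn on sn.statistic# = st.statistic#
 where st.sid = :sid
   and sn.name in ('physical read total bytes',
                   'physical read bytes',
                   'cell physical IO interconnect bytes');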
The session is running the following update procedure:
declare
  FM_BBBB MT.BBBB_CODE%TYPE;
  l_start NUMBER;
  cursor code_upd is
    select /*+ parallel(FM_KWT_POP_BBBB_MISMATCH, 10) */ DDD_CID, DDD_BBBB, CCCC_BBBB
      from MT_MISMATCH;
begin
  -- Time regular updates.
  l_start := DBMS_UTILITY.get_time;
  FOR rec IN code_upd LOOP
    update /*+ parallel(MT, 10) nologging */ MT
       set BBBB_code = rec.CCCC_BBBB
     where source = 0
       and cid_no = rec.DDD_CID
       and BBBB_code = rec.DDD_BBBB;
    commit;
  END LOOP;
  DBMS_OUTPUT.put_line('Bulk Updates : ' || (DBMS_UTILITY.get_time - l_start));
end;
There are 9.5 million records in MT, but only 3 million of them have source=0, and there are 376K records in MT_MISMATCH. What I don't understand is why this is taking so much time and reading so many bytes. Both tables were analyzed before running this procedure.
Can someone shed some light on this? Is there any better way of doing the same job?
[Updated on: Thu, 17 March 2011 22:56]
Re: 9582.69043 gigabytes of physical read total bytes and increasing! [message #500189 is a reply to message #500185]
Thu, 17 March 2011 23:31
flyboy
Messages: 1903 Registered: November 2006
Senior Member
nabeelkhan wrote on Fri, 18 March 2011 04:49
Can someone shed some light on this?
Without knowing the table structures (relevant columns, constraints and indexes), it is just guessing; for example: if there is no index on the (SOURCE, CID_NO) columns of the MT table, then for each loop iteration this table has to be fully scanned (reading 376K times 9.5M = over 3.5 trillion rows).
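You can check what is actually indexed with a query like this (assuming the tables are in your own schema):

select index_name, column_position, column_name
  from user_ind_columns
 where table_name = 'MT'
 order by index_name, column_position;

If nothing there covers the update's predicates, a composite index (the name below is just an example) would let each iteration probe the table instead of scanning it:

create index mt_src_cid_ix on MT (source, cid_no, bbbb_code);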
nabeelkhan wrote on Fri, 18 March 2011 04:49
Is there any better way of doing the same job?
Probably one UPDATE statement. How many rows are in MT with SOURCE = 0? If DDD_CID is at least unique in MT_MISMATCH (and it should be, otherwise the result of that block would be nondeterministic), it would be as simple as (not tested):

update MT
   set BBBB_code = ( select CCCC_BBBB
                       from MT_MISMATCH
                      where ddd_cid = mt.cid_no
                        and ddd_bbbb = mt.bbbb_code )
 where source = 0
   and exists ( select CCCC_BBBB
                  from MT_MISMATCH
                 where ddd_cid = mt.cid_no
                   and ddd_bbbb = mt.bbbb_code );
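An equivalent MERGE (again not tested; it relies on the same assumption that DDD_CID is unique in MT_MISMATCH) would be:

merge into MT
using MT_MISMATCH mm
   on (MT.cid_no = mm.ddd_cid)
 when matched then update
      set MT.bbbb_code = mm.cccc_bbbb
    where MT.source = 0
      and MT.bbbb_code = mm.ddd_bbbb;

BBBB_CODE is deliberately kept in the WHERE clause of the update branch rather than in the ON clause, as a column referenced in the ON clause cannot be updated.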
Re: 9582.69043 gigabytes of physical read total bytes and increasing! [message #500190 is a reply to message #500189]
Thu, 17 March 2011 23:39
nabeelkhan
Messages: 73 Registered: May 2005 Location: Kuwait
Member
flyboy wrote on Fri, 18 March 2011 07:31
Without knowing the table structures (relevant columns, constraints and indexes), it is just guessing; for example: if there is no index on the (SOURCE, CID_NO) columns of the MT table, then for each loop iteration this table has to be fully scanned (reading 376K times 9.5M = over 3.5 trillion rows).
There is an index on both columns
flyboy wrote on Fri, 18 March 2011 07:31
Probably one UPDATE statement. How many rows are in MT with SOURCE = 0?
3.1 Million
flyboy wrote on Fri, 18 March 2011 07:31
If DDD_CID is at least unique in MT_MISMATCH (and it should be, otherwise the result of that block would be nondeterministic), it would be as simple as (not tested):

update MT
   set BBBB_code = ( select CCCC_BBBB
                       from MT_MISMATCH
                      where ddd_cid = mt.cid_no
                        and ddd_bbbb = mt.bbbb_code )
 where source = 0
   and exists ( select CCCC_BBBB
                  from MT_MISMATCH
                 where ddd_cid = mt.cid_no
                   and ddd_bbbb = mt.bbbb_code );
Let me try your example and see if that works. I killed the session, and at the time of killing it had read 27 TB.
Re: 9582.69043 gigabytes of physical read total bytes and increasing! [message #500214 is a reply to message #500190]
Fri, 18 March 2011 03:19
Roachcoach
Messages: 1576 Registered: May 2010 Location: UK
Senior Member
It's early, but I think I'm correct in saying that the way that code is running, it is accessing table MT 376K times. Depending on the indexing (or rather what, if anything, Oracle is doing with the indexes), it is conceivable that the code is doing 376K full table scans (FTS) of table MT.
Is the MT table wide, i.e. does it have lots of columns? I see you have given a row count, but is it a big table in terms of MB/GB? If it is a fat table, it may not fit into the cache, and accessing it 376K times might account for the physical reads you're seeing.
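You can get its size with something along these lines (assuming the table is in your own schema):

select segment_name, round(bytes / 1024 / 1024) mb
  from user_segments
 where segment_name = 'MT';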
I'm not really sure why it's forced into parallel to that degree, but that's a sideline. My main point is that in the past I have seen parallel processing at higher degrees stop using indexes in favour of FTS. That was on older versions doing specific queries, but it may be useful regardless.
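If you want to see what the hinted update is actually doing, an explain plan is a quick check (a sketch; the bind names are made up):

explain plan for
update /*+ parallel(MT, 10) */ MT
   set bbbb_code = :new_code
 where source = 0
   and cid_no = :cid
   and bbbb_code = :old_code;

select * from table(dbms_xplan.display);

A TABLE ACCESS FULL on MT in that plan, multiplied by 376K loop iterations, would account for the reads.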
Re: 9582.69043 gigabytes of physical read total bytes and increasing! [message #500312 is a reply to message #500299]
Sat, 19 March 2011 03:12
John Watson
Messages: 8960 Registered: January 2010 Location: Global Village
Senior Member
Quote: I am thinking to move it to a 4K block size tablespace
I do not believe that there would be any point in this. I know all the theory about different block sizes - but I have never seen any evidence to support it. This is a quote from the current release of the Oracle Uni performance tuning course:
"In recent years all the TPC performance tests have used an 8 KB block size. Larger and smaller block sizes have not given significant performance benefits."
I would be interested to know if anyone does have real evidence. I am increasingly seeing 16K used, particularly with RAC. I hear suggestions such as "the interconnect protocols will be used more efficiently with larger blocks". But before recommending such a huge change as moving to a different blocksize, I need some proof.
And in the meantime, I'll follow the old rule: "no-one ever got fired for choosing 8K blocks".