Re: READ wait events when inserting data into a CLOB

From: David Fitzjarrell <oratune_at_yahoo.com>
Date: Wed, 5 Dec 2012 07:55:58 -0800 (PST)
Message-ID: <1354722958.77609.YahooMailNeo_at_web121603.mail.ne1.yahoo.com>



LOBs handle undo differently from other columns -- before images of LOB data are stored in the LOB segment itself, provided MAXEXTENTS has not been reached and the tablespace containing the LOB segment has not filled to capacity. Additionally, Oracle reserves PCTVERSION of the storage allocated for LOBs for before images; if PCTVERSION is set too low, older images are overwritten and, you guessed it, an ORA-01555 error appears.

The PCTVERSION setting reserves that percentage of the total number of LOB chunks allocated during the update process. Say you have 400 chunks already allocated, PCTVERSION is 5, and you need to update 33 chunks: 33 additional chunks are allocated for the before images of that data. The transaction succeeds and commits, freeing those 33 chunks for the next update. But, wait, PCTVERSION is set to 5, so 5% of 400 (20 chunks) can't be touched, to preserve the before images. That leaves 13 chunks of the previous allocation available for the next update transaction. If the next update affects 20 chunks, then 7 additional chunks need to be allocated to complete that request. So it's possible that the CLOB update is scanning before images for available space for the next update.
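The chunk arithmetic above can be sketched as a quick back-of-the-envelope calculation (plain Python; the numbers are taken from the example and the variable names are illustrative, not any Oracle API):

```python
# Chunks already allocated to the LOB segment before the update.
allocated = 400
# PCTVERSION: percentage of allocated chunks reserved for before images.
pctversion = 5

# Chunks that cannot be reused after the commit.
reserved = allocated * pctversion // 100   # 5% of 400 = 20

# The first update needs 33 chunks; all are freed by the commit,
# except that the reserved portion must keep its before images.
freed_by_commit = 33
available = freed_by_commit - reserved     # 33 - 20 = 13

# The next update needs 20 chunks; anything beyond what is still
# available must be allocated fresh.
next_update = 20
newly_allocated = max(0, next_update - available)  # 20 - 13 = 7

print(reserved, available, newly_allocated)  # 20 13 7
```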

David Fitzjarrell



From: Martin Klier <Martin.Klier_at_klug-is.de>
To: ORACLE-L <oracle-l_at_freelists.org>
Sent: Wednesday, December 5, 2012 1:32 AM
Subject: READ wait events when inserting data into a CLOB

Dear listers,
I am seeing too many read I/O waits (db file sequential read) on a CLOB when inserting data, and I don't understand why.

Statement:
INSERT into TABLE (<list of all columns>) values (<list of all values>);

Runtime:
between 250ms and 500ms -- both far too long

Situation:
Usually I have around 200 of those inserts in one transaction. Thus, usually such a transaction is open for 50 seconds at least. It's an OLTP environment, so I can't drop indexes or constraints "before" loading, in fact I am loading and reading the table all the time. Inserting happens concurrently by multiple Java threads, all coming in via JDBC thin driver.
The table is not partitioned (Standard Edition). The table contains <7 million rows.
I can't APPEND, because the table is purged on a logical basis, and I don't like to have maintenance windows for reorganizing the segments. There are lots of NOT NULL and simple insert-value based check constraints on the table, but only one UNIQUE constraint (as part of the numeric primary key).
One of the columns is a CLOB, but there is no constraint on the CLOB column.

What I already did:
Looked up the execution plan. Here it is:



------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name | Starts | Cost  | A-Rows | A-Time      | Buffers | Reads  | Writes |
------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |      1 |     1 |      0 | 00:00:02.75 |    5064 |    840 |      2 |
|   1 |  LOAD TABLE CONVENTIONAL |      |      1 |       |      0 | 00:00:02.75 |    5064 |    840 |      2 |
------------------------------------------------------------------------------------------------------------

I tried to find out where all this runtime goes to, so I enabled 10046 level 12 tracing for this SQL, and the tkprof analysis shows:

Elapsed times include waiting on following events:

  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                        61        0.01          0.29
  direct path write                              30        0.00          0.01
  direct path read                               15        0.00          0.00

=> So now I know where my runtime is spent: reading!

The raw trace file tells me:

WAIT #139690535902720: nam='db file sequential read' ela= 8514 file#=5 block#950925 blocks=1 obj#�425 tim54644922207564
WAIT #139690535902720: nam='direct path write' ela= 376 file number=5 first dba956449 block cnt=1 obj#=-1 tim54644922208331
WAIT #139690535902720: nam='db file sequential read' ela= 321 file#=5 block#956449 blocks=1 obj#�425 tim54644922208727
WAIT #139690535902720: nam='direct path read' ela= 216 file number=5 first dba956449 block cnt=1 obj#�425 tim54644922208998
WAIT #139690535902720: nam='direct path write' ela= 375 file number=5 first dba956449 block cnt=1 obj#�425 tim54644922209458
WAIT #139690535902720: nam='db file sequential read' ela= 5471 file#=5 block#�07590 blocks=1 obj#�422 tim54644922215025
EXEC #139690535902720:c)99,e440,p=4,cr,cu8,mis=0,r=1,dep=0,og=1,plh=0,tim54644922215246
STAT #139690535902720 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL  (cr pr=4 pw=2 time415 us)'
CLOSE #139690535902720:c=0,e,dep=0,type=1,tim54644922216200

obj#�425 is my CLOB
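When tkprof output is not at hand, raw 10046 WAIT lines like the ones above can be aggregated with a few lines of script. A minimal sketch in Python (the regex covers only the `nam='...' ela=` prefix shown here, not the full trace grammar, and the sample lines are illustrative):

```python
import re
from collections import defaultdict

# Sum elapsed microseconds (ela=) per wait event name from raw
# 10046 trace lines of the form:
#   WAIT #...: nam='db file sequential read' ela= 8514 file#=5 ...
WAIT_RE = re.compile(r"WAIT #\d+: nam='([^']+)' ela=\s*(\d+)")

def sum_waits(lines):
    totals = defaultdict(int)
    for line in lines:
        m = WAIT_RE.search(line)
        if m:
            totals[m.group(1)] += int(m.group(2))
    return dict(totals)

sample = [
    "WAIT #1: nam='db file sequential read' ela= 8514 file#=5",
    "WAIT #1: nam='direct path write' ela= 376 file number=5",
    "WAIT #1: nam='db file sequential read' ela= 321 file#=5",
]
print(sum_waits(sample))  # {'db file sequential read': 8835, 'direct path write': 376}
```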

Now my question:
Why is my INSERT reading a CLOB so heavily when inserting one row of data into my table?
I was not able to find out, so please share your thoughts.

Thank you very much in advance!

Best regards

--
Mit freundlichem Gruß


Martin Klier
Senior Oracle Database Administrator

--
http://www.freelists.org/webpage/oracle-l

Received on Wed Dec 05 2012 - 16:55:58 CET