RE: Excessive transaction tables consistent reads - undo records applied in 11.2.0.3

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Thu, 4 Oct 2012 13:31:05 -0400
Message-ID: <018501cda256$09cf7350$1d6e59f0$_at_rsiz.com>



+1

Especially for clarifying exactly what tosses that statistic at us, and for reminding me that, thanks to adaptive direct path reads, a "simple" FTS is no longer good enough. That does require something special.

By the way, my dodge for selectively cleaning blocks until they give us that feature (not holding my breath, even though all I really asked for was a reverse FTS by chunk as acceptable) is to:

  1. Pick something indexed that has a "how recently changed" correlation: could be date last modified, could be a sequence number, or some natural or artificial key you know of in a particular table. It does NOT have to be anything like EXACT. The whole purpose is to efficiently clean SOME blocks.
  2. Select a non-indexed column from the dirty table for the rows whose rowid is the minimum rowid per block (by dbms_rowid.rowid_block_number) within the range you scanned by the index.

Like this (I hope I managed the cut and paste so it is code and not garbage; I'll try to blog this as rsiz later with proper code formatting - it looks okay as I send it). For those wondering, these rows really are each in a different block, as the table is part of my hermitization experiment set (pctfree 99, pctused 0), so I know that id 4 is in the last block. You can test this on your own tables. It is also nice because, being driven from an index, it completely skips any "empty front" space and honeycombed empty space.

SQL> desc junk65_base
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                  NUMBER
 PADDING                                             VARCHAR2(100)

SQL> r
  1* select * from junk65_base

        ID PADDING
---------- ------------------------------------------------------
         1 abcdefghijklmnopqrstuvwxyzABCDEFGHJIKLMNOPQRSTUVWXYZ42
         2 abcdefghijklmnopqrstuvwxyzABCDEFGHJIKLMNOPQRSTUVWXYZ42
         3 abcdefghijklmnopqrstuvwxyzABCDEFGHJIKLMNOPQRSTUVWXYZ42
         4 last row

SQL> get q_junk65_base_last
  <snipped - showed on r>
SQL> r
  1  select padding from junk65_base jb1
  2  where jb1.rowid in
  3  (select jb2.one_rowid
  4   from (select min(rowid) one_rowid,dbms_rowid.rowid_block_number(rowid)
  5         from junk65_base
  6         where id > 3
  7         group by dbms_rowid.rowid_block_number(rowid)
  8        ) jb2
  9* )

PADDING
------------------------------------------------------
last row
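A more general sketch of the same pattern, with placeholder names (big_table, last_modified and padding_col are hypothetical, not the demo table above), assuming last_modified has an index on it:

  select padding_col
  from big_table bt
  where bt.rowid in
   (select jb2.one_rowid
    from (select min(rowid) one_rowid
          from big_table
          where last_modified > sysdate - 1/24    -- "recently changed" range, does not have to be exact
          group by dbms_rowid.rowid_block_number(rowid)
         ) jb2
   );

The inner query walks the recently changed range via the index and keeps one rowid per block; the outer reference to an unindexed column then forces a visit to each of those table blocks exactly once, which is what gets them cleaned out.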

-----Original Message-----
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Administrator User
Sent: Thursday, October 04, 2012 11:37 AM
To: oracle-l_at_freelists.org
Subject: RE: Excessive transaction tables consistent reads - undo records applied in 11.2.0.3

That statistic tells us that the session is looking at a block where it can see that a transaction has committed but has not been cleaned out - and it needs to know (roughly) when that transaction committed (in particular, whether it committed before or after the start of the current select statement - or of the current transaction if you've set the isolation level to read only or serializable).

This suggests that a large number of transactions has committed since the start of your "select * from table" - so many that the transaction table in every undo segment header has been over-written many times. (What is the corresponding value for "transaction tables consistent read rollbacks"? That will tell you how many times you had to go through this process for a single block, and hence the average number of transactions - less about 40 - that you have to unwind to get the transaction table header to the right state.)
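If you want to see the numbers for your own session, a quick sketch against the standard v$statname / v$mystat views (statistic names as quoted above):

  select sn.name, ms.value
  from v$statname sn, v$mystat ms
  where ms.statistic# = sn.statistic#
  and sn.name in (
    'transaction tables consistent reads - undo records applied',
    'transaction tables consistent read rollbacks'
  );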

The suggestion to blast through the table (select max(earliest unindexed non-null column) from table) may help you to get these blocks cleaned out in the shortest possible time, thus bringing the table sufficiently up to date that the problem is reduced when you run your real query.
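As a rough sketch of that suggestion (table and column names here are only placeholders for your own table and any unindexed, mostly non-null column in it):

  select /*+ full(t) */ max(some_unindexed_column)
  from your_big_table t;

Referencing an unindexed column stops Oracle answering the query from an index, so every table block has to be visited, which is what gives delayed block cleanout a chance to happen.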

There are two problems, though: (a) 11.2.0.3 may do this scan with direct path reads, which means it won't clean the blocks, and (b) it's possible that this scan will take a very long time for exactly the same reason that your required tablescan does.
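For (a), one workaround sometimes used is to switch off serial direct path reads for the session before running the scan; note that "_serial_direct_read" is an undocumented, unsupported parameter, so treat this as something to test rather than a recommendation:

  alter session set "_serial_direct_read" = never;

With a buffered scan the blocks pass through the cache and can be cleaned out (and written back) as they are visited.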

Part of the problem, of course, is that Oracle starts a tablescan from the beginning of the table, while typical batch processing is all about changing the most recent data (i.e. the stuff at the end of the table). So the time you spend reading the first 350 MB may be letting the other session wreck the last 50 MB. (This is a topic close to Mark Farnham's heart - why doesn't Oracle give us a "tablescan descending" hint?)

You might look at what the other batch job is doing - how many commits per second for how long, and see if changing the way it processes the data would make a difference.

Regards
Jonathan Lewis



From: oracle-l-bounce_at_freelists.org [oracle-l-bounce_at_freelists.org] on behalf of Laimutis.Nedzinskas_at_seb.lt [Laimutis.Nedzinskas_at_seb.lt]
Sent: 03 October 2012 07:43
To: oracle-l_at_freelists.org
Subject: Excessive transaction tables consistent reads - undo records applied in 11.2.0.3

Hi

Did anyone come across excessive "transaction tables consistent reads - undo records applied" in 11.2.0.3?

--
http://www.freelists.org/webpage/oracle-l
Received on Thu Oct 04 2012 - 19:31:05 CEST
