RE: Excessive transaction tables consistent reads - undo records applied in 11.2.0.3

From: Administrator User <jonathan_at_jlcomp.demon.co.uk>
Date: Thu, 4 Oct 2012 15:37:14 +0000
Message-ID: <CE70217733273F49A8A162EE074F64D90B2183_at_exmbx06.thus.corp>


That statistic tells us that the session is looking at a block where it can see that a transaction has been committed but not cleaned out, and it needs to know (roughly) when that transaction committed - in particular, whether it committed before or after the start of the current select statement (or transaction, if you've set the isolation level to read only or serializable).

This suggests that a large number of transactions have committed since the start of your "select * from table" - so many that the transaction tables in every undo segment header have been over-written many times. (What is the corresponding value for "transaction tables consistent read rollbacks"? That will tell you how many times you had to go through this process for a single block, and hence the average number of transactions - less 40, approx - that you have to unwind to get the transaction table header to the right state.)
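For reference, here's one way to pull both statistics for a session so you can compare them - a sketch against the standard v$ views, assuming you know the session's SID:

```sql
-- Compare the two related statistics for a given session (supply :sid).
-- undo records applied / rollbacks = average undo records per rollback.
select sn.name, st.value
from   v$sesstat  st,
       v$statname sn
where  st.statistic# = sn.statistic#
and    st.sid        = :sid
and    sn.name in (
         'transaction tables consistent reads - undo records applied',
         'transaction tables consistent read rollbacks'
       );
```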

The suggestion to blast through the table (select max(earliest unindexed non-null column) from table) may help you to get these blocks cleaned out in the shortest possible time, thus bringing the table sufficiently up to date that the problem is reduced when you run your real query.
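As a sketch of that suggestion (the table and column names here are hypothetical - substitute the earliest unindexed non-null column of your own table, so Oracle can't answer the query from an index):

```sql
-- Visit every block of the table so delayed block cleanout can happen as
-- we go; the full() hint is a belt-and-braces way of asking for a tablescan.
select /*+ full(t) */ max(padding_col)
from   big_table t;
```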

There are two problems, though: (a) 11.2.0.3 may do this with direct path reads, which means it won't clean the blocks; (b) it's possible that this scan will take a very long time, for exactly the same reason that your required tablescan does.
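If direct path reads turn out to be the obstacle, one workaround that is sometimes used (hedged: it relies on the undocumented parameter "_serial_direct_read", so test it carefully and check with support before depending on it) is to push serial tablescans back through the buffer cache:

```sql
-- Undocumented parameter: stops serial tablescans using direct path reads,
-- so blocks pass through the buffer cache and cleanout can take effect.
alter session set "_serial_direct_read" = never;
```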

Part of the problem, of course, is that Oracle starts a tablescan from the beginning while typical batch processing is all about changing the most recent data (i.e. the stuff at the end of the table). So the time you spend reading the first 350 MB may be letting the other session wreck the last 50 MB. (This is a topic close to Mark Farnham's heart - why doesn't Oracle give us a "tablescan descending" hint?)

You might look at what the other batch job is doing - how many commits per second for how long, and see if changing the way it processes the data would make a difference.
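To get a rough figure for that commit rate, you could sample the instance-wide counter while the batch job runs - a sketch using v$sysstat (per-session figures come from v$sesstat in the same way):

```sql
-- Sample this twice, a known number of seconds apart;
-- (second value - first value) / seconds = commits per second.
select value
from   v$sysstat
where  name = 'user commits';
```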

Regards
Jonathan Lewis



From: oracle-l-bounce_at_freelists.org [oracle-l-bounce_at_freelists.org] on behalf of Laimutis.Nedzinskas_at_seb.lt [Laimutis.Nedzinskas_at_seb.lt] Sent: 03 October 2012 07:43
To: oracle-l_at_freelists.org
Subject: Excessive transaction tables consistent reads - undo records applied in 11.2.0.3

Hi

Did anyone come across excessive "transaction tables consistent reads - undo records applied" in 11.2.0.3?
--
http://www.freelists.org/webpage/oracle-l
Received on Thu Oct 04 2012 - 17:37:14 CEST