Re: switching to 'direct path read' should help prevent thrashing buffer cache ?
Charles Hooper wrote:
> p.santos000_at_gmail.com wrote:
> > Folks,
> > Our environment is neither a pure OLTP nor a pure DSS system; our
> > workload crosses both boundaries.
> >
> > In particular, we have one-off jobs that many of our customers run at
> > any time during the day, in which they choose to clear a set of table
> > columns. Most clients clear anywhere from 1 million to 11 million rows
> > per job.
> >
> > The pseudo-code and SQL look like this:
> >
> > DECLARE
> >   l_rows PLS_INTEGER;
> > BEGIN
> >   LOOP
> >     UPDATE DEMO_<CLIENT_ID>
> >        SET col1 = NULL,
> >            col2 = NULL,
> >            col3 = NULL
> >      WHERE ( col1 IS NOT NULL OR
> >              col2 IS NOT NULL OR
> >              col3 IS NOT NULL )
> >        AND ROWNUM <= 500000;
> >     l_rows := SQL%ROWCOUNT;  -- capture before COMMIT resets it
> >     COMMIT;
> >     EXIT WHEN l_rows < 500000;
> >   END LOOP;
> > END;
> >
> > We use a ROWNUM limit in order to prevent row-lock blocking of other
> > processes that might be running single-row DML against the table.
> >
> > We have increased the efficiency of these processes, making the I/O
> > faster, and now customers are just running them more often, which
> > obviously thrashes my buffer cache.
> >
> > Nearly all updates spend most of their time waiting on 'db file
> > scattered read'.
> > We have db_file_multiblock_read_count = 128.
> >
> > I should also mention that this process connects via a shared (MTS)
> > connection, although we can change that if needed.
> >
> > I'm thinking about having just this process run in parallel in order
> > to bypass the buffer cache, because I don't believe this process
> > benefits from caching, and it causes blocks to age out faster for
> > other clients that are doing other things and do benefit from caching.
> >
> > My thought is that if I switch this to a dedicated connection and add
> > a PARALLEL hint (even if it's just 2 parallel servers per job), the
> > job will complete faster and my cache will no longer be thrashed, at
> > the cost of some more PGA memory and a little more I/O.
> >
> > What are the cons of doing something like this?
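
A minimal sketch of the parallel approach being proposed, assuming parallel
DML is enabled for the session and reusing the DEMO_<CLIENT_ID> placeholder
from above (illustrative only, not the poster's actual job):

   -- Parallel DML must be enabled explicitly, per session.
   ALTER SESSION ENABLE PARALLEL DML;

   -- Hinted version of the clear-columns job, with 2 parallel servers.
   -- A parallel update of a large table typically reads via direct path,
   -- bypassing the buffer cache.
   UPDATE /*+ PARALLEL(t, 2) */ DEMO_<CLIENT_ID> t
      SET col1 = NULL,
          col2 = NULL,
          col3 = NULL
    WHERE col1 IS NOT NULL
       OR col2 IS NOT NULL
       OR col3 IS NOT NULL;

   -- After parallel DML the session must commit (or roll back) before it
   -- can touch the table again (ORA-12838).
   COMMIT;

Note that with parallel DML the ROWNUM batching above no longer applies: the
statement holds locks on every row it touches until the commit, which is
part of the trade-off being asked about.
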
We want our system to favor full table scans because we have customer queries that can select any column with any predicate. Only batch lookups and a few other things use indexes to look up small pieces of these very large tables.
This access is faster than ever. Also, although db_file_multiblock_read_count is set to 128, 128 blocks are not always read. I've traced (10046) these processes many times: the 'db file scattered read' waits usually show p3 = 128, but in some cases the p3 value is smaller, i.e. fewer blocks were read in a single call. I believe the filesystem block size is 512 bytes and our database block size is 8K.
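
For reference, a sketch of how such a 10046 trace is typically taken, and
what the wait parameters mean for 'db file scattered read' (p1 = file#,
p2 = starting block#, p3 = number of blocks requested):

   -- Level 8 includes wait events in the trace file.
   ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';

   -- ... run the clear-columns job here ...

   ALTER SESSION SET EVENTS '10046 trace name context off';

   -- Each wait then appears in the trace roughly as:
   --   WAIT #n: nam='db file scattered read' ela= ... p1=... p2=... p3=...
   -- p3 is capped by db_file_multiblock_read_count (128 here), but a single
   -- read comes up shorter when the scan reaches an extent boundary or runs
   -- into blocks that are already in the buffer cache.
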
Thanks for the feedback.
-peter
Received on Thu Dec 21 2006 - 11:21:12 CST