Re: switching to 'direct path read' should help prevent thrashing buffer cache ?
joel garry wrote:
> p.santos000_at_gmail.com wrote:
> > Folks,
> > Our environment is neither an OLTP nor a DSS system. We have
> > transactions that cross both boundaries.
> >
> > In particular, we have these one-off jobs that many of our customers
> > run at any time during the day, in which they choose to clear a table
> > column. Most clients clear anywhere from 1 million to 11 million rows
> > per job.
> >
> > The pseudo-code and SQL look like this:
> >
> >   DECLARE
> >     l_rows PLS_INTEGER;
> >   BEGIN
> >     LOOP
> >       UPDATE DEMO_<CLIENT_ID>
> >          SET col1 = NULL,
> >              col2 = NULL,
> >              col3 = NULL
> >        WHERE ( col1 IS NOT NULL OR
> >                col2 IS NOT NULL OR
> >                col3 IS NOT NULL )
> >          AND ROWNUM <= 500000;
> >       l_rows := SQL%ROWCOUNT;  -- capture before COMMIT resets it
> >       COMMIT;
> >       EXIT WHEN l_rows < 500000;
> >     END LOOP;
> >   END;
> >
> > We use a ROWNUM limit in order to prevent row blocking for other
> > processes that might be doing single-row DML against the table.
> >
> > We have increased the efficiency of these processes, making I/O
> > faster, and now customers are just doing it more often ... which
> > obviously thrashes my buffer cache.
>
> > Nearly all updates spend most of their time waiting on 'db file
> > scattered read'.
> > I should also mention that this process connects via a shared (MTS)
> > connection ... although we can change that if needed.
> >
> > I'm thinking about having just this process run in parallel in order
> > to bypass the buffer cache, because I don't believe this process
> > benefits from caching, and it causes blocks to age out faster for
> > other clients that are doing other things and do benefit from caching.
> >
> > My thought is that if I switch this to a dedicated connection and add
> > a PARALLEL hint (even if it's just 2 parallel servers per job), the
> > job will complete faster and my cache will no longer be thrashed, at
> > the cost of only some more PGA memory and a little more I/O.
My apologies for not showing the version/environment information.
I'm on Solaris 5.8 (Veritas filesystem) running 10.1.0.4. Our buffer cache is 3 GB, and most of the tables being updated are anywhere from 1 GB to 10 GB in size. The columns are sometimes collocated, and the updated columns are always VARCHAR2(1). They are usually set to '1' or '0', and when they are cleared they are set to NULL.
The reason I don't use the KEEP / RECYCLE pools is that I would have to set them up for every customer table. The fact that our application allows a customer to clear an entire column during business hours, while possibly blocking any number of single-record transactions, should show you that this is not an ideal system .. but I'm just a dba :-(.
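Just to illustrate why: pulling one table out of the default pool is easy enough, but it would have to be repeated for every customer table. A rough sketch of what that looks like (the table name is made up, and the recycle pool would have to be sized first):

   -- one-time: carve out a recycle pool so these blocks age out quickly
   ALTER SYSTEM SET db_recycle_cache_size = 256M;

   -- then, once per customer table (the part that doesn't scale):
   ALTER TABLE demo_1234 STORAGE (BUFFER_POOL RECYCLE);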
Also, these updates run from a Perl client using DBD::Oracle, so array processing or BULK COLLECT is not possible, and I'm sure it would not be faster anyway.
My thought was to configure the larger customers to use dedicated connections and set the update to execute in parallel. When I've traced this (10046), the biggest wait event is 'direct path read' instead of 'db file scattered read'.
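For the larger customers, what I have in mind would look roughly like this from a dedicated session (the table name is a stand-in; note that parallel DML has to be enabled at the session level, otherwise the hint only parallelizes the read side):

   ALTER SESSION ENABLE PARALLEL DML;

   UPDATE /*+ PARALLEL(d, 2) */ demo_1234 d
      SET col1 = NULL,
          col2 = NULL,
          col3 = NULL
    WHERE col1 IS NOT NULL
       OR col2 IS NOT NULL
       OR col3 IS NOT NULL;
   COMMIT;

   -- the 10046 trace I mentioned was enabled with:
   ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';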
I also agree that on small customer tables, the parallelization would
be overhead.
My pga_aggregate_target is set to 1500M.
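Since direct path reads are buffered in the process's PGA rather than the buffer cache, I figure I can sanity-check the memory impact while one of these parallel jobs runs with something like:

   SELECT name, ROUND(value/1024/1024) AS mb
     FROM v$pgastat
    WHERE name IN ('total PGA allocated', 'maximum PGA allocated');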
Thanks for the feedback.
-peter
Received on Thu Dec 21 2006 - 11:13:01 CST