Oracle FAQ | Your Portal to the Oracle Knowledge Grid |
Subject: Re: log file sync
I am just attempting to see what the upper limits of the commit rate are. We
have a job that needs to process X number of records per hour. I cannot change
the application logic. This means one thing the database will be doing is
committing 50,000 times per minute. The log buffer is 1 MB now; we have
slowly increased the size to eliminate as many log buffer space waits as
possible. Log file syncs are now the problem with most jobs. My question
was: how come, when I wrote a fairly simple insert/delete script and got 50,000
commits per minute, I got no log file sync waits?
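The cost pattern under discussion can be sketched outside Oracle. The toy below uses SQLite as an analogy only: in Oracle each COMMIT normally makes the session wait on a "log file sync" while LGWR flushes redo; in SQLite the analogous cost is the journal flush at each transaction boundary. The table name, row counts, and timing comparison are all illustrative assumptions, not anything from the thread.

```python
# Analogy sketch (NOT Oracle): why per-row insert/commit/delete/commit is
# expensive compared with batching work into one transaction. In SQLite,
# PRAGMA synchronous=FULL forces a real flush at each commit, loosely
# mirroring the redo flush a session waits on during an Oracle COMMIT.
import os
import sqlite3
import tempfile
import time


def run(n_rows, per_row_commit):
    path = os.path.join(tempfile.mkdtemp(), "t.db")
    con = sqlite3.connect(path)
    con.execute("PRAGMA synchronous=FULL")  # pay the flush cost on each commit
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
    con.commit()
    t0 = time.perf_counter()
    for _ in range(n_rows):
        con.execute("INSERT INTO t (v) VALUES ('x')")
        if per_row_commit:
            con.commit()                 # one flush for the insert
            con.execute("DELETE FROM t")
            con.commit()                 # and another for the delete
        # otherwise everything accumulates in one open transaction
    if not per_row_commit:
        con.commit()                     # a single flush for the whole batch
    elapsed = time.perf_counter() - t0
    con.close()
    return elapsed


# Per-row commits pay the flush cost on every row; one commit pays it once.
slow = run(500, per_row_commit=True)
fast = run(500, per_row_commit=False)
print(f"per-row commits: {slow:.3f}s, single commit: {fast:.3f}s")
```

This only models the commit-frequency side of the question; it says nothing about why a PL/SQL loop issuing the same commits might show few log file sync waits, which is what the thread is asking.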
"Tony Hunt" <tonster_at_bigpond.net.au> wrote in message
news:LMBu7.116614$bY5.580330_at_news-server.bigpond.net.au...
> The commits force LGWR writes...
>
> I missed the previous post, but why do you need to INSERT, COMMIT, DELETE,
> COMMIT
>
> Can't you use PL/SQL ROLLBACK/SAVEPOINTS or variables to test the values
> before inserting them to the table? Why do you need to add the value and
> then delete it again?
>
> If it's temporarily needed while another transaction takes place, can't you
> store it in a smaller table somewhere?
>
> "Ethan Post" <Blah_at_Blah.com> wrote in message
> news:OAuu7.19310$Xk4.1337160_at_news1.rdc1.sdca.home.com...
> > I always thought log file sync waits' primary cause was committing too
> > often. We have a few jobs that run during the day and this is the primary
> > wait event. However, during some of my testing regarding potential
> > transaction rates for our server (see recent post) I wrote a script that
> > inserts a record, commits, deletes the record, and commits, against a large
> > table. I am getting 140 MB of redo per minute and over 50,000 commits per
> > minute but no log file syncs. Scratching my head on this one. The file
> > layout is optimal. - E
Received on Wed Oct 03 2001 - 09:24:47 CDT