Re: cpu usage
Charles Hooper wrote:
> hopehope_123 wrote:
> > Hi,
> >
> > I am working on a database system where CPU usage is at the top of
> > the wait list in the Statspack outputs. Each Statspack output (taken
> > at 60 min. intervals) shows the same SQL statement at the top of the
> > list ordered by buffer gets.
> >
> > After examining the outputs, I came to the conclusion that CPU usage
> > is not the problem.
> >
> > Note also: the log file sync and log file parallel write events are
> > in the list as well, I know, but my aim here is to examine the CPU
> > usage.
> >
> > I would appreciate it if you could comment.
> >
> > The server has 8 CPUs.
> >
> >
> >             Snap Id     Snap Time      Sessions Curs/Sess Comment
> >             ------- ------------------ -------- --------- -------------------
> > Begin Snap:     652 27-Dec-06 10:00:04       66     257.3
> >   End Snap:     653 27-Dec-06 11:00:02       73     356.9
> >    Elapsed:               59.97 (mins)
> >
> >
> > Top 5 Timed Events
> > ~~~~~~~~~~~~~~~~~~
> >                                                                % Total
> > Event                                         Waits    Time (s) Ela Time
> > -------------------------------------------- ------------ ----------- --------
> > CPU time                                                       7,084    75.67
> > log file sync                                     166,839        819     8.75
> > log file parallel write                           356,741        529     5.65
> > db file sequential read                            38,676        484     5.17
> > latch free                                        286,350        219     2.34
> >
> > With 8 CPUs, in 1 hour there is a total of 8 * 3,600 = 28,800
> > seconds of CPU time available, and only 7,084 seconds were spent.
> > So the Oracle database uses about 24.6% of the server's total CPU
> > time.
> >
> >
> > Here is the SQL list:
> >
> >                                                      CPU      Elapsd
> >   Buffer Gets    Executions  Gets per Exec  %Total Time (s)  Time (s) Hash Value
> > --------------- ------------ -------------- ------ -------- --------- ----------
> >      26,870,884       48,108          558.6   42.5  1470.76   1662.69 1647928882
> > Module: JDBC Thin Client
> > BEGIN xxxx(:1,:2,:3,:4) END;
> >
> >
> > This procedure is called 48,108 times in 1 hour and consumes 1,470.76
> > seconds of CPU time. One call takes 1470.76 / 48108 = ~30 msec.
> >
> >
> > Since the average execution time of this procedure is 30 msec., I
> > think there is not much to be done with this SQL. Even if I somehow
> > managed to decrease the response time of this SQL to 10 msec., the
> > total time of this SQL would only decrease proportionally (by 2/3).
> >
> > So I don't think CPU time is an important or problematic factor on
> > this server.
> >
> > What do you think?
> >
> > Kind Regards,
> > hope
My original response was sent via email; I am also posting it here so that my response to the OP may be reviewed by others for accuracy. For context, the OP's database uses a single RAID 5 array:
There are several things that can contribute to this problem:
* A buffer cache that is too small may contribute to this problem.
Dirty blocks may remain in the buffer cache (this is normal behavior),
but the dirty blocks may be forced out of the buffer cache by the next
SQL statement that requires additional blocks to exist in the buffer
cache. The new blocks cannot be read into the buffer cache until the
online redo log file has been updated and the dirty blocks are written
to the data files. (The buffer cache advisory query at the end of
this post can help with sizing.)
* Redo logs that are too small may contribute to this problem. A
checkpoint is performed every time a log file switch occurs. If the
database is running in archivelog mode, Oracle will try to copy the
old redo log file to the archive location while still trying to write
new redo entries to the next online redo log file. This increases the
contention for the online redo log files. RAID 5 can service only one
write request at a time, while RAID 10 can potentially service more
than one write request at a time, if the stripe width is sufficiently
large. (The queries after this list show how to check the redo log
size and switch frequency.)
* Using write-through caching when the RAID controller includes a
functioning battery-backed cache may contribute to the problem. If
the RAID controller has a battery-backed cache, consider using
write-back caching for writes and no caching for reads. With
write-back caching, the redo log write is considered complete as soon
as the data hits the RAID controller's cache, rather than when the
data is physically written to the disk system. This slightly
increases the risk of data loss if the server encounters an extended
power failure.
* Having multiple redo log members in each redo log group, all written
to the same RAID 5 array, may contribute to the problem. If this is
the case, you are doubling or tripling the amount of work required to
write the redo entries with little or no benefit of increased
recoverability. (The v$logfile query after this list shows the
members of each group.)
* A redo log buffer that is too small may contribute to the problem.
Try increasing the redo log buffer to 1MB to see if it helps; a value
larger than 1MB typically has little benefit. (See the log_buffer
check after this list.)
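
If it helps, the redo-related items above can be checked with a few
quick queries against the standard V$ views from SQL*Plus. This is
only a sketch, assuming Oracle 9i or later; the 1MB value simply
echoes the suggestion above, and log_buffer is a static parameter, so
the ALTER SYSTEM statement only takes effect after a restart:

  -- Size, member count, and status of each online redo log group:
  SELECT group#, bytes/1024/1024 AS size_mb, members, status
    FROM v$log;

  -- Members of each group (multiple members on the same RAID 5
  -- array mean redundant writes to the same device):
  SELECT group#, member
    FROM v$logfile
   ORDER BY group#;

  -- Log switches per hour (frequent switches suggest redo logs
  -- that are too small):
  SELECT TO_CHAR(first_time, 'DD-MON-YY HH24') AS snap_hour,
         COUNT(*) AS switches
    FROM v$log_history
   GROUP BY TO_CHAR(first_time, 'DD-MON-YY HH24')
   ORDER BY 1;

  -- Current redo log buffer size, and a change to 1MB:
  SHOW PARAMETER log_buffer
  ALTER SYSTEM SET log_buffer = 1048576 SCOPE = SPFILE;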
CPU utilization may decrease once the system is not serializing around the writing of redo entries.
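
For the buffer cache item, if the buffer cache advisory is enabled
(db_cache_advice = ON, available starting with 9i), v$db_cache_advice
estimates how the number of physical reads would change at other
cache sizes. Again, just a sketch:

  -- Estimated physical reads at candidate DEFAULT pool sizes:
  SELECT size_for_estimate AS cache_mb,
         estd_physical_read_factor,
         estd_physical_reads
    FROM v$db_cache_advice
   WHERE name = 'DEFAULT'
   ORDER BY size_for_estimate;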
Charles Hooper
PC Support Specialist
K&M Machine-Fabricating, Inc.