These are a set of notes which I have written on Oracle Performance
and Tuning.
Feedback (and corrections, if necessary) from other DBAs is invited.
Hemant K Chitale
Senior Manager, Systems
Automation Dept
(Systems Administrator and DBA)
Stock Holding Corporation of India Ltd
DBA Group Login ID | hemant_at_shakti.ncst.ernet.in | Phone : +91-22-2882749/2045481/2045483 | Fax : +91-22-2027211/2027202
Notes on Tuning of Oracle RDBMS
These notes *exclude* rules for writing SQL Statements.
1. Database Buffers :
Performance : The Database Buffer Hit Ratio is the only relevant
statistic. The Hit Ratio is computed as
1 - (physical reads / (db block gets + consistent gets)).
Where the Hit Ratio is low (consistently below 90%),
DB_BLOCK_BUFFERS is to be increased. The maximum value is 65536.
SQL : SELECT 1 - ( sum(decode(name,'physical reads',value,0)) /
                   ( sum(decode(name,'db block gets',value,0)) +
                     sum(decode(name,'consistent gets',value,0)) ) )
      from V$SYSSTAT;
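As a worked illustration (the figures are invented) : if V$SYSSTAT
showed physical reads = 10,000, db block gets = 60,000 and
consistent gets = 140,000, the Hit Ratio would be
1 - ( 10,000 / (60,000 + 140,000) ) = 0.95, i.e. 95%, which clears
the 90% threshold above.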
2. Log Buffers :
Definition : This is the size of the memory allocated to Redo Log
Buffering. It is specified by the LOG_BUFFER parameter in the
init.ora file ; the value is in bytes and does not support the "M"
or "K" suffix notation.
Usage : User Processes continuously write to the Log Buffer. The
LGWR writes from the Buffer down to the Redo Log files. If LGWR
cannot write to the disk files fast enough, user processes have to
wait before they can acquire space in the Buffer.
Perf Statistics : There are three Statistics to use :
a. The "redo log space requests" statistic.
   SQL : SELECT name, value from V$SYSSTAT
         where name = 'redo log space requests';
b. MISSES and IMMEDIATE_MISSES of "redo allocation". The ratio of
   MISSES to GETS should not exceed 1%.
   SQL : SELECT y.name, x.gets, x.misses, x.immediate_gets,
                x.immediate_misses
         from V$LATCH x, V$LATCHNAME y
         where y.name = 'redo allocation' and y.latch# = x.latch#;
c. MISSES and IMMEDIATE_MISSES of "redo copy". The ratio of MISSES
   to GETS should not exceed 1%.
   SQL : SELECT y.name, x.gets, x.misses, x.immediate_gets,
                x.immediate_misses
         from V$LATCH x, V$LATCHNAME y
         where y.name = 'redo copy' and y.latch# = x.latch#;
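A combined query along the same lines (a sketch, not part of the
original checks) computes both latch miss ratios in one pass :
SQL : SELECT y.name,
             round(100 * x.misses / greatest(x.gets, 1), 2)
                   pct_misses,
             round(100 * x.immediate_misses /
                   greatest(x.immediate_gets + x.immediate_misses, 1), 2)
                   pct_immediate_misses
      from V$LATCH x, V$LATCHNAME y
      where y.name in ('redo allocation', 'redo copy')
      and y.latch# = x.latch#;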
-----------------------------------------------------------------
3. Locks :
Definition : Oracle supports Table Level Locking and Row Level
Locking (only with the Transaction Processing Option in V6 and the
Procedural Option in V7).
Usage : Every SELECT transaction acquires a SHARE lock while an
UPDATE acquires an EXCLUSIVE lock. Generally, Locking is designed
such that there is no blocking, i.e. readers waiting for writers
and vice-versa. However, there can be situations, caused by faults
in the application design or usage, where blocking and, finally,
DEADLOCKs occur. When Oracle detects a DEADLOCK, it writes the
DEADLOCK information, with the preceding SQL statements, into a
Trace File (only in V7). Alternatively, users may get the error
message : "Unable to reserve record for update/delete".
Performance : There are 2 scripts provided by Oracle (V7) :
CATBLOCK.sql is to be run as SYS. This creates the DBA_LOCKS
views.
UTLLOCKT.sql can be run by any DBA. It shows, in a hierarchical
manner, processes waiting to acquire locks and the processes they
are waiting for.
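Where UTLLOCKT.sql is not convenient, a direct query against
V$LOCK (a sketch, not from the original notes) lists blocking and
waiting sessions :
SQL : SELECT l1.sid blocker, l2.sid waiter
      from V$LOCK l1, V$LOCK l2
      where l1.block > 0
      and l2.request > 0
      and l1.id1 = l2.id1
      and l1.id2 = l2.id2;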
It is more important, however, to design the Application and its
code properly in the first place.
4. Object Access :
Definition : Every object accessed has to be loaded into the cache.
Performance : The "popularity" or frequency of access of an
object can be identified.
The SQL Statement is :
col owner format a12
col name format a25 trunc
col tablespace_name format a10 hea 'TBS' trunc
col type format a10 trunc
col l format 9999 hea 'Loads'
col e format 99,999 hea 'Exec'
spool object_usage
select v.owner,v.name,v.type, nvl(loads,0) l,nvl(executions,0) e
from v$db_object_cache v
where v.executions > 25
order by executions desc
/
spool off
5. Checkpointing :
Definition : A Checkpoint occurs at pre-defined intervals, whereby
modified blocks are written back to the data files on disk by the
DBWR process. Checkpointing thus reduces the time required for
Instance Recovery should the running instance fail, as only data
modified since the last Checkpoint needs to be re-applied from the
Redo Logs. (Rollbacks of uncommitted transactions will, in any
case, be done whether they are Checkpointed or not.) Also, all
Data File Headers are updated at a Checkpoint.
A Checkpoint also occurs when a Tablespace is taken OFFLINE or
when an Online Backup of the Tablespace is started. Another
Checkpoint occurs at SHUTDOWN (NORMAL|IMMEDIATE).
The frequency of Checkpointing is defined by :
LOG_CHECKPOINT_INTERVAL parameter in the init.ora ; defined in
terms of OS blocks.
Size of Redo Log File. A Checkpoint occurs at every Log Switch.
LOG_CHECKPOINT_TIMEOUT (only in V7) parameter in the init.ora ;
defined in terms of seconds (default 0 -- disabled). This
permits time-based Checkpointing.
CHECKPOINT_PROCESS=TRUE in the init.ora can be enabled to allow a
separate process (CKPT) to do the Checkpointing. Otherwise, the
LGWR does the Checkpointing by default.
Usage : At a Checkpoint, the LGWR or CKPT updates all Data File Headers while DBWR writes modified blocks (Table, Index, Rollback Segment) to the Data Files on disk.
Performance : Increasing the frequency of Checkpointing has two
effects :
i) It reduces the time required for Instance Recovery.
ii) It increases I/O overhead, as the DBWR writes more frequently.
It is advisable to keep the number of Checkpoints to the minimum
(i.e. only those forced by Log Switches). This can be done by
setting LOG_CHECKPOINT_INTERVAL to a value larger than the size of
the Redo Log Files (in OS blocks).
If LGWR is too busy, it is advisable to enable the
CHECKPOINT_PROCESS ; this does, however, add one more Oracle
background process.
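As an illustration of these settings (the figures are
hypothetical, assuming 5 MB Redo Log files and a 512-byte OS
block ; the actual OS block size is port-specific) :
# init.ora
LOG_CHECKPOINT_INTERVAL = 10241   # > 5 MB / 512 bytes = 10240 OS blocks,
                                  # so a Checkpoint occurs only at a Log Switch
LOG_CHECKPOINT_TIMEOUT  = 0       # time-based Checkpointing disabled (default)
CHECKPOINT_PROCESS      = TRUE    # enable CKPT if LGWR is too busy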
SQL : SELECT name, value from V$SYSSTAT
      where name = 'dbwr checkpoints';
-----------------------------------------------------------------
6. Library Cache :
Definition : The Library Cache consists of the SQL Statements in
the Shared Pool (only in V7). The size of the Shared Pool is
specified by the SHARED_POOL_SIZE parameter in the init.ora.
Usage : The Shared Pool is used to save parsed copies of SQL
statements so that the parsing phase is avoided and execution is
speeded up.
Performance Statistics : The cache hit ratio is relevant.
This is available from the V$LIBRARYCACHE. PINS shows the number
of times an item was executed. RELOADS shows the number of
library cache misses on execution. RELOADS to PINS indicates the
proportion of reparsing.
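A sketch (not from the original notes) that expresses RELOADS as a
percentage of PINS, which is the figure to keep low :
SQL : SELECT sum(pins) pins, sum(reloads) reloads,
             round(100 * sum(reloads) / greatest(sum(pins), 1), 2)
                   pct_reloads
      from V$LIBRARYCACHE;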
The SHARED_POOL_SIZE can be increased to allocate more space for
SQLs in the SGA. CURSOR_SPACE_FOR_TIME can be set to TRUE
(default FALSE) in the init.ora to cause implicit pinning of SQL
statements for all open cursors. If, however, the number of
execution misses (RELOADS) is high and the Shared Pool is full, a
user process is returned an error if the RDBMS is unable to parse
and load the SQL statement for it. Therefore, if SHARED_POOL_SIZE
is not high, CURSOR_SPACE_FOR_TIME should be set to FALSE.

The Application Developers and Users must be asked to use
identical SQL Statements. A CASE difference (e.g. "Select * from
EMP" and "Select * from emp") or even an additional space
character results in re-parsing. Bind Variable Names must also be
the same. SQL statements must be optimised so that users re-use
optimised statements (else, a number of users running the same
unoptimised SELECT statement will cause a degradation).
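For instance (a contrived illustration using the EMP example
above), the following two statements are logically identical but
are held, and parsed, as two separate Library Cache entries :
   SELECT ename FROM emp WHERE empno = :b1
   select ename from emp where empno = :b1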
SQL : SELECT sum(pins), sum(reloads) from V$LIBRARYCACHE;
-----------------------------------------------------------------
7. Data Dictionary Cache :
Definition : The Data Dictionary Cache stores information relating
to Database Objects.
Usage : Various parameters (visible in V$ROWCACHE) are used for
sizing the cache. Although they are separately tunable in V6, V7
does not allow user-tuning of these parameters and allocates space
in the Shared Pool by its own algorithm.
Performance Statistics : Here, too, the hit ratio is relevant. The
hit ratio is determined by (1 - getmisses/gets). If the ratio is
low, the SHARED_POOL_SIZE is to be increased. ROW_CACHE_CURSORS
may also be increased.
SQL : SELECT 1 - ( sum(getmisses) / sum(gets) ) from V$ROWCACHE;
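A per-parameter breakdown (a sketch ; under V7 the individual
caches cannot be tuned directly, but the breakdown shows where the
misses occur) :
SQL : SELECT parameter, gets, getmisses,
             round(100 * getmisses / greatest(gets, 1), 2)
                   pct_misses
      from V$ROWCACHE
      order by getmisses desc;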
-----------------------------------------------------------------
8. Rollback Segments :
Definition : Rollback Segments are used by the RDBMS when :
i) They are PUBLIC Rollback Segments and are ONLINE.
ii) They are PRIVATE Rollback Segments and are specified in the
ROLLBACK_SEGMENTS parameter in the init.ora.
iii) The number of Rollback Segments specified above is less than
the value of TRANSACTIONS / TRANSACTIONS_PER_ROLLBACK_SEGMENT in
the init.ora (TRANSACTIONS is derived as 1.1 times PROCESSES and
TRANSACTIONS_PER_ROLLBACK_SEGMENT defaults to 30). This ratio is
the number of Rollback Segments the RDBMS tries to acquire
automatically ; it will not OPEN the instance if it is unable to
do so.
Usage : Rollback Segments can be created online. Moreover, their
Status (ONLINE, OFFLINE) can also be changed while the Database is
OPEN (only in V7). PUBLIC Rollback Segments are available to all
Instances of a Parallel Server Database while PRIVATE Rollback
Segments belong to the Instance they are created in. PUBLIC
Rollback Segments are *not* recommended.
Performance : There are three issues :
SQL : For Waits (Contention) on Rollback Segments :
      SELECT class, count from V$WAITSTAT
      where class in ('system undo header', 'system undo block',
                      'undo header', 'undo block');
      For Rollback Segment Activity :
      SELECT a.name, b.xacts, b.waits, b.gets, b.shrinks, b.wraps,
             b.extends, b.rssize
      from V$ROLLNAME a, V$ROLLSTAT b
      where a.usn = b.usn and b.status = 'ONLINE'
      order by a.name;
      Also, use the MONITOR ROLLBACK SEGMENTS option in SQL*DBA.
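A sketch (not from the original notes) that expresses WAITS as a
percentage of GETS for each Rollback Segment :
SQL : SELECT a.name,
             round(100 * b.waits / greatest(b.gets, 1), 2)
                   pct_waits
      from V$ROLLNAME a, V$ROLLSTAT b
      where a.usn = b.usn
      order by a.name;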
9. Sorts
Definition : The amount of memory acquired by an Oracle
Process for the purpose of a Sort. The Retained Size is the size
to which Oracle reduces the allocated sort area if the data is
not expected to be referenced again within the same transaction.
Overflows beyond the specified size go into the user's TEMPORARY
TABLESPACE.
This is defined by :
SORT_AREA_SIZE parameter in the init.ora ; specified in bytes ;
must be defined as a multiple of the memory page size.
SORT_AREA_RETAINED_SIZE parameter in the init.ora ; specified in
bytes ; default equal to the SORT_AREA_SIZE.
Default STORAGE Parameters of the user's TEMPORARY TABLESPACE.
Usage : When any transaction needs a Sort Operation, the Oracle Shadow Process acquires Memory equal to the SORT_AREA_SIZE. If this is insufficient, further sorting is done in the user's TEMPORARY TABLESPACE. Once a Sort is over, the size is shrunk to SORT_AREA_RETAINED_SIZE so as to release memory. A Sort is required for the following SQLs :
i) CREATE INDEX
ii) SELECT statement with ORDER BY
Temporary Tablespace is also required for :
i) SELECT statement with GROUP BY
Performance : There are three considerations :
SQL : SELECT name, value from V$SYSSTAT
      where name in ('sorts (memory)', 'sorts (disk)');
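A sketch that derives the proportion of Sorts going to disk ; a
rising figure suggests SORT_AREA_SIZE is too small for the
workload :
SQL : SELECT round(100 *
             sum(decode(name, 'sorts (disk)', value, 0)) /
             greatest(sum(decode(name, 'sorts (memory)', value, 0)) +
                      sum(decode(name, 'sorts (disk)', value, 0)), 1), 2)
             pct_disk_sorts
      from V$SYSSTAT
      where name in ('sorts (memory)', 'sorts (disk)');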
-----------------------------------------------------------------
10. Private SQL Area :
Definition : Private SQL Areas are used for Parsing. The relevant
parameters are :
Usage : A Cursor is required for every SQL parsing.
Performance : The Application Code is to be run with the Trace facility enabled. The Trace file can then be read to identify the number of re-parses and re-executes and cursors opened.
Usage : The SGA resides in memory ; on Unix it is allocated as a
Shared Memory Segment.
Performance : The SGA should fit into a single Shared Memory
Segment. If SHMMAX (a Unix Kernel Parameter) is less than the
total SGA, the SGA is fragmented across multiple Shared Memory
Segments. If possible, increase the value of SHMMAX and rebuild
the Unix Kernel.
SQL : SELECT * from V$SGA;
Unix : ipcs -m -b shows the number and size of each Shared Memory
Segment.
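A single figure for comparison against SHMMAX (a sketch) :
SQL : SELECT sum(value) total_sga_bytes from V$SGA;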
Usage : Every INSERT statement requires reference to a FREELIST to identify an available block.
Performance : If FREELISTS is less than the number of
concurrent INSERTs into the Table, the additional INSERTs have to
wait for those with access to FREELISTs to complete.
Free List contention can be monitored by identifying the ratio of
waits for the free list to the total number of block requests.
However, the V$ views give global statistics (i.e. for ALL tables
being accessed).
Since this value is defined for a Table only when the Table is
created, changing it for a Table requires dropping and recreating
the Table.
SQL : SELECT class, count from V$WAITSTAT
      where class = 'free list';
      SELECT sum(value) from V$SYSSTAT
      where name in ('db block gets', 'consistent gets');
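Since FREELISTS can only be set at creation time, a table with
heavy concurrent INSERT activity has to be rebuilt to change it.
A sketch (the table name, columns and values are hypothetical) :
SQL : CREATE TABLE orders_new
      ( order_id   number,
        order_date date )
      PCTFREE 5 PCTUSED 60
      STORAGE ( INITIAL 1M NEXT 1M PCTINCREASE 0 FREELISTS 4 );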
-----------------------------------------------------------------
12. Archiving of Redo Logs :
Definition : The ARCH process can be enabled for Automatic
Archiving with the following :
LOG_ARCHIVE_START=TRUE and
LOG_ARCHIVE_DEST=<destination_directory> in the init.ora.
LOG_ARCHIVE_FORMAT specifies the name format for the Archived Log
files created (in Unix).
Performance : Generally, the performance of ARCH is not an
issue. It is also recommended to separate the Redo Log files
across disks to avoid contention between LGWR and ARCH. Moreover,
the Log Archive Destination directory should be on a dedicated
disk.
Another recommendation is to add Redo Log files so that there is
enough time-lag between the ARCH archiving a file and LGWR
wanting to re-use it.
If ARCH works very quickly, overall system performance can be
reduced while it runs, as it takes CPU time. If it runs very
slowly, a bottleneck can be created when a file has not yet been
archived and the LGWR needs to overwrite it. Therefore, tuning
can go either way, viz. make ARCH run either as slowly as possible
without becoming a bottleneck, or as quickly as possible without
reducing overall system performance.
Setting LOG_ARCHIVE_BUFFERS to 1 and LOG_ARCHIVE_BUFFER_SIZE to
the maximum (as per the Installation and User Guide for the OS)
makes ARCH work slowly. To increase the speed of ARCH,
LOG_ARCHIVE_BUFFERS can be set to 2, 3 or 4 and
LOG_ARCHIVE_BUFFER_SIZE to a lower value.
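As an illustration (hypothetical values ; the usable maximum for
LOG_ARCHIVE_BUFFER_SIZE is OS-specific and must be taken from the
Installation and Configuration Guide for the port) :
# init.ora -- slow ARCH down : one buffer of the (assumed) maximum size
LOG_ARCHIVE_BUFFERS     = 1
LOG_ARCHIVE_BUFFER_SIZE = 127
# init.ora -- speed ARCH up : more, smaller buffers
# LOG_ARCHIVE_BUFFERS     = 4
# LOG_ARCHIVE_BUFFER_SIZE = 64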
SQL : In SQL*DBA : ARCHIVE LOG LIST
Usage and Performance : All these parameters are defined at CREATE time. All (except INITIAL) can be modified later with an ALTER <Object> statement. However, if altered, the new values are effective only for Extents and Blocks created/allocated subsequent to the alteration.
INITIAL and NEXT sizes of all Objects in a Tablespace should be a
multiple of the DB_BLOCK_SIZE and of the lowest such value used
within the Tablespace, and they should all be a factor of the
Tablespace size. This is to avoid fragmentation of the
Tablespace. The DEFAULT STORAGE parameters for the Tablespace
should also follow this rule. Note that there are two overheads :
i) Every Object Segment takes an additional Block.
ii) Every Extent takes an additional Block.
PCTINCREASE should generally be set to 0 (explicitly) to avoid fragmentation of the Tablespace into odd-sized extents.
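As an illustration of the sizing rule (hypothetical names and
sizes, assuming a 2K DB_BLOCK_SIZE) :
SQL : CREATE TABLESPACE app_data
      DATAFILE '/u02/oradata/app_data01.dbf' SIZE 100M
      DEFAULT STORAGE ( INITIAL 100K NEXT 100K PCTINCREASE 0 );
Here 100K is a multiple of the 2K block size and a factor of the
100M Tablespace, and PCTINCREASE 0 keeps all Extents the same
size.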
PCTFREE is to be low for Tables with very few UPDATEs. This
results in :
i) less room for updates
ii) allowing INSERTs to fill the Block more completely
iii) saving in total space utilised
iv) more CPU overhead, as the Blocks are more frequently
reorganised
PCTFREE is to be high for Tables with many UPDATEs that result in
the size of a row increasing. This results in :
i) more room for UPDATEs
ii) fewer rows per Block
iii) increase in total space needed
iv) improvement in performance, with less Row-Spanning or
Row-Chaining
A low PCTUSED results in :
A high PCTUSED results in :
i) keeping blocks more full
ii) increasing processing cost
iii) improving space efficiency
PCTFREE + PCTUSED cannot exceed 100. If the sum is less than 100,
the ideal compromise of space versus I/O performance is a sum that
differs from 100 by the percentage of the available space in the
Block (i.e. Block size minus Overhead) that the Average Row
occupies. If the sum equals 100, a maximum of PCTFREE is kept
free and processing costs are high. The smaller the difference
between 100 and the sum, the more efficient the space usage, at
the cost of performance.
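A worked example with invented numbers : with a 2K Block and
roughly 100 bytes of overhead, about 1,950 bytes are available ;
if the Average Row is about 195 bytes, it occupies roughly 10% of
the available space, so the suggested sum is about 90, for
instance PCTFREE 20 and PCTUSED 70.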
A high-UPDATE Table should have a high PCTFREE (say 20) and a low
(or the default 40) PCTUSED. A Table with high INSERT and DELETE
activity but low UPDATE activity should have a low PCTFREE (say 5)
and a high PCTUSED (say 60). A Large Table should have a low
PCTFREE (say 5) and a high PCTUSED (say 90) to improve space
utilisation.
A Table which has few rows in each block can have a low MAXTRANS. Where the number of rows is high, the probability of multiple concurrent transactions increases and MAXTRANS may be increased.
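A sketch tying these attributes together (the table and values are
hypothetical, following the high-UPDATE case above) :
SQL : CREATE TABLE customer_history
      ( cust_id   number,
        comments  varchar2(200) )
      PCTFREE 20 PCTUSED 40
      INITRANS 1 MAXTRANS 16
      STORAGE ( INITIAL 200K NEXT 200K PCTINCREASE 0 );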
Performance : As a general rule, it is still preferable to use RULE and write all SQL Statements as per the known rules.
SQL : ANALYZE TABLE|INDEX ESTIMATE|COMPUTE STATISTICS
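A concrete form of the command (EMP used as a placeholder table
name) :
SQL : ANALYZE TABLE emp ESTIMATE STATISTICS SAMPLE 20 PERCENT;
Note that the statistics gathered are used only by the cost-based
Optimizer ; under RULE, as recommended above, they are not used.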
Usage : Profiles can have limits for :
SESSIONS_PER_USER
CPU_PER_SESSION (Time in hundredths of a second)
CPU_PER_CALL (Time in hundredths of a second)
CONNECT_TIME (in minutes)
IDLE_TIME (in minutes)
LOGICAL_READS_PER_SESSION
LOGICAL_READS_PER_CALL
Performance : A typical application of Resource Limits is for SQL*NET connects where users run random SELECT statements. All the Oracle User_names (e.g. "usr") can be limited by a Profile defined for them.
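A sketch of such a Profile (the name and limit values are
invented ; note that resource limits are enforced only when
RESOURCE_LIMIT = TRUE in the init.ora) :
SQL : CREATE PROFILE adhoc_query LIMIT
         SESSIONS_PER_USER        2
         CPU_PER_CALL             3000
         CONNECT_TIME             45
         IDLE_TIME                15
         LOGICAL_READS_PER_CALL   10000;
      ALTER USER usr PROFILE adhoc_query;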
SQL : CREATE PROFILE <Profile_Name>
      ALTER PROFILE <Profile_Name>
      ALTER USER <User_Name> PROFILE <Profile_Name>
      Alternatively, the above can be done from SQL*DBA.
=================================================================
Annexure 1 :
Annexure 2 :
1. EXPLAIN PLAN
The Execution Plan for an SQL SELECT Statement can be obtained
by :
i) Creating a PLAN_TABLE under the Oracle User ID that is to run the SELECT Statement. The CREATE TABLE Statement for PLAN_TABLE is in $ORACLE_HOME/rdbms/admin/utlxplan.sql
ii) Executing the SQL SELECT Statement as :
"explain plan set statement_id = '&statement' for SELECT .... "
iii) On Running the EXPLAIN PLAN, the Plan can be obtained as :
"select operation ,options ,object_name ,id ,parent_id, position from plan_table where statement_id = '&statement' order by id " A tabular format for the output can be obtained as : select lpad(' ',2*(level-1))||operation||' '||options ||' '||object_name||' '||decode(id,0,'Cost = '||position)from plan_table start with id = 0 and statement_id = '&&statement' connect by prior id = parent_id and statement_id = '&&statement'
2. SQL_TRACE and tkprof
The command ALTER SESSION SET SQL_TRACE TRUE would enable generation of a Trace File for subsequent SQL Statements. The Trace file is created in USER_DUMP_DEST.
The Trace File is a Text file. It can be further analysed with the
tkprof command as : tkprof <trace_file_name> <output_file_name>.
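For example (the file names are invented ; the explain option is
optional) :
Unix : tkprof ora_12345.trc ora_12345.prf explain=scott/tiger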
tkprof can be compiled in $ORACLE_HOME/rdbms/install as
"make -f oracle.mk tkprof".