Benchmarking UPDATE Statements [message #224859] Thu, 15 March 2007 15:08
SnippetyJoe
Hi All,

I'm trying to set up a test harness to compare different approaches to the same problem. What I've got so far works OK for SELECT statements, but for UPDATE statements I'm getting vastly different results (mostly in the redo statistics) from one run to the next when I use *the same command* in each run.

I'm running my tests on Oracle XE on a single-user Windows PC with no activity going on other than my testing. Here are the steps I'm taking; a rough sketch of the stats capture follows the list.
  1. create test tables
  2. set the instance to a neutral state
  3. perform a simple update
  4. set the instance to a neutral state
  5. perform a similar simple update on another table
  6. set the instance to a neutral state
  7. perform a similar simple update on another table
  8. report statistics for each update
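
In outline, the stats capture behind each run looks something like this (a simplified sketch, not the exact harness; t1 and c1 are placeholder names):

create table stat_before as
  select sn.name, st.value
  from   v$statname sn, v$mystat st
  where  sn.statistic# = st.statistic# ;

update t1 set c1 = 0 where c1 = 1 ;   -- the statement under test
commit ;

-- v$mystat is cumulative for the session, so report the deltas
select sn.name, st.value - sb.value as delta
from   v$statname sn, v$mystat st, stat_before sb
where  sn.statistic# = st.statistic#
and    sb.name       = sn.name
and    st.value - sb.value > 100
order  by sn.name ;

(Creating the snapshot table generates some activity of its own, so a real harness would want to snapshot into a pre-created table or PL/SQL collection instead.)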

A full log of what I did is available in the attached file called test-log-1.txt. The end results were as follows.

=--------------------------------------------------------------------=
= Statistics that differ by more than 100
=--------------------------------------------------------------------=
Attempt 1:

METRIC_NAME                           Run 1        Run 2        Run 3
------------------------------ ------------ ------------ ------------
buffer is pinned count               28,389        9,984        9,984
db block changes                     74,061       20,145       20,144
db block gets                        54,394       10,235       10,233
db block gets from cache             54,394       10,235       10,233
free buffer requested                   318          141          140
physical read bytes                 196,608      180,224      180,224
physical read total bytes           196,608      180,224      180,224
redo entries                         36,895       10,010       10,010
redo size                         7,645,992    2,328,832    2,328,704
rollback changes - undo record        8,433            0            0
session logical reads                54,538       10,331       10,329
session uga memory max              254,380      261,964      254,380
table scan rows gotten               28,557       10,000       10,000
undo change vector size           2,042,836      761,880      761,880

Attempt 2:

METRIC_NAME                           Run 1        Run 2        Run 3
------------------------------ ------------ ------------ ------------
buffer is pinned count               27,079       21,951       23,727
db block changes                     68,798       48,202       55,333
db block gets                        49,108       28,421       35,584
db block gets from cache             49,108       28,421       35,584
physical read bytes                 204,800      212,992      212,992
physical read total bytes           204,800      212,992      212,992
redo entries                         34,269       23,992       27,550
redo size                         7,130,744    5,114,376    5,812,536
rollback changes - undo record        7,121        1,986        3,764
session logical reads                49,250       28,556       35,721
session uga memory max              254,380      254,380      261,964
table scan rows gotten               27,243       22,644       23,958
undo change vector size           1,943,140    1,552,936    1,688,048


=--------------------------------------------------------------------=
= Latch Gets that differ by more than 100
=--------------------------------------------------------------------=
Attempt 1:

METRIC_NAME                           Run 1        Run 2        Run 3
------------------------------ ------------ ------------ ------------
cache buffers chains                238,484       50,843       50,860
cache buffers lru chain                 350          156          158
object queue header operation           614          254          255
redo allocation                      36,915       10,018       10,018
row cache objects                    25,563          245          257
session allocation                      604          602          792
simulator hash latch                    482          673          683
simulator lru latch                     438          665          664
                               ------------ ------------ ------------
sum                                 303,450       63,456       63,687

Attempt 2:

METRIC_NAME                           Run 1        Run 2        Run 3
------------------------------ ------------ ------------ ------------
cache buffers chains                218,715      141,318      168,113
object queue header operation           584          471          515
redo allocation                      34,289       24,006       27,587
row cache objects                    21,627        6,222       11,556
simulator hash latch                    484        1,470        3,004
simulator lru latch                     436        1,433        2,961
                               ------------ ------------ ------------
sum                                 276,135      174,920      213,736


As you can see, the results differ from one run to the next and from one attempt to the next. I would expect the results to be fairly similar since all runs perform virtually the same UPDATE statement.

I'd appreciate it if anyone could explain the cause of the differences or suggest a way to eliminate the variability (short of bouncing the instance between each run).

Thanks in advance.

--
SnippetyJoe
http://www.sqlsnippets.com/

Re: Benchmarking UPDATE Statements [message #224899 is a reply to message #224859] Thu, 15 March 2007 19:08
rleishman
Some things that come to mind:
- Did you flush the shared pool and buffer cache before each run?
- Did you update any indexed columns? If you reset to a neutral state using DML, then the first run may leave gaps in index leaf blocks and potentially split index blocks - something that probably would not occur in subsequent runs.
- Did any updates extend NUMBER/VARCHAR2 values, forcing row migration? Again, this is something that might not get "undone" if you reset to a neutral state with DML.

If you reset to a neutral state with a database restore and flush the shared pool / buffer cache, you could overcome these types of issues.

Ross Leishman
Re: Benchmarking UPDATE Statements [message #224901 is a reply to message #224899] Thu, 15 March 2007 19:55
SnippetyJoe
Thanks for the feedback Ross.

Re. "Did you flush the shared pool and buffer cache before each run? ... If you reset to neutral state with a database restore and flush the shared pool / buffer cache, you could overcome these types of issues."

Here's what I do before each UPDATE to bring the instance to what I call a "neutral" state.

SQL> ----------------------------------------
SQL> -- new SQL*Plus session starts here
SQL> ----------------------------------------
SQL> -- try a bunch of things to set the instance
SQL> -- to the same state for each run
SQL>
SQL> alter system checkpoint ;

System altered.

SQL> alter system flush shared_pool  ;

System altered.

SQL> alter system flush buffer_cache ;

System altered.

SQL> alter tablespace USERS offline ;

Tablespace altered.

SQL> alter tablespace USERS online  ;

Tablespace altered.


Are there any commands you would add to this? Since I'm creating and dropping test tables as part of each run, I don't need to restore the tables.


Re. "Did you update any indexed columns?" - nope.

Re. "If you reset to a neutral state using DML then the first run may leave blanks in leaf blocks and potentially split index blocks - something that probably would not occur in subsequent runs." ...

I didn't use any DML to reset things. I simply drop each table used in each run after the end of Run #3. (The test tables I'm updating are created right before Run #1.)

Re. "Did any updates extend numbers/varchars forcing row migration?"

Good point. I didn't consider row migration initially. However, since I'm performing the exact same update on identical tables, I would expect the effect of row migration to be the same for each run.

Regardless, I reran the test. Instead of updating NULLs to 0s, I changed the test to update 1s to 0s, so the new value is no longer than the old one. This should prevent migration. The results were the same as before: significantly different stats.
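
In other words, each run now does something like this (table and column names are placeholders):

-- before: updating NULLs to 0 grows each row and can migrate it
-- update t1 set c1 = 0 where c1 is null ;

-- now: old and new values are the same length, so no migration
update t1 set c1 = 0 where c1 = 1 ;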
Re: Benchmarking UPDATE Statements [message #224903 is a reply to message #224901] Thu, 15 March 2007 20:38
rleishman
I'm running a bit short on ideas then.

How did you restore the table data to the original content?

Archive logs could (unlikely) be another issue - you can't control when they are written (can you?).

I don't muck about much with the V$ statistics, but I think they are cumulative, right? To get the stats for a particular run, you need to subtract the "before" snapshot from the "after". This is probably a stupid question, but your "before" for the first run is taken immediately before the first run, right? It doesn't include any startup / database-open activity.
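
For example (a sketch; "redo size" is just one statistic of interest):

select st.value
from   v$mystat st, v$statname sn
where  st.statistic# = sn.statistic#
and    sn.name = 'redo size' ;   -- note this value, V1, immediately before the run

-- ... run the UPDATE under test ...

-- repeat the query immediately after to get V2; the run's redo is V2 - V1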

Having said all that, it seems most likely that something happened in the first run that did not happen in the remaining runs. What could that be?
- Table data is organised differently on disk after run 1: row migration, index block splitting, anything else?
- Oracle's memory structures are in a different state after run 1: buffer cache, shared pool, anything else? Bouncing the database between each run should sort this out.
- Run 1 runs with a different execution plan. Dynamic sampling makes this more common on 10g - worth a look, but you said there were no indexes, so all updates should be full table scans.
- SYS tables are in a different state after run 1, resulting in different plans on recursive SQL. You could trace each session and compare the number of recursive parse, execute, and fetch steps in each run (see the sketch below).
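
Something like this would do for the trace (plain sql_trace, i.e. event 10046 at level 1; tkprof then summarizes the recursive statements):

alter session set sql_trace = true ;

-- ... run the UPDATE under test ...

alter session set sql_trace = false ;

-- then format the trace file on the server and compare runs:
-- tkprof <tracefile>.trc run1.prf sys=yes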

That's about it; can't think of anything else.

Ross Leishman