Re: benchmark to disprove myths
Ed Stevens wrote:
> My partner and I are still disagreeing on two prime issues covered in this
> summer's famous "Oracle Myths" discussion. We're also disagreeing on the value
> of separating redo logs, archive log files, and other data. All of this has to
> do with placement of different files on available disk arrays and how to best
> configure the disks in a new server.
>
> I am proposing a simple benchmark test.
>
> Have a PL/SQL block implement this pseudo-code:
>
> Create table my_test
> (empno number,
> last_name char(10),
> first_name char(10),
> street char(10),
> city char(10),
> state char(2),
> zip char(5))
>
>
> for x = 1 to 100,000
> insert row using x as the empno value
> commit
> next x
>
> for x = 1 to 100,000
> update my_test
> set last_name = 'xxxxxx',
> first_name = 'xxxxx',
> street = 'xxxxxx',
> city = 'xxxxxxx',
> state = 'xx',
> zip = 'xxxxx'
> where empno = x
> commit
> next x
>
> what I propose is running this with various placements of data, index, redo,
> rbs, and archive logs. What I hope to demonstrate is the value (or lack
> thereof) of
>
> 1 - separating index and data
> 2 - giving redo its own raid set
> 3 - giving archive logs their own set
>
> Does this sound like a reasonable means of proving/disproving long-held
> assumptions and the assurances of our hardware guys who insist that drives are
> now so fast that we can ignore these kinds of considerations?
>
>
>
> --
> Ed Stevens
> (Opinions expressed do not necessarily represent those of my employer.)
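For reference, a minimal runnable sketch of the quoted pseudo-code (table and column names come from the post; the one-commit-per-row behaviour matches the pseudo-code, while the index on empno is my assumption so that point 1, index/data separation, has something to measure):

create table my_test
( empno      number,
  last_name  char(10),
  first_name char(10),
  street     char(10),
  city       char(10),
  state      char(2),
  zip        char(5) );

-- assumed: an index on empno, otherwise the index/data placement test has nothing to exercise
create index my_test_idx on my_test (empno);

begin
  -- insert pass: 100,000 rows, one commit per row as in the pseudo-code
  for x in 1 .. 100000 loop
    insert into my_test (empno) values (x);
    commit;
  end loop;

  -- update pass: touch every column of every row, again committing per row
  for x in 1 .. 100000 loop
    update my_test
       set last_name  = 'xxxxxx',
           first_name = 'xxxxx',
           street     = 'xxxxxx',
           city       = 'xxxxxxx',
           state      = 'xx',
           zip        = 'xxxxx'
     where empno = x;
    commit;
  end loop;
end;
/

Run from SQL*Plus once per disk layout, timing each pass and comparing redo and archiver-related waits between runs.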
Late 1999 I/we ran a series of benchmark tests on heavy gear: Solaris 8, Oracle 8.1.6.2 (64-bit) on an E10000 with 64 CPUs, 64 GB RAM, and 3 fibre channel controllers handling 360 disks. We used 5 E6500s (24 CPUs / 24 GB RAM each) with A1000s as the front-end. We were doing approximately 20,000 transactions/second, using OCI calls and array processing for inserts/updates with array sizes of ~500. This setup generated up to 2 GB of redo in less than ~30 seconds, and we ran in archivelog mode.

To cope with the log writer writing sequentially and the archive processes both reading and writing, I used a setup like this: four or six redo log files (2 GB each) per LUN, each LUN made up of a stripe set containing 20 disks, run by Veritas with direct I/O configured to the minimum redo I/O size (512 bytes for Solaris). The logs are interleaved across the LUNs, say log 1,5,9,13 on LUN 1, log 2,6,10,14 on LUN 2, and so on.
This kills a myth: the claim that redo logs on striped disks don't work!
Archive to a similar LUN running UFS or Veritas. (Veritas IS very OK!)
The trick is to have enough redo logs on separate LUNs (or disks) that the archiver can keep up and not disturb or block the log writer. That way you avoid having the sequential writes from the log writer *and* the archiver processes reading 'out of sequence' (and maybe also writing) hit the very same physical units.
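To make the interleaving concrete, here is a sketch assuming four redo LUNs mounted as /redo1 .. /redo4 (hypothetical mount points) and 2 GB logs; group n lands on LUN ((n-1) mod 4) + 1, so every log switch moves the log writer to a different set of spindles while the archiver reads the just-filled log on another LUN:

-- /redo1 .. /redo4 are assumed mount points, one per striped LUN;
-- the pattern continues for as many groups as the archiver needs to keep up
alter database add logfile group 1 ('/redo1/redo_g01.log') size 2000m;
alter database add logfile group 2 ('/redo2/redo_g02.log') size 2000m;
alter database add logfile group 3 ('/redo3/redo_g03.log') size 2000m;
alter database add logfile group 4 ('/redo4/redo_g04.log') size 2000m;
alter database add logfile group 5 ('/redo1/redo_g05.log') size 2000m;
alter database add logfile group 6 ('/redo2/redo_g06.log') size 2000m;
alter database add logfile group 7 ('/redo3/redo_g07.log') size 2000m;
alter database add logfile group 8 ('/redo4/redo_g08.log') size 2000m;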
rgds
/Svend Jensen