Oracle FAQ | Your Portal to the Oracle Knowledge Grid |
Home -> Community -> Mailing Lists -> Oracle-L -> Re: 10g System statistics - single and multi
Wolfgang,
It was a long weekend here, with a holiday on Monday. Sorry for the delay.
>I disagree and raise you one :-)
Cool! I am glad you put some humour in. I was getting the impression
that this discussion was getting a bit tense. I hope it's an
interesting topic for you too.
Let's forget about striping, disk head position, RAID levels, etc. Let's leave those to the SAN. Today's SANs have good read-ahead and write-back capabilities, and when I think about their caches, I prefer to refer to them as "working memory" rather than cache memory. So it doesn't matter where the disk heads are, and whether you request your next "sequential" read within 1 ms or 100 ms: the SAN will pre-fetch as needed (based on its algorithms, which we're not discussing) and the next few blocks will be found in the SAN working (cache) memory.
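As a toy illustration of the read-ahead behaviour described above, here is a minimal sketch. The prefetch window size and the "consecutive block" detection rule are invented for illustration and do not model any vendor's actual algorithm:

```python
import random

# Toy model of SAN read-ahead: a detected sequential access triggers a
# prefetch of the next PREFETCH blocks into cache, so subsequent reads
# in the stream become cache hits. All parameters are illustrative.
PREFETCH = 8  # hypothetical read-ahead window, in blocks

def run_stream(blocks):
    """Return (cache_hits, physical_reads) for a list of block numbers."""
    cache = set()
    hits = reads = 0
    last = None
    for b in blocks:
        if b in cache:
            hits += 1
        else:
            reads += 1
            if last is not None and b == last + 1:
                # sequential access detected: prefetch the blocks ahead
                cache.update(range(b + 1, b + 1 + PREFETCH))
        last = b
    return hits, reads

seq = list(range(100))                   # "sequential" read pattern
random.seed(1)
rnd = random.sample(range(10000), 100)   # "random" read pattern

print(run_stream(seq))  # most reads served from the prefetch cache
print(run_stream(rnd))  # almost every read goes to disk
```

With the sequential pattern only every ninth request misses, which is why the effective per-read time stays so low; the random pattern defeats the prefetch entirely.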
"I presume with "sequential read" you mean reads of consecutive blocks. If we agree on that definition then it is my proposition that such a thing exists only in an isolated single-user lab environment."
Yes, this is what I mean by sequential. Agreed, but modern SANs help this issue.
Single vs multi user:
The SAN should be able to perform similarly with multiple
sequential streams, pre-fetching into working memory as needed. Minimal
testing on the OS side reveals the SAN is indeed capable of doing this.
>>Time expectations are the following:
>>- sreadtim for sequential = 0.2-0.5 ms
>>- sreadtim for random = 5-10 ms
>
>Again, what are the circumstances for your measurements? Are you measuring
>on the OS level? What else was happening in the system/database?
Tests were performed on the OS side, with ZERO other activity on the disk drives.
>> >The cute thing is that by multiplying the plan cost with sreadtim you
>> >get an estimate for the elapsed time of the sql:
>>
>>I disagree. This would only be valid for FTS. The value for sreadtim
>>is for sequential single reads. Random single reads have a much higher
>>sreadtim.
>
>I have to disagree again. I just did a mathematical transformation of the
>cost formula as documented in the Performance Tuning Guide.
I was referring to the timing differences, not the formula itself. Given that
FTS will mostly issue sequential multi-block reads, while range scans will
issue random single-block reads, the estimate will be off by a
significant factor.
I tend to think about "cost" as time measured in "sreadtim" units.
>And your statement that "random single reads have a much higher sreadtim"
>is not correct. SREADTIM is the average of all single block reads over the
>measurement period, comprising "random" as well as so-called "sequential"
>reads.
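To make the cost-to-elapsed-time relationship concrete, here is a small sketch based on the cost formula as documented in the Performance Tuning Guide. The workload numbers (read counts, timings) are invented for illustration:

```python
# Sketch of the documented CBO cost formula:
#   cost = (#SRds*sreadtim + #MRds*mreadtim + #CPUCycles/cpuspeed) / sreadtim
# so cost * sreadtim is an elapsed-time estimate. Numbers are illustrative.

def plan_cost(srds, mrds, cpu_cycles, sreadtim, mreadtim, cpuspeed):
    """Plan cost expressed in units of single-block read time (sreadtim)."""
    io_cpu_ms = srds * sreadtim + mrds * mreadtim + cpu_cycles / cpuspeed
    return io_cpu_ms / sreadtim

sreadtim, mreadtim, cpuspeed = 5.0, 10.0, 1000.0  # ms, ms, cycles per ms

# Index range scan: 1000 single-block reads, negligible CPU.
cost = plan_cost(1000, 0, 0, sreadtim, mreadtim, cpuspeed)
est_ms = cost * sreadtim
print(cost, est_ms)  # 1000.0 cost units -> 5000.0 ms estimated

# If those reads are truly random and actually take 10 ms each, the
# real elapsed time is double the estimate:
print(1000 * 10.0 / est_ms)  # 2.0
```

This is exactly the point of contention: the arithmetic of cost * sreadtim is sound, but the estimate is only as good as how well the measured sreadtim represents the reads the plan will actually do.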
About your tests: very nice! As soon as I have a database on my system, I will repeat them.
"oltp" test:
Your timings seem relatively low (i.e. too fast). This would be
due to disk locality. In my OS-side testing, I ran random IO across
the whole disk drive and random IO across 1/3 of the drive. The
difference is about 3x in IO capacity (I forgot to get response time
results, but I assume the same difference). Thus I think if you were
to do IO over the whole array, your average times for single-block
reads would at the very least double, which would make the difference
significant. I am not sure how you could test this easily on the Oracle
side. One way I can think of, if the file system on the array in
question is relatively empty, is to create a tablespace with multiple
large files until it fills up the entire file system. Oracle will then
use its round-robin extent allocation and the data will be somewhat
spread over the whole array. I am putting this test on my "todo" list
too.
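The roughly 3x locality effect has a simple geometric explanation: the average seek distance between two uniformly random positions on a span is one third of the span, so confining IO to 1/3 of the drive cuts the average seek distance by about 3x. A quick Monte Carlo sketch (illustrative only; it models seek distance, not full service time):

```python
import random

def avg_seek(span, n=100_000, seed=42):
    """Average distance between consecutive uniform random positions on [0, span]."""
    rng = random.Random(seed)
    pos = [rng.uniform(0, span) for _ in range(n + 1)]
    return sum(abs(b - a) for a, b in zip(pos, pos[1:])) / n

full = avg_seek(1.0)        # random IO over the whole drive (normalized)
third = avg_seek(1.0 / 3)   # random IO confined to 1/3 of the drive

print(full, third, full / third)  # ~0.333, ~0.111, ratio ~3
```

Seek time is not linear in seek distance on real drives, so the observed ~3x capacity difference also includes settling time and rotational effects, but the direction and rough magnitude match.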
"warehouse" test:
I am curious, did you record the multi-block read timings for the sessions
with mbrc=16?
In general.
The system I have in mind will have a very low IO workload relative to
what the arrays will be able to handle. Occasionally there will be
queries that do FTS or large index range scans and consume 95%
of the IO on the system, so the timings will be very different
depending on the case.
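A rough sketch of why a single averaged sreadtim is misleading for such a mixed workload (all numbers invented for illustration):

```python
# Illustrative numbers only: a light baseline plus occasional heavy
# scan bursts. One blended average represents neither regime well.

baseline_reads, baseline_ms = 5_000, 1.0   # quiet-period single-block reads
burst_reads, burst_ms = 95_000, 8.0        # scan-driven random reads (95% of IO)

total_reads = baseline_reads + burst_reads
blended_avg = (baseline_reads * baseline_ms
               + burst_reads * burst_ms) / total_reads
print(blended_avg)  # 7.65 ms: far from the 1 ms the quiet periods see
```

Because the bursts dominate the read count, the gathered statistic ends up describing the bursts, and plans costed for the quiet periods inherit timings that are off by a large factor.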
>honest, the oltp_style average is not from all entries, only from the first
>65535 lines that fit into Excel. But I believe the excerpt from the trace
>lines shows that the dw_style elapsed times are just as variable as the
Good enough.
RAC and system stats:
CPUSPEED is the one that should have multiple values per instance, I agree; someone just forgot RAC, or did not have time to implement it. For sreadtim/mreadtim I also agree: those should be per segment. In that same system I have in mind, we're already splitting the data onto different arrays based on access patterns, and it would not make sense to use the same system stats to cost both sets of objects.
On 5/21/05, Wolfgang Breitling <breitliw_at_centrexcc.com> wrote:
> Ok, here is my test. To verify my proposition that "sequential" single
> block reads become random single block reads when many user sessions are
> involved I borrowed Tom Kyte's big_table and oltp_style (random reads) and
> dw_style (FTS = sequential reads) procedures from his "Effective Oracle by
> Design" book. I created 3 separate "big_tables" of 400,000 rows each. Then
> I ran two tests. One to run several concurrent sessions invoking the
> oltp_style procedure, one of which I traced with level 8. For the other
> test I ran several concurrent sessions invoking the dw_style procedure,
> setting db_file_multiblock_read_count to between 12 and 16, except for the
> one session that is being traced, where dfmrc is set to 1, generating
> "sequential" single block reads. Below are some of the db file sequential
> read wait lines from the traces:
>
>
> Regards
>
> Wolfgang Breitling
> Centrex Consulting Corporation
> http://www.centrexcc.com
--
Christo Kutrovsky
Database/System Administrator
The Pythian Group
-- http://www.freelists.org/webpage/oracle-l
Received on Tue May 24 2005 - 10:53:58 CDT