Oracle FAQ | Your Portal to the Oracle Knowledge Grid |
Re: I/O and db_file_multiblock_read_count
I don't understand Mladen's and Stefan's recommendations about running dbms_stats.gather_system_stats.
Kevin says that he deliberately ran a test to check the performance
of multiblock reads.
He is not asking "how can I avoid multiblock reads?".
He is asking "why is the total time for 128-block reads not
significantly less than the total time for 16-block reads?".
Hemant
>On 12/8/06, Kevin Lidh
><<mailto:kevin.lidh_at_gmail.com>kevin.lidh_at_gmail.com> wrote:
>I was reading an article about the appropriate setting for
>db_file_multiblock_read_count. I'm on a HP-UX 11.11 64-bit system
>with Oracle 9.2.0.7.0. The original value was 16 and I bounced the
>database and ran a million record full-scan test (10046 trace) and
>then set the value to 128 (max value) and re-ran the test. Although
>I did have fewer calls to 'db file scattered read' (140 compared to
>1008), the time to get the same number of blocks was longer. A
>random example from the trace files looks like this:
><snip>
>And I ran the test again with 16 after my 128 test and the results
>were similar to the first test. The cumulative time for the 'db
>file scattered read' waits was 1.3181s for the 16 and 2.5149s when it was
>128. We use a Hitachi SAN and I know it has caching but I would
>still think that the fewer, bigger requests would be faster. Is
>there a measurement or setting on the operating system or even SAN
>that I can look at to help determine the optimal setting?
>
Hemant K Chitale
http://web.singnet.com.sg/~hkchital
-- http://www.freelists.org/webpage/oracle-l
Received on Sat Dec 09 2006 - 02:18:57 CST