Re: extremely high 'db file parallel write' on ZFS

From: gengmao <gengmao_at_gmail.com>
Date: Tue, 27 Apr 2010 11:37:24 +0800
Message-ID: <r2rafe23fd51004262037v2df54306t8430332193d355a0_at_mail.gmail.com>



filesystemio_options was set to DIRECTIO for this database. But it seems ZFS does not support direct I/O, right?
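
For reference, here is a minimal sketch of how both settings could be double-checked from the shell; it assumes a local "/ as sysdba" connection and is only illustrative, not output from this system:

# Confirm what Oracle has been asked to do for datafile I/O.
# As I understand it, filesystemio_options=DIRECTIO requests direct I/O only
# (no async I/O for filesystem files); SETALL would request both. ZFS ignores
# the direct I/O request, so the writes still pass through the ARC either way.
sqlplus -S "/ as sysdba" <<'EOF'
show parameter filesystemio_options
show parameter disk_asynch_io
EOF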

On Mon, Apr 26, 2010 at 9:02 PM, Andrew Kerber <andrew.kerber_at_gmail.com> wrote:

> I am not really that familiar with ZFS, but is async IO or its equivalent
> turned on for that drive?
>
>
> On Mon, Apr 26, 2010 at 6:03 AM, gengmao <gengmao_at_gmail.com> wrote:
>
>> Hello,
>>
>> I am running a batch job on an Oracle 10.2.0.3 database, which is located
>> on a ZFS file system. The job runs very slowly. I got the AWR report, and
>> it shows the following top events.
>>
>> Top 5 Timed Events
>> ~~~~~~~~~~~~~~~~~~
>> Event                                       Waits  Time (s) Avg wait (ms) %Total Call Time  Wait Class
>> ----------------------------------------- -------- --------- ------------- ----------------  -------------
>> db file parallel write                         544     5,579         10256            220.2  System I/O
>> log file parallel write                      6,345     1,079           170             42.6  System I/O
>> CPU time                                                  697                          27.5
>> log file switch (checkpoint incomplete)        651       638           981             25.2  Configuration
>> db file sequential read                    241,752       546             2             21.5  User I/O
>>
>> Obviously it is very I/O intensive. But I can't understand why the average
>> 'db file parallel write' wait is so high: 5,579 seconds across 544 waits
>> works out to roughly 10 seconds per wait. Normally it should be close to
>> the average I/O service time, right?
>>
>> The following is sample output from iostat -xn 5. The asvc_t does not seem
>> very high.
>>                          extended device statistics
>>     r/s    w/s    kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  device
>>     0.0    0.0     0.0     0.0   0.0   0.0     0.0     0.0   0   0  c1t0d0
>>     0.0    1.8     0.0     6.0   0.0   0.0     0.0     0.1   0   0  c0t0d0
>>    28.2   27.4  3596.2  1140.8   0.0   0.6     0.0    10.7   0  20  c0t1d0
>>    30.6   17.2  3916.2  1204.4   0.0   1.0     0.0    21.5   0  29  c0t2d0
>>    28.2   22.2  3609.0  1261.5   0.0   0.8     0.0    16.8   0  18  c0t3d0
>>    36.0   12.8  4581.7  1268.1   0.0   1.1     0.0    21.9   0  24  c0t4d0
>>    31.6   12.6  4044.2  1257.2   0.0   0.8     0.0    17.3   0  22  c0t5d0
>>    31.6   12.6  4044.2  1268.9   0.0   0.9     0.0    20.8   0  29  c0t6d0
>>    30.4   12.8  3890.6  1177.7   0.0   1.0     0.0    22.5   0  27  c0t7d0
>>
>> The following is sample output from zpool iostat -v 5:
>>                capacity     operations    bandwidth
>> pool         used  avail   read  write   read  write
>> ----------  -----  -----  -----  -----  -----  -----
>> oracle       332G   620G    175  1.33K  22.0M  31.9M
>>   c0t1d0    47.3G  88.7G     23    196  2.97M  4.69M
>>   c0t2d0    47.4G  88.6G     31    194  3.99M  4.53M
>>   c0t3d0    47.4G  88.6G     29    195  3.67M  4.43M
>>   c0t4d0    47.3G  88.7G     20    190  2.60M  4.33M
>>   c0t5d0    47.4G  88.6G     24    191  3.12M  4.38M
>>   c0t6d0    47.4G  88.6G     21    194  2.72M  4.80M
>>   c0t7d0    47.3G  88.7G     23    198  2.90M  4.74M
>> ----------  -----  -----  -----  -----  -----  -----
>>
>> Could anyone explain the extremely high 'db file parallel write' waits? Is
>> it possible to tune ZFS to get better performance?
>>
>> Thanks in advance!
>> Eric Geng
>>
>>
>
>
> --
> Andrew W. Kerber
>
> 'If at first you don't succeed, don't take up skydiving.'
>
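
On the "is it possible to tune ZFS" question quoted above, the usual starting point for Oracle on ZFS is something like the sketch below; the dataset name is a placeholder, and whether any of it explains a 10-second 'db file parallel write' average is a separate question:

# Placeholder: oracle/data stands in for whatever dataset actually holds the
# datafiles on this system.
zfs get recordsize,logbias,primarycache oracle/data
# Match the datafile dataset's recordsize to db_block_size (8 KB here) so ZFS
# does not read-modify-write 128 KB records for every 8 KB block write.
zfs set recordsize=8k oracle/data
# If the installed ZFS version supports logbias, bias large datafile writes
# away from the ZIL; redo log datasets are normally left at the default.
zfs set logbias=throughput oracle/data
# Note: recordsize only applies to files created after the change, so existing
# datafiles would need to be copied or recreated to pick it up.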

--
http://www.freelists.org/webpage/oracle-l