Re: exadata write performance problems

From: Ls Cheng <exriscer_at_gmail.com>
Date: Sun, 17 Feb 2019 16:40:07 +0100
Message-ID: <CAJ2-Qb9w1FVH2v8Q2OdJ4T-HxFzWpPPbataDzDnu+9rKaWuXOQ_at_mail.gmail.com>



Hi Patrick

I believe critical writes from LGWR and CKPT have high priority by default. The problem in our situation is that when a direct path read or smart scan fires, the object's dirty blocks have to be flushed to disk first. In this case I believe CKPT has to wait for DBWR to finish flushing the dirty blocks; since DBWR is writing slowly because the storage is saturated, CKPT slows down as well, causing the enq: KO locking situation.
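
To confirm that chain while it is happening, something like this sketch can show who is stuck on enq: KO and which session is blocking them (assumes sqlplus on the PATH and OS authentication on a RAC node; adjust the connect string for your environment):

```shell
# Sketch: current sessions waiting on enq: KO and their blockers,
# plus any session currently inside "control file parallel write"
# (wait_time = 0 in GV$SESSION means the wait is in progress).
sqlplus -s / as sysdba <<'EOF'
set linesize 200 pagesize 100
select inst_id, sid, program, event,
       blocking_instance, blocking_session
from   gv$session
where  event like 'enq: KO%'
   or  (event = 'control file parallel write' and wait_time = 0);
EOF
```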

I will have a look at AUTO IORM as you suggest.
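
For the record, the objective is checked and changed per cell with CellCLI, usually scripted across all cells with dcli. A sketch, assuming a cell_group file listing the cell hosts and a celladmin login (verify the syntax against the docs for your cell software version):

```shell
# Show the current IORM plan and objective on every cell
dcli -g cell_group -l celladmin cellcli -e "list iormplan detail"

# Switch to the recommended objective (auto)
dcli -g cell_group -l celladmin cellcli -e "alter iormplan objective = auto"
```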

Thank you

On Sat, Feb 16, 2019 at 1:05 AM Patrick Jolliffe <jolliffe_at_gmail.com> wrote:

> >>IORM is default
> I'd seriously look at changing that; we've had a few problems with the
> default setting (BASIC) when hitting high IO load, and things improved
> significantly after implementing the recommended setting (AUTO). This
> seems to prioritise critical database writes (DBWR/LGWR/???) over others.
> I still don't know why they haven't changed this default if it's not the
> recommended value.
> A starting point would be reading section 6.5.1 at the following URL:
>
> https://docs.oracle.com/en/engineered-systems/exadata-database-machine/sagug/exadata-storage-server-iorm.html
> Patrick
>
> On Wed, 13 Feb 2019 at 07:16, Ls Cheng <exriscer_at_gmail.com> wrote:
>
>> Hi
>>
>> Running Exadata 18.1.5.0.0. Grid 12.2.0.1, RDBMS 11.2.0.4 and 12.2.0.1.
>> IORM is default, no custom configuration. Controlfiles are in the DATA and
>> FRA disk groups, and ASM redundancy is HIGH.
>>
>>
>> Thanks
>>
>>
>> On Tue, Feb 12, 2019 at 11:21 PM Andy Wattenhofer <watt0012_at_umn.edu>
>> wrote:
>>
>>> Which Exadata software version are you running? Which grid and database
>>> versions? Are you using IORM? What is your control_files parameter set to
>>> (i.e., where are your control files)? And what are your ASM redundancy
>>> levels for each of the disk groups?
>>>
>>> On Tue, Feb 12, 2019 at 3:16 PM Ls Cheng <exriscer_at_gmail.com> wrote:
>>>
>>>> Hi
>>>>
>>>> IHAC who has a 1/8 Exadata X6-2 with High Capacity disks, and he is
>>>> having heavy performance problems whenever a massive DML operation
>>>> kicks in. Since this is a 1/8 configuration the write IOPS capacity is
>>>> not high, roughly 1200 IOPS, yet I am seeing as high as 4000 physical
>>>> writes per second at peak times. When this happens user sessions start
>>>> suffering because they are blocked by "enq: KO - fast object
>>>> checkpoint", which in turn is blocked by "control file parallel write"
>>>> in CKPT. So the idea is to alleviate CKPT. This is from the ASH
>>>> history:
>>>>
>>>> INSTANCE_NUMBER SAMPLE_TIME               EVENT                        TIME_WAITED SESSION P1  P2 P3
>>>> --------------- ------------------------- ---------------------------- ----------- ------- -- --- --
>>>>               2 12-FEB-19 12.11.24.540 AM control file parallel write      1110465 WAITING  2  41  2
>>>>               2 12-FEB-19 12.16.34.754 AM Disk file Mirror Read            1279827 WAITING  0   1  1
>>>>               1 12-FEB-19 12.16.44.012 AM control file parallel write      1820977 WAITING  2  39  2
>>>>               2 12-FEB-19 12.20.34.927 AM control file parallel write      1031042 WAITING  2 856  2
>>>>               1 12-FEB-19 12.21.14.256 AM control file parallel write      1905266 WAITING  2   3  2
>>>>               2 12-FEB-19 12.21.14.977 AM control file parallel write      1175924 WAITING  2  42  2
>>>>               1 12-FEB-19 12.21.54.301 AM control file parallel write      2164743 WAITING  2 855  2
>>>>               2 12-FEB-19 12.22.35.036 AM control file parallel write      1581684 WAITING  2   4  2
>>>>               1 12-FEB-19 12.23.44.381 AM control file parallel write      1117994 WAITING  2   3  2
>>>>               1 12-FEB-19 12.23.54.404 AM control file parallel write      4718841 WAITING  2   3  2
>>>>
>>>> When this happens we observe these cell metrics:
>>>>
>>>> CELL METRICS SUMMARY
>>>>
>>>> Cell Total Flash Cache: IOPS=13712.233 Space
>>>> allocated=6083152MB
>>>> == Flash Device ==
>>>> Cell Total Utilization: Small=27.8% Large=14.2%
>>>> Cell Total Throughput: MBPS=471.205
>>>> Cell Total Small I/Os: IOPS=9960
>>>> Cell Total Large I/Os: IOPS=6005
>>>>
>>>> == Hard Disk ==
>>>> Cell Total Utilization: Small=69.5% Large=18.7%
>>>> Cell Total Throughput: MBPS=161.05
>>>> Cell Total Small I/Os: IOPS=5413.618
>>>> Cell Total Large I/Os: IOPS=166.2
>>>> Cell Avg small read latency: 245.67 ms
>>>> Cell Avg small write latency: 62.64 ms
>>>> Cell Avg large read latency: 308.99 ms
>>>> Cell Avg large write latency: 24.65 ms
>>>>
>>>>
>>>> We cannot enable write-back flash cache right now because that may
>>>> cause other problems, and although we are in the process of upgrading
>>>> the 1/8 cells to 1/4 cells, that will take some months. I know it is
>>>> not a best practice, but in the meantime I was thinking of carving out
>>>> some flash space, creating grid disks there, and storing the
>>>> controlfiles in flash. Does anyone have experience with such a setup?
>>>>
>>>> TIA
>>>>
>>>>
>>>>

--
http://www.freelists.org/webpage/oracle-l
Received on Sun Feb 17 2019 - 16:40:07 CET
