Re: Huge plain inserts response time bad with db file sequential read wait event.
Date: Fri, 21 Dec 2018 14:00:09 +0530
Message-ID: <CAOGpvWpd=AC2pHJzPdycD89TVA5g2VRjM8LwcTwft9Z0S9Xnug_at_mail.gmail.com>
Thanks Harmandeep & Andy. Yes, I am already checking with the storage team on the cDOT filer performance, and it looks to me like the NetApp filer is taking on more load than it can handle.
Thanks Mladen,
These inserts are being fired by the Oracle GoldenGate replicat process, so I need to check how we can enable parallel DML for those sessions, and also try the other suggestions provided.
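
As a starting point, this is the sort of thing I plan to test at the session level first (the table names below are just placeholders, not our actual objects, and I still need to work out how to apply this under the replicat):

  ALTER SESSION ENABLE PARALLEL DML;

  -- direct-path, parallel insert; degree 4 is an arbitrary example
  INSERT /*+ APPEND PARALLEL(t, 4) */ INTO target_table t
  SELECT * FROM staging_table;

  -- direct-path inserts must be committed before the session
  -- can query the table again
  COMMIT;
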
Regarding system statistics, do system statistics affect only some queries? In my case I observe that for all the GoldenGate-related SQL_IDs, CPU costing is OFF; however, for the queries coming from the JDBC side I do see the statistics being used.
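
For anyone following along, I am checking for system statistics like this (sys.aux_stats$ is where they are stored, as far as I understand):

  -- list current system statistics, if any have been gathered
  SELECT pname, pval1
  FROM   sys.aux_stats$
  WHERE  sname = 'SYSSTATS_MAIN';
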
Regards,
Rakesh RA
On Fri, Dec 21, 2018 at 1:32 PM Andy Sayer <andysayer_at_gmail.com> wrote:
> I agree with Harmandeep, the average time for your db file sequential
> read is way too high for a modern system. You could be looking at about 1%
> of that! I would start by looking at what’s going on in your storage system
> - is it servicing more requests than it has the capacity for? Is it having
> to fall back to something slower because some of your hardware has gone
> bad?
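>
> If you want concrete numbers to take to them, something along these lines
> against v$event_histogram should show how the single-block read latency is
> distributed (just a sketch; on RAC you’d also want to group by instance
> via gv$event_histogram):
>
>   -- latency buckets for single-block reads
>   SELECT wait_time_milli, wait_count
>   FROM   v$event_histogram
>   WHERE  event = 'db file sequential read'
>   ORDER BY wait_time_milli;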
>
> Talk to your storage engineers, find out what speed their devices are
> supposed to be able to service random reads at, and let them know what
> you’re actually getting.
>
> It’s hard to tell if you’re doing more reads than one would expect,
> because this could be an array-based insert. It’s GG, so I would expect it
> to be fairly sensible about doing that. How many rows are being inserted
> per execution? You can estimate by checking executions and rows processed
> in v$sql.
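>
> For example, something like this (the sql_id is a placeholder for one of
> your GG insert statements):
>
>   SELECT sql_id, executions, rows_processed,
>          ROUND(rows_processed / NULLIF(executions, 0)) AS rows_per_exec
>   FROM   v$sql
>   WHERE  sql_id = '&sql_id';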
>
> Are you writing to the same storage that you’re replicating from? It would
> be strange if your source data was being written faster than your
> replicated data (which should be using efficient batching).
>
> Hope this helps,
> Andy
>
> On Fri, 21 Dec 2018 at 07:37, Mladen Gogala <gogala.mladen_at_gmail.com>
> wrote:
>
>>
>> On 12/20/2018 12:48 PM, Rakesh Ra wrote:
>> >
>> >
>> > Can someone guide me as to,
>> >
>> > a) Why INSERT is spending more time on db file sequential reads for PK
>> > index?
>>
>> Because the insert needs to check whether the row being inserted already
>> exists. To do that, Oracle has to probe the primary key index, which is
>> why you get db file sequential reads. If the table is large, the PK index
>> is likely to be large as well, and you will need a lot of reads. The root
>> block and a couple of branch blocks will likely be cached, but eventually
>> you will hit the situation in which you need to read both branch blocks
>> and leaf blocks from disk to check your data. There are various methods
>> to remedy that, one of which is enabling parallel DML. Another method of
>> speeding up inserts is to use COMPRESS FOR ALL OPERATIONS. The third is
>> to use hybrid columnar compression, if possible. You may even try using
>> an IOT. It's all done using the proven trial-and-error method.
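>>
>> For instance, something along these lines (table and column names are
>> made up, and the compression options depend on your version and
>> licenses):
>>
>>   -- OLTP table compression; in 12c the syntax is
>>   -- ROW STORE COMPRESS ADVANCED. Note that MOVE rebuilds
>>   -- the table and invalidates its indexes.
>>   ALTER TABLE target_table MOVE COMPRESS FOR ALL OPERATIONS;
>>
>>   -- or an IOT, which stores the rows inside the PK index itself,
>>   -- so the PK probe and the row write touch the same structure
>>   CREATE TABLE target_iot (
>>     id      NUMBER PRIMARY KEY,
>>     payload VARCHAR2(100)
>>   ) ORGANIZATION INDEX;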
>>
>> > b) Why CPU costing is showing as off in the execution plan for OGG
>> > queries only?
>>
>> Because you haven't collected system statistics. CPU costing is off when
>> the optimizer doesn't have access to system statistics.
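>>
>> Roughly like this (NOWORKLOAD is the simplest mode; gathering over a
>> representative workload interval is usually better):
>>
>>   -- no-workload gather: CPU speed plus basic I/O timings
>>   EXEC DBMS_STATS.GATHER_SYSTEM_STATS('NOWORKLOAD');
>>
>>   -- or bracket a representative busy period
>>   EXEC DBMS_STATS.GATHER_SYSTEM_STATS('START');
>>   -- ... let the workload run for a while ...
>>   EXEC DBMS_STATS.GATHER_SYSTEM_STATS('STOP');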
>>
>>
>> >
>> > Regards,
>> > RRA
>> --
>> Mladen Gogala
>> Database Consultant
>> Tel: (347) 321-1217
>> --
>> http://www.freelists.org/webpage/oracle-l
>>
>>
>>
--
http://www.freelists.org/webpage/oracle-l

Received on Fri Dec 21 2018 - 09:30:09 CET