Re: high Block change tracking waits

From: Jonathan Lewis <jlewisoracle_at_gmail.com>
Date: Thu, 4 Feb 2021 15:28:51 +0000
Message-ID: <CAGtsp8n5Jc6rCL=2s+D93NPRmq0qVb4uj9Mv3vyxLsTY7MsKpA_at_mail.gmail.com>



To a large extent the size of the BCT file is dictated by the number of files in the database (and by the sizes of those files). IIRC there are 8 "generations" of BCT bitmaps for each datafile, and each bit in a bitmap corresponds to a 32KB chunk of a data file. So the reason for the slowdown might be that you've been adding and/or growing datafiles over time but the buffer hasn't been growing to match.
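As a rough back-of-envelope check, something like the following estimates the bitmap space implied by that layout (a sketch only: it assumes one bit per 32KB chunk and 8 generations as described above, and ignores per-file overhead and the file's minimum allocation):

```sql
-- Estimated BCT bitmap space from current datafile sizes.
-- Assumptions: 1 bit per 32KB chunk, 8 generations, no overhead.
select round(sum(bytes) / (32 * 1024)   -- bits: one per 32KB chunk
             * 8                        -- 8 generations of bitmaps
             / 8                        -- bits -> bytes
             / 1024 / 1024) as approx_bct_mb
  from v$datafile;
```

Comparing that figure with the actual size of the change tracking file would show how much of the 22GB is bitmap versus overhead/preallocation.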

I've just checked the event names for 11.2.0.4 and there are two that might be relevant

SQL> l
  1* select name, parameter1, parameter2, parameter3 from v$event_name where lower(name) like 'change%write'
SQL> /

NAME                                     PARAMETER1      PARAMETER2      PARAMETER3
---------------------------------------- --------------- --------------- ---------------
change tracking file synchronous write   block#          blocks
change tracking file parallel write      blocks          requests


When your foreground session is waiting on "block change tracking buffer space" it's possible that the only thing it's waiting for is CTWR writing the buffer to disc, so look at the waits (v$session_event) for CTWR and check what those write times look like. Possibly they are just very slow.
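Something along these lines would pull out the CTWR waits (a sketch; it assumes the CTWR background process can be identified through v$session.program, which is the case in 11.2):

```sql
-- Wait history for the CTWR background process, slowest first.
select e.event,
       e.total_waits,
       e.time_waited_micro,
       round(e.time_waited_micro / nullif(e.total_waits, 0)) as avg_wait_us
  from v$session s,
       v$session_event e
 where s.program like '%CTWR%'
   and e.sid = s.sid
 order by e.time_waited_micro desc;
```

If the two "change tracking file ... write" events show large average times, the problem is the disc the BCT file sits on rather than the buffer itself.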

The thought relating to "change with time" is that bitmaps have to be set when blocks are changed - so if you have a large number of block changes scattered over a large volume of space in the data files you will generate a lot of change in the BCT file - and very often as databases grow (and indexes, in particular, expand) the number of blocks you change in a unit of time increases.

One possibility that you could put to Oracle Support is whether increasing the size of the change tracking buffer would help. There are a couple of hidden parameters that are relevant. I'd guess that if your buffer were a little larger than one-eighth of the size of the file then you might stop processes from needing to find space in the buffer (that's me assuming that the buffer could be mapped to the file, which may be an invalid assumption). Check the hidden "bct" parameters; there's one called _bct_buffer_allocation_max
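To list those hidden parameters you'd have to query the x$ structures as SYS, since they aren't exposed through v$parameter (a sketch; the usual x$ksppi/x$ksppcv join):

```sql
-- Run as SYS: hidden parameters whose names start with "_bct".
select i.ksppinm  as name,
       v.ksppstvl as value,
       i.ksppdesc as description
  from x$ksppi  i,
       x$ksppcv v
 where i.indx = v.indx
   and i.ksppinm like '\_bct%' escape '\';
```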

It would be interesting to see how that compares in size with your 22GB file.

Regards
Jonathan Lewis

On Thu, 4 Feb 2021 at 09:55, Lok P <loknath.73_at_gmail.com> wrote:

> One thing we noticed from the wait event pattern: there was a very low
> amount of these waits (block change tracking buffer space) in the past, but
> it gradually started increasing and is now enough to impact the
> application queries. How do we debug/fix this? Or could it be related to the
> slowness of the underlying disk on which the BCT file lies?
>
> The BCT file size is 22GB.
>
> On Thu, Feb 4, 2021 at 2:35 AM Lok P <loknath.73_at_gmail.com> wrote:
>
>> Below is a sample batch insert query with its SQL Monitor report, which
>> shows almost all the time in the "Other" wait class, which is nothing but
>> "block change tracking buffer space".
>>
>>
>> SQL Text
>> ------------------------------
>> INSERT INTO TABLE1 (.....) VALUES(:1 , :2 , :3 , :4 , :5 ,
>> :6 , :7 , :8 , :9 , :10 , :11 , :12 , :13 , :14 , :15 , :16 , :17 , :18 ,
>> :19 , :20 , :21 , :22 , :23 , :24 , :25 , :26 , :27 , :28 , :29 , :30 , :31
>> , :32 , :33 , :34 , :35 , :36 , :37 , :38 , :39 , :40 , :41 , :42 , :43 ,
>> :44 , :45 , :46 ,:47 , :48 , :49 , :50 , :51 , :52 , :53 , :54 , :55 , :56
>> , :57 , :58 , :59 , :60 , :61 , :62 , :63 , :64 , :65 , :66 , :67 , :68 ,
>> :69 , :70 , :71 ,:72 ,:73 ,:74 ,:75 )
>>
>> Global Information
>> ------------------------------
>> Status : DONE
>> Instance ID : 2
>> SQL Execution ID : 36104551
>> Execution Started : 02/03/2021 15:56:51
>> First Refresh Time : 02/03/2021 15:56:55
>> Last Refresh Time : 02/03/2021 15:58:43
>> Duration : 112s
>> Module/Action : JDBC Thin Client/-
>> Program : JDBC Thin Client
>>
>> Global Stats
>>
>> ============================================================================================
>> | Elapsed | Cpu     | IO       | Concurrency | Cluster  | Other    | Buffer | Read | Read  |
>> | Time(s) | Time(s) | Waits(s) | Waits(s)    | Waits(s) | Waits(s) | Gets   | Reqs | Bytes |
>> ============================================================================================
>> |     112 |    0.19 |     1.23 |        0.00 |     0.00 |      111 |  17090 |  555 |   4MB |
>> ============================================================================================
>>
>> SQL Plan Monitoring Details (Plan Hash Value=0)
>>
>> ==============================================================================================================================
>> | Id | Operation               | Name | Rows    | Cost | Time      | Start  | Execs | Rows     | Read | Read  | Activity | Activity Detail |
>> |    |                         |      | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes | (%)      | (# samples)     |
>> ==============================================================================================================================
>> |  0 | INSERT STATEMENT        |      |         |      |           |        |     1 |          |      |       |          |                 |
>> |  1 | LOAD TABLE CONVENTIONAL |      |         |      |         1 |   +112 |     1 |        0 |   54 | 432KB |          |                 |
>> ==============================================================================================================================
>>
>> On Thu, Feb 4, 2021 at 2:25 AM Lok P <loknath.73_at_gmail.com> wrote:
>>
>>> Hi, we are on an 11.2.0.4 Oracle database, and suddenly over the
>>> last 5-10 days we are seeing many queries (mostly inserts) running
>>> longer, with the waits showing as "block change tracking buffer space",
>>> and it's getting worse day by day. We do have BCT enabled on this database.
>>> So we're wondering why these waits were not there previously - the database
>>> has had BCT enabled for a long time - yet they suddenly appeared over the
>>> last few days and are impacting application queries.
>>>
>>> Even simple insert queries like insert into table(c1,c2,c3) values(:1,
>>> :2, :3) (which execute with a batch size of ~1000) are suffering. What
>>> can be the cause of this?
>>>
>>> Regards
>>> Lok
>>>
>>>
>>>

--
http://www.freelists.org/webpage/oracle-l
Received on Thu Feb 04 2021 - 16:28:51 CET
