Re: Buffer gets - array size, fetch size - High latch activity - buffer gets??? bug or default behavior
Date: Mon, 30 Dec 2019 06:36:12 +0530
Message-ID: <CAP-Rywy-DFcWyVyBKgcGRdpH0fdG5-SXVCE4jeWA9cnDwL-dJA_at_mail.gmail.com>
Thanks Mladen,
What Tanel and you said is accurate; I am just trying to piece together everything there is.
I am not sure how to thank you for the following sentence: "The real question
with the array fetch is how many network round-trips you have saved."
It all makes sense now. What if the application only needs to display the top
10 rows? When we search for a USB pen drive on Amazon, the search results
display only about 40 items unless we scroll or go to the next page, and we
rarely go to the second page. In those cases this fetchsize/buffer-gets
discussion falls apart, as the customer is in most cases satisfied with the
results on the first page.
In some cases we do need to see all the results (fraud checks or tickets in a
queue), so it basically depends on the application type: whether it is a
front-end or a back-end workload.
But the problem is in the cases where the fetch size has not been configured
and the application is a back-end process; this happens with many ORM
frameworks where developers overlook this setting.
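As a quick check on the round-trip side (just a sketch, assuming the session
can query V$MYSTAT and V$STATNAME: run it before and after the fetch loop and
diff the values):
select sn.name, ms.value
  from v$mystat ms
  join v$statname sn on sn.statistic# = ms.statistic#
 where sn.name in ('SQL*Net roundtrips to/from client', 'session logical reads');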
Thanks,
Vishnu
On Sun, Dec 29, 2019 at 9:53 PM Mladen Gogala <gogala.mladen_at_gmail.com> wrote:
> Hi Vishnu!
> That's entirely logical. The resources protected by latches are SGA
> buffers and their chains. The most likely situation in which a process will
> wait for a latch is SGA buffer allocation. The more data you need
> processed, the more buffers you need and the greater the likelihood of having
> to perform cleanup. Cleanup is a very intense and complex operation which
> uses latches galore. Jonathan's advice to use order by actually directs the
> data to PGA instead of SGA. PGA is not protected by latches because it
> isn't shared. The real question with the array fetch is how many network
> round-trips you have saved. Latches are a side effect and usually can be
> ignored. If they cannot, you should open a case with Oracle support.
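> For example, a minimal way to watch that side effect (a sketch, assuming
> access to V$LATCH) is to sample the parent latch statistics around the test:
>
> select name, gets, misses, sleeps
>   from v$latch
>  where name = 'cache buffers chains';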
> Regards
>
> On Sun, Dec 29, 2019, 01:17 Vishnu Potukanuma <vishnupotukanuma_at_gmail.com>
> wrote:
>
>> Thanks Tanel,
>>
>>
>> I performed many tests covering all the scenarios, and it appears that
>> this fetch size has serious implications at the database level, which
>> basically undermines the prefetch and batching capabilities at the I/O level.
>>
>>
>> I found the following cases where the side effects of low fetch sizes, in
>> terms of buffer gets and latches required, are minimal to none:
>>
>> - SORT ORDER BY to sort the results, in which case the rows obtained
>> from the SQL execution are first stored in the session’s PGA.
>>
>> - Use of aggregations such as GROUP BY, AVG, SUM, COUNT, DISTINCT,
>> etc.
>>
>> - Most operations that result in storing the results in the PGA.
>>
>> - The effect of reacquiring the latches decreases as the row size
>> increases, but a low FETCHSIZE eventually results in more IOPS on the
>> server, or in inefficient prefetching or batched access.
>>
>> - Since index leaf blocks can accommodate many entries, there is a high
>> chance that the same leaf block will be accessed multiple times (provided
>> that an indexed column value results in multiple rows). Considering 90-10
>> and 50-50 splits, rebuilding an index which had many 50-50 leaf block
>> splits will increase the number of repeat visits to each leaf block, as
>> the leaf block now accommodates more rows. In such cases it may be better
>> not to rebuild the index, or increasing the PCTFREE of the index should
>> help, but remember that increasing PCTFREE reduces the efficiency of the
>> buffer cache, as the blocks contain more free space.
>>
>> - Index-only scans are also affected, since prefetching cannot be done for
>> range-based queries as the next leaf block can be at any location, unless
>> the index is rebuilt.
>>
>> - I also found that if the application uses asynchronous programming or
>> calls other APIs for each row it obtains, it causes a mess at the buffer
>> cache level, especially if the blocks get modified in between fetches (the
>> session then has to build consistent-read copies of those blocks); this is
>> magnified if it is a small, frequently updated table. We can never detect
>> this from the database side unless we explicitly ask the developers (the
>> session just ends up waiting on 'SQL*Net message from client').
>>
>> - Short tables in which the clustering factor is poor (the calculation
>> finds subsequent ROWIDs pointing to alternating blocks); in this case
>> things get even uglier (a quick check for this follows this list).
>>
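>> For the short-table / clustering factor case above, a quick check (a sketch,
>> assuming the RANDOMLOAD2 table from the original test case) is to compare the
>> clustering factor against the block and row counts; a clustering factor close
>> to NUM_ROWS rather than BLOCKS means subsequent ROWIDs jump between blocks,
>> so every small fetch revisits more blocks:
>>
>> select i.index_name, i.clustering_factor, i.leaf_blocks, t.blocks, t.num_rows
>>   from user_indexes i
>>   join user_tables t on t.table_name = i.table_name
>>  where i.table_name = 'RANDOMLOAD2';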
>>
>> ORDER BY on the indexed column is also affected, since the sorting step is
>> avoided and the rows keep coming straight from the buffer cache rather than
>> the PGA... I was also looking at ORM solutions such as Hibernate, which
>> generate the queries; in these cases, if the application is heavily
>> dependent on such solutions, we have no control over the ORDER BY other
>> than increasing the fetchsize to a very high value, but this takes much
>> longer if the application uses FIRST_ROWS, and most environments have a
>> shared network between the database and application servers...
>>
>>
>> Looks like there is more to this with index range/skip scans... I have to
>> do many more tests to see whether any other aspects are also affected...
>>
>>
>> Thanks,
>>
>> Vishnu
>>
>> On Sun, Dec 29, 2019 at 2:11 AM Tanel Poder <tanel_at_tanelpoder.com> wrote:
>>
>>> Lower fetch size means less data is retrieved/sent back per *database
>>> call*. Buffer pins (and held latches) must be released at the end of
>>> every call to avoid an idle session holding these locks for too long. So
>>> if you only ask for 5 rows per fetch, you are much more likely to end up
>>> revisiting the same block to get the next 5 than if you fetch 500 rows
>>> every time.
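>>> A minimal way to see this from SQL*Plus (a sketch against the randomload2
>>> test case below; ARRAYSIZE plays the role of the fetch size here):
>>>
>>> set autotrace traceonly statistics
>>> set arraysize 5
>>> select * from randomload2 where mark4 = 100;
>>> -- note the "consistent gets" figure, then repeat with a large array size:
>>> set arraysize 500
>>> select * from randomload2 where mark4 = 100;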
>>>
>>> Also, you can try just adding an ORDER BY to the end of your test query
>>> (and make sure that the ORDER BY operation actually shows up in the plan);
>>> the LIO/CBC latch get difference between different array sizes should then
>>> mostly go away, as the ORDER BY step in the plan uses maximum internal
>>> batch sizes to populate the query result set in the sort area in PGA/TEMP,
>>> and you'd fetch the results after that from there without needing CBC
>>> latches or LIOs.
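>>> For example (a sketch against the same randomload2 test case; ordering by a
>>> non-indexed column such as mark1 should force a SORT ORDER BY step, which
>>> DBMS_XPLAN can confirm is in the plan):
>>>
>>> select * from randomload2 where mark4 = 100 order by mark1;
>>> select * from table(dbms_xplan.display_cursor(format => 'BASIC'));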
>>>
>>> Tanel
>>>
>>> On Wed, Dec 25, 2019 at 7:56 PM Vishnu Potukanuma <
>>> vishnupotukanuma_at_gmail.com> wrote:
>>>
>>>> Got it,
>>>>
>>>> Looks like it is the default/expected behaviour...
>>>> Since Oracle doesn't use asynchronous calls, which is evident from the
>>>> fact that it uses a single thread per process (the dedicated server
>>>> process model), the flow of control must shift at one point or another to
>>>> a different function to transmit the data to the client (which can take
>>>> more time and depends on various factors such as network delays), and then
>>>> return to the previous call, where it must touch or read the blocks again
>>>> to continue processing from where it left off. This is completely
>>>> dependent on the fetchsize: the lower the fetchsize, the more buffer gets
>>>> and latch gets...
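>>>> One way to watch that per-call pattern directly (a sketch, assuming SQL
>>>> trace is allowed in the test environment) is a 10046 trace; each FETCH line
>>>> in the raw trace file then shows r= (rows returned by that fetch) and cr=
>>>> (consistent gets done by that fetch):
>>>>
>>>> alter session set events '10046 trace name context forever, level 8';
>>>> select * from randomload2 where mark4 = 100;
>>>> alter session set events '10046 trace name context off';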
>>>>
>>>> It looks like this is not the case all the time, though...
>>>>
>>>> Thanks,
>>>> Vishnu
>>>>
>>>> On Wed, Dec 25, 2019 at 4:43 AM Vishnu Potukanuma <
>>>> vishnupotukanuma_at_gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> The situation is as follows:
>>>>> create table randomload(roll number, mark1 number, mark2 number, mark3
>>>>> number, mark4 number);
>>>>> load the table with random data;
>>>>> then create a different table so that the clustering factor on the mark4
>>>>> column will be close to the number of table blocks:
>>>>>
>>>>> create table randomload2 as select * from randomload order by mark4;
>>>>> create index idx on randomload2(mark4);
>>>>> exec dbms_stats.gather_table_stats('VISHNU','RANDOMLOAD2', cascade => TRUE);
>>>>>
>>>>> Now carefully consider any value of mark4 (the indexed column) that
>>>>> retrieves more than 500 rows (as an example); we will eventually find that
>>>>> even 100 rows causes high latching activity...
>>>>>
>>>>> select * from randomload2 where mark4 = 100;
>>>>> See the buffer gets for this SQL, and also carefully monitor the activity
>>>>> on each of the cache buffers chains (CBC) latches holding these data
>>>>> blocks.
>>>>>
>>>>> I used a query such as the following:
>>>>> select sum(gets) from v$latch_children where addr in
>>>>> ('000000006C9A4160','000000006CA29DA8','000000006CA37068','000000006CAFEF20','000000006CB52450','000000006CB52450','000000006CB52450','000000006CB91E28','000000006CB91EF0');
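>>>>> (For reference, one way to find which child latches cover the table's
>>>>> buffers is a sketch like the following; it assumes access to X$BH, so it
>>>>> normally has to run as SYS. X$BH.HLADDR is the address of the cache
>>>>> buffers chains child latch protecting each buffer.)
>>>>>
>>>>> select distinct bh.hladdr
>>>>>   from x$bh bh
>>>>>   join dba_objects o on o.data_object_id = bh.obj
>>>>>  where o.object_name = 'RANDOMLOAD2';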
>>>>>
>>>>> It is very interesting that the latch activity and buffer gets for a SQL
>>>>> statement depend on the array size or fetch size. Average latch gets and
>>>>> buffer gets will always be higher when the fetch size is small; even at
>>>>> 100, the database repeatedly accesses the same data blocks and latches...
>>>>>
>>>>> I tested with set fetchsize values of 20, 30, 60, 100 and 200.
>>>>>
>>>>> We see different buffer gets per execution each time, and the latch
>>>>> activity is very high... even though the number of blocks accessed is
>>>>> small (only 6 physical blocks in my case, since the average row length is
>>>>> small and the mark4 column has only 3200 distinct values).
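>>>>> A minimal sketch for comparing the runs (assuming the statement text stays
>>>>> identical across fetch sizes, so all runs share one cursor; check it between
>>>>> tests and diff the numbers):
>>>>>
>>>>> select sql_id, executions, buffer_gets,
>>>>>        round(buffer_gets / nullif(executions, 0)) gets_per_exec
>>>>>   from v$sql
>>>>>  where sql_text like 'select * from randomload2 where mark4%';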
>>>>>
>>>>> Can someone please tell me: is this expected, or am I missing anything?
>>>>>
>>>>> Thanks
>>>>> Vishnu
>>>>>
-- http://www.freelists.org/webpage/oracle-l
Received on Mon Dec 30 2019 - 02:06:12 CET