RE: Any option in 19C for GTT to improve performance

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Fri, 2 Jun 2023 09:02:35 -0400
Message-ID: <1e4601d99552$809251b0$81b6f510$_at_rsiz.com>



To me that makes sense. I would also advocate measuring any extra time it takes to populate the GTT in (fid, eid, etyp) order, netted against the reduction in the run time of the queries.

The reduction in i/o required by index accesses when the clustering factor is “perfect” can often be surprising (which is WHY the CBO uses the clustering factor as a tie breaker.)
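Roughly what I have in mind, as a minimal sketch (the column list and the source of the rows are placeholders, not your actual code):

  -- sketch only: F_GTT's full column list and SOURCE_DATA are assumptions
  INSERT INTO f_gtt (fid, eid, etyp /*, other columns */)
  SELECT fid, eid, etyp /*, other columns */
  FROM   source_data
  ORDER  BY fid, eid, etyp;   -- rows land in index key order, so an index on
                              -- (fid, eid, etyp) gets a near-perfect clustering factor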

mwf  

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Pap
Sent: Friday, June 02, 2023 1:44 AM
To: Andy Sayer
Cc: Oracle L
Subject: Re: Any option in 19C for GTT to improve performance  

Thank You Andy.
For the temp table F_GTT we already have an index on columns (eid, etyp). However, the first query only has a filter on FID, so correct me here: there seems to be no option other than creating a new index on column FID. In the case of the second query, all of the columns (eid, etyp, fid) are used as joins/filters. So I was thinking of adding a new index on (fid, eid, etyp) to help both queries, as sketched below. Hope this approach will help us.
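Something like the below (the index name is made up; F_GTT and the columns are as discussed above):

  -- sketch only: index name is hypothetical
  CREATE INDEX f_gtt_fid_eid_etyp_ix ON f_gtt (fid, eid, etyp);

With FID as the leading column, both the first query (filter on FID only) and the second query (joins/filters on all three columns) should be able to use the same index. Note that an index on a GTT can only be created while no session has rows in the table.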

And out of curiosity, regarding the amount of physical reads against the GTT in these queries, i.e. 22GB to retrieve 27K rows: is it correct to assume these reads can't be optimized further by other techniques such as caching the table? Also, the SQL monitoring plan attributes the majority of the time to the full scan of F_GTT, and that time shows up as "CPU time" only. Does this mean the bottleneck is CPU rather than read/write IO speed, so that improving IO further (by caching etc.) may not help, and we should instead concentrate on how to reduce the CPU time? Is this understanding correct?

As per the current design it populates all the data in the GTT in one shot, then traverses it one value at a time, aggregating for a given set of input values and presenting the result. The input values of FID come from a third-party tool which does the aggregation, formats it and presents it. Yes, I can imagine it's like a PL/SQL loop in which the FIDs are passed one by one. But we don't have much insight into that third-party tool, so we are currently investigating how easily we can make those queries submit in batches or in one shot rather than in a loop fashion.
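If the tool can be made to hand over a batch of FIDs at once, one hypothetical shape for a single execution per batch would be a collection bind, for example (the type name, bind name and the aggregate shown are made up, not our actual query):

  -- sketch only: assumes a SQL collection type and a batch of FIDs bound from the caller
  CREATE OR REPLACE TYPE fid_tab AS TABLE OF NUMBER;
  /
  SELECT g.fid, g.eid, g.etyp, COUNT(*) AS cnt      -- the aggregation here is illustrative
  FROM   f_gtt g
  WHERE  g.fid IN (SELECT column_value FROM TABLE(:fid_batch))
  GROUP  BY g.fid, g.eid, g.etyp;

That way F_GTT is scanned (or probed via the new index) once per batch rather than once per FID.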

Regards

Pap  

On Fri, Jun 2, 2023 at 6:07 AM Andy Sayer <andysayer_at_gmail.com> wrote:

Hi Pap,  

The monitor report tells us that you're having to do 22GB of physical reads against the table to retrieve 27K rows, which then get sorted in 11MB of memory. I think that's definitely a good use case for an index.

Why is so much data going into the GTT if you're not reading all of it? Or is this a small chunk of some big PL/SQL loop?  

Thanks,

Andy  

On Thu, 1 Jun 2023 at 14:30, Pap <oracle.developer35_at_gmail.com> wrote:

Hello Experts,  

We are using Oracle version 19.15.0.0.0, and we have the two queries below against a global temporary table. The key problem is that, as part of a process, these queries get executed thousands of times, which drives up the overall elapsed time of the job. We have a discussion going on about minimizing the number of executions by passing the input bind values/literals into the query in batches, or by storing them in a global temporary table and thus hitting the query once. However, as that needs a bigger design change and will take time, we wanted to understand whether any other quick change can be made to speed up each individual execution.
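For reference, a minimal sketch of the "store them in a GTT and hit the query once" idea (the driving table, its name and the aggregate are assumptions, not our actual code):

  -- sketch only: names and the aggregation are hypothetical
  CREATE GLOBAL TEMPORARY TABLE fid_input_gtt (fid NUMBER)
    ON COMMIT PRESERVE ROWS;

  INSERT INTO fid_input_gtt (fid) VALUES (:fid);    -- loaded once per input value, or via an array insert

  SELECT i.fid, g.eid, g.etyp, COUNT(*) AS cnt
  FROM   fid_input_gtt i
         JOIN f_gtt g ON g.fid = i.fid
  GROUP  BY i.fid, g.eid, g.etyp;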

The SQL monitor below shows that the majority of the time is spent scanning table F_GTT in full. We also saw that the filter column FID is not a selective one, so even a new index on this column is not going to help the query. Also, the input value of FID comes in as a literal and is a different value for each execution. As we have recently moved to 19c, are there any other features or options (caching or anything else) available for global temporary tables which we could utilize in this scenario to get some quick improvement per execution, and so make the overall execution of the job faster?

Below are the SQL queries and the real-time SQL monitor reports associated with them. They show that in both cases the majority of the time has been spent scanning table F_GTT in full, and that the time is ON CPU only.

https://gist.github.com/oracle9999/b991022f72cd9fdd5776f797a187f7c1  

Regards

Pap

--
http://www.freelists.org/webpage/oracle-l
Received on Fri Jun 02 2023 - 15:02:35 CEST
