Re: Stats collection issue in 19C
Date: Tue, 8 Nov 2022 10:36:16 +0100
Message-ID: <dbc5458d-d581-9f77-f462-fd2d6c235512_at_bluewin.ch>
Hi,
it seems the quick execution is doing one partition, but the slow
execution is doing all partitions. The number of actual rows also
supports that.
The condition

  TBL$OR$IDX$PART$NUM ("USER1"."TAB1", 0, 4, 0, "ROWID") = :objn

is identifying one partition, if Morgan's Library is right: https://morganslibrary.org/reference/undocumented.html
Thus it is probably true that all partitions are done in the slow run.
What triggers it? I don't know either.
Maybe it helps to list all preferences: https://oracle-base.com/dba/script?category=monitoring&file=statistics_prefs.sql
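For a quick side-by-side comparison of the two databases you could also query them directly, e.g. (an untested sketch; owner/table names are placeholders):

  SELECT dbms_stats.get_prefs('GRANULARITY',           'USER1', 'TAB1') AS granularity,
         dbms_stats.get_prefs('INCREMENTAL',           'USER1', 'TAB1') AS incremental,
         dbms_stats.get_prefs('INCREMENTAL_STALENESS', 'USER1', 'TAB1') AS incremental_staleness,
         dbms_stats.get_prefs('METHOD_OPT',            'USER1', 'TAB1') AS method_opt,
         dbms_stats.get_prefs('DEGREE',                'USER1', 'TAB1') AS degree
  FROM   dual;

  -- only the preferences explicitly set at table level
  SELECT preference_name, preference_value
  FROM   dba_tab_stat_prefs
  WHERE  owner = 'USER1' AND table_name = 'TAB1';

Running this on both the slow and the fast database should show any preference that silently differs.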
There might be a trace to diagnose the decision. Did you open an SR?
You can experiment by fixing the histogram preferences, then starting a gather and breaking it once you have captured the recursive statement.
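For instance, from a test session (a sketch only; schema, table and partition names are placeholders, and you can interrupt the gather once the statement shows up):

  -- trace one gather run to capture the recursive statements it issues
  exec dbms_monitor.session_trace_enable(waits => TRUE, binds => TRUE);
  exec dbms_stats.gather_table_stats(ownname => 'USER1', tabname => 'TAB1', partname => 'DAY_20221104');
  exec dbms_monitor.session_trace_disable;

The recursive statement (and whether it carries the TBL$OR$IDX$PART$NUM predicate) should then be visible in the trace file.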
Thanks
Lothar
On 07.11.2022 at 16:09, Pap wrote:
> Thank you Nenad, Lothar, Lok.
> Apologies for the late response. If it were just because of some new
> histogram creation, it would have happened only once in a while; but in
> our case we are seeing the incremental stats gather running long (~1hr+)
> on every run in one database, while it is fast (<10 mins) in another
> database with the same stats preferences and data volume. However, we
> were able to capture the recursive queries executed behind the stats
> gather proc on both the quick and the slow database, and the difference
> comes down to one recursive sql I was able to get hold of. It's as below.
>
> Below are the recursive sql and its sql monitor from the database with
> the long execution.
> https://gist.github.com/oracle9999/e6ef3ef44fa9a7125ef38d93e33a3e5b
>
> Below are the recursive sql and its sql monitor from the database with
> the quick execution.
> https://gist.github.com/oracle9999/b13bd79b1f026eb486d8c75a7b08ac3c
>
> Some differences in the recursive sql text that gets executed behind
> the scenes for the quick execution vs the long execution are below:
>
> The quick execution has the from clause of the query as: FROM
> "USER1"."TAB1" t WHERE TBL$OR$IDX$PART$NUM
> ("USER1"."TAB1",0,4,0,"ROWID") = :objn
> The slow execution has the from clause of the query as: FROM
> "USER1"."TAB1" t
>
> The quick execution shows strings in the comment after the where clause
> like "SYN,NIL,NIL", "RWID, NDV_HLL,B67430", etc.
> The slow execution shows strings in the comment after the where clause
> like "NDV,NIL,NIL", "TOPN,NIL,NIL", etc.
>
> I believe the above differences in the sql text give some clue, however
> I am still not able to figure out what difference makes oracle execute
> such different sql texts in the two cases.
>
> Table preferences are as below. METHOD_OPT is purposely kept as
> "REPEAT" rather than "AUTO", so that any new histogram automatically
> introduced by the optimizer does not end up driving stats collection on
> these big tables. These tables are tens of TB in size and are
> partitioned. Only the DEGREE differs: it is kept at 8 on the slow
> database vs 4 on the fast database, just to finish the stats quicker
> there, as that run takes longer because of this issue.
>
> GRANULARITY AUTO
> INCREMENTAL TRUE
> INCREMENTAL_STALENESS USE_STALE_PERCENT,USE_LOCKED_STATS
> METHOD_OPT FOR ALL COLUMNS SIZE REPEAT
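> (For reference, these map onto dbms_stats.set_table_prefs calls; a
> sketch with placeholder owner/table names, e.g. for the slow database:)
>
>   begin
>     dbms_stats.set_table_prefs('USER1', 'TAB1', 'GRANULARITY', 'AUTO');
>     dbms_stats.set_table_prefs('USER1', 'TAB1', 'INCREMENTAL', 'TRUE');
>     dbms_stats.set_table_prefs('USER1', 'TAB1', 'INCREMENTAL_STALENESS',
>                                'USE_STALE_PERCENT,USE_LOCKED_STATS');
>     dbms_stats.set_table_prefs('USER1', 'TAB1', 'METHOD_OPT',
>                                'FOR ALL COLUMNS SIZE REPEAT');
>     dbms_stats.set_table_prefs('USER1', 'TAB1', 'DEGREE', '8');  -- 4 on the fast database
>   end;
>   /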
>
> Snippet of the FROM clause of the slow run:
> FROM "USER1"."TAB1" t
> WHERE TBL$OR$IDX$PART$NUM ("USER1"."TAB1",0,4,0,"ROWID") = :objn /*
> SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,STOPN,NIL,NIL,STOPN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,RWID,
> NDV_HLL,
> B67430,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1U*/
>
> Snippet of the FROM clause of the fast run:
>
> FROM "USER1"."TAB1" t /*
> NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,TOPN,NIL,NIL,TOPN,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,RWID,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1U*/
>
> On Tue, Oct 4, 2022 at 12:35 PM Nenad Noveljic
> <nenad.noveljic_at_gmail.com> wrote:
>
> Could you check if new histograms were created:
>
>     SELECT *
>     FROM   table(dbms_stats.diff_table_stats_in_history(
>              ownname      => 'USER',
>              tabname      => 'TABLE',
>              time1        => localtimestamp,
>              time2        => localtimestamp - to_dsinterval('5 00:00:15'),
>              pctthreshold => 0));
>
> New histograms could have triggered the recreation of the synopses.
>
> Best regards,
> Nenad
>
> Sent from my iPhone
>
>> On 04.10.2022 at 09:34, Lok P <loknath.73_at_gmail.com> wrote:
>>
>>
>> Hi Pap, I can't remember exactly the discussion (mostly from
>> Jonathan Lewis) that I read some time in the past, but you have to do
>> something like the below.
>>
>> Export your existing stats manually to a stats table, update the
>> global_stats flag to NO there, and then import the stats back into
>> the data dictionary.
>>
>> Or else you can pass the granularity parameter as 'PARTITION', so
>> that the underlying SQL taking the time, i.e. the global column
>> stats gather, won't be triggered. Your stats collection will then be
>> faster, and your partition stats will be accurate, but they won't
>> get rolled up to the global level automatically. That's an issue for
>> queries relying on global statistics.
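>> For example (a sketch; owner, table and partition names are
>> placeholders):
>>
>>   begin
>>     dbms_stats.gather_table_stats(
>>       ownname     => 'USER1',
>>       tabname     => 'TAB1',
>>       partname    => 'DAY_20221104',
>>       granularity => 'PARTITION');   -- partition stats only, no global roll-up
>>   end;
>>   /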
>>
>> Regarding your incremental stats, the global_stats flag will be YES
>> only. But, as you mentioned, it is still scanning the full table and
>> gathering column stats each time it runs. That may be because of a
>> change in the behaviour of the granularity parameter ALL between 11g
>> and 19c, so I would suggest you try AUTO.
>>
>>
>>
>>
>> On Tue, 4 Oct, 2022, 2:11 am Pap, <oracle.developer35_at_gmail.com>
>> wrote:
>>
>> Thank You Lothar.
>>
>> Something odd I am noticing. We have many partitioned tables, and we
>> are seeing this issue for only a handful of them. Another thing I
>> notice: for a table which has no table-level preferences set, and
>> where we are just passing a hard-coded partition_name without any
>> incremental setting, the GLOBAL_STATS column in dba_tables is
>> showing "YES". Ideally it should show GLOBAL_STATS as 'NO'. I am not
>> sure how that happened, or whether it is responsible for making the
>> stats be collected as global each time. How do we fix this?
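>> To see at which level the YES is coming from, you could check the
>> partition and table rows separately, e.g. (a sketch; names are
>> placeholders):
>>
>>   SELECT table_name, partition_name, global_stats, last_analyzed
>>   FROM   dba_tab_statistics
>>   WHERE  owner = 'USER1' AND table_name = 'TAB1'
>>   ORDER  BY partition_position NULLS FIRST;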
>>
>> Also, the synopsis you mentioned relates to incremental stats only.
>> For the partitioned table with INCREMENTAL set to TRUE, I see the
>> GRANULARITY set to 'ALL'; is that what makes it go for collecting
>> global column stats each time, and should we change it to AUTO? But
>> my worry is that just tweaking the GRANULARITY to AUTO will make
>> oracle go scanning/collecting synopses from scratch for all the
>> partitions again on the first stats run.
>>
>> Also, if I remember correctly, back in the 11g days we encountered
>> an issue in which GRANULARITY => AUTO was not collecting stats on
>> the subpartitions, so we set it to ALL and it worked perfectly fine
>> with that setup on version 11.2. I am not sure whether something
>> changed in 19c in that respect.
>>
>> On Tue, Oct 4, 2022 at 1:59 AM Lothar Flatz
>> <l.flatz_at_bluewin.ch> wrote:
>>
>> Hi,
>>
>> I am making an educated guess right now. In order to do incremental
>> stats, an auxiliary table called a synopsis must be generated.
>> Basically the synopsis is a sample of the NDV of a partition.
>> When you set a table to incremental, the synopsis must be built once
>> for all existing partitions. So this first run still scans all
>> partitions; only the runs from there on will be incremental.
>> The synopsis format from version 11g differs from that in 19c IMHO.
>> Thus, it might be that after the upgrade a new, full-size synopsis
>> must be generated. That could look similar to global stats.
>> If that is correct, things should switch back to normal
>> by themselves.
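>> One thing worth checking (a guess on my side; the preference exists
>> from 12.2 onwards, and the owner/table names are placeholders) is
>> which synopsis algorithm each database is set to use:
>>
>>   SELECT dbms_stats.get_prefs('APPROXIMATE_NDV_ALGORITHM', 'USER1', 'TAB1')
>>   FROM   dual;
>>
>> 'HYPERLOGLOG' means pre-upgrade adaptive-sampling synopses are not
>> reused and have to be recreated, while 'REPEAT OR HYPERLOGLOG' keeps
>> using the old format for tables that already have it.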
>>
>> Thanks
>>
>> Lothar
>>
>> > On 03.10.2022 at 22:14, Pap wrote:
>> > It's a 19.15 version oracle customer database. We are seeing that
>> > stats gathering is running a lot longer than it used to in the
>> > same database when it was on version 11g. And it seems to be
>> > happening only for partitioned tables.
>> >
>> > When trying to look into the underlying sql while the stats
>> > gather was running, I am seeing one type of sql. It seems to be
>> > gathering the column-level stats for the whole table even though
>> > we have passed the exact partition_name as a parameter to the
>> > stats gather block, as below, and it happens irrespective of
>> > whether the table has INCREMENTAL set to TRUE or not.
>> > So my question is: even if INCREMENTAL is set to TRUE for TAB2,
>> > it still should not collect column-level stats at the whole-table
>> > level each time, so why is it doing so? And in the case of TAB1,
>> > where we have no preference set at the table level, Oracle should
>> > only gather stats for that partition, so why is it gathering
>> > global-level column stats each time?
>> >
>> > exec Dbms_stats.gather_table_stats(ownname=>'SCHEMA_APP',tabname=>'TAB1',partname=>'DAY_20221104');
>> > exec Dbms_stats.gather_table_stats(ownname=>'SCHEMA_APP',tabname=>'TAB2',partname=>'TAB2_11182022_P');
>> >
>> > Below are the sql monitoring reports for two of the underlying
>> > sqls for the above two calls. Here TAB2 is defined with
>> > INCREMENTAL set to TRUE in dba_tab_stat_prefs, and TAB1 has no
>> > preference set at the table level.
>> >
>> >
>> > https://gist.github.com/oracle9999/c2268195a01a11479e18fb488c4bfd69
>> >
>> > Regards
>> > Pap
>>
--
http://www.freelists.org/webpage/oracle-l

Received on Tue Nov 08 2022 - 10:36:16 CET