Re: Stats gather for big tables
Date: Tue, 30 Mar 2021 21:14:33 +0530
Message-ID: <CAEjw_fiZR-zMLD6ApoDF0ShQOEpiXtzENoA7XAGOeXV-Bh7ksg_at_mail.gmail.com>
Thanks very much, Jonathan. Running a manual stats gather after adding the columns, keeping all the incremental preferences but with method_opt => 'for columns {new column name} size 1', sounds like a brilliant idea. As stated in the previous post, though, since Oracle stores rows (not columns) in data blocks, it may still end up doing the same amount of work as far as the full scan goes; but, as you mention, the synopsis UPDATE part may give us a benefit. If Oracle truly restricts itself to building synopses for just the newly added columns, and reuses the existing synopses in a future global stats gather rather than gathering synopses for all the columns again, that could save us a lot of time and resources. I will try it on a big table.
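For reference, a minimal sketch of what I plan to try (BIG_TAB and NEW_COL are placeholder names, and the combined 'FOR ALL COLUMNS ... FOR COLUMNS ...' method_opt string is my assumption of how to keep SIZE AUTO for the existing columns while pinning the new column to SIZE 1):

-- confirm the table is already set up for incremental stats
select dbms_stats.get_prefs('INCREMENTAL', user, 'BIG_TAB') from dual;

begin
  -- stop the auto job from building a histogram on the new column
  dbms_stats.set_table_prefs(
    ownname => user,
    tabname => 'BIG_TAB',
    pname   => 'METHOD_OPT',
    pvalue  => 'FOR ALL COLUMNS SIZE AUTO FOR COLUMNS NEW_COL SIZE 1');

  -- explicit gather restricted to just the new column, run before the
  -- next auto gather picks the table up
  dbms_stats.gather_table_stats(
    ownname     => user,
    tabname     => 'BIG_TAB',
    method_opt  => 'FOR COLUMNS NEW_COL SIZE 1',
    granularity => 'AUTO');
end;
/

If the synopsis maintenance really does limit itself to the new column, the elapsed time should then be dominated by the single full scan rather than by rebuilding synopses for every column of every partition.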
Regards
Pap
On Tue, Mar 30, 2021 at 5:15 PM Jonathan Lewis <jlewisoracle_at_gmail.com> wrote:
>
> I would test something like the following:
>
> After adding the column, set a table preference for the table's method_opt
> that includes the phrase "for columns {new column name} size 1" so that
> Oracle doesn't (at least initially) gather a histogram.
> Then - before the next auto gather - issue an explicit gather_table_stats
> restricting the stats collection to just that one column.
>
> The entire table will be scanned, but the workload and update MIGHT restrict
> itself to synopsis gathering on just the one column, and that might be
> something you can fit into the time-frame you need to avoid a massive
> collection across the whole table.
>
> I can't give you a detailed strategy - there are so many differences in
> what goes on in the different versions in different circumstances that it's
> always a question of coming up with an idea, thinking of possible
> drawbacks, then modelling (usually with extended tracing enabled).
>
> Regards
> Jonathan Lewis
>
>
> P.S. If you are adding columns on a fairly regular basis, the way you
> populate them may make a big difference to how efficiently you can gather
> stats, and how long it will take to get to a steady state. And if you have
> added several columns there's a fair chance that you are introducing a huge
> number of chained or migrated rows into your data and need to do something
> to clean up a very big mess.
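
On the chained/migrated-row concern in the P.S.: to get a rough idea of how bad it is after the column additions, I may run something like the following, probably on a test copy first (a LIST CHAINED ROWS pass is itself a full scan, so not something to fire off casually on a 20-30 TB table). BIG_TAB is a placeholder name and CHAINED_ROWS comes from the standard utlchain.sql script:

-- create the CHAINED_ROWS target table
@?/rdbms/admin/utlchain.sql

-- record the chained/migrated rows of the (hypothetical) table
analyze table big_tab list chained rows into chained_rows;

-- how many rows are affected?
select count(*) from chained_rows where table_name = 'BIG_TAB';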
>
> On Mon, 29 Mar 2021 at 19:56, Pap <oracle.developer35_at_gmail.com> wrote:
>
>> We are using the incremental stats collection approach for gathering
>> statistics on some big partitioned tables (~20-30 TB in size). We just have
>> to pass the table name to the stats-gather block and the rest is taken care
>> of by Oracle using the synopsis information. We have now run into a
>> situation, multiple times, in which the team ends up adding new columns and
>> the stats collection job starts from scratch, running for days and then
>> impacting other jobs. So we want to understand whether there is a better
>> way of handling this situation. Can we somehow restrict Oracle from
>> scanning all the data again and creating synopses from scratch, and instead
>> have it reuse the existing synopses and scan only the newly added column
>> data? (Oracle version - 11.2.0.4)
>>
>> Regards
>> Pap
>>
>
--
http://www.freelists.org/webpage/oracle-l

Received on Tue Mar 30 2021 - 17:44:33 CEST