analyze stats question (Oracle-L)
Version 8.0.5.2.1
I'm getting some interesting results when analyzing tables with different estimate sizes and was hoping someone could explain why I'm seeing them.
I'm analyzing one partition of a table; that partition contains nearly 800,000 rows. I've estimated statistics at 5, 15, 25, and 45 percent, and with a 10,000-row sample, and exactly the same results show up in all of the 'analyze' columns of dba_tab_partitions and dba_part_col_statistics. When I compute the statistics, the only thing that changes is num_distinct in dba_part_col_statistics for two columns. The difference is about 110,000 distinct values, which I would expect to be enough to change the results across the various estimate sizes. Whenever I've looked at this closely before, I've never seen compute and estimate come up with exactly the same num_rows on a large table, but that is what is happening here.
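For context, here is a minimal sketch of the kind of statements involved; the owner, table, and partition names (SCOTT, MYTAB, P_CURRENT) are placeholders, not the real objects:

```sql
-- Gather estimated statistics on one partition at various sample sizes
-- (SCOTT.MYTAB and partition P_CURRENT are placeholder names)
ANALYZE TABLE scott.mytab PARTITION (p_current)
  ESTIMATE STATISTICS SAMPLE 5 PERCENT;

ANALYZE TABLE scott.mytab PARTITION (p_current)
  ESTIMATE STATISTICS SAMPLE 45 PERCENT;

ANALYZE TABLE scott.mytab PARTITION (p_current)
  ESTIMATE STATISTICS SAMPLE 10000 ROWS;

-- Full computation for comparison
ANALYZE TABLE scott.mytab PARTITION (p_current)
  COMPUTE STATISTICS;

-- Partition-level results
SELECT num_rows, blocks, avg_row_len
  FROM dba_tab_partitions
 WHERE table_owner    = 'SCOTT'
   AND table_name     = 'MYTAB'
   AND partition_name = 'P_CURRENT';

-- Column-level results for the same partition
SELECT column_name, num_distinct, num_nulls, density
  FROM dba_part_col_statistics
 WHERE owner          = 'SCOTT'
   AND table_name     = 'MYTAB'
   AND partition_name = 'P_CURRENT';
```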
Wouldn't you think that, with a difference of 110,000 distinct values in an 800,000-row partition, estimate 5% and estimate 45% would produce different values for the column stats, and even for num_rows? The estimate runs return in about 6 seconds, while the compute takes about 5 minutes. It seems as though the estimate is using cached values regardless of the estimate size, but there are too many people using the database for me to shut it down and flush the buffers.
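One way to sanity-check the caching theory, as a sketch using the same placeholder names: sample_size and last_analyzed in dba_tab_partitions are refreshed by each ANALYZE, so if sample_size grows with the requested percentage while num_rows stays put, each run really did take a fresh, larger sample and the estimates are simply converging to the same figures.

```sql
-- Run after each ESTIMATE pass to confirm a fresh sample was taken
-- (SCOTT.MYTAB / P_CURRENT are placeholder names)
SELECT sample_size,    -- rows actually sampled by the last ANALYZE
       num_rows,       -- row count scaled up from that sample
       last_analyzed   -- when the statistics were last refreshed
  FROM dba_tab_partitions
 WHERE table_owner    = 'SCOTT'
   AND table_name     = 'MYTAB'
   AND partition_name = 'P_CURRENT';
```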
Any thoughts?