statistics update [message #641366]
Thu, 13 August 2015 13:34
sant_new1
Messages: 46 Registered: June 2014
Member
Hi friends,
We are troubleshooting multiple performance problems and suspect stale statistics could be the base problem. We found some tables that have not been analyzed for a few months to a year. We run dbms_stats.gather_database_stats on a monthly basis, but some tables are skipped by it, so we wonder whether those tables simply did not have enough data change to qualify for fresh statistics. Is there a way to find out which tables have had little data change since they were last analyzed and were therefore skipped by the statistics gathering?
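For reference, this is roughly how we spotted the old statistics and what our monthly job does (a simplified sketch; the schema name is a placeholder, the STALE_STATS column may depend on the Oracle version, and the OPTIONS value shown for the job is only a guess at why tables get skipped):

-- Tables whose statistics are old or flagged stale by Oracle itself
SELECT owner, table_name, last_analyzed, stale_stats
FROM   dba_tab_statistics
WHERE  owner = 'APP_OWNER'   -- placeholder schema
AND    (last_analyzed IS NULL OR last_analyzed < ADD_MONTHS(SYSDATE, -3))
ORDER  BY last_analyzed NULLS FIRST;

-- Approximate DML volume since the last analyze
-- (flush the monitoring info first so the counts are current)
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

SELECT table_owner, table_name, inserts, updates, deletes, timestamp
FROM   dba_tab_modifications
WHERE  table_owner = 'APP_OWNER';   -- placeholder schema

-- The monthly job is essentially this; with GATHER AUTO (or GATHER STALE),
-- tables below the staleness threshold are skipped
EXEC DBMS_STATS.GATHER_DATABASE_STATS(options => 'GATHER AUTO');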
Thank you so much
Re: statistics update [message #641368 is a reply to message #641366]
Thu, 13 August 2015 13:46
BlackSwan
Messages: 26766 Registered: January 2009 Location: SoCal
Senior Member
>We are troubleshooting multiple performance problems and suspect stale statistics could be the base problem
Why?
What proof do you have that supports this speculation?
Enumerate the troubleshooting steps that have been completed.
How long have these performance problems existed?
Have they been getting worse?
When was the last time the application performed OK?
What changed since then?
[Updated on: Thu, 13 August 2015 13:51]
Re: statistics update [message #641427 is a reply to message #641425]
Fri, 14 August 2015 12:03
Kevin Meade
Messages: 2103 Registered: December 1999 Location: Connecticut USA
Senior Member
First let us have some perspective on statistics.
The purpose of collecting statistics is to change query plans.
Once a production system is stable (queries are meeting SLA) most DBAs and Developers do NOT want query plans to change.
What does the above make you think?
The goal of statistics collection is to make every query on a system run with the fastest possible query plan.
The goal of DBAs and Developers (the intelligent ones anyway), is to build a system where all queries run within their SLA and are thus stable and predictable.
Are the two above goals the same?
So where am I going with this? Well... nowhere in particular. I just want everyone to understand that in order to work with statistics, your first priority is not to learn the technical details behind them, but rather to understand the difference between the THEORY of statistics and the PRACTICE of statistics.
You are worried about stale statistics, but though it is hard to measure, statistics are only stale when they cause query plans to be generated that are much slower than expected. Year-old statistics are not stale if they are generating acceptable query plans. Day-old statistics are stale if they are generating unacceptable query plans.
Getting on to your specific tuning need, here are my questions:
1. Do you have a query that you know is running slow or otherwise want to tune?
2. What is the good runtime of this query (the runtime you liked before)?
3. What is the slow runtime of this query that you are trying to improve?
Assuming you have a query you are trying to tune, do you know how to use the GATHER_PLAN_STATISTICS hint? If not, read up about it, then use it to produce a query plan which will show us Estimated and Actual cardinalities of the plan steps. Then post the full QEP here (minimally that means PLAN_OUTPUT and PREDICATES).
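The pattern is roughly the following (a minimal sketch; the table and bind variable are placeholders, and check which DBMS_XPLAN format options your version accepts):

SELECT /*+ GATHER_PLAN_STATISTICS */ *
FROM   some_table                   -- placeholder table
WHERE  some_column = :some_value;   -- placeholder predicate

-- In the same session, pull the plan with estimated (E-Rows) and
-- actual (A-Rows) cardinalities plus the predicate information
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));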
After you do this, we can talk about the query plan and what might be its problem. It could indeed be stats. I will show you how to figure that out once you post the plan.
Kevin
Re: statistics update [message #641428 is a reply to message #641425]
Fri, 14 August 2015 12:07
BlackSwan
Messages: 26766 Registered: January 2009 Location: SoCal
Senior Member
> db file sequential read 97528 0.61 320.24
The above is where you should focus your search and resolution efforts.
Which objects are involved in all these db file sequential reads? I suspect they are index reads.
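One way to check (a sketch; it assumes recent Active Session History samples are still available, which requires the Diagnostics Pack license):

-- Which segments the 'db file sequential read' waits were hitting
SELECT o.owner, o.object_name, o.object_type, COUNT(*) AS samples
FROM   v$active_session_history ash
JOIN   dba_objects o ON o.object_id = ash.current_obj#
WHERE  ash.event = 'db file sequential read'
GROUP  BY o.owner, o.object_name, o.object_type
ORDER  BY samples DESC;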