Performance thumb-rules [message #200832]
Wed, 01 November 2006 08:58
panda (Junior Member)
Hello,
I have collected some statistics from V$SYSSTAT, now what?
Are there any deterministic rules that can be checked?
For example:
1) active txn count during cleanout should be less than 80
2) db block gets + consistent gets should be smaller than OS Page faults * 60.
The data I have collected (all from V$SYSSTAT) is the following; a sample query is sketched after the list:
CPU used by this session
active txn count during cleanout
consistent gets
db block gets (rate)
enqueue waits (absolute)
execute count (rate)
index fast full scans (direct read)
index fast full scans (full)
logons cumulative (absolute)
OS Page faults (absolute)
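A minimal query for sampling these counters might look something like this (just a sketch: the values in V$SYSSTAT are cumulative since instance startup, so I would sample twice and take the difference, and the exact statistic names can vary by version and platform):

-- sample the raw cumulative counters for the statistics listed above
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('CPU used by this session',
                'active txn count during cleanout',
                'consistent gets',
                'db block gets',
                'enqueue waits',
                'execute count',
                'index fast full scans (direct read)',
                'index fast full scans (full)',
                'logons cumulative',
                'OS Page faults')
 ORDER BY name;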
Is it possible to have such guidelines?
Thanks.
Re: Performance thumb-rules [message #201540 is a reply to message #201489]
Sun, 05 November 2006 14:10
panda (Junior Member)
Hi kimant,
Thanks for getting involved in my topic.
Quote: | Have You asked Yourself WHY You are looking on the internet for rules-of-thumb?
|
Actually, the question WHY is a bit tricky here. I am a performance test expert; I focus mainly on methodologies and simulation, not on troubleshooting.
However, I wanted to find out whether I could (generally speaking) apply a set of thumb-rules to find PASS/FAIL situations on the database.
I think that ratio-based analysis is what I am looking for, right?
The next step is to build a program which will automatically check these thumb-rules.
Is this possible? Where can I find such rules?
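To make it concrete, the kind of PASS/FAIL check I have in mind would look something like this (just a sketch: the classic buffer cache hit ratio, with an arbitrary 90% threshold as a placeholder, not a recommendation):

-- ratio-based check sketch: buffer cache hit ratio from V$SYSSTAT
SELECT ROUND(100 * (1 - phy.value / NULLIF(db.value + con.value, 0)), 2) AS hit_ratio_pct,
       CASE
         WHEN phy.value / NULLIF(db.value + con.value, 0) <= 0.10 THEN 'PASS'
         ELSE 'CHECK'
       END AS verdict
  FROM (SELECT value FROM v$sysstat WHERE name = 'physical reads')  phy,
       (SELECT value FROM v$sysstat WHERE name = 'db block gets')   db,
       (SELECT value FROM v$sysstat WHERE name = 'consistent gets') con;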
Thanks
Re: Performance thumb-rules [message #201614 is a reply to message #201540]
Mon, 06 November 2006 01:52
kimant (Senior Member)
What does a performance test expert do/mean?
The problem with Your approach is that it is extremely easy to show why any rule of thumb fails (with real world examples).
Check this thread: http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:8764517459743
It only covers SQL tuning, but that is also an important part of tuning.
Tuning using ROT is all about the details.
Take the examples below.
An OLTP system is typically a lot different from a data warehouse.
OLTP should have many short-running SQL statements. Users are impatient.
A data warehouse can easily have SQL statements running for hours.
And in general batch processes can too.
In many places, the developers 'give up' writing the code in single statements, and fall back on procedural PL/SQL/Java/etc. code.
This typically results in 99.99% cached buffer gets, so Your ratio of cached to total gets is very nice. But the code is not.
Statements that run only "once in a while" can experience a high number of uncached gets. This is because nobody else has asked for the data, so Oracle decided to cache something else.
Does this mean that the statement is bad?
NO. It means that it was more optimal to cache something else.
And I could go on. You will NEVER be able to write such code Yourself without having to explain and apologize so many times that it will hurt a lot.
What You could use are tools like Enterprise Manager, for instance. It has stuff like ADDM etc. It will help You, it is a good start, but it is far from perfect, believe me. And Oracle has put MANY hours of good thought and work into it.
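(If EM is not an option, You can still get a plain ADDM report from SQL*Plus - assuming 10g or newer, where the script ships with the database, and keeping in mind that ADDM still needs the Diagnostics Pack license:

-- run as a DBA in SQL*Plus; '?' expands to ORACLE_HOME
@?/rdbms/admin/addmrpt.sql

It prompts for the snapshot range and a report name and writes a text report.)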
Have fun!
Kim Anthonisen
Re: Performance thumb-rules [message #201766 is a reply to message #201614]
Mon, 06 November 2006 11:24
panda (Junior Member)
kimant wrote on Mon, 06 November 2006 01:52 | What does a performance test expert do/mean?
|
Basically, I come to a working system, analyze its activity, simulate the leading business processes and time them. During the simulation I collect performance data from different parts of the system (e.g. network, application servers, web servers, databases). My objective is to point out expected performance problems (by timing end-to-end transactions) and hopefully to pinpoint each problem as closely as I can (for example, the network segment between web server and application server).
Right now, I can say that my bottleneck-identification knowledge can almost be represented as a set of rules that I check (of course everything is done in the context of the system's purpose and response times). I don't have DB tuning knowledge. I want to build a tool that, much like I do, almost automatically checks for abnormalities and deduces the system's bottleneck. Naturally it is not an easy task, but I also use a non-trivial algorithm/technology (it's not a list of "if -> then"s): I can represent each rule in the context of the system's purpose (e.g. if Rule1 AND System=OLTP then OK, else ...).
Not all, but at least some of the indicators are already checked almost automatically by me (check that memory is OK, CPU is not too high, no extreme disk activity, and then, for example, if CPU=high and Disk=high and Memory=low then probably "swapping", etc.).
Quote: | The problem with Your approach is that it is extremely easy to show why any rule of thumb fails (with real world examples).
Check this thread: http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:8764517459743
It only covers SQL tuning, but that is also an important part of tuning.
Tuning using ROT is all about the details.
Take the examples below.
An OLTP system is typically a lot different from a data warehouse.
OLTP should have many short-running SQL statements. Users are impatient.
A data warehouse can easily have SQL statements running for hours.
And in general batch processes can too.
|
If there is no response-time problem, there is no issue as far as I am concerned.
Quote: | In many places, the developers 'give up' writing the code in single statements, and fall back on procedural PL/SQL/Java/etc. code.
This typically results in 99.99% cached buffer gets, so Your ratio of cached to total gets is very nice. But the code is not.
|
That's interesting...
But if my rule set is comprehensive enough, I think it can reveal a high "procedural usage" to "single statements" ratio.
It is not an easy task to come up with these rules, I know...
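For instance, one crude indicator I could try (purely as an illustration, and no doubt open to the same objections) is the number of executions per user call, which should grow when most of the work happens inside procedural code rather than in the client's own statements:

-- sketch: executions per client round trip; a high value *may* hint at
-- row-by-row processing inside PL/SQL rather than set-based SQL
SELECT ROUND(ex.value / NULLIF(uc.value, 0), 1) AS executes_per_user_call
  FROM (SELECT value FROM v$sysstat WHERE name = 'execute count') ex,
       (SELECT value FROM v$sysstat WHERE name = 'user calls')    uc;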
Quote: | Statements that run only "once in a while" can experience a high number of uncached gets. This is because nobody else has asked for the data, so Oracle decided to cache something else.
|
I won't simulate these statements in the first place...
Quote: | Does this mean that the statement is bad?
NO. It means that it was more optimal to cache something else.
And I could go on. You will NEVER be able to write such code Yourself without having to explain and apologize so many times that it will hurt a lot.
|
It's intended as a general "hint"/"recommendation" or even a "you'd better check this, just in case..." system.
Quote: | What You could use are tools like Enterprise Manager, for instance. It has stuff like ADDM etc. It will help You, it is a good start, but it is far from perfect, believe me. And Oracle has put MANY hours of good thought and work into it.
|
It costs money...
Re: Performance thumb-rules [message #201867 is a reply to message #201766]
Tue, 07 November 2006 02:44
kimant (Senior Member)
So You are saying that You come in when there is a performance problem, and then try to find it using ROT's?
Or do You tell them that "yes, they really do have an issue"?
I am sorry to say, but Your approach is .... not the best.
In Oracle, we can do much better by:
*) monitoring active sessions (Enterprise Manager, TOAD, etc.)
*) tracing the sessions (this records in a file exactly what happened, what the session waited for, etc.)
*) using tools like Enterprise Manager to diagnose issues
*) etc., etc.
From a database perspective, tracing is the true/best tool for diagnosing issues. You get to see WHY it took that long to complete and WHAT it waited for. That can, for example, be reading data or sending it over the network to the client. Here You can see network issues (between the database and the client), disk issues, configuration issues, concurrency issues, etc.
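As an example, switching on extended SQL trace for one session could look like this (10g syntax using DBMS_MONITOR; the sid/serial# values are placeholders You would take from V$SESSION):

-- enable trace for one session, including wait events (placeholder sid/serial#)
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 45678, waits => TRUE, binds => FALSE);

-- ... let the user run the slow business process, then switch it off again
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 45678);

Afterwards You format the raw trace file from the udump directory with tkprof, e.g. "tkprof mydb_ora_1234.trc report.txt sys=no", and read where the time actually went.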
I am a little surprised that You hope You can diagnose database issues without having knowledge of the database. Sorry - strike out "a little" - I am stunned!
Take a look at this: http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:8764517459743
Maybe someone else can give You some ROT's, but not me. I do not believe in them, and I do not use them. The ROT's that I know of were used back in Oracle 7. Much has changed since then, and we (the Oracle community) have come to realize that using ROT's was not very beneficial. They failed to detect many issues, and they detected issues that were not there. There was simply too much guessing involved.
Br
Kim Anthonisen
PS:
If a tool costs money... what about Your TIME???
You could be out there working, solving issues, playing with Your friends/kids (if You have any).
And tracing, btw, is FREE - it comes with the database!