Oracle DB Performance Tests (Oracle 11g) [message #600377]
Tue, 05 November 2013 12:21 |
ind9
Messages: 65 Registered: January 2009
Member
Hi All,
I come from an Oracle PL/SQL database development background and now have to perform performance tests.
What skills are needed to perform Oracle DB performance tests in order to identify and analyze the hotspots? I must be able to run the tests and pull out the problematic statements in the project.
Any suggestions are welcome and would be very helpful to me.
Please help.
Thanks in advance!
Re: Oracle DB Performance Tests [message #600386 is a reply to message #600378]
Tue, 05 November 2013 22:03 |
rleishman
Messages: 3728 Registered: October 2005 Location: Melbourne, Australia
Senior Member
Here are some useful Oracle features - they should probably all figure into your plan:
- Real Application Testing - Database Replay: Capture a production workload and replay it on a test database
- Real Application Testing - SQL Performance Analyzer: Test a workload pre- and post-upgrade to see which SQLs change execution plan
- ADDM - Can be used to help find resource-intensive SQLs
- SQL Tuning Advisor - Can help identify and deal with some SQLs with inefficient execution plans (see the sketch after this list)
- SQL Access Advisor - Can recommend indexes and materialized views that would benefit a workload of DML and DDL statements
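As a taste of how these are driven, here is a minimal SQL Tuning Advisor run through the DBMS_SQLTUNE package (a sketch: the SQL_ID and task name below are hypothetical placeholders, and you need the ADVISOR privilege plus a Tuning Pack license):

-- A minimal SQL Tuning Advisor session
DECLARE
  l_task VARCHAR2(128);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_id     => 'abcd1234efgh5',   -- hypothetical SQL_ID, picked from V$SQL
              time_limit => 600,               -- seconds the advisor may run
              task_name  => 'demo_tuning_task');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/
-- Read back the advisor's findings and recommendations
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('demo_tuning_task') FROM dual;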
Assuming you don't have any SQL Tuning training or experience, this is as good a place to start as any.
In reality, I think that these tools work best on well-written applications, whereas most performance tuning problems occur on poorly written applications. If that is the case, there really are no short-cuts; you just need to learn how to use Explain Plan and TKPROF, then start tuning SQL by SQL, learning how indexes work, learning how joins work, learning how the optimizer works, and learning how to avoid traps. There are plenty of resources in the sticky post at the top of the Performance Tuning forum that can *help* you learn, but like everything they are best combined with hands-on experience.
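If you are starting from zero with those two, the basic loop looks something like this (a sketch; the table and bind are hypothetical):

-- Generate and display an execution plan without running the statement
EXPLAIN PLAN FOR
  SELECT * FROM orders WHERE customer_id = :b1;   -- hypothetical query
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- For TKPROF: trace the session, run the workload, then format the raw trace
-- file at the OS prompt, e.g.  tkprof <tracefile>.trc report.txt sort=exeela
ALTER SESSION SET sql_trace = TRUE;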
If you don't have time to learn and make mistakes, you should hire a professional.
Ross Leishman
Re: Oracle DB Performance Tests [message #600387 is a reply to message #600386]
Tue, 05 November 2013 23:08 |
Kevin Meade
Messages: 2103 Registered: December 1999 Location: Connecticut USA
Senior Member
This can be a fun assignment for you. You will have an opportunity to go your own way in deciding how you want to proceed and what you want to learn, mostly because few of your bosses will have a clue what you are supposed to do, so you can pretty much do whatever you want. That said, there will be some choices for you to make.
As in all things, you need to make certain people happy. That means for an assignment like this you want to do some basic interviewing of your bosses and other stakeholders. If you are lucky and ask a few good questions, they can give you a list of goals to achieve, maybe even some specific artifacts they expect to be produced. You particularly want to know if their expectations center on scalability of the system as a whole, or on efficient code and efficient SQL. The two tuning processes are different: the first is very heavy on planning and simulation; the second is best approached with a real system doing real work that is analyzed for problem areas. Which one are you doing? Don't say both, because they are two different things and two different people should work on each.
Second, you want to keep a good record of what you do. You will need this to show people your progress. Keep decent notes as you go by simply writing your thoughts down. You can always organize them later if need be. This will help you to explain later what you were doing, which will keep you employed, among other things.
Third, you will be faced with the choice of who to listen to. Oracle and other vendors tend to push tuning from the perspective of using their tools. Forums like this one will focus more on the why of it all and not care too much about which tools you use. The difficulty with tools is that any one tool can take a great deal of time to learn to use properly. Some require months of use before achieving basic proficiency. This does not make these tools bad, just not as easy as their vendors might want us to think. So ask yourself if you have the time for that. You will likely only be able to get good at one, so if you decide to use a specific tool, make sure you understand its niche and then don't hold back on learning it.
If one of the deliverables is FINDING POORLY PERFORMING SQL, this can actually be pretty easy as you can use any number of TOP-N tuning queries to find them. I bet Ross has some good ones. I have some too. What version of the database will you be working with?
There are some typical ways you can find "BAD" SQL, keeping in mind that poor performance is relative.
1. A user says it takes 6 seconds to go from page X to page Y in web app ABC and that is getting annoying. You must get with an app developer to find the problem SQL in the app, based on the user's description of what they are doing when it happens. Even though these queries sound like they are fast, these short delays can actually be a big deal because the application is Customer Facing, as they say.
2. A maintenance manager says they just migrated an application to a new version of the database (or has some other story to tell) and now they have lots of queries that are running much slower than before. They have usually already identified the queries for you. You know they are in big trouble if you hear the word SLA in anything they say.
3. A developer may say that they have finished their application but that they cannot get out of QA because performance at higher data volumes is much slower than in their test environments. You will need to isolate the SQL, possibly using TOP-N SQL-finding queries, or ask the developers to identify the statements. They can put hints in their SQL to identify each one so you can modify your TOP-N scripts to find only them, and they can instrument their code with simple instrumentation to show where their time is going. If you have PL/SQL and Toad then you might look at the PL/SQL monitoring tools from Oracle, as they get easier to use via Toad. Not plugging for Toad, just talking from some basic experience. Importance can be gauged by an approaching deadline, so ask what that is.
4. You can just go trolling with your TOP-N bad-SQL finder queries (one possible version is sketched below). Every system, even one that has been tuned, has some SQL that can go faster. These bubble to the top for various stats like CPU LOAD / READS & WRITES / MEMORY CONSUMPTION / NETWORK UTILIZATION. I run one from my sqlplus login.sql file so it runs every time I log into one of my databases. It lets me see if there is anything interesting going on. My cut-off for a problem query when trolling for possible opportunities is any SQL taking > 1000 CPU seconds over all executions. This generally works out to < 20 statements, with just a couple of really big ones. These can lead to big savings, although one can question the value of tuning something that is not causing anyone pain yet.
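Kevin did not post his finder script, but one possible TOP-N query along these lines might look like the following (a sketch assuming access to V$SQL; note that CPU_TIME and ELAPSED_TIME there are in microseconds):

-- Top 20 statements by total CPU, using the 1000-CPU-second cut-off above
SELECT *
  FROM (SELECT sql_id,
               executions,
               ROUND(cpu_time / 1000000)     cpu_seconds,
               ROUND(elapsed_time / 1000000) elapsed_seconds,
               SUBSTR(sql_text, 1, 60)       sql_text
          FROM v$sql
         WHERE cpu_time > 1000 * 1000000     -- > 1000 CPU seconds over all executions
         ORDER BY cpu_time DESC)
 WHERE ROWNUM <= 20;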
Kevin
Re: Oracle DB Performance Tests [message #600425 is a reply to message #600387]
Wed, 06 November 2013 07:21 |
ind9
Messages: 65 Registered: January 2009
Member
Thanks a lot for your suggestions. I had a chance to discuss this with my lead and got the following information.
My activities are performing LOAD and STRESS testing using MICROSOFT TEST MANAGER (Lab Management), preparing reports on the results using Oracle ASH and AWR, doing further analysis based on the tests to identify hot spots, and later passing this information on to the DEV colleagues.
Sounds interesting!
One thing is clear to me: I have to learn how to perform load tests. I would also like to ask the experts for suggestions on analyzing the test results using Oracle ASH or AWR.
Thanks in advance.
Re: Oracle DB Performance Tests [message #600917 is a reply to message #600377]
Wed, 13 November 2013 17:51 |
alan.kendall@nfl.com
Messages: 163 Registered: June 2012 Location: Culver City, California
Senior Member
Ind9,
Look at the event waits. Here are some of my findings over the last few years, along with my top five tuning tips.
A RAC database processing from memory will give "gc" event waits like "gc current request" or "gc cr request".
But a database that is I/O bound will give a lot of "db file sequential read" or "direct path read".
A database that has too little memory for sorting will give disk sort events like "direct path read temp".
A database that has too big a default cache, and that needs better caching of objects or better indexing, will show too much "latch free" due to memory management.
A database with a lot of "log buffer space" needs a larger log_buffer parameter, but that requires a shutdown/startup.
A database with a lot of "log file switch (checkpoint incomplete)" needs a larger log size, and that can be done online.
A database with a lot of "cache buffers chains" could be in need of better indexes to reduce CPU and sequential scans.
1) Set pga_aggregate_target large so that more sorts will occur in memory.
SYS AS SYSDBA> alter system set pga_aggregate_target=20g scope=both;
System altered.
The following SQL will show you if sorts are going to disk because of a small pga_aggregate_target.
select host_name, instance_name instance,
       round((select sum(phyblkwrt) from v$tempstat) /
             (select value + 1 from v$sysstat where name = 'sorts (disk)')
             * 8192 / 1024 / 1024) average_disk_sort_in_meg,
       value / 1024 / 1024 / 1024 pga_in_gigabytes
  from dual, v$instance, v$parameter v
 where v.name = 'pga_aggregate_target';
INSTANCE AVERAGE_DISK_SORT_IN_MEG PGA_IN_GIGABYTES
--------- ------------------------ ----------------
CSCDAD 370 .8
CSCDAP1 Poor 197 20.0
CSCDAP2 Poor 154 20.0
CSCDAP3 Poor 187 20.0
CSCDAP4 313 20.0
CSCDAQ 314 3.9
CSCDAS1 240 20.0
CSCDAS2 422 20.0
CSESBD 268 1.0
CSESBP1 629 6.0
CSESBP2 638 6.0
CSESBQ 501 6.0
CSESBS1 518 6.0
NALFD 570 .0
NALFP1 Poor 83 .0
NALFS1 Poor 126 .0
NALFT Poor 198 .0
NDOCP1 Good 2581 .0
NDOCP2 Good 1208 .0
NDOCP3 Good 1042 .0
NDOCP4 Good 1158 .0
NTOOLS Poor 134 .6
NWEBPD Poor 187 .0
2) Calculate the PHYSRDS_PER_HOUR (from disk) and the LOGICAL_GIG_PER_MIN (from memory) as part of your hit ratio, and strive to keep the physical reads per hour less than 1 million.
select host_name host, instance_name instance,
       sysdate - i.startup_time updays,
       round(((sum(decode(name, 'consistent gets', value, 0)) +
               sum(decode(name, 'db block gets', value, 0)) -
               sum(decode(name, 'physical reads', value, 0))) /
              (sum(decode(name, 'consistent gets', value, 0)) +
               sum(decode(name, 'db block gets', value, 0)))) * 100, 2) blkhit,
       round(sum(decode(name, 'physical reads', value, 0)) /
             to_number(sysdate - i.startup_time) / 24) physrds_per_hour,
       sum(decode(name, 'consistent gets', value, 0)) /
             to_number(sysdate - i.startup_time) *
             8192 / 1024 / 1024 / 1024 / 24 / 60 logical_gig_per_min,
       sum(decode(name, 'physical reads', value, 0)) physrds,
       sum(decode(name, 'consistent gets', value, 0)) consistent,
       sum(decode(name, 'consistent gets', value, 0)) /
             to_number(sysdate - i.startup_time) / 24 con_per_hour,
       sum(decode(name, 'db block gets', value, 0)) dbblock,
       sum(decode(name, 'db block gets', value, 0)) /
             to_number(sysdate - i.startup_time) / 24 dbblock_per_hour
  from v$sysstat, v$instance i
 group by host_name, instance_name, (sysdate - i.startup_time);
HOST INSTANCE UPDAYS BLKHIT PHYSRDS_PER_HOUR LOGICAL_GIG_PER_MIN
--------------- -------- -------- ------- ---------------- -------------------
proddb04 NDOCP1 11.589 97.83 91350 Low usage .53286698
proddb05 NDOCP2 11.575 94.42 Poor 8696269 19.7911387
proddb06 NDOCP3 11.562 77.81 Poor 6005822 3.43087816
proddb07 NDOCP4 11.547 90.89 97018 Low usage .13358338
proddb04 NWEBP1 11.589 91.91 Poor 5410039 8.48724881
proddb05 NWEBP2 11.575 94.19 poor 3979153 8.6766406
proddb06 NWEBP3 11.562 96.07 poor 2386923 7.69341975
proddb07 NWEBP4 11.547 95.09 poor 3465115 8.94368193
proddb04 NALFP1 11.590 99.88 363 Low usage .03294715
csprdesbdb01-fe CSESBP1 383.984 96.49 Poor 504409 1.79572242
csprdesbdb02-fe CSESBP2 383.983 97.97 Poor 394711 2.44242447
csprdcdadb11 CSCDAP1 35.552 99.99 Good 30725 High Usage 69.49822
csprdcdadb12 CSCDAP2 35.549 100.00 Good 15558 High Usage 84.09462
csprdcdadb13 CSCDAP3 35.548 100.00 Good 12831 High Usage 69.82458
csprdcdadb14 CSCDAP4 35.547 100.00 Good 14901 High Usage 77.88647
I have been getting the physical reads per hour lower by adding indexes and caching important objects in memory.
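For reference, caching a hot object can be as simple as the following (a sketch; the table name is borrowed from the output above, and the KEEP pool variant assumes db_keep_cache_size has been set):

-- Favor keeping this table's blocks in the default buffer cache
alter table nflprod.dm_sysobject_s cache;
-- Or dedicate it to the KEEP pool (requires db_keep_cache_size > 0)
alter table nflprod.dm_sysobject_s storage (buffer_pool keep);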
3) Run a query against dba_hist_seg_stat to see what is slowing down your system with excessive I/O. Add indexes, rewrite SQL, or cache tables/indexes to fix it.
select to_char(c.begin_interval_time, 'YYYY-MM-DD Day') "DATE",
       a.object_name,
       sum(b.physical_reads_delta) total_daily_physical_reads
  from dba_objects a, dba_hist_seg_stat b, sys.wrm$_snapshot c
 where b.snap_id > (select max(snap_id) - 24 * 3 from sys.wrm$_snapshot)
   and a.object_id = b.obj#
   and b.physical_reads_delta > 1000
   and c.instance_number = (select instance_number from v$instance)
   and c.snap_id = b.snap_id
 group by to_char(c.begin_interval_time, 'YYYY-MM-DD Day'), a.object_name
 order by 1, 3;
DATE OBJECT_NAME TOTAL_DAILY_PHYSICAL_READS
---------- -------------------- --------------------------
2013-11-13 DMR_CONTENT_R 1171630
2013-11-13 NFL_CONTENT_R_COMP1 1551174
2013-11-13 DM_RELATION_S 1803247
2013-11-13 DMI_QUEUE_ITEM_S 2882564
2013-11-13 NFL_CONTENT_R 4293163
2013-11-13 DMR_CONTENT_S 5468590
2013-11-13 NFL_CONTENT_S 66334761
2013-11-13 DM_SYSOBJECT_S 81096067
2013-11-13 DM_SYSOBJECT_R (High I/O Usage) 165812568
4) You can modify the above query to see what is using the most CPU by changing "physical" to "logical", as in the following.
select to_char(c.begin_interval_time, 'YYYY-MM-DD Day') "DATE",
       a.object_name,
       sum(b.logical_reads_delta) total_daily_logical_reads
  from dba_objects a, dba_hist_seg_stat b, sys.wrm$_snapshot c
 where b.snap_id > (select max(snap_id) - 24 * 3 from sys.wrm$_snapshot)
   and a.object_id = b.obj#
   and b.logical_reads_delta > 1000000
   and c.instance_number = (select instance_number from v$instance)
   and c.snap_id = b.snap_id
 group by to_char(c.begin_interval_time, 'YYYY-MM-DD Day'), a.object_name
 order by 1, 3;
DATE OBJECT_NAME TOTAL_DAILY_LOGICAL_READS
---------- -------------------- -------------------------
2013-11-13 DM_SYSOBJECT_S_COMP1 28899056
2013-11-13 D_1F000D5D80000903 95916752
2013-11-13 DM_SYSOBJECT_R 97093584
2013-11-13 D_1F000D5D80000902 114757120
2013-11-13 DM_SYSOBJECT_R_COMP1 115339280
2013-11-13 DM_WEBC_800129A9_L 116756464
2013-11-13 D_1F000D5D80000146 175874032
2013-11-13 D_1F000D5D8000090C 192055856
2013-11-13 NFL_VIDEO_S 207416768
2013-11-13 DM_SYSOBJECT_R_COMP2 254522976
2013-11-13 D_1F000D5D80000901 430375152
2013-11-13 NFL_CONTENT_S 439930096
2013-11-13 D_1F000D5D80000109 867889856
2013-11-13 DM_SYSOBJECT_S (High Cpu Usage) 909816240
5) Join dba_objects with gv$bh to see what is in memory, causing too many physical reads and excessive memory management by Oracle.
select o.object_type, i.instance_name db,
       count(*) * 8192 / 1024 / 1024 meg_in_memory,
       o.owner || '.' || o.object_name object_in_memory
  from dba_objects o, gv$bh bh, gv$instance i
 where o.data_object_id = bh.objd
   and bh.status <> 'free'
   and bh.inst_id = i.inst_id
 group by o.owner || '.' || o.object_name, o.object_type, i.instance_name
having count(*) > 0
 order by count(*);
OBJECT_TYPE DB MEG_IN_MEMORY OBJECT_IN_MEMORY
----------- -------- ------------- ----------------------------
INDEX NDOCP2 223 NFLPROD.DM_SYSOBJECT_S_COMP1
INDEX NDOCP2 223 NFLPROD.D_1F000D5D80000109
TABLE NDOCP2 224 NFLPROD.DM_SYSOBJECT_R
TABLE NDOCP3 231 NFLPROD.NFL_CONTENT_S
TABLE NDOCP1 252 NFLPROD.DM_SYSOBJECT_S
INDEX NDOCP4 253 NFLPROD.D_1F000D5D80000902
INDEX NDOCP3 286 NFLPROD.IDX_CONTENT_R_TEAM_O
INDEX NDOCP2 299 NFLPROD.D_1F000D5D8000000F
TABLE NDOCP4 396 NFLPROD.NFL_CONTENT_R
TABLE NDOCP1 441 NFLPROD.DM_WEBC_800129A9_L
INDEX NDOCP3 443 NFLPROD.NFL_CONTENT_R_COMP1
INDEX NDOCP2 457 NFLPROD.NFL_CONTENT_R_COMP1
TABLE NDOCP4 500 NFLPROD.DM_RELATION_S
INDEX NDOCP3 588 NFLPROD.D_1F000D5D80000043
TABLE NDOCP3 644 NFLPROD.NFL_CONTENT_R
INDEX NDOCP2 709 NFLPROD.DM_SYSOBJECT_R_COMP2
TABLE NDOCP4 768 NFLPROD.DMR_CONTENT_S
TABLE NDOCP2 908 NFLPROD.NFL_CONTENT_S
TABLE NDOCP4 919 NFLPROD.DM_SYSOBJECT_R
TABLE NDOCP3 935 NFLPROD.DM_SYSOBJECT_R
TABLE NDOCP4 1143 NFLPROD.DM_SYSOBJECT_S
INDEX NDOCP3 1204 NFLPROD.D_1F000D5D80000051
TABLE NDOCP3 (Memory Hog) 1544 NFLPROD.DM_SYSOBJECT_S
TABLE NDOCP2 (Memory Hog) 2175 NFLPROD.DM_SYSOBJECT_S
Many memory hogs can be optimized by better indexing, by purging old rows, and sometimes by just rebuilding indexes.
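A rebuild itself is a one-liner (the index name is borrowed from the output above; the ONLINE option assumes Enterprise Edition):

alter index nflprod.d_1f000d5d80000902 rebuild online;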
Alan