
Bobby Durrett's DBA Blog

Oracle database performance

DCLI to back up Oracle home and inventory before patch

Mon, 2015-03-16 15:48

I was preparing for my weekend patch of our Exadata system and I needed to back up all of our Oracle homes and inventories on our production system.  On our 2 node dev and qa clusters I just ran the backups by hand like this:

login as root

cd /u01/app/oracle/product/11.2.0.4

tar -cvf - dbhome_1 | gzip > dbhome_1-20150211.tgz

cd /u01/app

cp -r oraInventory oraInventory.20150211

But the production cluster has 12 nodes, so I had to figure out how to use DCLI to run the equivalent on all 12 nodes instead of doing them one at a time.  To run a DCLI command you need to go to the directory that has the list of database server host names.  So, first you do this:

login as root

cd /opt/oracle.SupportTools/onecommand

The file dbs_group contains a list of the database server host names.

Next, I wanted to check how much space was free on the filesystem and how much space the Oracle home occupied so I ran these commands:

dcli -g dbs_group -l root "df|grep u01"

dcli -g dbs_group -l root "cd /u01/app/oracle/product/11.2.0.4;du -ks ."

The first command showed how much space was free on the /u01 filesystem on each database node.  The second showed how much space the 11.2.0.4 home consumed.  I should have done “du -ks dbhome_1” since I’m backing up dbhome_1 rather than everything under 11.2.0.4, but there wasn’t much else under 11.2.0.4 so it worked out.

Now that I knew that there was enough space I ran the backup commands using DCLI.

dcli -g dbs_group -l root "cd /u01/app/oracle/product/11.2.0.4;tar -cvf - dbhome_1 | gzip > dbhome_1-20150316.tgz"

dcli -g dbs_group -l root "cd /u01/app;cp -r oraInventory oraInventory.20150316"

I keep forgetting how to do this so I thought I would post it.  I can refer back to this later and perhaps it will be helpful to others.

– Bobby

 

Categories: DBA Blogs

Loads of fun with DBA_HIST_OSSTAT

Fri, 2015-03-13 17:35

I saw a load of 44 on a node of our production Exadata and it worried me.  The AWR report looks like this:

Host CPU
            Load Average
 CPUs     Begin       End     %User   %System      %WIO     %Idle
----- --------- --------- --------- --------- --------- ---------
   16     10.66     44.73      68.3       4.3       0.0      26.8

So, why is the load average 44 and yet the CPU is 26% idle?

I started looking at ASH data and found samples with 128 processes active on the CPU:

select sample_time,count(*)
from DBA_HIST_ACTIVE_SESS_HISTORY a
where
session_state='ON CPU' and
instance_number=3 and
sample_time between
to_date('05-MAR-2015 01:00:00','DD-MON-YYYY HH24:MI:SS') and
to_date('05-MAR-2015 02:00:00','DD-MON-YYYY HH24:MI:SS')
group by sample_time
order by sample_time;

SAMPLE_TIME                    COUNT(*)
---------------------------- ----------
05-MAR-15 01.35.31.451 AM           128

... lines removed for brevity

Then I dumped out the ASH data for one sample and found all the sessions on the CPU were running the same parallel query:

select /*+  parallel(t,128) parallel_index(t,128) dbms_stats ...

So, for some reason we are gathering stats on a table with a degree of 128, and that spikes the load.  But why does the CPU idle percentage sit at 26.8% when the load starts at 10.66 and ends at 44.73?  As best I can tell, load in DBA_HIST_OSSTAT is a point-in-time measurement, not an average over the whole snapshot interval.  The 11.2 manual describes load in v$osstat this way:

Current number of processes that are either running or in the ready state, waiting to be selected by the operating-system scheduler to run. On many platforms, this statistic reflects the average load over the past minute.

So, load could spike at the end of an hour-long AWR report interval and still CPU could average 26% idle for the entire hour?  So it seems.
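
To see this for yourself, here is a rough sketch of a query against DBA_HIST_OSSTAT that pulls the recorded load at each snapshot for one instance.  Instance 3 matches the example above; everything else is a placeholder to adapt to your own system.

-- point-in-time load recorded at each AWR snapshot for instance 3
select s.end_interval_time, o.value os_load
from dba_hist_osstat o, dba_hist_snapshot s
where o.snap_id = s.snap_id
and o.dbid = s.dbid
and o.instance_number = s.instance_number
and o.stat_name = 'LOAD'
and o.instance_number = 3
order by s.end_interval_time;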

– Bobby

Categories: DBA Blogs

Arizona Oracle User Group Meeting March 18

Thu, 2015-03-12 09:39
I just found out that this meeting was cancelled.  We will have to catch the next one. :) – 3/16/2015

Sign up for the Arizona Oracle User Group (AZORA) meeting next week: signup url

The email that I received from the meeting organizer described the topic of the meeting in this way:

“…the AZORA meetup on March 18, 2015 is going to talk about how a local business decided to upgrade their Oracle Application from 11i to R12 and give you a first hand account of what went well and what didn’t go so well. ”

Description of the speakers from the email:

Becky Tipton

Becky is the Director of Project Management at Blood Systems located in Scottsdale, AZ. Prior to coming to Blood Systems, Becky was an independent consultant for Tipton Consulting for four years.

Mike Dill

Mike is the Vice President of Application Solutions at 3RP, a Phoenix consulting company. Mike has over 10 years of experience implementing Oracle E-Business Suite and managing large-scale projects.

I plan to attend.  I hope to see you there too. :)

– Bobby

Categories: DBA Blogs

Delphix User Group Presentation

Wed, 2015-03-11 16:30

My Delphix user group presentation went well today. 65 people attended.  It was great to have so much participation.

Here are links to my PowerPoint slides and a recording of the WebEx:

Slides: PowerPoint

Recording: WebEx

Also, I want to thank two Delphix employees, Ann Togasaki and Matthew Yeh.  Ann did a great job of converting my text bullet points into a visually appealing PowerPoint.  She also translated my hand-drawn images into useful drawings.  Matthew did an amazing job of taking my bullet points and my notes and adding meaningful graphics to my text-only slides.

I could not have put the PowerPoint together in time without Ann and Matthew’s help and they did a great job.

Also, for the first time I wrote out my script word for word and added it to the notes on the slides.  So, you can see what I intended to say with each slide.

Thank you to Adam Leventhal of Delphix for inviting me to do this first Delphix user group WebEx presentation.  It was a great experience for me and I hope that it was useful to the user community as well.

– Bobby

Categories: DBA Blogs

Blog third anniversary

Thu, 2015-03-05 09:31

My first blog post was March 5, 2012, three years ago today.

I have enjoyed blogging.  Even though I am talking about topics related to my work, blogging does not feel like work.  The great thing about blogging is that it’s completely in my control.  I control the content and the timetable.  I pay a small amount each year for hosting and for the domain name, but the entertainment value alone is worth the price of the site.  It also has career value, because this blog has given me greater credibility both with my employer and outside the company.  Plus, I think it makes me better at my job because blogging forces me to put into words the technical issues that I am working on.

It’s been three good years of blogging.  Looking forward to more in the future.

– Bobby

Categories: DBA Blogs

Joined twitter

Wed, 2015-03-04 17:27

I joined Twitter.  I don’t really know how to use it.  I’m set up as Bobby Durrett, @bobbydurrettdba, if that means anything to you. :)

– Bobby

Categories: DBA Blogs

Different plan_hash_value same plan

Mon, 2015-03-02 15:38

I mentioned this same effect in an earlier post about SQL profiles: link

I get a different plan_hash_value for a query each time I run an explain plan or run the query.  I see this in queries whose plan includes a system generated temporary segment like this:

|   1 |  TEMP TABLE TRANSFORMATION   |                             |
...
|  72 |    TABLE ACCESS STORAGE FULL | SYS_TEMP_0FD9D668C_764DD84C |

For some reason the system generated temporary table name gets included in the plan_hash_value calculation.  This makes plan_hash_value a less than perfect way to compare two plans to see if they are the same.
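
As a rough illustration, a query like the following against DBA_HIST_SQL_PLAN would list sql_ids that have more than one plan_hash_value and whose plans include a TEMP TABLE TRANSFORMATION step, which are the candidates for this “same plan, different hash” situation.  This is just a sketch to illustrate the idea.

-- sql_ids with multiple plan hash values whose plans include a
-- TEMP TABLE TRANSFORMATION step
select p.sql_id, count(distinct p.plan_hash_value) plan_count
from dba_hist_sql_plan p
where p.sql_id in
  (select sql_id from dba_hist_sql_plan
   where operation = 'TEMP TABLE TRANSFORMATION')
group by p.sql_id
having count(distinct p.plan_hash_value) > 1
order by plan_count desc;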

Last week I was using my testselect package to test the effect of applying a patch to fix bug 20061582.  I used testselect to grab 1160 select statements from production and got their plans with and without the patch applied on a development database.  I didn’t expect many, if any, plans to change based on what the patch does.  Surprisingly, 115 out of the 1160 select statements had a changed plan, but all the ones I looked at had the system generated temporary table names in their plan.

Now, I am going to take the queries that have different plans with and without the patch and execute them both ways.  I have a feeling that the plan differences are mainly due to system generated temp table names and their execution times will be the same with and without the patch.

I’ve run across other limitations of plan hash value as I mentioned in an earlier post: link

I’m still using plan_hash_value to compare plans but I have a list of things in my head that reminds me of cases where plan_hash_value fails to accurately compare two plans.

– Bobby

P.S. After posting this I realized that I didn’t know how many of the 115 select statements with plans that differed with and without the patch had system generated temp tables.  Now I know.  114 of the 115 have the string “TEMP TABLE TRANSFORMATION” in their plans.  So, really, there is only one select statement for which the patch may have actually changed its plan.

P.P.S. I reapplied the patch and verified that the one sql_id didn’t really change plans with the patch.  So, that means all the plan changes were due to the system generated name.  Also, all the execution times were the same except for one query that took 50 seconds to parse without the patch and 0 with the patch.  So, one of the queries with the system generated temp table name happened to benefit from the patch.  Very cool!

P.P.P.S. This was all done on an 11.2.0.4 Exadata system.

Categories: DBA Blogs

Speaking at Delphix User Group Webex March 11

Tue, 2015-02-24 17:42

On March 11 at 10 am California time I will be speaking in a Delphix User Group Webex session.

Here is the sign up url: WebEx sign up.

Adam Leventhal, the Delphix CTO, will also be on the call previewing the new Delphix 4.2 features.

I will describe our experience with Delphix and the lessons we have learned.  It is a technical talk so it should have enough details to have value to a technical audience.  Hopefully I have put enough effort into the talk to make it useful to other people who have Delphix or are considering getting it.

There will be time for questions and answers in addition to our talks.

I really enjoy doing user group presentations.  I had submitted an abstract for this talk to the Collaborate 2015 Oracle user group conference, but it was not accepted, so I won’t get a chance to give it there.  But this WebEx event gives me a chance to present the same material, so I’m happy to have this opportunity.

If you have an interest in hearing about Delphix join the call.  It is free and there will be some good technical content.

– Bobby

P.S. If this talk interests you I also have some earlier blog posts that relate to some of the material I will be covering:

Delphix first month

Delphix direct i/o and direct path reads

Delphix table recovery

Also, I plan to post the slides after the talk.

Categories: DBA Blogs

A new index on a small table makes a big difference

Mon, 2015-02-16 16:12

A few weeks back, on the weekend just before I went on call, we got a complaint about slowness on an important set of reports.  I worried that the slowness would continue during my support week, so I tried to figure out why the reports were slow.  I reviewed an AWR report for the 24 hours when the reports were running and found a simple query against a tiny table at the top of the “SQL ordered by Elapsed Time” report:

   SQL Id         Elapsed (s)        Execs
-------------   ------------- ------------
77hcmt4kkr4b6      307,516.33 3.416388E+09

I edited the AWR report to show just elapsed seconds and number of executions.  3.4 billion executions totaling 307,000 seconds of elapsed time.  This was about 90 microseconds per execution.

The previous weekend the same query looked like this:

   SQL Id         Elapsed (s)        Execs
-------------   ------------- ------------
77hcmt4kkr4b6      133,143.65 3.496291E+09

So, about the same number of executions but less than half of the elapsed time.  This was about 38 microseconds per execution.  I never fully explained the change from week to week, but I found a way to improve the query performance by adding a new index.

The plan was the same both weekends so the increase in average execution time was not due to a plan change.  Here is the plan:

SQL_ID 77hcmt4kkr4b6
--------------------
SELECT DIV_NBR FROM DIV_RPT_GEN_CTL WHERE RPT_NM = :B1 AND
GEN_STAT = 1

Plan hash value: 1430621991

------------------------------------------------------------------
| Id  | Operation                   | Name               | Rows  |
------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                    |       |
|   1 |  TABLE ACCESS BY INDEX ROWID| DIV_RPT_GEN_CTL    |     1 |
|   2 |   INDEX RANGE SCAN          | DIV_RPT_GEN_CTL_U1 |     1 |
------------------------------------------------------------------

I found that the table only had 369 rows and 65 blocks so it was tiny.

The table’s only index was on columns RPT_NM and RPT_ID, but only RPT_NM was in the query.  For the given value of RPT_NM the index would look up all rows in the table with that value until it found those with GEN_STAT=1.  I suspect that on the weekend of the slowdown the number of rows being scanned for a given RPT_NM value had increased, but I cannot prove it.

I did a count grouping by the column GEN_STAT and found that only 1 of the table’s 369 rows had GEN_STAT=1.

SELECT GEN_STAT,count(*)
FROM DIV_RPT_GEN_CTL
group by GEN_STAT;

  GEN_STAT   COUNT(*)
---------- ----------
         1          1
         2        339
         0         29

So, even though this table is tiny it made sense to add an index which included the selective column GEN_STAT.  Also, since the reports execute the query billions of times per day it made sense to include the one column in the select clause as well, DIV_NBR.  By including DIV_NBR in the index the query could get DIV_NBR from the index and not touch the table.  The new index was on the columns RPT_NM, GEN_STAT, and DIV_NBR in that order.
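
For what it is worth, the new index would look something like this.  The index name comes from the plan below; the rest is reconstructed from the column list, so treat it as a sketch rather than the exact DDL we ran.

-- covering index: where-clause columns first, then the selected column
create index DIV_RPT_GEN_CTL_U2 on DIV_RPT_GEN_CTL
(RPT_NM, GEN_STAT, DIV_NBR);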

Here is the new plan:

SQL_ID 77hcmt4kkr4b6
--------------------
SELECT DIV_NBR FROM DIV_RPT_GEN_CTL WHERE RPT_NM = :B1 AND
GEN_STAT = 1

Plan hash value: 2395994055

-------------------------------------------------------
| Id  | Operation        | Name               | Rows  |
-------------------------------------------------------
|   0 | SELECT STATEMENT |                    |       |
|   1 |  INDEX RANGE SCAN| DIV_RPT_GEN_CTL_U2 |     1 |
-------------------------------------------------------

Note that it uses the new index and does not access the table.  Here is the part of the AWR report for the problem query for last weekend:

   SQL Id         Elapsed (s)        Execs
-------------   ------------- ------------
77hcmt4kkr4b6       84,303.02 4.837909E+09

4.8 billion executions and only 84,000 seconds elapsed.  That is about 17.4 microseconds per execution.  That is less than half of what the average execution time was the weekend before the problem started.

The first Monday after we put the index in we found that one of the slow reports had its run time reduced from 70 minutes to 50 minutes.  It was great that we could improve the run time so much with such a simple fix.

It was a simple query to tune.  Add an index using the columns in the where clause and the one column in the select clause.  It was a tiny table that normally would not even need an index.  But, any query that an application executes billions of times in a day needs to execute in the most efficient way possible so it made sense to add the best possible index.

– Bobby

Categories: DBA Blogs

Long parse time with interval partitioning and subpartitions

Mon, 2015-02-09 13:52

In an earlier post I described how some queries with long parse times were causing long library cache lock waits.  Friday I applied a patch from Oracle that resolved the long parse times.  Here are the conditions which may relate to this bug:

  1. Interval partitioned table
  2. Partitioned by range
  3. Sub-partitioned by list
  4. Open ended range on partitioning column in where clause
  5. Tens of thousands of sub-partitions
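
To make the conditions concrete, here is a minimal made-up sketch of a table and predicate that fit this description.  Only the CLNDR_DT column comes from the real query; the table name, the daily interval, and the list values are invented for illustration.

-- interval (range) partitioned, list sub-partitioned (conditions 1-3)
create table example_fact
( clndr_dt  date,
  region_cd varchar2(10),
  amount    number )
partition by range (clndr_dt)
interval (numtodsinterval(1,'DAY'))
subpartition by list (region_cd)
subpartition template
( subpartition sp_east  values ('EAST'),
  subpartition sp_west  values ('WEST'),
  subpartition sp_other values (default) )
( partition p_first values less than
  (to_date('2014-01-01','YYYY-MM-DD')) );

-- open-ended range on the partitioning column (condition 4)
select sum(amount) from example_fact
where clndr_dt >= to_date('2014-11-17','YYYY-MM-DD');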

Prior to applying the patch I did an explain plan on two versions of the problematic query.  One version specified an open-ended range on the partitioning column and the other closed off the range.

Slow version:

...
T392658.CLNDR_DT >= TO_DATE('2014-11-17' , 'YYYY-MM-DD') and 
...
Explained.

Elapsed: 00:00:46.20

Fast version:

...
T392658.CLNDR_DT >= TO_DATE('2014-11-17' , 'YYYY-MM-DD') and
T392658.CLNDR_DT <= TO_DATE('2014-11-26' , 'YYYY-MM-DD') and
...
Explained.

Elapsed: 00:00:00.09

The queries are the same except for the extra date condition which closes off the date range.  Note that the explain plan took 46 seconds with the open-ended range and less than a tenth of a second with the closed off range.

With the patch the slow version is just as fast as the fast one.

This is bug 20061582 in Oracle’s system.

From my conversations with Oracle it sounds like in certain cases the optimizer reviews information for many or all of the sub-partitions, and since we have 20,000 of them for this table in production, parsing can take minutes.  I also messed with the dates on the open-ended condition and found that if I made the date early enough the parse time issue went away.  So, it seems that there is some set of conditions, which I don’t know how to easily reproduce, that leads the optimizer to look at sub-partition information and slows down the parse.

This is on 11.2.0.4 on an Exadata system.  I was able to reproduce the long parse times on non-Exadata Linux 11.2.0.4 and 12.1.0.2 systems so it does not occur only on Exadata.

This issue surprised me because I thought that the optimizer would not look at partition or sub-partition statistics in a query that has a range condition which spanned more than one partition.  In the past I have always seen the global or table level stats used in these situations.  But, now I know that there are some cases where the optimizer will dig into the sub-partition stats even though the query covers more than one partition.

Looking at the plan of my slow-parsing query before and after the patch, I get a clue that the optimizer is looking at sub-partition stats:

Partition range without the patch:

KEY(SQ)|KEY(SQ)

Partition range with the patch:

     1 |   136

Evidently KEY(SQ) relates to some sort of filtering of the sub-partitions, which causes the long parse time.  The manuals describe KEY(SQ) in a section titled “Dynamic Pruning with Subqueries”, so maybe the problem queries have some sub-query that the optimizer was using to choose which sub-partitions the query needed.

If you have an interval partitioned table with tens of thousands of sub-partitions, and you see parse times in the minutes with open-ended ranges specified on the partitioning column, your system could be hitting this same issue.

– Bobby

Categories: DBA Blogs

Arizona User Group Meeting tomorrow

Mon, 2015-01-19 14:07

Here is the link: url

It looks like our meeting tomorrow will be on Oracle 12c new features.  I’m looking forward to participating. :)

– Bobby

Categories: DBA Blogs

Cary Millsap

Wed, 2015-01-14 14:31

This is my third of four posts about people who have made a major impact on my Oracle database performance tuning journey.  This post is about Cary Millsap.  The previous two were about Craig Shallahamer and Don Burleson.

I am working through these four people in chronological order.  The biggest impact Cary Millsap had on me was through the book Optimizing Oracle Performance, which he co-authored with Jeff Holt.  I have also heard Cary speak at conferences, and we once had him in for a product demo where I work.

I have delayed writing this post because I struggle to put into words why Cary’s book was so useful to me without repeating a long explanation of the book’s contents.  Just before reading the book I had worked on a system with high CPU usage and queuing for the CPU.  I had just read the paper “Microstate Response-time Performance Profiling” by Danisment Gazi Unal, which talked about why CPU measurements in Oracle do not include time spent queued for the CPU.  Then I read Cary Millsap’s book and it was very enlightening.  For one thing, the book was very well written and convincingly argued.  But the key concept was Cary Millsap’s idea of looking at the waits and CPU time that Oracle reports at a session level and comparing that to the real elapsed time.  This performance profile with waits, CPU, and elapsed time formed the basis of my first conference talk, which I gave at Collaborate 06: PowerPoint, Word, Zip

Here is an example of a session profile from my presentation:

TIMESOURCE                  ELAPSED_SECONDS
--------------------------- ---------------
REALELAPSED                             141
CPU                                   44.81
SQL*Net message from client            9.27
db file sequential read                 .16

This is a profile of a session that spent roughly two-thirds of its time queued for the CPU.

Since reading Optimizing Oracle Performance I have resolved many performance problems by creatively applying the concepts in the book.  The book focuses on using traces to build profiles.  I have made my own scripts against V$ views and I have also used Precise.  I have used traces as the book suggests but only with TKPROF.  I have not had a chance to use the tool that the book describes, the Method R Profiler.
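
To give an idea of what I mean by scripts against the V$ views, here is a rough sketch that pulls the wait and CPU pieces of a profile for one session.  The SID is a placeholder, the numbers cover the whole life of the session rather than one query, and real elapsed time still has to be measured separately, so this is an illustration of the idea rather than the script I actually use.

-- waits and CPU for one session; SID 123 is a placeholder
select event timesource, time_waited_micro/1000000 seconds
from v$session_event
where sid = 123
union all
select stat_name, value/1000000
from v$sess_time_model
where sid = 123 and stat_name = 'DB CPU'
order by seconds desc;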

However I do it, the focus is on waits, CPU as reported by Oracle, and real elapsed time, all for a single session.  It is a powerful way to approach performance tuning and the main thing I learned from Cary Millsap.  I highly recommend Cary Millsap and Jeff Holt’s book to anyone who wants to learn more about Oracle database performance tuning because it made such a profound impact on my career.

– Bobby



Categories: DBA Blogs

Career Day – Oracle Database Administrator

Tue, 2015-01-06 16:10

I will be talking at my daughter’s high school for career day on Monday, explaining my job as an Oracle Database Administrator.  Wish me luck!

The funny thing is that no one understands what Oracle DBAs do, unless they are one or work closely with one.  I have a feeling that my talk is going to fall flat, but if it helps one of the students in any way it will be worth it.

To me the best thing about being an Oracle DBA is that you can do a pretty interesting and technically challenging job, and companies that are not technology centric will still hire you to do it.  I’ve always been interested in computer technology but have worked in non-technical companies my entire career – mainly a non-profit ministry and a food distribution company.  Neither company makes computers or sells software!

My other thought is how available computer technology is to students today.  Oracle, in one of the company’s more brilliant moves, made all of its software available for download so students can try out the very expensive software for free.  Plus all the manuals are available online.  What is it like to grow up as a student interested in computer technology in the age of the internet?  I can’t begin to compare it to my days in the 1980s when I was in high school and college.  Did we even have email?  I guess we must have but I can’t remember using it much.  Today a student who owns a laptop and has an internet connection has a world of technology at their fingertips far beyond what I had at their age.

Hopefully I won’t bore the students to tears talking about being an Oracle DBA.  They probably still won’t know what it really is after I’m done.  But at least they will know that such a job exists, and maybe that will be helpful to them.

– Bobby

P.S.  There were over 100 students there.  They were pretty polite with only a little talking.  Here is a picture of myself on the left, my daughter in the center, and a coworker who also spoke at the career day on the right.

[photo: careerday]
Categories: DBA Blogs

AZORA – Arizona Oracle User Group meeting January 20th

Fri, 2014-12-12 16:25

AZORA is planning a meeting January 20th.  Here is the link to RSVP: url

Hope to see you there. :)

– Bobby

Categories: DBA Blogs

10128 trace to see partition pruning

Tue, 2014-12-09 10:57

I am working on an SR with Oracle support and they asked me to do a 10128 trace to look at how the optimizer is doing partition pruning.  I did some quick research on this trace and wanted to pass it along.

Here are the names of the two Oracle support documents that I found most helpful:

How to see Partition Pruning Occurred? (Doc ID 166118.1)

Partition Pruning Min/Max Optimization Fails when Parallel Query Run in Serial (Doc ID 1941770.1)

The first was the one Oracle support recommended.  But the SR said to run both a level 2 and a level 7 trace, and the first document did not mention level 7.  The second document, however, has an example of a level 7 trace and more details on how to set it up.
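
For reference, here is a sketch of how the event gets turned on and off at the session level.  Follow the notes above for the exact steps and any extra setup the higher trace levels need; this is just the generic event syntax.

-- turn the 10128 trace on for this session, run the query or explain
-- plan of interest, then turn it off
alter session set events '10128 trace name context forever, level 2';
alter session set events '10128 trace name context off';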

I also found these two non-Oracle sites or blog posts:

http://cbohl.blogspot.com/2006/10/verify-that-partition-pruning-works.html

http://www.juliandyke.com/Diagnostics/Events/EventReference.html#10128

I do not have time to delve into this further now but if you are trying to understand partition pruning then the 10128 trace may help you understand how it works.

– Bobby


Categories: DBA Blogs