
Bobby Durrett's DBA Blog

Oracle database performance

Reminder: Great free computer science and Python programming class starts Wednesday

Mon, 2015-08-24 13:01

I mentioned this class earlier in a blog post but I wanted to remind people who read this blog that the class is starting again on Wednesday.  Here is the URL for the class: link

The class is completely free and taught at a very high level of quality.

It teaches computer science concepts that apply in any programming language but also teaches Python programming.

It is valuable information in an increasingly computer-oriented world and economy, and the class is free, which is remarkable given its quality.

Here is the class name:

MITx: 6.00.1x Introduction to Computer Science and Programming Using Python

Bobby

Categories: DBA Blogs

Memory added based on buffer pool advisory did not give desired result

Wed, 2015-08-19 14:37

Development asked me to look at ways to cut the run time of a series of PeopleSoft payroll batch jobs, so I took an AWR report of the entire four-hour period.  Based on the waits, the percentage of the elapsed time spent on I/O, and the buffer pool advisory, I chose to double the size of the buffer cache.  But the added memory did not improve the run time of the batch jobs.  Maybe the affected blocks are only read into memory once, so they would not be cached no matter how big the buffer pool was.

Here is the original run on June 22 with the original memory settings:

Cache Sizes

                    Begin   End
Buffer Cache:       3,328M  3,424M   Std Block Size:  8K
Shared Pool Size:   1,600M  1,520M   Log Buffer:      7,208K

Top 5 Timed Foreground Events

Event                    Waits      Time(s)  Avg wait (ms)  % DB time  Wait Class
db file sequential read  1,910,393   10,251              5      72.03  User I/O
DB CPU                                2,812                     19.76
log file sync               35,308      398             11       2.80  Commit
resmgr:cpu quantum          31,551       62              2       0.43  Scheduler
db file scattered read       7,499       60              8       0.42  User I/O

Buffer Pool Advisory
  • Only rows with estimated physical reads >0 are displayed
  • ordered by Block Size, Buffers For Estimate
P  Size for  Size    Buffers      Est Phys     Estimated Phys     Est Phys   Est %DBtime
   Est (M)   Factor  (thousands)  Read Factor  Reads (thousands)  Read Time  for Rds
D       336    0.10           41         2.71          6,513,502          1    9842530.00
D       672    0.20           83         2.42          5,831,130          1    8737799.00
D     1,008    0.29          124         2.18          5,241,763          1    7783636.00
D     1,344    0.39          166         1.96          4,720,053          1    6939010.00
D     1,680    0.49          207         1.77          4,250,981          1    6179603.00
D     2,016    0.59          248         1.59          3,825,904          1    5491420.00
D     2,352    0.69          290         1.43          3,438,372          1    4864023.00
D     2,688    0.79          331         1.28          3,083,734          1    4289879.00
D     3,024    0.88          373         1.15          2,758,459          1    3763273.00
D     3,360    0.98          414         1.02          2,459,644          1    3279504.00
D     3,424    1.00          422         1.00          2,405,118          1    3191229.00
D     3,696    1.08          455         0.91          2,184,668          1    2834329.00
D     4,032    1.18          497         0.80          1,931,082          1    2423784.00
D     4,368    1.28          538         0.71          1,696,756          1    2044421.00
D     4,704    1.37          579         0.62          1,479,805          1    1693185.00
D     5,040    1.47          621         0.53          1,278,370          1    1367070.00
D     5,376    1.57          662         0.45          1,090,505          1    1062925.00
D     5,712    1.67          704         0.38            914,112          1     777352.00
D     6,048    1.77          745         0.31            746,434          1     505888.00
D     6,384    1.86          786         0.24            580,310          1     236941.00
D     6,720    1.96          828         0.17            414,233          1     149325.00

In the SQL ordered by Elapsed Time report, the top batch job SQL was 99.14% I/O.

Based on this report it seems that the number of physical reads could be reduced to about 20% of what they were on June 22 by doubling the size of the buffer cache.  But adding the memory did not cut the number of physical reads in any major way.
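
The AWR Buffer Pool Advisory is built from the same information as V$DB_CACHE_ADVICE, so the current advice can also be queried directly; here is a minimal sketch of that kind of query:

-- current buffer cache advice for the DEFAULT pool, 8K block size
select size_for_estimate, size_factor, buffers_for_estimate,
       estd_physical_read_factor, estd_physical_reads
from v$db_cache_advice
where name = 'DEFAULT'
and block_size = 8192
and advice_status = 'ON'
order by size_for_estimate;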

Here is yesterday’s run:

Cache Sizes

                    Begin   End
Buffer Cache:       6,848M  6,816M   Std Block Size:  8K
Shared Pool Size:   3,136M  3,136M   Log Buffer:      16,572K

Top 5 Timed Foreground Events

Event                    Waits      Time(s)  Avg wait (ms)  % DB time  Wait Class
db file sequential read  1,789,852   10,173              6      72.15  User I/O
DB CPU                                2,970                     21.06
log file sync               37,562      200              5       1.42  Commit
resmgr:cpu quantum          24,996       59              2       0.42  Scheduler
db file scattered read       5,409       54             10       0.38  User I/O

Buffer Pool Advisory
  • Only rows with estimated physical reads >0 are displayed
  • ordered by Block Size, Buffers For Estimate
P  Size for  Size    Buffers      Est Phys     Estimated Phys     Est Phys   Est %DBtime
   Est (M)   Factor  (thousands)  Read Factor  Reads (thousands)  Read Time  for Rds
D       672    0.10           83        11.25            516,440          1    1309098.00
D     1,344    0.20          166         5.98            274,660          1     683610.00
D     2,016    0.29          248         4.02            184,712          1     450915.00
D     2,688    0.39          331         2.90            133,104          1     317404.00
D     3,360    0.49          414         2.20            100,860          1     233990.00
D     4,032    0.59          497         1.80             82,768          1     187185.00
D     4,704    0.69          580         1.53             70,445          1     155305.00
D     5,376    0.79          663         1.31             60,345          1     129176.00
D     6,048    0.88          745         1.14             52,208          1     108127.00
D     6,720    0.98          828         1.01             46,477          1      93301.00
D     6,848    1.00          844         1.00             45,921          1      91862.00
D     7,392    1.08          911         0.95             43,572          1      85785.00
D     8,064    1.18          994         0.89             40,789          1      78585.00
D     8,736    1.28        1,077         0.85             38,889          1      73671.00
D     9,408    1.37        1,160         0.81             37,112          1      69073.00
D    10,080    1.47        1,242         0.77             35,490          1      64876.00
D    10,752    1.57        1,325         0.75             34,439          1      62158.00
D    11,424    1.67        1,408         0.73             33,353          1      59347.00
D    12,096    1.77        1,491         0.71             32,524          1      57204.00
D    12,768    1.86        1,574         0.69             31,909          1      55613.00
D    13,440    1.96        1,656         0.68             31,361          1      54194.00

After the memory was added, the same top batch job SQL was 98.80% I/O.  That is some improvement, but not nearly as much as I expected based on the buffer pool advisory.

I guess the moral of the story is that the buffer pool advisory does not apply to specific workloads and is only a general guideline.  Maybe this is the same kind of fallacy you see with buffer cache hit ratios, where certain workloads make the ratio irrelevant.  Here were the hit ratios: 98.59% before and 98.82% after, which are basically the same.
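
For reference, a hit ratio like the ones above can also be computed outside of AWR from V$SYSSTAT; this is a minimal sketch using the standard statistic names:

-- rough buffer cache hit ratio from cumulative statistics since instance startup
select round(100 * (1 - phy.value / (db.value + cons.value)), 2) "Hit Ratio %"
from v$sysstat phy, v$sysstat db, v$sysstat cons
where phy.name = 'physical reads'
and db.name = 'db block gets'
and cons.name = 'consistent gets';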

I just thought I would share this to document a real case of using the buffer pool advisory and having it not produce the expected results.

Bobby

Categories: DBA Blogs

Script to get previous month’s AWR report

Tue, 2015-08-18 11:58

We keep 6 weeks of history in the AWR on our databases, but I want to capture some information for long-term trending. What I really want to do is capture some metrics and put them in database tables that I can use to generate reports, but I have not had time to build the scripts to do that.  So, instead I built a simple set of scripts to capture an AWR report for the previous month. Since we keep 6 weeks of history, if I run my report in the first week of a month all the days of the previous month should still be in the AWR. I have just finished building this script, so I cannot promise that there is value in keeping monthly AWR reports, but I thought it was worth sharing.  Maybe something in the code will be useful to someone.  Here is the script:

-- Has to be run in the first week of the month so the entire
-- previous month is available. We keep 6 weeks of awr history.

-- setup columns for snapshots

column bsnap1 new_value bsnap1s noprint;
column esnap1 new_value esnap1s noprint;
column filenm new_value filenms noprint;

-- get snap id for first day of previous month

select min(snap_id) bsnap1
from dba_hist_snapshot
where 
extract(month from END_INTERVAL_TIME)=
extract(month from (sysdate-to_number(to_char(sysdate,'DD'))))
and
STARTUP_TIME=
(select max(STARTUP_TIME)
from dba_hist_snapshot
where 
extract(month from END_INTERVAL_TIME)=
extract(month from (sysdate-to_number(to_char(sysdate,'DD')))));

-- get snap id for last day of previous month

select max(snap_id) esnap1
from dba_hist_snapshot
where 
extract(month from END_INTERVAL_TIME)=
extract(month from (sysdate-to_number(to_char(sysdate,'DD'))));

-- get html file name

select 
name||
'_'|| 
to_char(extract(month from 
(sysdate-to_number(to_char(sysdate,'DD')))))||
'_'|| 
to_char(extract(year from 
(sysdate-to_number(to_char(sysdate,'DD')))))||
'.html' filenm
from v$database;

-- get awr report

define report_type='html';
define begin_snap = &bsnap1s;
define end_snap = &esnap1s;
define report_name = '&filenms';

define num_days = 0;

@@$ORACLE_HOME/rdbms/admin/awrrpt.sql

undefine report_type
undefine report_name
undefine begin_snap
undefine end_snap
undefine num_days

If the database bounced during the previous month, the report only covers the snapshots taken after the last bounce.
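
To see ahead of time whether a bounce will shorten the report, the startup times recorded with the snapshots can be checked; here is a minimal sketch along the lines of the queries above:

-- distinct instance startup times present in the AWR history
select distinct startup_time
from dba_hist_snapshot
order by startup_time;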

I am not sure whether this approach will give us any benefits but I think it may help to show how to use queries to pick begin and end snapshots and then run an AWR report.

The tricky part of the code is this:

extract(month from (sysdate-to_number(to_char(sysdate,'DD'))))

It just returns the previous month as a number.  It is August now so here is what it returns today:

SQL> select
 2 extract(month from (sysdate-to_number(to_char(sysdate,'DD'))))
 3 prev_month
 4 from dual;

PREV_MONTH
----------
 7

sysdate-to_number(to_char(sysdate,'DD')) is the last day of the previous month:

SQL> select sysdate-to_number(to_char(sysdate,'DD')) last_day
 2 from dual;

LAST_DAY
---------
31-JUL-15

– Bobby

Categories: DBA Blogs

Applied July Patch Sets To Test Databases

Tue, 2015-08-11 16:09

I applied the current July patch sets to an 11.2 and a 12.1 test database.  Now I have an 11.2.0.4.7 and a 12.1.0.2.4 test database.  It is helpful to have test databases that are on the most current patch sets and releases.  If I see unexpected behavior on some other database, I can try the same thing on the patched test databases to see if some patch changed the behavior to what I expect.  Also, our production databases are all on 11.2.0.4 or earlier releases, so I can check whether the new fully patched 12.1 release behaves differently than our older systems.

Here are the patch numbers:

6880880 – current version of opatch

20760982 – 11.2.0.4.7

20831110 – 12.1.0.2.4

My test environments are on x86-64 Linux.
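
To confirm the patch level a database itself reports, I believe a query against DBA_REGISTRY_HISTORY along these lines works (a sketch; the exact rows you see depend on which catbundle steps were run):

-- history of bundle/PSU apply actions recorded in the database
select action_time, action, version, id, comments
from dba_registry_history
order by action_time;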

– Bobby

Categories: DBA Blogs

Registered for Oracle OpenWorld

Mon, 2015-08-10 13:41

I registered myself for Oracle OpenWorld and I have my hotel reserved and my flights ticketed.

I think it has been over 12 years – probably more like 15 years – since I went to OpenWorld. I went at least once between December 1994 and November 2003 when I still lived in Florida and was working on Oracle databases.  But since I moved from Florida I do not believe that I have been to the conference.  I have presented at Collaborate and ECOUG conferences since then.  I’m thinking that maybe next year I will try to present at the RMOUG conference.  I live in Arizona so RMOUG is close.  ECOUG was a nice distance when I still lived near the East Coast.  I like the smaller conferences and I have a better shot at getting a presentation accepted there.

But, this year it is OpenWorld and I am looking forward to it.  I may get a chance to interact with some Delphix employees and customers.  Also, I’m hoping to check out some technical presentations by the Oak Table members.  And it does not hurt to hear from Oracle itself on its technology.  No doubt there will be many of Oracle’s top technical leaders presenting.  And any interaction I get with fellow DBAs will be great.  It is always good to hear from people about their own experiences, which may differ from mine.

Anyway, I’m all booked for OpenWorld.  Hope to see you there.

– Bobby

Categories: DBA Blogs

Check out 6.00.1x computer science class on edX!

Sat, 2015-07-25 11:15

I just finished the last program for a computer science class on edX and I urge you to try it.

I took this class:

MITx: 6.00.1x Introduction to Computer Science and Programming Using Python

I was more interested in how MIT taught the class than in the material itself because I already know the subjects covered.

The class taught the basics of programming – expressions, variables, loops, if statements, and functions.

It also had a large focus on bisection or binary search and the performance benefits of this type of search over sequentially reading through a list.

It also covered lists, hash tables, trees, stacks, and queues.

It discussed object-oriented programming.

The class concluded with the professor stating that the programming and computer science skills taught in this class are key to advancing your career, even if you do not work in a computer related job.

I interacted with a number of students in the class and found some who were in other fields and were having success with the class.  Others were in business computing or IT but did not have a computer science background, so they were good programmers who were learning new concepts.  Many struggled with the class, but it is free and is given often.  The class starts up again August 26th.  Nothing stops you from taking it multiple times.

I tried to think about whether I should recommend this class to the people I work with as a way to help develop coworkers who do not have experience in these areas.  At first I thought that the subject was too academic and had no connection to their jobs.  But, after thinking about it for a while, I now believe that just the opposite is true.

Searching for practical applications of the class, I first remembered the programs that we wrote that compared searching sequentially through a list to using binary search.  In one test case the sequential method took 15 seconds but the binary search took less than one second.  This reminded me so much of tuning Oracle SQL queries.  The sequential scan of the list was like a full table scan in Oracle.  The binary search was like looking up a single row using an index scan.  As I tune Oracle queries my computer science knowledge of binary search and binary trees makes it easy to understand index and full table scans.
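
To make the analogy concrete, here is a hedged sketch of the two access paths I have in mind; the emp table, empno column, and emp_pk index are made-up names used only for illustration:

-- "sequential search": read every row in the table
select /*+ full(emp) */ * from emp where empno = 7839;

-- "binary search" equivalent: walk the index to the one matching row
select /*+ index(emp emp_pk) */ * from emp where empno = 7839;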

In another example, we recently had slowness on a WebLogic portal server.  CPU was getting maxed out, and the CPU spent most of its time in a Java ConcurrentHashMap object.  I don’t know the internals of WebLogic and I have never used a ConcurrentHashMap, but I know how hashing works.  I know that hashing is very fast until your hash table fills up or the hash function distributes the items unevenly.  My knowledge of hashing helped me grasp why our portal server was using a lot of CPU despite my lack of access to its internals.

So, contrary to my original fear that the edX class was too academic and not practical, I believe that the concepts covered are very practical.  If you do not know how binary search works or what a binary tree is, you will benefit from 6.00.1x on edX.  If you cannot explain how a hash table works and what causes hashing to slow down, you can learn from 6.00.1x.  And, if you have never written a computer program, although you may find the class difficult and have to take it more than once, you will benefit from 6.00.1x on edX.

– Bobby

Categories: DBA Blogs

Flushing Shared Pool Does Not Slow Its Growth

Thu, 2015-06-18 17:14

I’m still working on resolving the issues caused by bug 13914613.

Oracle support recommended a parameter change to resolve the issue, but that change requires us to bounce the database, and I was looking for a resolution that does not need a bounce.  The bug caused very bad shared pool latch waits when the automatic memory management feature of our 11.2.0.3 database expanded the shared pool.  Oracle support recommended setting _enable_shared_pool_durations=false, and I verified that changing this parameter requires a bounce.  It is a big hassle to bounce this database because of the application, so I thought I might try flushing the shared pool on a regular basis so that automatic memory management would not need to keep increasing the size of the shared pool.  The shared pool was growing in size because we have a lot of SQL statements without bind variables.  So, I ran a test, and in my test flushing the shared pool did not slow the growth of the shared pool.
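
For reference, the flush itself is a single command, and the current size of the shared pool component can be checked between runs; a minimal sketch:

-- flush the shared pool
alter system flush shared_pool;

-- current size of the shared pool component in the SGA
select component, current_size
from v$sga_dynamic_components
where component = 'shared pool';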

Here is a zip of the scripts I used for this test and their outputs: zip

I set the shared pool to a small value so it was more likely to grow, and I created a script to run many different SQL statements that do not use bind variables:

-- generate a script of unique literal-only queries, then run it
spool runselects.sql

select 'select * from dual where dummy=''s'
||to_char(sysdate,'HHMISS')||rownum||''';'
from dba_objects;

spool off

@runselects

So, the queries looked like this:

select * from dual where dummy='s0818111';
select * from dual where dummy='s0818112';
select * from dual where dummy='s0818113';
select * from dual where dummy='s0818114';
select * from dual where dummy='s0818115';
select * from dual where dummy='s0818116';
select * from dual where dummy='s0818117';

I ran these for an hour and tested three different configurations.  The first two did not use the _enable_shared_pool_durations=false setting and the last one did.  The first test was a baseline that showed the growth of the shared pool without flushing it.  The second test included a flush of the shared pool every minute.  The last run included the parameter change and no flush of the shared pool.  I queried V$SGA_RESIZE_OPS after each test to see how many times the shared pool grew.  Here is the query:

SELECT OPER_TYPE,FINAL_SIZE Final,
to_char(start_time,'dd-mon hh24:mi:ss') Started, 
to_char(end_time,'dd-mon hh24:mi:ss') Ended 
FROM V$SGA_RESIZE_OPS
where component='shared pool'
order by start_time,end_time;

Here are the results.

Baseline – no flush, no parameter change:

OPER_TYPE       FINAL STARTED         ENDED
--------- ----------- --------------- ---------------
GROW      150,994,944 18-jun 05:03:54 18-jun 05:03:54
GROW      134,217,728 18-jun 05:03:54 18-jun 05:03:54
STATIC    117,440,512 18-jun 05:03:54 18-jun 05:03:54
GROW      167,772,160 18-jun 05:04:36 18-jun 05:04:36
GROW      184,549,376 18-jun 05:47:38 18-jun 05:47:38

Flush every minute, no parameter change:

OPER_TYPE       FINAL STARTED         ENDED
--------- ----------- --------------- ---------------
GROW      134,217,728 18-jun 06:09:15 18-jun 06:09:15
GROW      150,994,944 18-jun 06:09:15 18-jun 06:09:15
STATIC    117,440,512 18-jun 06:09:15 18-jun 06:09:15
GROW      167,772,160 18-jun 06:09:59 18-jun 06:09:59
GROW      184,549,376 18-jun 06:22:26 18-jun 06:22:26
GROW      201,326,592 18-jun 06:42:29 18-jun 06:42:29
GROW      218,103,808 18-jun 06:47:29 18-jun 06:47:29

Parameter change, no flush:

OPER_TYPE        FINAL STARTED         ENDED
--------- ------------ --------------- ---------------
STATIC     117,440,512 18-jun 07:16:09 18-jun 07:16:09
GROW       134,217,728 18-jun 07:16:18 18-jun 07:16:18

So, at least in this test – which I have run only twice – flushing the shared pool, if anything, made the growth of the shared pool worse.  But changing the parameter seems to lock the size in.

– Bobby

Categories: DBA Blogs

Overall I/O Query

Tue, 2015-06-16 14:57

I hacked together a query today that shows the overall I/O performance that a database is experiencing.

The output looks like this:

End snapshot time   number of IOs ave IO time (ms) ave IO size (bytes)
------------------- ------------- ---------------- -------------------
2015-06-15 15:00:59        359254               20              711636
2015-06-15 16:00:59        805884               16              793033
2015-06-15 17:00:13        516576               13              472478
2015-06-15 18:00:27        471098                6              123565
2015-06-15 19:00:41        201820                9              294858
2015-06-15 20:00:55        117887                5              158778
2015-06-15 21:00:09         85629                1               79129
2015-06-15 22:00:23        226617                2               10744
2015-06-15 23:00:40        399745               10              185236
2015-06-16 00:00:54       1522650                0               43099
2015-06-16 01:00:08       2142484                0               19729
2015-06-16 02:00:21        931349                0                9270

I’ve combined reads and writes and focused on three metrics – number of IOs, average IO time in milliseconds, and average IO size in bytes.  I think it is a helpful way to compare how two systems perform.  Here is the output from another, better-performing system:

End snapshot time   number of IOs ave IO time (ms) ave IO size (bytes)
------------------- ------------- ---------------- -------------------
2015-06-15 15:00:25        331931                1              223025
2015-06-15 16:00:40        657571                2               36152
2015-06-15 17:00:56       1066818                1               24599
2015-06-15 18:00:11        107364                1              125390
2015-06-15 19:00:26         38565                1               11023
2015-06-15 20:00:41         42204                2              100026
2015-06-15 21:00:56         42084                1               64439
2015-06-15 22:00:15       3247633                3              334956
2015-06-15 23:00:32       3267219                0               49896
2015-06-16 00:00:50       4723396                0               32004
2015-06-16 01:00:06       2367526                1               18472
2015-06-16 02:00:21       1988211                0                8818

Here is the query:

-- compares each pair of consecutive AWR snapshots of the file I/O statistics;
-- READTIM/WRITETIM are in hundredths of a second, so the factor of 10 converts to milliseconds,
-- and the 1+ in the denominators avoids division by zero on idle intervals
select 
to_char(sn.END_INTERVAL_TIME,'YYYY-MM-DD HH24:MI:SS') "End snapshot time",
sum(after.PHYRDS+after.PHYWRTS-before.PHYWRTS-before.PHYRDS) "number of IOs",
trunc(10*sum(after.READTIM+after.WRITETIM-before.WRITETIM-before.READTIM)/
sum(1+after.PHYRDS+after.PHYWRTS-before.PHYWRTS-before.PHYRDS)) "ave IO time (ms)",
trunc((select value from v$parameter where name='db_block_size')*
sum(after.PHYBLKRD+after.PHYBLKWRT-before.PHYBLKRD-before.PHYBLKWRT)/
sum(1+after.PHYRDS+after.PHYWRTS-before.PHYWRTS-before.PHYRDS)) "ave IO size (bytes)"
from DBA_HIST_FILESTATXS before, DBA_HIST_FILESTATXS after, DBA_HIST_SNAPSHOT sn
where 
after.file#=before.file# and
after.snap_id=before.snap_id+1 and
before.instance_number=after.instance_number and
after.snap_id=sn.snap_id and
after.instance_number=sn.instance_number
group by to_char(sn.END_INTERVAL_TIME,'YYYY-MM-DD HH24:MI:SS')
order by to_char(sn.END_INTERVAL_TIME,'YYYY-MM-DD HH24:MI:SS');

I hope this is helpful.

– Bobby

Categories: DBA Blogs

Bug 13914613 Example Shared Pool Latch Waits

Tue, 2015-06-09 17:18

Oracle support says we have hit bug 13914613.  Here is what our wait events looked like in an AWR report:

Top 5 Timed Foreground Events

Event                      Waits      Time(s)  Avg wait (ms)  % DB time  Wait Class
latch: shared pool          3,497      17,482           4999      38.83  Concurrency
latch: row cache objects      885      12,834          14502      28.51  Concurrency
db file sequential read 1,517,968       8,206              5      18.23  User I/O
DB CPU                                  4,443                      9.87
library cache: mutex X      7,124       2,639            370       5.86  Concurrency

What really struck me about these latch waits were that the average wait time was several thousand milliseconds which means several seconds.  That’s a long time to wait for a latch.

Oracle pointed to the Latch Miss Sources section of the AWR report.  This is all gibberish to me.  I guess these are the names of internal kernel locations where the latches are requested.

Latch Miss Sources

Latch Name   Where                         NoWait Misses  Sleeps  Waiter Sleeps
shared pool  kghfrunp: clatch: wait                    0   1,987          1,956
shared pool  kghfrunp: alloc: session dur              0   1,704          1,364

The bug description says “Excessive time holding shared pool latch in kghfrunp with auto memory management” so I guess the “kghfrunp” latch miss sources told Oracle support that this was my issue.
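
Outside of AWR, the cumulative statistics for these latches can be watched with a simple query against V$LATCH; here is a minimal sketch for the two latches at the top of the report:

-- cumulative gets, misses and sleeps for the suspect latches
select name, gets, misses, sleeps, wait_time
from v$latch
where name in ('shared pool', 'row cache objects')
order by sleeps desc;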

I did this query to look for resize operations:

SELECT COMPONENT ,OPER_TYPE,FINAL_SIZE Final,
  2  to_char(start_time,'dd-mon hh24:mi:ss') Started,
  3  to_char(end_time,'dd-mon hh24:mi:ss') Ended
  4  FROM V$SGA_RESIZE_OPS;

COMPONENT                 OPER_TYPE               FINAL STARTED                   ENDED
------------------------- ------------- --------------- ------------------------- -------------------------
DEFAULT 2K buffer cache   STATIC                      0 12-may 04:33:01           12-may 04:33:01
streams pool              STATIC            134,217,728 12-may 04:33:01           12-may 04:33:01
ASM Buffer Cache          STATIC                      0 12-may 04:33:01           12-may 04:33:01
DEFAULT buffer cache      INITIALIZING   10,401,873,920 12-may 04:33:01           12-may 04:33:08
DEFAULT 32K buffer cache  STATIC                      0 12-may 04:33:01           12-may 04:33:01
KEEP buffer cache         STATIC          2,147,483,648 12-may 04:33:01           12-may 04:33:01
shared pool               STATIC         13,958,643,712 12-may 04:33:01           12-may 04:33:01
large pool                STATIC          2,147,483,648 12-may 04:33:01           12-may 04:33:01
java pool                 STATIC          1,073,741,824 12-may 04:33:01           12-may 04:33:01
DEFAULT buffer cache      STATIC         10,401,873,920 12-may 04:33:01           12-may 04:33:01
DEFAULT 16K buffer cache  STATIC                      0 12-may 04:33:01           12-may 04:33:01
DEFAULT 8K buffer cache   STATIC                      0 12-may 04:33:01           12-may 04:33:01
DEFAULT 4K buffer cache   STATIC                      0 12-may 04:33:01           12-may 04:33:01
RECYCLE buffer cache      STATIC                      0 12-may 04:33:01           12-may 04:33:01
KEEP buffer cache         INITIALIZING    2,147,483,648 12-may 04:33:02           12-may 04:33:04
DEFAULT buffer cache      SHRINK         10,334,765,056 20-may 21:00:12           20-may 21:00:12
shared pool               GROW           14,025,752,576 20-may 21:00:12           20-may 21:00:12
shared pool               GROW           14,092,861,440 27-may 18:06:12           27-may 18:06:12
DEFAULT buffer cache      SHRINK         10,267,656,192 27-may 18:06:12           27-may 18:06:12
shared pool               GROW           14,159,970,304 01-jun 09:07:35           01-jun 09:07:36
DEFAULT buffer cache      SHRINK         10,200,547,328 01-jun 09:07:35           01-jun 09:07:36
DEFAULT buffer cache      SHRINK         10,133,438,464 05-jun 03:00:33           05-jun 03:00:33
shared pool               GROW           14,227,079,168 05-jun 03:00:33           05-jun 03:00:33
DEFAULT buffer cache      SHRINK         10,066,329,600 08-jun 11:06:06           08-jun 11:06:07
shared pool               GROW           14,294,188,032 08-jun 11:06:06           08-jun 11:06:07

The interesting thing is that our problem ended right about the time the last shared pool expansion supposedly started.  The latch waits hosed up our database for several minutes, and they ended right about 11:06.  I suspect that the system was hung up with the bug and then, once the bug finished, the normal expansion work started.  Or, at least, the time didn’t get recorded until after the bug finished slowing us down.

So, I guess it’s just a bug.  This is on 11.2.0.3 on HP-UX Itanium.  I believe there is a patch set with the fix for this bug.

Maybe it will be helpful for someone to see an example.

– Bobby

Categories: DBA Blogs

Surprising calibrate_io result

Fri, 2015-05-29 17:26

Recently I used DBMS_RESOURCE_MANAGER.calibrate_io to measure disk I/O performance using a call like this:

DECLARE
 l_latency PLS_INTEGER;
 l_iops PLS_INTEGER;
 l_mbps PLS_INTEGER;
BEGIN

 DBMS_RESOURCE_MANAGER.calibrate_io (num_physical_disks => 10,
 max_latency => 20,
 max_iops => l_iops,
 max_mbps => l_mbps,
 actual_latency => l_latency);

 DBMS_OUTPUT.put_line('Max IOPS = ' || l_iops);
 DBMS_OUTPUT.put_line('Max MBPS = ' || l_mbps);
 DBMS_OUTPUT.put_line('Latency = ' || l_latency);

END;
/

Using this call I got a surprising result.  The test returns very different results when run against two different databases on the same Linux virtual machine and the same filesystem.

database 1:

Max IOPS = 7459
Max MBPS = 863
Latency = 18

database 2:

Max IOPS = 39921
Max MBPS = 1105
Latency = 0

Both databases use direct and asynchronous I/O.  The databases differ in size and configuration.  It seems that something about the databases themselves affects the results since they share the same filesystem on the same machine.
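
Past calibration runs are recorded in DBA_RSRC_IO_CALIBRATE, which gives another way to compare the two databases after the fact; this is a sketch assuming the standard columns of that view:

-- results of previous DBMS_RESOURCE_MANAGER.calibrate_io runs
select start_time, max_iops, max_mbps, max_pmbps, latency
from dba_rsrc_io_calibrate
order by start_time;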

I did not get useful numbers from calibrate_io and have wasted a lot of time trying to interpret its results.  You may want to focus on other tools for measuring disk I/O performance.

– Bobby

Categories: DBA Blogs

Free MIT Computer Science Classes Online

Wed, 2015-05-20 14:30

This is very cool:

There are a bunch of nice computer science classes online from MIT: Free online MIT computer science classes

Here is an introductory computer science class: Intro to computer science

Here is a graded version of the same class on edX: Graded version of MIT intro CS class starting June 10th.

edX does not have as many computer science classes but edX may motivate students because edX classes include grades and certificates for those who pass.

I use computer science every day in my database work but I have not taken a formal class since 1989.

I have been on a computer science kick ever since watching The Imitation Game.  I downloaded Turing’s 1936 paper after watching the movie.  I got about halfway through it before giving up.  It was dense!  Maybe I will take another stab at it some day.  But the MIT classes are neat because they are the way computer science is now taught, and hopefully they are easier to understand than Turing’s paper.

– Bobby

Categories: DBA Blogs

Simple C program for testing disk performance

Wed, 2015-05-13 13:48

I dug up a simple C program that I wrote years ago to test disk performance.  I hesitated to publish it because it is rough and limited in scope, and other, more capable tools exist.  But I have made good use of it, so why not share it with others?  It takes a file name and the size of the file in megabytes.  It sequentially writes the file in 64 kilobyte chunks.  It opens the file in synchronous mode, so it must write the data to disk before returning to the program.  It outputs the rate in bytes per second that the program wrote to disk.

Here is a zip of the code: zip

There is no error checking so if you put in an invalid file name you get no message.

Here is how I ran it in my HP-UX and Linux performance comparison tests:

HP-UX:

$ time ./createfile /var/opt/oracle/db01/bobby/test 1024
Bytes per second written = 107374182

real 0m10.36s
user 0m0.01s
sys 0m1.79s

Linux:

$ time ./createfile /oracle/db01/bobby/test 1024
Bytes per second written = 23860929

real 0m45.166s
user 0m0.011s
sys 0m2.472s

It makes me think that my Linux system’s write I/O is slower.  I found a set of arguments to the utility dd that seems to do the same thing on Linux:

$ dd if=/dev/zero bs=65536 count=16384 of=test oflag=dsync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 38.423 s, 27.9 MB/s

But I couldn’t find an option like dsync on the HP-UX version of dd.  In any case, it was nice to have the C code so I could experiment with various options to open().  I used tusc on HP-UX and strace on Linux to trace some activity against the system tablespace.  By grepping the traces for open I found the options Oracle uses:

HP-UX trace:

open("/var/opt/oracle/db01/HPDB/dbf/system01.dbf", O_RDWR|0x800|O_DSYNC, 030) = 8

Linux trace:

open("/oracle/db01/LINUXDB/dbf/system01.dbf", O_RDWR|O_DSYNC) = 8

So, I modified my program to use the O_DSYNC flag, and the results were the same as with O_SYNC.  But the point is that having a simple C program lets you change these options to open() directly.

I hope this program will be useful to others as it has been to me.

– Bobby

P.S. Here is a similar program that sequentially reads through a file, but with 256K buffers: zip

Categories: DBA Blogs

db file parallel read on Linux and HP-UX

Tue, 2015-05-05 13:39

In my previous post I described how I could not explain why I got better db file parallel read wait times in a test on Linux than I got running the same test on HP-UX.  I have discovered that the Linux wait times were better because Linux cached the data in the filesystem cache and HP-UX did not.

Neither system used direct I/O for the tests, so both could cache data in the filesystem cache.  Evidently Linux does this faster than HP-UX.  I figured this out by repeatedly running the query, flushing the buffer cache before each run.  Flushing the buffer cache prevented the table and index from being cached within the database.  On Linux the query ran for the same amount of time for all five executions.  On HP-UX it ran much faster after the first run.  Apparently Linux cached the table and index before the first run and HP-UX cached them after the first run.

Here is how I ran the query:

alter system flush buffer_cache;

select /*+ index(test testi) */ sum(blocks) from test;

alter system flush buffer_cache;

select /*+ index(test testi) */ sum(blocks) from test;

alter system flush buffer_cache;

select /*+ index(test testi) */ sum(blocks) from test;

alter system flush buffer_cache;

select /*+ index(test testi) */ sum(blocks) from test;

alter system flush buffer_cache;

select /*+ index(test testi) */ sum(blocks) from test;

Here are the elapsed times for the query on Linux:

Elapsed: 00:00:09.16
Elapsed: 00:00:09.17
Elapsed: 00:00:09.28
Elapsed: 00:00:09.18
Elapsed: 00:00:09.20

Here is the same thing on HP-UX:

Elapsed: 00:01:03.27
Elapsed: 00:00:19.23
Elapsed: 00:00:19.28
Elapsed: 00:00:19.35
Elapsed: 00:00:19.43

It’s not surprising that the HP-UX times with the data cached are twice the Linux times.  An earlier post found that the processor I am evaluating on Linux was about twice as fast as the one I’m using on HP-UX.

Just to double-check that the caching was really at the filesystem level I turned direct I/O on for the Linux system using this parameter:

alter system set filesystemio_options=DIRECTIO scope=spfile;
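
A quick way to confirm the setting actually took effect after the bounce is to check the parameter; a minimal sketch:

-- verify the current value of filesystemio_options
select name, value from v$parameter where name = 'filesystemio_options';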

I ran the test again after bouncing the database to make the parameter take effect and the run times were comparable to the slow first run on HP-UX:

Elapsed: 00:01:12.03
Elapsed: 00:01:06.69
Elapsed: 00:01:12.98
Elapsed: 00:01:10.14
Elapsed: 00:01:07.21

So, it seems that without the filesystem cache this query takes about 1 minute to run on either system.  With caching the query runs under 20 seconds on both systems.

In some ways I think that these results are not important.  Who cares if Linux caches things on the first attempt and HP-UX on the second?

The lesson I get from this test is that HP-UX and Linux are different in subtle ways and that when we migrate a database from HP-UX to Linux we may see performance differences that we do not expect.

Here is a zip of my script and its logs: zip

– Bobby

Categories: DBA Blogs