
DBA Blogs

A (BIG) Trick Listing Windows Updates Using PowerShell

Pythian Group - Mon, 2014-12-08 10:29

If you (like me) are using Microsoft.Update.Session to list the installed Windows updates, you might be surprised by something I spent a lot of time on today.

Our dream team here is delivering a very cool project for a client in which we automated the installation and control of Windows updates in an environment. After some weeks, another script runs in a second environment, gets the differences between the two environments, and installs in that environment only the updates that differ. This means only updates that were already tested and approved get installed. We will blog about it.

Well, one of our functions gets the updates installed on the computer passed as a parameter, but on one specific server a few KBs were being listed twice. That would not be weird if the UpdateID and the RevisionNumber were not the same. But how can the same update, with the same KB, UpdateID, and RevisionNumber, be installed twice? Ha! It cannot.

First, let's take a look at part of my code:

$session = [activator]::CreateInstance([type]::GetTypeFromProgID("Microsoft.Update.Session", $ComputerName))
$us = $session.CreateUpdateSearcher()
$qtd = $us.GetTotalHistoryCount()
$hot = $us.QueryHistory(0, $qtd)
foreach ($Upd in $hot) {
    $OutPut = New-Object -Type PSObject -Prop @{
        'ComputerName'      = $ComputerName
        'UpdateDate'        = $Upd.Date
        'KB'                = [regex]::Match($Upd.Title, 'KB(\d+)')
        'UpdateTitle'       = $Upd.Title
        'UpdateDescription' = $Upd.Description
        'SupportUrl'        = $Upd.SupportUrl
        'UpdateId'          = $Upd.UpdateIdentity.UpdateId
        'RevisionNumber'    = $Upd.UpdateIdentity.RevisionNumber
    }
    Write-Output $OutPut
}

and the output was:

ComputerName : DEATHSTAR
UpdateDate : 8/6/2014 2:15:36 PM
RevisionNumber : 200
SupportUrl : http://support.microsoft.com
UpdateTitle : Security Update for Windows Server 2012 R2 (KB2961072)
KB : KB2961072
UpdateDescription : A security issue has been identified in a Microsoft software product that could affect your system. You can help
protect your system by installing this update from Microsoft. For a complete listing of the issues that are included
in this update, see the associated Microsoft Knowledge Base article. After you install this update, you may have to
restart your system.
UpdateId : f9180040-e423-4fab-9a5b-78c46e9db72c

ComputerName : DEATHSTAR
UpdateDate : 8/6/2014 2:47:19 PM
RevisionNumber : 200
SupportUrl : http://support.microsoft.com
UpdateTitle : Security Update for Windows Server 2012 R2 (KB2961072)
KB : KB2961072
UpdateDescription : A security issue has been identified in a Microsoft software product that could affect your system. You can help
protect your system by installing this update from Microsoft. For a complete listing of the issues that are included
in this update, see the associated Microsoft Knowledge Base article. After you install this update, you may have to
restart your system.
UpdateId : f9180040-e423-4fab-9a5b-78c46e9db72c

As you can see, the only difference between them is the time: a few minutes apart.

Well, since I am outputting a PSObject in the function, I decided to use the live COM object instead and changed the output to have more data to analyze:

$session = [activator]::CreateInstance([type]::GetTypeFromProgID("Microsoft.Update.Session", $ComputerName))

$us = $session.CreateUpdateSearcher()
$qtd = $us.GetTotalHistoryCount()
$hot = $us.QueryHistory(0, $qtd)
$hot

And then the force awakens. Take a look at the ResultCode and HResult values in the output:

Operation : 1
ResultCode : 2
HResult : 0
Date : 8/6/2014 2:15:36 PM
UpdateIdentity : System.__ComObject
Title : Security Update for Windows Server 2012 R2 (KB2961072)
Description : A security issue has been identified in a Microsoft software product that could affect your system. You can help
protect your system by installing this update from Microsoft. For a complete listing of the issues that are included
in this update, see the associated Microsoft Knowledge Base article. After you install this update, you may have to
restart your system.
UnmappedResultCode : 0
ClientApplicationID : AutomaticUpdatesWuApp
ServerSelection : 1
ServiceID :
UninstallationSteps : System.__ComObject
UninstallationNotes : This software update can be removed by selecting View installed updates in the Programs and Features Control Panel.
SupportUrl : http://support.microsoft.com

Operation : 1
ResultCode : 4
HResult : -2145099757
Date : 8/6/2014 2:47:19 PM
UpdateIdentity : System.__ComObject
Title : Security Update for Windows Server 2012 R2 (KB2961072)
Description : A security issue has been identified in a Microsoft software product that could affect your system. You can help
protect your system by installing this update from Microsoft. For a complete listing of the issues that are included
in this update, see the associated Microsoft Knowledge Base article. After you install this update, you may have to
restart your system.
UnmappedResultCode : -2145099757
ClientApplicationID : AutomaticUpdatesWuApp
ServerSelection : 1
ServiceID :
UninstallationSteps : System.__ComObject
UninstallationNotes : This software update can be removed by selecting View installed updates in the Programs and Features Control Panel.
SupportUrl : http://support.microsoft.com
Categories : System.__ComObject

Can you see the difference? Can you feel the force?

The second one was attempted and failed to install (ResultCode 4). The codes are:

0 = Not Started
1 = In Progress
2 = Succeeded
3 = Succeeded With Errors
4 = Failed
5 = Aborted

This means the COM object also lists updates whose installation was attempted and failed, or that ended in any of the other states described above. If the same update failed to install six times and then installed successfully once, it will show up seven times in the list.
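
If you want those numeric codes translated into something readable in your own output, a small lookup table does the job. This is only a minimal sketch of mine, not part of the original script:

# Hypothetical helper: map WUA ResultCode values to readable names (codes from the list above)
$ResultCodeNames = @{
    0 = 'Not Started'
    1 = 'In Progress'
    2 = 'Succeeded'
    3 = 'Succeeded With Errors'
    4 = 'Failed'
    5 = 'Aborted'
}
$ResultCodeNames[4]   # returns 'Failed'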

You need to filter on ResultCode to get only the successfully installed updates, ResultCode = 2 in my case:

So I changed my code to:

$session = [activator]::CreateInstance([type]::GetTypeFromProgID("Microsoft.Update.Session", $ComputerName))
$us = $session.CreateUpdateSearcher()
$qtd = $us.GetTotalHistoryCount()
$hot = $us.QueryHistory(0, $qtd)
foreach ($Upd in $hot) {
    if ($Upd.Operation -eq 1 -and $Upd.ResultCode -eq 2) {
        $OutPut = New-Object -Type PSObject -Prop @{ ... }
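
To make the end of the loop explicit, here is a minimal sketch of how the complete filtered version could look once the property hash table from the first snippet is put back in place of the ellipsis (the properties simply mirror the earlier example):

# Sketch: same output object as before, emitted only for install operations
# (Operation = 1) that completed successfully (ResultCode = 2)
$session = [activator]::CreateInstance([type]::GetTypeFromProgID("Microsoft.Update.Session", $ComputerName))
$us  = $session.CreateUpdateSearcher()
$qtd = $us.GetTotalHistoryCount()
$hot = $us.QueryHistory(0, $qtd)
foreach ($Upd in $hot) {
    if ($Upd.Operation -eq 1 -and $Upd.ResultCode -eq 2) {
        New-Object -Type PSObject -Property @{
            'ComputerName'      = $ComputerName
            'UpdateDate'        = $Upd.Date
            'KB'                = [regex]::Match($Upd.Title, 'KB(\d+)').Value
            'UpdateTitle'       = $Upd.Title
            'UpdateDescription' = $Upd.Description
            'SupportUrl'        = $Upd.SupportUrl
            'UpdateId'          = $Upd.UpdateIdentity.UpdateId
            'RevisionNumber'    = $Upd.UpdateIdentity.RevisionNumber
        }
    }
}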

Remember… If it is PowerCool, it is PowerShell!

 

Categories: DBA Blogs

Watch: Hadoop vs. HBase

Pythian Group - Mon, 2014-12-08 09:58

Every data platform has its value, and deciding which one will work best for your big data objectives can be tricky—Alex Gorbachev, Oracle ACE Director, Cloudera Champion of Big Data, and Chief Technology Officer at Pythian, has recorded a series of videos comparing the various big data platforms and presents use cases to help you identify which ones will best suit your needs.

“…It’s actually not quite fair comparing them,” Alex says. “HBase is part of the Hadoop ecosystem… You could see them living with each other in the same cluster.” Learn how HBase and Hadoop can work together by watching Alex’s video Hadoop vs. HBase.

Note: You may recognize this series, which was originally filmed back in 2013. After receiving feedback from our viewers that the content was great, but the video and sound quality were poor, we listened and re-shot the series.

Find the rest of the series here

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Big Data expertise.

Categories: DBA Blogs

Statistics on this blog

Hemant K Chitale - Sun, 2014-12-07 08:40
I began this blog on 28-Dec-2006.  For the 8 years 2007 to 2014, I have averaged 56 posts per year.  Unfortunately, this year, 2014, has produced the fewest posts -- 40 including this one.  This includes the "series" on Grid / ASM / RAC and the series on StatsPack / AWR.

2011 was my most prodigious year -- 99 posts.

There were 8,176 page views in July 2010.  To date, there have been more than 930 thousand (946 thousand at the end of 2014) page views on this blog.  By month, the peak count has been for March 2013 -- 24,346 page views.

My largest viewer counts are from USA, India, UK, Germany and France.  www.google.com has been the largest source of traffic to this blog.

.
.
.



Categories: DBA Blogs

Log Buffer #400, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-12-05 09:40

Another century mark achieved by the Log Buffer as it reaches edition 400. The freshness and uniqueness of the Log Buffer are still as youthful as they were with edition 1. Enjoy these gems from Oracle, SQL Server, and MySQL.

Oracle:

What Cloud Infrastructure Will Best Deliver?

Adaptive Case Management 12c and ADF Human Tasks.

What Does “Backup Restore Throttle Speed” Wait Mean?

All You Need, and Ever Wanted to Know About the Dynamic Rolling Year.

Using grant connect through to manage database links.

The Future of Oracle Forms Straight From the Source’s Mouth.

SQL Server:

Create a repository of all your database devices and stay informed about changes in their size and usage.

When a hospital’s mission-critical database fails at Christmas, disaster for the hospital – and its hapless DBA – seems certain. With less than an hour to spare before catastrophe, can the DBA Team save the day?

How do you use SQL Server, and how do you expect this to change next year?

How can you get a list of columns that have changed within a trigger in T-SQL? How can you see what bits are set within a varbinary or integer? How would you pass a bitmap parameter to a system stored procedure?

Have you ever wanted to run a query across every database on a server with the convenience of a stored procedure? If so, Microsoft provided a stored procedure to do so. It’s unreliable, outdated, and somewhat obfuscated, though. Let’s improve on it!

MySQL:

Thanks, Oracle, for fixing the stupid and dangerous SET GLOBAL sql_log_bin!

Auto-bootstrapping an all-down cluster with Percona XtraDB Cluster.

Proposal to deprecate collation_database and character_set_database settings.

Puppet is a powerful automation tool that helps administrators manage complex server setups centrally. You can use Puppet to manage MariaDB.

Tips from the trenches for over-extended MySQL DBAs.

Categories: DBA Blogs

Join Us For a Networking Event at UKOUG

Pythian Group - Fri, 2014-12-05 09:25
UKOUG event photo

Ask not what you can do for your data. Ask what your data can do for you!

Join us for an informal networking event alongside Rittman Mead on Monday December 8th during UKOUG. We will be discussing how to leverage data to drive your organization’s success. Come meet with peers and industry experts, Mark Rittman and Jon Mead of Rittman Mead, and Marc Fielding and Christo Kutrovsky of Pythian. The networking event will take place at PanAm Bar and Restaurant in Liverpool from 6-8 PM, and will include drinks and light refreshments.

Please be sure to RSVP to the event here—we hope to see you there! Find more information about Pythian’s speaking sessions here.

Questions? Please contact Elliot Zissman, Director of Sales at zissman@pythian.com.

Categories: DBA Blogs

What Does “Backup Restore Throttle Speed” Wait Mean?

Pythian Group - Thu, 2014-12-04 11:29

After migrating a 10g database to 11g, I asked the Application team to start their tests in order to validate that everything was working as expected. I decided to keep an eye on OEM’s Top Activity page while they were running the most important job. I already knew what kind of “colors” I would find because I had checked its behavior in the former version. Suddenly, a strange kind of wait appeared on my screen: it was my first encounter with Backup Restore Throttle Speed.

 

OEM graph 2

I had never seen this wait before. It was listed in a user’s session, so its name really confused me. No RMAN operations were running at that time, and the FRA was almost empty. I checked Oracle’s documentation and My Oracle Support. I found nothing but one Community post from 24-SEP-2013 with no conclusions. In the meantime, the job ended and I got confirmation that everything went well, even faster than in the old version. Weird, very weird. It was time to review the PL/SQL code.

After reading lots of lines, a function inside the package caught my attention:

Sleep (l_master_rec.QY_FIRST_WAIT_MIN * 60);

Since the job was using a log table to keep track of its execution, I was able to match the wait time with this function pretty quickly. This code was inside the function’s DDL:

for i in 1 .. trunc( seconds_to_sleep/600 )
loop
   sys.DBMS_BACKUP_RESTORE.SLEEP( 600 );
end loop;
sys.DBMS_BACKUP_RESTORE.SLEEP( seconds_to_sleep - trunc(seconds_to_sleep/600)*600 );

Finally I found the reason for this wait (and the explanation for its backup/restore related name): DBMS_BACKUP_RESTORE.SLEEP. As described in MOS note “How to Suspend Code Execution In a PL/SQL Application (Doc ID 1296382.1)”, the package was being used to pause the job’s execution while waiting for another task to finish.

Lastly, it’s worth noting that OEM did not graph this wait on the 10g database but it was always there.

Categories: DBA Blogs

Influence execution plan without adding hints

Oracle in Action - Thu, 2014-12-04 04:54


We often encounter situations where a SQL statement runs optimally when it is hinted but sub-optimally otherwise. We can use hints to get the desired plan, but it is not desirable to use hints in production code: hints are extra code that must be managed, checked, and controlled with every Oracle patch or upgrade. Moreover, hints freeze the execution plan, so you will not be able to benefit from a possibly better plan in the future.

So how can we make such queries use the optimal plan, without adding hints, until a provably better plan comes along?

Well, the answer is to use SQL Plan Management, which ensures that you get the desired plan and that the plan can evolve over time as the optimizer discovers better ones.

To demonstrate the procedure, I have created two tables, CUSTOMER and PRODUCT, having CUST_ID and PROD_ID respectively as their primary keys. The PROD_ID column in the CUSTOMER table is a foreign key referencing PRODUCT and is indexed.

SQL>conn hr/hr

drop table customer purge;
drop table product purge;

create table product(prod_id number primary key, prod_name char(100));
create table customer(cust_id number primary key, cust_name char(100), prod_id number references product(prod_id));
create index cust_idx on customer(prod_id);

insert into product select rownum, 'prod'||rownum from all_objects;
insert into customer select rownum, 'cust'||rownum, prod_id from product;
update customer set prod_id = 1000 where prod_id > 1000;

exec dbms_stats.gather_table_stats (USER, 'customer', cascade=> true);
exec dbms_stats.gather_table_stats (USER, 'product', cascade=> true);

– First, let’s have a look at the undesirable plan, which does not use the index on the PROD_ID column of the CUSTOMER table.

SQL>conn / as sysdba
    alter system flush shared_pool;

    conn hr/hr

    variable prod_id number
    exec :prod_id := 1000

    select cust_name, prod_name
    from customer c, product p
    where c.prod_id = p.prod_id
    and c.prod_id = :prod_id;

    select * from table (dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------
SQL_ID  b257apghf1a8h, child number 0
-------------------------------------
select cust_name, prod_name from customer c, product p where c.prod_id
= p.prod_id and c.prod_id = :prod_id

Plan hash value: 3134146364

----------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |       |       |   412 (100)|          |
|   1 |  NESTED LOOPS                |              | 88734 |    17M|   412   (1)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID| PRODUCT      |     1 |   106 |     2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | SYS_C0010600 |     1 |       |     1   (0)| 00:00:01 |
|*  4 |   TABLE ACCESS FULL          | CUSTOMER     | 88734 |  9098K|   410   (1)| 00:00:01 |
----------------------------------------------------------------------

– Load the undesirable plan into a baseline, to establish a SQL plan baseline for this query into which the desired plan will be loaded later

SQL>variable cnt number
    exec :cnt := dbms_spm.load_plans_from_cursor_cache(sql_id => 'b257apghf1a8h');

    col sql_text for a35 word_wrapped
    col enabled for a15

    select  sql_text, sql_handle, plan_name, enabled 
    from     dba_sql_plan_baselines
    where sql_text like   'select cust_name, prod_name%';

SQL_TEXT                            SQL_HANDLE                PLAN_NAME                       ENABLED
----------------------------------- ------------------------- ------------------------------- -------
select cust_name, prod_name         SQL_7d3369334b24a117      SQL_PLAN_7ucv96d5k988rfe19664b  YES

– Disable the undesirable plan so that it will not be used

SQL>variable cnt number
    exec :cnt := dbms_spm.alter_sql_plan_baseline (-
    SQL_HANDLE => 'SQL_7d3369334b24a117',-
    PLAN_NAME => 'SQL_PLAN_7ucv96d5k988rfe19664b',-
    ATTRIBUTE_NAME => 'enabled',-
    ATTRIBUTE_VALUE => 'NO');

    col sql_text for a35 word_wrapped
    col enabled for a15

    select  sql_text, sql_handle, plan_name, enabled 
    from   dba_sql_plan_baselines
     where sql_text like   'select cust_name, prod_name%';

SQL_TEXT                            SQL_HANDLE                PLAN_NAME                       ENABLED
----------------------------------- ------------------------- ------------------------------- -------
select cust_name, prod_name         SQL_7d3369334b24a117      SQL_PLAN_7ucv96d5k988rfe19664b  NO

– Now we use a hint in the above SQL to generate the optimal plan, which uses the index on the PROD_ID column of the CUSTOMER table

SQL>conn hr/hr

variable prod_id number
exec :prod_id := 1000

select /*+ index(c)*/ cust_name, prod_name
from customer c, product p
where c.prod_id = p.prod_id
and c.prod_id = :prod_id;

select * from table (dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  5x2y12dzacv7w, child number 0
-------------------------------------
select /*+ index(c)*/ cust_name, prod_name from customer c, product p
where c.prod_id = p.prod_id and c.prod_id = :prod_id

Plan hash value: 4263155932

-----------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |              |       |       |  1618 (100)|          |
|   1 |  NESTED LOOPS                        |              | 88734 |    17M|  1618   (1)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID        | PRODUCT      |     1 |   106 |    2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN                 | SYS_C0010600 |     1 |       |    1   (0)| 00:00:01 |
|*  4 |   TABLE ACCESS BY INDEX ROWID BATCHED| CUSTOMER     | 88734 |  9098K|  1616   (1)| 00:00:01 |
|   5 |    INDEX FULL SCAN                   | SYS_C0010601 | 89769 |       |  169   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

– Now we will load the hinted plan into the baseline.
– Note that we use the SQL_ID and PLAN_HASH_VALUE of the hinted statement and the SQL_HANDLE of the unhinted statement, i.e. we are associating the hinted plan with the unhinted statement.

SQL>variable cnt number
exec :cnt := dbms_spm.load_plans_from_cursor_cache(-
sql_id => '5x2y12dzacv7w',  -
plan_hash_value => 4263155932, -
sql_handle => 'SQL_7d3369334b24a117');

– Verify that there are now two plans loaded for that SQL statement:

  •  The unhinted sub-optimal plan is disabled.
  •  The hinted optimal plan, which even though it is for a “different query” can work with the earlier unhinted query (the SQL_HANDLE is the same), is enabled.

SQL>col sql_text for a35 word_wrapped
col enabled for a15

select  sql_text, sql_handle, plan_name, enabled from dba_sql_plan_baselines
where sql_text like   'select cust_name, prod_name%';

SQL_TEXT                            SQL_HANDLE                PLAN_NAME                       ENABLED
----------------------------------- ------------------------- ------------------------------- -------
select cust_name, prod_name         SQL_7d3369334b24a117      SQL_PLAN_7ucv96d5k988rea320380  YES
select cust_name, prod_name         SQL_7d3369334b24a117      SQL_PLAN_7ucv96d5k988rfe19664b  NO

– Verify that the hinted plan is used even though we do not use a hint in the query –
– The Note section confirms that the baseline has been used for this statement

SQL>variable prod_id number
exec :prod_id := 1000

select cust_name, prod_name
from customer c, product p
where c.prod_id = p.prod_id
and c.prod_id = :prod_id;

select * from table (dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  b257apghf1a8h, child number 0
-------------------------------------
select cust_name, prod_name from customer c, product p where c.prod_id
= p.prod_id and c.prod_id = :prod_id

Plan hash value: 4263155932

-----------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |              |       |       |  1618 (100)|          |
|   1 |  NESTED LOOPS                        |              | 88734 |    17M|  1618   (1)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID        | PRODUCT      |     1 |   106 |    2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN                 | SYS_C0010600 |     1 |       |    1   (0)| 00:00:01 |
|*  4 |   TABLE ACCESS BY INDEX ROWID BATCHED| CUSTOMER     | 88734 |  9098K|  1616   (1)| 00:00:01 |
|   5 |    INDEX FULL SCAN                   | SYS_C0010601 | 89769 |       |  169   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("P"."PROD_ID"=:PROD_ID)
4 - filter("C"."PROD_ID"=:PROD_ID)

Note
-----
- SQL plan baseline SQL_PLAN_7ucv96d5k988rea320380 used for this statement

With this baseline solution, you need not embed permanent hints in the production code, and hence there are no upgrade issues. Moreover, the plan will evolve with time as the optimizer discovers better ones.

Note: Using this method, you can swap the plan only for a query that is fundamentally the same, i.e. you should first obtain the desired plan by adding hints, modifying an optimizer setting, playing around with statistics, etc., and then associate the sub-optimally performing statement with that optimal plan.

I hope this post was useful.

Your comments and suggestions are always welcome!

References:
http://www.oracle.com/technetwork/issue-archive/2014/14-jul/o44asktom-2196080.html

—————————————————————————————————————————————–




Copyright © ORACLE IN ACTION [Influence execution plan without adding hints], All Right Reserved. 2014.

The post Influence execution plan without adding hints appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

ORA 700's make me grumpy ( well or at least confused )

Grumpy old DBA - Wed, 2014-12-03 09:16
Geez Louise I guess I coulda/shoulda known about this before now.

At some point the oracle software ( 11.2 ish ? 11.1 ish ? ) now starts getting worried and kind of grumpy.

You apparently can get ORA 700's under certain circumstances.  It's not a huge problem apparently ( yet ) when the software decides to let you know ... ( but it might become one later maybe ? ).

I saw this one on a new test vm I am setting up ( Database 12.1.0.2 using ASM and also OEM 12.1.0.4 ).

ORA 700 [kskvmstatact: excessive swapping observed]

So anyway I started doing some looking around at the memory config stuff after seeing that ( but that's a longer story ).

Categories: DBA Blogs

StatsPack and AWR Reports -- Bits and Pieces -- 4

Hemant K Chitale - Tue, 2014-12-02 08:05
This is the fourth post in a series.

Post 1 is here.
Post 2 is here.
Post 3 is here.

Buffer Cache Hit Ratios

Many novice DBAs may use Hit Ratios as indicators of performance.  However, these can be misleading or incomplete.

Here are two examples :

Extract A: 9i StatsPack

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer  Hit   %:   99.06

It would seem that with only 0.94% of reads being physical reads, the database is performing optimally.  So, the DBA doesn't need to look any further.  
Or so it seems.
If he spends some time reading the report, he then also comes across this:
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
                                                                     % Total
Event                                         Waits        Time (s)  Ela Time
-------------------------------------------- ------------ ----------- --------
db file sequential read                           837,955       4,107    67.36
CPU time                                                        1,018    16.70
db file scattered read                             43,281         549     9.00


                                                                   Avg
                                                     Total Wait   wait    Waits
Event                               Waits   Timeouts   Time (s)   (ms)     /txn
---------------------------- ------------ ---------- ---------- ------ --------
db file sequential read           837,955          0      4,107      5    403.3
db file scattered read             43,281          0        549     13     20.8
Physical I/O is a significant proportion (76%) of total database time.  88% of the physical I/O is single-block  reads ("db file sequential read").  This is where the DBA must identify that tuning *is* required.
Considering the single block access pattern it is likely that a significant proportion are index blocks as well.  Increasing the buffer cache might help cache the index blocks.


Extract B : 10.2 AWR
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %:              99.98    Redo NoWait %:                100.00
Buffer Hit %:                 96.43    In-memory Sort %:              99.99
Library Hit %:                97.16    Soft Parse %:                  98.16
Execute to Parse %:           25.09    Latch Hit %:                   99.85
Parse CPU to Parse Elapsd %:  89.96    % Non-Parse CPU:               96.00
The Buffer Hit Ratio is very good.  Does that mean that I/O is not an issue?
Look again at the same report:
Top 5 Timed Events
Event                          Waits        Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
CPU time                                    147,593                     42.3
db file sequential read        31,776,678    87,659        3            25.1          User I/O
db file scattered read         19,568,220    79,142        4            22.7          User I/O
RMAN backup & recovery I/O      1,579,314    37,650       24            10.8          System I/O
read by other session           3,076,110    14,216        5             4.1          User I/O
User I/O is actually significant.  The SQLs with the highest logical I/O need to be reviewed for tuning.

.
.
.

Categories: DBA Blogs

The Perfect Gift For The Oracle DBA: Top 5 DBA T-Shirts

The Perfect Gift For The Oracle DBA: Top 5 DBA T-Shirts
It's that time of year again and I can already hear it, "Dad, what do you want for Christmas?" This year I'm taking action. Like forecasting Oracle performance, I'm taking proactive action.

Like most of you reading this, you have a, let's say, unique sense of humor. I stumbled across the ultimate geek website that has an astonishing variety of t-shirts aimed at those rare individuals like us that get a rush in understanding the meaning of an otherwise cryptic message on a t-shirt.

I picked my Top 5 DBA Geek T-Shirts based on the challenges, conflicts and joys of being an Oracle DBA. With each t-shirt I saw, a story came to mind almost immediately. I suspect you will have a similar experience that rings strangely true.

So here they are—the Top 5 T-Shirts For The Oracle DBA:
Number 5: Change Your Password
According to Slash Data, the top password is now "Password".  I guess the upper-case "P" makes people feel secure, especially since last year's top password was "123456" and EVERYBODY knows that's a stupid password. Thanks to new and improved password requirements, the next most popular password is "12345678". Scary but not surprising.

As Oracle Database Administrators, and as anyone who listened to Troy Ligon's presentation at last year's IOUG conference knows, passwords are clearly not safe. ANY passwords. Hopefully in the coming years, passwords will be a thing of the past.


Number 4: Show Your Work
Part of my job as a teacher and consultant is to stop behavior like this: I ask a DBA, "I want to understand why you want to make this change to improve performance." And the reply is something like one of these:

  1. Because it has worked on our other systems.
  2. I did a Google search and an expert recommended this.
  3. Because the box is out of CPU power, there are latching issues, so increasing spin_count will help.
  4. Because we have got to do something and quick!

I teach Oracle DBAs to think from the user experience to the CPU cycles developing a chain of cause and effect. If we can understand the cause and effect relationships, perhaps we can disrupt poor performance and turn it to our favor. "Showing your work" and actually writing it down can be really helpful.

Number 3: You Read My T-Shirt
Why do managers and users think their presence in close proximity to mine will improve performance or perhaps increase my productivity? Is that what they learn in Hawaii during "end user training"?

What's worse is when a user or manager wants to talk about it...while I'm obviously concentrating on a serious problem.

Perhaps if I wear this t-shirt, stand up, turn around and remain silent they will stop talking and get the point. We can only hope.

Number 2: I'm Here Because You Broke Something
Obnoxious but true. Why do users wonder why performance is "slow" when they do a blind query returning ten-million rows and then scroll down looking for the one row they are interested in.... Wow. The problem isn't always the technology... but you know that already.

Hint to Developers: Don't let users do a drop down or a lookup that returns millions or even thousands or even hundreds of rows... Please for the love of performance optimization!


Number 1 (drum roll): Stand Back! I'm Going To Try SCIENCE
One of my goals in optimizing Oracle Database performance is to be quantitative. And whenever possible, repeatable. Add some basic statistics and you've got science. But stand back because, as my family tells me, it does get a little strange sometimes.

But seriously, being a "Quantitative Oracle Performance Analyst" is always my goal because my work is quantifiable, reference-able and sets me up for advanced analysis.


So there you go! Five t-shirts for the serious and sometimes strange Oracle DBA. Not only will these t-shirts prove and reinforce your geeky reputation, but you'll get a small yet satisfying feeling your job is special...though a little strange at times.

All the best in your Oracle performance endeavors!

Craig.
Categories: DBA Blogs

Watch: Hadoop vs. Cassandra

Pythian Group - Mon, 2014-12-01 10:53

Every data platform has its value, and deciding which one will work best for your big data objectives can be tricky—Alex Gorbachev, Oracle ACE Director, Cloudera Champion of Big Data, and Chief Technology Officer at Pythian, has recorded a series of videos comparing the various big data platforms and presents use cases to help you identify which ones will best suit your needs.

“Hadoop is generally deployed in a single data center, multi-RAC deployment, but they’re all reasonably geographically co-located with each other,” Alex explains. Cassandra on the other hand, “…is frequently deployed in a very distributed fashion… Somewhere in Asia, Europe, North America… So you end up with a very fault-tolerant environment.” Learn how the two platforms compare by watching Alex’s video Hadoop vs. Cassandra.

Note: You may recognize this series, which was originally filmed back in 2013. After receiving feedback from our viewers that the content was great, but the video and sound quality were poor, we listened and re-shot the series.

Find the rest of the series here

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Big Data expertise.

Categories: DBA Blogs

How Linux Works, 2nd Edition What Every Superuser Should Know by Brian Ward; No Starch Press

Surachart Opun - Sun, 2014-11-30 08:23
Everyone knows about Linux. It's a popular operating system: the software on a computer that enables applications and the computer operator to access the computer's devices to perform desired functions.
You can read more at the link I pointed to. For me, Linux is a great operating system that I can use as both a desktop and a server, and I have used it for over ten years. It's a very interesting operating system. I have used it and worked with it alongside many open source packages such as Apache HTTP Server, BIND, Sendmail, Postfix, Cyrus IMAP, Samba, and more, and it's an operating system on which I can play with programming languages such as C, PHP, Java, Python, and Perl. I don't want to say "too much".
Today, I had a chance to pick up a book written about Linux: How Linux Works, 2nd Edition: What Every Superuser Should Know by Brian Ward. It's a cool book from which you can learn about Linux as a beginner or as a Linux administrator. You may learn some things you have never used before but will find in this book, and it's fun to learn. However, one book can't cover every Linux skill. You will learn:
  • How Linux boots, from boot loaders to init implementations (systemd, Upstart, and System V)
  • How the kernel manages devices, device drivers, and processes
  • How networking, interfaces, firewalls, and servers work
  • How development tools work and relate to shared libraries
  • How to write effective shell scripts 
It might not cover as much as you were expecting to learn. However, it's a good book about Linux that you can enjoy reading; it's easy to read and understand. It's for people who are starting out with Linux, and for Linux administrators who enjoy learning and want to pick up something new they can use in their field.

Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Cloud Adapters for ORACLE Service Cloud (RightNow Cloud 12.1.3) Released

The ORACLE Cloud Adapter for ORACLE Service Cloud (RightNow 12.1.3) reaches general availability! With the ORACLE Cloud Adapters, integrations between Cloud and On-Premise Applications are...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Upgrade Oracle GoldenGate 12.1.2.0.x to 12.1.2.1.x

DBASolved - Fri, 2014-11-28 22:33

Recently, I’ve been engaged in a conversation on the OTN community pages about upgrading Oracle GoldenGate 12c from 12.1.2.0.0 to 12.1.2.1.0.  During the discussion I mentioned that you can upgrade using the Oracle Universal Installer (OUI) that is now available with Oracle GoldenGate 12c.  The upgrade process I’m going to show you here is an in-place upgrade of Oracle GoldenGate 12c for Oracle Database 12c.

Note: Before doing any upgrades of Oracle GoldenGate, make sure to stop all processes and backup your existing binaries and associated files needed for your environment.

The first thing that needs to be done is to download the 12.1.2.1.0 binaries from either edelivery.oracle.com or My Oracle Support (Image 1).

Image 1:

edelivery_ogg121210.png

After downloading the new binaries to a location where they can be extracted; they need to be unzipped.

 $ unzip ./OracleGoldenGate12121.zip -d ./ggate12121 

Once the binaries are unzipped, let’s go into the ./ggate12121 directory to find the runInstaller.  On my system the runInstaller is found at this location:

/media/sf_Oracle/GGATE/GG12c/ggate12121/fbo_ggs_Linux_x64_shiphome/Disk1

Before running the runInstaller, I need to make sure that all my Oracle GoldenGate processes are down.  Since this is on my target (test) system, that means the manager, all replicats and collector processes should be stopped.  A simple “ps -ef” command can help identify what is running.

$ ps -ef | grep dirprm
oracle    2401     1  0 22:47 ?        00:00:00 ./mgr PARAMFILE /opt/app/oracle/product/12.1.2/oggcore_1/dirprm/MGR.prm REPORTFILE /opt/app/oracle/product/12.1.2/oggcore_1/dirrpt/MGR.rpt PROCESSID MGR USESUBDIRS
oracle    2407  2401  1 22:47 ?        00:00:02 /opt/app/oracle/product/12.1.2/oggcore_1/replicat PARAMFILE /opt/app/oracle/product/12.1.2/oggcore_1/dirprm/REP.prm REPORTFILE /opt/app/oracle/product/12.1.2/oggcore_1/dirrpt/REP.rpt PROCESSID REP USESUBDIRS

$ ps -ef | grep server
oracle    2486  1848  0 22:51 pts/0    00:00:00 grep server

After identifying the processes, they need to be stopped from within GGSCI.

Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.0.0 17185003 OGGCORE_12.1.2.0.0_PLATFORMS_130924.1316_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Sep 25 2013 02:33:54
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.

GGSCI (ggtest2.acme.com) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
REPLICAT    RUNNING     REP         45:06:32      00:04:35

GGSCI (ggtest2.acme.com) 2> stop er *

Sending STOP request to REPLICAT REP ...
Request processed.

GGSCI (ggtest2.acme.com) 3> stop mgr !

Sending STOP request to MANAGER ...
Request processed.
Manager stopped.

GGSCI (ggtest2.acme.com) 9> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     STOPPED
REPLICAT    STOPPED     REP         45:14:08      00:00:05

To verify that everything has been stopped, the “ps -ef” command can be ran to verify. After verifying that everything is stopped, the runInstaller can be used to start the OUI for the upgrade (Image 2).

Image 2:

ogg_upgrade12121.png

Notice that there are 5 steps to the OUI.  Also notice that there is not an “upgrade” option.  Not to worry, we can still perform the upgrade.  Since this is an upgrade of Oracle GoldenGate 12c for Oracle Database 12c, make sure to select the option for Oracle Database 12c (default), then click the next button.

On the Installation screen, the location of the existing binaries needs to be selected from the drop-down box for Software Location.  In the example, the location is /opt/app/oracle/product/12.1.2/oggcore_1.  Also notice in Image 3 that I do not want the manager process to start.  After making sure everything is correct and as expected, click Next.

Note: The information on this screen is read from the oraInventory files.  Make sure you know where the oraInventory is located and set it as needed.

Image 3:

NewImage

The wizard now moves to the Summary screen (Image 4).  On this screen, the typical information is seen. Click Install when ready.

Image 4:

ogg_upgrade12121_4.png

Image 5 shows the progress of the install/upgrade.

Image 5:

ogg_upgrade12121_5.png

After the install/upgrade is done (Image 6), we get a nice message saying that it was successful.

Image 6:

ogg_upgrade12121_6.png

Once the upgrade is complete, the Oracle GoldenGate processes (manager, replicats) can be restarted.  Notice that the version of Oracle GoldenGate 12c that is now running is 12.1.2.1.0.

[oracle@ggtest2 Disk1]$ cd /opt/app/oracle/product/12.1.2/oggcore_1
[oracle@ggtest2 oggcore_1]$ pwd
/opt/app/oracle/product/12.1.2/oggcore_1
[oracle@ggtest2 oggcore_1]$ . oraenv
ORACLE_SID = [oracle] ? remote12c
The Oracle base has been set to /opt/app/oracle
[oracle@ggtest2 oggcore_1]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Aug 7 2014 10:21:34
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.

GGSCI (ggtest2.acme.com) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     STOPPED
REPLICAT    STOPPED     REP         45:14:08      00:24:08

GGSCI (ggtest2.acme.com) 2> start mgr
Manager started.

GGSCI (ggtest2.acme.com) 3> start replicat rep
Sending START request to MANAGER ...
REPLICAT REP starting

GGSCI (ggtest2.acme.com) 4> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
REPLICAT    RUNNING     REP         45:39:08      00:00:01

As you can tell, the upgrade from Oracle GoldenGate 12c (12.1.2.0.0) to Oracle GoldenGate 12c (12.1.2.1.0) can be completed using the Oracle Universal Installer that now comes with Oracle GoldenGate 12c.  I wish Oracle would offer an explicit upgrade option in the OUI; it would cut down on some of the confusion when it comes to Oracle GoldenGate upgrades using the OUI.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Log Buffer #399, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-11-28 09:52

This Log Buffer Edition brings some freshly picked blog posts right out of the gardens of Oracle, SQL Server and MySQL.

Oracle:

Suppose you have a process A that you want to run faster. This process doesn’t commit (much), so it doesn’t wait on log file sync. However, there is another multi-threaded process, B, that commits very frequently, and spends a lot of time on “log file sync”.

New Solaris 11 CPU package to install and track CVE security fixes.

Upgrade/Migrate/Consolidate to Oracle 12c and Parallel Multitenant Upgrade Internals.

CVE metadata in Solaris IPS packages for improved Compliance reporting.

Why Your Supply Chain Needs Science?

SQL Server:

SQL Server expert David Poole discusses how teams can work together and share templates in Management Studio.

Explaining The SQL Server 2014 Analysis Services Data Mining Model Lift Chart.

By this stage, you should be familiar with the basics of SQL Server indexes. We’ve discussed what an Index actually is, as well as some of the most common types you’re likely to encounter. Now that we’ve seen some simple examples of how Indexes can be useful, we’re going to delve deeper into nonclustered indexes, as we’ll see how they can improve the performance of more complex queries.

The Project Deployment Model introduced in SSIS 2012 speeds up the deployment of database projects in which there may be hundreds of SSIS packages per project. Not only that, but deployments can be configured differently for each environments such as test and staging, and there are now ways of monitoring the status and performance of packages and of versioning the SSIS Catalog.

A technique to deal with lack of metadata for stored procedures when used with SSIS.

MySQL:

How to mess up your data using ONE command in MySQL/Galera.

High season is coming, how do you make sure that MySQL will handle the increased load? Stress tests could help with that, but it’s not a good idea to run them in a production environment. In this case Select_scan, Select_full_join and other MySQL counters could quickly give you an idea of how many queries are not performing well and could cause a performance degradation as the load goes up.

Nasty MySQL Replication Bugs that Affect Upgrade to 5.6.

(More) Secure local passwords in MySQL 5.6 and up.

Alternatives for chunking bulk deletes in common_schema.

Categories: DBA Blogs

Integrating Oracle MAF Application With Social Services

Nowadays we widely use social networks every...

We share our skills to maximize your revenue!
Categories: DBA Blogs

White Papers

Oracle in Action - Wed, 2014-11-26 23:44


Oracle’s Approach to Performance Tuning Part-I by Darrick Addison

Oracle’s Approach to Performance Tuning Part-II by Darrick Addison

SQL Plan Management in 11g (Oracle White Paper)

SQL Plan Management in 12c (Oracle White Paper)

Adaptive Cursors And SQL Plan Management (Arup Nanda)

Partitioning in 11g (Oracle White paper)

Oracle Database Parallel Execution Fundamentals (Oracle White Paper)

Understanding Parallel Execution Part-I (Randolf Geist)

Understanding Parallel Execution Part-II (Randolf Geist)

Oracle Active Dataguard 11g (Oracle White Paper)

Oracle 11g RAC (Oracle White paper)

Oracle 11gR2 RAC (Oracle White Paper)

Oracle Single Client Access Name (Oracle White Paper)

Oracle RAC One Node (Oracle White Paper)

11g R2 RAC : Architecture, Best Practices And Troubleshooting (Kai Yu)

Automatic Workload Management With Oracle RAC (Oracle White Paper)

RAC Administering Parallel Execution (Riyaz Shamsudeen)

Using RAC Parallel Instance Groups (Chris Lawson)

Oracle 12c RAC (Oracle White paper)

Maximize Availability with Oracle 12c (Oracle White Paper)



Copyright © ORACLE IN ACTION [White Papers], All Right Reserved. 2014.

The post White Papers appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Pythian at UKOUG 14

Pythian Group - Tue, 2014-11-25 13:39

Will you be joining us at the UKOUG Conference and Exhibition in Liverpool, UK? Over 200 world-class speakers and industry experts will be in attendance, including some of our very own.

Michael Abbey, Oracle ACE and Team Lead at Pythian, notes that Pythian’s attendance is important, not only for the company and its employees, but for the Oracle community as a whole. “Pythian’s presence at UKOUG this year is the next chapter in an ongoing participation in database and EIS technical events around the world,” Michael explains. “We pride ourselves on presence on all seven continents and our appearances in Liverpool are strategic to the worldwide user community, as are all the locations we frequent every year. The user group community is one of the fundamental building blocks for technical resources as they hone their skills to better serve their clients and the masses in general. Since its founding, Pythian has been a strategic and financial support organization feeding talent to many of the world’s largest technical shows.”

“You will see a number of presenters from the Pythian suite of experts including Oracle ACEs, Oracle ACE Directors, and members of the OakTable Network. We look forward to catching up with new acquaintances and rekindling existing relationships.” In the meantime, you can find their speaking sessions down below.

UPDATE: Join us for an informal networking event alongside Rittman Mead on Monday December 8th during UKOUG. We will be discussing how to leverage data to drive your organization’s success. Come meet with peers and industry experts, Mark Rittman and Jon Mead of Rittman Mead, and Marc Fielding and Christo Kutrovsky of Pythian. The networking event will take place at PanAm Bar and Restaurant in Liverpool from 6-8 PM, and will include drinks and light refreshments.

Please be sure to RSVP to the event—we hope to see you there!

 

RMAN: The Necessary Basics

Presented by Michael Abbey | Monday December 8, 2014 — 9:00-9:50 AM

This session, best suited for attendees just getting started with RMAN, covers the basic skills needed to write consistent backups and perform recovery activities from day one. Highlighted in this masterclass will be the following: RMAN architecture, using a catalog, backups to disk, backups to tape, recovery (complete and incomplete), database duplication, and tips and tricks.

About Michael: Michael Abbey is a seasoned and experienced presenter on the Oracle Database CORE technology. He first cut his teeth on V3 in 1986 and it has been a whirlwind of Oracle since then. Michael co-authored the first work in the Oracle Press series in 1994.

 

Why Use OVM for Oracle Database?
Presented by Francisco Munoz Alvarez | Monday December 8, 2014 — 5:00-5:50 PM

A vibrant session about OVM that will explain how, when, and why to use this product for virtualization. It will also give an overview of how Revera is currently using the product in NZ (the biggest OVM farm in the ANZ region) and show benchmark results comparing bare metal, OVM, and ESX, concluding with some tips and showing the scalability and load break points of the virtualization solutions.

Come and discover the answers for the following questions:

  • Does an Oracle Database perform well on a virtualized environment?
  • What virtualization technology is more stable and allows an Oracle database to perform faster?
  • What is the performance difference between using a bare metal and a virtualized guest?
  • Is it safe to run a production database in a virtualized environment?

About Francisco:  Working out of Pythian’s Australian office in Macquarie Park, Francisco Munoz Alvarez is Vice President and Managing Director of Service Delivery in Asia Pacific, overseeing its regional expansion effort. Francisco previously served as Chief Technology Officer at Database Integrated Solutions Ltd and has more than two decades’ experience in Oracle databases. As President of the Chilean Oracle Users Group and founder of the Oracle Technology Network (OTN) tours in Latin America and Asia Pacific, he is best known for his evangelist work with the Oracle community.

 

Big Data with Exadata
Presented by Christo Kutrovsky | Tuesday December 9, 2014 — 5:30-6:20 PM

In this presentation, Oracle ACE Christo Kutrovsky will discuss common big data use cases and how they can be implemented efficiently with Exadata. Attendees will learn how Exadata actually delivers most of the benefits touted by newer big data technologies, and can often be the right platform for data scalability.

About Christo: Christo Kutrovsky is an Oracle ACE in Pythian’s Advanced Technology Consulting Group. With a deep understanding of databases, application memory, and input/output interactions, he is an expert at optimizing the performance of the most complex infrastructures.  A dynamic speaker, Christo has delivered presentations at the IOUG, the UKOUG, the Rocky Mountain Oracle Users Group, Oracle Open World, and other industry conferences.

 

Measuring Performance in Oracle Solaris and Oracle Linux
Presented by Christo Kutrovsky | Wednesday December 10, 2014 —9:00-9:50 AM

You can’t improve what you can’t measure. If you want to get the most value from your database, you need to start with the basics: are you using your hardware and operating systems efficiently? Attend this session to learn how to measure system utilization in the Linux and Oracle Solaris operating systems and how to use this information for tuning and capacity planning.

About Christo: Christo Kutrovsky is an Oracle ACE in Pythian’s Advanced Technology Consulting Group. With a deep understanding of databases, application memory, and input/output interactions, he is an expert at optimizing the performance of the most complex infrastructures.  A dynamic speaker, Christo has delivered presentations at the IOUG, the UKOUG, the Rocky Mountain Oracle Users Group, Oracle Open World, and other industry conferences.

 

Lessons Learned in Implementing Oracle Access Manager 11g with Forms, Reports, and Discoverer
Presented by Sudeep Raj and Maris Elsins | Wednesday December 10, 2014 — 12:30-1:20 PM

Support for SSO ended in December 2011. To take advantage of the latest security enhancements, customers are always recommended to upgrade their systems to the latest and greatest version of the product, i.e. OAM/OID 11g; it is very important to stay on supported configurations. This session will give you an opportunity to understand how OAM 11g can be configured with Forms/Reports/Discoverer 11g and integrated with an external directory service such as Microsoft AD, and will discuss upgrade considerations for customers planning to upgrade from 10g SSO/OID to OAM 11g and OID 11g.

About Sudeep: Sudeep Raj is a Team Lead/Oracle Applications Database Consultant at Pythian, managing a group of expert DBAs spread across the globe. With nearly a decade of experience as an Apps DBA, he has been involved in and led multiple Oracle E-Business Suite 11i/R12 implementation, maintenance, migration, and upgrade projects. Sudeep is an OCP certified professional and holds a Bachelor of Engineering degree in Computer Science.

About Maris: Recently awarded the Oracle ACE designation, Maris Elsins is a Lead Database Consultant at Pythian. He is a blogger and frequent speaker at many Oracle related conferences like Collaborate, UKOUG, and LVOUG where he is a board member. Maris is an exceptional trouble shooter and enjoys learning why things behave the way they do.

 

Optimizing and Simplifying Complex SQL with Advanced Grouping
Presented by Jared Still | Wednesday December 10, 2014 — 3:30-4:20 PM

This presentation will show how advanced grouping features can be used to simplify SQL that was previously quite complex, by reducing the amount of code needed, improving readability, and, perhaps most importantly, greatly optimizing the performance of SQL statements.

About Jared: Jared Still is a Senior Database Consultant at Pythian. His experience includes working with Oracle databases beginning with version 7.0. While Oracle has expanded to encompass many aspects of the application environment, Jared’s focus has been on the database itself and related infrastructure.

 

Oracle RAC — Designing Applications for Scalability
Presented by Christo Kutrovsky | Wednesday December 10, 2014 — 3:30-4:20 PM

Oracle Real Application Clusters (RAC) promises 100% transparent active-active clustering technology – true horizontal scaling, but does it work in all cases? This presentation explores the challenges with Oracle’s active-active solution and how to solve them from both database side and application side. Both conceptual design and highly practical solutions are explored.

About Christo: Christo Kutrovsky is an Oracle ACE in Pythian’s Advanced Technology Consulting Group. With a deep understanding of databases, application memory, and input/output interactions, he is an expert at optimizing the performance of the most complex infrastructures.  A dynamic speaker, Christo has delivered presentations at the IOUG, the UKOUG, the Rocky Mountain Oracle Users Group, Oracle Open World, and other industry conferences.

 

Database as a Service on the Oracle Database Appliance Platform
Presented by Marc Fielding and Maris Elsins | Wednesday December 10, 2014 — 3:30-4:20 PM

Oracle Database Appliance provides a robust, highly-available, cost-effective, and surprisingly scalable platform for database as a service environment. By leveraging Oracle Enterprise Manager’s self-service features, databases can be provisioned on a self-service basis to a cluster of Oracle Database Appliance machines. Discover how multiple ODA devices can be managed together to provide both high availability and incremental, cost-effective scalability. Hear real-world lessons learned from successful database consolidation implementations.

About Marc: Marc Fielding is a passionate and creative problem solver, drawing on deep understanding of the full enterprise application stack to identify the root cause of problems, and to implement effective and sustainable solutions. He has extensive experience implementing Oracle’s engineered system portfolio, including leading one of the first enterprise Oracle Exadata implementations. Marc has a strong background in performance tuning and high availability.

About Maris: Recently awarded the Oracle ACE designation, Maris Elsins is a Lead Database Consultant at Pythian. He is a blogger and frequent speaker at many Oracle-related conferences such as Collaborate, UKOUG, and LVOUG, where he is a board member. Maris is an exceptional troubleshooter and enjoys learning why things behave the way they do.

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Oracle expertise or read some of our Oracle-related blog posts.

Categories: DBA Blogs

Parsing blocks stats blocks parsing

Bobby Durrett's DBA Blog - Mon, 2014-11-24 13:33

I had a five-minute conversation with Oracle development on Friday that rocked my world.  I found out that parsing blocks stats, which in turn blocks parsing.

We have a system with queries that take minutes to parse.  These queries include the main tables in our database, one of which is interval partitioned and has 20,000 sub-partitions.  We have seen a situation where Oracle’s delivered statistics gathering job hangs on a library cache lock, waiting for a query to finish parsing.  But, much worse than that, we find most queries on our system hanging on library cache lock waits, blocked by the statistics job.

We have an SR open on this situation because it seems to be a bug, but on Friday someone from Oracle development explained over the phone that this parse blocks stats blocks parse situation is normal.  Later, after I got off the phone, I built a simple test case proving that what he said was true.  I took a query that took a long time to parse in production and ran it on our development database, where it took 16 seconds to parse.  I chose the smallest table that the query included and gathered stats on it.  The stats ran in a fraction of a second when run by themselves, but if I started the long-parsing query in one window and ran the stats in another window, the stats hung on a library cache lock wait for 15 seconds.  Then I created a trivial query against the same small table I had gathered stats on.  The query ran instantly by itself.  But if I ran the long-parsing query first, kicked off the stats (which hung on the lock), and then kicked off the short query against the table whose stats were being gathered, the short query hung on a library cache lock as well.  This example convinced me that the parse blocks stats blocks parse chain was real.

This morning I built a standalone test case that others can run to prove this out on their databases: zip of testcase.  To run the test case you need three windows where you can run three sqlplus scripts in rapid succession.  In one window, first run tables.sql to create the test tables.  Then run these three scripts one after the other, one in each window, to create the three-link chain: chain1.sql, chain2.sql, chain3.sql.  Chain1.sql has the explain plan of a query that takes a long time to parse.  Chain2.sql gathers stats on one table.  Chain3.sql runs a simple query against the table whose stats are being gathered.  Chain1 spends all of its time on the CPU doing the parse.  Chains 2 and 3 spend all of their time on library cache lock waits.

First I created two tables:

create table t1 as select * from dba_tables;
create table t2 as select * from dba_tables;

Next I kicked off the explain plan that takes a long time to run.  It joined 100 tables together:

explain plan into plan_table for 
select 
count(*)
from
     t1,
     t2,
     t2 t3,
...
     t2 t100
where
  t1.owner=t2.owner and
...
  t1.owner=t100.owner
/

This explain plan ran for 26 seconds, almost all of which was CPU:

Elapsed: 00:00:26.90

...

CPU in seconds
--------------
         26.81
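As an aside, the elided lines in the statement above follow an obvious pattern, so rather than typing them you can generate them.  Here is a rough sketch (not part of the original test case) that assumes the same t1/t2 tables:

-- generate "t2 t3," through "t2 t100," for the from list (trim the comma from the last line)
select '     t2 t' || (level + 2) || ',' from dual connect by level <= 98;
-- generate the join predicates (trim the trailing "and" from the last line)
select '  t1.owner=t' || (level + 1) || '.owner and' from dual connect by level <= 99;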

Right after I kicked off the explain plan I kicked off this statement which gathered stats on the first table in the from clause:

execute dbms_stats.gather_table_stats(NULL,'T1');

This ran for 25 seconds and almost all of the time was spent on a library cache lock wait:

Elapsed: 00:00:25.77

...

Library cache lock in seconds
-----------------------------
                        25.55

Right after I kicked off the gather table stats command I ran this simple query making sure that it was unique and required a hard parse:

select /* comment to force hard parse */ count(*) from T1;

This ran for 24 seconds and almost all of the time was spent on a library cache lock wait:

Elapsed: 00:00:24.48

...

Library cache lock in seconds
-----------------------------
                        24.48

Evidently, when a session parses a query it needs to obtain a shared lock on every table that the query includes.  When you gather statistics on a table, you need to obtain an exclusive lock on that table, even if you are only gathering statistics on one partition or sub-partition of it.  While the statistics gathering session waits to acquire the exclusive lock, any new parse that includes the same table will hang.
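If you suspect this chain on your own system, current waits make it easy to spot.  Here is a rough diagnostic sketch against V$SESSION (BLOCKING_SESSION has been a standard column since 10g):

-- sessions currently stuck on library cache locks and the session blocking them
select sid, serial#, blocking_session, seconds_in_wait, sql_id
from   v$session
where  event = 'library cache lock'
order  by seconds_in_wait desc;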

Prior to Friday I did not think there was any non-bug situation where gathering optimizer statistics would lock up sessions.  I thought the only downside to gathering statistics at the same time as other application processing was that statistics gathering would compete for system resources such as CPU and I/O and possibly slow down application code.  But now I know that gathering statistics can hang every query that uses the given table if the stats gathering gets stuck waiting behind a query that takes a long time to parse.

– Bobby

P.S. After reviewing the SR I wanted to understand what this parse blocks stats blocks parse looked like in a state dump.  The Oracle support analyst explained how the locks looked in a state dump that we uploaded but I didn’t get a chance to look at it closely until today.  I found the most important information in lines with the string “LibraryObjectLock” at the front of the line after some spaces or tabs.  There were three types of lines – the holding share lock, the waiting exclusive lock, and the many waiting share locks:

LibraryObjectLock:  Address=... Handle=0x5196c8908 Mode=S ...
LibraryObjectLock:  Address=... Handle=0x5196c8908 RequestMode=X ...
LibraryObjectLock:  Address=... Handle=0x5196c8908 RequestMode=S ...

The “…” indicates places I edited out other details.  The handle 0x5196c8908 identifies the table being locked.  The “Mode=S” string indicates a successful share lock of that table by the session with the long parse time.  The “RequestMode=X” was from the stats job trying to get exclusive access to the table.  The “RequestMode=S” was all the other sessions trying to get shared access to the table waiting for stats to first get exclusive access.  Anyway, I just wanted to translate what Oracle support told me into something that might be useful to others.  Plus I want to remember it myself!
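For completeness, here is roughly how a systemstate dump like the one we uploaded can be taken from a SYSDBA sqlplus session (a sketch only; use whatever dump level Oracle support asks for, 266 is just a commonly used value):

oradebug setmypid
oradebug unlimit
oradebug dump systemstate 266
oradebug tracefile_name

The last command prints the trace file name; searching that file for LibraryObjectLock pulls out the lock lines shown above.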




Categories: DBA Blogs