Pythian Group

Official Pythian Blog - Love Your Data

Oracle Database 12c Patching: DBMS_QOPATCH, OPATCH_XML_INV, and datapatch

Mon, 2015-01-19 19:36
Background

Oracle Database 12c brings us many new features, including the long-needed ability to run OPatch and query the patch software installed in the Oracle Home programmatically through SQL, using the new DBMS_QOPATCH package.

If you’re a DBA working in an environment where patching databases consistently is important, this is a welcome enhancement. Sometimes one-off critical bug fix patches are important in the environment. Other times regular and consistent application of the quarterly PSUs is what matters to the organization (e.g. for regulatory compliance). For the sake of illustration I’m going to use the latter as my use case for the duration of this article, as I’ve regularly run into situations where clients need PSUs applied properly, regularly, and consistently to their databases.

As a recap, many Oracle patches, including PSUs, require a two-step process to implement properly. First, the software change needs to be applied to the associated Oracle Home via the OPatch utility. Second, it needs to be installed into all databases running in that Oracle Home (possibly updating internal packages and objects).

With Oracle 11g the latest patch is applied within the database using:

SQL> @catbundle.sql psu apply

Oracle 12c introduces a new utility called “datapatch” which replaces the need to run the 11g command shown above.

As with Oracle 11g, you first install the patch into the Oracle Home with all services and databases down. But with Oracle Database 12c, after restarting the database you simply run datapatch from the OPatch directory:

cd $ORACLE_HOME/OPatch
./datapatch -verbose
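
As a quick check that the datapatch run succeeded, you can query the new DBA_REGISTRY_SQLPATCH view (covered in more detail later in this article):

SQL> select patch_id, status, description from dba_registry_sqlpatch;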

 

Given that patching is a two-step process, some common DBA questions are:

1) Has a patch been installed in a given database?
2) Has a patch been installed into the $ORACLE_HOME via OPatch but not into one or more of the databases?
3) Is the patch in the database but not the $ORACLE_HOME?

You may be thinking that #3 isn’t very likely, but the reality is that it does happen. Circumstances that lead to a database being at a higher patch level than the Oracle Home include:

A) The database was created from an RMAN duplicate from a patched home to an unpatched one.
B) A Data Guard standby switchover or failover where the primary Oracle Home was patched but the standby home was missed.
C) A 12c PDB that was unplugged from a patched CDB and plugged into an unpatched CDB.
D) The patch was uninstalled via OPatch from the Oracle Home without being uninstalled from the database (unlikely).

Since patching is a two-step process, answering questions such as these has historically also been a two-step process, requiring the DBA to query the Oracle Home inventory and the database, then manually correlate and reconcile the results. This is cumbersome, prone to human error, and not scalable due to the manual component.

But with Oracle 12c and the new DBMS_QOPATCH package, both can be queried programmatically and joined using SQL, allowing the DBA to easily answer questions like 1, 2, and 3 above via SQL queries, custom extensions to monitoring tools, reporting programs, etc. Or simply to report accurately on patch deployments for security compliance reports, general database security assessments, or database health checks.

This is actually quite a significant breakthrough with regards to patch management!

 

How it works

The first thing to note is that in Oracle Database 12c there are some new Directory Objects installed by default:

SQL> select owner, directory_name, directory_path from dba_directories
  2  where directory_name like 'OPATCH%' order by 2;

OWNER        DIRECTORY_NAME       DIRECTORY_PATH
------------ -------------------- --------------------------------------------------------------------------------
SYS          OPATCH_INST_DIR      /u01/app/oracle/product/12.1.0/dbhome_1/OPatch
SYS          OPATCH_LOG_DIR       /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch
SYS          OPATCH_SCRIPT_DIR    /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch

SQL>

 

Two of those point to a new QOpatch directory, which contains a batch file that runs the OPatch utility:

SQL> !ls /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch
qopatch_log.log  qopiprep.bat

SQL> !cat /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch/qopiprep.bat
#!/bin/sh
#
# $Header: rdbms/src/client/tools/qpinv/qopiprep.bat /unix/2 2013/04/04 20:59:27 tbhukya Exp $
#
# qopiprep.bat
#
# Copyright (c) 2012, 2013, Oracle and/or its affiliates. All rights reserved.
#
#    NAME
#      qopiprep.bat - bat file for preprocessor
#
#    DESCRIPTION
#      bat file for external table preprocessor.
#
#    NOTES
#      .
#
#    MODIFIED   (MM/DD/YY)
#    tbhukya     04/03/13 - Bug 16226172 : Forward merge of fix 16076845
#    tbhukya     09/23/12 - Creation
#
cd $ORACLE_HOME
PATH=/bin:/usr/bin
export PATH

# Option: "-retry 0" avoids retries in case of locked inventory.

# Option: "-invPtrLoc" is required for non-central-inventory
# locations. $OPATCH_PREP_LSINV_OPTS which may set by users
# in the environment to configure special OPatch options
# ("-jdk" is another good candidate that may require configuration!).

# Option: "-all" gives information on all Oracle Homes
# installed in the central inventory.  With that information, the
# patches of non-RDBMS homes could be fetched.


$ORACLE_HOME/OPatch/opatch lsinventory -xml  $ORACLE_HOME/QOpatch/xml_file.xml -retry 0 -invPtrLoc $ORACLE_HOME/oraInst.loc >> $ORACLE_HOME/QOpatch/stout.txt
`echo "UIJSVTBOEIZBEFFQBL" >> $ORACLE_HOME/QOpatch/xml_file.xml`
echo `cat $ORACLE_HOME/QOpatch/xml_file.xml`
rm $ORACLE_HOME/QOpatch/xml_file.xml
rm $ORACLE_HOME/QOpatch/stout.txt

 

The description text in this Oracle-provided file states that it’s a batch file used for external table preprocessing. The preprocessor clause was added in Oracle 11g Release 2 (and back-ported to Oracle 11gR1) to allow an external script or file to process data before it is read into the database via an external table. A further explanation of this feature is beyond the scope of this article.

Consequently, we know an external table is involved, and it’s not too hard to find it:

SQL> select owner, table_name from dba_external_tables where table_name like 'OPATCH%' order by 1,2;

OWNER        TABLE_NAME
------------ --------------------
SYS          OPATCH_XML_INV

SQL> select dbms_metadata.get_ddl('TABLE','OPATCH_XML_INV','SYS') from dual;

DBMS_METADATA.GET_DDL('TABLE','OPATCH_XML_INV','SYS')
--------------------------------------------------------------------------------

  CREATE TABLE "SYS"."OPATCH_XML_INV"
   (    "XML_INVENTORY" CLOB
   )
   ORGANIZATION EXTERNAL
    ( TYPE ORACLE_LOADER
      DEFAULT DIRECTORY "OPATCH_SCRIPT_DIR"
      ACCESS PARAMETERS
      ( RECORDS DELIMITED BY NEWLINE CHARACTERSET UTF8
      DISABLE_DIRECTORY_LINK_CHECK
      READSIZE 8388608
      preprocessor opatch_script_dir:'qopiprep.bat'
      BADFILE opatch_script_dir:'qopatch_bad.bad'
      LOGFILE opatch_log_dir:'qopatch_log.log'
      FIELDS TERMINATED BY 'UIJSVTBOEIZBEFFQBL'
      MISSING FIELD VALUES ARE NULL
      REJECT ROWS WITH ALL NULL FIELDS
      (
        xml_inventory    CHAR(100000000)
      )
        )
      LOCATION
       ( "OPATCH_SCRIPT_DIR":'qopiprep.bat'
       )
    )
   REJECT LIMIT UNLIMITED


SQL>

 

Hence we can understand the underlying mechanics of this new feature: it’s based on an external table that uses a directory object to run an external batch file (through the external table preprocessor option), which in turn runs the actual OPatch utility.

And we can query that external table directly if we want, though we’ll get the raw XML document back in a single CLOB column:

SQL> select * from OPATCH_XML_INV;

XML_INVENTORY
--------------------------------------------------------------------------------
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <InventoryInstance> <ora
cleHome> <UId>OracleHome-0243b34c-d8db-43f7-946b-589110de0ef9</UId> <targetTypeI
d>oracle_home</targetTypeId> <inventoryLocation>/u01/app/oraInventory</inventory
Location>
...
(output truncated)

 

Using DBMS_QOPATCH

Oracle gives us a wrapper API to this new external table (OPATCH_XML_INV) with DBMS_QOPATCH to make working with the output more manageable.

For example, we can start with some basic information on the inventory using DBMS_QOPATCH.GET_OPATCH_INSTALL_INFO:

SQL> set heading off long 50000
SQL> select dbms_qopatch.get_opatch_install_info from dual;

<oracleHome><UId>OracleHome-a59380fa-5f8e-42df-b624-282f0189ec93</UId><targetTyp
eId>oracle_home</targetTypeId><inventoryLocation>/u01/app/oraInventory</inventor
yLocation><isShared>false</isShared><patchingModel>oneoff</patchingModel><path>/
u01/app/oracle/product/12.1.0/dbhome_1</path><targetTypeId>oracle_home</targetTy
peId></oracleHome>

SQL>

 

But as we can see, the output still isn’t easy on the eyes. Fortunately, the corresponding XML stylesheet is also provided through the DBMS_QOPATCH.GET_OPATCH_XSLT function. Hence combining the two gives much more readable output:

SQL> select xmltransform(dbms_qopatch.get_opatch_install_info, dbms_qopatch.get_opatch_xslt) from dual;


Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Inventory         : /u01/app/oraInventory


SQL>

 

The DBMS_QOPATCH package provides many other useful functions and procedures. For example, to see if and when a specified patch was installed, or just to see the list of all installed patches by bug number:

SQL> select xmltransform(dbms_qopatch.is_patch_installed('19303936'), dbms_qopatch.get_opatch_xslt) from dual;


Patch Information:
         19303936:   applied on 2014-12-20T13:54:54-07:00


SQL> select xmltransform(dbms_qopatch.get_opatch_bugs, dbms_qopatch.get_opatch_xslt) from dual;


  Bugs fixed:
          19157754  18885870  19303936  19708632  19371175  18618122  19329654
19075256  19074147  19044962  19289642  19068610  18988834  19028800  19561643
19058490  19390567  18967382  19174942  19174521  19176223  19501299  19178851
18948177  18674047  19723336  19189525  19001390  19176326  19280225  19143550
18250893  19180770  19155797  19016730  19185876  18354830  19067244  18845653
18849537  18964978  19065556  19440586  19439759  19024808  18952989  18990693
19052488  19189317  19409212  19124589  19154375  19279273  19468347  19054077
19048007  19248799  19018206  18921743  14643995

SQL>

 

Or to run the equivalent of “opatch lsinventory” but from SQL instead of the OS:

SQL> select xmltransform(dbms_qopatch.get_opatch_lsinventory, dbms_qopatch.get_opatch_xslt) from dual;


Oracle Querayable Patch Interface 1.0
--------------------------------------------------------------------------------

Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Inventory         : /u01/app/oraInventory
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Database 12c                                    12.1.0.2.0
Installed Products ( 135)

Oracle Database 12c                                         12.1.0.2.0
Java Development Kit                                        1.6.0.75.0
...
(output truncated)

 

Putting it all together

The examples above are interesting, but to make this truly useful for patch application querying, monitoring, and reporting, we need to join the output of DBMS_QOPATCH (which shows us what’s installed into the Oracle Home) with the new 12c view DBA_REGISTRY_SQLPATCH (which shows us what patches have been applied to the database).

Prior to Oracle 12c, to list the PSUs installed into the Oracle Home we might use:

SQL> !$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | egrep -i 'DATABASE PSU|DATABASE PATCH SET UPDATE'
19303936 19303936 Sat Dec 20 13:54:54 MST 2014 DATABASE PATCH SET UPDATE 12.1.0.2.1 (OCT2014)

SQL>

 

I obtained that result by shelling out from a SQL*Plus session running on the database server. But a centralized SQL-based monitoring or reporting tool likely won’t have that ability: a client-server database connection can’t easily run OPatch, and hence the DBMS_QOPATCH API is required.

To extract the same information from the DBMS_QOPATCH package we need to parse the resulting XMLTYPE output:

SQL> with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
  2  select x.*
  3    from a,
  4         xmltable('InventoryInstance/patches/*'
  5            passing a.patch_output
  6            columns
  7               patch_id number path 'patchID',
  8               patch_uid number path 'uniquePatchID',
  9               description varchar2(80) path 'patchDescription',
 10               applied_date varchar2(30) path 'appliedDate',
 11               sql_patch varchar2(8) path 'sqlPatch',
 12               rollbackable varchar2(8) path 'rollbackable'
 13         ) x;

  PATCH_ID  PATCH_UID
---------- ----------
DESCRIPTION
--------------------------------------------------------------------------------
APPLIED_DATE                   SQL_PATC ROLLBACK
------------------------------ -------- --------
  19303936   18116864
Database Patch Set Update : 12.1.0.2.1 (19303936)
2014-12-20T13:54:54-07:00      true     true


SQL>

 

This is a great result: information on the PSUs installed into the Oracle Home, obtained through SQL and returned to us in an easy-to-read, easy-to-work-with tabular form!

To make this information even more useful, we need to join it to the new DBA_REGISTRY_SQLPATCH view, which replaces the DBA_REGISTRY_HISTORY view with respect to recording which patches have been applied to the database:

SQL> select patch_id, patch_uid, version, status, description
  2  from dba_registry_sqlpatch
  3  where bundle_series = 'PSU';

  PATCH_ID  PATCH_UID VERSION              STATUS
---------- ---------- -------------------- ---------------
DESCRIPTION
--------------------------------------------------------------------------------
  19303936   18116864 12.1.0.2             SUCCESS
Database Patch Set Update : 12.1.0.2.1 (19303936)


SQL>

 

Joining these two outputs allows us to easily write queries that report on the PSU patches fully deployed and those only partially deployed (whether in the Oracle Home but not the database, or vice versa):

SQL> --
SQL> --   List of PSUs applied to both the $OH and the DB
SQL> --
SQL> with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
  2  select x.patch_id, x.patch_uid, x.rollbackable, s.status, x.description
  3    from a,
  4         xmltable('InventoryInstance/patches/*'
  5            passing a.patch_output
  6            columns
  7               patch_id number path 'patchID',
  8               patch_uid number path 'uniquePatchID',
  9               description varchar2(80) path 'patchDescription',
 10               rollbackable varchar2(8) path 'rollbackable'
 11         ) x,
 12         dba_registry_sqlpatch s
 13   where x.patch_id = s.patch_id
 14     and x.patch_uid = s.patch_uid
 15     and s.bundle_series = 'PSU';

  PATCH_ID  PATCH_UID ROLLBACK STATUS
---------- ---------- -------- ---------------
DESCRIPTION
--------------------------------------------------------------------------------
  19303936   18116864 true     SUCCESS
Database Patch Set Update : 12.1.0.2.1 (19303936)


SQL> --
SQL> --   PSUs installed into the $OH but not applied to the DB
SQL> --
SQL> with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
  2  select x.patch_id, x.patch_uid, x.description
  3    from a,
  4         xmltable('InventoryInstance/patches/*'
  5            passing a.patch_output
  6            columns
  7               patch_id number path 'patchID',
  8               patch_uid number path 'uniquePatchID',
  9               description varchar2(80) path 'patchDescription'
 10         ) x
 11  minus
 12  select s.patch_id, s.patch_uid, s.description
 13    from dba_registry_sqlpatch s;

no rows selected

SQL> --
SQL> --   PSUs applied to the DB but not installed into the $OH
SQL> --
SQL> with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
  2  select s.patch_id, s.patch_uid, s.description
  3    from dba_registry_sqlpatch s
  4  minus
  5  select x.patch_id, x.patch_uid, x.description
  6    from a,
  7         xmltable('InventoryInstance/patches/*'
  8            passing a.patch_output
  9            columns
 10               patch_id number path 'patchID',
 11               patch_uid number path 'uniquePatchID',
 12               description varchar2(80) path 'patchDescription'
 13         ) x;

no rows selected

SQL>

Simple queries such as these three can be incorporated into monitoring scripts and reports.
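
For instance, here’s a minimal sketch (the view name is mine, and the owner would need execute privileges on DBMS_QOPATCH) that wraps the “installed into the $OH but not applied to the DB” check in a view, so a monitoring tool can poll it with a plain SELECT:

-- Hypothetical monitoring view: returns one row per patch present in the
-- Oracle Home inventory but absent from DBA_REGISTRY_SQLPATCH.
create or replace view psu_missing_from_db as
with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
select x.patch_id, x.patch_uid, x.description
  from a,
       xmltable('InventoryInstance/patches/*'
          passing a.patch_output
          columns
             patch_id number path 'patchID',
             patch_uid number path 'uniquePatchID',
             description varchar2(80) path 'patchDescription'
       ) x
minus
select s.patch_id, s.patch_uid, s.description
  from dba_registry_sqlpatch s;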

Some other DBMS_QOPATCH functions worth trying include:

set heading off long 50000 pages 9999 lines 180 trims on tab off
select xmltransform(dbms_qopatch.get_opatch_count, dbms_qopatch.get_opatch_xslt) from dual;
select xmltransform(dbms_qopatch.get_opatch_list, dbms_qopatch.get_opatch_xslt) from dual;
select xmltransform(dbms_qopatch.get_pending_activity, dbms_qopatch.get_opatch_xslt) from dual;

set serverout on
exec dbms_qopatch.get_sqlpatch_status;

 

Back-porting to Oracle Database 11g

An interesting question is: “can we back-port this approach to 11g?” And the answer is: absolutely!

First of all, we need to create the directory objects, external table, and OS batch script. For simplicity I’m keeping the name and structure of each the same as in 12c, but of course you can adjust them if desired.

Using DBMS_METADATA.GET_DDL on an Oracle 12c database, I get the necessary DDL to add the directory objects and external table to an 11g database (note that I’ve updated the Oracle Home path, but that’s the only change):

CREATE OR REPLACE DIRECTORY "OPATCH_LOG_DIR" AS '/u01/app/oracle/product/11.2.0/dbhome_1/QOpatch';
CREATE OR REPLACE DIRECTORY "OPATCH_SCRIPT_DIR" AS '/u01/app/oracle/product/11.2.0/dbhome_1/QOpatch';

CREATE TABLE "SYS"."OPATCH_XML_INV"
   (    "XML_INVENTORY" CLOB
   )
   ORGANIZATION EXTERNAL
    ( TYPE ORACLE_LOADER
      DEFAULT DIRECTORY "OPATCH_SCRIPT_DIR"
      ACCESS PARAMETERS
      ( RECORDS DELIMITED BY NEWLINE CHARACTERSET UTF8
      DISABLE_DIRECTORY_LINK_CHECK
      READSIZE 8388608
      preprocessor opatch_script_dir:'qopiprep.bat'
      BADFILE opatch_script_dir:'qopatch_bad.bad'
      LOGFILE opatch_log_dir:'qopatch_log.log'
      FIELDS TERMINATED BY 'UIJSVTBOEIZBEFFQBL'
      MISSING FIELD VALUES ARE NULL
      REJECT ROWS WITH ALL NULL FIELDS
      (
        xml_inventory    CHAR(100000000)
      )
        )
      LOCATION
       ( "OPATCH_SCRIPT_DIR":'qopiprep.bat'
       )
    )
   REJECT LIMIT UNLIMITED;

 

Next I need to copy the preprocessor batch script into the 11g home (it can be copied from another machine if necessary):

!cp -r /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch /u01/app/oracle/product/11.2.0/dbhome_1/QOpatch

 

Now, I don’t want to install the DBMS_QOPATCH package into a different version of the database. It’s Oracle-supplied “wrapped” code, meaning I can’t modify it, and it probably has 12c dependencies that would leave it invalid in Oracle 11g. Instead I performed a SQL trace of DBMS_QOPATCH.GET_OPATCH_LSINVENTORY executing on the 12c database, and from the resulting trace file I can see that all it’s doing is:

INSERT INTO OPATCH_XINV_TAB(XML_INVENTORY) SELECT * FROM OPATCH_XML_INV
SELECT XMLTYPE(XML_INVENTORY) FROM OPATCH_XINV_TAB
DELETE FROM OPATCH_XINV_TAB

 

Not exactly complicated. Hence, using the directory objects and external table created above, I can execute the following on the 11g database:

SQL> select version from v$instance;

VERSION
-----------------
11.2.0.4.0

SQL> select XMLTYPE(XML_INVENTORY) patch_output from OPATCH_XML_INV;

PATCH_OUTPUT
--------------------------------------------------------------------------------
<?xml version="1.0" encoding="US-ASCII"?>
<INVENTORY isStandAlone="false">
 <HEADER>
 <ORACLE_HOME>/u01/app/oracle/product/11.2.0/dbhome_1</ORACLE_HOME>
 <CENTRAL_INVENTORY>/u01/app/oraInventory</CENTRAL_INVENTORY>
 <OPATCH_VERSION>11.2.0.3.6</OPATCH_VERSION>
 <OUI_VERSION>11.2.0.4.0</OUI_VERSION>
 <OUI_LOCATION>/u01/app/oracle/product/11.2.0/dbhome_1/oui</OUI_LOCATION>
 <LOG>/u01/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2014-1
2-23_15-08-04PM_1.log</LOG>
 </HEADER>
...
(output truncated)

 

So as we can see, it’s working perfectly back-ported to 11g.

However, again the XML output isn’t really useful to me; I’m more interested in which PSUs have been installed. That’s easy to check using OPatch if I’m on the server:

SQL> !$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | egrep -i 'DATABASE PSU|DATABASE PATCH SET UPDATE'
18522509   18522509  Sun Sep 21 20:58:00 MDT 2014   DATABASE PATCH SET UPDATE 11.2.0.4.3 (INCLUDES CPU
18031668   18031668  Sun Sep 21 20:57:48 MDT 2014   DATABASE PATCH SET UPDATE 11.2.0.4.2 (INCLUDES CPU
17478514   17478514  Sun Sep 21 20:57:38 MDT 2014   DATABASE PATCH SET UPDATE 11.2.0.4.1 (INCLUDES CPU

 

But again, what if I’m not on the server and want to do this programmatically through SQL, build it into a monitoring query or a security compliance report, etc.?

Again we can query the XMLTYPE data and get exactly what we want. Specifically:

SQL> with a as (select XMLTYPE(XML_INVENTORY) patch_output from OPATCH_XML_INV)
  2  select x.* from a, xmltable(
  3     'INVENTORY/HOST/HOME/ONEOFF_LIST/INTERIM_PATCH/oneoff_inventory/base_bugs/bug'
  4     passing a.patch_output columns
  5     --row_number for ordinality,
  6     bug_number number path '@number',
  7     bug_description varchar2(256) path '@description'
  8  ) x
  9  where regexp_like(bug_description, '(DATABASE PSU|DATABASE PATCH SET UPDATE)','i')
 10  order by bug_number;

BUG_NUMBER BUG_DESCRIPTION
---------- ------------------------------------------------------------
  17478514 DATABASE PATCH SET UPDATE 11.2.0.4.1 (INCLUDES CPUJAN2014)
  18031668 DATABASE PATCH SET UPDATE 11.2.0.4.2 (INCLUDES CPUAPR2014)
  18522509 DATABASE PATCH SET UPDATE 11.2.0.4.3 (INCLUDES CPUJUL2014)

SQL>

 

Voila! By copying the technique used by the Oracle 12c database, we’ve now performed an OPatch query from SQL against an 11g database. It took only about two minutes of setup.

Again this can be super handy if PSU patching is a regular activity for you or if you have to produce security compliance reports.

Unfortunately, though, Oracle 11g records less data in DBA_REGISTRY_HISTORY than 12c records in DBA_REGISTRY_SQLPATCH:

SQL> select comments, action_time from dba_registry_history
  2  where bundle_series like '%PSU' order by action_time;

COMMENTS                       ACTION_TIME
------------------------------ ------------------------------
PSU 11.2.0.4.3                 23-SEP-14 09.21.34.702876 AM

SQL>

 

Hence joining the two is more challenging, but certainly possible. The data (in this case the string “11.2.0.4.3”) is present in both DBA_REGISTRY_HISTORY and the XQuery output from my back-ported OPATCH_XML_INV external table.
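
A hedged sketch of such a join (untested; it extracts the version string from the patch description and matches it against the COMMENTS format shown above):

-- Hypothetical 11g reconciliation query: PSUs in the Oracle Home inventory,
-- with the matching database apply time from DBA_REGISTRY_HISTORY where one exists.
with inv as (
  select x.bug_description
    from (select XMLTYPE(XML_INVENTORY) patch_output from OPATCH_XML_INV) a,
         xmltable('INVENTORY/HOST/HOME/ONEOFF_LIST/INTERIM_PATCH/oneoff_inventory/base_bugs/bug'
            passing a.patch_output columns
            bug_description varchar2(256) path '@description') x
   where regexp_like(x.bug_description, '(DATABASE PSU|DATABASE PATCH SET UPDATE)', 'i')
)
select i.bug_description, h.action_time
  from inv i
  left join dba_registry_history h
    on h.comments = 'PSU ' || regexp_substr(i.bug_description, '\d+\.\d+\.\d+\.\d+');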

 

Conclusion

The new DBMS_QOPATCH API is a fantastic improvement in Oracle Database 12c when it comes to patch management (querying, reporting, and monitoring). And by understanding the concepts and components added in 12c, they can be manually back-ported to 11g. Both using DBMS_QOPATCH and back-porting it to 11g are actually pretty easy. In fact, the hardest part for me (not being very familiar with XQuery) was coming up with the query to parse the XMLTYPE data.

 

References

http://docs.oracle.com/database/121/ARPLS/d_qopatch.htm

Oracle Recommended Patches — Oracle Database (Doc ID 756671.1)

Quick Reference to Patch Numbers for Database PSU, SPU(CPU), Bundle Patches and Patchsets (Doc ID 1454618.1)

How do you apply a Patchset,PSU or CPU in a Data Guard Physical Standby configuration (Doc ID 278641.1)

Categories: DBA Blogs

Log Buffer #405, A Carnival of the Vanities for DBAs

Mon, 2015-01-19 19:34

This Log Buffer Edition rides on the wave of new ideas in the database realms. From Oracle technologies through MySQL to SQL Server, things are piping hot.

Oracle:

The Call for Papers for the Rittman Mead BI Forum 2015 is currently open, with abstract submissions accepted until January 18th, 2015.

The replicat process is the apply process within the Oracle GoldenGate environment.

“SELECT * FROM TABLE” Runs Out Of TEMP Space.

“log file sync” and the MTTR Advisor.

Working with XML files and APEX – Part 3: Detail elements in a row with OUTER JOIN.

SQL Server:

Get started testing your database code with the tSQLt framework.

Archiving Hierarchical, Deleted Transaction Using XML.

As a part of his “Function / Iterator Pairs” mini-series, Business Intelligence architect, Analysis Services Maestro, and author Bill Pearson introduces the DAX COUNT() and COUNTX() functions, discussing similarities and differences.

Free eBook: Fundamentals of SQL Server 2012 Replication

Importance of Statistics and How It Works in SQL Server – Part 1

MySQL:

Is Zero downtime even possible on RDS?

Monitor MySQL Database Users with VividCortex

Django with time zone support and MySQL

Using Perl to send tweets stored in a MySQL database to twitter

When ONLY_FULL_GROUP_BY Won’t See the Query Is Deterministic.

Categories: DBA Blogs

Log Buffer #406, A Carnival of the Vanities for DBAs

Thu, 2015-01-15 20:32

This Log Buffer Edition covers blog posts from various bloggers of Oracle, SQL Server and MySQL.

Oracle:

Sync tables: generate MERGE using Unique constraint.

What Hardware and Software Do YOU Want Oracle to Build?

There were a number of new features introduced in Ops Center 12.2.2. One of the shiny ones is an expansion of the backup and recovery capabilities to include Proxy Controllers.

Want to Be a Better Leader? Answer One Question.

Managing a remote Oracle Database instance with “Geographic Edition”.

SQL Server:

Learn how you can use SQLCop to prevent your developers from writing stored procedures that are named sp_ something.

Data Cleaning in SQL 2012 with Data Quality Services.

Stairway to PowerPivot and DAX – Level 9: Function / Iterator Function Pairs: The DAX MAX() and MAXX() Functions.

Options to Improve SQL Server Bulk Load Performance.

Dynamically Create Tables Based on an Access Table

MySQL:

Stored Procedures: critiques and defences.

JSON UDF functions 0.3.3 have been released.

Business Scalability, Operational Efficiency and Competitive Edge with MariaDB MaxScale 1.0 GA.

MySQL 5.7 labs and the HTTP Plugin – inserting, updating and deleting records in MySQL via HTTP.

Hyper-threading – how does it double CPU throughput?

Categories: DBA Blogs

Performance Problems with Dynamic Statistics in Oracle 12c

Tue, 2015-01-06 09:55

I’ve been running some tests recently with the new Oracle 12.1.0.2 In-Memory option and was faced with an unexpected performance problem. Here is a test case:

create table tst_1 as
with q as (select 1 from dual connect by level <= 100000)
select rownum id, 12345 val, mod(rownum,1000) ref_id  from q,q
where rownum <= 200000000;

Table created.

create table tst_2 as select rownum ref_id, lpad(rownum,10, 'a') name, rownum || 'a' name2
from dual connect by level <= 1000;

Table created.

begin
dbms_stats.gather_table_stats(
ownname          => user,
tabname          =>'TST_1',
method_opt       => 'for all columns size 1',
degree => 8
);
dbms_stats.gather_table_stats(
ownname          => user,
tabname          =>'TST_2',
method_opt       => 'for all columns size 1'
);
end;
/
PL/SQL procedure successfully completed.

alter table tst_1 inmemory;

Table altered.

select count(*) from tst_1;

COUNT(*)
----------
200000000

Waiting for in-memory segment population:

select segment_name, bytes, inmemory_size from v$im_segments;

SEGMENT_NAME         BYTES INMEMORY_SIZE
--------------- ---------- -------------
TST_1           4629463040    3533963264

Now let’s make a simple two table join:

select name, sum(val) from tst_1 a, tst_2 b where a.ref_id = b.ref_id and name2='50a'
group by name;

Elapsed: 00:00:00.17

The query runs pretty fast, and the execution plan features the brand new vector transformation:

Execution Plan
----------------------------------------------------------
Plan hash value: 213128033

--------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name                     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                          |     1 |    54 |  7756  (21)| 00:00:01 |
|   1 |  TEMP TABLE TRANSFORMATION        |                          |       |       |            |          |
|   2 |   LOAD AS SELECT                  | SYS_TEMP_0FD9D66FA_57B2B |       |       |            |          |
|   3 |    VECTOR GROUP BY                |                          |     1 |    24 |     5  (20)| 00:00:01 |
|   4 |     KEY VECTOR CREATE BUFFERED    | :KV0000                  |     1 |    24 |     5  (20)| 00:00:01 |
|*  5 |      TABLE ACCESS FULL            | TST_2                    |     1 |    20 |     4   (0)| 00:00:01 |
|   6 |   HASH GROUP BY                   |                          |     1 |    54 |  7751  (21)| 00:00:01 |
|*  7 |    HASH JOIN                      |                          |     1 |    54 |  7750  (21)| 00:00:01 |
|   8 |     VIEW                          | VW_VT_377C5901           |     1 |    30 |  7748  (21)| 00:00:01 |
|   9 |      VECTOR GROUP BY              |                          |     1 |    13 |  7748  (21)| 00:00:01 |
|  10 |       HASH GROUP BY               |                          |     1 |    13 |  7748  (21)| 00:00:01 |
|  11 |        KEY VECTOR USE             | :KV0000                  |   200K|  2539K|  7748  (21)| 00:00:01 |
|* 12 |         TABLE ACCESS INMEMORY FULL| TST_1                    |   200M|  1716M|  7697  (21)| 00:00:01 |
|  13 |     TABLE ACCESS FULL             | SYS_TEMP_0FD9D66FA_57B2B |     1 |    24 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - filter("NAME2"='50a')
   7 - access("ITEM_5"=INTERNAL_FUNCTION("C0") AND "ITEM_6"="C2")
  12 - inmemory(SYS_OP_KEY_VECTOR_FILTER("A"."REF_ID",:KV0000))
       filter(SYS_OP_KEY_VECTOR_FILTER("A"."REF_ID",:KV0000))

Note
-----
   - vector transformation used for this statement

After seeing such impressive performance, I decided to run the query in parallel:

select /*+ parallel(8) */ name, sum(val) from tst_1 a, tst_2 b
where a.ref_id = b.ref_id and name2='50a'
group by name;

Elapsed: 00:01:02.55

The query elapsed time suddenly jumped from 0.17 seconds to almost 1 minute and 3 seconds, although the second execution runs in 0.6 seconds.
The new plan is:

Execution Plan
----------------------------------------------------------
Plan hash value: 3623951262

-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |          |     1 |    29 |  1143  (26)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR                     |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)               | :TQ10001 |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY                    |          |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE                      |          |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,01 | PCWP |            |
|   5 |      PX SEND HASH                   | :TQ10000 |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,00 | P->P | HASH       |
|   6 |       HASH GROUP BY                 |          |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,00 | PCWP |            |
|*  7 |        HASH JOIN                    |          |   200K|  5664K|  1142  (26)| 00:00:01 |  Q1,00 | PCWP |            |
|   8 |         JOIN FILTER CREATE          | :BF0000  |     1 |    20 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|*  9 |          TABLE ACCESS FULL          | TST_2    |     1 |    20 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|  10 |         JOIN FILTER USE             | :BF0000  |   200M|  1716M|  1069  (21)| 00:00:01 |  Q1,00 | PCWP |            |
|  11 |          PX BLOCK ITERATOR          |          |   200M|  1716M|  1069  (21)| 00:00:01 |  Q1,00 | PCWC |            |
|* 12 |           TABLE ACCESS INMEMORY FULL| TST_1    |   200M|  1716M|  1069  (21)| 00:00:01 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("A"."REF_ID"="B"."REF_ID")
   9 - filter("NAME2"='50a')
  12 - inmemory(SYS_OP_BLOOM_FILTER(:BF0000,"A"."REF_ID"))
       filter(SYS_OP_BLOOM_FILTER(:BF0000,"A"."REF_ID"))

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - Degree of Parallelism is 8 because of hint

We can see a Bloom filter instead of a key vector, but this is not the issue. The problem comes from the “dynamic statistics used: dynamic sampling (level=AUTO)” note.
In the 10046 trace file I found nine dynamic sampling queries, one of which was this:

SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring
  optimizer_features_enable(default) no_parallel result_cache(snapshot=3600)
  */ SUM(C1)
FROM
 (SELECT /*+ qb_name("innerQuery")  */ 1 AS C1 FROM (SELECT /*+
  NO_VECTOR_TRANSFORM ORDERED */ "A"."VAL" "ITEM_1","A"."REF_ID" "ITEM_2"
  FROM "TST_1" "A") "VW_VTN_377C5901#0", (SELECT /*+ NO_VECTOR_TRANSFORM
  ORDERED */ "B"."NAME" "ITEM_3","B"."REF_ID" "ITEM_4" FROM "TST_2" "B" WHERE
  "B"."NAME2"='50a') "VW_VTN_EE607F02#1" WHERE ("VW_VTN_377C5901#0"."ITEM_2"=
  "VW_VTN_EE607F02#1"."ITEM_4")) innerQuery

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1     43.92      76.33          0          5          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3     43.92      76.33          0          5          0           0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 64     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  RESULT CACHE  56bn7fg7qvrrw1w8cmanyn3mxr (cr=0 pr=0 pw=0 time=0 us)
         0          0          0   SORT AGGREGATE (cr=0 pr=0 pw=0 time=8 us)
         0          0          0    HASH JOIN  (cr=0 pr=0 pw=0 time=4 us cost=159242 size=2600000 card=200000)
 200000000  200000000  200000000     TABLE ACCESS INMEMORY FULL TST_1 (cr=3 pr=0 pw=0 time=53944537 us cost=7132 size=800000000 card=200000000)
         0          0          0     TABLE ACCESS FULL TST_2 (cr=0 pr=0 pw=0 time=3 us cost=4 size=9 card=1)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  asynch descriptor resize                        1        0.00          0.00
  Disk file operations I/O                        1        0.00          0.00
  CSS initialization                              1        0.00          0.00
  CSS operation: action                           1        0.00          0.00
  direct path write temp                       6267        0.02         30.37
********************************************************************************

Vector transformation is disabled, an inefficient table order is forced by the ORDERED hint, and we end up waiting for hash table creation based on the huge TST_1 table.
The dynamic statistics feature has been greatly improved in Oracle 12c with support for joins and GROUP BY predicates; this is why we see such a join during parse time. The document Understanding Optimizer Statistics with Oracle Database 12c contains a “Dynamic Statistics (previously known as dynamic sampling)” section where the new functionality is described.

Let’s make a simpler test:

select /*+ parallel(2) */ ref_id, sum(val) from tst_1 a group by ref_id;

Execution Plan
----------------------------------------------------------
Plan hash value: 2527371111

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |          |  1000 |  9000 |  7949  (58)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR                   |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)             | :TQ10001 |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY                  |          |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE                    |          |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,01 | PCWP |            |
|   5 |      PX SEND HASH                 | :TQ10000 |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,00 | P->P | HASH       |
|   6 |       HASH GROUP BY               |          |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,00 | PCWP |            |
|   7 |        PX BLOCK ITERATOR          |          |   200M|  1716M|  4276  (21)| 00:00:01 |  Q1,00 | PCWC |            |
|   8 |         TABLE ACCESS INMEMORY FULL| TST_1    |   200M|  1716M|  4276  (21)| 00:00:01 |  Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - Degree of Parallelism is 2 because of hint

We can see a “dynamic statistics used” note again. It’s a simple query without predicates against a single table with pretty accurate statistics. From my point of view, there is no reason for dynamic sampling here at all.
Automatic dynamic sampling was introduced in 11g Release 2. A description of this feature can be found in this document: Dynamic sampling and its impact on the Optimizer.
“From Oracle Database 11g Release 2 onwards the optimizer will automatically decide if dynamic sampling will be useful and what dynamic sampling level will be used for SQL statements executed in parallel. This decision is based on size of the tables in the statement and the complexity of the predicates”.
It looks like the algorithm has changed in 12c, and dynamic sampling is now triggered for a broader set of use cases.
This behavior can be disabled at the statement, session, or system level using the fix control for bug 7452863. For example:
ALTER SESSION SET "_fix_control"='7452863:0';
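
At the statement level, the same fix control can be set with the OPT_PARAM hint. A sketch, reusing the test query from above:

select /*+ parallel(8) OPT_PARAM('_fix_control' '7452863:0') */ name, sum(val)
  from tst_1 a, tst_2 b
 where a.ref_id = b.ref_id and name2 = '50a'
 group by name;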

Summary

Dynamic statistics have been enhanced in Oracle 12c, but this can lead to longer parse times.
Automatic dynamic statistics are used more often in 12c, which can increase parse time in more cases than before.

Categories: DBA Blogs

Troubleshooting a Multipath Issue

Tue, 2015-01-06 09:37

Multipathing allows you to configure multiple paths from servers to storage arrays, providing I/O failover and load balancing. Linux uses the device mapper kernel framework to support multipathing.

In this post I will explain the steps taken to troubleshoot a multipath issue, which should provide a glimpse into the tools and technology involved. The problem was reported on a RHEL6 system where backup software was complaining that the device from which /boot is mounted does not exist.

Following is the device; you can see the device name is a WWID.

# df
Filesystem 1K-blocks Used Available Use% Mounted on
[..]
/dev/mapper/3600508b1001c725ab3a5a49b0ad9848ep1
198337 61002 127095 33% /boot

File /dev/mapper/3600508b1001c725ab3a5a49b0ad9848ep1 is missing under /dev/mapper.

# ll /dev/mapper/
total 0
crw-rw---- 1 root root 10, 58 Jul 9 2013 control
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpatha -> ../dm-1
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathap1 -> ../dm-2
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathb -> ../dm-0
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathc -> ../dm-3
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathcp1 -> ../dm-4
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathcp2 -> ../dm-5
lrwxrwxrwx 1 root root 7 Jul 9 2013 vgroot-lvroot -> ../dm-6
lrwxrwxrwx 1 root root 7 Jul 9 2013 vgroot-lvswap -> ../dm-7

From /etc/fstab, we find that the UUID of the device is specified:

UUID=6dfd9f97-7038-4469-8841-07a991d64026 /boot ext4 defaults 1 2

From blkid, we can see the device associated with the UUID. The blkid command prints the attributes of all block devices in the system.

# blkid
/dev/mapper/mpathcp1: UUID="6dfd9f97-7038-4469-8841-07a991d64026" TYPE="ext4"

Remounting the /boot mount point shows the user-friendly name /dev/mapper/mpathcp1.

# df
Filesystem 1K-blocks Used Available Use% Mounted on
[..]
/dev/mapper/mpathcp1 198337 61002 127095 33% /boot

So far, we can see that the system boots with the WWID as the device name, but the device name is later converted into the user-friendly name. In the multipath configuration, user_friendly_names is enabled.

# grep user_friendly_names /etc/multipath.conf
user_friendly_names yes

As per Red Hat documentation,

“When the user_friendly_names option in the multipath configuration file is set to yes, the name of a multipath device is of the form mpathn. For the Red Hat Enterprise Linux 6 release, n is an alphabetic character, so that the name of a multipath device might be mpatha or mpathb. In previous releases, n was an integer.”

As the system mounts the right disk after booting up, the problem should be with the user-friendly name configuration in the initramfs. Extracting the initramfs file and checking the multipath configuration shows that the user_friendly_names parameter is enabled.

# cat initramfs/etc/multipath.conf
defaults {
user_friendly_names yes

Now the interesting point is that /etc/multipath/bindings is missing from the initramfs, but the file exists on the system. The /etc/multipath/bindings file maps each WWID to its alias.

# cat /etc/multipath/bindings
# Multipath bindings, Version : 1.0
# NOTE: this file is automatically maintained by the multipath program.
# You should not need to edit this file in normal circumstances.
#
# Format:
# alias wwid
#
mpathc 3600508b1001c725ab3a5a49b0ad9848e
mpatha 36782bcb0005dd607000003b34ef072be
mpathb 36782bcb000627385000003ab4ef14636

An initramfs can be created using the dracut command.

# dracut -v -f test.img 2.6.32-131.0.15.el6.x86_64 2> /tmp/test.out

Building a test initramfs file shows that a newly created initramfs includes /etc/multipath/bindings.

# grep -ri bindings /tmp/test.out
I: Installing /etc/multipath/bindings

So this is what is happening: when the system boots up, multipath looks for /etc/multipath/bindings inside the initramfs for the aliases to use as user-friendly names. It cannot find the file, so it uses the WWIDs. After the system boots up, /etc/multipath/bindings is present and the device names are changed to user-friendly names.

It looks like the /etc/multipath/bindings file was created after kernel installation and initrd generation. This might have happened because the multipath configuration was done after kernel installation. Even if the system root device is not on multipath, it is possible for multipath to be included in the initrd; for example, this can happen if the system root device is on LVM. This should be the reason why multipath.conf was included in the initramfs but /etc/multipath/bindings was not.

To solve the issue we can rebuild the initrd and restart the system. Reinstalling the existing kernel or installing a new kernel would also fix the issue, as the initrd would be rebuilt in both cases.

# dracut -v -f 2.6.32-131.0.15.el6.x86_64
Categories: DBA Blogs

WARNING!!! Maximum Load 800 lbs

Mon, 2015-01-05 13:45

Is it just my recently discovered dumbness or am I really on to something here? I just returned home from a ritzy shopping mall and had to ride down an elevator with a brimming shopping trolley filled with items which my companion miraculously managed to buy at every sale in every corner of that mall. Anyway, before I recall that shopping bill and weep copiously, I just wanted to share something which always bugs me when I use elevators.

Inside the elevators, there are only two things to do. Either stare at the faces of other riders and enjoy the embarrassment of shifting eyes from face to face, or look at the walls. Like others, I always select the latter option. One thing which every elevator says to me is something like ‘Maximum Load 800 lbs’ or some other number.

Now that not only baffles me but also creates a passive panic. I quickly calculate my weight, then frantically and stealthily glance at the other bodies inside and guess the total weight in there. More often than not, it turns out that the weight of the bodies plus their heavyset trolleys exceeds or teeters on the edge of the warning lbs.

At this very moment, the other elevator hobbits notice my ashen face, violently trembling body, and sunken eyes, and follow them to the warning. Some get the point and follow the ritual of getting panicked, some try to recall the emergency ambulance number after watching us, and some just don’t give the flying heck.

My question here is why on Earth they write it inside the elevator, when it’s too late to do anything about it. Do they really mean it seriously, or have they just assumed that the weight will never be reached? I personally know a group of people, including me, who for sure could easily break that weight record. They all live nearby and use the same shopping center, and the same elevator of course. There is every chance that one day they will all arrive at the same time and use that elevator. What happens then? Or am I just paranoid beyond cure? Maybe the elevator people should write the warning outside and place a weighing machine at the elevator entrance. The machine would calculate the weight as people moved in, and close the doors as soon as it neared the limit.

Interestingly enough, I have seen databases used as elevators by applications and data loaders. Applications and data loads thrown at the database without any consideration of design, scalability, performance, or capacity management just throng it. Unlike elevators, I have seen databases break down or come to a grinding halt due to runaway, in other words overweight, applications.

That panics me in the same way as elevators do.

Categories: DBA Blogs

Log Buffer #404, A Carnival of the Vanities for DBAs

Fri, 2015-01-02 09:35

With the new year already in high gear, bloggers are leaving no stone unturned to come up with innovative ideas. This Log Buffer edition is keeping pace with them as always.

Oracle:

While playing with 12c Scott tried the upgrade to the DEFAULT column syntax that now allows sequences.

This is an age old question and of course the answer depends on how you say “SQL”.

Happy New Year! Upgraded 12.1.0.1 Grid Infrastructure to 12.1.0.2 and applied the Oct 2014 PSU. Had an error during rootupgrade.sh as well, due to the ASM spfile being on disk instead of on ASM diskgroup.

If you (already) created your first Oracle Service Bus 12c application/project with SOAP webservices and tried to deploy it to your IntegratedWeblogic server you might be familiar with this error.

Using Drag-Drop functionality in af:treeTable to move data between nodes.

SQL Server:

Hadoop has been making a lot of noise in the Big Data world.

Let’s look at two different ways of creating an HDInsight Cluster: creating an HDInsight Cluster through the Azure Management Portal, and creating an HDInsight Cluster through Windows Azure PowerShell.

Why you need test driven development.

SQL Server Data Import System to Alert For Missed Imports.

Create stunning visualizations with Power View in 20 minutes or less!

MySQL:

So assume you just uploaded the certificate you use to identify yourself to the MySQL server to Github or some other place it doesn’t belong…and there is no undelete.

MySQL Plugin for Oracle Enterprise Manager on VirtualBox: installation gotchas.

MariaDB slave restore using GTID & xtrabackup bug.

How small changes impact complex systems – MySQL example.

In this post, Louis talks about MHA GTID behavior; we test different cases and find something different from previous versions.

Categories: DBA Blogs

What it’s Like to Intern at Pythian

Tue, 2014-12-30 14:00

About eight months ago I started working as a Global Talent Acquisition Intern here at Pythian. It was my first co-op work placement, and my first experience in a professional work setting. I had never done anything like it before and was definitely nervous yet excited for the opportunity. Like any eager but nervous person, I researched as much as I could to prepare myself for what I’d be doing. I distinctly remember searching the Internet: “What does an intern do?” and being a little unnerved by some of the results that I came across. Being an intern couldn’t be that bad now, could it? Surely what I was reading was a collection of misconceptions—people wouldn’t lie and exaggerate things on the Internet now, would they? After having worked at Pythian for this long, I can finally put those misconceptions to rest and reveal firsthand what it’s really like to be an intern—well, here at Pythian at least.

“You don’t get to choose the people you work with.”

I worked as an intern in Talent Acquisition, the functional department within Human Resources whose main task is to acquire and retain the most important resource for Pythian: its people. Working in TA, I got to see exactly what Pythian meant when it said it hires only the top 5%. For a company that hires talent from all around the world, it takes a special group of people who understand the complexities of knowing not only where to look, but also who to look for in order to hire the right person. As a result, not only are the people being hired exceptionally talented across a vast spectrum of technologies, but the team doing the hiring, Talent Acquisition itself, is amazingly talented at what they do (they even won an award for it this year!).

Whether you’re an intern or a regular employee, there is one thing that both have in common, and it’s that you don’t get to choose the people you work with. As soon as I walked through the doors of the TA office, I knew that I was in good hands. My colleagues respected me and treated me as one of them from the get-go and sometimes even came to me for advice and assistance. It was humbling and a true reflection of the type of people who work at Pythian. I never once was treated like an intern, nor did I ever feel like one. There were no coffee runs that I had to do and there was never a situation where I had to push papers all day. This was nothing like what my Google searches had led me to believe.

“What do I know? I’m only an intern!”

Speaking of pushing papers all day and running coffee errands, many people often get the impression that working as an intern means having to do the work that no one else wants to do. In other words, an intern is simply there to do the mundane. That may be true for internships elsewhere but here at Pythian, the work I was doing was far from mundane—it was both challenging and quite interesting. There were times where I couldn’t believe what I was asked to do. Not because I didn’t want to do it, but because I couldn’t conceive why something so important was being given to me to figure out. What do I know? I’m only an intern! Within the first week I found myself working on projects whose impact would be felt by the entire team. I wasn’t being given work for the sake of work to keep me busy, rather, the work I was doing had a sense of tangibility to it. I was able to see the impact before, during and after and each time I knew that what I was doing was truly going to make a difference in the end.

True to their nature, internships allow for lots of opportunities when it comes to learning. Everywhere you turn in Pythian there is something to learn and someone willing to teach. As a Commerce student, there is only so much you can learn about organizations and how they operate just by sitting in a lecture. At Pythian, I got to experience it. Each and every day I was learning something new, something that I didn’t know before. I had the opportunity to really see the inner workings of a successful organization and understand how all the pieces of the company worked together towards a common purpose. There were also opportunities to participate in workshops with other co-op students on topics that ranged from business acumen to time management. These workshops concluded with a final session where each of us got to present some of the things we had been working on in front of our managers and supervisors. This experience was one of the many highlights of my time at Pythian, as it was a great way to interact and learn from other students like myself.

“Get comfortable at being uncomfortable.”

If there is one thing that I could tell future co-op students or interns here at Pythian it is to not be afraid to leave your comfort zone. Pythian’s CEO at the time, Andrew Waitman, said it best when he said, “get comfortable at being uncomfortable.” When given an opportunity to do something you haven’t done before, don’t think about whether or not you can do it—just go for it. You’ll be surprised what you are capable of when you get past that feeling of uncertainty and doubt. Making mistakes is inevitable, but it is also a necessary step in order to learn and grow. Always ask questions and have an open mind in everything that you do. When all is said and done, you will leave Pythian smarter, more confident, and more prepared for anything that comes your way.

Categories: DBA Blogs

Watch: Riak vs. Oracle

Tue, 2014-12-30 11:58

Every data platform has its value, and deciding which one will work best for your big data objectives can be tricky—Alex Gorbachev, Oracle ACE Director, Cloudera Champion of Big Data, and Chief Technology Officer at Pythian, has recorded a series of videos comparing the various big data platforms and presents use cases to help you identify which ones will best suit your needs.

Riak and Oracle are completely different platforms. Alex explains that “Oracle database is a very feature-rich platform,” while Riak “…is a very simple model… with very minimalistic features.” Riak is excellent for a public cloud infrastructure because it provides elasticity and scalability on demand. Learn about more use cases from Alex’s video Riak vs. Oracle.

Note: You may recognize this series, which was originally filmed back in 2013. After receiving feedback from our viewers that the content was great, but the video and sound quality were poor, we listened and re-shot the series.

Find the rest of the series here

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Big Data expertise.

Categories: DBA Blogs

Calculating Business Days in HiveQL

Tue, 2014-12-30 09:07

One of the common tasks in data processing is to calculate the number of days between two given dates. You can easily achieve this by using the Hive DATEDIFF function.
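
For example (a quick illustration of the call; DATEDIFF takes the end date first):

SELECT DATEDIFF('2015-01-19', '2015-01-05') FROM some_table;  -- returns 14

You can also get the weekday number by using this more obscure function: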

SELECT FROM_UNIXTIME(your_date,'u') FROM some_table;

This will return 1 for Monday and 7 for Sunday, based on the Java SimpleDateFormat patterns:
http://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html
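
For comparison, a plain calendar-day count with DATEDIFF looks like this (the table and column names here are just placeholders; DATEDIFF expects dates as 'yyyy-MM-dd' strings, with the end date first):

-- returns the number of days from start_date to end_date
SELECT DATEDIFF(end_date, start_date) FROM some_table;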

It becomes more challenging if you need to calculate the number of business days (excluding Saturdays and Sundays) between two dates. There is no built-in function in Hive to perform such a calculation, but fortunately it is rather simple to write your own Hive UDFs. I couldn’t quickly find any existing open source functions to solve this problem, so I wrote my own using Scala: https://github.com/pythian/hive-udfs

There are actually three functions in the hive-udfs package: CountBusinessDays, CountSaturdays, and CountSundays. These functions accept a start date and an end date as UNIX_TIMESTAMP values and return the count of full days of the given type within that interval, excluding the start and end points.
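
To give a feel for what such a UDF boils down to, here is a simplified sketch of the counting logic in Scala. This is not the actual code from the repository; it ignores time zones and assumes both timestamps fall after 1970-01-01:

object BusinessDays {
  // Counts business days (Mon-Fri) strictly between two UNIX timestamps (in seconds).
  // Simplified sketch only -- see https://github.com/pythian/hive-udfs for the real UDFs.
  def countBusinessDays(start: Long, end: Long): Long = {
    val firstDay = start / 86400 + 1          // first full day after the start point
    val lastDay  = end / 86400                // exclusive upper bound
    (firstDay until lastDay).count { day =>
      val dow = ((day % 7) + 4) % 7           // 0 = Sunday .. 6 = Saturday; epoch day 0 was a Thursday
      dow >= 1 && dow <= 5                    // Monday through Friday
    }.toLong
  }
}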

Here is how you can use these UDFs in Hive:

ADD JAR hdfs:///user/hive/udfs/pythian-hive-udfs-assembly-0.1.jar;
CREATE TEMPORARY FUNCTION count_business_days AS 'com.pythian.udf.CountBusinessDays';
CREATE TEMPORARY FUNCTION count_saturdays AS 'com.pythian.udf.CountSaturdays';
CREATE TEMPORARY FUNCTION count_sundays AS 'com.pythian.udf.CountSundays';

SELECT count_business_days(UNIX_TIMESTAMP(start_date), UNIX_TIMESTAMP(end_date)) FROM some_table;

The code is open source and free to use. Comments and suggestions are always welcome.

Categories: DBA Blogs

How to Migrate a Database Using GoldenGate

Tue, 2014-12-30 09:06

There are many ways to migrate a database from one server to another, such as Data Pump, RMAN, etc. Using a combination of Data Pump and GoldenGate to migrate your database cross-platform can minimize your downtime to as little as three minutes.

This method can be used for a database of any size, from MB to TB level. Here is a simple example to demonstrate the idea.

As a prerequisite, I assume that GoldenGate has already been configured on both the source and target databases. To simulate an OLTP workload, the 1st step is to have a job in my source database “SOURCE” that keeps inserting a record into the table HOWIE.TEST, as shown below.

CREATE PROCEDURE howie.insert_test
IS
BEGIN
   insert into test values(test_seq.nextval,sysdate);
   commit;
END;
/ 
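
The post doesn’t show how this procedure is driven; one way to set up such a job is with DBMS_SCHEDULER. This is a minimal sketch, where the job name and one-minute interval are my assumptions (chosen to match the roughly one-minute spacing in the output below):

BEGIN
   -- Hypothetical job definition: run HOWIE.INSERT_TEST once a minute
   DBMS_SCHEDULER.CREATE_JOB(
      job_name        => 'HOWIE.INSERT_TEST_JOB',
      job_type        => 'STORED_PROCEDURE',
      job_action      => 'HOWIE.INSERT_TEST',
      repeat_interval => 'FREQ=MINUTELY;INTERVAL=1',
      enabled         => TRUE);
END;
/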

SQL> SELECT * FROM HOWIE.TEST ORDER BY ID;

        ID MOD_DATE
---------- -------------------
         1 12/13/2014 21:19:17
         2 12/13/2014 21:24:03
         3 12/13/2014 21:31:11
         
		 .....................
           
        21 12/15/2014 19:14:25
        22 12/15/2014 19:15:25
        23 12/15/2014 19:16:25

23 rows selected.

2nd step, start the capture process on the source database and stop the replicate process on the target database

SOURCE:

GGSCI (11gGG1) 4> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT1        44:56:46      00:00:01

TARGET:

GGSCI (11gGG2) 6> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED
REPLICAT    STOPPED     REP1        00:00:00      00:00:53

3rd step, export the source database using Data Pump with the flashback_scn parameter, so the dump is consistent as of that SCN

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
     284867
	 
[ggadmin@11gGG1 11.2.0]$ expdp directory=DATA_PUMP_DIR dumpfile=source.dmp logfile=source.log schemas=HOWIE flashback_scn=284867

4th step, transfer the dumpfile to the target server

[ggadmin@11gGG1 11.2.0]$ scp /u01/app/oracle/admin/SOURCE/dpdump/source.dmp 11gGG2:/u01/app/oracle/admin/TARGET/dpdump/

5th step, import the dumpfile into the target database.

[ggadmin@11gGG2 11.2.0]$ impdp directory=DATA_PUMP_DIR dumpfile=source.dmp logfile=source.log schemas=HOWIE

6th step, verify the data in the target database

SQL> SELECT * FROM HOWIE.TEST ORDER BY ID;

        ID MOD_DATE
---------- -------------------
         1 12/13/2014 21:19:17
         2 12/13/2014 21:24:03
         3 12/13/2014 21:31:11

		 ...............

    	21 12/15/2014 19:14:25
        22 12/15/2014 19:15:25
        23 12/15/2014 19:16:25

23 rows selected.

7th step, start the replicate process on the target database using ATCSN with the SCN captured in the 3rd step, so replication picks up exactly where the export snapshot left off

GGSCI (11gGG2) 8> start rep rep1 atcsn 284867

Sending START request to MANAGER ...

8th step, confirm the data has been synced

SQL> SELECT * FROM HOWIE.TEST ORDER BY ID;

        ID MOD_DATE
---------- -------------------
         1 12/13/2014 21:19:17
         2 12/13/2014 21:24:03
         3 12/13/2014 21:31:11
         4 12/13/2014 21:44:33
         5 12/13/2014 21:45:33
         6 12/13/2014 21:46:33
         7 12/13/2014 21:47:33

		 ...............

        60 12/15/2014 19:53:33
        61 12/15/2014 19:54:33
        62 12/15/2014 19:55:33
        63 12/15/2014 19:56:33

63 rows selected.

Action Plan Summary

1. Source: configure the GoldenGate capture process. Target: configure the GoldenGate replicate process.
2. Source: start the capture process. Target: do not start the replicate process yet.
3. Source: start the export from the source database, marking the SCN when the export starts.
4. Source: when the export completes, start the SCP of the dumpfile to the target server.
5. Target: when the SCP completes, start the import on the target database using the dumpfile.
6. Target: import finished.
7. Target: start the replicate process using ATCSN.
8. Target: replicate applies all changes.
9. Source: when the capture lag is zero, stop the capture process. Target: wait until replicate has applied all changes (replicate lag should be zero), then stop the replicate process.
10. Source and Target: redirect database connections to the target database.
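
For the lag checks in step 9, GGSCI provides the LAG command; a generic sketch using the group names from this example:

GGSCI> LAG EXTRACT EXT1
GGSCI> LAG REPLICAT REP1

The INFO ALL output shown earlier also reports Lag at Chkpt, but the LAG command gives a more current reading.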
Categories: DBA Blogs

Merging Apps Patches in Oracle EBS R12.2

Tue, 2014-12-30 09:04

It’s public knowledge that the traditional patching tool in Oracle EBS, “adpatch”, has been replaced with the “adop” utility. It’s also known that the adop utility automatically merges patches when more than one patch is specified in the command line arguments. So what’s the need for a blog post on merging patches when it’s taken care of automatically?

This blog is for people who like to dig a little deeper into EBS to shave off some downtime during upgrades. We save some downtime if we merge the patches ahead of time instead of letting adop do it during the upgrade window. This is especially true when you are applying big patches like 12.2.4.

Merging patches is done using the same utility as in earlier versions, “admrgpch”, except that a few extra steps are needed after merging the patches.

In EBS 12.2, after merging the patches using admrgpch, we also need to copy the actual unzipped patches that we merged into the destination directory. This is required because the adop utility seems to look for these patches during the prepare phase. If you don’t copy the unzipped patch directories, you can still apply the merged patch, but when you run the prepare phase of the next patching cycle, it will fail because it looks for the actual patch directories inside the merged patch directory.

Here is how a sample merging procedure will look like in EBS R12.2

# merge patches 111111 & 222222
$ pwd
  /u01/EBS/fs_ne/EBSapps/patch
$ ls
  111111 222222 
$ mkdir dest
$ admrgpch -s /u01/EBS/fs_ne/EBSapps/patch -d /u01/EBS/fs_ne/EBSapps/patch/dest
$ cd dest
$ pwd
  /u01/EBS/fs_ne/EBSapps/patch/dest
$ ls
  fnd u_merged.drv 

# After admrgpch is finished, we need to copy patch directories into the dest dir

$ cd ..
$ mv 111111 /u01/EBS/fs_ne/EBSapps/patch/dest
$ mv 222222 /u01/EBS/fs_ne/EBSapps/patch/dest
$ cd /u01/EBS/fs_ne/EBSapps/patch/dest
$ ls
  111111 222222 fnd u_merged.drv

# Now you can apply the merged patch using adop phase=apply
Categories: DBA Blogs

Log Buffer #403, A Carnival of the Vanities for DBAs

Tue, 2014-12-30 09:02

As 2014 draws to a close, this Log Buffer edition looks back proudly at some of this week’s blog posts covering what’s happening in and around the database field.

Oracle:

Fusion Applications provides web services that allow external systems to integrate with Fusion Applications.

OEM 12c Release 4 has several new EM CLI verbs, including manage_agent_partnership.

To reflect the Oracle Retail enterprise applications newest code base, the 14.1 release of the Oracle Retail application enterprise includes new End User documents, considerable updates to existing End User documentation sets, and a wide range of new White Papers and Technical Papers.

If you programmatically change data in your Oracle MAF application then you need to ensure the UI reflects those data changes.

Configuring MDS Customisation Layer and Layer Value Combination in ADF.

SQL Server:

15 Quick Short Interview Questions Useful When Hiring SQL Developers.

Business Intelligence architect, Analysis Services Maestro, and author Bill Pearson exposes the DAX SUM() and SUMX() functions, comparing and contrasting the two.

SQL Server Data Aggregation for Data with Different Sampling Rates.

SSRS continues to use SET FMTONLY ON even though it has many problems. How can we cope?

When a hospital’s mission-critical database fails at Christmas, disaster for the hospital – and its hapless DBA – seems certain.

MySQL:

Does your dataset consist of InnoDB tables with large BLOB data such that the data is stored in external BLOB pages?

InnoDB crash recovery speed in MySQL 5.6.

Somebody wanted to know how to find any non-unique indexes in MySQL’s information_schema.

What is a data type?

File carving methods for the MySQL DBA.

Categories: DBA Blogs