Pythian Group

Official Pythian Blog - Love Your Data

What Does oracle-rdbms-server-11gR2-preinstall Not Do?

Mon, 2014-04-21 08:47

I was working on a project to clone an Oracle installation to a new Linux server, and this should be simple, since the oracle-rdbms-server-11gR2-preinstall RPM can be used to configure the server, right?

So simple that I did not read the documentation.

I did, however, review Requirements for Installing Oracle 11gR2 RDBMS on RHEL6 or OL6 64-bit (x86-64) (Doc ID 1441282.1), and what the RPM sets up is not quite the same as what those requirements list.

From Oracle Database Quick Installation Guide 11g Release 2 (11.2) for Linux x86-64 the following packages are required:

binutils-2.20.51.0.2-5.11.el6 (x86_64)
compat-libcap1-1.10-1 (x86_64)
compat-libstdc++-33-3.2.3-69.el6 (x86_64)
compat-libstdc++-33-3.2.3-69.el6.i686
gcc-4.4.4-13.el6 (x86_64)
gcc-c++-4.4.4-13.el6 (x86_64)
glibc-2.12-1.7.el6 (i686)
glibc-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6.i686
ksh
libgcc-4.4.4-13.el6 (i686)
libgcc-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6.i686
libstdc++-devel-4.4.4-13.el6 (x86_64)
libstdc++-devel-4.4.4-13.el6.i686
libaio-0.3.107-10.el6 (x86_64)
libaio-0.3.107-10.el6.i686
libaio-devel-0.3.107-10.el6 (x86_64)
libaio-devel-0.3.107-10.el6.i686
make-3.81-19.el6
sysstat-9.0.4-11.el6 (x86_64)
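One way to spot the gap between this list and what is actually installed is a plain-shell set difference. This is only a sketch: the package lists below are abbreviated examples, and on a real host the installed list would come from `rpm -qa --queryformat '%{NAME}.%{ARCH}\n'`.

```shell
# Required name.arch pairs (abbreviated from the guide's list above)
required='compat-libstdc++-33.x86_64 compat-libstdc++-33.i686
glibc-devel.x86_64 glibc-devel.i686 libaio.x86_64 libaio.i686'

# Example installed list -- on a real host, generate it with rpm -qa
installed='compat-libstdc++-33.x86_64 glibc-devel.x86_64 libaio.x86_64'

missing=''
for p in $required; do
  case " $installed " in
    *" $p "*) ;;                         # already installed
    *)        missing="$missing $p" ;;   # not found
  esac
done
echo "missing:$missing"
# -> missing: compat-libstdc++-33.i686 glibc-devel.i686 libaio.i686
```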

The oracle-rdbms-server-11gR2-preinstall RPM was installed:

$  rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep oracle-rdbms
oracle-rdbms-server-11gR2-preinstall-1.0-9.el6 (x86_64)

The RDA Health Check Validation Engine (HCVE) was run with the 11g R2 (11.2.0) Preinstall (Linux) rule set:

$ ./rda.sh -T hcve
Processing HCVE tests ...
Available Pre-Installation Rule Sets:
   1.  Oracle Database 10g R1 (10.1.0) Preinstall (Linux)
   2.  Oracle Database 10g R2 (10.2.0) Preinstall (Linux)
   3.  Oracle Database 11g R1 (11.1) Preinstall (Linux)
   4.  Oracle Database 11g R2 (11.2.0) Preinstall (Linux)
   5.  Oracle Database 12c R1 (12.1.0) Preinstallation (Linux)

Enter the HCVE rule set number or 0 to cancel the test
Hit "Return" to accept the default (0)
> 4

Performing HCVE checks ...
Enter value for < Planned ORACLE_HOME location >
Hit "Return" to accept the default (/u01/app/oracle/product/11.2.0.4/dbhome_1)
>

Enter value for < JDK Home >
>

Test "Oracle Database 11g R2 (11.2.0) Preinstall (Linux)" executed at 18-Apr-2014 05:28:37

Test Results
~~~~~~~~~~~~

ID     NAME                 RESULT  VALUE
====== ==================== ======= ==========================================
A00010 OS Certified?        PASSED  Adequate
A00020 User in /etc/passwd? PASSED  userOK
A00040 Group in /etc/group? PASSED  GroupOK
A00050 Enter ORACLE_HOME    RECORD  /u01/app/oracle/product/11.2.0.4/dbho...
A00060 ORACLE_HOME Valid?   PASSED  OHexists
A00070 O_H Permissions OK?  PASSED  CorrectPerms
A00080 oraInventory Permiss PASSED  oraInventoryOK
A00090 Got Software Tools?  PASSED  ld_nm_ar_make_found
A00100 Umask Set to 022?    PASSED  UmaskOK
A00120 Limits Processes     PASSED  Adequate
A00125 Limits Stacksize     PASSED  Adequate
A00130 Limits Descriptors   PASSED  Adequate
A00140 LDLIBRARYPATH Unset? PASSED  UnSet
A00180 JAVA_HOME Unset?     PASSED  UnSet
A00190 Enter JDK Home       RECORD
A00200 JDK Version          FAILED  JDK home is missing
A00210 Other O_Hs in PATH?  PASSED  NoneFound
A00220 Other OUI Up?        PASSED  NoOtherOUI
A00230 Temp Adequate?       PASSED  TempSpaceOK
A00240 Disk Space OK?       PASSED  DiskSpaceOK
A00250 Swap (in MB)         RECORD  6143
A00260 RAM (in MB)          PASSED  3954
A00270 Swap OK?             PASSED  SwapToRamOK
A00280 Network              PASSED  Connected
A00290 IP Address           RECORD  NotFound
A00300 Domain Name          RECORD  NotFound
A00310 DNS Lookup           FAILED  Host not known
A00320 /etc/hosts Format    PASSED  Adequate IPv4 entry
A00330 Kernel Parameters OK PASSED  KernelOK
A00380 Tainted Kernel?      PASSED  NotVerifiable
A00400 ip_local_port_range  PASSED  RangeOK
A00480 OL4 RPMs OK?         SKIPPED NotOL4
A00490 OL5 RPMs OK?         SKIPPED NotOL5
A00500 OL6 RPMs OK?         FAILED  [compat-libstdc++-33(i686)] not insta...
A00530 RHEL4 RPMs OK?       SKIPPED NotRedHat
A00540 RHEL5 RPMs OK?       SKIPPED NotRedHat
A00550 RHEL6 RPMs OK?       SKIPPED NotRedHat
A00570 SLES10 RPMs OK?      SKIPPED NotSuSE
A00580 SLES11 RPMs OK?      SKIPPED NotSuSE
Result file: output/collect/DB_HCVE_A_DB11R2_lin_res.htm

Just a note: an /etc/hosts entry in the following format will fail validation for A00320 (/etc/hosts Format):

192.168.56.11  arrow arrow.localdomain

since the engine expects the fully qualified name first:

192.168.56.11  arrow.localdomain arrow
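This ordering can also be checked from the shell: the engine wants the dotted, fully qualified name in the second field. The `check_hosts_line` helper below is hypothetical, written just for this post, and only flags short-name-first entries.

```shell
check_hosts_line() {
  # Split the entry into fields: IP, then names; the FQDN should come second
  set -- $1
  case "$2" in
    *.*) echo "OK: FQDN first ($2)" ;;
    *)   echo "FAIL: short name first ($2)" ;;
  esac
}

check_hosts_line "192.168.56.11  arrow arrow.localdomain"   # -> FAIL: short name first (arrow)
check_hosts_line "192.168.56.11  arrow.localdomain arrow"   # -> OK: FQDN first (arrow.localdomain)
```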

A00500 OL6 RPMs OK? FAILED because the following RPMs were missing:

[compat-libstdc++-33(i686)] not installed
[glibc-devel(i686)] not installed
[libaio(i686)] not installed
[libaio-devel(i686)] not installed
[libstdc++(i686)] not installed
[libstdc++-devel(i686)] not installed

The RPM check failed because only the 64-bit (x86_64) versions were installed, not the 32-bit (i686) ones:

# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | egrep '^compat-libstdc++|glibc-devel|libaio|^libstdc++'|sort
compat-libstdc++-296-2.96-144.el6 (i686)
compat-libstdc++-33-3.2.3-69.el6 (x86_64)
glibc-devel-2.12-1.132.el6 (x86_64)
libaio-0.3.107-10.el6 (x86_64)
libaio-devel-0.3.107-10.el6 (x86_64)
libstdc++-4.4.7-4.el6 (x86_64)
libstdc++-devel-4.4.7-4.el6 (x86_64)

Let’s install the missing compat-libstdc++-33 package.

# yum install compat-libstdc++-33
Loaded plugins: security
Setting up Install Process
Package compat-libstdc++-33-3.2.3-69.el6.x86_64 already installed and latest version
Nothing to do

I needed to specify the .i686 architecture explicitly, since the x86_64 package was already installed. Learned something new here.

# yum install compat-libstdc++-33.i686
Loaded plugins: security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package compat-libstdc++-33.i686 0:3.2.3-69.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=====================================================================================================================================================
 Package                                  Arch                      Version                             Repository                              Size
=====================================================================================================================================================
Installing:
 compat-libstdc++-33                      i686                      3.2.3-69.el6                        public_ol6_latest                      188 k

Transaction Summary
=====================================================================================================================================================
Install       1 Package(s)

Total download size: 188 k
Installed size: 716 k
Is this ok [y/N]: y
Downloading Packages:
compat-libstdc++-33-3.2.3-69.el6.i686.rpm                                                                                     | 188 kB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : compat-libstdc++-33-3.2.3-69.el6.i686                                                                                             1/1
  Verifying  : compat-libstdc++-33-3.2.3-69.el6.i686                                                                                             1/1

Installed:
  compat-libstdc++-33.i686 0:3.2.3-69.el6

Complete!
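The remaining i686 packages were installed the same way. The full one-shot command can be composed by appending the architecture suffix to each name; the sketch below only builds and prints the command rather than running it, and assumes the same OL6 yum repository is available.

```shell
# The other 32-bit packages the HCVE check flagged as missing
pkgs='glibc-devel libaio libaio-devel libstdc++ libstdc++-devel'

cmd='yum install -y'
for p in $pkgs; do
  cmd="$cmd $p.i686"    # append the explicit architecture suffix
done
echo "$cmd"
# -> yum install -y glibc-devel.i686 libaio.i686 libaio-devel.i686 libstdc++.i686 libstdc++-devel.i686
```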

Let’s verify the RPMs.

# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | egrep '^compat-libstdc++|glibc-devel|libaio|^libstdc++'|sort
compat-libstdc++-296-2.96-144.el6 (i686)
compat-libstdc++-33-3.2.3-69.el6 (i686)
compat-libstdc++-33-3.2.3-69.el6 (x86_64)
glibc-devel-2.12-1.132.el6 (i686)
glibc-devel-2.12-1.132.el6 (x86_64)
libaio-0.3.107-10.el6 (i686)
libaio-0.3.107-10.el6 (x86_64)
libaio-devel-0.3.107-10.el6 (i686)
libaio-devel-0.3.107-10.el6 (x86_64)
libstdc++-4.4.7-4.el6 (i686)
libstdc++-4.4.7-4.el6 (x86_64)
libstdc++-devel-4.4.7-4.el6 (i686)
libstdc++-devel-4.4.7-4.el6 (x86_64)

Let’s run RDA HCVE again.

[oracle@arrow:]/home/oracle/rda
$ ./rda.sh -T hcve
Processing HCVE tests ...
Available Pre-Installation Rule Sets:
   1.  Oracle Database 10g R1 (10.1.0) Preinstall (Linux)
   2.  Oracle Database 10g R2 (10.2.0) Preinstall (Linux)
   3.  Oracle Database 11g R1 (11.1) Preinstall (Linux)
   4.  Oracle Database 11g R2 (11.2.0) Preinstall (Linux)
   5.  Oracle Database 12c R1 (12.1.0) Preinstallation (Linux)

Enter the HCVE rule set number or 0 to cancel the test
Hit "Return" to accept the default (0)
> 4

Performing HCVE checks ...
Enter value for < Planned ORACLE_HOME location >
Hit "Return" to accept the default (/u01/app/oracle/product/11.2.0.4/dbhome_1)
>

Enter value for < JDK Home >
>

Test "Oracle Database 11g R2 (11.2.0) Preinstall (Linux)" executed at 18-Apr-2014 05:57:52

Test Results
~~~~~~~~~~~~

ID     NAME                 RESULT  VALUE
====== ==================== ======= ==========================================
A00010 OS Certified?        PASSED  Adequate
A00020 User in /etc/passwd? PASSED  userOK
A00040 Group in /etc/group? PASSED  GroupOK
A00050 Enter ORACLE_HOME    RECORD  /u01/app/oracle/product/11.2.0.4/dbho...
A00060 ORACLE_HOME Valid?   PASSED  OHexists
A00070 O_H Permissions OK?  PASSED  CorrectPerms
A00080 oraInventory Permiss PASSED  oraInventoryOK
A00090 Got Software Tools?  PASSED  ld_nm_ar_make_found
A00100 Umask Set to 022?    PASSED  UmaskOK
A00120 Limits Processes     PASSED  Adequate
A00125 Limits Stacksize     PASSED  Adequate
A00130 Limits Descriptors   PASSED  Adequate
A00140 LDLIBRARYPATH Unset? PASSED  UnSet
A00180 JAVA_HOME Unset?     PASSED  UnSet
A00190 Enter JDK Home       RECORD
A00200 JDK Version          FAILED  JDK home is missing
A00210 Other O_Hs in PATH?  PASSED  NoneFound
A00220 Other OUI Up?        PASSED  NoOtherOUI
A00230 Temp Adequate?       PASSED  TempSpaceOK
A00240 Disk Space OK?       PASSED  DiskSpaceOK
A00250 Swap (in MB)         RECORD  6143
A00260 RAM (in MB)          PASSED  3954
A00270 Swap OK?             PASSED  SwapToRamOK
A00280 Network              PASSED  Connected
A00290 IP Address           RECORD  NotFound
A00300 Domain Name          RECORD  NotFound
A00310 DNS Lookup           FAILED  Host not known
A00320 /etc/hosts Format    PASSED  Adequate IPv4 entry
A00330 Kernel Parameters OK PASSED  KernelOK
A00380 Tainted Kernel?      PASSED  NotVerifiable
A00400 ip_local_port_range  PASSED  RangeOK
A00480 OL4 RPMs OK?         SKIPPED NotOL4
A00490 OL5 RPMs OK?         SKIPPED NotOL5
A00500 OL6 RPMs OK?         PASSED  OL6rpmsOK
A00530 RHEL4 RPMs OK?       SKIPPED NotRedHat
A00540 RHEL5 RPMs OK?       SKIPPED NotRedHat
A00550 RHEL6 RPMs OK?       SKIPPED NotRedHat
A00570 SLES10 RPMs OK?      SKIPPED NotSuSE
A00580 SLES11 RPMs OK?      SKIPPED NotSuSE
Result file: output/collect/DB_HCVE_A_DB11R2_lin_res.htm

One more thing the preinstall RPM does not do: it does not edit /etc/pam.d/login to include the following:

session    required     pam_limits.so
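Adding that entry yourself is a one-liner. The sketch below works on a copy under /tmp so the real PAM configuration is not touched until you have reviewed the change (the /tmp path is only for illustration):

```shell
# Work on a copy of the PAM config; fall back to an empty file for the demo
cp /etc/pam.d/login /tmp/login.new 2>/dev/null || : > /tmp/login.new

# Append the pam_limits entry only if it is not already there
grep -q 'pam_limits.so' /tmp/login.new || \
  echo 'session    required     pam_limits.so' >> /tmp/login.new

grep 'pam_limits.so' /tmp/login.new
```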

So what is the lesson learned? Trust, but verify. Alternatively, RTFM!

Categories: DBA Blogs

Log Buffer #367, A Carnival of the Vanities for DBAs

Thu, 2014-04-17 07:47

Log Buffer is globe trotting this week from end to end. From every nook, it has brought you some sparkling gems of blog posts. Enjoy!!!

Oracle:

On April 16th, Oracle announced the Oracle Virtual Compute Appliance X4-2.

Do your Cross Currency Receipts fail Create Accounting?

Oracle Solaris 11.2 Launch in NY.

WebCenter Portal 11gR1 dot8 Bundle Patch 3 (11.1.1.8.3) Released.

What do Sigma, a Leadership class and a webcast have in common?

SQL Server:

Stairway to SQL Server Agent – Level 9: Understanding Jobs and Security.

SQL Server Hardware will provide the fundamental knowledge and resources you need to make intelligent decisions about choice, and optimal installation and configuration, of SQL Server hardware, operating system and the SQL Server RDBMS.

SQL Server 2014 In-Memory OLTP Dynamic Management Views.

Why every SQL Server installation should be a cluster.

SQL Server Backup Crib Sheet.

MySQL:

Looking for Slave Consistency: Say Yes to –read-only and No to SUPER and –slave-skip-errors.

More details on disk IO-bound, update only for MongoDB, TokuMX and InnoDB.

Making the MTR rpl suite GTID_MODE Agnostic.

Open Source Appreciation Day’ draws OpenStack, MySQL and CentOS faithful.

MongoDB, TokuMX and InnoDB for disk IO-bound, update-only by PK.

Categories: DBA Blogs

Why is Affinity Mask Negative in sp_configure?

Tue, 2014-04-15 07:56

While looking at a SQL Server health report, I found the affinity mask parameter in the sp_configure output showing a negative value.

name                                minimum     maximum     config_value run_value
----------------------------------- ----------- ----------- ------------ -----------
affinity mask                       -2147483648 2147483647  -1066394617  -1066394617

Affinity mask is a SQL Server configuration option used to assign processors to specific threads for improved performance. To learn more about affinity mask, read this. Usually, the value for affinity mask in sp_configure is a positive integer (in decimal format). The article linked above shows an example of a binary bit mask and the corresponding decimal value to be set in sp_configure.
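As a small illustration of the bit-mask-to-decimal mapping (my own example, not taken from the linked article): a mask that binds SQL Server to the first four CPUs sets the four low bits, and the decimal form of that binary number is what goes into sp_configure.

```shell
# Bits 0-3 set (binary 1111 = hex 0xF) -> CPUs 0,1,2,3
mask=$((0x0F))
printf 'affinity mask for CPUs 0-3: %d (0x%X)\n' "$mask" "$mask"
# -> affinity mask for CPUs 0-3: 15 (0xF)
```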

 

I was curious to find out why the value of affinity mask could be negative, since according to BOL (http://technet.microsoft.com/en-us/library/ms187104(v=sql.105).aspx) the values for affinity mask are as follows:

· A one-byte affinity mask covers up to 8 CPUs in a multiprocessor computer.

· A two-byte affinity mask covers up to 16 CPUs in a multiprocessor computer.

· A three-byte affinity mask covers up to 24 CPUs in a multiprocessor computer.

· A four-byte affinity mask covers up to 32 CPUs in a multiprocessor computer.

· To cover more than 32 CPUs, configure a four-byte affinity mask for the first 32 CPUs and up to a four-byte affinity64 mask for the remaining CPUs.

 

Time to unfold the mystery. Windows Server 2008 R2 supports more than 64 logical processors. From the ERRORLOG, I can see there are 40 logical processors on the server:

 

2014-03-31 18:18:18.18 Server      Detected 40 CPUs. This is an informational message; no user action is required.

 

Further, going down in the ERRORLOG I see this server has four NUMA nodes configured.

 

Processor affinity turned on: node 0, processor mask 0x0000000000001c00.
Processor affinity turned on: node 1, processor mask 0x0000000000000007.
Processor affinity turned on: node 2, processor mask 0x0000000000700000.
Processor affinity turned on: node 3, processor mask 0x00000001c0000000.
:
Node configuration: node 0: CPU mask: 0x00000000000ffc00:0 Active CPU mask: 0x0000000000001c00:0.
Node configuration: node 1: CPU mask: 0x00000000000003ff:0 Active CPU mask: 0x0000000000000007:0.
Node configuration: node 2: CPU mask: 0x000000003ff00000:0 Active CPU mask: 0x0000000000700000:0.
Node configuration: node 3: CPU mask: 0x000000ffc0000000:0 Active CPU mask: 0x00000001c0000000:0.

 

These were hard NUMA nodes. No soft NUMA nodes were configured on the server (no related registry keys exist).

 

An important thing to note is that the affinity mask value for sp_configure ranges from -2147483648 to 2147483647; that is 2147483648 + 2147483647 + 1 = 4294967296 = 2^32 values, the range of the int data type. Hence the affinity mask value from sp_configure is not sufficient to hold more than 64 CPUs. To deal with this, ALTER SERVER CONFIGURATION was introduced in SQL Server 2008 R2 to support and set the processor affinity for more than 64 CPUs. However, the value of affinity mask in sp_configure, in such cases, is still an *adjusted* value, which we are going to work out below.

 

Let me paste the snippet from ERRORLOG again:

 

Processor affinity turned on: node 0, processor mask 0x0000000000001c00.
Processor affinity turned on: node 1, processor mask 0x0000000000000007.
Processor affinity turned on: node 2, processor mask 0x0000000000700000.
Processor affinity turned on: node 3, processor mask 0x00000001c0000000.

 

As it says, the values above are the per-node processor masks, i.e. the processor affinity or affinity mask. These values correspond to online_scheduler_mask in sys.dm_os_nodes, which makes up the ultimate value for affinity mask in sp_configure. Ideally, affinity mask should be the sum of these values. Let’s add these hexadecimal values using the Windows calculator (choose Programmer from the View menu):

 

  0x0000000000001c00
+ 0x0000000000000007
+ 0x0000000000700000
+ 0x00000001c0000000
--------------------
= 0x00000001C0701C07
= 7523539975 (decimal)

 

So, affinity mask in sp_configure should have been equal to 7523539975. Since this number is greater than the limit of 2^32, i.e. 4294967296, we see an *adjusted* value (apparently a negative value). The reason I call it an *adjusted* value is that the sum of the processor mask values (in decimal) is adjusted: the int range, 4294967296, is subtracted from it repeatedly until the result falls within the int range. Here is an example which explains the theory:

 

7523539975 - 4294967296 - 4294967296 = -1066394617 = the negative value seen in sp_configure

name                                minimum     maximum     config_value run_value
----------------------------------- ----------- ----------- ------------ -----------
affinity mask                       -2147483648 2147483647  -1066394617  -1066394617

That explains why affinity mask shows up as a negative number in sp_configure.
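The whole derivation can be reproduced with plain shell arithmetic, since the shell evaluates 64-bit signed integers. This is a sketch using the four node masks from the ERRORLOG above:

```shell
# Sum the four per-node processor masks
sum=$(( 0x0000000000001c00 + 0x0000000000000007 \
      + 0x0000000000700000 + 0x00000001c0000000 ))
echo "raw sum: $sum"               # -> raw sum: 7523539975

# Fold into the signed 32-bit range reported by sp_configure
range=4294967296                   # 2^32, the span of the int datatype
while [ "$sum" -gt 2147483647 ]; do
  sum=$(( sum - range ))
done
echo "sp_configure value: $sum"    # -> sp_configure value: -1066394617
```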

 

To make the calculation easier, I wrote a small script to find out the sp_configure equivalent value of affinity mask in the case of NUMA nodes:


-- Find out the sp_configure equivalent value of affinity mask in case of NUMA nodes
--------------------------------------------------------------------------------------
BEGIN
    DECLARE @real_value bigint;                -- to hold the sum of online_scheduler_mask
    DECLARE @range_value bigint = 4294967296;  -- range of the int datatype, i.e. 2^32
    DECLARE @config_value int = 0;             -- default value of affinity mask in sp_configure output; set later

    -- Fetch the sum of the online scheduler masks, excluding node id 64, i.e. the hidden scheduler
    SET @real_value = (SELECT SUM(online_scheduler_mask)
                       FROM sys.dm_os_nodes
                       WHERE memory_node_id <> 64);

    -- Calculate the value for affinity mask as shown in sp_configure
    WHILE (@real_value > 2147483647)
    BEGIN
        SET @real_value = (@real_value - @range_value);
    END;

    -- Copy the value for affinity mask as seen in sp_configure
    SET @config_value = @real_value;
    PRINT 'The current config_value for the affinity mask parameter in sp_configure is: ' + CAST(@config_value AS varchar);
END;

This script will give the current config_value for SQL Server in any of these cases: NUMA nodes, more than 64 processors, SQL Server 2008 R2.

 

Hope this post will help you if you were as puzzled as I was on seeing the negative number in sp_configure.

 

Stay tuned!

Categories: DBA Blogs

Oracle RMAN Restore to the Same Machine as the Original Database

Fri, 2014-04-11 07:52

Among the most critical but often most neglected database administration tasks is testing restore from backup. But sometimes, you don’t have a test system handy, and need to test the restore on the same host as the source database. In such situations, the biggest fear is overwriting the original database. Here is a simple procedure you can follow, which will not overwrite the source.

  1. Add an entry to the oratab for the new instance, and source the new environment:
    oracle$ cat >> /etc/oratab <<EOF
    > foo:/u02/app/oracle/product/11.2.0/dbhome_1:N
    > EOF
    
    oracle$ . oraenv
    ORACLE_SID[oracle]? foo
    The Oracle base remains unchanged with value /u02/app/oracle
  2. Create a pfile and spfile with a minimum set of parameters for the new instance. In this case the source database is named ‘orcl’ and the new database will have a DB unique name of ‘foo’. This example will write all files to the +data ASM diskgroup, under directories for ‘foo’. You could use a filesystem directory as the destination as well. Just make sure you have enough space wherever you plan to write:
    oracle$ cat > $ORACLE_HOME/dbs/initfoo.ora <<EOF
    > db_name=orcl
    > db_unique_name=foo
    > db_create_file_dest=+data
    > EOF
    
    oracle$ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.3.0 Production on Wed Apr 9 15:35:00 2014
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Connected to an idle instance.
    
    SQL> create spfile from pfile;
    File created.
    
    SQL> exit
    Disconnected
  3. Now, using the backup pieces from your most recent backup, try restoring the controlfile only. Start with the most recently written backup piece, since RMAN writes the controlfile at the end of the backup. It may fail once or twice, but keep trying backup pieces until you find the controlfile:
    oracle$ ls -lt /mnt/bkup
    total 13041104
    -rwxrwxrwx 1 root root      44544 Apr  4 09:32 0lp4sghk_1_1
    -rwxrwxrwx 1 root root   10059776 Apr  4 09:32 0kp4sghi_1_1
    -rwxrwxrwx 1 root root 2857394176 Apr  4 09:32 0jp4sgfr_1_1
    -rwxrwxrwx 1 root root 3785719808 Apr  4 09:31 0ip4sgch_1_1
    -rwxrwxrwx 1 root root 6697222144 Apr  4 09:29 0hp4sg98_1_1
    -rwxrwxrwx 1 root root    3647488 Apr  4 09:28 0gp4sg97_1_1
    
    $ rman target /
    Recovery Manager: Release 11.2.0.3.0 - Production on Wed Apr 9 15:37:10 2014
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    connected to target database (not started)
    
    RMAN> startup nomount;
    Oracle instance started
    Total System Global Area     238034944 bytes
    Fixed Size                     2227136 bytes
    Variable Size                180356160 bytes
    Database Buffers              50331648 bytes
    Redo Buffers                   5120000 bytes
    
    RMAN> restore controlfile from '/mnt/bkup/0lp4sghk_1_1';
    Starting restore at 09-APR-14
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=1 device type=DISK
    channel ORA_DISK_1: restoring control file
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of restore command at 04/09/2014 15:42:10
    ORA-19870: error while restoring backup piece /mnt/bkup/0lp4sghk_1_1
    ORA-19626: backup set type is archived log - can not be processed by this conversation
    
    RMAN> restore controlfile from '/mnt/bkup/0kp4sghi_1_1';
    Starting restore at 09-APR-14
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=19 device type=DISK
    channel ORA_DISK_1: restoring control file
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
    output file name=+DATA/foo/controlfile/current.348.844443549
    Finished restore at 09-APR-14

    As you can see above, RMAN will report the path and name of the controlfile that it restores. Use that path and name below:

    RMAN> sql "alter system set
    2>  control_files=''+DATA/foo/controlfile/current.348.844443549''
    3>  scope=spfile";
    
    sql statement: alter system set 
    control_files=''+DATA/foo/controlfile/current.348.844443549'' 
    scope=spfile
  4. Mount the database with the newly restored controlfile, and perform a restore to the new location. The ‘set newname’ command changes the location that RMAN will write the files to: the db_create_file_dest of the new instance. The ‘switch database’ command updates the controlfile to reflect the new file locations. When the restore is complete, use media recovery to apply the archived redo logs.
    RMAN> startup force mount
    Oracle instance started
    database mounted
    Total System Global Area     238034944 bytes
    Fixed Size                     2227136 bytes
    Variable Size                180356160 bytes
    Database Buffers              50331648 bytes
    Redo Buffers                   5120000 bytes
    
    RMAN> run {
    2> set newname for database to new;
    3> restore database;
    4> }
    
    executing command: SET NEWNAME
    Starting restore at 09-APR-14
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=23 device type=DISK
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00002 to +data
    channel ORA_DISK_1: reading from backup piece /mnt/bkup/0hp4sg98_1_1
    channel ORA_DISK_1: piece handle=/mnt/bkup/0hp4sg98_1_1 tag=TAG20140404T092808
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:01:35
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00001 to +data
    channel ORA_DISK_1: restoring datafile 00004 to +data
    channel ORA_DISK_1: restoring datafile 00005 to +data
    channel ORA_DISK_1: reading from backup piece /mnt/bkup/0ip4sgch_1_1
    channel ORA_DISK_1: piece handle=/mnt/bkup/0ip4sgch_1_1 tag=TAG20140404T092808
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:01:05
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00003 to +data
    channel ORA_DISK_1: reading from backup piece /mnt/bkup/0jp4sgfr_1_1
    channel ORA_DISK_1: piece handle=/mnt/bkup/0jp4sgfr_1_1 tag=TAG20140404T092808
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:55
    Finished restore at 09-APR-14
    
    RMAN> switch database to copy;
    
    datafile 1 switched to datafile copy "+DATA/foo/datafile/system.338.844531637"
    datafile 2 switched to datafile copy "+DATA/foo/datafile/sysaux.352.844531541"
    datafile 3 switched to datafile copy "+DATA/foo/datafile/undotbs1.347.844531691"
    datafile 4 switched to datafile copy "+DATA/foo/datafile/users.350.844531637"
    datafile 5 switched to datafile copy "+DATA/foo/datafile/soe.329.844531637"
    
    RMAN> recover database;
    
    Starting recover at 09-APR-14
    using channel ORA_DISK_1
    starting media recovery
    archived log file name=/u02/app/oracle/product/11.2.0/dbhome_1/dbs/arch1_25_841917031.dbf thread=1 sequence=25
    archived log file name=/u02/app/oracle/product/11.2.0/dbhome_1/dbs/arch1_26_841917031.dbf thread=1 sequence=26
    archived log file name=/u02/app/oracle/product/11.2.0/dbhome_1/dbs/arch1_27_841917031.dbf thread=1 sequence=27
    media recovery complete, elapsed time: 00:00:01
    Finished recover at 09-APR-14
    
    RMAN> exit
    
    Recovery Manager complete.
  5. Before opening the database, we need to re-create the controlfile so that we don’t step on any files belonging to the source database. The first step is to generate a “create controlfile” script, and to locate the trace file where it was written:
    $ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.3.0 Production on Wed Apr 16 10:56:28 2014
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    
    SQL> alter database backup controlfile to trace;
    Database altered.
    
    SQL> select tracefile
      2  from v$session s,
      3       v$process p
      4  where s.paddr = p.addr
      5  and s.audsid = sys_context('USERENV', 'SESSIONID');
    TRACEFILE
    ----------------------------------------------------------
    /u02/app/oracle/diag/rdbms/foo/foo/trace/foo_ora_19168.trc
    
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition
  6. Next, we need to edit the controlfile creation script so that all we have left is the “create controlfile … resetlogs” statement, and so that all file paths to the original database are removed or changed to reference the db_unique_name of the test database. Below is a pipeline of clumsy sed and grep commands I created that generates a script called create_foo_controlfile.sql. It should take care of most permutations of these trace controlfile scripts.
    $ sed -n '/CREATE.* RESETLOGS/,$p' /u02/app/oracle/diag/rdbms/foo/foo/trace/foo_ora_18387.trc | \
    > sed '/.*;/q' | \
    > sed 's/\(GROUP...\).*\( SIZE\)/\1\2/' | \
    > sed 's/orcl/foo/g' | \
    > sed 's/($//' | \
    > sed 's/[\)] SIZE/SIZE/' | \
    > grep -v "^    '" > create_foo_controlfile.sql

    If it doesn’t work for you, just edit the script from your trace file, so that you end up with something like this:

    CREATE CONTROLFILE REUSE DATABASE "ORCL" RESETLOGS  ARCHIVELOG
        MAXLOGFILES 16
        MAXLOGMEMBERS 3
        MAXDATAFILES 100
        MAXINSTANCES 8
        MAXLOGHISTORY 292
    LOGFILE
      GROUP 1 
      SIZE 50M BLOCKSIZE 512,
      GROUP 2 
      SIZE 50M BLOCKSIZE 512,
      GROUP 3 
      SIZE 50M BLOCKSIZE 512
    -- STANDBY LOGFILE
    DATAFILE
      '+DATA/foo/datafile/system.338.845027673',
      '+DATA/foo/datafile/sysaux.347.845027547',
      '+DATA/foo/datafile/undotbs1.352.845027747',
      '+DATA/foo/datafile/users.329.845027673',
      '+DATA/foo/datafile/soe.350.845027673'
    CHARACTER SET WE8MSWIN1252
    ;
  7. The next step is to use the above script to open the database with the resetlogs option on a new OMF controlfile:
    $ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.3.0 Production on Wed Apr 16 10:56:28 2014
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    
    SQL> alter system reset control_files scope=spfile;
    System altered.
    
    SQL> startup force nomount
    ORACLE instance started.
    
    Total System Global Area  238034944 bytes
    Fixed Size                  2227136 bytes
    Variable Size             180356160 bytes
    Database Buffers           50331648 bytes
    Redo Buffers                5120000 bytes
    
    SQL> @create_foo_controlfile
    Control file created.
    
    SQL> select value from v$parameter where name = 'control_files';
    VALUE
    -------------------------------------------
    +DATA/foo/controlfile/current.265.845031651
    
    SQL> alter database open resetlogs;
    Database altered.
  8. Last but not least, don’t forget to provide a tempfile or two to the temporary tablespaces:
    SQL> alter tablespace temp
      2  add tempfile size 5G;
    Tablespace altered.
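As an aside, the sed pipeline in step 6 can be approximated in a single awk pass. This is a simplified sketch of my own: it keeps the whole CREATE CONTROLFILE ... RESETLOGS statement and renames orcl paths to foo, but unlike the pipeline above it does not strip the logfile member paths, so some hand editing may still be needed.

```shell
extract_cf() {
  # Print from the CREATE CONTROLFILE ... RESETLOGS line to the closing ';',
  # rewriting the source database name (orcl) to the clone name (foo)
  awk '/CREATE CONTROLFILE.*RESETLOGS/ { p = 1 }
       p { gsub(/orcl/, "foo"); print }
       p && /;/ { exit }' "$1"
}
```

Pointed at the trace file located in step 5, `extract_cf <tracefile> > create_foo_controlfile.sql` would produce the statement.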
Categories: DBA Blogs

The Growing Trend Toward Data Infrastructure Outsourcing

Fri, 2014-04-11 07:45

Today’s blog post is the first of three in a series dedicated to data infrastructure outsourcing, with excerpts from our latest white paper.

Despite the strong push to outsource corporate functions that began more than two decades ago, many IT shops have been hesitant to outsource their data management requirements.

Generally, the more mission-critical the data, the more organizations have felt compelled to control it, assigning that responsibility to their brightest and most trusted in-house database experts. The reasoning has been that with greater control comes greater security.

That mindset is rapidly changing. Macroeconomic trends are putting mounting pressure on organizations to rethink the last bastion of IT in-housing—data infrastructure management—and instead look for flexible, cost-effective outsourcing solutions that can help them improve operational efficiency, optimize performance, and increase overall productivity.

This trend toward outsourcing to increase productivity, and not simply reduce costs, is supported by a recent Forrester Research report that highlights the key reasons companies  are looking for outside help: too many new technologies and data sources, and difficulty finding people with the skills and experience to optimize and manage them.

To learn how to develop a data infrastructure sourcing strategy, download the rest of our white paper, Data Infrastructure Outsourcing.

Categories: DBA Blogs

What Happens in Vegas, Doesn’t Stay in Vegas – Collaborate 14

Thu, 2014-04-10 08:04

IOUG’s Collaborate 14 is star-studded this year, with the Pythian team illuminating various tracks in the presentation rooms and acting like a magnet for data lovers in the expo halls of The Venetian. It’s a kind of rendezvous for those who love their data, so if you want your data to be loved, feel free to drop by Pythian booth 1535.

Leading from the front is Paul Vallée with an eye-catching title and real-world gems. Then there is Michael Abbey’s rich experience, Marc Fielding’s in-depth technology coverage, and Vasu’s forays into Apps Database Administration. There is my humble attempt at Exadata IORM, Rene’s great helpful tips, and Alex Gorbachev’s mammoth coverage of mammoth data – it’s all there, with much more to learn, share, and know.

The Vegas Strip is buzzing with the commotion of Oracle. Even the big rollers are turning their necks to see what the fuss is about. Poker faces have broken into amazed grins, and even the weird curbside card distribution has stopped. Everybody is focused on the pleasures of Oracle technologies.

Courtesy of social media, all of this fun isn’t confined to Vegas. You can follow @Pythian on Twitter to know it all, live, and in real time.

Come Enjoy!

Categories: DBA Blogs

Paul Vallée’s Interview with Oracle Profit Magazine

Wed, 2014-04-09 23:00

Aaron Lazenby, editor of Oracle’s Profit Magazine, interviewed Pythian founder Paul Vallée this week to discuss the growing risk of internal threats to IT.

“What we need to create is complete accountability for everything that happens around a data center, and that’s where our industry is not up to snuff right now. We tend to think that if you secure access to the perimeter of the data center, then what happens in the meeting inside can be unsupervised. But that’s not good enough,” says Paul.

The interview, Inside Job, is a preview of Paul’s Collaborate 14 session taking place later today in Las Vegas. If you’re at Collaborate, make sure you don’t miss Paul’s presentation, Thou Shalt Not Steal: Securing Your Infrastructure in the Age of Snowden. The presentation begins at 4:15 PM Pacific at the Venetian, Level 3 – Murano 3306.

What are your thoughts? How else can organizations mitigate the risk of internal threats? Comment below.

Categories: DBA Blogs

4 Things Every CMO Should Do Before Approaching IT About Big Data

Tue, 2014-04-08 10:42

Read the full article, 4 Things Every CMO Should Do Before Approaching IT About Big Data.

“You’ve likely heard the whispers (or shouts) about Big Data’s potential, how it’s the holy grail of marketing—and it can be. But to uncover this information and take action on it, marketing needs to partner closely with all departments, especially with IT,” says Samer Forzley, VP of Marketing at Pythian.

“IT can go off and develop as many Big Data initiatives as it wants, but without the necessary insights from the marketing team, those projects will never translate into sales. But if each plays to its strengths, with CMOs entering the Big Data conversation with logistics upfront, then IT’s structural knowhow can bring that solution to fruition.”

Categories: DBA Blogs

RMAN Infatuation?

Tue, 2014-04-08 09:59

Lately, I am becoming infatuated with RMAN again.

Have you ever run “restore database preview”?

Are you curious about how the recovery SCNs are determined?

Media recovery start SCN is 1515046
Recovery must be done beyond SCN 1515051 to clear datafile fuzziness

If you are, then I will demonstrate this for you.

RMAN LEVEL0 backup and restore database preview summary:

RMAN> list backup summary;

using target database control file instead of recovery catalog

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
366     B  A  A DISK        07-APR-2014 14:11:32 1       1       YES        ARCHIVELOG
367     B  0  A DISK        07-APR-2014 14:11:34 1       1       YES        LEVEL0
368     B  0  A DISK        07-APR-2014 14:11:35 1       1       YES        LEVEL0
369     B  0  A DISK        07-APR-2014 14:11:37 1       1       YES        LEVEL0
370     B  0  A DISK        07-APR-2014 14:11:44 1       1       YES        LEVEL0
371     B  0  A DISK        07-APR-2014 14:11:44 1       1       YES        LEVEL0
372     B  0  A DISK        07-APR-2014 14:11:52 1       1       YES        LEVEL0
373     B  A  A DISK        07-APR-2014 14:11:55 1       1       YES        ARCHIVELOG
374     B  F  A DISK        07-APR-2014 14:11:58 1       1       NO         TAG20140407T141156

RMAN> restore database preview summary;

Starting restore at 07-APR-2014 14:13:04
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=107 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=23 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=108 device type=DISK

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
371     B  0  A DISK        07-APR-2014 14:11:38 1       1       YES        LEVEL0
370     B  0  A DISK        07-APR-2014 14:11:39 1       1       YES        LEVEL0
367     B  0  A DISK        07-APR-2014 14:11:34 1       1       YES        LEVEL0
368     B  0  A DISK        07-APR-2014 14:11:34 1       1       YES        LEVEL0
369     B  0  A DISK        07-APR-2014 14:11:37 1       1       YES        LEVEL0
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
373     B  A  A DISK        07-APR-2014 14:11:55 1       1       YES        ARCHIVELOG
Media recovery start SCN is 1524017
Recovery must be done beyond SCN 1524025 to clear datafile fuzziness
Finished restore at 07-APR-2014 14:13:05

RMAN>

Query the database to determine the recovery SCNs:

ARROW:(SYS@db01):PRIMARY> r
  1  SELECT
  2    MIN(checkpoint_change#) start_scn,
  3    GREATEST(MAX(checkpoint_change#),MAX(absolute_fuzzy_change#)) beyond_scn
  4  FROM v$backup_datafile
  5  WHERE incremental_level=(SELECT MAX(incremental_level) FROM v$backup_datafile WHERE incremental_level>=0)
  6*

 START_SCN BEYOND_SCN
---------- ----------
   1524017    1524025

ARROW:(SYS@db01):PRIMARY>

RMAN LEVEL0 and LEVEL1 backup and restore database preview summary:

RMAN> list backup summary;

using target database control file instead of recovery catalog

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
366     B  A  A DISK        07-APR-2014 14:11:32 1       1       YES        ARCHIVELOG
367     B  0  A DISK        07-APR-2014 14:11:34 1       1       YES        LEVEL0
368     B  0  A DISK        07-APR-2014 14:11:35 1       1       YES        LEVEL0
369     B  0  A DISK        07-APR-2014 14:11:37 1       1       YES        LEVEL0
370     B  0  A DISK        07-APR-2014 14:11:44 1       1       YES        LEVEL0
371     B  0  A DISK        07-APR-2014 14:11:44 1       1       YES        LEVEL0
372     B  0  A DISK        07-APR-2014 14:11:52 1       1       YES        LEVEL0
373     B  A  A DISK        07-APR-2014 14:11:55 1       1       YES        ARCHIVELOG
374     B  F  A DISK        07-APR-2014 14:11:58 1       1       NO         TAG20140407T141156
375     B  A  A DISK        07-APR-2014 14:14:37 1       1       YES        ARCHIVELOG
376     B  1  A DISK        07-APR-2014 14:14:40 1       1       YES        LEVEL1
377     B  1  A DISK        07-APR-2014 14:14:40 1       1       YES        LEVEL1
378     B  1  A DISK        07-APR-2014 14:14:41 1       1       YES        LEVEL1
379     B  1  A DISK        07-APR-2014 14:14:42 1       1       YES        LEVEL1
380     B  1  A DISK        07-APR-2014 14:14:42 1       1       YES        LEVEL1
381     B  1  A DISK        07-APR-2014 14:14:45 1       1       YES        LEVEL1
382     B  A  A DISK        07-APR-2014 14:14:47 1       1       YES        ARCHIVELOG
383     B  F  A DISK        07-APR-2014 14:14:51 1       1       NO         TAG20140407T141448

RMAN> restore database preview summary;

Starting restore at 07-APR-2014 14:15:59
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=107 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=23 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=108 device type=DISK

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
371     B  0  A DISK        07-APR-2014 14:11:38 1       1       YES        LEVEL0
376     B  1  A DISK        07-APR-2014 14:14:39 1       1       YES        LEVEL1
370     B  0  A DISK        07-APR-2014 14:11:39 1       1       YES        LEVEL0
377     B  1  A DISK        07-APR-2014 14:14:40 1       1       YES        LEVEL1
367     B  0  A DISK        07-APR-2014 14:11:34 1       1       YES        LEVEL0
378     B  1  A DISK        07-APR-2014 14:14:41 1       1       YES        LEVEL1
368     B  0  A DISK        07-APR-2014 14:11:34 1       1       YES        LEVEL0
380     B  1  A DISK        07-APR-2014 14:14:41 1       1       YES        LEVEL1
369     B  0  A DISK        07-APR-2014 14:11:37 1       1       YES        LEVEL0
379     B  1  A DISK        07-APR-2014 14:14:41 1       1       YES        LEVEL1
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
382     B  A  A DISK        07-APR-2014 14:14:47 1       1       YES        ARCHIVELOG
Media recovery start SCN is 1524335
Recovery must be done beyond SCN 1524339 to clear datafile fuzziness
Finished restore at 07-APR-2014 14:16:00

RMAN>

Query the database to determine the recovery SCNs:

ARROW:(SYS@db01):PRIMARY> r
  1  SELECT
  2    MIN(checkpoint_change#) start_scn,
  3    GREATEST(MAX(checkpoint_change#),MAX(absolute_fuzzy_change#)) beyond_scn
  4  FROM v$backup_datafile
  5  WHERE incremental_level=(SELECT MAX(incremental_level) FROM v$backup_datafile WHERE incremental_level>=0)
  6*

 START_SCN BEYOND_SCN
---------- ----------
   1524335    1524339

ARROW:(SYS@db01):PRIMARY>

Why is all of this important?

It allows one to automate backup validation without actually having to run “restore database preview”.

Tested on an 11.2.0.4 database.
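For scripting this check outside of SQL*Plus, the same aggregation can be sketched with awk over sample rows shaped like v$backup_datafile (incremental_level, checkpoint_change#, absolute_fuzzy_change#). This is an illustrative sketch only, not a replacement for querying the real view; the SCN values below simply mirror the LEVEL1 example above:

```shell
# Mirror of the v$backup_datafile aggregation: among rows at the highest
# incremental level, the start SCN is MIN(checkpoint_change#) and the
# "beyond" SCN is GREATEST(MAX(checkpoint_change#), MAX(absolute_fuzzy_change#)).
# Columns: incremental_level checkpoint_change# absolute_fuzzy_change#
printf '%s\n' \
  '0 1524017 0' \
  '1 1524335 1524339' \
  '1 1524338 0' |
awk '
  $1 > maxlvl { maxlvl = $1 }                 # track the highest incremental level
  { lvl[NR] = $1; ckp[NR] = $2; fuz[NR] = $3 }
  END {
    for (i = 1; i <= NR; i++)
      if (lvl[i] == maxlvl) {
        if (start == "" || ckp[i] < start) start = ckp[i]
        if (ckp[i] > beyond) beyond = ckp[i]
        if (fuz[i] > beyond) beyond = fuz[i]
      }
    print "Media recovery start SCN is " start
    print "Recovery must be done beyond SCN " beyond
  }'
```

Running this prints the same two SCNs the preview reported for the LEVEL1 case, which is exactly what an automated validation script would compare against.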

Categories: DBA Blogs

COLLABORATE minus 3

Mon, 2014-04-07 07:50

I always like to get to the location for a conference a day in advance so I can

  • Get accustomed to the time change
  • Get a feel for my way around the venue
  • Figure out where my room is
  • Establish a few landmarks so I do not wander aimlessly around the facility and hotel as though every voyage is a new life experience

COLLABORATE officially starts on Tuesday, though there are education sessions all day Monday facilitated by the three main groups responsible for the show – the IOUG, OAUG, and Quest International Users Group. So where did this animal called COLLABORATE come from, one may wonder?

Rewind to about 2004. The three above-mentioned user groups each had their own show. Each reached out to Oracle for logistic and education support, something that the vendor was (and still is) happy to give. It was starting to become obvious that the marketplace upheaval was having a dramatic effect on user group conference attendance. At the same time, Oracle expressed a desire to support fewer shows. You do the math – it only made sense. Why not have a 4-5 day mega conference and work with Oracle on many facets of support? Not only were the attendees of each show being asked to pick one or the other; Oracle was investing a massive number of personnel to support all three shows separately. It was a cumulative decision to amalgamate the shows, and we wondered where it all would start.

With the blessing of the IOUG board I made one of those very first phone calls to the OAUG board, and the rest is history. I do not remember who I spoke to first, and there were probably a handful of feelers going out from other places in the IOUG infrastructure to OAUG bigwigs. I spoke to board member Donna Rosentrater (@DRosentrater) and we jammed on what could/should become of a co-operative effort. We chatted a few times, and the interest amongst board members of the IOUG and OAUG reflected cautious optimism that we could pull it off. Each user group had its own revenue stream from separate shows. We needed to embark down a path that would not put these at risk. That is what the brunt of the negotiations centered on, and the work we did together led to the very first COLLABORATE at the Gaylord in Nashville in 2006.

Once the initial framework was established, it was time to turn the discussions over to the professionals. Both groups’ professional resources collaborated (hence the name maybe) and this mega/co-operative show became a reality. COLLABORATE 14 is the 9th show put on by Quest, OAUG, and IOUG. I am not going to say “this year’s show is going to be the best yet” as I believe that implicitly belittles previous successful events. Suffice to say, for what the user community needs from an information-sharing perspective – COLLABORATE is just what the doctor ordered.

Tomorrow’s a day off: wandering aimlessly through Las Vegas, tempted by curios, shops, food emporiums, and just about every other possible temptation one could think of. Sunday starts with a helicopter trip to the Grand Canyon, and I went all out and forked over the extra $50 to sit in the convex bubble beside the pilot. There’s a bazillion vendors poised to whisk one away to the canyon, with a flyover of the Hoover Dam there or on the way back. I chose Papillon and am looking forward to the excitement of the day, which starts at 5:10am with a shuttle to the site where the whirlybird takes off. Talk about taking one’s breath away.

Categories: DBA Blogs

VIDEO: The Evolution of IT Outsourcing

Fri, 2014-04-04 07:46

“For us to understand what’s going on in the industry at large, it helps for us to have a deeper understanding of the history of the outsourcing industry.” Pythian founder, Paul Vallée shares his insights on the history and evolution of IT outsourcing.

Pythian has developed 5 criteria for choosing a data management outsourcing partner. Download the white paper here.

Categories: DBA Blogs

Log Buffer #366, A Carnival of the Vanities for DBAs

Fri, 2014-04-04 07:43

While the Oracle blogosphere is buzzing with the Collaborate news and views, SQL Server and MySQL bloggers are also getting upbeat about their respective fields and producing gems of blog posts. This Log Buffer Edition covers that all and more.

Oracle:

Run Virtual Machines with Oracle VM.

Last Call to Submit to the JavaOne Java EE Track.

UX – No Time To Kill.

Updated: Price Comparison for Big Data Appliance and Hadoop.

The PLSQL_OPTIMIZE_LEVEL parameter and optimization of PL/SQL code.

SQL Server:

SQL Server 2012 AlwaysOn Groups and FCIs.

How to recover a suspect msdb database in SQL Server.

Data Science Laboratory System – Distributed File Databases.

Stairway to T-SQL: Beyond The Basics Level 5: Storing Data in Temporary Tables.

The Girl with the Backup Tattoo.

MySQL:

pt-online-schema change and row based replication.

MariaDB Client Library for C 2.0 and Maria JDBC Client 1.1.7 released.

Help MariaDB gather some statistics!

Uninitialized data in the TokuDB recovery log.

MySQL at LOADays Conference in Antwerp, Belgium (Apr 5-6, 2014).

Categories: DBA Blogs

A Simple Way to Monitor Java in Linux

Thu, 2014-04-03 07:49

Here is a quick and easy way to know what it is inside a Java process that is using your CPU, using just Linux command line tools and JDK-supplied command line utilities.

Introduction

Here are a few things you need to know before starting. Following the links is not necessary; they are provided for reference.

  • there are different vendors of Java Virtual Machines. This post is about Oracle’s JVM, which is called HotSpot, with Linux x86-64 as the OS platform. Most things about HotSpot apply to other vendors too, with slight changes; OSes other than Linux may add some more complications
  • it’s called a Virtual Machine because it virtualizes the runtime environment for a Java application. So to know where to look, you need to know a few things about how the specific VM is organized. For a detailed overview of HotSpot, please refer to this article
  • on Linux, a thread inside the HotSpot VM is mapped directly to an OS-level thread. This may not be true on all OSes, but for modern Linux kernels it is, so every thread at the OS level is a thread inside the Java application
  • there are generally two types of threads inside a HotSpot VM: native and application threads. Application threads are those that run some Java code, and usually this is what applications use to run their code. Native threads run something which is not written in Java, usually C/C++ code, and usually all these threads are special utility threads started by the VM itself.
Identifying Threads

Since a Java program may start many threads, each executing some program code, it is necessary to understand which threads are using CPUs. On Linux, top -H will show you CPU usage on a per-thread basis. Here is an example. First, a process which consumes CPU:

top - 16:32:29 up 10:29,  3 users,  load average: 1.08, 0.64, 0.56
Tasks: 172 total,   1 running, 171 sleeping,   0 stopped,   0 zombie
Cpu(s): 48.7%us, 51.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.3%si,  0.0%st
Mem:   1500048k total,  1476708k used,    23340k free,    62012k buffers
Swap:  4128764k total,    75356k used,  4053408k free,   919836k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 7599 oracle    20   0 1151m  28m 9100 S 85.5  1.9   0:12.67 java
 2575 oracle    -2   0  709m  10m 8336 S 10.6  0.7  47:34.05 oracle
 2151 root      20   0  207m  44m 6420 S  1.7  3.0   0:27.18 Xorg

If we check the details of CPU usage for PID=7599 with “top -H -p 7599”, then we will see something like this:

top - 16:40:39 up 10:37,  3 users,  load average: 1.47, 1.25, 0.90
Tasks:  10 total,   1 running,   9 sleeping,   0 stopped,   0 zombie
Cpu(s): 49.3%us, 50.3%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.3%si,  0.0%st
Mem:   1500048k total,  1460468k used,    39580k free,    50140k buffers
Swap:  4128764k total,    76208k used,  4052556k free,   912644k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 7601 oracle    20   0 1151m  28m 9100 R 85.9  1.9   7:19.98 java
 7602 oracle    20   0 1151m  28m 9100 S  1.0  1.9   0:02.95 java
 7599 oracle    20   0 1151m  28m 9100 S  0.0  1.9   0:00.01 java

So there is one execution thread inside the Java process, which is constantly on top and is utilizing around 85% of a single core.

Now the next thing to know is what this thread is doing. To answer that question we need two things: thread stacks from the Java process, and a way to map an OS-level thread to a Java thread. As I mentioned previously, there is a one-to-one mapping between OS- and Java-level threads in HotSpot running on Linux.

To get a thread dump we need to use a JDK utility called jstack:

[oracle@oel6u4-2 test]$ jstack 7599
2014-02-28 16:57:23
Full thread dump Java HotSpot(TM) 64-Bit Server VM (20.12-b01 mixed mode):

"Attach Listener" daemon prio=10 tid=0x00007f05a0001000 nid=0x1e66 waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"Low Memory Detector" daemon prio=10 tid=0x00007f05c4088000 nid=0x1db8 runnable [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"C2 CompilerThread1" daemon prio=10 tid=0x00007f05c4085800 nid=0x1db7 waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"C2 CompilerThread0" daemon prio=10 tid=0x00007f05c4083000 nid=0x1db6 waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"Signal Dispatcher" daemon prio=10 tid=0x00007f05c4081000 nid=0x1db5 runnable [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"Finalizer" daemon prio=10 tid=0x00007f05c4064800 nid=0x1db4 in Object.wait() [0x00007f05c0631000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	- waiting on <0x00000000eb8a02e0> (a java.lang.ref.ReferenceQueue$Lock)
	at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
	- locked <0x00000000eb8a02e0> (a java.lang.ref.ReferenceQueue$Lock)
	at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
	at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)

"Reference Handler" daemon prio=10 tid=0x00007f05c4062800 nid=0x1db3 in Object.wait() [0x00007f05c0732000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	- waiting on <0x00000000eb8a0380> (a java.lang.ref.Reference$Lock)
	at java.lang.Object.wait(Object.java:485)
	at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
	- locked <0x00000000eb8a0380> (a java.lang.ref.Reference$Lock)

"main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]
   java.lang.Thread.State: RUNNABLE
	at java.security.SecureRandom.getInstance(SecureRandom.java:254)
	at java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:176)
	at java.security.SecureRandom.<init>(SecureRandom.java:133)
	at RandomUser.main(RandomUser.java:9)

"VM Thread" prio=10 tid=0x00007f05c405c000 nid=0x1db2 runnable

"VM Periodic Task Thread" prio=10 tid=0x00007f05c408b000 nid=0x1db9 waiting on condition

JNI global references: 975

To map an OS-level thread to a Java thread in a thread dump, we need to convert the native thread ID from Linux to base 16 and search for “nid=$ID” in the stack trace. In our case the thread ID is 7601, which is 0x1db1, and the Java thread had the following stack trace at the time of running jstack:

"main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]
   java.lang.Thread.State: RUNNABLE
	at java.security.SecureRandom.getInstance(SecureRandom.java:254)
	at java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:176)
	at java.security.SecureRandom.<init>(SecureRandom.java:133)
	at RandomUser.main(RandomUser.java:9)
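The base-16 conversion itself is a one-liner; for example, with 7601 being the thread ID from the top output above:

```shell
# Convert a decimal Linux thread ID (as shown by top -H) to the hex "nid"
# format that jstack prints, so the two can be matched up.
TID=7601
printf 'nid=%#x\n' "$TID"   # prints nid=0x1db1
```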
A Way to Monitor

Here is a way to get a stack trace of a thread inside a Java process with command line tools (PID and TID are the process ID of the Java process and the thread ID of the interesting thread at the OS level):

[oracle@oel6u4-2 test]$ jstack $PID | awk '/ nid='"$(printf '%#x' $TID)"' /,/^$/'
"main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]
   java.lang.Thread.State: RUNNABLE
	at java.io.FileInputStream.readBytes(Native Method)
	at java.io.FileInputStream.read(FileInputStream.java:220)
	at sun.security.provider.NativePRNG$RandomIO.readFully(NativePRNG.java:185)
	at sun.security.provider.NativePRNG$RandomIO.ensureBufferValid(NativePRNG.java:247)
	at sun.security.provider.NativePRNG$RandomIO.implNextBytes(NativePRNG.java:261)
	- locked <0x00000000eb8a3370> (a java.lang.Object)
	at sun.security.provider.NativePRNG$RandomIO.access$200(NativePRNG.java:108)
	at sun.security.provider.NativePRNG.engineNextBytes(NativePRNG.java:97)
	at java.security.SecureRandom.nextBytes(SecureRandom.java:433)
	- locked <0x00000000e43adc90> (a java.security.SecureRandom)
	at java.security.SecureRandom.next(SecureRandom.java:455)
	at java.util.Random.nextInt(Random.java:189)
	at RandomUser.main(RandomUser.java:9)

As you can see here, the thread is executing the main method of the RandomUser class – at least at the time the thread dump was taken. If you would like to see how this changes over time, a simple watch command may help you get an idea of whether this thread stack changes frequently:

watch -n .5 "jstack $PID | awk '/ nid='"$(printf '%#x' $TID)"' /,/^$/'"

Every 0.5s: jstack 7599 | awk '/ nid='0x1db1' /,/^$/'                                                                                Fri Mar 14 16:29:37 2014

"main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]
   java.lang.Thread.State: RUNNABLE
        at java.util.LinkedHashMap$LinkedHashIterator.<init>(LinkedHashMap.java:345)
        at java.util.LinkedHashMap$KeyIterator.<init>(LinkedHashMap.java:383)
        at java.util.LinkedHashMap$KeyIterator.<init>(LinkedHashMap.java:383)
        at java.util.LinkedHashMap.newKeyIterator(LinkedHashMap.java:396)
        at java.util.HashMap$KeySet.iterator(HashMap.java:874)
        at java.util.HashSet.iterator(HashSet.java:153)
        at java.util.Collections$UnmodifiableCollection$1.<init>(Collections.java:1005)
        at java.util.Collections$UnmodifiableCollection.iterator(Collections.java:1004)
        at java.security.SecureRandom.getPrngAlgorithm(SecureRandom.java:523)
        at java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:165)
        at java.security.SecureRandom.<init>(SecureRandom.java:133)
        at RandomUser.main(RandomUser.java:9)

So this way you can see what the application thread is doing right now. Since it could be doing quite a lot of different types of work, the next reasonable step is to add aggregation.

[oracle@oel6u4-2 test]$ ./prof.sh 7599 7601
Sampling PID=7599 every 0.5 seconds for 10 samples
      6  "main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]    java.lang.Thread.State: RUNNABLE
	at java.io.FileInputStream.readBytes(Native Method)
	at java.io.FileInputStream.read(FileInputStream.java:220)
	at sun.security.provider.NativePRNG$RandomIO.readFully(NativePRNG.java:185)
	at sun.security.provider.NativePRNG$RandomIO.ensureBufferValid(NativePRNG.java:247)
	at sun.security.provider.NativePRNG$RandomIO.implNextBytes(NativePRNG.java:261)
	- locked <address> (a java.lang.Object)
	at sun.security.provider.NativePRNG$RandomIO.access$200(NativePRNG.java:108)
	at sun.security.provider.NativePRNG.engineNextBytes(NativePRNG.java:97)
	at java.security.SecureRandom.nextBytes(SecureRandom.java:433)
	- locked <address> (a java.security.SecureRandom)
	at java.security.SecureRandom.next(SecureRandom.java:455)
	at java.util.Random.nextInt(Random.java:189)
	at RandomUser.main(RandomUser.java:9)

      2  "main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]    java.lang.Thread.State: RUNNABLE
	at sun.security.provider.NativePRNG$RandomIO.implNextBytes(NativePRNG.java:268)
	- locked <address> (a java.lang.Object)
	at sun.security.provider.NativePRNG$RandomIO.access$200(NativePRNG.java:108)
	at sun.security.provider.NativePRNG.engineNextBytes(NativePRNG.java:97)
	at java.security.SecureRandom.nextBytes(SecureRandom.java:433)
	- locked <address> (a java.security.SecureRandom)
	at java.security.SecureRandom.next(SecureRandom.java:455)
	at java.util.Random.nextInt(Random.java:189)
	at RandomUser.main(RandomUser.java:9)

      1  "main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]    java.lang.Thread.State: RUNNABLE
	at java.util.Random.nextInt(Random.java:189)
	at RandomUser.main(RandomUser.java:9)

      1  "main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]    java.lang.Thread.State: RUNNABLE
	at java.security.SecureRandom.next(SecureRandom.java:452)
	at java.util.Random.nextInt(Random.java:189)
	at RandomUser.main(RandomUser.java:9)

Here is what’s inside the prof.sh script:

#!/bin/bash

P_PID=$1
P_NID=$2

if [ "$P_SLEEP" == "" ]; then
  P_SLEEP=0.5
fi

if [ "$P_CNT" == "" ]; then
  P_CNT=10
fi

echo Sampling PID=$P_PID every $P_SLEEP seconds for $P_CNT samples

if [ "$P_NID" == "" ]; then
  CMD="awk '//'"
else
  CMD="awk '/ nid='"$(printf '%#x' $P_NID)"' /,/^$/'"
fi

for i in `seq $P_CNT`
do
  jstack $P_PID | eval $CMD
  sleep $P_SLEEP;
done |
  awk ' BEGIN { x = 0; s = "" }
    /nid=/ { x = 1; }
    // {if (x == 1) {s = s ", "gensub(/<\w*>/, "<address>", "g") } }
    /^$/ { if ( x == 1) { print s; x = 0; s = ""; } }' |
  sort | uniq -c | sort -n -r | head -10 |
  sed -e 's/$/\n/g' -e 's/\t/\n\t/g' -e 's/,//g'

The idea of the script is based on the method from poor man’s profiler adapted for HotSpot thread dumps. The script does the following things:

  • Takes $P_CNT thread dumps of the Java process ID passed as $1 (10 by default)
  • If a native thread ID has been supplied as $2, then searches for the thread stack of this thread in the thread dump
  • Concatenates each thread stack trace into a comma-separated string
  • Aggregates strings and sorts them by the number of occurrences
  • Prettifies the output: removes tabs, commas, and adds new lines back to the thread stack
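The counting stage at the end of the pipeline can be tried on its own: given one comma-joined stack string per line, it is just sort | uniq -c ranked by frequency. The three input lines below are made-up stand-ins for real stack strings:

```shell
# Rank duplicate stack strings by frequency, as the tail of prof.sh does.
# The input lines are illustrative stand-ins for comma-joined stack traces.
printf '%s\n' \
  'main, readBytes, read' \
  'main, readBytes, read' \
  'main, nextInt' |
  sort | uniq -c | sort -n -r | head -10
```

The most frequent stack comes out first with its sample count, which is what makes a hot code path stand out immediately.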
Conclusion

With a few little things it is possible to understand quite a lot in almost any situation related to Java: you can find out the most frequent stack trace by sampling thread dumps.
With this knowledge it is then easy to understand why an issue is happening. In my test case, the application constantly generates random numbers without a pause, and one thread is occupying one CPU core.

Categories: DBA Blogs

Log Buffer #365, A Carnival of the Vanities for DBAs

Fri, 2014-03-28 07:51

This Log Buffer edition covers various tips, new releases, and technically rich blog posts from the worlds of Oracle, SQL Server and MySQL.

Oracle:

Why choose Oracle for Advanced Analytics? Mark Hornick answers.

Michael Rainey talks about Data Integration Tips: ODI 12c – Substitution API GUIDs.

Warren Baird has shared a tip: if you are using AutoVue with large 3D models, it is often valuable to increase the maximum heap size available to the AutoVue client.

Sveta reports that a new version of the JSON UDF functions, 0.3.1, has been released. This is a development release which contains new functionality. You can download the functions from the MySQL Labs website.

A new IOUG research report “Efficiency Isn’t Enough: Data Centers Lead the Drive to Innovation” presents the results of a survey of 285 data managers and professionals.

SQL Server:

A drive on a mission-critical server is reaching capacity, and the new DBA is panicking. How do you approach a ballooning log file that won’t stop growing?

Is there a way to process only the new data for a partition in SQL Server Analysis Services? Yes, this is accomplished in SQL Server Analysis Services with the ProcessAdd option for partitions. Daniel Calbimonte demonstrates how it works.

Stairway to XML: Level 1 – Introduction to XML

Resilient T-SQL code is code that is designed to last, and to be safely reused by others. The goal of defensive database programming, the goal of this book, is to help you to produce resilient T-SQL code.

Private Cloud, What Is It and Why Do You Need It?

MySQL:

Performance_schema success stories: replication SQL thread tuning.

Real-Time Data Loading from MySQL to Hadoop with New Tungsten Replicator 3.0 — Webinar-on-Demand

There was an exciting announcement today about WebScaleSQL, the new “branch” (not a fork, they say!) of MySQL created by folks from MySQL engineering teams at Facebook, Google, LinkedIn, and Twitter.

I have wanted multi-source replication in MySQL since 4.0, so I was delighted to see this feature appear in MariaDB 10.0.

Joro wrote recently about MySQL 5.6.17‘s new support for AES-256 encryption, and it’s a great improvement for people who need to encrypt their data at rest.

Categories: DBA Blogs

MySQL Community Dinner at Pedro’s

Thu, 2014-03-27 08:00

Once again, Pythian is organizing an event that by now may be considered a tradition: The MySQL community dinner at Pedro’s! This dinner is open to all MySQL community members, as many of you will be in town for the MySQL Conference that week. Here are the details:

What: The MySQL Community Pay-Your-Own-Way Dinner

When: Wednesday April 2, 2014 – Meet us at 6:30 PM in the lobby of the Hyatt Santa Clara, or at 7 PM at Pedro’s (You are welcome to show up later, too!)

Cost: The meal will be $25 USD including tax and gratuities. Please bring cash. (See menu below)

Where: Pedro’s Restaurant and Cantina – 3935 Freedom Circle, Santa Clara, CA 95054

How: RSVP through Eventbrite

Please note: Due to the historically high attendance for this event, Pedro’s has asked that each person pay in cash to simplify billing. Pedro’s can handle large groups, but we would like to give them an accurate count of attendees so that they can be adequately prepared.

Attendees:

1. Paul Vallee (Pythian)

2. Wagner Bianchi (Pythian)

3. Danil Zburivsky (Pythian)

4. Alex Gorbachev (Pythian)

5. Derek Downey (Pythian)

6. Chad Scheiter (Pythian)

7. Andrew Moore

8. Lawrence Schwartz

9. Tom Diederich

10. Erin O’Neill

11. Your name here!!

Looking forward to seeing you all at the event!

Categories: DBA Blogs

C14 OakTable World Las Vegas

Wed, 2014-03-26 12:55

If you haven’t yet made the decision to attend COLLABORATE 14 – IOUG Forum in Las Vegas taking place on 7-11 April, 2014 at the Venetian Hotel, this might just help you to make the call. You know you want to be there.

OakTable Network will be holding its OakTable World for the very first time during the COLLABORATE conference. While it’s a little last minute, IOUG was able to provide a room for us to use for the whole day, and we at OakTable quickly put the schedule together. The agenda was selected by the OakTable speakers based on topics they are really passionate about. As history shows, this is generally what you also want to hear about.

The history of OakTable World goes back to the underground event started by Mogens Nørgaard during Oracle OpenWorld somewhere around 2007-2009. After several years of success and growing popularity, the event became known as OakTable World, ran during OOW12 (link) and OOW13 (link), and was a huge success. Last year, we also ran OTWUK13 around the UKOUG TEBS 13 conference in Manchester. Needless to say, it was a success.

Thanks to the many good companies that sponsored the events all those years — you know who you are. This year, the sponsor is really IOUG, who managed to find a room for us. I’ll probably gather a few of the usual suspects to print special t-shirts again, so stay tuned for that update — I wanted to get the info out there ASAP so that people could plan on attending.

The up-to-date schedule is in Google Calendar. You can also sign up for the calendar feed using the XML or iCal feeds.

Embedded schedule is below:

Note that we have two hours of TED-style lightning talks, with or without slides. Each talk is 10-15 minutes in which the speaker shares an innovative perspective relevant to the database industry. The talks are a mix of technical and non-technical topics. The format was first attempted at the UKOUG conference in 2011 and 2012, I believe, and the talks were very popular and well attended.

You will hear about specific technologies or methods that are under-utilized, about provocative approaches to the database tasks we have at hand, about disruptive trends, and about totally non-technical things that will make you a better IT professional — stories of someone’s life or professional career. It’s usually entertaining and thought-provoking. I strongly recommend them.

As you can see, the speakers for the rest of the day are stellar. Frits Hoogland of Enkitec Europe fame comes to C14 all the way from Holland, and he will start the day by getting very, very deep inside Oracle execution code. Frits will demonstrate live use of the Linux GDB debugger to diagnose and troubleshoot very complex scenarios where you really need to understand the internals of certain database operations. You need your brain fresh for this session, so scheduling it first thing in the morning is the only reasonable time to reduce the risk of your brain being fried in the first 15 minutes. Hardcore folks will love this.

Next goes Kyle Hailey with his Agile Data story, teaching us modern techniques that make Oracle database deployment, cloning, and the like as easy as deploying a Linux RPM package with YUM. Following that (now moved to 3 PM), Graham Wood from Oracle will be presenting Unreal World Performance use cases, themed after the popular Real World Performance Tour (but without the two other amigos), with a fresh look at bad practices in managing connections to Oracle databases — time for a refresher on those very popular RWP YouTube videos.

After lunch is when the Lightning Talks will run, along with a very cool session by Jeff Needham of Scalabilities, where you can learn about disruptive trends in modern data warehousing and what customers are doing today (and more will be doing tomorrow) to scale their data processing in a cost-effective way. You will hear how vendors respond to Big Data technologies disrupting their traditional products, and how customers handle it (note that I’m trying hard to be politically correct here — come, it will be very refreshing). By the way, Jeff will give away copies of his book (Disruptive Possibilities) during his Lightning Talk as well.

Unfortunately, Jeff won’t be able to be here. We might still get some of his books. However, we have Gwen Shapira presenting on loading Twitter data into Hadoop and analyzing it, with a complete live on-the-spot demo!

A late addition: Karl Arao is also on the schedule!

Here is the schedule for the Lightning Talks. Talks might shift slightly between the two slots (and we still have one slot to be confirmed):

Lightning Talk Part I:

  • Kyle Hailey – “Importance of Data Visualization”
  • Jeff Needham – “Hadoop Myth-Busting (plus books giveaway)”
  • Tim Gorman – “Eagles and Seagulls” about being an independent IT consultant
  • Kellyn Pot’vin – “Finding Your Path”

Lightning Talk Part II:

  • Jonathan Gennick – “Seven Ways toward Better Writing”
  • Kellyn Pot’vin – “Snapclone with DBaaS in EM12c”
  • Graham Wood – How Hardware Timings Affect Your Oracle Database Performance
  • TBC

Lightning talk details are in the event descriptions in the calendar, if you open each event. Remember that there might be some last-minute changes, so I recommend simply signing up for the calendar feed on the mobile device you carry with you at the conference.

Also, the OakTable folks and other crowd will be hanging around that room for the whole day, so come over. I think it would be a good experiment to reintroduce a couple of things that Mogens had back in the old days:

  • FIVE-MINUTES-OF-FAME – Any attendee can get up and present a technical solution (5-minute time limit) to the group for admiration.
  • FIVE-MINUTES-OF-HELP – Any attendee can get up and pose a technical problem (5-minute time limit) to the group for help.

Well, that’s the news. If you are excited, post here in the comments that you are coming, and share this on Twitter, Facebook, LinkedIn and whatnot. Looking forward to seeing as many of you there as possible. As Tim Gorman said — “more fun than a barrel of drunken DBAs, is the un-conference formerly known as “Oracle Closed World“, now known as “Oak Table World“.” And this year we are in Vegas!

Categories: DBA Blogs

Watch for Pythian Speakers at Collaborate 14

Mon, 2014-03-24 07:50

This year COLLABORATE 14 is being held at The Venetian and Sands Expo Center in Las Vegas, April 7-11. Some Pythian folks will be attending and speaking at the event, so be sure to watch for their presentations:

Session Date | Start Time | Session Room | Session Title | Presenter Name
April 9, 2014 | 8:30 AM | Level 3, Lido 3101B | Database Private Clouds with Oracle Database 12c | Marc Fielding
April 9, 2014 | 8:30 AM | Level 3, Lido 3005 | DBA 101: Calling all New Database Administrators | René Antunez
April 9, 2014 | 4:30 PM | Sands, Level 1 – 309 | What’s New in Oracle E-Business Suite R12.2 for Database Administrators? | Vasu Balla
April 10, 2014 | 3:00 PM | Level 3, San Polo 3403 | Accelerate Your Exadata Deployment with the Skills You Already Have | Marc Fielding
April 10, 2014 | 9:45 AM | Level 3, Lido 3101B | 12c Multi-Tenancy and Exadata IORM: An Ideal Cloud Based Resource Management | Fahd Mirza
April 10, 2014 | 4:15 PM | Level 3, Murano 3306 | Thou Shalt Not Steal: Securing Your Infrastructure in the Age of Snowden | Paul Vallee
April 10, 2014 | 11:00 AM | Level 3, San Polo 3403 | My First 100 days with an Exadata | René Antunez
April 11, 2014 | 9:45 AM | Level 3, Lido 3005 | Ensuring Your Physical Standby is Usable | Michael Abbey
April 11, 2014 | 8:30 AM | Level 3, Lido 3103 | RMAN in 12c: The Next Generation | René Antunez
April 11, 2014 | 9:45 AM | Level 3, San Polo 3405 | Practical Machine Learning for DBAs | Alex Gorbachev
Categories: DBA Blogs

Vim::X – VimL is Eldritch, Let’s Write Perl!

Mon, 2014-03-24 07:49

Last week, I finally got around to writing a few macros to help with conflict resolution in Vim:


" conflict resolution - pick this one / option 1 / option 2
map ,. $?\v^[<=]{7}jy/\v^[=>]{7}?\v^[<]{7}"_d/\v^\>{7}"_ddP
map ,<  $?\v^[<]{7}<,>.
map ,>  $?\v^[<]{7}/\v^[=]{7}<,>. 
" ... or keep'em both
map ,m  $?\v^[<]{7}"_dd/\v[=]{7}cc/\v[>]{7}"_dd

With that, I can go from conflict to conflict and pick sides with the ease of two keystrokes, never having to manually delete those <<<<<<<, ======= and >>>>>>> lines again. Sweet, eh?

Now, any sane person would have stopped there. I found myself thinking it’d be nice to transform that line of garbage into a neat little function.

There is an obvious problem, though: my VimL-fu is pretty weak. However, my vim is always compiled with Perl support. Sure, the native interface is kinda sucky, but… maybe we can improve on that?

Interfacing Vim with Perl

That’s where Vim::X enters the picture (yes, I know, rather a poor name. Anybody have a better suggestion?). The module does two things:

  1. gives us a bunch of helper functions to interact with Vim as painlessly as possible.
  2. deals with all the fiddly bridgey things required to give us access, from Vim, to functions defined in Perl modules.
Putting the ‘V’ back in ‘DWIM’

Vim::X comes with a small, but growing, bag of helper functions, as well as with helper classes — Vim::X::Window, Vim::X::Buffer, Vim::X::Line — that provide nice wrappers to the Vim entities. I still have to document them all, but the implementation of my ‘ResolveConflict’ function should give you an idea of how to use them:


package Vim::X::Plugin::ResolveConflict;

use strict;
use warnings;

use Vim::X;

sub ResolveConflict {
        my $side = shift;

        my $here   = vim_cursor;
        my $mine   = $here->clone->rewind(qr/^<{7}/);
        my $midway = $mine->clone->ff( qr/^={7}/ );
        my $theirs = $midway->clone->ff( qr/^>{7}/ );

        $here = $side eq 'here'   ? $here
              : $side eq 'mine'   ? $mine
              : $side eq 'theirs' ? $theirs
              : $side eq 'both'   ? $midway
              : die "side '$side' is invalid"
              ;

        vim_delete( 
                # delete the marker
            $mine, $midway, $theirs, 
                # and whichever side we're not on
            ( $midway..$theirs ) x ($here < $midway), 
            ( $mine..$midway )   x ($here > $midway),
        );
};

1;

Sure, it’s more verbose than the original macros. But now we have a fighting chance of understanding what is going on. As is my habit, I am overloading the heck out of my objects. For example, the line objects will be seen as their line number, or their content, depending on the context. Evil? Probably. But it makes for nice, succinct code:



sub Shout {
    my $line = vim_cursor;
    $line <<= uc $line;
}

Fiddly bridgey things

This is where I expect a few ‘oooh’s and ‘aaaah’s. So we have ‘ResolveConflict’ in a Perl module. How do we make Vim see it?

First, you add a ‘:Vim’ attribute to the function:


sub ResolveConflict :Vim(args) {
    ...

Then, in your .vimrc:


" only if the modules aren't already in the path
perl push @INC, '/path/to/modules/';

perl use Vim::X::Plugin::ResolveConflict

map ,<  call ResolveConflict('mine')
map ,>  call ResolveConflict('theirs')
map ,.  call ResolveConflict('here')
map ,m  call ResolveConflict('both')

Isn’t that way more civilized than the usual dark invocations?

One more step down the rabbit hole

Once I had my new ‘ResolveConflict’ written, it goes without saying that I wanted to test it. At first, I wrote a vspec test suite:


describe 'basic'

    perl push @INC, './lib'
    perl use Vim::X::Plugin::ResolveConflict

    before
        new
        read conflict.txt
    end

    after
        close!
    end

    it 'here mine'
        normal 3G
        call ResolveConflict('here')

        Expect getline(1) == "a"
        Expect getline(2) == "b"
        Expect getline(3) == "c"
    end

    it 'here theirs'
        normal 6G
        call ResolveConflict('here')

        Expect getline(1) == "d"
        Expect getline(2) == "e"
        Expect getline(3) == "f"
    end

    it 'mine'
        normal 6G
        call ResolveConflict('mine')

        Expect getline(1) == "a"
        Expect getline(2) == "b"
        Expect getline(3) == "c"
    end

    it 'theirs'
        normal 6G
        call ResolveConflict('theirs')

        Expect getline(1) == "d"
        Expect getline(2) == "e"
        Expect getline(3) == "f"
    end

    it 'both'
        normal 6G
        call ResolveConflict('both')

        Expect getline(1) == "a"
        Expect getline(2) == "b"
        Expect getline(3) == "c"
        Expect getline(4) == "d"
        Expect getline(5) == "e"
        Expect getline(6) == "f"
    end

end

But then I found myself missing my good ol’ TAP. If only there was an interface to run those Perl modules within v–

oh.

So I changed the test suite to now look like:


package ResolveConflictTest;

use Vim::X;
use Vim::X::Plugin::ResolveConflict;

use Test::Class::Moose;

sub test_setup {
    vim_command( 'new', 'read conflict.txt' );
}

sub test_teardown {
    vim_command( 'close!' );
}

sub here_mine :Tests {
    vim_command( 'normal 3G' );
    vim_call( 'ResolveConflict', 'here' );

    is join( '', vim_lines(1..3) ) => 'abc', "here, mine";
    is vim_buffer->size => 3, "only 3 lines left";
};

sub here_theirs :Tests { 
    vim_command( 'normal 6G' );
    vim_call( 'ResolveConflict', 'here' );

    is join( '', vim_lines(1..3) ) => 'def';
    is vim_buffer->size => 3, "only 3 lines left";
};

sub mine :Tests {
    vim_call( 'ResolveConflict', 'mine' );

    is join( '', vim_lines(1..3) ) => 'abc';
    is vim_buffer->size => 3, "only 3 lines left";
};

sub theirs :Tests {
    vim_call( 'ResolveConflict', 'theirs' );

    is join( '', vim_lines(1..3) ) => 'def';
    is vim_buffer->size => 3, "only 3 lines left";
};

sub both :Tests {
    vim_call( 'ResolveConflict', 'both' );

    is join( '', vim_lines(1..6) ) => 'abcdef';
    is vim_buffer->size => 6, "only 6 lines left";
};

__PACKAGE__->new->runtests;

I also wrote a little vim_prove script to run the show:


#!perl -s

exec 'vim', qw/ -V -u NONE -i NONE -N -e -s /,
    ( map { 1; '-c' => "perl push \@INC, '$_'" } split ":", $I ),
    '-c', "perl do '$ARGV[0]' or die $@",
    '-c', "qall!";

Aaaand whatdyaknow:


$ perl bin/vim_prove -I=lib contrib/test.vim
#
# Running tests for ResolveConflictTest
#
    1..5
        ok 1
        ok 2 - only 6 lines left
        1..2
    ok 1 - both
        ok 1 - here, mine
        ok 2 - only 3 lines left
        1..2
    ok 2 - here_mine
        ok 1
        ok 2 - only 3 lines left
        1..2
    ok 3 - here_theirs
        ok 1
        ok 2 - only 3 lines left
        1..2
    ok 4 - mine
        ok 1
        ok 2 - only 3 lines left
        1..2
    ok 5 - theirs
ok 1 - ResolveConflictTest
What’s Next?

The current prototype is on GitHub. I’ll try to push it to CPAN once I have a little bit of documentation and a little more order in the code. But if you are interested, please, fork away, write plugins, and PR like there is no tomorrow.

Categories: DBA Blogs

RMAN – Validating Archivelog Backup

Fri, 2014-03-21 13:07

There are many posts out there about validating backups. However, none seem to address some recent concerns a client of mine had.

Currently, backup validation is performed once a week, and the question asked was: “How do we validate all archivelog backups?”

List Backups - [D]atafile and [A]rchivelog backups from Incremental Level 0/1

RMAN> list backup summary;

using target database control file instead of recovery catalog

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
43      B  A  A DISK        20-MAR-2014 21:02:30 1       1       YES        AINC0_THU20
44      B  0  A DISK        20-MAR-2014 21:02:39 1       1       YES        DINC0_THU20
45      B  0  A DISK        20-MAR-2014 21:02:51 1       1       YES        DINC0_THU20
46      B  0  A DISK        20-MAR-2014 21:02:56 1       1       YES        DINC0_THU20
47      B  0  A DISK        20-MAR-2014 21:02:59 1       1       YES        DINC0_THU20
48      B  0  A DISK        20-MAR-2014 21:03:00 1       1       YES        DINC0_THU20
49      B  0  A DISK        20-MAR-2014 21:03:04 1       1       YES        DINC0_THU20
50      B  A  A DISK        20-MAR-2014 21:03:07 1       1       YES        AINC0_THU20
51      B  F  A DISK        20-MAR-2014 21:03:11 1       1       NO         TAG20140320T210309
60      B  F  A DISK        21-MAR-2014 07:02:53 1       1       NO         TAG20140321T070249
61      B  A  A DISK        21-MAR-2014 11:27:47 1       1       YES        AINC1_FRI21
62      B  1  A DISK        21-MAR-2014 11:27:54 1       1       YES        DINC1_FRI21
63      B  1  A DISK        21-MAR-2014 11:27:55 1       1       YES        DINC1_FRI21
64      B  1  A DISK        21-MAR-2014 11:27:59 1       1       YES        DINC1_FRI21
65      B  1  A DISK        21-MAR-2014 11:28:00 1       1       YES        DINC1_FRI21
66      B  1  A DISK        21-MAR-2014 11:28:01 1       1       YES        DINC1_FRI21
67      B  1  A DISK        21-MAR-2014 11:28:06 1       1       YES        DINC1_FRI21
68      B  A  A DISK        21-MAR-2014 11:28:08 1       1       YES        AINC1_FRI21
69      B  F  A DISK        21-MAR-2014 11:28:14 1       1       NO         TAG20140321T112810

RMAN>

List Backups for Archivelog All: sequence 53 – 63 / SCN 1013038 – 1048901

RMAN> list backup of archivelog all;

List of Backup Sets
===================

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ --------------------
43      2.49M      DISK        00:00:00     20-MAR-2014 21:02:30
        BP Key: 43   Status: AVAILABLE  Compressed: YES  Tag: AINC0_THU20
        Piece Name: /oradata/backup/DB01_1470673955_20140320_21p3m7b6_1_1.inc0

  List of Archived Logs in backup set 43
  Thrd Seq     Low SCN    Low Time             Next SCN   Next Time
  ---- ------- ---------- -------------------- ---------- ---------
  1    53      1013038    20-MAR-2014 16:31:34 1019638    20-MAR-2014 20:58:44
  1    54      1019638    20-MAR-2014 20:58:44 1019722    20-MAR-2014 20:59:53
  1    55      1019722    20-MAR-2014 20:59:53 1019884    20-MAR-2014 21:01:19
  1    56      1019884    20-MAR-2014 21:01:19 1019955    20-MAR-2014 21:02:29

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ --------------------
50      10.00K     DISK        00:00:00     20-MAR-2014 21:03:07
        BP Key: 50   Status: AVAILABLE  Compressed: YES  Tag: AINC0_THU20
        Piece Name: /oradata/backup/DB01_1470673955_20140320_28p3m7cb_1_1.inc0

  List of Archived Logs in backup set 50
  Thrd Seq     Low SCN    Low Time             Next SCN   Next Time
  ---- ------- ---------- -------------------- ---------- ---------
  1    57      1019955    20-MAR-2014 21:02:29 1019981    20-MAR-2014 21:03:05

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ --------------------
61      3.27M      DISK        00:00:01     21-MAR-2014 11:27:47
        BP Key: 61   Status: AVAILABLE  Compressed: YES  Tag: AINC1_FRI21
        Piece Name: /oradata/backup/DB01_1470673955_20140321_2jp3nq1i_1_1.inc1

  List of Archived Logs in backup set 61
  Thrd Seq     Low SCN    Low Time             Next SCN   Next Time
  ---- ------- ---------- -------------------- ---------- ---------
  1    58      1019981    20-MAR-2014 21:03:05 1020108    20-MAR-2014 21:04:05
  1    59      1020108    20-MAR-2014 21:04:05 1040311    21-MAR-2014 06:48:37
  1    60      1040311    21-MAR-2014 06:48:37 1041387    21-MAR-2014 07:01:35
  1    61      1041387    21-MAR-2014 07:01:35 1041425    21-MAR-2014 07:02:46
  1    62      1041425    21-MAR-2014 07:02:46 1048879    21-MAR-2014 11:27:44

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ --------------------
68      2.00K      DISK        00:00:00     21-MAR-2014 11:28:08
        BP Key: 68   Status: AVAILABLE  Compressed: YES  Tag: AINC1_FRI21
        Piece Name: /oradata/backup/DB01_1470673955_20140321_2qp3nq28_1_1.inc1

  List of Archived Logs in backup set 68
  Thrd Seq     Low SCN    Low Time             Next SCN   Next Time
  ---- ------- ---------- -------------------- ---------- ---------
  1    63      1048879    21-MAR-2014 11:27:44 1048901    21-MAR-2014 11:28:07

RMAN>

Validating with “archivelog all” does not work, since RMAN still expects backups of archived logs that have since been deleted.

RMAN> restore validate archivelog all;

Starting restore at 21-MAR-2014 09:45:09
using channel ORA_DISK_1

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 03/21/2014 09:45:09
RMAN-06026: some targets not found - aborting restore
RMAN-06025: no backup of archived log for thread 1 with sequence 52 and starting SCN of 998131 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 51 and starting SCN of 998032 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 50 and starting SCN of 997986 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 49 and starting SCN of 997779 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 48 and starting SCN of 974617 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 47 and starting SCN of 974437 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 46 and starting SCN of 974348 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 45 and starting SCN of 973251 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 44 and starting SCN of 943517 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 43 and starting SCN of 942296 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 42 and starting SCN of 942262 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 41 and starting SCN of 941967 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 40 and starting SCN of 917029 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 39 and starting SCN of 916561 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 38 and starting SCN of 908363 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 37 and starting SCN of 907850 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 36 and starting SCN of 875257 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 35 and starting SCN of 837127 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 34 and starting SCN of 791810 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 33 and starting SCN of 749949 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 32 and starting SCN of 749893 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 31 and starting SCN of 749681 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 30 and starting SCN of 712625 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 29 and starting SCN of 672466 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 28 and starting SCN of 646365 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 27 and starting SCN of 616449 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 26 and starting SCN of 581487 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 25 and starting SCN of 540184 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 24 and starting SCN of 519475 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 23 and starting SCN of 494335 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 22 and starting SCN of 470043 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 21 and starting SCN of 432577 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 20 and starting SCN of 412641 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 19 and starting SCN of 353256 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 18 and starting SCN of 306546 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 17 and starting SCN of 306515 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 16 and starting SCN of 306404 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 15 and starting SCN of 306211 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 14 and starting SCN of 280433 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 13 and starting SCN of 253917 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 12 and starting SCN of 227620 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 11 and starting SCN of 227291 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 10 and starting SCN of 226756 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 9 and starting SCN of 226342 found to restore

Validating using an arbitrary date and time does not meet the requirement either.

RMAN> restore archivelog from time "TRUNC(sysdate)" until time "sysdate";

Starting restore at 21-MAR-2014 09:46:32
using channel ORA_DISK_1

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=59
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=60
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140321_2ap3naeg_1_1.inc1
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140321_2ap3naeg_1_1.inc1 tag=AINC1_FRI21
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=61
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140321_2hp3nagn_1_1.inc1
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140321_2hp3nagn_1_1.inc1 tag=AINC1_FRI21
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 21-MAR-2014 09:46:37

RMAN> restore archivelog from time "TRUNC(sysdate-1)" until time "TRUNC(sysdate)";

Starting restore at 21-MAR-2014 09:51:21
using channel ORA_DISK_1

archived log for thread 1 with sequence 59 is already on disk as file /oradata/fra/DB01/archivelog/2014_03_21/o1_mf_1_59_9lrv79h1_.arc
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 03/21/2014 09:51:21
RMAN-06026: some targets not found - aborting restore
RMAN-06025: no backup of archived log for thread 1 with sequence 52 and starting SCN of 998131 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 51 and starting SCN of 998032 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 50 and starting SCN of 997986 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 49 and starting SCN of 997779 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 48 and starting SCN of 974617 found to restore

RMAN>

Validating from scn 1013038 until scn 1048901 works, but where is that information stored so that it can be automated in a shell script?

RMAN>  restore validate archivelog from scn 1013038 until scn 1048901;

Starting restore at 21-MAR-2014 11:34:02
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=107 device type=DISK

channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140320_21p3m7b6_1_1.inc0
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140320_21p3m7b6_1_1.inc0 tag=AINC0_THU20
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140320_28p3m7cb_1_1.inc0
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140320_28p3m7cb_1_1.inc0 tag=AINC0_THU20
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140321_2jp3nq1i_1_1.inc1
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140321_2jp3nq1i_1_1.inc1 tag=AINC1_FRI21
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140321_2qp3nq28_1_1.inc1
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140321_2qp3nq28_1_1.inc1 tag=AINC1_FRI21
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
Finished restore at 21-MAR-2014 11:34:06

RMAN>

The view v$backup_archivelog_summary provides the information needed.

ARROW:(SYS@db01):PRIMARY> select min_first_change#,max_next_change# from v$backup_archivelog_summary;

MIN_FIRST_CHANGE# MAX_NEXT_CHANGE#
----------------- ----------------
          1013038          1048901

ARROW:(SYS@db01):PRIMARY>

Run validate_archivelog.sh

$ ./validate_archivelog.sh

Recovery Manager: Release 11.2.0.4.0 - Production on Fri Mar 21 11:41:44 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

RMAN>
echo set on

RMAN> connect target;
connected to target database: DB01 (DBID=1470673955)

RMAN> restore validate archivelog from scn        1013038 until scn      1048901;
Starting restore at 21-MAR-2014 11:41:45
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=107 device type=DISK

channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140320_21p3m7b6_1_1.inc0
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140320_21p3m7b6_1_1.inc0 tag=AINC0_THU20
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140320_28p3m7cb_1_1.inc0
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140320_28p3m7cb_1_1.inc0 tag=AINC0_THU20
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140321_2jp3nq1i_1_1.inc1
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140321_2jp3nq1i_1_1.inc1 tag=AINC1_FRI21
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140321_2qp3nq28_1_1.inc1
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140321_2qp3nq28_1_1.inc1 tag=AINC1_FRI21
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
Finished restore at 21-MAR-2014 11:41:49

RMAN> exit

Recovery Manager complete.

Script validate_archivelog.sh

#!/bin/sh
export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
min_scn=`sqlplus -SL "/ as sysdba" <<END
whenever sqlerror exit sql.sqlcode
whenever oserror exit 1
set pages 0 head off veri off feed off term off echo off pause off numw 32
select min_first_change# from v\\$backup_archivelog_summary;
exit
END
`
if [ "$?" != "0" ]; then
echo "*** ERROR: $min_scn"
exit 1;
fi
max_scn=$(sqlplus -SL "/ as sysdba" <<END
whenever sqlerror exit sql.sqlcode
whenever oserror exit 1
set pages 0 head off veri off feed off term off echo off pause off numw 32
select max_next_change# from v\$backup_archivelog_summary;
exit
END
)
if [ "$?" != "0" ]; then
echo "*** ERROR: $max_scn"
exit 1;
fi
rman <<END
set echo on
connect target;
restore validate archivelog from scn ${min_scn} until scn ${max_scn};
exit
END
if [ "$?" != "0" ]; then
echo "*** ERROR: RMAN restore validate"
exit 1;
fi
exit
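One caveat with capturing the SCNs this way: if sqlplus emits an ORA- message (or banner noise) while still exiting 0, the captured variable is not a plain number and RMAN fails later with a confusing parse error. A small numeric guard, shown here as a hypothetical helper rather than part of the original script, can catch that before RMAN is ever invoked:

```shell
#!/bin/sh
# Hypothetical guard: confirm a captured value is a plain positive integer
# before interpolating it into an RMAN command.
is_scn() {
  case "$1" in
    ''|*[!0-9]*) return 1 ;;  # empty, or contains a non-digit character
    *) return 0 ;;
  esac
}

# Demonstration values: one good SCN, one error string leaked by sqlplus.
min_scn=1013038
max_scn="ORA-01034: ORACLE not available"

if is_scn "$min_scn"; then
  echo "min_scn ok: $min_scn"
else
  echo "*** ERROR: bad min_scn: $min_scn"
fi

if is_scn "$max_scn"; then
  echo "max_scn ok: $max_scn"
else
  echo "*** ERROR: bad max_scn: $max_scn"
fi
```

Dropping a call such as `is_scn "$min_scn" || exit 1` after each sqlplus capture in validate_archivelog.sh would make the failure mode explicit instead of surfacing inside RMAN.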

UPDATE: Based on a recent comment, the script has been improved to reduce calls to the database.

Run validate_archivelog2.sh

$ ./validate_archivelog2.sh

Recovery Manager: Release 11.2.0.4.0 - Production on Fri Mar 21 13:40:35 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

RMAN>
echo set on

RMAN> connect target;
connected to target database: DB01 (DBID=1470673955)

RMAN> restore validate archivelog from scn 1013038 until scn 1048901;
Starting restore at 21-MAR-2014 13:40:36
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=15 device type=DISK

channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140320_21p3m7b6_1_1.inc0
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140320_21p3m7b6_1_1.inc0 tag=AINC0_THU20
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140320_28p3m7cb_1_1.inc0
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140320_28p3m7cb_1_1.inc0 tag=AINC0_THU20
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140321_2jp3nq1i_1_1.inc1
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140321_2jp3nq1i_1_1.inc1 tag=AINC1_FRI21
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140321_2qp3nq28_1_1.inc1
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140321_2qp3nq28_1_1.inc1 tag=AINC1_FRI21
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
Finished restore at 21-MAR-2014 13:40:40

RMAN> exit

Recovery Manager complete.

Script validate_archivelog2.sh

#!/bin/sh
export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"

cmd=`sqlplus -SL "/ as sysdba" <<END
whenever sqlerror exit sql.sqlcode
whenever oserror exit 1
set pages 0 head off veri off feed off term off echo off pause off numw 32
select 'restore validate archivelog from scn '||min_first_change#||' until scn '||max_next_change# from v\\$backup_archivelog_summary;
exit
END
`
if [ "$?" != "0" ]; then
echo "*** ERROR: $cmd"
exit 1;
fi
rman <<END
set echo on
connect target;
${cmd};
exit
END
if [ "$?" != "0" ]; then
echo "*** ERROR: RMAN restore validate"
exit 1;
fi
exit
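Since the whole RMAN statement is now built inside the database, the same leakage risk applies to $cmd itself: if sqlplus returns an error message instead of the query result, it gets piped straight into RMAN. A minimal shape check, sketched here as a hypothetical addition to the script, rejects anything that does not look like the expected restore statement:

```shell
#!/bin/sh
# Hypothetical sanity check: only hand the generated command to RMAN when it
# matches the shape of the restore statement we expect; anything else is
# probably an ORA- error that sqlplus printed instead of the query result.
looks_like_restore_cmd() {
  case "$1" in
    "restore validate archivelog from scn "*" until scn "*) return 0 ;;
    *) return 1 ;;
  esac
}

# Demonstration value in the expected shape.
cmd="restore validate archivelog from scn 1013038 until scn 1048901"
if looks_like_restore_cmd "$cmd"; then
  echo "OK: $cmd"
else
  echo "*** ERROR: unexpected sqlplus output: $cmd"
  exit 1
fi
```

In validate_archivelog2.sh this would slot in between the sqlplus capture and the rman heredoc, as an extra line alongside the existing $? check.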

Categories: DBA Blogs

Log Buffer #364, A Carnival of the Vanities for DBAs

Fri, 2014-03-21 07:32

For some, blogging brings the same pleasure as lying under the swaying palms, where on the white sandy beaches the turquoise waters kiss their feet. This passion oozes into their posts, and this edition of Log Buffer is a manifestation of exactly that. Enjoy.

Oracle:

DBRM for 12c Container Database in Multi-tenant Environment.

A discussion of ORA-01442 error.

Oracle Developer Diversity Realized: How to Get Started in a Career in Tech.

Simplified UI and the Oracle User Experience in Oracle Applications Cloud Release 8.

Issues with using latest release of ODM.

SQL Server:

Easily grant exec permission to all stored procedures to a database role in SQL 2005 or higher.

The PoSh DBA – Specifying and Gathering Performance Counters.

Stairway to T-SQL: Beyond The Basics Level 4: Using Views to Simplify Your Query.

Microsoft SQL Server 2014 released to manufacturing.

Dynamic Management Views (DMVs) are a significant and valuable addition to the DBA’s troubleshooting armory, laying bare previously unavailable information regarding the under-the-covers activity of your database sessions and transactions.

MySQL:

The MySQL 5.6 certification exams (Developer and DBA) are now GA!

A review of several MySQL developer tools.

Clustering SugarCRM with MySQL Galera for High Availability and Performance.

Buongiorno Relies on MySQL Enterprise Edition to Power its Mobile Services.

How to log slow queries on Slave in MySQL 5.0 with pt-query-digest.

Categories: DBA Blogs