
DBA Blogs

RMAN -- 4 : Recovering from an Incomplete Restore

Hemant K Chitale - 13 hours 36 min ago
What do you do if a RESTORE fails mid-way? Do you need to rerun the whole restore? If it is a very large database, that could take many hours.

RMAN is "smart" enough to detect datafiles that have been restored and not re-attempt a restore.

Here, I begin a database restore.

 
RMAN> restore controlfile from autobackup;

Starting restore at 04-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=19 device type=DISK

recovery area destination: /NEW_FS/oracle/FRA
database name (or database unique name) used for search: HEMANTDB
channel ORA_DISK_1: AUTOBACKUP /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_04/o1_mf_s_884175832_bsgqysyq_.bkp found in the recovery area
AUTOBACKUP search with format "%F" not attempted because DBID was not set
channel ORA_DISK_1: restoring control file from AUTOBACKUP /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_04/o1_mf_s_884175832_bsgqysyq_.bkp
channel ORA_DISK_1: control file restore from AUTOBACKUP complete
output file name=/home/oracle/app/oracle/oradata/orcl/control01.ctl
output file name=/home/oracle/app/oracle/flash_recovery_area/orcl/control02.ctl
Finished restore at 04-JUL-15

RMAN>
RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

RMAN>
RMAN> restore database;

Starting restore at 04-JUL-15
Starting implicit crosscheck backup at 04-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=19 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=21 device type=DISK
Crosschecked 8 objects
Finished implicit crosscheck backup at 04-JUL-15

Starting implicit crosscheck copy at 04-JUL-15
using channel ORA_DISK_1
using channel ORA_DISK_2
Finished implicit crosscheck copy at 04-JUL-15

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_628_bsgrjztp_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgrk8od_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_630_bsgrk48j_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgrk49w_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_629_bsgrk0tw_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_04/o1_mf_s_884175832_bsgqysyq_.bkp

using channel ORA_DISK_1
using channel ORA_DISK_2

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /home/oracle/app/oracle/oradata/orcl/system01.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjj_.bkp
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00004 to /home/oracle/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqsccg_.bkp
channel ORA_DISK_2: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqsccg_.bkp tag=TAG20150704T121859
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:02:34
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00003 to /home/oracle/app/oracle/oradata/orcl/undotbs01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqwt4s_.bkp
channel ORA_DISK_2: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqwt4s_.bkp tag=TAG20150704T121859
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:35
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00002 to /home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjl_.bkp
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03004: fatal error during execution of command
ORA-01092: ORACLE instance terminated. Disconnection forced
ORACLE error from target database:
ORA-03135: connection lost contact
Process ID: 3777
Session ID: 1 Serial number: 9

[oracle@localhost ~]$

After restoring a few datafiles, the restore failed when the session lost its connection to the database (the server or the database instance crashed). Since the controlfile has already been restored, I can bring the database up in MOUNT mode and then re-attempt a RESTORE DATABASE.

[oracle@localhost ~]$ rman target sys/oracle@orcl

Recovery Manager: Release 11.2.0.2.0 - Production on Sat Jul 4 12:56:41 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN>
RMAN> restore database;

Starting restore at 04-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=21 device type=DISK

skipping datafile 3; already restored to file /home/oracle/app/oracle/oradata/orcl/undotbs01.dbf
skipping datafile 4; already restored to file /home/oracle/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /home/oracle/app/oracle/oradata/orcl/system01.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjj_.bkp
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00002 to /home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjl_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjj_.bkp tag=TAG20150704T121859
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:02:36
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00010 to /home/oracle/app/oracle/oradata/orcl/APEX_2614203650434107.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxovh_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxovh_.bkp tag=TAG20150704T121859
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00005 to /home/oracle/app/oracle/oradata/orcl/example01.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxjv6_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxjv6_.bkp tag=TAG20150704T121859
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_DISK_2: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjl_.bkp tag=TAG20150704T121859
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:04:02
Finished restore at 04-JUL-15

RMAN>

RMAN detects that datafiles 3 (undotbs01.dbf) and 4 (users01.dbf) had already been restored, and skips them. If you look at the previous RESTORE run, you can see that these were restored by channel ORA_DISK_2. The first channel, ORA_DISK_1, had started restoring system01.dbf but hadn't completed it when the restore crashed, so the restore of datafile 1 (system01.dbf) had to be redone.

(Another thing to note: Oracle doesn't necessarily restore datafiles in the order of file_id (file#)! There really is no ORDER BY for a RESTORE.)
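If you want to verify for yourself which datafiles are already consistent on disk before re-running the RESTORE, you can query the datafile headers while the database is in MOUNT mode. A quick sketch, not part of the original run:

SQL> select file#, checkpoint_change#, fuzzy, status from v$datafile_header order by file#;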

RMAN> recover database;

Starting recover at 04-JUL-15
using channel ORA_DISK_1
using channel ORA_DISK_2

starting media recovery

archived log for thread 1 with sequence 628 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_628_bsgrjztp_.arc
archived log for thread 1 with sequence 629 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_629_bsgrk0tw_.arc
archived log for thread 1 with sequence 630 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_630_bsgrk48j_.arc
archived log for thread 1 with sequence 631 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgrk49w_.arc
archived log for thread 1 with sequence 632 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgrk8od_.arc
archived log for thread 1 with sequence 633 is already on disk as file /home/oracle/app/oracle/oradata/orcl/redo03.log
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_628_bsgrjztp_.arc thread=1 sequence=628
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_629_bsgrk0tw_.arc thread=1 sequence=629
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_630_bsgrk48j_.arc thread=1 sequence=630
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgrk49w_.arc thread=1 sequence=631
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgrk8od_.arc thread=1 sequence=632
archived log file name=/home/oracle/app/oracle/oradata/orcl/redo03.log thread=1 sequence=633
media recovery complete, elapsed time: 00:00:02
Finished recover at 04-JUL-15

RMAN>
RMAN> alter database open resetlogs;

database opened

RMAN>


You could also note, as an aside, that Log Sequence 633 was an online redo log file. RMAN automatically verifies that the online redo log files designated by the controlfile are present and uses them.
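While the database was still in MOUNT mode, before the OPEN RESETLOGS, one could have confirmed which online log group held sequence 633 with a query along these lines (a sketch, not from the original run):

SQL> select group#, thread#, sequence#, status from v$log;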

.
.
.

Categories: DBA Blogs

Log Buffer #430: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-07-03 12:56

This Log Buffer Edition cuts through the crowd and picks some of the outstanding blog posts from Oracle, SQL Server and MySQL.


Oracle:

  • Continuous Delivery (CD) is a software engineering approach in which teams keep producing valuable software in short cycles and ensure that the software can be reliably released at any time.
  • Query existing HBase tables with SQL using Apache Phoenix.
  • Even though WebLogic with Active GridLink is Oracle’s suggested approach to deploy Java applications that use Oracle Real Application Clusters (RAC), there might be scenarios in which you can’t make that choice (e.g. certification issues, licensing, library dependency, etc.).
  • OSB & MTOM: When to use Include Binary Data by Reference or Value.
  • Ever used SoapUI to test services on multiple environments? Then you probably ran in to the job of ever changing the endpoints to the hosts of the particular environment; development, test, acceptance, production (although I expect you wouldn’t use SoapUI against a prod-env). This is not that hard if you have only one service endpoint in the project.

SQL Server:

  • Using DAX to create SSRS reports: The Basics.
  • Getting to know your customers better – cohort analysis and RFM segmentation in R.
  • Using the T-SQL PERCENTILE Analytic Functions in SQL Server 2000, 2005 and 2008.
  • Schema-Based Access Control for SQL Server Databases.
  • How to Fix a Corrupt MSDB SQL Server Database.

MySQL:

  • MySQL Enterprise Audit – parsing audit information from log files, inserting into a MySQL table.
  • Proposal to deprecate MySQL INTEGER display width and ZEROFILL.
  • Using Cgroups to Limit MySQL and MongoDB memory usage.
  • Slave election is a popular HA architecture; the first MySQL/MariaDB toolkit to manage switchover and failover in a correct way was introduced by Yoshinori Matsunobu in MHA.
  • Setting up environments, starting processes, and monitoring these processes on multiple machines can be time consuming and error prone.

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

Categories: DBA Blogs

Query existing HBase tables with SQL using Apache Phoenix

Kubilay Çilkara - Thu, 2015-07-02 13:25
Spending a bit more time with Apache Phoenix and re-reading my previous post, I realised that you can use it to query existing HBase tables: that is, tables created directly in HBase (the columnar NoSQL database in Hadoop), not tables created using Apache Phoenix.

I think this is cool as it gives you the ability to use SQL on an HBase table.

To test this, let's say you log in to HBase and create an HBase table like this:

> create 'table2', {NAME=>'cf1', VERSIONS => 5}

table2 is a simple table in HBase with one column family, cf1. Now let's put some data into it:

> put 'table2', 'row1', 'cf1:column1', 'Hello SQL!'

Then maybe add another row:

> put 'table2', 'row4', 'cf1:column1', 'London'

Now, in Phoenix, all you have to do is create a database view for this table and query it with SQL. The view will be read-only. How cool is that? You don't even need to physically create the table in Phoenix, move the data, or convert it; a view is sufficient, and via Phoenix you can query the HBase table with SQL.

In Phoenix you create the view for table2 using the same name. As you can see below, the DDL used to create the view is case sensitive, and since the HBase table name was created in lower case, you have to put the name between double quotes.

So log in to Phoenix and create the "table2" view like this:

> create view "table2" ( pk VARCHAR PRIMARY KEY, "cf1"."column1" VARCHAR );

And here is how you then query it in Phoenix:


[Screenshot: SQL Query on Phoenix]
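The screenshot itself hasn't survived here, but based on the two rows inserted above, the query and its sqlline output would look roughly like this (illustrative, not captured from the original session):

0: jdbc:phoenix:localhost> select * from "table2";
+-------+-------------+
|  PK   |   column1   |
+-------+-------------+
| row1  | Hello SQL!  |
| row4  | London      |
+-------+-------------+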
Tremendous potential here: imagine all those existing HBase tables which you can now query with SQL. Moreover, you can point your Business Intelligence tools, Reporting tools and other tools which work with SQL at HBase and query it as if it were just another SQL database.

A solution worth investigating further? It definitely got me blogging in the evenings again.

To find out more about Apache Phoenix visit their project page https://phoenix.apache.org/



Categories: DBA Blogs

Table Recovery in #Oracle 12c

The Oracle Instructor - Thu, 2015-07-02 03:18

You can now restore single tables from backup! It is a simple command, although it triggers a lot of work by RMAN. See it as an enhancement over a ‘normal’ Point In Time Recovery:

[Figure: Point In Time Recovery]

After a full restore from a sufficiently old backup, archived logs are applied forward towards the present, stopping just before the logical error. Then a new incarnation comes up (with RESETLOGS) and the whole database is as it was at that time. But what if it is only a dropped table that needs to be recovered? Enter the 12c new feature:

[Figure: Table Recovery]

Above is what RMAN does upon Table Recovery. The restore is done to the auxiliary destination while the database keeps running as it is. The new incarnation is there only temporarily, just to export the dropped table from; afterwards, it is removed. RMAN will then import the table back into the still running database, unless you say otherwise with the NOTABLEIMPORT clause. So it is a huge effort for the system to go through, in spite of the simple RMAN command:

 

SQL> select count(*) from sales;

  COUNT(*)
----------
  10000000

SQL> select sysdate from dual;

SYSDATE
-------------------
2015-07-02 09:33:37

SQL> drop table sales purge;

Table dropped.

Oops – that was a mistake! And I can’t simply say flashback table sales to before drop because of the purge. RMAN to the rescue!

[oracle@uhesse ~]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Thu Jul 2 09:34:35 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PRIMA (DBID=2113606181)

RMAN> list backup of database;

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
1       Full    142.13M    DISK        00:01:45     2015-07-01 17:50:32
        BP Key: 1   Status: AVAILABLE  Compressed: YES  Tag: TAG20150701T174847
        Piece Name: /u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp
  List of Datafiles in backup set 1
  File LV Type Ckp SCN    Ckp Time            Name
  ---- -- ---- ---------- ------------------- ----
  1       Full 532842     2015-07-01 17:48:47 /u01/app/oracle/oradata/prima/system01.dbf
  2       Full 532842     2015-07-01 17:48:47 /u01/app/oracle/oradata/prima/sysaux01.dbf
  3       Full 532842     2015-07-01 17:48:47 /u01/app/oracle/oradata/prima/undotbs01.dbf
  4       Full 532842     2015-07-01 17:48:47 /u01/app/oracle/oradata/prima/users01.dbf

RMAN> host 'mkdir /tmp/auxi';

host command complete

RMAN> recover table adam.sales until time '2015-07-02 09:33:00' auxiliary destination '/tmp/auxi';

Starting recover at 2015-07-02 09:35:54
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified Point-in-Time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1

Creating automatic instance, with SID='tDtf'

initialization parameters used for automatic instance:
db_name=PRIMA
db_unique_name=tDtf_pitr_PRIMA
compatible=12.1.0.2
db_block_size=8192
db_files=200
diagnostic_dest=/u01/app/oracle
_system_trig_enabled=FALSE
sga_target=1512M
processes=200
db_create_file_dest=/tmp/auxi
log_archive_dest_1='location=/tmp/auxi'
#No auxiliary parameter file used


starting up automatic instance PRIMA

Oracle instance started

Total System Global Area    1593835520 bytes

Fixed Size                     2924880 bytes
Variable Size                402656944 bytes
Database Buffers            1174405120 bytes
Redo Buffers                  13848576 bytes
Automatic instance created

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# restore the controlfile
restore clone controlfile;

# mount the controlfile
sql clone 'alter database mount clone database';

# archive current online log
sql 'alter system archive log current';
}
executing Memory Script

executing command: SET until clause

Starting restore at 2015-07-02 09:36:21
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=3 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u02/fra/PRIMA/backupset/2015_07_01/o1_mf_ncsnf_TAG20150701T174847_bs832pht_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fra/PRIMA/backupset/2015_07_01/o1_mf_ncsnf_TAG20150701T174847_bs832pht_.bkp tag=TAG20150701T174847
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/tmp/auxi/PRIMA/controlfile/o1_mf_bs9thps1_.ctl
Finished restore at 2015-07-02 09:36:23

sql statement: alter database mount clone database

sql statement: alter system archive log current

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile  1 to new;
set newname for clone datafile  3 to new;
set newname for clone datafile  2 to new;
set newname for clone tempfile  1 to new;
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  1, 3, 2;

switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /tmp/auxi/PRIMA/datafile/o1_mf_temp_%u_.tmp in control file

Starting restore at 2015-07-02 09:36:32
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /tmp/auxi/PRIMA/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /tmp/auxi/PRIMA/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00002 to /tmp/auxi/PRIMA/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp tag=TAG20150701T174847
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:36
Finished restore at 2015-07-02 09:37:08

datafile 1 switched to datafile copy
input datafile copy RECID=4 STAMP=883993028 file name=/tmp/auxi/PRIMA/datafile/o1_mf_system_bs9tj1fk_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=5 STAMP=883993028 file name=/tmp/auxi/PRIMA/datafile/o1_mf_undotbs1_bs9tj1hw_.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=6 STAMP=883993028 file name=/tmp/auxi/PRIMA/datafile/o1_mf_sysaux_bs9tj1jd_.dbf

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  3 online";
sql clone "alter database datafile  2 online";
# recover and open database read only
recover clone database tablespace  "SYSTEM", "UNDOTBS1", "SYSAUX";
sql clone 'alter database open read only';
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile  1 online

sql statement: alter database datafile  3 online

sql statement: alter database datafile  2 online

Starting recover at 2015-07-02 09:37:09
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 13 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_13_bs836h1p_.arc
archived log for thread 1 with sequence 14 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_14_bs836lv2_.arc
archived log for thread 1 with sequence 15 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_15_bs9mog63_.arc
archived log for thread 1 with sequence 16 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_16_bs9mpsqo_.arc
archived log for thread 1 with sequence 17 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_17_bs9n281y_.arc
archived log for thread 1 with sequence 18 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_18_bs9n360t_.arc
archived log for thread 1 with sequence 19 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_19_bs9n3p5r_.arc
archived log for thread 1 with sequence 20 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_20_bs9n46od_.arc
archived log for thread 1 with sequence 21 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_21_bs9n4l4j_.arc
archived log for thread 1 with sequence 22 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_22_bs9n512c_.arc
archived log for thread 1 with sequence 23 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_23_bs9p5m15_.arc
archived log for thread 1 with sequence 24 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_24_bs9p6qn7_.arc
archived log for thread 1 with sequence 25 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_25_bs9plfkc_.arc
archived log for thread 1 with sequence 26 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_26_bs9pls8h_.arc
archived log for thread 1 with sequence 27 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_27_bs9pm0db_.arc
archived log for thread 1 with sequence 28 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_28_bs9pm70g_.arc
archived log for thread 1 with sequence 29 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_29_bs9pmk0c_.arc
archived log for thread 1 with sequence 30 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_30_bs9pmrrj_.arc
archived log for thread 1 with sequence 31 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_31_bs9sq00g_.arc
archived log for thread 1 with sequence 32 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_32_bs9sqzgd_.arc
archived log for thread 1 with sequence 33 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_33_bs9t4fq8_.arc
archived log for thread 1 with sequence 34 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_34_bs9t4vyr_.arc
archived log for thread 1 with sequence 35 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_35_bs9t593c_.arc
archived log for thread 1 with sequence 36 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_36_bs9t5htq_.arc
archived log for thread 1 with sequence 37 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_37_bs9t5q3h_.arc
archived log for thread 1 with sequence 38 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_38_bs9t5yqj_.arc
archived log for thread 1 with sequence 39 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_39_bs9tgttq_.arc
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_13_bs836h1p_.arc thread=1 sequence=13
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_14_bs836lv2_.arc thread=1 sequence=14
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_15_bs9mog63_.arc thread=1 sequence=15
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_16_bs9mpsqo_.arc thread=1 sequence=16
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_17_bs9n281y_.arc thread=1 sequence=17
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_18_bs9n360t_.arc thread=1 sequence=18
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_19_bs9n3p5r_.arc thread=1 sequence=19
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_20_bs9n46od_.arc thread=1 sequence=20
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_21_bs9n4l4j_.arc thread=1 sequence=21
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_22_bs9n512c_.arc thread=1 sequence=22
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_23_bs9p5m15_.arc thread=1 sequence=23
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_24_bs9p6qn7_.arc thread=1 sequence=24
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_25_bs9plfkc_.arc thread=1 sequence=25
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_26_bs9pls8h_.arc thread=1 sequence=26
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_27_bs9pm0db_.arc thread=1 sequence=27
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_28_bs9pm70g_.arc thread=1 sequence=28
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_29_bs9pmk0c_.arc thread=1 sequence=29
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_30_bs9pmrrj_.arc thread=1 sequence=30
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_31_bs9sq00g_.arc thread=1 sequence=31
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_32_bs9sqzgd_.arc thread=1 sequence=32
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_33_bs9t4fq8_.arc thread=1 sequence=33
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_34_bs9t4vyr_.arc thread=1 sequence=34
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_35_bs9t593c_.arc thread=1 sequence=35
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_36_bs9t5htq_.arc thread=1 sequence=36
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_37_bs9t5q3h_.arc thread=1 sequence=37
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_38_bs9t5yqj_.arc thread=1 sequence=38
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_39_bs9tgttq_.arc thread=1 sequence=39
media recovery complete, elapsed time: 00:01:00
Finished recover at 2015-07-02 09:38:11

sql statement: alter database open read only

contents of Memory Script:
{
   sql clone "create spfile from memory";
   shutdown clone immediate;
   startup clone nomount;
   sql clone "alter system set  control_files =
  ''/tmp/auxi/PRIMA/controlfile/o1_mf_bs9thps1_.ctl'' comment=
 ''RMAN set'' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
# mount database
sql clone 'alter database mount clone database';
}
executing Memory Script

sql statement: create spfile from memory

database closed
database dismounted
Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area    1593835520 bytes

Fixed Size                     2924880 bytes
Variable Size                419434160 bytes
Database Buffers            1157627904 bytes
Redo Buffers                  13848576 bytes

sql statement: alter system set  control_files =   ''/tmp/auxi/PRIMA/controlfile/o1_mf_bs9thps1_.ctl'' comment= ''RMAN set'' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area    1593835520 bytes

Fixed Size                     2924880 bytes
Variable Size                419434160 bytes
Database Buffers            1157627904 bytes
Redo Buffers                  13848576 bytes

sql statement: alter database mount clone database

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# set destinations for recovery set and auxiliary set datafiles
set newname for datafile  4 to new;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  4;

switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

Starting restore at 2015-07-02 09:39:11
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=12 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00004 to /tmp/auxi/TDTF_PITR_PRIMA/datafile/o1_mf_users_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp tag=TAG20150701T174847
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:35
Finished restore at 2015-07-02 09:39:47

datafile 4 switched to datafile copy
input datafile copy RECID=8 STAMP=883993187 file name=/tmp/auxi/TDTF_PITR_PRIMA/datafile/o1_mf_users_bs9to0k1_.dbf

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# online the datafiles restored or switched
sql clone "alter database datafile  4 online";
# recover and open resetlogs
recover clone database tablespace  "USERS", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile  4 online

Starting recover at 2015-07-02 09:39:47
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 13 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_13_bs836h1p_.arc
archived log for thread 1 with sequence 14 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_14_bs836lv2_.arc
archived log for thread 1 with sequence 15 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_15_bs9mog63_.arc
archived log for thread 1 with sequence 16 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_16_bs9mpsqo_.arc
archived log for thread 1 with sequence 17 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_17_bs9n281y_.arc
archived log for thread 1 with sequence 18 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_18_bs9n360t_.arc
archived log for thread 1 with sequence 19 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_19_bs9n3p5r_.arc
archived log for thread 1 with sequence 20 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_20_bs9n46od_.arc
archived log for thread 1 with sequence 21 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_21_bs9n4l4j_.arc
archived log for thread 1 with sequence 22 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_22_bs9n512c_.arc
archived log for thread 1 with sequence 23 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_23_bs9p5m15_.arc
archived log for thread 1 with sequence 24 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_24_bs9p6qn7_.arc
archived log for thread 1 with sequence 25 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_25_bs9plfkc_.arc
archived log for thread 1 with sequence 26 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_26_bs9pls8h_.arc
archived log for thread 1 with sequence 27 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_27_bs9pm0db_.arc
archived log for thread 1 with sequence 28 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_28_bs9pm70g_.arc
archived log for thread 1 with sequence 29 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_29_bs9pmk0c_.arc
archived log for thread 1 with sequence 30 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_30_bs9pmrrj_.arc
archived log for thread 1 with sequence 31 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_31_bs9sq00g_.arc
archived log for thread 1 with sequence 32 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_32_bs9sqzgd_.arc
archived log for thread 1 with sequence 33 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_33_bs9t4fq8_.arc
archived log for thread 1 with sequence 34 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_34_bs9t4vyr_.arc
archived log for thread 1 with sequence 35 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_35_bs9t593c_.arc
archived log for thread 1 with sequence 36 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_36_bs9t5htq_.arc
archived log for thread 1 with sequence 37 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_37_bs9t5q3h_.arc
archived log for thread 1 with sequence 38 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_38_bs9t5yqj_.arc
archived log for thread 1 with sequence 39 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_39_bs9tgttq_.arc
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_13_bs836h1p_.arc thread=1 sequence=13
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_14_bs836lv2_.arc thread=1 sequence=14
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_15_bs9mog63_.arc thread=1 sequence=15
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_16_bs9mpsqo_.arc thread=1 sequence=16
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_17_bs9n281y_.arc thread=1 sequence=17
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_18_bs9n360t_.arc thread=1 sequence=18
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_19_bs9n3p5r_.arc thread=1 sequence=19
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_20_bs9n46od_.arc thread=1 sequence=20
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_21_bs9n4l4j_.arc thread=1 sequence=21
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_22_bs9n512c_.arc thread=1 sequence=22
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_23_bs9p5m15_.arc thread=1 sequence=23
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_24_bs9p6qn7_.arc thread=1 sequence=24
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_25_bs9plfkc_.arc thread=1 sequence=25
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_26_bs9pls8h_.arc thread=1 sequence=26
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_27_bs9pm0db_.arc thread=1 sequence=27
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_28_bs9pm70g_.arc thread=1 sequence=28
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_29_bs9pmk0c_.arc thread=1 sequence=29
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_30_bs9pmrrj_.arc thread=1 sequence=30
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_31_bs9sq00g_.arc thread=1 sequence=31
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_32_bs9sqzgd_.arc thread=1 sequence=32
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_33_bs9t4fq8_.arc thread=1 sequence=33
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_34_bs9t4vyr_.arc thread=1 sequence=34
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_35_bs9t593c_.arc thread=1 sequence=35
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_36_bs9t5htq_.arc thread=1 sequence=36
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_37_bs9t5q3h_.arc thread=1 sequence=37
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_38_bs9t5yqj_.arc thread=1 sequence=38
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_39_bs9tgttq_.arc thread=1 sequence=39
media recovery complete, elapsed time: 00:01:15
Finished recover at 2015-07-02 09:41:03

database opened

contents of Memory Script:
{
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/tmp/auxi''";
# create directory for datapump export
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/tmp/auxi''";
}
executing Memory Script

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/tmp/auxi''

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/tmp/auxi''

Performing export of tables...
   EXPDP> Starting "SYS"."TSPITR_EXP_tDtf_lwFD":
   EXPDP> Estimate in progress using BLOCKS method...
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   EXPDP> Total estimation using BLOCKS method: 600 MB
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   EXPDP> . . exported "ADAM"."SALES"                              510.9 MB 10000000 rows
   EXPDP> Master table "SYS"."TSPITR_EXP_tDtf_lwFD" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TSPITR_EXP_tDtf_lwFD is:
   EXPDP>   /tmp/auxi/tspitr_tDtf_59906.dmp
   EXPDP> Job "SYS"."TSPITR_EXP_tDtf_lwFD" successfully completed at Thu Jul 2 09:42:53 2015 elapsed 0 00:01:06
Export completed


contents of Memory Script:
{
# shutdown clone before import
shutdown clone abort
}
executing Memory Script

Oracle instance shut down

Performing import of tables...
   IMPDP> Master table "SYS"."TSPITR_IMP_tDtf_uink" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_tDtf_uink":
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   IMPDP> . . imported "ADAM"."SALES"                              510.9 MB 10000000 rows
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   IMPDP> Job "SYS"."TSPITR_IMP_tDtf_uink" successfully completed at Thu Jul 2 09:54:13 2015 elapsed 0 00:11:12
Import completed


Removing automatic instance
Automatic instance removed
auxiliary instance file /tmp/auxi/PRIMA/datafile/o1_mf_temp_bs9tm7pz_.tmp deleted
auxiliary instance file /tmp/auxi/TDTF_PITR_PRIMA/onlinelog/o1_mf_2_bs9trods_.log deleted
auxiliary instance file /tmp/auxi/TDTF_PITR_PRIMA/onlinelog/o1_mf_1_bs9trjw6_.log deleted
auxiliary instance file /tmp/auxi/TDTF_PITR_PRIMA/datafile/o1_mf_users_bs9to0k1_.dbf deleted
auxiliary instance file /tmp/auxi/PRIMA/datafile/o1_mf_sysaux_bs9tj1jd_.dbf deleted
auxiliary instance file /tmp/auxi/PRIMA/datafile/o1_mf_undotbs1_bs9tj1hw_.dbf deleted
auxiliary instance file /tmp/auxi/PRIMA/datafile/o1_mf_system_bs9tj1fk_.dbf deleted
auxiliary instance file /tmp/auxi/PRIMA/controlfile/o1_mf_bs9thps1_.ctl deleted
auxiliary instance file tspitr_tDtf_59906.dmp deleted
Finished recover at 2015-07-02 09:54:16

See how much work was done by RMAN here? But now, life is good again:

SQL> select count(*) from adam.sales;

  COUNT(*)
----------
  10000000

You say that you could have done that yourself even before 12c? Yes, you’re right: It’s not magic, it’s just more comfortable now ;-)
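As an aside, if you want only the Data Pump dump file and no automatic import back into the source database, the NOTABLEIMPORT clause mentioned above can be combined with an explicit dump file location. A sketch (not run here; the dump file name is made up):

RMAN> recover table adam.sales until time '2015-07-02 09:33:00'
      auxiliary destination '/tmp/auxi'
      datapump destination '/tmp/auxi'
      dump file 'sales_recovery.dmp'
      notableimport;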


Tagged: Backup & Recovery, PracticalGuide, RMAN
Categories: DBA Blogs

Apache Phoenix, SQL is getting closer to Big Data

Kubilay Çilkara - Tue, 2015-06-30 15:50


Here is a post about another project in the Big Data world, like Apache Hive from my previous post, enables you to do SQL on Big Data. It is called Apache Phoenix.

Phoenix is a bit different, and a bit closer to my heart too: as I read the documentation on Apache Phoenix, the words 'algebra' and 'relational algebra' came up a few times, and that means only one thing, SQL! The use of the word algebra in the docs gave me a lot of confidence. SQL has closure and is based on a database model which has its roots in logic and maths, especially a subset of algebra, set theory.

Apache Phoenix was developed at Salesforce and is now one of the popular Apache projects. Apache Phoenix is a SQL skin on top of HBase, the columnar (NoSQL) database of the Hadoop ecosystem, which is capable of storing very large tables and querying them via 'scans'. HBase is part of the Hadoop ecosystem and the file system it uses is usually HDFS. On the client side, Apache Phoenix uses a JDBC driver.

In the race to bring the easiest-to-use tools to Big Data, I think Apache Phoenix is very close. It is the SQL we have known and used since the 1970s. The Apache Phoenix team seems committed and willing to introduce all of the missing parts of SQL, including transaction processing with different isolation levels, making Phoenix a fully operational relational database layer on HBase. Have a look at their roadmap. The amount of current and planned SQL compatibility is remarkable, and it makes me take them really seriously:
  • Transactions
  • Cost-based Query Optimization! (Wow)
  • Joins
  • OLAP
  • Subqueries
  • Striving for full SQL-92 compliance
In addition to all this, it is also possible to turn an existing HBase table into an Apache Phoenix table using CREATE TABLE, or even CREATE VIEW, the DDL statements that we know. How handy is that? Suddenly you can SQL-enable your existing HBase database!
How to install and use Phoenix

The SQL skin can be installed into an existing Hadoop HBase installation very quickly. All you need to do is download and extract the tarball. You can set up a standalone Hadoop environment (look at my previous blog post for that), and then install HBase and Apache Phoenix.
Once the Apache Phoenix software is installed, you can start it and query it with SQL like this.

From within the bin/ directory of the Phoenix install directory, run:

$ ./sqlline.py  localhost

That will bring you to the Phoenix prompt:


0: jdbc:phoenix:localhost> select * from mytable;
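If mytable does not exist yet, you would first create and populate it with Phoenix DDL and DML. A minimal sketch with a hypothetical table (note that Phoenix uses UPSERT rather than INSERT):

0: jdbc:phoenix:localhost> create table mytable (id BIGINT PRIMARY KEY, name VARCHAR);
0: jdbc:phoenix:localhost> upsert into mytable values (1, 'first row');
0: jdbc:phoenix:localhost> select * from mytable;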

Categories: DBA Blogs

Prepare for the Leap Second

Pythian Group - Mon, 2015-06-29 10:37

Catch up on how to handle the Leap Second and whether you’re ready for it with our previous updates on the impacts it will have on Cassandra and Linux.

 

Background Information

A leap second will be inserted at the end of June 30, 2015 at 23:59:60 UTC. 

There is a small time difference between the atomic clock and astronomical time (which is based on the rotation of the Earth), because the Earth's rotation is slowing down.

To synchronize these times, one second will be added to the atomic clock – a leap second – so that both clocks stay in sync. This will happen at midnight UTC between June 30th and July 1st (not in local time; UTC is the same as the GMT time zone). After 23 hours 59 minutes 59 seconds, the time will become 23 hours 59 minutes 60 seconds.

Since this system of correction was implemented in 1972, 26 such leap seconds have been inserted. The most recent one happened on June 30, 2012 at 23:59:60 UTC.

Unlike daylight saving time, which shifts the timezone information and does not alter the underlying UTC time clock on which servers work, a leap-second change is an actual change in the UTC time value. Usually, UTC time is continuous and predictable, but the leap second breaks this normal continuity and has to be addressed.

 

What You Need to Know – Summary

The June 2015 leap second event is the addition of one second to the atomic clock on June 30, 2015. Pythian has researched the implications that the upcoming leap second insertion may have and presents the relevant information to its clients and the wider community.

At the operating system level:

  • Windows and AIX servers are not affected by this issue.
  • Linux servers using NTP (network time protocol) may be affected, potentially causing error messages, server hangs or 100% CPU utilization. There are a series of patches and workarounds available, depending upon the needs of the components running on the Linux server; a quick way to check NTP leap status is sketched after this list.
  • HP-UX servers have NTP patches released in Q2 2015.
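For the Linux/NTP case, you can ask ntpd directly whether a leap second has been announced by its time sources. A minimal sketch, assuming a standard ntpd/ntpq setup:

$ ntpq -p                  # confirm the server is actually peering with NTP sources
$ ntpq -c rv | grep leap   # leap=01 means a leap second insertion is pending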

For databases and other software components:

  • Java programs are at risk of generating endless error loops, spiking CPU utilization. Patches are available.
  • Databases generally obtain time-stamps from the server OS, so those running on Linux have potential issues. For most, there are no additional corrections necessary.
  • Oracle databases have minimal additional risk. Oracle clustered environments and java-based administration tools should be reviewed and corrective actions taken.
  • Microsoft SQL Server databases have no risk but may expose minor application issues on data granularity and error handling.
  • Open source databases should be reviewed for Java risks. Updated kernels are available.
  • Cisco UCS environments should be reviewed. Patches are available.

Symptoms may persist for up to a day before and after the leap second event itself, as server NTP service updates are provided.

For all environments, a complete assessment and planning for your systems should be performed. The Pythian team would be pleased to help you perform this assessment and complete the planning necessary to ensure your systems can handle the leap second event in 2015. Get started by reviewing the full Leap Second Report.

 

Categories: DBA Blogs

Multisection Backup for Image Copies

The Oracle Instructor - Mon, 2015-06-29 10:30

A nice Oracle Database 12c new feature enhances the multisection backup, introduced in 11g: you can now use it for image copies as well!

[Figure: Multisection Backup with Image Copy]

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name PRIMA

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    347      SYSTEM               YES     /u01/app/oracle/oradata/prima/system01.dbf
2    235      SYSAUX               NO      /u01/app/oracle/oradata/prima/sysaux01.dbf
3    241      UNDOTBS1             YES     /u01/app/oracle/oradata/prima/undotbs01.dbf
4    602      USERS                NO      /u01/app/oracle/oradata/prima/users01.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    40       TEMP                 32767       /u01/app/oracle/oradata/prima/temp01.dbt

RMAN> configure device type disk parallelism 2;

new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters are successfully stored

RMAN> backup section size 301m as copy datafile 4;

Starting backup at 29-JUN-15
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/u01/app/oracle/oradata/prima/users01.dbf
backing up blocks 1 through 38528
channel ORA_DISK_2: starting datafile copy
input datafile file number=00004 name=/u01/app/oracle/oradata/prima/users01.dbf
backing up blocks 38529 through 77056
output file name=/u02/fra/PRIMA/datafile/o1_mf_users_bs2v934z_.dbf tag=TAG20150629T180658
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
output file name=/u02/fra/PRIMA/datafile/o1_mf_users_bs2v934z_.dbf tag=TAG20150629T180658
channel ORA_DISK_2: datafile copy complete, elapsed time: 00:00:15
Finished backup at 29-JUN-15

RMAN> host 'ls -rtl /u02/fra/PRIMA/datafile/';

total 616468
-rw-r-----. 1 oracle oinstall 631250944 Jun 29 18:07 o1_mf_users_bs2v934z_.dbf
host command complete

Backupsets are used by default now, notably also for the DUPLICATE DATABASE command, which in the end of course produces image copies.
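The SECTION SIZE clause is not limited to a single datafile; if you want multisection image copies of every datafile in one go, the same clause works at the database level. A sketch with the same section size:

RMAN> backup as copy section size 301m database;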


Tagged: 12c New Features, Backup & Recovery, PracticalGuide, RMAN
Categories: DBA Blogs

Quiz Time. Why Do Deletes Cause An Index To Grow ? (Solution)

Richard Foote - Mon, 2015-06-29 00:27
OK, time to reveal how a couple of simple deletes can cause an index to double in size. If we go back and look at the tree dump before the delete operation: —– begin tree dump branch: 0x180050b 25167115 (0: nrow: 19, level: 1) leaf: 0x180050c 25167116 (-1: row:540.540 avs:4) leaf: 0x180050d 25167117 (0: row:533.533 […]
Categories: DBA Blogs

Log Buffer #429: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-06-26 06:47

This Log Buffer Edition gathers a wide sample of blogs and then purifies the best ones from Oracle, SQL Server and MySQL.

Oracle:

  • If you take a look at the “alter user” command in the old 9i documentation, you’ll see this: DEFAULT ROLE Clause.
  • There’s been an interesting recent discussion on the OTN Database forum regarding “Index blank blocks after a large update that was rolled back.”
  • 12c Parallel Execution New Features: 1 SLAVE distribution
  • Index Tree Dumps in Oracle 12c Database (New Age)
  • Is it possible to cause tables to be stale with only tiny amounts of change?

SQL Server:

  • Making Data Analytics Simpler: SQL Server and R
  • Challenges with integrating MySQL data in an ETL regime and the amazing FMTONLY trick!
  • Azure Stream Analytics aims to extract knowledge structures from continuous ordered streams of data by real-time analysis.
  • Grant User Access to All SQL Server Databases
  • SQL SERVER – How Do We Find Deadlocks?

MySQL:

  • Efficient Use of Indexes in MySQL
  • SHOW ENGINE INNODB MUTEX is back!
  • Business-critical MySQL with DR in vCloud Air
  • Become a MySQL DBA blog series – Common operations – Schema Changes.
  • Building a Better CREATE USER Command

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

Categories: DBA Blogs

Hive (HiveQL) SQL for Hadoop Big Data

Kubilay Çilkara - Thu, 2015-06-25 13:30


In this post I will share my experience with an Apache Hadoop component called Hive, which enables you to do SQL on an Apache Hadoop Big Data cluster.

Being a great fan of SQL and relational databases, this was my opportunity to set up a mechanism where I could transfer some (a lot of) data from a relational database into Hadoop and query it with SQL. Not a very difficult thing to do these days; actually, it is very easy with Apache Hive!

Having access to a Hadoop cluster which has the Hive module installed is all you need. You can provision a Hadoop cluster yourself by downloading and installing it in pseudo-distributed mode on your own PC, or you can run one in the cloud with Amazon AWS EMR in a pay-as-you-go fashion.

There are many ways of doing this; just Google it and you will be surprised how easy it is, easier than it sounds. Below are links for installing it on your own PC (Linux): just download and install Apache Hadoop and Hive from Apache Hadoop Downloads.

You will need to download and install 3 things from the above link.

  • Hadoop (HDFS and Big Data Framework, the cluster)
  • Hive (data warehouse module)
  • Sqoop (data importer)
You will also need to put the connector JAR of the database (Oracle, MySQL...) you want to extract data from into the */lib folder of your Sqoop installation. For example, the MySQL JDBC connector can be downloaded from here. Don't expect loads of tinkering when installing Apache Hadoop, Hive or Sqoop: it is just unzipping binary extracts and making a few line changes in some config files, that's all. It is not a big deal, and it is free. There are tons of tutorials on the internet about this; here is one I used from another blogger, bogotobogo.


What is Hive?

Hive is Big Data SQL, the data warehouse in Hadoop. You can create tables, indexes, partitioned tables, external tables and views, just like in a relational database data warehouse. You can run SQL to do joins and to query the Hive tables in parallel using the MapReduce framework. It is actually quite fun to see your SQL queries translated into MapReduce jobs and run in parallel, like the parallel SQL queries we run on Oracle EE data warehouses and other databases. :0) The syntax looks very much like MySQL's SQL syntax.
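For instance, a join with an aggregate reads exactly as it would in a relational database. A sketch with two hypothetical Hive tables, orders and customers:

hive> SELECT c.customer_name, COUNT(*) AS order_count
    > FROM orders o JOIN customers c ON (o.customer_id = c.customer_id)
    > GROUP BY c.customer_name;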

Hive is NOT an OLTP transactional database: it does not have INSERT, UPDATE and DELETE transactions like an OLTP system, and it doesn't conform to ANSI SQL or the ACID properties of transactions.


Direct insert into Hive with Apache Sqoop:
After you have installed Hadoop, have Hive set up and are able to log in to it, you can use Sqoop - the data importer of Hadoop - with a command like the following to import a table from MySQL directly into Hive via JDBC, using MapReduce.
$ sqoop import --connect jdbc:mysql://mydatabasename --username kubilay -P --table mytablename --hive-import --hive-drop-import-delims --hive-database dbadb --num-mappers 16 --split-by id
Sqoop import options explained:
  • -P prompts for the password
  • --hive-import makes Sqoop import the data straight into a Hive table, which it creates for you
  • --hive-drop-import-delims drops \n, \r and \01 from string fields when importing into Hive
  • --hive-database tells it which database in Hive to import into; otherwise it goes to the default database
  • --num-mappers is the number of parallel map tasks to run, like parallel processes / threads in SQL
  • --split-by is the column of the table used to split the work units, like a partitioning key in database partitioning
The above command will import whichever MySQL table you give in place of mytablename into Hive, using MapReduce, from the MySQL database you specify.

Once you have imported the table, you can log in to Hive and run SQL against it like in any relational database. On a properly configured system, you can log in to Hive just by calling hive from the command line like this:

$ hive
hive> 
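Once at the hive> prompt, a quick sanity check on the import might look like this (a sketch reusing the dbadb and mytablename placeholders from the Sqoop command above; the COUNT(*) itself runs as a MapReduce job):

hive> USE dbadb;
hive> SELECT COUNT(*) FROM mytablename;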


More commands for managing jobs:

A couple of other commands I found useful when I was experimenting with this:

List running Hadoop jobs

hadoop job -list

Kill running Hadoop jobs

hadoop job -kill job_1234567891011_1234

List a particular table's directory in HDFS (assuming the default Hive warehouse location; a table in the dbadb database lands under a dbadb.db subdirectory)

hadoop fs -ls /user/hive/warehouse/dbadb.db/mytablename


Categories: DBA Blogs

Quiz Time. Why Do Deletes Cause An Index To Grow ? (Up The Hill Backwards)

Richard Foote - Thu, 2015-06-25 01:02
OK, time for a little quiz. One of the things I’ve seen at a number of sites is the almost fanatical drive to make indexes as small as possible because indexes that are larger than necessary both waste storage and hurt performance. Or so the theory goes …   :) In many cases, this drives DBAs to […]
Categories: DBA Blogs

Empty Leaf Blocks After Rollback Part II (Editions of You)

Richard Foote - Wed, 2015-06-24 01:35
In my last post, I discussed how both 1/2 empty and totally empty leaf blocks can be generated by rolling back a bulk update operation. An important point I made within the comments of the previous post is that almost the exact scenario would have taken place had the transaction committed rather than rolled back. A […]
Categories: DBA Blogs

RMAN - 3 : The DB_UNIQUE_NAME in Backups to the FRA

Hemant K Chitale - Tue, 2015-06-23 03:14
When you run RMAN Backups to the FRA without using the FORMAT clause, Oracle automatically generates filenames for the BackupPieces.  The folder name is derived from the system date.  But what is the parent folder for backups ?  Is it simply the DB_RECOVERY_FILE_DEST ?  Actually, the DB_UNIQUE_NAME comes into play as well.

For example :

[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Tue Jun 23 16:57:19 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN> list backup of datafile 1;

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
1 Full 831.23M DISK 00:03:32 07-JUN-15
BP Key: 1 Status: AVAILABLE Compressed: YES Tag: TAG20150607T165914
Piece Name: /NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_nnndf_TAG20150607T165914_bq81z2y6_.bkp
List of Datafiles in backup set 1
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14068320 07-JUN-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
3 Full 366.89M DISK 00:01:56 07-JUN-15
BP Key: 3 Status: AVAILABLE Compressed: YES Tag: TAG20150607T170754
Piece Name: /NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_nnndf_TAG20150607T170754_bq82hc5f_.bkp
List of Datafiles in backup set 3
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14068721 07-JUN-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

RMAN>

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Tue Jun 23 16:58:34 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show parameter db_recovery_file_dest

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string /NEW_FS/oracle/FRA
db_recovery_file_dest_size big integer 8G
SQL>

We can see that the DB_RECOVERY_FILE_DEST is defined as "/NEW_FS/oracle/FRA". However, the backups go into a "backupset" folder under "/NEW_FS/oracle/FRA/ORCL/". The "ORCL" is part of the path to the folder holding the backups. How is this "ORCL" derived ?

SQL> show parameter db_name

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_name string orcl
SQL> show parameter db_unique_name

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_unique_name string orcl
SQL>

By default, the DB_UNIQUE_NAME is the same as DB_NAME. Let's see what happens after I change the DB_UNIQUE_NAME.

SQL> 
SQL> !ls -l /NEW_FS/oracle/FRA/
total 4
drwxrwx--- 5 oracle oracle 4096 Jun 7 17:10 ORCL

SQL>
SQL> alter system set db_unique_name='HEMANTDB' scope=SPFILE;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 385878712 bytes
Database Buffers 62914560 bytes
Redo Buffers 6008832 bytes
Database mounted.
Database opened.
SQL> !ls -l /NEW_FS/oracle/FRA/
total 4
drwxrwx--- 5 oracle oracle 4096 Jun 7 17:10 ORCL

SQL>

After resetting the DB_UNIQUE_NAME, Oracle doesn't immediately create a folder for the new DB_UNIQUE_NAME. It creates it only when I next run an RMAN Backup. (The instance restart also disconnected my earlier RMAN session, as the messages below show, so I reconnect before running the backup.)

RMAN> exit

RMAN-06900: WARNING: unable to generate V$RMAN_STATUS or V$RMAN_OUTPUT row
RMAN-06901: WARNING: disabling update of the V$RMAN_STATUS and V$RMAN_OUTPUT rows
ORACLE error from target database:
ORA-03135: connection lost contact
Process ID: 3344
Session ID: 67 Serial number: 13


Recovery Manager complete.
[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Tue Jun 23 17:07:14 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN> backup datafile 1;

Starting backup at 23-JUN-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=27 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=38 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/home/oracle/app/oracle/oradata/orcl/system01.dbf
channel ORA_DISK_1: starting piece 1 at 23-JUN-15
channel ORA_DISK_1: finished piece 1 at 23-JUN-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_06_23/o1_mf_nnndf_TAG20150623T170721_brl8g9od_.bkp tag=TAG20150623T170721 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:25
Finished backup at 23-JUN-15

Starting Control File and SPFILE Autobackup at 23-JUN-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_06_23/o1_mf_s_883156126_brl8k0w4_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 23-JUN-15

RMAN>

SQL> !ls -l /NEW_FS/oracle/FRA/
total 8
drwxrwx--- 3 oracle oracle 4096 Jun 23 17:07 HEMANTDB
drwxrwx--- 5 oracle oracle 4096 Jun 7 17:10 ORCL

SQL>
SQL> show parameter db_name

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_name string orcl
SQL> show parameter db_unique_name

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_unique_name string HEMANTDB
SQL>

Notice how Oracle created the "HEMANTDB" folder under the designated DB_RECOVERY_FILE_DEST, and then created the "backupset" and "autobackup" folders as subfolders under it.  BackupSet BackupPieces and Controlfile Autobackups are now going to the new path.  The backups now go to folders under {DB_RECOVERY_FILE_DEST}/{DB_UNIQUE_NAME}
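If you want to confirm the two components of that path from SQL, a quick sketch:

SQL> select name from v$recovery_file_dest;
SQL> select value from v$parameter where name = 'db_unique_name';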
.
.
.



Categories: DBA Blogs

Empty Leaf Blocks After Rollback Part I (Empty Spaces)

Richard Foote - Tue, 2015-06-23 00:09
There’s been an interesting recent discussion on the OTN Database forum regarding “Index blank blocks after large update that was rolled back“. Setting aside the odd scenario of updating a column that previously had 20 million distinct values to the same value on a 2 billion row table, the key questions raised are why the blank index leaf blocks […]
Categories: DBA Blogs

Log Buffer #428: A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2015-06-22 11:45

This Log Buffer Edition is once again sparkling with some gems, hand-picked from Oracle, SQL Server and MySQL.

Oracle:

  • Oracle GoldenGate 12.1.2.1.1 is now certified with Unity 14.10. With this certification, customers can use Oracle GoldenGate to deliver data to Teradata Unity, which can then automate the distribution of data to multiple Teradata databases.
  • How do I change DNS servers on Exadata storage servers?
  • Flushing Shared Pool Does Not Slow Its Growth.
  • Code completion is the key feature you need when adding support for your own JavaScript framework to NetBeans IDE.
  • Replicating Hive Data Into Oracle BI Cloud Service for Visual Analyzer using BICS Data Sync.

SQL Server:

  • Trigger an Email of an SSRS Report from an SSIS Package.
  • Script All Server Level Objects to Recreate SQL Server.
  • A Syntax Mystery in a Previously Working Procedure.
  • Using R to Explore Data by Analysis – for SQL Professionals.
  • Converting Rows to Columns (PIVOT) and Columns to Rows (UNPIVOT) in SQL Server.

MySQL:

  • Some applications, particularly those written with a single-node database server in mind, attempt to immediately read a value they have just inserted into the database, without making those operations part of a single transaction. A read/write splitting proxy or a connection pool combined with a load-balancer can direct each operation to a different database node.
  • Q&A: High availability when using MySQL in the cloud.
  • MariaDB 10.0.20 now available.
  • Removal and Deprecation in MySQL 5.7.
  • Getting EXPLAIN information from already running queries in MySQL 5.7.

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

Categories: DBA Blogs

Index Tree Dumps in Oracle 12c Database (New Age)

Richard Foote - Sun, 2015-06-21 23:56
I’ve previously discussed Index Tree Dumps but I’ve recently found a nice little improvement that’s been introduced in Oracle Database 12c. Let’s begin by creating a little table and index: To generate an Index Tree Dump, we first need the OBJECT_ID of the index: And then use it to generate the Index Tree Dump: Previously, an […]
Categories: DBA Blogs

Flushing Shared Pool Does Not Slow Its Growth

Bobby Durrett's DBA Blog - Thu, 2015-06-18 17:14

I’m still working on resolving the issues caused by bug 13914613.

Oracle support recommended that we apply a parameter change to resolve the issue, but that change requires us to bounce the database, and I was looking for a resolution that does not need a bounce. The bug caused very bad shared pool latch waits when the automatic memory management feature of our 11.2.0.3 database expanded the shared pool. Oracle support recommended setting _enable_shared_pool_durations=false, and I verified that changing this parameter requires a bounce. It is a big hassle to bounce this database because of the application, so I thought that I might try flushing the shared pool on a regular basis so that automatic memory management would not need to keep increasing the size of the shared pool. The shared pool was growing because we have a lot of SQL statements without bind variables. So, I did a test, and in my test flushing the shared pool did not slow the growth of the shared pool.
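For reference, the flush itself is just the standard command (shown here as a one-off; in the second test below it was issued every minute):

SQL> alter system flush shared_pool;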

Here is a zip of the scripts I used for this test and their outputs: zip

I set the shared pool to a small value so it was more likely to grow, and I created a script to run many different SQL statements that don't use bind variables:

spool runselects.sql

select 'select * from dual where dummy=''s'
||to_char(sysdate,'HHMISS')||rownum||''';'
from dba_objects;

spool off

@runselects

So, the queries looked like this:

select * from dual where dummy='s0818111';
select * from dual where dummy='s0818112';
select * from dual where dummy='s0818113';
select * from dual where dummy='s0818114';
select * from dual where dummy='s0818115';
select * from dual where dummy='s0818116';
select * from dual where dummy='s0818117';

I ran these for an hour and tested three different configurations.  The first two did not use the _enable_shared_pool_durations=false setting and the last did.  The first test was a baseline that showed the growth of the shared pool without flushing it.  The second test included a flush of the shared pool every minute.  The last run included the parameter change and no flush of the shared pool.  I queried V$SGA_RESIZE_OPS after each test to see how many times the shared pool grew.  Here is the query:

SELECT OPER_TYPE,FINAL_SIZE Final,
to_char(start_time,'dd-mon hh24:mi:ss') Started, 
to_char(end_time,'dd-mon hh24:mi:ss') Ended 
FROM V$SGA_RESIZE_OPS
where component='shared pool'
order by start_time,end_time;

Here are the results.

Baseline – no flush, no parameter change:

OPER_TYPE       FINAL STARTED         ENDED
--------- ----------- --------------- ---------------
GROW      150,994,944 18-jun 05:03:54 18-jun 05:03:54
GROW      134,217,728 18-jun 05:03:54 18-jun 05:03:54
STATIC    117,440,512 18-jun 05:03:54 18-jun 05:03:54
GROW      167,772,160 18-jun 05:04:36 18-jun 05:04:36
GROW      184,549,376 18-jun 05:47:38 18-jun 05:47:38

Flush every minute, no parameter change:

OPER_TYPE       FINAL STARTED         ENDED
--------- ----------- --------------- ---------------
GROW      134,217,728 18-jun 06:09:15 18-jun 06:09:15
GROW      150,994,944 18-jun 06:09:15 18-jun 06:09:15
STATIC    117,440,512 18-jun 06:09:15 18-jun 06:09:15
GROW      167,772,160 18-jun 06:09:59 18-jun 06:09:59
GROW      184,549,376 18-jun 06:22:26 18-jun 06:22:26
GROW      201,326,592 18-jun 06:42:29 18-jun 06:42:29
GROW      218,103,808 18-jun 06:47:29 18-jun 06:47:29

Parameter change, no flush:

OPER_TYPE        FINAL STARTED         ENDED
--------- ------------ --------------- ---------------
STATIC     117,440,512 18-jun 07:16:09 18-jun 07:16:09
GROW       134,217,728 18-jun 07:16:18 18-jun 07:16:18

So, at least in this test – which I have run only twice – flushing the shared pool, if anything, makes the growth of the shared pool worse.  But changing the parameter seems to lock the shared pool size in.
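For completeness, here is a sketch of that parameter change. As an underscore parameter it must be double-quoted, it only takes effect after a bounce, and it should only be set under Oracle support's guidance:

SQL> alter system set "_enable_shared_pool_durations"=false scope=spfile;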

– Bobby

Categories: DBA Blogs

Create #em12c users fast and easy!

DBASolved - Thu, 2015-06-18 11:57

Over the last few months, I’ve been working on a project where I’ve started to dive into EM CLI and the value that EM CLI brings to cutting down on tasks like creating Enterprise Manager users. Hence the reason for this post.

Note: If you haven’t looked into EM CLI yet, I encourage you to do so. A good starting point is here. Plus there is a whole book written on the topic by some friends and gurus of mine, here.

Creating users in Enterprise Manager 12c is pretty simple as it is. Simply go to Setup -> Security -> Administrators. When you get to this screen, click on either the Create or Create Like button.

After clicking Create or Create Like, Enterprise Manager takes you to a five (5) step wizard for creating a user. This wizard allows you to provide details about the user, assign roles, assign target privileges, assign resource privileges and then review what you have done.

Depending on how many users you have to create, this wizard is either a great way of creating users or a slow one. Using EM CLI, users can be created from the command line very quickly and easily, with no need to use the GUI wizard. :)

The syntax to create a user from the command line is as follows:

emcli create_user
-name="name"
-password="password"
[-type="user_type"]
[-roles="role1;role2;..."]
[-email="email1;email2;..."]
[-privilege="name[;secure-resource-details]]"
[-separator=privilege="sep_string"]
[-subseparator=privilege="subsep_string"]
[-profile="profile_name"]
[-desc="user_description"]
[-expired="true|false"]
[-prevent_change_password="true|false"]
[-department="department_name"]
[-cost_center="cost_center"]
[-line_of_business="line_of_business"]
[-contact="contact"]
[-location="location"]
[-input_file="arg_name:file_path"]

The beautiful part of EM CLI is that it can be used with any scripting language. Since I like to use Perl, I decided to write a simple script that can be used to create a user from the command line using EM CLI.

#!/usr/bin/perl -w
use strict;
use warnings;

# Parameters
my $oem_home_bin = "$ENV{OMS_HOME}/bin";   # bin directory of the OMS home, from the environment
my ($username, $passwd, $email) = @ARGV;
my $pwdchange = 'false';

# Program
if (not defined $username or not defined $passwd or not defined $email)
{
    print "\nUsage: perl ./emcli_create_em_user.pl username password email_address\n\n";
    exit;
}

# Log in to EM CLI as sysman and synchronize with the OMS
system($oem_home_bin.'/emcli login -username=sysman');
system($oem_home_bin.'/emcli sync');

# Build and run the create_user command
my $cmd = 'emcli create_user -name='.$username.' -password='.$passwd.' -email='.$email.' -prevent_change_password='.$pwdchange;
#print $cmd."\n";
system($oem_home_bin.'/'.$cmd);
system($oem_home_bin.'/emcli logout');

Now using this bit of code, I’m able to create users very rapidly using EM CLI with a command like this:

perl ./emcli_create_em_user.pl <username> <password for user> <email address>

Well, I hope this helps others look at and start using EM CLI when managing their EM environments.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: EMCLI, OEM
Categories: DBA Blogs

Overall I/O Query

Bobby Durrett's DBA Blog - Tue, 2015-06-16 14:57

I hacked together a query today that shows the overall I/O performance that a database is experiencing.

The output looks like this:

End snapshot time   number of IOs ave IO time (ms) ave IO size (bytes)
------------------- ------------- ---------------- -------------------
2015-06-15 15:00:59        359254               20              711636
2015-06-15 16:00:59        805884               16              793033
2015-06-15 17:00:13        516576               13              472478
2015-06-15 18:00:27        471098                6              123565
2015-06-15 19:00:41        201820                9              294858
2015-06-15 20:00:55        117887                5              158778
2015-06-15 21:00:09         85629                1               79129
2015-06-15 22:00:23        226617                2               10744
2015-06-15 23:00:40        399745               10              185236
2015-06-16 00:00:54       1522650                0               43099
2015-06-16 01:00:08       2142484                0               19729
2015-06-16 02:00:21        931349                0                9270

I’ve combined reads and writes and focused on three metrics – number of IOs, average IO time in milliseconds, and average IO size in bytes.  I think it is a helpful way to compare how two systems perform.  Here is the output from another, better-performing system:

End snapshot time   number of IOs ave IO time (ms) ave IO size (bytes)
------------------- ------------- ---------------- -------------------
2015-06-15 15:00:25        331931                1              223025
2015-06-15 16:00:40        657571                2               36152
2015-06-15 17:00:56       1066818                1               24599
2015-06-15 18:00:11        107364                1              125390
2015-06-15 19:00:26         38565                1               11023
2015-06-15 20:00:41         42204                2              100026
2015-06-15 21:00:56         42084                1               64439
2015-06-15 22:00:15       3247633                3              334956
2015-06-15 23:00:32       3267219                0               49896
2015-06-16 00:00:50       4723396                0               32004
2015-06-16 01:00:06       2367526                1               18472
2015-06-16 02:00:21       1988211                0                8818

Here is the query. (READTIM and WRITETIM in DBA_HIST_FILESTATXS are in hundredths of a second, hence the factor of 10 to convert to milliseconds, and the 1+ in the denominators guards against dividing by zero in idle intervals.)

select 
to_char(sn.END_INTERVAL_TIME,'YYYY-MM-DD HH24:MI:SS') "End snapshot time",
sum(after.PHYRDS+after.PHYWRTS-before.PHYWRTS-before.PHYRDS) "number of IOs",
trunc(10*sum(after.READTIM+after.WRITETIM-before.WRITETIM-before.READTIM)/
sum(1+after.PHYRDS+after.PHYWRTS-before.PHYWRTS-before.PHYRDS)) "ave IO time (ms)",
trunc((select value from v$parameter where name='db_block_size')*
sum(after.PHYBLKRD+after.PHYBLKWRT-before.PHYBLKRD-before.PHYBLKWRT)/
sum(1+after.PHYRDS+after.PHYWRTS-before.PHYWRTS-before.PHYRDS)) "ave IO size (bytes)"
from DBA_HIST_FILESTATXS before, DBA_HIST_FILESTATXS after,DBA_HIST_SNAPSHOT sn
where 
after.file#=before.file# and
after.snap_id=before.snap_id+1 and
before.instance_number=after.instance_number and
after.snap_id=sn.snap_id and
after.instance_number=sn.instance_number
group by to_char(sn.END_INTERVAL_TIME,'YYYY-MM-DD HH24:MI:SS')
order by to_char(sn.END_INTERVAL_TIME,'YYYY-MM-DD HH24:MI:SS');

I hope this is helpful.

– Bobby

Categories: DBA Blogs