
DBA Blogs

Log Buffer #428: A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2015-06-22 11:45

This Log Buffer Edition once again is sparkling with gems hand-picked from Oracle, SQL Server and MySQL.

Oracle:

  • Oracle GoldenGate 12.1.2.1.1 is now certified with Unity 14.10. With this certification, customers can use Oracle GoldenGate to deliver data to Teradata Unity, which can then automate the distribution of data to multiple Teradata databases.
  • How do I change DNS servers on Exadata storage servers.
  • Flushing Shared Pool Does Not Slow Its Growth.
  • Code completion is the key feature you need when adding support for your own JavaScript framework to NetBeans IDE.
  • Replicating Hive Data Into Oracle BI Cloud Service for Visual Analyzer using BICS Data Sync.

SQL Server:

  • Trigger an Email of an SSRS Report from an SSIS Package.
  • Script All Server Level Objects to Recreate SQL Server.
  • A Syntax Mystery in a Previously Working Procedure.
  • Using R to Explore Data by Analysis – for SQL Professionals.
  • Converting Rows to Columns (PIVOT) and Columns to Rows (UNPIVOT) in SQL Server.

MySQL:

  • Some applications, particularly those written with a single-node database server in mind, attempt to immediately read a value they have just inserted into the database, without making those operations part of a single transaction. A read/write splitting proxy or a connection pool combined with a load-balancer can direct each operation to a different database node.
  • Q&A: High availability when using MySQL in the cloud.
  • MariaDB 10.0.20 now available.
  • Removal and Deprecation in MySQL 5.7.
  • Getting EXPLAIN information from already running queries in MySQL 5.7.

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

Categories: DBA Blogs

Index Tree Dumps in Oracle 12c Database (New Age)

Richard Foote - Sun, 2015-06-21 23:56
I’ve previously discussed Index Tree Dumps but I’ve recently found a nice little improvement that’s been introduced in Oracle Database 12c. Let’s begin by creating a little table and index: To generate an Index Tree Dump, we first need the OBJECT_ID of the index: And then use it to generate the Index Tree Dump: Previously, an […]
Categories: DBA Blogs

Flushing Shared Pool Does Not Slow Its Growth

Bobby Durrett's DBA Blog - Thu, 2015-06-18 17:14

I’m still working on resolving the issues caused by bug 13914613.

Oracle support recommended that we apply a parameter change to resolve the issue, but that change requires us to bounce the database, and I was looking for a resolution that does not need a bounce.  The bug caused very bad shared pool latch waits when the automatic memory management feature of our 11.2.0.3 database expanded the shared pool.  Oracle support recommended setting _enable_shared_pool_durations=false, and I verified that changing this parameter requires a bounce.  It is a big hassle to bounce this database because of the application, so I thought that I might try flushing the shared pool on a regular basis so that automatic memory management would not need to keep increasing the size of the shared pool.  The shared pool was growing in size because we have a lot of SQL statements without bind variables.  So, I did a test, and in my test flushing the shared pool did not slow the growth of the shared pool.

Here is a zip of the scripts I used for this test and their outputs: zip

I set the shared pool to a small value so it was more likely to grow, and I created a script to run many different SQL statements that don’t use bind variables:

spool runselects.sql

select 'select * from dual where dummy=''s'
||to_char(sysdate,'HHMISS')||rownum||''';'
from dba_objects;

spool off

@runselects

So, the queries looked like this:

select * from dual where dummy='s0818111';
select * from dual where dummy='s0818112';
select * from dual where dummy='s0818113';
select * from dual where dummy='s0818114';
select * from dual where dummy='s0818115';
select * from dual where dummy='s0818116';
select * from dual where dummy='s0818117';

I ran these for an hour and tested three different configurations.  The first two did not use the _enable_shared_pool_durations=false setting and the last one did.  The first test was a baseline that showed the growth of the shared pool without flushing it.  The second test included a flush of the shared pool every minute.  The last run included the parameter change and no flush of the shared pool.  I queried V$SGA_RESIZE_OPS after each test to see how many times the shared pool grew.  Here is the query:

SELECT OPER_TYPE,FINAL_SIZE Final,
to_char(start_time,'dd-mon hh24:mi:ss') Started, 
to_char(end_time,'dd-mon hh24:mi:ss') Ended 
FROM V$SGA_RESIZE_OPS
where component='shared pool'
order by start_time,end_time;
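
As a side note, the flush every minute in the second test can be driven by a simple timed loop.  Something like this shell loop would work (just a sketch, not necessarily the exact script in the zip above; it assumes OS authentication as SYSDBA):

# flush the shared pool once a minute; stop with Ctrl-C after the test window
while true
do
  echo "alter system flush shared_pool;" | sqlplus -s "/ as sysdba"
  sleep 60
done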

Here are the results.

Baseline – no flush, no parameter change:

OPER_TYPE       FINAL STARTED         ENDED
--------- ----------- --------------- ---------------
GROW      150,994,944 18-jun 05:03:54 18-jun 05:03:54
GROW      134,217,728 18-jun 05:03:54 18-jun 05:03:54
STATIC    117,440,512 18-jun 05:03:54 18-jun 05:03:54
GROW      167,772,160 18-jun 05:04:36 18-jun 05:04:36
GROW      184,549,376 18-jun 05:47:38 18-jun 05:47:38

Flush every minute, no parameter change:

OPER_TYPE       FINAL STARTED         ENDED
--------- ----------- --------------- ---------------
GROW      134,217,728 18-jun 06:09:15 18-jun 06:09:15
GROW      150,994,944 18-jun 06:09:15 18-jun 06:09:15
STATIC    117,440,512 18-jun 06:09:15 18-jun 06:09:15
GROW      167,772,160 18-jun 06:09:59 18-jun 06:09:59
GROW      184,549,376 18-jun 06:22:26 18-jun 06:22:26
GROW      201,326,592 18-jun 06:42:29 18-jun 06:42:29
GROW      218,103,808 18-jun 06:47:29 18-jun 06:47:29

Parameter change, no flush:

OPER_TYPE        FINAL STARTED         ENDED
--------- ------------ --------------- ---------------
STATIC     117,440,512 18-jun 07:16:09 18-jun 07:16:09
GROW       134,217,728 18-jun 07:16:18 18-jun 07:16:18

So, at least in this test – which I have run only twice – flushing the shared pool, if anything, made the growth of the shared pool worse.  But changing the parameter seems to lock the size in.

– Bobby

Categories: DBA Blogs

Create #em12c users fast and easy!

DBASolved - Thu, 2015-06-18 11:57

Over the last few months, I’ve been working on a project where I’ve started to dive into EM CLI and the value it brings to cutting down the effort of tasks like creating Enterprise Manager users. Hence this post.

Note: If you haven’t looked into EM CLI yet, I encourage you to do so. A good starting point is here. Plus, there is a whole book written on the topic by some friends and gurus of mine, here.

Creating users in Enterprise Manager 12c is pretty simple as it is. Simply go to Setup -> Security -> Administrators, and from that screen click either the Create or Create Like button.

After clicking Create or Create Like, Enterprise Manager takes you to a five (5) step wizard for creating a user. This wizard allows you to provide details about the user, assign roles, assign target privileges, assign resource privileges and then review what you have done.

Depending on how many users you have to create, this wizard is either a great way or a slow way of creating users. Using EM CLI, users can be created from the command line quickly and easily, with no need for the GUI wizard. :)

The syntax to create a user from the command line is as follows:

emcli create_user
-name="name"
-password="password"
[-type="user_type"]
[-roles="role1;role2;..."]
[-email="email1;email2;..."]
[-privilege="name[;secure-resource-details]]"
[-separator=privilege="sep_string"]
[-subseparator=privilege="subsep_string"]
[-profile="profile_name"]
[-desc="user_description"]
[-expired="true|false"]
[-prevent_change_password="true|false"]
[-department="department_name"]
[-cost_center="cost_center"]
[-line_of_business="line_of_business"]
[-contact="contact"]
[-location="location"]
[-input_file="arg_name:file_path"]

The beautiful part of EM CLI is that it can be used with any scripting language. Since I like to use Perl, I decided to write a simple script that can be used to create a user from the command line using EM CLI.

#!/usr/bin/perl -w
use strict;
use warnings;

# Parameters
my $oem_home_bin = "$ENV{OMS_HOME}/bin";   # assumes OMS_HOME is set in the environment
my ($username, $passwd, $email) = @ARGV;
my $pwdchange = 'false';

# Program
if (not defined $username or not defined $passwd or not defined $email)
{
    print "\nUsage: perl ./emcli_create_em_user.pl username password email_address\n\n";
    exit;
}

# log in to EM CLI as SYSMAN (prompts for the SYSMAN password) and sync
system($oem_home_bin.'/emcli login -username=sysman');
system($oem_home_bin.'/emcli sync');

# build and run the create_user command
my $cmd = 'emcli create_user -name='.$username.' -password='.$passwd.' -email='.$email.' -prevent_change_password='.$pwdchange;
#print $cmd."\n";
system($oem_home_bin.'/'.$cmd);
system($oem_home_bin.'/emcli logout');

Now using this bit of code, I’m able to create users very rapidly using EM CLI with a command like this:

perl ./emcli_create_em_user.pl <username> <password for user> <email address>

Well, I hope this helps others look at and start using EM CLI when managing their EM environments.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: EMCLI, OEM
Categories: DBA Blogs

Overall I/O Query

Bobby Durrett's DBA Blog - Tue, 2015-06-16 14:57

I hacked together a query today that shows the overall I/O performance that a database is experiencing.

The output looks like this:

End snapshot time   number of IOs ave IO time (ms) ave IO size (bytes)
------------------- ------------- ---------------- -------------------
2015-06-15 15:00:59        359254               20              711636
2015-06-15 16:00:59        805884               16              793033
2015-06-15 17:00:13        516576               13              472478
2015-06-15 18:00:27        471098                6              123565
2015-06-15 19:00:41        201820                9              294858
2015-06-15 20:00:55        117887                5              158778
2015-06-15 21:00:09         85629                1               79129
2015-06-15 22:00:23        226617                2               10744
2015-06-15 23:00:40        399745               10              185236
2015-06-16 00:00:54       1522650                0               43099
2015-06-16 01:00:08       2142484                0               19729
2015-06-16 02:00:21        931349                0                9270

I’ve combined reads and writes and focused on three metrics – number of IOs, average IO time in milliseconds, and average IO size in bytes.  I think it is a helpful way to compare the way two systems perform.  Here is another, better, system’s output:

End snapshot time   number of IOs ave IO time (ms) ave IO size (bytes)
------------------- ------------- ---------------- -------------------
2015-06-15 15:00:25        331931                1              223025
2015-06-15 16:00:40        657571                2               36152
2015-06-15 17:00:56       1066818                1               24599
2015-06-15 18:00:11        107364                1              125390
2015-06-15 19:00:26         38565                1               11023
2015-06-15 20:00:41         42204                2              100026
2015-06-15 21:00:56         42084                1               64439
2015-06-15 22:00:15       3247633                3              334956
2015-06-15 23:00:32       3267219                0               49896
2015-06-16 00:00:50       4723396                0               32004
2015-06-16 01:00:06       2367526                1               18472
2015-06-16 02:00:21       1988211                0                8818

Here is the query:

select 
to_char(sn.END_INTERVAL_TIME,'YYYY-MM-DD HH24:MI:SS') "End snapshot time",
sum(after.PHYRDS+after.PHYWRTS-before.PHYWRTS-before.PHYRDS) "number of IOs",
trunc(10*sum(after.READTIM+after.WRITETIM-before.WRITETIM-before.READTIM)/
sum(1+after.PHYRDS+after.PHYWRTS-before.PHYWRTS-before.PHYRDS)) "ave IO time (ms)",
trunc((select value from v$parameter where name='db_block_size')*
sum(after.PHYBLKRD+after.PHYBLKWRT-before.PHYBLKRD-before.PHYBLKWRT)/
sum(1+after.PHYRDS+after.PHYWRTS-before.PHYWRTS-before.PHYRDS)) "ave IO size (bytes)"
from DBA_HIST_FILESTATXS before, DBA_HIST_FILESTATXS after,DBA_HIST_SNAPSHOT sn
where 
after.file#=before.file# and
after.snap_id=before.snap_id+1 and
before.instance_number=after.instance_number and
after.snap_id=sn.snap_id and
after.instance_number=sn.instance_number
group by to_char(sn.END_INTERVAL_TIME,'YYYY-MM-DD HH24:MI:SS')
order by to_char(sn.END_INTERVAL_TIME,'YYYY-MM-DD HH24:MI:SS');
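
If you only want recent history, you could, for example, limit the report to the last day of snapshots by adding a predicate like this to the where clause:

and sn.END_INTERVAL_TIME > sysdate - 1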

I hope this is helpful.

– Bobby

Categories: DBA Blogs

Indexing and Transparent Data Encryption Part III (You Can’t Do That)

Richard Foote - Tue, 2015-06-16 00:28
In Part II of this series, we looked at how we can create a B-Tree index on an encrypted column, providing we do not apply salt during encryption. However, this is not the only restriction with regard to indexing an encrypted column using column-based encryption. If we attempt to create an index that is not a […]
Categories: DBA Blogs

CRS-4995: The command ‘Modify resource’ is invalid in crsctl. Use srvctl for this command.

Oracle in Action - Mon, 2015-06-15 09:40


Today, in my 12.1.0.2 cluster, I encountered the above error message when I was trying to modify the ACL of an ASM cluster file system created on volume VOL1 in the DATA diskgroup, as follows:

[root@host01 ~]# crsctl modify resource ora.data.vol1.acfs -attr "ACL='owner:root:rwx,pgrp:dba:rwx,other::r--'"

CRS-4995: The command 'Modify resource' is invalid in crsctl. Use srvctl for this command.

I resolved the above problem by using the unsupported flag as follows:

[root@host01 ~]# crsctl modify resource ora.data.vol1.acfs -attr "ACL='owner:root:rwx,pgrp:dba:rwx,other::r--'" -unsupported

 

Hope it helps!!

References:
Oracle Issue running 12.1.0.2 clusterware with 11.2.0.2 database



Categories: DBA Blogs

RMAN -- 2 : ArchiveLog Deletion Policy

Hemant K Chitale - Sat, 2015-06-13 08:54
Most Internet references about defining the ArchiveLog Deletion Policy relate to the necessity to preserve ArchiveLogs for Standby databases.

For example, the configuration here prevents deletion unless an ArchiveLog has been applied on a Standby :
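
(For reference, that policy would have been put in place with a command along these lines; the SHOW ALL output below then confirms it is the current setting.)

RMAN> configure archivelog deletion policy to applied on all standby;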

RMAN> show all;

RMAN configuration parameters for database with db_unique_name ORCL are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/home/oracle/app/oracle/product/11.2.0/dbhome_2/dbs/snapcf_orcl.f'; # default

RMAN>

But it is also possible to configure it differently. For example, for a database without a Standby, I can configure it to prevent deletion unless a Backup of the ArchiveLog has been made (to disk, in this case):

RMAN> configure archivelog deletion policy to backed up 1 times to device type disk;

old RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;
new RMAN configuration parameters are successfully stored

RMAN> show all;

RMAN configuration parameters for database with db_unique_name ORCL are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/home/oracle/app/oracle/product/11.2.0/dbhome_2/dbs/snapcf_orcl.f'; # default

RMAN>

Let's see how this plays.
RMAN> sql 'alter system archive log current ';

sql statement: alter system archive log current

RMAN> delete archivelog all;

released channel: ORA_DISK_1
released channel: ORA_DISK_2
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=35 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=52 device type=DISK
RMAN-08138: WARNING: archived log not deleted - must create more backups
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_623_bqrjp5gx_.arc thread=1 sequence=623
RMAN-08138: WARNING: archived log not deleted - must create more backups
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_624_bqrjpsb3_.arc thread=1 sequence=624
RMAN-08138: WARNING: archived log not deleted - must create more backups
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_625_bqrjq8kj_.arc thread=1 sequence=625
RMAN-08138: WARNING: archived log not deleted - must create more backups
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_626_bqrjqfdq_.arc thread=1 sequence=626

RMAN>

RMAN raised a WARNING that indicates that deletion of the ArchiveLog is not permitted until a Backup has been taken.  Thus, you can protect your ArchiveLogs from deletion by RMAN commands if they have not been backed up.
NOTE : This does NOT prevent non-RMAN commands (e.g. cron jobs with shell scripts) from deleting ArchiveLogs !
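
(As an aside, RMAN itself can still be told to ignore the policy: the FORCE option on the DELETE command should override it, so use it with care. For example:)

RMAN> delete force archivelog all;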

Let me back up and then delete the ArchiveLogs.

RMAN> backup as compressed backupset archivelog all;

Starting backup at 13-JUN-15
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=623 RECID=9 STAMP=882312517
channel ORA_DISK_1: starting piece 1 at 13-JUN-15
channel ORA_DISK_2: starting compressed archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=624 RECID=10 STAMP=882312537
input archived log thread=1 sequence=625 RECID=11 STAMP=882312552
input archived log thread=1 sequence=626 RECID=12 STAMP=882312557
channel ORA_DISK_2: starting piece 1 at 13-JUN-15
channel ORA_DISK_1: finished piece 1 at 13-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/backupset/2015_06_13/o1_mf_annnn_TAG20150613T225210_bqrjwtfd_.bkp tag=TAG20150613T225210 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=627 RECID=13 STAMP=882312730
channel ORA_DISK_1: starting piece 1 at 13-JUN-15
channel ORA_DISK_2: finished piece 1 at 13-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/backupset/2015_06_13/o1_mf_annnn_TAG20150613T225210_bqrjwtg3_.bkp tag=TAG20150613T225210 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: finished piece 1 at 13-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/backupset/2015_06_13/o1_mf_annnn_TAG20150613T225210_bqrjwvp1_.bkp tag=TAG20150613T225210 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 13-JUN-15

Starting Control File and SPFILE Autobackup at 13-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/autobackup/2015_06_13/o1_mf_s_882312732_bqrjwwsc_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 13-JUN-15

RMAN> delete archivelog all;

released channel: ORA_DISK_1
released channel: ORA_DISK_2
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=35 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=52 device type=DISK
List of Archived Log Copies for database with db_unique_name ORCL
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
9 1 623 A 07-JUN-15
Name: /NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_623_bqrjp5gx_.arc

10 1 624 A 13-JUN-15
Name: /NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_624_bqrjpsb3_.arc

11 1 625 A 13-JUN-15
Name: /NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_625_bqrjq8kj_.arc

12 1 626 A 13-JUN-15
Name: /NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_626_bqrjqfdq_.arc

13 1 627 A 13-JUN-15
Name: /NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_627_bqrjwt3k_.arc


Do you really want to delete the above objects (enter YES or NO)? YES
deleted archived log
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_623_bqrjp5gx_.arc RECID=9 STAMP=882312517
deleted archived log
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_624_bqrjpsb3_.arc RECID=10 STAMP=882312537
deleted archived log
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_625_bqrjq8kj_.arc RECID=11 STAMP=882312552
deleted archived log
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_626_bqrjqfdq_.arc RECID=12 STAMP=882312557
deleted archived log
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_627_bqrjwt3k_.arc RECID=13 STAMP=882312730
Deleted 5 objects


RMAN>

Now, I am able to delete the ArchiveLogs as I have at least 1 backup (on disk) of each.

.
.
.

Categories: DBA Blogs

Log Buffer #427: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-06-12 06:54

This Log Buffer Edition covers various blog posts from the last week regarding Oracle, SQL Server and MySQL.

Oracle:

  • Merging Overlapping Date Ranges with MATCH_RECOGNIZE
  • The latest version of Enterprise Manager, EM 12.1.0.5, has been announced!
  • Kdump is the Linux kernel crash-dump mechanism. In the event of a server crash, Kdump creates a memory image (vmcore) that can help in determining the cause of the crash.
  • APEX Connect Presentation and Download of the sample application
  • One of my favorite feature of ZFS is the I/O aggregation done in the final stage of issuing I/Os to devices.

SQL Server:

  • Reusing T-SQL Code is catching on.
  • SELECT INTO vs INSERT INTO on Columnstore
  • SQL Monitor Custom Metric: WriteLog wait time
  • Query Performance Tuning – A Methodical Approach
  • How to Get SQL Server Dates and Times Horribly Wrong

MySQL:

  • Hash-based workarounds for MySQL unique constraint limitations
  • Replicate MySQL to Amazon Redshift with Tungsten: The good, the bad & the ugly
  • Indexing MySQL JSON Data
  • Improving the Performance of MySQL on Windows
  • Auditing MySQL with McAfee and MongoDB

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

Categories: DBA Blogs

Partner Webcast – Oracle Big Data & Business Analytics: NEOS SNA Solution for Telcos

Behind the hype of big data there’s a simple story: for decades, companies have been making business decisions based on transactional data stored in relational databases. Beyond that critical data,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Pillars of PowerShell: Windows OS

Pythian Group - Wed, 2015-06-10 06:42
Introduction

This is the fifth blog post continuing the series on the Pillars of PowerShell. The previous posts in the series are:

  1. Interacting
  2. Commanding
  3. Debugging
  4. Profiling

The Windows Operating System (OS) is something a DBA should know and be familiar with since SQL Server has to run on top of it. I would say that on average most DBAs interact with the OS for troubleshooting purposes. In this post I just want to point out a few snippets of how PowerShell can help you do this type of work.

 Services Console Manager

In the SQL Server 2000 days DBAs became very familiar with typing in “services.msc” in the Run prompt, then scrolling through the list of services to find out what state a service is in, or what login it is configured to use. Now, if you are performing administrative tasks against SQL Server services it is always advised that you use SQL Server Configuration Manager. However, if you are looking to check the status of a service, or to restart just the service, PowerShell can help out.

Get-Service

This cmdlet has a few naming quirks that are worth understanding upfront when you start using PowerShell instead of the Services Console. In the Services Console you find a service by its “Name”; this is the “DisplayName” in the Get-Service cmdlet. The “Name” in Get-Service is actually the “Service Name” in the Services Console, do you follow? OK. So with SQL Server the DisplayName for a default instance would be “SQL Server (MSSQLSERVER)”, and the “Name” would be “mssqlserver”. This cmdlet allows you to filter by either field, so the two commands below will return the same thing:

Get-Service 'SQL Server (MSSQLSERVER)'
Get-Service mssqlserver

You can obviously see which one is easier to type right off. So with SQL Server you will likely know that a default instance’s name would be queried using “mssqlserver”, and a named instance would be “mssql$myinstance”. So if you wanted to find all of the instances running on a server you could use this one-liner:

Get-Service mssql*

Restart-Service

This does exactly what you think it will, so you have to be careful. You can call this cmdlet by itself and restart a service by referencing the “name”, just as you did with Get-Service. I want to show you how the pipeline can work for you in this situation. You will find some cmdlets in PowerShell that have a few “special” features, and the service cmdlets are in this category: they accept an array as input, either for the name property or via the pipeline.

So, let’s use the example that I have a server with multiple instances of SQL Server, and all the additional components like SSRS and SSIS. I only want to work with the named instance “SQL12”. I can get the status of all component services with this command:

Get-Service -Name 'MSSQL$SQL12','ReportServer$SQL12','SQLAgent$SQL12','MsDtsServer110'

Now if I need to do a controlled restart of all of those services I can just do this command:

Get-Service -Name 'MSSQL$SQL12','ReportServer$SQL12','SQLAgent$SQL12','MsDtsServer110' |
Restart-Service -Force -WhatIf

The added “-WhatIf” will not actually perform the operation, but will tell you what it would end up doing. Once I remove that, the restart will actually occur. All of this would look something like this in the console:

(screenshot: the Get-Service | Restart-Service -WhatIf output in the console)

Win32_Service

Some of you may recognize this one as a WMI class, and it is. Using WMI offers you a bit more information than the Get-Service cmdlet. You can see that by just running this code:

Get-Service mssqlserver
Get-WmiObject win32_service | where {$_.name -eq 'mssqlserver'}

The two commands above equate to the same referenced service but return slightly different bits of information by default:

(screenshot: default output of Get-Service vs. Get-WmiObject win32_service)

However, if you run the command below, you will see how gathering service info with WMI offers much more potential:

Get-WmiObject win32_service | where {$_.name -eq 'mssqlserver'} | select *

Get-Service will not actually give you the service account. So here is one function I use often (saved in my profile):

function Get-SQLServiceStatus ([string[]]$server)
{
    foreach ($s in $server) {
        Get-WmiObject win32_service -ComputerName $s |
            where {$_.DisplayName -match "SQL "} |
            select @{Label="ServerName";Expression={$s}},
                DisplayName, Name, State, Status, StartMode, StartName
    }
}

One specific thing I did in this function is declare the type of the parameter you pass into it. When you use “[string[]]”, the parameter accepts an array, or multiple objects. You can set your variable up to do this, but you also have to ensure the function is written in a manner that can process the array; I did this simply by wrapping the commands in a “foreach” loop. So an example use of this against a single server would be:
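
# 'MyServer' is just a placeholder server name
Get-SQLServiceStatus -server 'MyServer'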
(screenshot: Get-SQLServiceStatus output for a single server)
If you wanted to run this against multiple servers it would go something like this:

Get-SQLServiceStatus -server 'MyServer','MyServer2','MyServer3' | Out-GridView
#another option
$serverList = 'MyServer','MyServer2','MyServer3'
Get-SQLServiceStatus -server $serverList | Out-GridView

Disk Manager

Every DBA should be very familiar with this management console and can probably get to it blindfolded. You might use this or “My Computer” when you need to see how much free space there is on a drive. If you happen to be working in an environment that only has Windows Server 2012 and Windows 8 or higher, I wish I was there with you. PowerShell 4.0 and higher offers storage cmdlets that let you get information about your disks and volumes much more easily and cleanly. They actually use CIM (Common Information Model), which is what WMI is built upon. I read somewhere that basically “WMI is just Microsoft’s way of implementing CIM”. They are obviously going back to the standard, as they have done in other areas. It is worth learning more about, and it actually allows you to connect to a PowerShell 2.0 machine and get the same information.

Anyway, back to the task at hand. If you are working on PowerShell 3.0 or lower you can use Get-WmiObject and win32_volume to get similar information to what the storage cmdlet Get-Volume returns in 4.0:

Get-Volume
Get-WmiObject win32_volume | select DriveLetter, Label, FileSystem,
@{Label="SizeRemaining";Expression={"{0:N2}" -f($_.FreeSpace/1GB)}},
@{Label="Size";Expression={"{0:N2}" -f($_.Capacity/1GB)}} | Format-Table
(screenshot: win32_volume query output)

Windows Event Viewer

Almost everyone is familiar with and knows their way around the Windows Event Viewer. I actually left this for last for a reason. I want to walk you through an example that I think will help “put it all together” on what PowerShell can do for you. Our scenario is a server that had an unexpected restart, unexpected to me at least. There are times that I will get paged by our Avail Monitoring product for a customer’s site, and I need to find out who restarted the server, or why it restarted. The most common place you are going to go for this is the Event Log.

Show-EventLog

If you just want to go through Event Viewer and manually find events, and it is a remote server, I find this to be the quickest method:

Show-EventLog -ComputerName Server1

This command will open Event Viewer and go through the process of connecting you to “Server1”. No more right-clicking and selecting “connect to another computer”!

Get-EventLog

I prefer to just dig into searching for events, and this is where Get-EventLog comes in handy. You can call this cmdlet and provide (see the combined example after the list):

  1. Specific Log to look in (system, application, or security most commonly)
  2. Specify a time range
  3. Look just for specific entry type (error, information, warning, etc.)
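
Putting those three together, here is a quick sketch (the log name, entry type, and time window are just examples):

Get-EventLog -LogName System -EntryType Error -After (Get-Date).AddDays(-7)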

In Windows Server 2003 Microsoft added a group policy, “Shutdown Event Tracker”, which, if enabled, writes particular events to the System Log when a server restarts, whether planned or unplanned. For an unplanned shutdown, the first user who logs into the server gets a prompt about the unexpected shutdown. For a planned restart, a similar prompt has to be filled in before the restart will occur. What you can do with this cmdlet is search for those messages in the System Log.

To find the planned you would use:

Get-EventLog -LogName System -Message "*restart*" -ComputerName Server1 |
select * -First 1

Then to find the unplanned simply change “*restart*” to “*shutdown*”:
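
# the same command as above, with the message filter changed
Get-EventLog -LogName System -Message "*shutdown*" -ComputerName Server1 |
select * -First 1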


(screenshot: Get-EventLog output for the shutdown search)

In this instance I find that SSIS and SSRS did not start back up; they failed to start. I found this because I checked the status of the SQL Server services using my custom function, Get-SQLServiceStatus:

(screenshot: Get-SQLServiceStatus output showing the stopped services)

To search for events around the shutdown I need to find the first event written to the Event Log when a server starts up, the one from the EventLog source. I can then use its time stamp as a starting point to search for messages about the SQL Server services that did not start up correctly, passing it into the Get-EventLog cmdlet to pull up error events. I am going to do that with this bit of code:

$t = Get-EventLog -LogName System -Source EventLog -Message "*shutdown*" | select * -First 1
Get-EventLog -LogName System -Before $t.TimeGenerated -Newest 5 -EntryType Error |
select TimeGenerated, Source, Message | Format-Table -Wrap
(screenshot: the error events returned around the startup)

Summary

I hope you found this post useful and that it gets you excited about digging deeper into PowerShell. In the next post I am going to close out the series by digging into SQL Server and a few areas where PowerShell can help.

 

Learn more about our expertise in SQL Server.

Categories: DBA Blogs

Bug 13914613 Example Shared Pool Latch Waits

Bobby Durrett's DBA Blog - Tue, 2015-06-09 17:18

Oracle support says we have hit bug 13914613.  Here is what our wait events looked like in an AWR report:

Top 5 Timed Foreground Events

Event                    Waits     Time(s) Avg wait (ms) % DB time Wait Class
------------------------ --------- ------- ------------- --------- -----------
latch: shared pool           3,497  17,482          4999     38.83 Concurrency
latch: row cache objects       885  12,834         14502     28.51 Concurrency
db file sequential read  1,517,968   8,206             5     18.23 User I/O
DB CPU                               4,443                    9.87
library cache: mutex X       7,124   2,639           370      5.86 Concurrency

What really struck me about these latch waits was that the average wait time was several thousand milliseconds, which means several seconds.  That’s a long time to wait for a latch.

Oracle pointed to the Latch Miss Sources section of the AWR report.  This is all gibberish to me.  I guess these are internal kernel latch names.

Latch Miss Sources

Latch Name   Where                        NoWait Misses Sleeps Waiter Sleeps
------------ ---------------------------- ------------- ------ -------------
shared pool  kghfrunp: clatch: wait                   0  1,987         1,956
shared pool  kghfrunp: alloc: session dur             0  1,704         1,364

The bug description says “Excessive time holding shared pool latch in kghfrunp with auto memory management”, so I guess the “kghfrunp” latch miss sources told Oracle support that this was my issue.

I did this query to look for resize operations:

SELECT COMPONENT ,OPER_TYPE,FINAL_SIZE Final,
  2  to_char(start_time,'dd-mon hh24:mi:ss') Started,
  3  to_char(end_time,'dd-mon hh24:mi:ss') Ended
  4  FROM V$SGA_RESIZE_OPS;

COMPONENT                 OPER_TYPE               FINAL STARTED                   ENDED
------------------------- ------------- --------------- ------------------------- -------------------------
DEFAULT 2K buffer cache   STATIC                      0 12-may 04:33:01           12-may 04:33:01
streams pool              STATIC            134,217,728 12-may 04:33:01           12-may 04:33:01
ASM Buffer Cache          STATIC                      0 12-may 04:33:01           12-may 04:33:01
DEFAULT buffer cache      INITIALIZING   10,401,873,920 12-may 04:33:01           12-may 04:33:08
DEFAULT 32K buffer cache  STATIC                      0 12-may 04:33:01           12-may 04:33:01
KEEP buffer cache         STATIC          2,147,483,648 12-may 04:33:01           12-may 04:33:01
shared pool               STATIC         13,958,643,712 12-may 04:33:01           12-may 04:33:01
large pool                STATIC          2,147,483,648 12-may 04:33:01           12-may 04:33:01
java pool                 STATIC          1,073,741,824 12-may 04:33:01           12-may 04:33:01
DEFAULT buffer cache      STATIC         10,401,873,920 12-may 04:33:01           12-may 04:33:01
DEFAULT 16K buffer cache  STATIC                      0 12-may 04:33:01           12-may 04:33:01
DEFAULT 8K buffer cache   STATIC                      0 12-may 04:33:01           12-may 04:33:01
DEFAULT 4K buffer cache   STATIC                      0 12-may 04:33:01           12-may 04:33:01
RECYCLE buffer cache      STATIC                      0 12-may 04:33:01           12-may 04:33:01
KEEP buffer cache         INITIALIZING    2,147,483,648 12-may 04:33:02           12-may 04:33:04
DEFAULT buffer cache      SHRINK         10,334,765,056 20-may 21:00:12           20-may 21:00:12
shared pool               GROW           14,025,752,576 20-may 21:00:12           20-may 21:00:12
shared pool               GROW           14,092,861,440 27-may 18:06:12           27-may 18:06:12
DEFAULT buffer cache      SHRINK         10,267,656,192 27-may 18:06:12           27-may 18:06:12
shared pool               GROW           14,159,970,304 01-jun 09:07:35           01-jun 09:07:36
DEFAULT buffer cache      SHRINK         10,200,547,328 01-jun 09:07:35           01-jun 09:07:36
DEFAULT buffer cache      SHRINK         10,133,438,464 05-jun 03:00:33           05-jun 03:00:33
shared pool               GROW           14,227,079,168 05-jun 03:00:33           05-jun 03:00:33
DEFAULT buffer cache      SHRINK         10,066,329,600 08-jun 11:06:06           08-jun 11:06:07
shared pool               GROW           14,294,188,032 08-jun 11:06:06           08-jun 11:06:07

The interesting thing is that our problem ended right about the time the last shared pool expansion supposedly started.  The latch waits hosed up our database for several minutes, and it ended right about 11:06.  I suspect that the system was hung up with the bug and that once the bug finished, the normal expansion work started.  Or, at least, the time didn’t get recorded until after the bug finished slowing us down.

So, I guess it’s just a bug.  This is on 11.2.0.3 on HP-UX Itanium.  I believe there is a patch set with the fix for this bug.

Maybe it will be helpful for someone to see an example.

– Bobby

Categories: DBA Blogs

Links for 2015-06-08 [del.icio.us]

Categories: DBA Blogs

How to Display Base64 Encoded Image in MAF

One of the features and benefits of Oracle Mobile Application Framework is the ability to access native device services, such as your smartphone camera. In an Oracle Mobile Application Framework...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Partner Webcast – Oracle Database 12c: Application Express 5.0 for Cloud development

If you have the Oracle Database, you already have Application Express. When you get Oracle Database Cloud, you get the Application Express full development platform for cloud-based applications. Since...

We share our skills to maximize your revenue!
Categories: DBA Blogs

RMAN -- 1 : Backup Job Details

Hemant K Chitale - Sun, 2015-06-07 03:57
Here's a post on how you could be misled by a simple report on the V$RMAN_BACKUP_JOB_DETAILS view.

Suppose I run RMAN Backups through a shell script.  Like this :

[oracle@localhost Hemant]$ ls -l *sh
-rwxrw-r-- 1 oracle oracle 336 Jun 7 17:30 Backup_DB_Plus_ArchLogs.sh
[oracle@localhost Hemant]$ cat Backup_DB_Plus_ArchLogs.sh
ORACLE_SID=orcl;export ORACLE_SID

rman << EOF
connect target /

spool log to Backup_DB_plus_ArchLogs.LOG

backup as compressed backupset database ;

sql 'alter system switch logfile';
sql 'alter system archive log current' ;

backup as compressed backupset archivelog all;

backup as compressed backupset current controlfile ;

EOF

[oracle@localhost Hemant]$
[oracle@localhost Hemant]$
[oracle@localhost Hemant]$
[oracle@localhost Hemant]$ ./Backup_DB_Plus_ArchLogs.sh

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Jun 7 17:31:06 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

RMAN>
connected to target database: ORCL (DBID=1229390655)

RMAN>
RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> [oracle@localhost Hemant]$
[oracle@localhost Hemant]$

I then proceed to check the results of the run in V$RMAN_BACKUP_JOB_DETAILS.

SQL> l
1 select to_char(start_time,'DD-MON HH24:MI') StartTime, to_char(end_time,'DD-MON HH24:MI') EndTime,
2 input_type, status
3 from v$rman_backup_job_details
4* where start_time > trunc(sysdate)+17.5/24
SQL> /

STARTTIME ENDTIME INPUT_TYPE STATUS
--------------------- --------------------- ------------- -----------------------
07-JUN 17:31 07-JUN 17:31 DB FULL FAILED

SQL>

It says that I ran one FULL DATABASE Backup that failed. Is that really true ?  Let me check the RMAN spooled log.

[oracle@localhost Hemant]$ cat Backup_DB_plus_ArchLogs.LOG

Spooling started in log file: Backup_DB_plus_ArchLogs.LOG

Recovery Manager11.2.0.2.0

RMAN>
RMAN>
Starting backup at 07-JUN-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=60 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=59 device type=DISK
RMAN-06169: could not read file header for datafile 6 error reason 4
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 06/07/2015 17:31:08
RMAN-06056: could not access datafile 6

RMAN>
RMAN>
sql statement: alter system switch logfile

RMAN>
sql statement: alter system archive log current

RMAN>
RMAN>
Starting backup at 07-JUN-15
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=615 RECID=1 STAMP=881773851
channel ORA_DISK_1: starting piece 1 at 07-JUN-15
channel ORA_DISK_2: starting compressed archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=616 RECID=2 STAMP=881773851
input archived log thread=1 sequence=617 RECID=3 STAMP=881773853
input archived log thread=1 sequence=618 RECID=4 STAMP=881774357
input archived log thread=1 sequence=619 RECID=5 STAMP=881774357
channel ORA_DISK_2: starting piece 1 at 07-JUN-15
channel ORA_DISK_2: finished piece 1 at 07-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_annnn_TAG20150607T173112_bq83v12b_.bkp tag=TAG20150607T173112 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_2: starting compressed archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=620 RECID=6 STAMP=881775068
input archived log thread=1 sequence=621 RECID=7 STAMP=881775068
input archived log thread=1 sequence=622 RECID=8 STAMP=881775071
channel ORA_DISK_2: starting piece 1 at 07-JUN-15
channel ORA_DISK_1: finished piece 1 at 07-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_annnn_TAG20150607T173112_bq83v10y_.bkp tag=TAG20150607T173112 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_2: finished piece 1 at 07-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_annnn_TAG20150607T173112_bq83v292_.bkp tag=TAG20150607T173112 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
Finished backup at 07-JUN-15

Starting Control File and SPFILE Autobackup at 07-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/autobackup/2015_06_07/o1_mf_s_881775075_bq83v3nr_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 07-JUN-15

RMAN>
RMAN>
Starting backup at 07-JUN-15
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 07-JUN-15
channel ORA_DISK_1: finished piece 1 at 07-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_ncnnf_TAG20150607T173117_bq83v6vg_.bkp tag=TAG20150607T173117 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 07-JUN-15

Starting Control File and SPFILE Autobackup at 07-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/autobackup/2015_06_07/o1_mf_s_881775080_bq83v88z_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 07-JUN-15

RMAN>
RMAN>

Recovery Manager complete.
[oracle@localhost Hemant]$

Hmm. There were *three* distinct BACKUP commands in the script file.  The first was BACKUP ... DATABASE ..., the second was BACKUP ... ARCHIVELOG ... and the third was BACKUP ... CURRENT CONTROLFILE.  All three were executed.
Only the first BACKUP execution failed.  The subsequent  two BACKUP commands succeeded.  They were for ArchiveLogs and the Controlfile.
And *yet* the view V$RMAN_BACKUP_JOB_DETAILS shows that I ran  a FULL DATABASE BACKUP that failed.  It tells me nothing about the ArchiveLogs and the ControlFile backups that did succeed !


What if I switch my strategy from using a shell script to an rman script ?

[oracle@localhost Hemant]$ ls -ltr *rmn
-rw-rw-r-- 1 oracle oracle 287 Jun 7 17:41 Backup_DB_plus_ArchLogs.rmn
[oracle@localhost Hemant]$ cat Backup_DB_plus_ArchLogs.rmn
connect target /

spool log to Backup_DB_plus_ArchLogs.TXT

backup as compressed backupset database ;

sql 'alter system switch logfile';
sql 'alter system archive log current' ;

backup as compressed backupset archivelog all;

backup as compressed backupset current controlfile;

exit

[oracle@localhost Hemant]$
[oracle@localhost Hemant]$
[oracle@localhost Hemant]$
[oracle@localhost Hemant]$ rman @Backup_DB_plus_ArchLogs.rmn

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Jun 7 17:42:17 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

RMAN> connect target *
2>
3> spool log to Backup_DB_plus_ArchLogs.TXT
4>
5> backup as compressed backupset database ;
6>
7> sql 'alter system switch logfile';
8> sql 'alter system archive log current' ;
9>
10> backup as compressed backupset archivelog all;
11>
12> backup as compressed backupset current controlfile;
13>
14> exit[oracle@localhost Hemant]$




SQL> l
1 select to_char(start_time,'DD-MON HH24:MI') StartTime, to_char(end_time,'DD-MON HH24:MI') EndTime,
2 input_type, status
3 from v$rman_backup_job_details
4 where start_time > trunc(sysdate)+17.5/24
5* order by start_time
SQL> /

STARTTIME ENDTIME INPUT_TYPE STATUS
--------------------- --------------------- ------------- -----------------------
07-JUN 17:31 07-JUN 17:31 DB FULL FAILED
07-JUN 17:42 07-JUN 17:42 DB FULL FAILED

SQL>

[oracle@localhost Hemant]$
[oracle@localhost Hemant]$ cat Backup_DB_plus_ArchLogs.TXT

connected to target database: ORCL (DBID=1229390655)

Spooling started in log file: Backup_DB_plus_ArchLogs.TXT

Recovery Manager11.2.0.2.0

Starting backup at 07-JUN-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=59 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=50 device type=DISK
RMAN-06169: could not read file header for datafile 6 error reason 4
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 06/07/2015 17:42:19
RMAN-06056: could not access datafile 6

Recovery Manager complete.
[oracle@localhost Hemant]$

Now, this time, once the first BACKUP command failed, RMAN seems to have bailed out. It didn't even try executing the subsequent BACKUP commands !

How can V$RMAN_BACKUP_JOB_DETAILS differentiate between the two failed backups?

SQL> l
1 select to_char(start_time,'DD-MON HH24:MI') StartTime, to_char(end_time,'DD-MON HH24:MI') EndTime,
2 input_bytes/1048576 Input_MB, output_bytes/1048576 Output_MB,
3 input_type, status
4 from v$rman_backup_job_details
5 where start_time > trunc(sysdate)+17.5/24
6* order by start_time
SQL> /

STARTTIME ENDTIME INPUT_MB OUTPUT_MB INPUT_TYPE STATUS
--------------------- --------------------- ---------- ---------- ------------- -----------------------
07-JUN 17:31 07-JUN 17:31 71.5219727 34.878418 DB FULL FAILED
07-JUN 17:42 07-JUN 17:42 0 0 DB FULL FAILED

SQL>

The Input Bytes figure does indicate that some files were backed up in the first run. Yet it doesn’t tell us how much of that was ArchiveLogs and how much was the ControlFile.


Question 1: How would you script your backups? (Hint: differentiate between the BACKUP DATABASE and the BACKUP ARCHIVELOG runs.)
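
One possible approach, as a sketch along the lines of the shell script above: split the script into separate rman invocations, one for the Database backup and one for the ArchiveLog and ControlFile backups, so that each run is recorded as its own job in V$RMAN_BACKUP_JOB_DETAILS.

rman << EOF
connect target /
backup as compressed backupset database ;
EOF

rman << EOF
connect target /
sql 'alter system archive log current' ;
backup as compressed backupset archivelog all;
backup as compressed backupset current controlfile ;
EOF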

Question 2: Can you improve your Backup Reports?

Yes, the RMAN LIST BACKUP command is useful.  But you can't select the columns, format the output or add text  as you would with a query on V$ views.

[oracle@localhost oracle]$ NLS_DATE_FORMAT=DD_MON_HH24_MI_SS;export NLS_DATE_FORMAT
[oracle@localhost oracle]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Jun 7 17:51:41 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN> list backup completed after "trunc(sysdate)+17.5/24";

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
17 375.50K DISK 00:00:01 07_JUN_17_31_13
BP Key: 17 Status: AVAILABLE Compressed: YES Tag: TAG20150607T173112
Piece Name: /NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_annnn_TAG20150607T173112_bq83v12b_.bkp

List of Archived Logs in backup set 17
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------------- ---------- ---------
1 616 14068910 07_JUN_17_10_49 14068920 07_JUN_17_10_51
1 617 14068920 07_JUN_17_10_51 14068931 07_JUN_17_10_53
1 618 14068931 07_JUN_17_10_53 14069550 07_JUN_17_19_17
1 619 14069550 07_JUN_17_19_17 14069564 07_JUN_17_19_17

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
18 1.03M DISK 00:00:00 07_JUN_17_31_14
BP Key: 18 Status: AVAILABLE Compressed: YES Tag: TAG20150607T173112
Piece Name: /NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_annnn_TAG20150607T173112_bq83v292_.bkp

List of Archived Logs in backup set 18
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------------- ---------- ---------
1 620 14069564 07_JUN_17_19_17 14070254 07_JUN_17_31_08
1 621 14070254 07_JUN_17_31_08 14070265 07_JUN_17_31_08
1 622 14070265 07_JUN_17_31_08 14070276 07_JUN_17_31_11

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
19 13.72M DISK 00:00:02 07_JUN_17_31_14
BP Key: 19 Status: AVAILABLE Compressed: YES Tag: TAG20150607T173112
Piece Name: /NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_annnn_TAG20150607T173112_bq83v10y_.bkp

List of Archived Logs in backup set 19
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------------- ---------- ---------
1 615 14043833 12_JUN_23_28_21 14068910 07_JUN_17_10_49

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
20 Full 9.36M DISK 00:00:00 07_JUN_17_31_15
BP Key: 20 Status: AVAILABLE Compressed: NO Tag: TAG20150607T173115
Piece Name: /NEW_FS/oracle/FRA/ORCL/autobackup/2015_06_07/o1_mf_s_881775075_bq83v3nr_.bkp
SPFILE Included: Modification time: 07_JUN_17_28_15
SPFILE db_unique_name: ORCL
Control File Included: Ckp SCN: 14070285 Ckp time: 07_JUN_17_31_15

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
21 Full 1.05M DISK 00:00:02 07_JUN_17_31_19
BP Key: 21 Status: AVAILABLE Compressed: YES Tag: TAG20150607T173117
Piece Name: /NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_ncnnf_TAG20150607T173117_bq83v6vg_.bkp
Control File Included: Ckp SCN: 14070306 Ckp time: 07_JUN_17_31_17

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
22 Full 9.36M DISK 00:00:00 07_JUN_17_31_20
BP Key: 22 Status: AVAILABLE Compressed: NO Tag: TAG20150607T173120
Piece Name: /NEW_FS/oracle/FRA/ORCL/autobackup/2015_06_07/o1_mf_s_881775080_bq83v88z_.bkp
SPFILE Included: Modification time: 07_JUN_17_31_18
SPFILE db_unique_name: ORCL
Control File Included: Ckp SCN: 14070312 Ckp time: 07_JUN_17_31_20

RMAN>

So, the RMAN LIST BACKUP can provide details that V$RMAN_BACKUP_JOB_DETAILS cannot provide. Yet, it doesn't tell us that a Backup failed.
.
.
.

Categories: DBA Blogs