
DBA Blogs

RMAN -- 8 : Using a Recovery Catalog Schema

Hemant K Chitale - Sun, 2015-08-30 06:21
Besides retaining information about backups in the controlfile, Oracle allows the use of an (external) Recovery Catalog schema.  This schema is queryable via SQL in the same manner as querying any user / application schema.
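
For example, once the catalog schema created below is in place, you can query its views directly as the catalog owner; something along these lines (only a sketch, output not shown here):

-- run in the catalog database, connected as the catalog owner
select name, dbid from rc_database;

select bs_key, backup_type, completion_time
from   rc_backup_set
order  by completion_time;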

Let's start with a database that already has backups present but created without a Recovery Catalog Schema.

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 30 19:48:30 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select to_char(completion_time,'DD-MON HH24:MI') Completed_at, datafile_blocks, blocks_read
2 from v$backup_datafile
3 where file#=1
4 order by completion_time;

COMPLETED_AT DATAFILE_BLOCKS BLOCKS_READ
--------------------- --------------- -----------
01-AUG 22:10 107648 107648
10-AUG 15:14 107648 107648
10-AUG 19:58 107648 107648
30-AUG 16:59 107648 107648

SYS>

I now create a Catalog Schema and register this database into that schema.  There are 4 steps to this.  The first is to create the database that will hold the Catalog Schema (here, we presume that the database has been created before today's steps).  The next is to create the database account for the Catalog Schema (I create the account RCAT_OWNER).  The third step is to log in to the Catalog Schema with RMAN and run the CREATE CATALOG command.  The final step is to create an RMAN connection from the TARGET database to the Catalog and REGISTER the database.

[oracle@localhost ~]$ sqlplus system/oracle@rcat

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 30 19:52:15 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYSTEM>create user rcat_owner identified by rcat_owner
2 default tablespace users quota unlimited on users;

User created.

SYSTEM>grant create session to rcat_owner;

Grant succeeded.

SYSTEM>grant recovery_catalog_owner to rcat_owner;

Grant succeeded.

SYSTEM>
SYSTEM>exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ rman catalog rcat_owner@rcat

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Aug 30 19:53:26 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

recovery catalog database Password:
connected to recovery catalog database

RMAN> create catalog;

recovery catalog created

RMAN> exit


Recovery Manager complete.
[oracle@localhost ~]$ rman target / catalog rcat_owner/rcat_owner@rcat

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Aug 30 19:55:59 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)
connected to recovery catalog database

RMAN> register database;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

RMAN>

When my (TARGET) database ("ORCL") is registered, Oracle automatically does a RESYNC CATALOG. Can I now see my backups in the Catalog?

RMAN> list backup of datafile 1;


List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
381 Full 863.89M DISK 00:03:38 10-AUG-15
BP Key: 391 Status: AVAILABLE Compressed: YES Tag: TAG20150810T195515
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/o1_mf_nnndf_TAG20150810T195515_bwk493jn_.bkp
List of Datafiles in backup set 381
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14186110 10-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
385 Full 238.12M DISK 00:01:09 30-AUG-15
BP Key: 395 Status: AVAILABLE Compressed: YES Tag: TAG20150830T165804
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_30/o1_mf_nnndf_TAG20150830T165804_by5kdwrb_.bkp
List of Datafiles in backup set 385
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14198051 30-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

RMAN>

Why does it show only the latest two backups? What about the preceding two backups that I could see in V$BACKUP_DATAFILE? Let me check those backups without a CATALOG connection.

RMAN> exit


Recovery Manager complete.
[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Aug 30 19:59:59 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN> list backup of datafile 1;

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
254 Full 733.27M DISK 00:04:51 01-AUG-15
BP Key: 266 Status: AVAILABLE Compressed: YES Tag: TAG20150801T220612
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_01/o1_mf_nnndf_TAG20150801T220612_bvsnlnpn_.bkp
List of Datafiles in backup set 254
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14157609 01-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
262 Full 733.23M DISK 00:03:17 10-AUG-15
BP Key: 274 Status: AVAILABLE Compressed: YES Tag: TAG20150810T151144
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/o1_mf_nnndf_TAG20150810T151144_bwjmojs6_.bkp
List of Datafiles in backup set 262
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14158847 10-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
271 Full 863.89M DISK 00:03:38 10-AUG-15
BP Key: 283 Status: AVAILABLE Compressed: YES Tag: TAG20150810T195515
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/o1_mf_nnndf_TAG20150810T195515_bwk493jn_.bkp
List of Datafiles in backup set 271
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14186110 10-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
275 Full 238.12M DISK 00:01:09 30-AUG-15
BP Key: 287 Status: AVAILABLE Compressed: YES Tag: TAG20150830T165804
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_30/o1_mf_nnndf_TAG20150830T165804_by5kdwrb_.bkp
List of Datafiles in backup set 275
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14198051 30-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

RMAN>

When I disconnect from the Catalog Schema and make a local-only (TARGET) connection, I can see 4 backups of the datafile.  So, what gives?  Let me try a SQL query on V$BACKUP_DATAFILE.
(As an aside: note above how there can be a discrepancy in the listings shown by LIST BACKUP in the two scenarios: (a) without a Recovery Catalog connection, and (b) with a Recovery Catalog that was created after the last RESETLOGS).  This is something to remember.

RMAN> exit


Recovery Manager complete.
[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 30 20:01:54 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select to_char(completion_time,'DD-MON HH24:MI') Completed_at, datafile_blocks, blocks_read,
2 resetlogs_change#, resetlogs_time
3 from v$backup_datafile
4 where file#=1
5 order by completion_time
6 /

COMPLETED_AT DATAFILE_BLOCKS BLOCKS_READ RESETLOGS_CHANGE# RESETLOGS
--------------------- --------------- ----------- ----------------- ---------
01-AUG 22:10 107648 107648 14082620 04-JUL-15
10-AUG 15:14 107648 107648 14082620 04-JUL-15
10-AUG 19:58 107648 107648 14185666 10-AUG-15
30-AUG 16:59 107648 107648 14185666 10-AUG-15

SYS>

Notice that I have added two columns (RESETLOGS_CHANGE# and RESETLOGS_TIME) to the query.  Now I see that the two older backups were from an *older* incarnation of the database.  They have a different RESETLOGS_CHANGE# / RESETLOGS_TIME.  So, those backups of the older incarnation are not cataloged into the Catalog Schema!
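
A quick way to confirm this from the target database is to look at its incarnation history, along these lines (run as SYSDBA on the target; output omitted here):

select incarnation#, resetlogs_change#, resetlogs_time, status
from   v$database_incarnation
order  by incarnation#;

Within RMAN, the LIST INCARNATION command reports the same history.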

Can I do anything about this?  It seems that a full RESYNC doesn't resync backups of previous incarnations.  Can I reset my RETENTION POLICY and then do a RESYNC?

SYS>exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ rman target / catalog rcat_owner/rcat_owner@rcat

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Aug 30 20:08:06 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)
connected to recovery catalog database

RMAN> show all;

RMAN configuration parameters for database with db_unique_name HEMANTDB are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 4 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/home/oracle/app/oracle/product/11.2.0/dbhome_2/dbs/snapcf_orcl.f'; # default

RMAN> configure retention policy to recovery window of 36 days;

old RMAN configuration parameters:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 4 DAYS;
new RMAN configuration parameters:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 36 DAYS;
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

RMAN> resync catalog;

starting full resync of recovery catalog
full resync complete

RMAN> list backup of datafile 1;


List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
381 Full 863.89M DISK 00:03:38 10-AUG-15
BP Key: 391 Status: AVAILABLE Compressed: YES Tag: TAG20150810T195515
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/o1_mf_nnndf_TAG20150810T195515_bwk493jn_.bkp
List of Datafiles in backup set 381
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14186110 10-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
385 Full 238.12M DISK 00:01:09 30-AUG-15
BP Key: 395 Status: AVAILABLE Compressed: YES Tag: TAG20150830T165804
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_30/o1_mf_nnndf_TAG20150830T165804_by5kdwrb_.bkp
List of Datafiles in backup set 385
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14198051 30-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

RMAN>

No, extending the Recovery Window still doesn't help.  Can I try something else?  What about the CATALOG command?

[oracle@localhost ~]$ ls -l /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_01/
total 894280
-rw-rw---- 1 oracle oracle 7786496 Aug 1 22:06 o1_mf_annnn_TAG20150801T220605_bvsnlfrc_.bkp
-rw-rw---- 1 oracle oracle 2421248 Aug 1 22:06 o1_mf_annnn_TAG20150801T220605_bvsnlk3q_.bkp
-rw-rw---- 1 oracle oracle 56320 Aug 1 22:06 o1_mf_annnn_TAG20150801T220605_bvsnll9n_.bkp
-rw-rw---- 1 oracle oracle 3595776 Aug 1 22:06 o1_mf_annnn_TAG20150801T220605_bvsnlmdx_.bkp
-rw-rw---- 1 oracle oracle 165888 Aug 1 22:12 o1_mf_annnn_TAG20150801T221209_bvsnxs75_.bkp
-rw-rw---- 1 oracle oracle 16896 Aug 1 22:14 o1_mf_annnn_TAG20150801T221404_bvso1f28_.bkp
-rw-rw---- 1 oracle oracle 768901120 Aug 1 22:11 o1_mf_nnndf_TAG20150801T220612_bvsnlnpn_.bkp
-rw-rw---- 1 oracle oracle 110075904 Aug 1 22:11 o1_mf_nnndf_TAG20150801T220612_bvsnvwjj_.bkp
-rw-rw---- 1 oracle oracle 21782528 Aug 1 22:12 o1_mf_nnndf_TAG20150801T220612_bvsnx9v4_.bkp
[oracle@localhost ~]$ ls -l /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/
total 1771844
-rw-rw---- 1 oracle oracle 786944 Aug 10 15:11 o1_mf_annnn_TAG20150810T151143_bwjmohc7_.bkp
-rw-rw---- 1 oracle oracle 28672 Aug 10 15:16 o1_mf_annnn_TAG20150810T151601_bwjmxk5v_.bkp
-rw-rw---- 1 oracle oracle 4140032 Aug 10 19:55 o1_mf_annnn_TAG20150810T195509_bwk48xxv_.bkp
-rw-rw---- 1 oracle oracle 526336 Aug 10 19:55 o1_mf_annnn_TAG20150810T195509_bwk4911j_.bkp
-rw-rw---- 1 oracle oracle 311296 Aug 10 19:55 o1_mf_annnn_TAG20150810T195509_bwk4923s_.bkp
-rw-rw---- 1 oracle oracle 32256 Aug 10 19:59 o1_mf_annnn_TAG20150810T195901_bwk4j5lt_.bkp
-rw-rw---- 1 oracle oracle 768851968 Aug 10 15:15 o1_mf_nnndf_TAG20150810T151144_bwjmojs6_.bkp
-rw-rw---- 1 oracle oracle 110075904 Aug 10 15:15 o1_mf_nnndf_TAG20150810T151144_bwjmvycy_.bkp
-rw-rw---- 1 oracle oracle 21938176 Aug 10 15:15 o1_mf_nnndf_TAG20150810T151144_bwjmx1sv_.bkp
-rw-rw---- 1 oracle oracle 905863168 Aug 10 19:58 o1_mf_nnndf_TAG20150810T195515_bwk493jn_.bkp
[oracle@localhost ~]$


RMAN> catalog start with '/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_01/';

searching for all files that match the pattern /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_01/
no files found to be unknown to the database

RMAN> list backup of datafile 1;


List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
381 Full 863.89M DISK 00:03:38 10-AUG-15
BP Key: 391 Status: AVAILABLE Compressed: YES Tag: TAG20150810T195515
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/o1_mf_nnndf_TAG20150810T195515_bwk493jn_.bkp
List of Datafiles in backup set 381
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14186110 10-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
385 Full 238.12M DISK 00:01:09 30-AUG-15
BP Key: 395 Status: AVAILABLE Compressed: YES Tag: TAG20150830T165804
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_30/o1_mf_nnndf_TAG20150830T165804_by5kdwrb_.bkp
List of Datafiles in backup set 385
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14198051 30-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

RMAN>
RMAN> catalog start with '/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/';

searching for all files that match the pattern /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/
no files found to be unknown to the database

Well, apparently, even the CATALOG command refuses to catalog backup pieces from an older incarnation!

So, it seems that:
1.  If you've done a RESETLOGS recently and you create a Catalog Schema after the RESETLOGS, backups of the preceding incarnation are not visible in the Catalog, even though the controlfile may still show them (disconnecting from the Catalog does allow you to view those previous backups in RMAN!).
2.  The CATALOG command also will not include the previous backups; it will only accept backups of the current incarnation.

This testing has been done with 11.2.0.2.  Has the behaviour changed in 11.2.0.4 / 12.1.0.1 / 12.1.0.2?

.
.
.


Categories: DBA Blogs

amcmd> a better “du”

Pythian Group - Fri, 2015-08-28 14:28

I discovered ASM with a 10.1.0.3 RAC running on Linux Itanium, and that was a big adventure. At that time there was no asmcmd. In 2005, Oracle released Oracle 10gR2, asmcmd came into the picture, and we figured out how to make it work with a 10gR1 ASM. We were very excited to have a command line for ASM until… we tried it! Let's call a spade a spade: it was very poor…

Ten years later, Oracle has released 11gR1, 11gR2, and 12cR1; asmcmd has been improved, but the "ASM shell" remains very weak, especially the "du" command:

ASMCMD> du
Used_MB Mirror_used_MB
 556178 556178
ASMCMD> du .
Used_MB Mirror_used_MB
 556178 556178
ASMCMD> du *
Used_MB Mirror_used_MB
 556265 556265
ASMCMD> ls
ASM_CONFIG/
DATA/
FRA/
LOG/
ASMCMD>

Why doesn't "du *" behave as it does in any Unix shell? How do I know the size of each subdirectory in my current directory?

 

Nowadays, we often have dozens of instances running on the same server, all sharing the same ASM:


[oracle@higgins ~]$ ps -ef | grep pmon | wc -l
30
[oracle@higgins ~]$

So should I run one "du" per database (directory) to know the size used by each database? What if I keep one month of archivelogs in my FRA? Should I wait for the month of February, so that there are only 28 "du" commands to run (in a non-leap year!), just to know the size of the archivelogs generated each day?
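
Done by hand, that means listing the directory and then running one "du" per subdirectory, something along these lines (the directory names are just examples):

ASMCMD> ls DATA
ASMCMD> du DATA/DB01
ASMCMD> du DATA/DB02

...and so on, once per subdirectory.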

 

This is why I wrote this piece of code to have a “du” under ASM that makes my life easier everyday :

[oracle@higgins ~]$ cat asmdu.sh
#!/bin/bash
#
# du of each subdirectory in a directory for ASM
#
D=$1

if [[ -z $D ]]
then
 echo "Please provide a directory !"
 exit 1
fi

(for DIR in `asmcmd ls ${D}`
 do
     echo ${DIR} `asmcmd du ${D}/${DIR} | tail -1`
 done) | awk -v D="$D" ' BEGIN {  printf("\n\t\t%40s\n\n", D " subdirectories size")           ;
                                  printf("%25s%16s%16s\n", "Subdir", "Used MB", "Mirror MB")   ;
                                  printf("%25s%16s%16s\n", "------", "-------", "---------")   ;}
                               {
                                  printf("%25s%16s%16s\n", $1, $2, $3)                         ;
                                  use += $2                                                    ;
                                  mir += $3                                                    ;
                               }
                         END   { printf("\n\n%25s%16s%16s\n", "------", "-------", "---------");
                                 printf("%25s%16s%16s\n\n", "Total", use, mir)                 ;} '
[oracle@higgins ~]$
Let's see it in action with some real-life examples:
[oracle@higgins ~]$ . oraenv
ORACLE_SID = [+ASM] ? +ASM
The Oracle base remains unchanged with value /oracle
[oracle@higgins ~]$ ./asmdu.sh DATA

DATA subdirectories size

Subdir  Used MB Mirror MB
------  ------- --------
DB01/    2423    2423
DB02/    2642    2642
DB03/    321201  321201
DB04/    39491   39491
DB05/    180753  180753
DB06/    4672    4672
DB07/    1431    1431
DB08/    2653    2653
DB09/    70942   70942
DB10/    96001   96001
DB11/    57322   57322
DB12/    70989   70989
DB13/    4639    4639
DB14/    40800   40800
DB15/    13397   13397
DB16/    15279   15279
DB17/    19020   19020
DB18/    8886    8886
DB19/    4671    4671
DB20/    14994   14994
DB21/    502245  502245
DB22/    4839    4839
DB23/    10169   10169
DB24/    7772    7772
DB25/    7828    7828
DB26/    112109  112109
DB27/    5564    5564
DB28/    16895   16895
------  ------- ---------
Total   1639627 1639627
[oracle@higgins ~]$

 

Another one with many archivelogs directories :
[oracle@higgins ~]$ ./asmdu.sh FRA/THE_DB/ARCHIVELOG/

 FRA/THE_DB/ARCHIVELOG/ subdirectories size

 Subdir       Used MB Mirror MB
 ------        ------ ---------
 2015_02_19/    114   114
 2015_02_20/    147   147
 2015_02_21/    112   112
 2015_02_22/    137   137
 2015_02_23/    150   150
 2015_02_24/    126   126
 2015_02_25/    135   135
 2015_02_26/    130   130
 2015_02_27/    129   129
 2015_02_28/    119   119
 2015_03_01/    146   146
 2015_03_02/    150   150
 2015_03_03/    128   128
 2015_03_04/    134   134
 2015_03_05/    44    44
 2015_05_27/    28    28
 2015_05_28/    95    95
 2015_05_29/    76    76
 2015_05_30/    187   187
 2015_05_31/    78    78
 2015_06_01/    111   111
 2015_06_02/    105   105
 2015_06_03/    43    43
 2015_06_04/    142   142
 2015_06_05/    42    42
 2015_06_06/    84    84
 2015_06_07/    70    70
 2015_06_08/    134   134
 2015_06_09/    77    77
 2015_06_10/    143   143
 2015_06_11/    2     2
 2015_06_21/    14    14
 2015_06_22/   14918 14918
 ------       ------- ---------
 Total         18250   18250

[oracle@higgins ~]$

This example is a very nice one as it shows us that 2015 is not a leap year, and that some archivelogs are still on disk even though they probably shouldn't be. That's good information, as v$log_history no longer contains this information:

SQL> select trunc(FIRST_TIME), count(*) from v$log_history group by trunc(FIRST_TIME) order by 1 ;

TRUNC(FIR COUNT(*)
--------- ----------
22-JUN-15 402

SQL>

Hope it also makes your life easier.

Have a good day :)

The post amcmd> a better “du” appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Simplify Oracle Tracing with Creative Scripting

Pythian Group - Fri, 2015-08-28 14:26

Running a SQL trace is something that all DBAs do to varying degrees. Let’s say you are working on optimizing a SQL statement, and experimenting with some different hints for indexes and optimizer directives. This kind of effort typically goes something like this:

  • modify the SQL statement
  • enable tracing
  • run the statement
  • disable tracing
  • disconnect
  • retrieve the trace file
  • use a profiler to process the trace file
this might be Method-R mrskew, Oracle tkprof, or something of your own.
  • delete the trace file if no longer needed

That process is OK if all you need to do is look at a couple of trace files, but quickly becomes tedious for any serious optimization effort as there will be many iterations of this process.  This is the kind of job that just cries out for some simple automation.

Let's walk through automating much of this process using SQL*Plus, ssh, and some profiling tools.

First let’s consider the environment:

  • Oracle 11.2 database on a remote server
  • Workstation has 11.2 client software installed
  • ssh is setup for connecting to the oracle user on the database server
  • some profiling tools are available

Let’s get started with the script that is the subject of our ‘tuning’ effort.

-- sql2trace.sql
select * from dual;

As you can see there is not really going to be any tuning done in this article; it is all about the process.

The following script, tracefile_identifier_demo.sql, is used to set up the trace environment by collecting some information about the database host and the process owner, and then setting the tracefile_identifier parameter.  The values for these are then used to set SQL*Plus define variables.

-- tracefile_identifier_demo.sql

-- column variables to capture host, owner and tracefile name
col tracehost new_value tracehost noprint
col traceowner new_value traceowner noprint
col tracefile new_value tracefile noprint

set term off head off feed off

-- get oracle owner
select username traceowner from v$process where pname = 'PMON';

-- get host name
select host_name tracehost from v$instance;

-- set tracefile identifier
alter session set tracefile_identifier = 'MYTRACEFILE';

select value tracefile from v$diag_info where name = 'Default Trace File';

set term on head on feed on

-- do your tracing here
alter session set events '10046 trace name context forever, level 12';

-- run your SQL here
@@sql2trace

alter session set events '10046 trace name context off';

-- disconnect to ensure all trace data flushed
-- the disconnect must be done in the called script
-- otherwise the values of the defined vars are lost

-- now get the trace file, or other processing
--@@mrskew '&&traceowner@&&tracehost' '&&tracefile'
@@tkprof '&&traceowner@&&tracehost' '&&tracefile'

This article began as an idea to write about tracefile_identifier, hence the script name.

Most of this script is quite straightforward:

  • set column command initiated define variables to capture host, process owner and tracefile name
  • collect the data
  • enable tracing
  • run the target script
  • disable tracing
  • call the tkprof.sql script to run tkprof

The interesting bit is found in tkprof.sql.

-- tkprof.sql

col ssh_target new_value ssh_target noprint
col scp_filename new_value scp_filename noprint

set term off feed off verify off echo off

select '&&1' ssh_target from dual;
select '&&2' scp_filename from dual;

set feed on term on verify on
disconnect

host ssh &&ssh_target 'cat &&scp_filename' | tkprof /dev/stdin ./tkprof.out sort=exeqry sys=no
host cat ./tkprof.out

There are a couple of things to take notice of in tkprof.sql.  Did you notice the disconnect statement?  There are a couple of points of interest about that.  Prior to 11g it was necessary to disconnect from Oracle to ensure that all cursors were closed and all STAT and row source operation rows were written to the trace file.  Disconnecting the session is not necessary in Oracle 11g+.

Another interesting bit about this disconnect statement is its placement.  At first the disconnect statement was in the main script.  The problem was that the define variables would lose their values before the tkprof.sql script was called, and so the call would fail; hence the disconnect command lives in the called script.

Finally, the trace output is retrieved via ssh and piped to tkprof.  Notice that there is no need to actually copy the file; rather, the contents of the file are simply sent to STDOUT and piped to tkprof.

The tkprof command does not read from STDIN.  If, for instance, you try this: cat somefile | tkprof - ./tkprof.out sort=exeqry, tkprof will exit with an error that an input file is needed.  That problem is circumvented by using the file /dev/stdin.
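
Outside of the script, the same trick looks like this on the command line (the host and trace file path here are placeholders):

ssh oracle@dbserver 'cat /path/to/trace/orcl_ora_12345_MYTRACEFILE.trc' | tkprof /dev/stdin ./tkprof.out sort=exeqry sys=no

tkprof treats /dev/stdin as an ordinary input file, so the trace never needs to be copied locally.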

Put it all together and it looks like this:

11:34:11 JKSTILL@oravm > @tracefile_identifier_demo

Session altered.

Elapsed: 00:00:00.00

D
-
X

1 row selected.

Elapsed: 00:00:00.00

Session altered.

Elapsed: 00:00:00.00

TKPROF: Release 11.2.0.3.0 - Development on Thu Aug 27 11:34:18 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Host key fingerprint is de:ad:be:ed:a2:d6:63:4b:rx:77:fd:1c:e1:36:2b:88
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|         .  .    |
|        S  +.    |
|        ..ox.o   |
|       o+.F.* o  |
|      99+o.o.= . |
|     . ..+y.ooo  |
+-----------------+

TKPROF: Release 11.2.0.3.0 - Development on Thu Aug 27 11:34:18 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Trace file: /dev/stdin
Sort options: exeqry
********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************

SQL ID: a5ks9fhw2v9s1 Plan Hash: 272002086

select *
from
 dual


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          2          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          2          0           1

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 90
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  TABLE ACCESS FULL DUAL (cr=2 pr=0 pw=0 time=22 us cost=2 size=2 card=1)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  log file sync                                   1        0.00          0.00
  SQL*Net message from client                     2        0.00          0.00
********************************************************************************

SQL ID: 06nvwn223659v Plan Hash: 0

alter session set events '10046 trace name context off'


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          0          0           0

Misses in library cache during parse: 0
Parsing user id: 90

********************************************************************************
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          2          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        6      0.00       0.00          0          2          0           1

Misses in library cache during parse: 0

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       3        0.00          0.00
  SQL*Net message from client                     3        0.00          0.00
  log file sync                                   1        0.00          0.00


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          3           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        1      0.00       0.00          0          0          3           1

Misses in library cache during parse: 0

    2  user  SQL statements in session.
    1  internal SQL statements in session.
    3  SQL statements in session.
********************************************************************************
Trace file: /dev/stdin
Trace file compatibility: 11.1.0.7
Sort options: exeqry
       1  session in tracefile.
       2  user  SQL statements in trace file.
       1  internal SQL statements in trace file.
       3  SQL statements in trace file.
       3  unique SQL statements in trace file.
     218  lines in trace file.
       0  elapsed seconds in trace file.

The same process was used to run the trace data through the Method-R mrskew command:

-- mrskew.sql

col ssh_target new_value ssh_target noprint
col scp_filename new_value scp_filename noprint

set term off feed off verify off echo off

select '&&1' ssh_target from dual;
select '&&2' scp_filename from dual;

set feed on term on verify on
--disconnect
host ssh &&ssh_target 'cat &&scp_filename' | mrskew

The results of calling mrskew.sql  rather than tkprof.sql:

CALL-NAME                    DURATION       %  CALLS      MEAN       MIN       MAX
---------------------------  --------  ------  -----  --------  --------  --------
SQL*Net message from client  0.003733   74.1%      3  0.001244  0.001004  0.001663
log file sync                0.001300   25.8%      1  0.001300  0.001300  0.001300
SQL*Net message to client    0.000008    0.2%      3  0.000003  0.000002  0.000003
PARSE                        0.000000    0.0%      2  0.000000  0.000000  0.000000
FETCH                        0.000000    0.0%      2  0.000000  0.000000  0.000000
CLOSE                        0.000000    0.0%      2  0.000000  0.000000  0.000000
EXEC                         0.000000    0.0%      2  0.000000  0.000000  0.000000
---------------------------  --------  ------  -----  --------  --------  --------
TOTAL (7)                    0.005041  100.0%     15  0.000336  0.000000  0.001663

These scripts can all be found at https://github.com/jkstill/tracefile_identifier

If you have ideas about how to improve these, please feel free to clone the repo, make some changes and issue a pull request.

If you don’t know what all of that means, might I suggest this article?  Git for Beginners

The next time you have some tracing to do, why not give this method a try?  Doing so will save you time and make you more productive.

 

The post Simplify Oracle Tracing with Creative Scripting appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Pillars of PowerShell: SQL Server – Part 2

Pythian Group - Fri, 2015-08-28 14:24
Introduction

This is the seventh and final post in the series on the Pillars of PowerShell. The previous posts in the series are:

  1. Interacting
  2. Commanding
  3. Debugging
  4. Profiling
  5. Windows OS
  6. SQL Server – Part 1

In this final post I am going to touch on SQL Server Management Objects (SMO) with PowerShell. SMO is one of the most widely used methods and, to me, offers the most versatile way of working with SQL Server. It can be a bit tedious to work with, since you are dealing with raw .NET objects instead of cmdlets, but it offers so much more compared to SQLPS. In this post I am just going to touch on the basics of loading SMO and how you can connect to an instance of SQL Server (or multiple). I am going to end by showing you a function I published a few years ago and still use fairly frequently to this day.

Loading SMO

As with SQLPS, you have to load SMO into your PowerShell session before you can utilize it. SMO is what is referred to as an "assembly", basically a collection of types and other objects that form a logical unit of functionality for interacting with various parts of SQL Server. With SQL Server 2012 and above, you can import the SQLPS module and it will automatically import the associated version of SMO. However, since SQLPS loads more than just SMO, it can take time for that to complete before your script continues. In that regard, you can shave off some time by loading SMO directly, without all the overhead of the SQLPS module. You will commonly see the following line of code used to load SMO into your session:

[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')
(screenshot: output of loading the SMO assembly, showing version 13.0.0)

Generally this command is going to load the highest version registered in the GAC on your machine. In the screenshot you may see the version is "13.0.0"; this is from the SQL Server Management Studio preview (July 2015) that is installed on my machine. Now, with PowerShell things change over time, and using LoadWithPartialName is actually the version 1 method of loading SMO. This method is no longer supported, but still works for now. In PowerShell 2.0 a cmdlet was added to do this for you, called Add-Type. If you were to just type in Add-Type ‘Microsoft.SqlServer.Smo’ when you have multiple versions, you are going to get an error similar to this:

(screenshot: Add-Type error when multiple SMO versions are installed)

In this situation you have to specify the assembly you want to load, so there is a bit more to doing this with SMO. You can load an assembly by specifying the file itself or by the assembly name along with 4 bits of information:

  1. Name
  2. Version
  3. Culture
  4. PublicKeyToken

To date, Microsoft always uses the same Culture and PublicKeyToken on almost all of their assemblies that come out of Redmond. So the only thing lacking is the version, which is going to be in the format of a 4-part version number, 0.0.0.0. If you have worked with SQL Server and you are familiar with the build numbers, you simply need to know that “10” is SQL Server 2008, “11” is SQL Server 2012, “12” is SQL Server 2014, and “13” is going to be SQL Server 2016. So, if I want to load the SQL Server 2012 SMO into my session I simply use this command:

Add-Type -AssemblyName "Microsoft.SqlServer.Smo, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"
The first connection…

To connect to a single instance of SQL Server with Windows Authentication you can use the following:

$srvObject = New-Object Microsoft.SqlServer.Management.Smo.Server "MyServer"

Once you hit enter, it will make a connection to your instance and then the variable $srvObject will contain properties and methods that you can use to manipulate the server-level objects of your instance. If you recall from the previous pillars in this series, this is where Get-Member comes in real handy for exploring. As an example let’s say you wanted to get similar information to what SELECT @@VERSION returns in T-SQL. You simply need to know the properties that hold this information and pipe the object to select:

 
$srvObject | select Product, VersionString, Edition, OSVersion 

In PowerShell it is good to start out with the mindset "if I write it for one server, I might as well write it to handle multiple". What I mean by this is that you develop a script into a tool. If I wanted to turn the above bit of code into something I can reuse, and run against one instance or 50 instances, it just takes a bit of work and you are there before you know it:

function Get-SqlVersion {
 [cmdletbinding()]
 param (
 [string[]]$server
 )
 
 $allServers = @()
 $props = @{ServerName="";Product="";Version="";Edition="";OSVersion=""}
 foreach ($s in $server) {
 $srvObject = New-Object Microsoft.SqlServer.Management.Smo.Server $s

 $cserver = New-Object psobject -Property $props
 $cserver.ServerName = $s
 $cserver.Product = $srvObject.Product
 $cserver.Version = $srvObject.VersionString
 $cserver.Edition = $srvObject.Edition
 $cserver.OSversion = $srvObject.OSVersion
 $allServers += $cserver
 }
 
 $allServers
}

Now, don't let this scare you; it looks more complicated than it is. You could just put two lines inside the foreach loop that create your server object and then select the properties, and you would be done. It is best, though, when you start to write functions, that the output of your function is an object. So that is the only additional step I take, using New-Object psobject to create a PowerShell object with the properties ServerName, Product, Version, Edition, and OSVersion. In the event you expand on this function in the future and want to pipe this output to another cmdlet or a custom bit of code, it will be a more formal object type for you to work against.
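
As a quick illustration of that benefit, the objects returned by Get-SqlVersion can be piped straight into standard cmdlets; the server names below are placeholders and assume the function above is loaded:

# sort and display the results, or dump them to a CSV for reporting
Get-SqlVersion -server "server1","server2" | Sort-Object Version | Format-Table -AutoSize
Get-SqlVersion -server "server1","server2" | Export-Csv -Path .\SqlVersions.csv -NoTypeInformation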

Golden Nugget

One of the things I got annoyed with fairly quickly when troubleshooting an instance of SQL Server was having to search through the error log(s). You could be dealing with the default of 6 logs for an instance or up to 99 of them. Now there is some T-SQL code out there of people iterating through each log for you, but I just prefer to use PowerShell. I published this code on my personal blog back in December of 2014. You can find the write-up and code here: Search-SqlErrorLog. It will be good practice for you to try and understand it on your own, but I include help information just in case.

This is one of the few times I wrote a function that only works with one server at a time. You can do some one-liner tricks with the pipeline to easily call it for multiple servers:

"server1","server2" | foreach {Search-SqlErrorLog -server $_ -all -value "^backup"}

The output of this function provides the number of the log it was found in, the date, the process (if noted in the log), and the text found matching the value you provided (which accepts regular expressions; the "^" means the start of the string):

(screenshot: sample Search-SqlErrorLog output)

The End

I hope you learned something new in this series on PowerShell, and good scripting to you all.

The post Pillars of PowerShell: SQL Server – Part 2 appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Migration of Oracle Database to Amazon RDS using Golden Gate

Pythian Group - Fri, 2015-08-28 14:15

Amazon RDS is a web service used to manage databases, like Oracle, in the cloud. Small- and medium-sized enterprises with databases of normal load, volume, and SLA, can certainly leverage the ease and cost efficiency Amazon RDS offers.

Two methods are widely used to migrate databases with minimal downtime: Oracle Data Guard and Oracle GoldenGate. AWS RDS doesn't support Data Guard, but luckily it does support Oracle GoldenGate, albeit with some version constraints.

The following components are involved when migrating a database from on-premises to AWS RDS:

— Source database on premises
— Oracle GoldenGate Hub on EC2 instance
— Target database on AWS RDS

Now there could be different topologies for the above 3 components, but we are just using this topology for simplicity. For details on this topology, refer to this very fine and simple Appendix: Using Oracle GoldenGate with Amazon RDS.

Generally and roughly, the steps used to migrate an on-premises Oracle database to AWS RDS could be as follows:

— Create target database targetdb in AWS RDS with same parameters as that of the source database sourcedb.

— Create same tablespaces on targetdb in AWS RDS as they exist in source database sourcedb.

— Create same non default users on targetdb in AWS RDS as they exist in source database sourcedb.

— Create same non default roles on targetdb in AWS RDS as they exist in source database sourcedb and assign these roles to users on targetdb.

— Export data/objects from the sourcedb database as of a specific SCN, from the non-default schemas (a hedged Data Pump sketch follows this list)

— Import data/objects into targetdb database

— Configure the GoldenGate extract process on sourcedb; for configuration, see this

— Configure the GoldenGate replicat processes on targetdb; for configuration, see this

— Set up the Oracle GoldenGate (GG) Hub on EC2; for configuration, see this

— Start GG extract process on sourcedb

— Start the GG replicat process on targetdb from that SCN onwards, until it catches up with all the changes generated on the sourcedb database during the export/import window.

— Then plan the cut-off time for applications to switch to the new AWS RDS database, after stopping the replicat process at targetdb.

— Cleanup of sourcedb.
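
To make the export step above a little more concrete, here is a hedged sketch of a consistent export pinned to an SCN (the schema names, directory, file names, and the SCN itself are placeholders; adjust them to your environment). The corresponding import into RDS is commonly done with impdp over a database link (NETWORK_LINK), since you have no direct file system access on the RDS host.

# consistent export of the application schemas as of a known SCN
expdp system@sourcedb schemas=APP1,APP2 directory=DATA_PUMP_DIR \
      dumpfile=app_%U.dmp logfile=app_exp.log flashback_scn=1234567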

These are just the skeleton steps and need refining and proper planning. It's always good to thoroughly test such action plans first. But as you can see, Oracle GoldenGate is a viable tool to migrate databases to AWS RDS. Pythian has a full range of skills, experience, and capabilities to oversee such migrations, as it's our daily routine to use GoldenGate for migrations. And yes, even if AWS RDS is a cloud service, you still need a DBA :)

The post Migration of Oracle Database to Amazon RDS using Golden Gate appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Three Hidden Azure SQL Database Gotchas

Pythian Group - Fri, 2015-08-28 13:35

Azure SQL Database is Microsoft’s Database as a Service (DBaaS) platform offering. It allows end users to leverage the power of SQL Server in the cloud without the expense and complexity of building a private infrastructure. Additionally, this offering simplifies database maintenance tasks while providing seamless high availability and disaster recovery capabilities.

Although DBaaS offerings are still crawling out of their infancy, with the correct planning and use cases, implementing an Azure SQL Database solution can be a relatively straightforward process. However, as this platform continues to mature, you can expect to encounter some "ghosts in the machine". Hopefully this post will allow you to avoid some of these unexpected behaviors.

  1. What’s in a name?

Azure SQL Servers all share the same public domain, database.windows.net and access is controlled through IP white-lists and user credentials. Until recently, Azure SQL Database dynamically allocated server names comprised of long random strings for security purposes and because each Azure server name must be unique globally. However, recently Microsoft provided the ability to allocate specific server names specified by the end user, i.e. MyServerName.database.windows.net.

This feature is a more than a welcome addition, particularly for organizations who wish to pre-configure connection strings for cloud implementations.

The hidden gotcha resides in the implementation of this feature. Once you create a server with a user defined name, the Azure cloud reserves that name for you within the Azure fabric. If for any reason you remove the server you will be unable to recreate the server using the same name for at least 5 days. When you attempt to recreate the server, you will receive the message “Specified server name is already used” as depicted below:

(screenshot: "Specified server name is already used" error message)

Microsoft is aware of this limitation, however, at this time, the only way to correct the situation is to contact Microsoft Support and have them remove the Azure fabric metadata manually.

Additionally, it should be noted that you can only specify a specific Azure SQL Database Server name in the preview portal. This feature is not available in the standard portal or via the New-AzureSqlDatabaseServer Cmdlet in PowerShell.
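
For contrast, the classic service management cmdlet mentioned above never lets you pick the name; it returns a server with a generated one. A minimal sketch (location and credentials are placeholders):

# the classic cmdlet allocates a random server name for you
New-AzureSqlDatabaseServer -Location "West US" -AdministratorLogin "dbadmin" -AdministratorLoginPassword "Str0ng!Passw0rd"

If you need a specific server name, the preview portal is currently the only route.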

2. You can change the performance tier at any time, unless you can’t.

One of the fantastic benefits of leveraging Azure SQL Database is the ability to switch service tiers at any time, without service disruption in order to leverage pay per minute costing efficiencies.

Unfortunately, another hidden gotcha may rear its ugly head during the switching process. Organizations that run BCP processes against an Azure SQL instance need to be wary when performing a service level switch. BCP operations often simply hang when switching between service levels. The only resolution for this issue is to terminate the process and re-initiate it once the tier switch has completed.
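
For reference, the tier switch itself can be requested in T-SQL along these lines (the database name, edition, and service objective are placeholders; this is typically run while connected to the logical server's master database). A long-running BCP load straddling such a switch is the scenario that tends to hang.

ALTER DATABASE [MyAzureDb] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2');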

3. I know you’re there, but I can’t see you.

Just like all cloud offerings, Azure SQL Database continues to mature and improve. However, you need to be prepared for some management inconsistencies. The preview portal is aptly named, and although some functions are only available within the preview portal, you may need to frequently revert to the standard portal for a more consistent experience.

As an example, I have a client who switched databases between the standard and premium tiers and vice versa. These databases no longer display in the preview portal at all. However, they do appear correctly in the standard portal, as shown in the CIA-level redacted screen captures below.

(redacted screenshots: the databases appear in the standard portal but not in the preview portal)

 

The post Three Hidden Azure SQL Database Gotchas appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Trust and confidence from Pythian

Pythian Group - Fri, 2015-08-28 13:25

Recently I "inherited" some new responsibilities at work. It's not the first time during the 11 or so years, out of the last 16, that I have spent at Pythian. Throughout my employ at Pythian, I have been continually given new titles based on the new roles I have taken on. For me, besides the enjoyment I have been lucky to have at Pythian, this trust and confidence are two of the biggest contributors to one's longevity with a company.

For Pythian and me, it all started one spring afternoon in about 1998. Paul and Steve had been doing the Pythian-thing for a year or more, and were looking for assistance getting “off-the-ground” so to speak. That endeavour was part of the reason for our new association and it’s been a magic carpet ride since. I did leave at one point for almost 6 years, but returned in early 2011. Between 1998 and 2011, the size of the company changed, but it was still the same old company.

I now manage the day-to-day operations of the consulting group and take pride in the work I do. Touché all you people out there in Pythian-land.

The post Trust and confidence from Pythian appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Creating an Oracle Database Cloud Service

Pythian Group - Fri, 2015-08-28 13:10

Back in late June of 2015, Larry Ellison launched several public cloud services, and one of those services was the public DBaaS. Today, I had the opportunity to try out this new service. This blog post will examine how to create it and how to connect to it with SQLcl. As with any cloud service, it all happens in the background, saving you from tedious configuration steps before you can start using your service.


In my case, it took about 30 mins from when I clicked on create service to start using my database.

So the first thing that you have to do, obviously, is access the Oracle Cloud My Services application.  If you do not currently have access, speak with your sales rep or cloud administrator, but remember that this application is not free. Once you have access, click on the Oracle Database Cloud Service link and the following page will come up. Click on "Create Service":

Once you have done that, we need to choose the type of service we will request and the billing frequency. As I have talked about in previous posts, it all depends on your business needs and abilities. The difference between choosing a "Cloud Service" and a "Cloud Service – Virtual Image" is that with the first option, the database and the database instance are created for you, whereas with the "Virtual Image" you will need to create them yourself, so choose carefully. The first option also includes the cloud patching option, whereas with the "Virtual Image" you have to do this yourself.

As of the writing of this post, Oracle offers two database versions – 11.2.0.4 and 12.1.0.2. I chose the latter.


 

In the Edition section, we choose the type of service we will get by selecting the Cloud Software Edition. Unlike the previous step, here we choose the bells and whistles that you will be licensed to use in this database. I won't include the differences between the two here, but you can view them on cloud.oracle.com in the PaaS section, under Database. In my case, I just chose the regular Enterprise Edition:

In the details section, we can set the characteristics of the database service. It is important to select the "Compute Shape" correctly, as this is critical to your usage billing. It is also good to know that one OCPU (Oracle CPU) is equivalent to a 3.0 GHz 2012 Intel Xeon with Hyper-Threading enabled. You will also have to add a public SSH key to access your compute node. You can learn how here: how to create one. This is also where you set the usable storage, your system or administrator password for the database, the name of the SID, the version (in this case, version 12.1.0.2), and the name of the PDB. Last, but not least, you will choose your backup destination. In my case, I just chose a local destination, but you can choose the Oracle Database Backup Service if you have one.
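
If you still need to generate that SSH key pair, a standard ssh-keygen call is enough (the file name is arbitrary); the contents of the resulting .pub file are what you paste into the form:

ssh-keygen -t rsa -b 2048 -f ~/.ssh/oracle_cloud_key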

 

 

Last, but not least, you will get a confirmation of the service you are about to create. I didn’t copy this particular screenshot when I created it, but here is a similar one, so you get the gist.

 

Once you click on create, you can select the service and see the details of the creation process, as well as other information like the Public IP, Port, etc.

Once the DB and VM are allocated, you need to go back to the Oracle Cloud My Services application  and go to the Oracle Compute Cloud Service console. This is to enable the security rule that will allow us to connect to port 1521 for this DB.

 


In the page that comes up, go to the Network section, and you will see a set of Security Rules, which you will find disabled.

In my case, I enabled the “dbaas/test-orcl/db/ora_p2_dblistener” rule.


In this particular case – and I want to emphasize this – I am not concerned with security, so I also enabled the Security List for Inbound/Outbound Policy traffic.


 

Once I had done this, I was ready to connect to my DB via SQLcl, just like I would connect to any other DB:

Renes-iMac:bin Rene$ ./sql system@***.***.****.****:1521:ORCL

SQLcl: Release 4.2.0.15.177.0246 RC on Fri Aug 21 11:41:42 2015

Copyright (c) 1982, 2015, Oracle. All rights reserved.


Password? (**********?) ************
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Oracle Label Security option 

SQL> select name from v$database;


NAME 
---------
ORCL 

SQL> set lines 200 pages 9999


SQL> COLUMN PDB_NAME FORMAT A15

SQL> 
SQL> SELECT PDB_ID, PDB_NAME, STATUS FROM CDB_PDBS ORDER BY PDB_ID;


 PDB_ID PDB_NAME STATUS 
---------- --------------- ---------
 2 PDB$SEED NORMAL 
 3 PDB1 NORMAL 

SQL> alter session set container=PDB1;


Session altered.

Conclusion

As you can see, it is quite easy to request a database service and start using it. You will have to build your case for using the public cloud, but once you do, you can see that using your database is no different whether it is on-premises or a cloud service.

Note: This was originally published on rene-ace.com

The post Creating an Oracle Database Cloud Service appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Log Buffer #438: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-08-28 12:08

This Log Buffer Edition covers Oracle, MySQL, and SQL Server blog posts from the last week.

Oracle:

Integrating Telstra Public SMS API into Bluemix

Adaptive Query Optimization in Oracle 12c : Ongoing Updates

First flight into the Oracle Mobile Cloud Service

Oracle 12C Problem with datapatch. Part 2, the “fix”

oracle applications r12 auto start on linux

SQL Server:

Email Formatted HTML Table with T-SQL

SQL Server 2016 – Introduction to Stretch Database

Soundex – Experiments with SQLCLR Part 3

An Introduction to Real-Time Communication with SignalR

Strange Filtered Index Problem

MySQL:

Announcing Galera Cluster 5.5.42 and 5.6.25 with Galera 3.12

doing nothing on modern CPUs

Single-threaded linkbench performance for MySQL 5.7, 5.6, WebScale and MyRocks

Identifying Insecure Connections

MyOraDump, Oracle dump utility, version 1.2

The post Log Buffer #438: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Log Buffer #437: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-08-28 12:07

This Log Buffer Edition goes out deep into the vistas of database world and brings out few of the good ones published during the week from Oracle, SQL Server, and MySQL.

Oracle:

Overriding Default Context-Sensitive Action Enablement

This is an alternative to if… then… else… elsif… end if when you want to use conditional statements in PL/SQL.

Achieving SAML interoperability with OAM OAuth Server

Release of BP02 for Oracle Identity Manager 11.1.2.3

IT Business Edge: Oracle Ties Mobile Security to Identity and Access Management

SQL Server:

How to render PDF documents using SQL CLR. Also a good introduction on creating SQL CLR functions.

What is DNX?

SQL Server Performance dashboard reports

Using Microsoft DiskSpd to Test Your Storage Subsystem

Connect to Salesforce Data as a Linked Server

MySQL:

Optimizing PXC Xtrabackup State Snapshot Transfer

Adding your own collation to MySQL

Monitoring your Amazon Aurora Databases using MONyog

How much could you benefit from MySQL 5.6 parallel replication?

MySQL checksum

The post Log Buffer #437: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Creating User Schema Table and Projections in Vertica

Pakistan's First Oracle Blog - Fri, 2015-08-28 01:25
Vertica is an exciting database with some really nifty features. Projections are a groundbreaking feature of Vertica that dramatically improves query performance and, through compression, space usage.



The following test commands are from an impromptu session in which a user is created, a schema is created, and that user is authorized on the schema. A table is then created with a default superprojection, a projection is added, and we observe its usage.

Create a new Vertica database user, create a schema, and authorize that user on the schema. Then create a 4-column table and insert data.

select user_name from v_catalog.users;

vtest=> create user mytest identified by 'user123';
CREATE USER
vtest=>

vtest=> \du
      List of users
 User name | Is Superuser
-----------+--------------
 dbadmin   | t
 mytest    | f
(2 rows)

vtest=> \dn
         List of schemas
     Name     |  Owner  | Comment
--------------+---------+---------
 v_internal   | dbadmin |
 v_catalog    | dbadmin |
 v_monitor    | dbadmin |
 public       | dbadmin |
 TxtIndex     | dbadmin |
 store        | dbadmin |
 online_sales | dbadmin |
(7 rows)


vtest=> \q
[dbadmin@vtest1 root]$ /opt/vertica/bin/vsql -U mytest -w user123 -h 0.0.0.0 -p 5433 -d vtest
Welcome to vsql, the Vertica Analytic Database interactive terminal.

Type:  \h or \? for help with vsql commands
       \g or terminate with semicolon to execute query
       \q to quit


vtest=> create table testtab (col1 integer,col2 integer, col3 varchar2(78), col4 varchar2(90));
ROLLBACK 4367:  Permission denied for schema public

[dbadmin@vtest1 root]$ /opt/vertica/bin/vsql -U dbadmin -w vtest -h 0.0.0.0 -p 5433 -d vtest
Welcome to vsql, the Vertica Analytic Database interactive terminal.

Type:  \h or \? for help with vsql commands
       \g or terminate with semicolon to execute query
       \q to quit

vtest=> \du
      List of users
 User name | Is Superuser
-----------+--------------
 dbadmin   | t
 mytest    | f
(2 rows)

vtest=> create schema mytest authorization mytest;
CREATE SCHEMA
vtest=> select current_user();
 current_user
--------------
 dbadmin
(1 row)

vtest=>

vtest=> \q
[dbadmin@vtest1 root]$ /opt/vertica/bin/vsql -U mytest -w user123 -h 0.0.0.0 -p 5433 -d vtest
Welcome to vsql, the Vertica Analytic Database interactive terminal.

Type:  \h or \? for help with vsql commands
       \g or terminate with semicolon to execute query
       \q to quit

vtest=> create table testtab (col1 integer,col2 integer, col3 varchar2(78), col4 varchar2(90));
CREATE TABLE
vtest=> select current_user();
 current_user
--------------
 mytest
(1 row)

vtest=>

vtest=> \dt
               List of tables
 Schema |  Name   | Kind  | Owner  | Comment
--------+---------+-------+--------+---------
 mytest | testtab | table | mytest |
(1 row)

vtest=> insert into testtab values (1,2,'test1','test2');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (2,2,'test2','test3');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (3,2,'test2','test3');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (4,2,'test4','tesrt3');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (4,2,'test4','tesrt3');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (4,2,'test4','tesrt3');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (4,2,'test4','tesrt3');
 OUTPUT
--------
      1
(1 row)

vtest=> commit;
COMMIT
vtest=>


Create a projection on 2 columns.

Superprojection exists already:

vtest=> select anchor_table_name,projection_name,is_super_projection from projections;
 anchor_table_name | projection_name | is_super_projection
-------------------+-----------------+---------------------
 testtab           | testtab_super   | t
(1 row)

vtest=>


vtest=> \d testtab
                                    List of Fields by Tables
 Schema |  Table  | Column |    Type     | Size | Default | Not Null | Primary Key | Foreign Key
--------+---------+--------+-------------+------+---------+----------+-------------+-------------
 mytest | testtab | col1   | int         |    8 |         | f        | f           |
 mytest | testtab | col2   | int         |    8 |         | f        | f           |
 mytest | testtab | col3   | varchar(78) |   78 |         | f        | f           |
 mytest | testtab | col4   | varchar(90) |   90 |         | f        | f           |
(4 rows)

vtest=>
vtest=> create projection ptest (col1,col2) as select col1,col2 from testtab;
WARNING 4468:  Projection is not available for query processing. Execute the select start_refresh() function to copy data into this projection.
          The projection must have a sufficient number of buddy projections and all nodes must be up before starting a refresh
CREATE PROJECTION
vtest=>


vtest=> select anchor_table_name,projection_name,is_super_projection from projections;
 anchor_table_name | projection_name | is_super_projection
-------------------+-----------------+---------------------
 testtab           | testtab_super   | t
 testtab           | ptest           | f
(2 rows)


vtest=> select * from ptest;
ERROR 3586:  Insufficient projections to answer query
DETAIL:  No projections eligible to answer query
HINT:  Projection ptest not used in the plan because the projection is not up to date.
vtest=>

vtest=> select start_refresh();
             start_refresh
----------------------------------------
 Starting refresh background process.

(1 row)

vtest=> select * from ptest;
 col1 | col2
------+------
    1 |    2
    2 |    2
    3 |    2
    4 |    2
    4 |    2
    4 |    2
    4 |    2
(7 rows)

vtest=>


 projection_basename | USED/UNUSED |           last_used
---------------------+-------------+-------------------------------
 testtab             | UNUSED      | 1970-01-01 00:00:00-05
 ptest               | USED        | 2015-08-28 07:14:49.877814-04
(2 rows)

vtest=> select * from testtab;
 col1 | col2 | col3  |  col4
------+------+-------+--------
    1 |    2 | test1 | test2
    3 |    2 | test2 | test3
    2 |    2 | test2 | test3
    4 |    2 | test4 | tesrt3
    4 |    2 | test4 | tesrt3
    4 |    2 | test4 | tesrt3
    4 |    2 | test4 | tesrt3
(7 rows)

projection_basename | USED/UNUSED |           last_used
---------------------+-------------+-------------------------------
 ptest               | USED        | 2015-08-28 07:14:49.877814-04
 testtab             | USED        | 2015-08-28 07:16:10.155434-04
(2 rows)
Categories: DBA Blogs

Reminder: Great free computer science and Python programming class starts Wednesday

Bobby Durrett's DBA Blog - Mon, 2015-08-24 13:01

I mentioned this class earlier in a blog post but I wanted to remind people who read this blog that the class is starting again on Wednesday.  Here is the URL for the class: link

The class is completely free and taught at a very high level of quality.

It teaches computer science concepts that apply in any programming language but also teaches Python programming.

It is valuable information in an increasingly computer-oriented world and economy, and the class is free, which is remarkable given its quality.

Here is the class name:

MITx: 6.00.1x Introduction to Computer Science and Programming Using Python

Bobby

Categories: DBA Blogs

Issue with Perl in $ORACLE_HOME during installs

DBASolved - Mon, 2015-08-24 01:14

I’ve been doing a few more Enterprise Manager installs lately. At the same time, I’ve been working on Data Integration items such as GoldenGate and ODI.  What these products have in common is that they require an Oracle Database for a repository.  Needless to say, I’ve been installing a lot of 12.1.0.2 databases in test and production environments.  The one thing that has been consistent is an issue I keep seeing with the Perl that is packaged with the Grid Infrastructure and/or Database.

Tip: This may not be happening to everyone, and I may have a bad set of binaries.  When I discussed this with a co-worker, the md5sums of my binaries matched his, so I couldn’t say whether the issue was bad binaries or something else.

As I was doing installs of Grid Infrastructure or Database on Oracle Enterprise Linux 6.6, I would get the following issue when trying to run the root.sh scripts from either the OUI or from the command line.

[root@rac1 grid]# ./root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER=oracle
    ORACLE_HOME=/u01/app/grid/12.1.0/grid

Copying dbhome to /usr/local/bin...
Copying oraenv to /usr/local/bin...
Copying coraenv to /usr/local/bin...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

/u01/app/grid/12.1.0/grid/crs/config/rootconfig.sh: line 131: 20862 Segmentation fault  (core dumped) $ROOTSCRIPT $ROOTSCRIPT_ARGS
The command '/u01/app/grid/12.1.0/grid/perl/bin/perl -I/u01/app/grid/12.1.0/grid/perl/lib -I/u01/app/grid/12.1.0/grid/crs/install /u01/app/grid/12.1.0/grid/crs/install/rootcrs.pl ' execution failed

You will notice that the execution failed with a “Segmentation fault”. Looking at the command, I noticed that it runs perl from the $ORACLE_HOME/perl/bin directory, not the operating system’s perl. A “which perl” shows that the operating system perl comes from /usr/bin/perl, which is not the one the root.sh script uses, and running “perl -v” from the command line reports version 5.10 and works fine.

Now that it was established that the operating system perl is fine, I took a look at the perl in $ORACLE_HOME/perl/bin. When I navigated to that directory and executed “perl -v”, I was met with the same “Segmentation fault”. Knowing that the problem is within the Oracle binaries, how can this be resolved?

To resolve the “Segmentation fault” issue, I had to download the Perl source and recompile the perl binaries that Oracle uses under the $ORACLE_HOME path.

$ cd ~/Downloads
$ wget http://www.cpan.org/src/5.0/perl-5.14.4.tar.gz
$ tar -xzf perl-5.14.4.tar.gz
$ cd perl-5.14.4
$ ./Configure -des -Dprefix=$GI_HOME/perl
$ make
$ make test
$ make install

With the binaries recompiled, I was now able to run “perl -v” from the $ORACLE_HOME and get a successful result.

[oracle@rac1 ~]$ cd /u01/app/grid/12.1.0.2/grid
[oracle@rac1 grid]$ cd perl/bin
[oracle@rac1 bin]$ ./perl -v

This is perl 5, version 14, subversion 4 (v5.14.4) built for x86_64-linux

Copyright 1987-2013, Larry Wall

Perl may be copied only under the terms of either the Artistic License or the
GNU General Public License, which may be found in the Perl 5 source kit.

Complete documentation for Perl, including FAQ lists, should be found on
this system using "man perl" or "perldoc perl".  If you have access to the
Internet, point your browser at http://www.perl.org/, the Perl Home Page.

This process can be done while the OUI is still running, and the step that failed can then be retried.  If you closed out the OUI, the root.sh scripts will now run successfully from the $ORACLE_HOME directories.

Enjoy!


Filed under: Database, General
Categories: DBA Blogs

Presenting in Perth on 9 September and Adelaide on 11 September (Stage)

Richard Foote - Sat, 2015-08-22 05:54
For those of you lucky enough to live on the western half of Australia, I’ll be presenting at a couple of events in both Perth and Adelaide in the coming weeks. On Wednesday, 9th September 2015, I’ll be presenting on Oracle Database 12c New Features For DBAs (and Developers) at a “Let’s Talk Oracle” event […]
Categories: DBA Blogs

Are you ready to be a private cloud service provider?

Pythian Group - Thu, 2015-08-20 20:35

When defining what a cloud service is, we need to know that it is not a technology per se, but an architectural and operational paradigm. It is a self-service computing environment offering the ability to create, consume, and pay for services. In this architecture, computing resources are elastically supplied from a shared pool and charged based on metered use, and service catalogs provide a menu of options and service levels.

According to IDC, “total cloud IT infrastructure spending (server, disk storage, and ethernet switch) will grow by 21% year over year to $32 billion in 2015, accounting for approximately 33% of all IT infrastructure spending, which will be up from about 28% in 2014. Private cloud IT infrastructure spending will grow by 16% year over year to $12 billion, while public cloud IT infrastructure spending will grow by 25% in 2015 to $21 billion.”

This means that the growth of this architecture (private, public, or hybrid) will not stop in the foreseeable future, so we first need to understand what drives it and how to translate our current architecture into a 3rd platform architecture.

Source: Image from IDC 3rd Platform Study

The principles of a cloud architecture support the following necessary capabilities:

  • Resource pooling – The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand.
  • Rapid elasticity – Capabilities can be elastically provisioned and released, so services can be adjusted rapidly to suit each client’s needs without any changes being apparent to the client or end user.
  • On-demand self-service – Provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider
  • Measured service – Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer
  • Broad network access – Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms
Business Drivers

Cloud will not be a true fit for everybody or for every case. We need to understand and determine the business drivers before we implement a cloud architecture.

  1. Increase our agility within our enterprise by providing:
    1. The ability to remove certain human procedures and have the end user be a self-service consumer
    2. A well-defined service catalog
    3. Capability to adapt to workload changes by provisioning or deprovisioning system resources
  2. Reduce enterprise costs by:
    1. Using shared system resources for our different applications and internal business divisions
    2. Being capable of determining the actual usage of system resources to show the benefit of our architecture
    3. Being capable of automating mundane and routine tasks
  3. Reduce enterprise risks by:
    1. Having greater control of the resources we have and how they are being used
    2. Having more unified security across our business
    3. Providing different levels of high availability to our enterprise
Service Catalog

The most critical part when defining any type of service is defining what it is that we are going to provide. Take McDonald’s, for example. When we get to the counter, there is a well-defined catalog of the products we can consume in that establishment: a certain type of hamburger and junk food. To put it more clearly, we can’t go into McDonald’s and order a pizza or Italian food, as that is not in their business or service catalog.

When defining our business enterprise service catalog, we need to define the What: what type of service we want to provide, what service levels we want to provide, what policies we are going to apply to the service, and what our capabilities are to provide it.

The business service catalog will translate into a technical enterprise catalog, defining every detail of how we will provide our business services. Here we need to define the How. How are we going to deploy the service? How are we going to provide the service levels? How are we going to apply the business policies and how are we going to manage our services?

As mentioned, this is not a technology but an architecture, and like any architecture, we first must understand where we are in order to know where we are going. So, within our current organization, we first need to capture our existing assets, skills, and processes so that we can then validate the future state of our architecture.

Meter, Charge, and Optimize

Business consumers want to know what they are consuming and what it costs, even if they don’t actually want to pay for the service. Additionally, from an operational perspective, as different tenants start sharing the same piece of platform or infrastructure, there needs to be accountability on the usage, or else resources may be over-allocated. To mitigate this, we often meter the usage and optionally chargeback [or show back] the tenants. Though an IT organization may not actually charge back its LOBs, this provides a transparent mechanism to budget resources and optimize the cloud platform on an ongoing basis.

Conclusion

These are just a few points to be aware of if you want to become a private cloud provider, but this is also helpful for any cloud architecture, as we need to understand what drives the change, what it is we are going to provide, and how we are going to deliver and measure the services that we are providing.

Note– This was originally published on rene-ace.com

The post Are you ready to be a private cloud service provider? appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Git for Beginners

Pythian Group - Thu, 2015-08-20 20:04
git, simplified

Perhaps you’ve come across a great cache of publicly available SQL scripts that would be very useful in monitoring your databases, and these scripts are hosted on github.  Getting those scripts is as simple as clicking the Download button.

What if, however, you wish to contribute to the script library?

Or perhaps you would like to collaborate with coworkers on a project and want to host the files on github.

How do you get the files to your local server so that changes can be saved and pushed to the master repo?

Github is often the answer for that.

Some time ago github was probably considered by most IT folks as a tool for developers.  That has changed, as now git and github are popularly used to manage changes and allow collaboration on many kinds of projects that require file management.

If you are reading this blog, you are probably a DBA.  What better way to manage SQL scripts and allow others to contribute than with github?

Let’s simplify the use of git and make it usable for casual users: DBAs who want to use a SQL repo without having to relearn git every time they need to access it.

The methods shown here are not the same ones that would be used by a team of developers. Typically developers would create a fork of a project, clone that fork, modify files, and then issue pull requests to the main repo owner. There would also be branches to the development tree, merging, etc.

For this demo, there will still be a need to fork your own copy of the repo, but that is as far as it will go at this time.

Read more about creating a fork: https://help.github.com/articles/fork-a-repo/

In the spirit of keeping this simple, there will be no branching in this demo; I’ll only show the basics required to contribute to a project.

With simplicity as a goal, the following steps are to be performed in this demo:

  • Create a copy (fork) of the main repo in github
  • Clone the repo to a work environment (my linux workstation)
  • Add a file to the local repo
  • Commit the changes and push to my forked repo on github
  • Issue a ‘pull request’ asking the main repo admin to include my changes

So while it will be necessary to create a fork of the project, we won’t be dealing with branches off the mainline.

 Assumptions:

– you already have a github account

– git is installed on your laptop, server, whatever.

Git Repos

Two users will be used for this demo: jkstill and pytest.

The following repos will be used.

Main Repo: https://github.com/jkstill/git-demo

Developer’s (you) repo: https://github.com/pytest/git-demo

The Main Repo is public, so you can run this demo using your own account if you like.

Fork the Repo

The following steps were performed by the pytest user on github.

Login to https://github.com/ using a browser.

Navigate to https://github.com/jkstill/git-demo

Click on the ‘Fork’ icon and follow any instructions; this should only take a few seconds.

After forking this repo as pytest, my browser was now directed to https://github.com/pytest/git-demo

ssh key setup

This only needs to be done once.

The following examples are for github user pytest.

The pytest account will be used to demonstrate the concepts. Later I will explain more about ssh usage as it pertains to github, but for now this is probably sufficient.

create a new ssh key for use with github
   ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa_pytest_github -C 'github'
add key to github account

While logged in to your github account in a browser, find the account settings icon.

The icon for account settings is in upper right corner of browser window.

Navigate to the Add SSH Key section.

account settings -> SSH Keys -> Add SSH Key

The key added will be the public key. So in this case, the contents of ~/.ssh/id_rsa_pytest_github.pub would be pasted in the text box that appears when the Add SSH Key button is pushed.

authenticate to github – the ‘git@github.com’ is required

Make sure to authenticate the key with github.

   ssh -i ~/.ssh/id_rsa_pytest_github -t git@github.com

Here is a successful example:

> ssh -i ~/.ssh/id_rsa_github -t git@github.com

Host key fingerprint is DE:AD:BE:EF:2b:00:2b:36:63:1b:56:4d:eb:df:a6:42

+--[ RSA 2048]----+
|        .        |
|       + .       |
|      . B .      |
|     o * +       |
|    Y * S        |
|   + O o . .     |
|    .   Z . o    |
|       . . t     |
|        . .      |
+-----------------+
PTY allocation request failed on channel 0
Hi pytest! You've successfully authenticated, but GitHub does not provide shell access.
Clone the REPO

Now you are ready to clone the newly forked repo to your workstation. At this point, it is assumed that git is already installed in your development environment. If git is not installed, then you will need to install it.  There are many resources available for whichever platform you are working on; installation will not be covered here.

The following command will clone your forked copy of the repo in the current directory:

> git clone https://github.com/pytest/git-demo
Cloning into 'git-demo'...
remote: Counting objects: 7, done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 7 (delta 0), reused 7 (delta 0), pack-reused 0
Unpacking objects: 100% (7/7), done.
Checking connectivity... done

> cd git-demo
/home/jkstill/github/pytest/git-demo

> ls -la
total 20
drwxr-xr-x 3 jkstill dba 4096 Aug 18 15:45 .
drwxr-xr-x 4 jkstill dba 4096 Aug 18 15:45 ..
drwxr-xr-x 8 jkstill dba 4096 Aug 18 15:45 .git
-rw-r--r-- 1 jkstill dba  113 Aug 18 15:45 .gitignore
-rw-r--r-- 1 jkstill dba   47 Aug 18 15:45 README.md

Note: it is possible to use the ~/.ssh/config file to specify multiple ssh keys for use with git. This is useful when you may be using multiple accounts.

The command I used to do this operation is below as I do have multiple accounts:

  git clone git-as-pytest:pytest/git-demo

You can read more about this in a later section of this article.

Now cd to the new repo:  cd git-demo

There should be two files and a directory as seen in the previous example.

Modify or add a script

Now you can modify a script or add a new script and then commit to your local repo.

In this case, we will add a script fra_config.sql to the local repo.

-- fra_config.sql
-- show location and size of FRA

col fra_location format a30
col fra_size format a16

select fra_location, fra_size from (
   select name, value
   from v$parameter2
   where name like 'db_recovery_file_dest%'
)d
pivot ( max(value) for name in (
      'db_recovery_file_dest' as FRA_LOCATION,
      'db_recovery_file_dest_size' as FRA_SIZE
   )
)
/

Modified files can be seen with git status:

> git status
# On branch master
# Untracked files:
#   (use "git add <file>..." to include in what will be committed)
#
#       fra_config.sql
nothing added to commit but untracked files present (use "git add" to track)

Now add the file to the list of those that should be tracked and check the status again:

> git add fra_config.sql


> git status
# On branch master
# Changes to be committed:
#   (use "git reset HEAD <file>..." to unstage)
#
#       new file:   fra_config.sql
#

As we are happy with the results, it is time to commit to the local repo:

> git commit -m 'Added the new file fra_config.sql'
[master 86eaf7c] Added the new file fra_config.sql
1 file changed, 18 insertions(+)
create mode 100644 fra_config.sql

> git status
# On branch master
# Your branch is ahead of 'origin/master' by 1 commit.
#   (use "git push" to publish your local commits)
#
nothing to commit, working directory clean

Shouldn’t we have put a date in that file? OK, a date and time were added, the changes to the file were displayed, the file was added to the list of those to commit, and the commit made:

> git diff fra_config.sql | cat
diff --git a/fra_config.sql b/fra_config.sql
index 03b98fd..37c58ac 100644
--- a/fra_config.sql
+++ b/fra_config.sql
@@ -1,6 +1,7 @@

-- fra_config.sql
-- show location and size of FRA
+-- jkstill 2015-08-18 16:03:00 PDT

col fra_location format a30
col fra_size format a16

> git add fra_config.sql

> git commit -m 'added timestamp'
[master 83afd35] added timestamp
1 file changed, 1 insertion(+)

> git status
# On branch master
# Your branch is ahead of 'origin/master' by 2 commits.
#   (use "git push" to publish your local commits)
#
nothing to commit, working directory clean

Committing can and should be done frequently, as the commit affects only the local repository.

This makes it possible to see (and retrieve) incremental changes to a file as you work on it.

Once you are satisfied with all changes, push the changes to the repo. Notice that git status knows that 2 commits have been performed locally that are not seen in the master repository.

Configure the Remote

Before pushing to the main repo, there is a little more configuration work to do. While this method is not strictly necessary, it does simplify the use of git.

You will need to edit the file ~/.ssh/config; create it if it does not already exist.

Here’s my example file where a host git-as-pytest has been created. This host will be used to connect to github.

GSSAPIAuthentication no
VisualHostKey=yes

Host git-as-pytest
  HostName github.com
  User git
  IdentityFile /home/jkstill/.ssh/id_rsa_pytest_github
  IdentitiesOnly yes

Now edit the file ./.git/config.  Find the [remote "origin"] section and change the URL as seen in this example.

[core]
  repositoryformatversion = 0
  filemode = true
  bare = false
  logallrefupdates = true
[remote "origin"]
  #url = https://github.com/pytest/git-demo
  url = git-as-pytest:pytest/git-demo.git
  fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
  remote = origin
  merge = refs/heads/master

Now you should be able to push the changes to the master repo:

> git push origin master
Counting objects: 7, done.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 787 bytes | 0 bytes/s, done.
Total 6 (delta 2), reused 0 (delta 0)
To git-as-pytest:pytest/git-demo.git
788e5b1..83afd35  master -> master

The changes to your files can be seen in your repo on github.com

Issue a PULL request

Once you think the file or files are ready to be included in the master repository, you will issue a pull request to the admin of the master repo.

The repo admin can then pull the changes and examine them. Once it has been determined that the changes can be made to the master repo, the admin will merge the changes.

Issuing the pull request

View the repo in your browser, press the ‘pull request’ icon and follow the instructions. This action will cause an email to be sent to the repo admin with a URL to view the pull request. The admin can then examine and test the changes, and merge the pull request (if appropriate) into the mainline.

If the pull request results in your changes being merged, github will send you an email.

After the Pull request has been merged

Now other users can get the updates with the following commands

  git pull
  git status
  git commit

These commands will merge the github repo into your local one: git pull does the fetch and merge, git status shows the result, and a commit is only needed if you have to resolve a merge conflict yourself.

As there is the possibility of overwriting files you are working on, be sure this is the right thing to do.

Now that you have the basics, you can get started.

Please feel free to use  the https://github.com/jkstill/git-demo repo to follow along with the steps shown here.

The post Git for Beginners appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Difference Between Oracle’s Table and Mongo’s Collection

Pythian Group - Thu, 2015-08-20 11:44

Roughly speaking, the notion of ‘Tables’ in Oracle is similar to MongoDB’s ‘Collections’. They are NOT identical though. Before we examine the differences between Oracle’s Table and MongoDB’s Collection, let’s see what Table in Oracle and Collection in MongoDB are.

Table in Oracle:

A table in Oracle is made up of a fixed number of columns for any number of rows. Every row in a table has the same columns.

Collection in MongoDB:

A collection in MongoDB is made up of documents. The concept of Documents is similar to rows in a table, but it’s not identical. A document can have its own unique set of columns. In MongoDB, columns are called fields.

So in MongoDB, fields are defined at the document level (or we can say in Oracle lingo that columns are defined at the row level), whereas in Oracle the columns are defined at the table level.

That is actually the main difference between Oracle’s table and Mongo’s collection, among other subtle differences such as collections being schema-less, whereas a table in Oracle has to belong to some schema.

Example of an Oracle table:

EMP

EMPID   NAME    CITY
1       Smith   Karachi
2       Adam    Lahore
3       Jim     Wah Cantt
4       Ken     Quetta

CREATE TABLE EMP (EMPID NUMBER(5), NAME VARCHAR2(20), CITY VARCHAR2(25));

INSERT INTO EMP VALUES (1,'SMITH','KARACHI');
INSERT INTO EMP VALUES (2,'ADAM','LAHORE');
INSERT INTO EMP VALUES (3,'JIM','WAH CANTT');
INSERT INTO EMP VALUES (4,'KEN','QUETTA');

Select * from EMP;

In the above example, the table is ‘EMP’, with 4 rows. All 4 rows have the same fixed set of columns: EMPID, NAME, and CITY.

Example of a MongoDB Collection:

db.EMP.insert({EMPID: '1', NAME: 'Smith', CITY: 'Karachi'})
db.EMP.insert({EMPID: '2', NAME: 'Adam', CITY: 'Wah Cantt', Designation: 'CTO'})
db.EMP.insert({EMPID: '3', NAME: 'Jim', Designation: 'Technician'})
db.EMP.insert({EMPID: '4', NAME: 'Ken'})

> db.EMP.find()

{ "_id" : ObjectId("55d44757283d7d463aec4cc1"), "EMPID" : "1", "NAME" : "Smith", "CITY" : "Karachi" }
{ "_id" : ObjectId("55d44757283d7d463aec4cc2"), "EMPID" : "2", "NAME" : "Adam", "CITY" : "Wah Cantt", "Designation" : "CTO" }
{ "_id" : ObjectId("55d44757283d7d463aec4cc3"), "EMPID" : "3", "NAME" : "Jim", "Designation" : "Technician" }
{ "_id" : ObjectId("55d44757283d7d463aec4cc4"), "EMPID" : "4", "NAME" : "Ken" }

In the above example, we first inserted 4 documents into the collection ‘EMP’. Notice that the 4 documents have different numbers of fields. The db.EMP.find() command displays these documents.

Hope that helps……

The post Difference Between Oracle’s Table and Mongo’s Collection appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Memory added based on buffer pool advisory did not give desired result

Bobby Durrett's DBA Blog - Wed, 2015-08-19 14:37

Development asked me to look at ways to cut the run time of a series of PeopleSoft payroll batch jobs, so I took an AWR report of the entire 4-hour period.  Based on the waits, the percentage of the elapsed time spent on I/O, and the buffer pool advisory, I chose to double the size of the buffer cache. But the added memory did not improve the run time of the batch jobs. Maybe the affected blocks are only read into memory once, so they would not benefit from caching no matter how big the buffer pool was.

Here is the original run on June 22 with the original memory settings:

Cache Sizes

                   Begin      End
Buffer Cache:      3,328M     3,424M     Std Block Size:  8K
Shared Pool Size:  1,600M     1,520M     Log Buffer:      7,208K

Top 5 Timed Foreground Events

Event                     Waits      Time(s)  Avg wait (ms)  % DB time  Wait Class
db file sequential read   1,910,393   10,251              5      72.03  User I/O
DB CPU                                 2,812                     19.76
log file sync                35,308      398             11       2.80  Commit
resmgr:cpu quantum           31,551       62              2       0.43  Scheduler
db file scattered read        7,499       60              8       0.42  User I/O

Buffer Pool Advisory
  • Only rows with estimated physical reads >0 are displayed
  • ordered by Block Size, Buffers For Estimate
P  Size for  Size    Buffers      Est Phys     Estimated Phys     Est Phys   Est %DBtime
   Est (M)   Factor  (thousands)  Read Factor  Reads (thousands)  Read Time  for Rds
-- --------- ------- ------------ ------------ ------------------ ---------- ------------
D      336     0.10           41         2.71          6,513,502          1   9842530.00
D      672     0.20           83         2.42          5,831,130          1   8737799.00
D    1,008     0.29          124         2.18          5,241,763          1   7783636.00
D    1,344     0.39          166         1.96          4,720,053          1   6939010.00
D    1,680     0.49          207         1.77          4,250,981          1   6179603.00
D    2,016     0.59          248         1.59          3,825,904          1   5491420.00
D    2,352     0.69          290         1.43          3,438,372          1   4864023.00
D    2,688     0.79          331         1.28          3,083,734          1   4289879.00
D    3,024     0.88          373         1.15          2,758,459          1   3763273.00
D    3,360     0.98          414         1.02          2,459,644          1   3279504.00
D    3,424     1.00          422         1.00          2,405,118          1   3191229.00
D    3,696     1.08          455         0.91          2,184,668          1   2834329.00
D    4,032     1.18          497         0.80          1,931,082          1   2423784.00
D    4,368     1.28          538         0.71          1,696,756          1   2044421.00
D    4,704     1.37          579         0.62          1,479,805          1   1693185.00
D    5,040     1.47          621         0.53          1,278,370          1   1367070.00
D    5,376     1.57          662         0.45          1,090,505          1   1062925.00
D    5,712     1.67          704         0.38            914,112          1    777352.00
D    6,048     1.77          745         0.31            746,434          1    505888.00
D    6,384     1.86          786         0.24            580,310          1    236941.00
D    6,720     1.96          828         0.17            414,233          1    149325.00

In the SQL ordered by Elapsed Time report the top batch job SQL was 99.14% I/O

Based on this report it seems that the number of physical reads could be reduced to about 20% of what they were on June 22 by doubling the size of the buffer cache. But, adding the memory did not cut the number of physical reads in any major way.
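In the advisory above, the row at a size factor of about 1.86 (6,384M) shows an estimated physical read factor of 0.24, which is where the "about 20%" estimate comes from. The same figures can also be queried live from V$DB_CACHE_ADVICE instead of from an AWR snapshot; here is a minimal sketch (not part of the original analysis), assuming the DEFAULT pool and an 8K block size:

-- Minimal sketch: live buffer cache advisory for the DEFAULT pool.
-- estd_physical_read_factor corresponds to the "Est Phys Read Factor"
-- column in the AWR Buffer Pool Advisory section shown above.
select size_for_estimate,
       size_factor,
       estd_physical_read_factor,
       estd_physical_reads
from   v$db_cache_advice
where  name = 'DEFAULT'
and    block_size = 8192
order  by size_for_estimate;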

Here is yesterday’s run:

Cache Sizes

                   Begin      End
Buffer Cache:      6,848M     6,816M     Std Block Size:  8K
Shared Pool Size:  3,136M     3,136M     Log Buffer:      16,572K

Top 5 Timed Foreground Events

Event                     Waits      Time(s)  Avg wait (ms)  % DB time  Wait Class
db file sequential read   1,789,852   10,173              6      72.15  User I/O
DB CPU                                 2,970                     21.06
log file sync                37,562      200              5       1.42  Commit
resmgr:cpu quantum           24,996       59              2       0.42  Scheduler
db file scattered read        5,409       54             10       0.38  User I/O

Buffer Pool Advisory
  • Only rows with estimated physical reads >0 are displayed
  • ordered by Block Size, Buffers For Estimate
P  Size for  Size    Buffers      Est Phys     Estimated Phys     Est Phys   Est %DBtime
   Est (M)   Factor  (thousands)  Read Factor  Reads (thousands)  Read Time  for Rds
-- --------- ------- ------------ ------------ ------------------ ---------- ------------
D      672     0.10           83        11.25            516,440          1   1309098.00
D    1,344     0.20          166         5.98            274,660          1    683610.00
D    2,016     0.29          248         4.02            184,712          1    450915.00
D    2,688     0.39          331         2.90            133,104          1    317404.00
D    3,360     0.49          414         2.20            100,860          1    233990.00
D    4,032     0.59          497         1.80             82,768          1    187185.00
D    4,704     0.69          580         1.53             70,445          1    155305.00
D    5,376     0.79          663         1.31             60,345          1    129176.00
D    6,048     0.88          745         1.14             52,208          1    108127.00
D    6,720     0.98          828         1.01             46,477          1     93301.00
D    6,848     1.00          844         1.00             45,921          1     91862.00
D    7,392     1.08          911         0.95             43,572          1     85785.00
D    8,064     1.18          994         0.89             40,789          1     78585.00
D    8,736     1.28        1,077         0.85             38,889          1     73671.00
D    9,408     1.37        1,160         0.81             37,112          1     69073.00
D   10,080     1.47        1,242         0.77             35,490          1     64876.00
D   10,752     1.57        1,325         0.75             34,439          1     62158.00
D   11,424     1.67        1,408         0.73             33,353          1     59347.00
D   12,096     1.77        1,491         0.71             32,524          1     57204.00
D   12,768     1.86        1,574         0.69             31,909          1     55613.00
D   13,440     1.96        1,656         0.68             31,361          1     54194.00

After the memory add the same top batch job SQL was 98.80% I/O.  Some improvement but not nearly as much as I expected based on the buffer pool advisory.

I guess the moral of the story is that the buffer pool advisory does not apply to specific workloads and is only a general guideline.  Maybe this is the same kind of fallacy that you have with buffer cache hit ratios, where certain workloads make the ratio irrelevant.  Here were the hit ratios: before, 98.59%; after, 98.82%.  Basically they are the same.
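For completeness, a hit ratio can also be computed from V$BUFFER_POOL_STATISTICS using the usual formula. This is a minimal sketch, not part of the original post, and note that it covers activity since instance startup rather than the AWR report interval:

-- Minimal sketch: buffer cache hit ratio since instance startup,
-- using 1 - physical_reads / (db_block_gets + consistent_gets).
select name,
       round((1 - physical_reads / (db_block_gets + consistent_gets)) * 100, 2) hit_ratio_pct
from   v$buffer_pool_statistics;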

I just thought I would share this to document a real case of using the buffer pool advisory and having it not produce the expected results.

Bobby

Categories: DBA Blogs

Script to get previous month’s AWR report

Bobby Durrett's DBA Blog - Tue, 2015-08-18 11:58

We keep 6 weeks of history in the AWR on our databases, but I want to capture some information for long-term trending. What I really want to do is capture some metrics and put them in database tables to use for generating reports, but I have not had time to build the scripts to do that.  So instead I built a simple set of scripts to capture an AWR report for the previous month. Since we have 6 weeks of history, if I run the report in the first week of a month, all the days of the previous month should still be in the AWR. I have just finished building this script, so I cannot promise that there is value in keeping monthly AWR reports, but I thought it was worth sharing.  Maybe something in the code will be useful to someone.  Here is the script:

-- Has to be run in the first week of the month so the entire
-- previous month is available. We keep 6 weeks of awr history.

-- setup columns for snapshots

column bsnap1 new_value bsnap1s noprint;
column esnap1 new_value esnap1s noprint;
column filenm new_value filenms noprint;

-- get snap id for first day of previous month

select min(snap_id) bsnap1
from dba_hist_snapshot
where 
extract(month from END_INTERVAL_TIME)=
extract(month from (sysdate-to_number(to_char(sysdate,'DD'))))
and
STARTUP_TIME=
(select max(STARTUP_TIME)
from dba_hist_snapshot
where 
extract(month from END_INTERVAL_TIME)=
extract(month from (sysdate-to_number(to_char(sysdate,'DD')))));

-- get snap id for last day of previous month

select max(snap_id) esnap1
from dba_hist_snapshot
where 
extract(month from END_INTERVAL_TIME)=
extract(month from (sysdate-to_number(to_char(sysdate,'DD'))));

-- get html file name

select 
name||
'_'|| 
to_char(extract(month from 
(sysdate-to_number(to_char(sysdate,'DD')))))||
'_'|| 
to_char(extract(year from 
(sysdate-to_number(to_char(sysdate,'DD')))))||
'.html' filenm
from v$database;

-- get awr report

define report_type='html';
define begin_snap = &bsnap1s;
define end_snap = &esnap1s;
define report_name = '&filenms';

define num_days = 0;

@@$ORACLE_HOME/rdbms/admin/awrrpt.sql

undefine report_type
undefine report_name
undefine begin_snap
undefine end_snap
undefine num_days

If the database bounced during the previous month we get the last set of snapshots after the last bounce.

I am not sure whether this approach will give us any benefits but I think it may help to show how to use queries to pick begin and end snapshots and then run an AWR report.
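The approach also depends on the AWR retention actually being at least six weeks. Here is a minimal sketch, not part of the original script, to confirm the snapshot interval and retention from DBA_HIST_WR_CONTROL:

-- Minimal sketch: confirm AWR snapshot interval and retention
-- (retention should be at least 6 weeks for this monthly approach to work).
select snap_interval, retention
from   dba_hist_wr_control;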

The tricky part of the code is this:

extract(month from (sysdate-to_number(to_char(sysdate,'DD'))))

It just returns the previous month as a number.  It is August now so here is what it returns today:

SQL> select
 2 extract(month from (sysdate-to_number(to_char(sysdate,'DD'))))
 3 prev_month
 4 from dual;

PREV_MONTH
----------
 7

sysdate-to_number(to_char(sysdate,’DD’)) is the last day of the previous month:

SQL> select sysdate-to_number(to_char(sysdate,'DD')) last_day
 2 from dual;

LAST_DAY
---------
31-JUL-15

– Bobby

Categories: DBA Blogs

John King in Cleveland CTOWN for September meeting

Grumpy old DBA - Mon, 2015-08-17 04:50
For our September 18 Friday afternoon quarterly meeting, the Northeast Ohio Oracle Users Group is lucky to have a great speaker, Oracle ACE Director John King, presenting.  It's the usual networking and free lunch beginning at noon at the Rockside Oracle offices, followed by a quick business meeting and presentations starting at 1 pm.

Full details on NEOOUG Sept 2015 presentations

For the November meeting we have Daniel Morgan, woo hoo!
Categories: DBA Blogs