
DBA Blogs

Webcast - Oracle Multitenant Option

On their road to adoption of Oracle Database 12c, many are considering the Multitenant Database Architecture, where a multitenant container database can hold many pluggable databases, enabling...

Categories: DBA Blogs

Windows 7 error “key not valid for use in specified state”

The Oracle Instructor - Sun, 2014-08-31 00:37

When you see that error upon trying to install or upgrade something on your Windows 7 64-bit machine, chances are that it is caused by a Windows Security update that you need to uninstall. There is probably no point in messing around with the registry or the application that you want to upgrade. Instead, remove the Windows update KB2918614 like this:

Open the Control Panel, then click Windows Update.

[Screenshot: Windows 7 Control Panel]

Click Update History and then Installed Updates:

[Screenshot: Update History]

Scroll down to Microsoft Windows and look for KB2918614 (I had already removed it before taking the screenshot):

[Screenshot: Uninstall KB2918614]

Finally, hide that update so you don’t get it installed again later on:

[Screenshot: Hide KB2918614]

I’m using a corporate notebook with automatic Windows security updates coming in from time to time, and I encountered this problem while trying to upgrade VirtualBox to version 4.3.12. It is not a VirtualBox issue, though; other installs or upgrades may fail for the same reason. For me, this was a serious problem, because I rely on virtual machines for many demonstrations. Kudos to the virtualbox.org forums! They helped me resolve the problem within a day. Thank you once again, guys! :-)


Tagged: #KB2918614
Categories: DBA Blogs

OSCON 2014: Complete Video Compilation

Surachart Opun - Sat, 2014-08-30 03:30
OSCON 2014 - Today, it's not only developers, system administrators or organizations have use the Open Source. Businesses have established to use the Open Source as well. So, you can not ignore about Open Source. At OSCON, you'll encounter the open source ecosystem. It helps digging deep into the business of open source.

Five Reasons to Attend OSCON: Get straight to the epicenter of all things open source and get better at what you do, Learn from the best and make valuable connections, Get solutions to your biggest challenges that you can apply today, See the latest developments, products, services, and career trends and Hear it first at OSCON.

It's very good idea to attend the OSCON, if you missed OSCON2014. I mention OSCON 2014: Complete Video Compilation. You can download these videos or view them through our HD player, and learn about open source with more than 350 presenters, including Matthew McCullough (GitHub), Leslie Hawthorn (Elasticsearch), James Turnbull (Docker), Andrei Alexandrescu (Facebook), Tim Berglund (DataStax), Paco Nathan (Zettacap), Kirsten Hunter (Akamai), Matt Ray (Chef Software, Inc.), and Damian Conway (Thoughtstream) among them. In these videos, you are able to see a lot of tracks (Business,Cloud,Community,Computational Thinking,Databases & Datastores,Education,Emerging anguages,Geek Lifestyle,Java & JVM,JavaScript - HTML5 - Web,Mobile Platforms,Open Hardware,Operations & System Admin,Perl,PHP,Python,Security,Tools & Techniques,User Experience).

You will able to learn many tracks as I told you. Anyway, Oreilly has improved video streaming and downloading. Additional, it's very useful for playback speed control and mobile viewing.

Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Using #DB12c PDB as a Repository for #EM12c…. Not so fast!

DBASolved - Fri, 2014-08-29 14:51

Last year, I wrote a post on how to set up Oracle Enterprise Manager 12c (12.1.0.3) with Oracle Database 12c (12.1.0.1) and what you could expect.  At the time the configuration was not supported by Oracle and I only did it to see if it could be done.  If you would like to read that post, it can be found here.

This past May (2014), I attended the Oracle Customer Advisory Board (CAB) for Oracle Enterprise Manager 12c. It was a great gathering of power users and customers of the Oracle Enterprise Manager product line, with a lot of useful information being shared.  One thing that came out of the CAB is that Oracle Enterprise Manager 12c (12.1.0.4) is now fully supported against Oracle Database 12c (12.1.0.1 or later).  During the discussion, I asked if Pluggable Databases (PDBs) would be supported for the repository database; I was surprised when the initial answer was “Yes”.  I was excited to say the least and wanted to try it out.  The ability to house multiple monitoring tools in a single CDB via multiple PDBs would shrink the footprint of many monitoring application databases within organizations.

Unfortunately, Oracle and I don’t share the same outlook with that use case.  After doing more research, more installations, and discussing the use case with a few friends inside and outside of Oracle, I can safely say that Pluggable Databases (PDBs) are not supported as an Oracle Management Repository (OMR).  Although a Pluggable Database (PDB) is not supported as the OMR, Oracle Enterprise Manager 12c (12.1.0.4) can still be run with an Oracle Database 12c (12.1.0.1 or later) OMR as long as it is a traditional database (non-CDB).

In summary, I’ve answered two (2) questions for you.

1. Are Pluggable Databases (PDBs) supported as Oracle Management Repositories for EM12c?  -> NO

2. Can an Oracle Database 12c (12.1.0.1 or later), in traditional (non-CDB) mode, be used as an Oracle Management Repository? -> YES
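
If you want to verify up front whether a candidate repository database is a traditional non-CDB, a quick check against V$DATABASE (which gained a CDB column in 12c) will tell you. A minimal sketch, run as a privileged user in the intended repository database:

SQL> -- CDB = 'NO' means a traditional non-CDB, the supported configuration for the OMR
SQL> select name, cdb from v$database;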

In Image 1, you can see that I have EM12c (12.1.0.4) running with a DB12c repository.

Image 1:

12104_db12_oem.png

Enjoy!

about.me: http://about.me/dbasolved

 


Filed under: Database, OEM
Categories: DBA Blogs

Log Buffer #386, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-08-29 08:16

If you ever wanted an easy-peasy way to get a few of the best blog posts of the week from Oracle, SQL Server, and MySQL, then Log Buffer Editions are the place to be.

Oracle:

The Product Management team have released a knowledge article for Enterprise Performance Management (EPM) 11.1.2.2.x and 11.1.2.3.x containing details for EPM support with Internet Explorer (IE) 11.

As if anyone needs to be reminded, there’s a ridiculous amount of hype surrounding clouds and big data. There’s always oodles of hype around any new technology that is not well understood.

By mapping an external table to some text file, you can view the file contents as if it were data in a database table.

Vikram has discovered the adopreports utility in R12.2.

As a lot of the new APEX 5 features are “by developers for developers”, this one is also a nifty little thing that makes our lives easier.

SQL Server:

Data Mining: Part 15 Processing Data Mining components with SSIS.

SQL Server AlwaysOn Availability Groups Fail the Initial Failover Test.

Stairway to PowerPivot and DAX – Level 6: The DAX SUM() and SUMX() Functions.

Questions about T-SQL Expressions You Were Too Shy to Ask

SQL Server Service Engine fails to start after applying CU4 for SQL Server 2008 SP1.

MySQL:

MySQL for Visual Studio 1.2.x recently became a GA version. One of the main features included in this version was the new MySQL ASP.NET MVC Wizard.

Resources for Database Clusters: Performance Tuning for HAProxy, Support for MariaDB 10, Technical Blogs & More.

Trawling the binlog with FlexCDC and new FlexCDC plugins for MySQL

InnoDB provides a custom mutex and rw-lock implementation.

You probably already know that Sphinx supports the MySQL binary network protocol. But if you haven’t heard, Sphinx can be accessed with the regular ol’ MySQL API.

Categories: DBA Blogs

Presenting at OOW 2014

DBASolved - Fri, 2014-08-29 07:26

This year I’ll be presenting at Oracle Open World with many of the best in the industry.  If you are going to be in the San Francisco area between September 28 and October 2, 2014, stop by and check out the conference.  Registration information can be found here.

The topics which I’ll be presenting or assisting with this year are:

  • OTN RAC Attack – Sunday, September 28, 2014 – 9 am – 3 pm PST @ OTN Lounge
  • How many ways can I monitor Oracle GoldenGate – Sunday, September 28, 2014 – 3:30 pm – 4:15 pm PST @ Moscone South 309
  • Oracle Exadata’s Exachk and Oracle Enterprise Manager 12c: Keeping Up with Oracle Exadata – Thursday, October 2, 2014 10:45 am – 11:30 am PST @ Moscone South 310

Hope to see you there!

Enjoy!

about.me: http://about.me/dbasolved


Filed under: General
Categories: DBA Blogs

Tungsten Replicator: MariaDB Master-Master and Master-Slave Topologies

Pythian Group - Thu, 2014-08-28 12:45

A common concern in the MySQL community is how to best implement high availability for MySQL. There are various built-in mechanisms to accomplish this, such as Master/Master and Master/Slave replication using binary logs, as well as FOSS solutions such as Galera and Tungsten, just to name a few. Oftentimes, IT Managers and DBAs alike opt to avoid implementing a third-party solution due to the added administrative overhead, without fully evaluating the available solutions. In today’s blog post, I would like to describe the process for configuring a Master/Slave topology and switching to a Master/Master topology with Tungsten Replicator.

Tungsten Replicator is a well-known tool that has gained much acclaim in the area of MySQL Enterprise database implementation; however, many teams tend to stay away from it to avoid over-complicating the replication topology. I have listed and described all of the steps required to configure a replication topology for 1 to N nodes (today’s how-to guide serves for a 2-node implementation, but I will describe the additional steps that would be required to implement these topologies for N nodes).

The 2 nodes I will be using are vm128-142 and vm129-117. The first part of the document contains the steps that need to be performed on both nodes, and the latter describes the steps to be performed on either one of the two nodes. As soon as Tungsten Replicator has been installed on both nodes with the same configuration files, the switch is as simple as “one, two, three” – all it requires is running the script that configures the topology of your choice. The main topologies that are available are:

  • Master – Slave: Replication flowing from 1 .. N nodes using Tungsten Replicator
  • Master – Master: Bi-directional replication for 1 .. N nodes
  • Star Topology: A central node acts as a hub and all spokes are Master nodes
  • Fan-in Topology: A single slave node with replication from 1 .. N Master nodes

(Check out https://code.google.com/p/tungsten-replicator/wiki/TRCMultiMasterInstallation for further details)

So, let’s continue with the actual steps required (please note I’m using the “root” account with SSH passwordless authentication for the purposes of this article; it is best to define another user on production systems). The parameters and values in red text require customization for your system / topology, and the configuration file contents are all indented in royal blue text:

### The following commands should be executed on all nodes (vm128-142 & vm129-117 in this how-to)

su - root
cd /root # or alternatively to a place like /opt/ or /usr/local/
vi /etc/yum.repos.d/MariaDB.repo

 # MariaDB 5.5 CentOS repository list - created 2014-08-25 16:59 UTC
 # http://mariadb.org/mariadb/repositories/
 [mariadb]
 name = MariaDB
 baseurl = http://yum.mariadb.org/5.5/centos6-amd64
 gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
 gpgcheck=1

vi /etc/security/limits.conf

 # add the following line
 * - nofile 65535

yum update

yum install wget MariaDB-server MariaDB-client ruby openssh-server rsync 
yum install java-1.7.0-openjdk-1.7.0.65-2.5.1.2.el6_5.x86_64 
yum install http://www.percona.com/downloads/XtraBackup/LATEST/binary/redhat/6/x86_64/percona-xtrabackup-2.2.3-4982.el6.x86_64.rpm
ln -s /usr/bin/innobackupex /usr/bin/innobackupex-1.5.1

wget http://downloads.tungsten-replicator.org/download.php?file=tungsten-replicator-2.2.1-403.tar.gz
tar -xzvf download.php\?file\=tungsten-replicator-2.2.1-403.tar.gz
rm download.php\?file\=tungsten-replicator-2.2.1-403.tar.gz
cd tungsten-replicator-2.2.1-403/

vi cookbook/COMMON_NODES.sh

 #!/bin/bash
 # (C) Copyright 2012,2013 Continuent, Inc - Released under the New BSD License
 # Version 1.0.5 - 2013-04-03

 export NODE1=vm128-142.dlab.pythian.com
 export NODE2=vm129-117.dlab.pythian.com
 #export NODE3=host3
 #export NODE4=host4
 #export NODE5=host5
 #export NODE6=host6
 #export NODE7=host7
 #export NODE8=host8

vi cookbook/USER_VALUES.sh

 #!/bin/bash
 # (C) Copyright 2012,2013 Continuent, Inc - Released under the New BSD License
 # Version 1.0.5 - 2013-04-03

 # User defined values for the cluster to be installed.

 cookbook_dir=$(dirname $0 )

 # Where to install Tungsten Replicator
 export TUNGSTEN_BASE=/opt/tungsten-replicator/installs/cookbook

 # Directory containing the database binary logs
 export BINLOG_DIRECTORY=/var/lib/mysql

 # Path to the script that can start, stop, and restart a MySQL server
 export MYSQL_BOOT_SCRIPT=/etc/init.d/mysql

 # Path to the options file
 export MY_CNF=/etc/my.cnf

 # Database credentials
 export DATABASE_USER=tungsten
 export DATABASE_PASSWORD=tungsten
 export DATABASE_PORT=3306

 # Name of the service to install
 export TUNGSTEN_SERVICE=cookbook

 # Replicator ports
 export RMI_PORT=10000
 export THL_PORT=2112

 # If set, replicator starts after installation
 [ -z "$START_OPTION" ] && export START_OPTION=start

 ##############################################################################
 # Options used by the "direct slave " installer only
 # Modify only if you are using 'install_master_slave_direct.sh'
 ##############################################################################
 export DIRECT_MASTER_BINLOG_DIRECTORY=$BINLOG_DIRECTORY
 export DIRECT_SLAVE_BINLOG_DIRECTORY=$BINLOG_DIRECTORY
 export DIRECT_MASTER_MY_CNF=$MY_CNF
 export DIRECT_SLAVE_MY_CNF=$MY_CNF
 ##############################################################################

 ##############################################################################
 # Variables used when removing the cluster
 # Each variable defines an action during the cleanup
 ##############################################################################
 [ -z "$STOP_REPLICATORS" ] && export STOP_REPLICATORS=1
 [ -z "$REMOVE_TUNGSTEN_BASE" ] && export REMOVE_TUNGSTEN_BASE=1
 [ -z "$REMOVE_SERVICE_SCHEMA" ] && export REMOVE_SERVICE_SCHEMA=1
 [ -z "$REMOVE_TEST_SCHEMAS" ] && export REMOVE_TEST_SCHEMAS=1
 [ -z "$REMOVE_DATABASE_CONTENTS" ] && export REMOVE_DATABASE_CONTENTS=0
 [ -z "$CLEAN_NODE_DATABASE_SERVER" ] && export CLEAN_NODE_DATABASE_SERVER=1
 ##############################################################################


 #
 # Local values defined by the user.
 # If ./cookbook/USER_VALUES.local.sh exists,
 # it is loaded at this point

 if [ -f $cookbook_dir/USER_VALUES.local.sh ]
 then
 . $cookbook_dir/USER_VALUES.local.sh
 fi

service iptables stop 

 # or open ports listed below:
 # 3306 (MySQL database)
 # 2112 (Tungsten THL)
 # 10000 (Tungsten RMI)
 # 10001 (JMX management)

vi /etc/my.cnf.d/server.cnf

 # These groups are read by MariaDB server.
 # Use it for options that only the server (but not clients) should see
 #
 # See the examples of server my.cnf files in /usr/share/mysql/
 #

 # this is read by the standalone daemon and embedded servers
 [server]

 # this is only for the mysqld standalone daemon
 [mysqld]
 open_files_limit=65535
 innodb-file-per-table=1
 server-id=1 # make server-id unique per server
 log_bin
 innodb-flush-method=O_DIRECT
 max_allowed_packet=64M
 innodb-thread-concurrency=0
 default-storage-engine=innodb
 skip-name-resolve

 # this is only for embedded server
 [embedded]

 # This group is only read by MariaDB-5.5 servers.
 # If you use the same .cnf file for MariaDB of different versions,
 # use this group for options that older servers don't understand
 [mysqld-5.5]

 # These two groups are only read by MariaDB servers, not by MySQL.
 # If you use the same .cnf file for MySQL and MariaDB,
 # you can put MariaDB-only options here
 [mariadb]

 [mariadb-5.5]

service mysql start
mysql -uroot -p -e"CREATE USER 'tungsten'@'%' IDENTIFIED BY 'tungsten';"
mysql -uroot -p -e"GRANT ALL PRIVILEGES ON *.* TO 'tungsten'@'%' WITH GRANT OPTION;"
mysql -uroot -p -e"FLUSH PRIVILEGES;"

ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_rsa.pub | ssh vm129-117 'cat >> ~/.ssh/authorized_keys' # from vm128-142
cat ~/.ssh/id_rsa.pub | ssh vm128-142 'cat >> ~/.ssh/authorized_keys' # from vm129-117
chmod 600 ~/.ssh/authorized_keys

cookbook/validate_cluster # this is the command used to validate the configuration

vi cookbook/NODES_MASTER_SLAVE.sh

 #!/bin/bash
 # (C) Copyright 2012,2013 Continuent, Inc - Released under the New BSD License
 # Version 1.0.5 - 2013-04-03

 CURDIR=`dirname $0`
 if [ -f $CURDIR/COMMON_NODES.sh ]
 then
 . $CURDIR/COMMON_NODES.sh
 else
 export NODE1=
 export NODE2=
 export NODE3=
 export NODE4=
 export NODE5=
 export NODE6=
 export NODE7=
 export NODE8=
 fi

 export ALL_NODES=($NODE1 $NODE2 $NODE3 $NODE4 $NODE5 $NODE6 $NODE7 $NODE8)
 # indicate which servers will be masters, and which ones will have a slave service
 # in case of all-masters topologies, these two arrays will be the same as $ALL_NODES
 # These values are used for automated testing

 #for master/slave replication
 export MASTERS=($NODE1)
 export SLAVES=($NODE2 $NODE3 $NODE4 $NODE5 $NODE6 $NODE7 $NODE8)

## The following commands should be performed on just one of the nodes
## In my case either vm128-142 OR 129-117

cookbook/install_master_slave # to install master / slave topology
cookbook/show_cluster # here we see master - slave replication running

 --------------------------------------------------------------------------------------
 Topology: 'MASTER_SLAVE'
 --------------------------------------------------------------------------------------
 # node vm128-142.dlab.pythian.com
 cookbook [master] seqno: 1 - latency: 0.514 - ONLINE
 # node vm129-117.dlab.pythian.com
 cookbook [slave] seqno: 1 - latency: 9.322 - ONLINE

cookbook/clear_cluster # run this to destroy the current Tungsten cluster 

cookbook/install_all_masters # to install master - master topology 
cookbook/show_cluster # and here we've switched over to master - master replication

 --------------------------------------------------------------------------------------
 Topology: 'ALL_MASTERS'
 --------------------------------------------------------------------------------------
 # node vm128-142.dlab.pythian.com
 alpha [master] seqno: 5 - latency: 0.162 - ONLINE
 bravo [slave] seqno: 5 - latency: 0.000 - ONLINE
 # node vm129-117.dlab.pythian.com
 alpha [slave] seqno: 5 - latency: 9.454 - ONLINE
 bravo [master] seqno: 5 - latency: 0.905 - ONLINE

Categories: DBA Blogs

12c: How to Restore/Recover a Small Table in a Large Database

Pythian Group - Thu, 2014-08-28 09:35

As a DBA, you will receive requests from developers or users, indicating that they deleted some data in a small table in a large database a few hours prior. They will probably want you to recover the data as soon as possible, and it will likely be a critical production database. Flashback will not be enabled, and the recycle bin will have been purged. Restoring a full database using RMAN might take you over 10 hours, and you will need a spare server with big storage. Looks like it’s going to be a difficult and time consuming task for you.

In Oracle Database 12c, there is a method available which allows us to recover the table more efficiently, and at a lower cost. The method is to create a second database (often called a stub database) using the backup of the first database. In this situation, we restore the SYSTEM, SYSAUX, and UNDO tablespaces and the individual tablespaces that contain the data that we want to restore. After the restore is complete, we take any tablespaces that we did not restore offline. We then apply the archived redo logs to the point in time that we want to restore the table to. Having restored the database to the appropriate point in time, we then use Oracle Data Pump to export the objects and import them into the original database, again using Oracle Data Pump. Oracle Database 12c introduces new functionality in RMAN that supports point-in-time restore of individual database tables and individual table partitions.

Here is an example of when I tested this new feature:

1. The database TEST has 9 tablespaces and a schema called Howie. I created a table with 19377 records called TEST1 which is in the tablespace DATA_HOWIE.

SQL> select * from v$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME                                                        VERSION           STARTUP_T STATUS       PAR    THREAD# ARCHIVE LOG_SWITCH_WAIT LOGINS     SHU DATABASE_STATUS   INSTANCE_ROLE      ACTIVE_ST BLO  CON_ID INSTANCE_MO EDITION FAMILY
--------------- ---------------- ---------------------------------------------------------------- ----------------- --------- ------------ --- ---------- ------- --------------- ---------- --- ----------------- ------------------ --------- --- ---------- ----------- ------- --------------------------------------------------------------------------------
1 TEST             12cServer1                                                       12.1.0.1.0        17-AUG-14 OPEN         NO           1 STARTED                 ALLOWED    NO  ACTIVE            PRIMARY_INSTANCE   NORMALNO            0 REGULAR     EE

SQL> select tablespace_name from dba_tablespaces order by tablespace_name;

TABLESPACE_NAME
------------------------------
DATA_HOWIE
DATA_TB1
DATA_TB2
DATA_TB3
SYSAUX
SYSTEM
TEMP
UNDOTBS1
USERS

9 rows selected.

SQL> conn howie
Enter password:
Connected.
SQL> create table test1 as select * from dba_objects;

Table created.

SQL> select count(*) from test1;

COUNT(*)
----------
19377

SQL> select table_name,tablespace_name from user_tables where table_name='TEST1';

TABLE_NAME                                                                                                                       TABLESPACE_NAME
-------------------------------------------------------------------------------------------------------------------------------- ------------------------------
TEST1                                                                                                                            DATA_HOWIE

2. The database is in archivelog mode, and I took a full backup of the database.

[oracle@12cServer1 RMAN]$ rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Sun Aug 17 20:16:17 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TEST (DBID=2146502230)

RMAN> run
{
allocate channel d1 type disk format '/u01/app/oracle/RMAN/rmn_%d_t%t_p%p';
backup
incremental level 0
tag backup_level0
filesperset 1
(database)
plus archivelog ;
release channel d1;
}2> 3> 4> 5> 6> 7> 8> 9> 10> 11>

3. The data in the table howie.test1 has been deleted.

SQL> select sysdate,current_scn from v$database;

SYSDATE             CURRENT_SCN
------------------- -----------
08/17/2014 21:01:15      435599

SQL> delete test1;

19377 rows deleted.

SQL> commit;

Commit complete.

4. I ran the following script to recover the data to an alternative table, howie.test1_temp, as of the point in time “08/17/2014 21:01:15”:

[oracle@12cServer1 RMAN]$ rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Sun Aug 17 21:01:35 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TEST (DBID=2146502230)

RMAN> recover table howie.test1
until time "to_date('08/17/2014 21:01:15','mm/dd/yyyy hh24:mi:ss')"
auxiliary destination '/u01/app/oracle/aux'
remap table howie.test1:test1_temp;2> 3> 4>

5. The script above will take care of everything, and you will see that the data has been restored to howie.test1_temp:

SQL> select count(*) from TEST1_TEMP;

COUNT(*)
----------
19377

SQL> select count(*) from TEST1;

COUNT(*)
----------
0

Let’s take a look at the log of RMAN recovery and find out how it works.

1. Creation of the auxiliary instance

Creating automatic instance, with SID='ktDA'

initialization parameters used for automatic instance:
db_name=TEST
db_unique_name=ktDA_pitr_TEST
compatible=12.1.0.0.0
db_block_size=8192
db_files=200
sga_target=1G
processes=80
diagnostic_dest=/u01/app/oracle
db_create_file_dest=/u01/app/oracle/aux
log_archive_dest_1='location=/u01/app/oracle/aux'
#No auxiliary parameter file used

2. Restore of the control file for the auxiliary instance

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('08/17/2014 21:01:15','mm/dd/yyyy hh24:mi:ss')";
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log
sql 'alter system archive log current';
}

3. A list of datafiles that will be restored, followed by their restore and recovery in the auxiliary instance

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('08/17/2014 21:01:15','mm/dd/yyyy hh24:mi:ss')";
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  3 online";
sql clone "alter database datafile  2 online";
# recover and open database read only
recover clone database tablespace  "SYSTEM", "UNDOTBS1", "SYSAUX";
sql clone 'alter database open read only';
}

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('08/17/2014 21:01:15','mm/dd/yyyy hh24:mi:ss')";
# online the datafiles restored or switched
sql clone "alter database datafile  8 online";
# recover and open resetlogs
recover clone database tablespace  "DATA_HOWIE", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}

4. Export of tables from the auxiliary instance via Oracle Data Pump

Performing export of tables...
EXPDP> Starting "SYS"."TSPITR_EXP_ktDA_BAkw":
EXPDP> Estimate in progress using BLOCKS method...
EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
EXPDP> Total estimation using BLOCKS method: 3 MB
EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
EXPDP> . . exported "HOWIE"."TEST1"                             1.922 MB   19377 rows
EXPDP> Master table "SYS"."TSPITR_EXP_ktDA_BAkw" successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TSPITR_EXP_ktDA_BAkw is:
EXPDP>   /u01/app/oracle/aux/tspitr_ktDA_70244.dmp
EXPDP> Job "SYS"."TSPITR_EXP_ktDA_BAkw" successfully completed at Sun Aug 17 21:03:53 2014 elapsed 0 00:00:14
Export completed

5. Import of tables, constraints, indexes, and other dependent objects into the target database from the Data Pump export file

contents of Memory Script:
{
# shutdown clone before import
shutdown clone abort
}
executing Memory Script

Oracle instance shut down

Performing import of tables...
IMPDP> Master table "SYS"."TSPITR_IMP_ktDA_lube" successfully loaded/unloaded
IMPDP> Starting "SYS"."TSPITR_IMP_ktDA_lube":
IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
IMPDP> . . imported "HOWIE"."TEST1_TEMP"                        1.922 MB   19377 rows
IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
IMPDP> Job "SYS"."TSPITR_IMP_ktDA_lube" successfully completed at Sun Aug 17 21:04:19 2014 elapsed 0 00:00:19
Import completed

6. Clean-up of the auxiliary instance

Removing automatic instance
Automatic instance removed
auxiliary instance file /u01/app/oracle/aux/TEST/datafile/o1_mf_temp_9z2yqst6_.tmp deleted
auxiliary instance file /u01/app/oracle/aux/KTDA_PITR_TEST/onlinelog/o1_mf_3_9z2yrkqm_.log deleted
auxiliary instance file /u01/app/oracle/aux/KTDA_PITR_TEST/onlinelog/o1_mf_2_9z2yrj35_.log deleted
auxiliary instance file /u01/app/oracle/aux/KTDA_PITR_TEST/onlinelog/o1_mf_1_9z2yrh2r_.log deleted
auxiliary instance file /u01/app/oracle/aux/KTDA_PITR_TEST/datafile/o1_mf_data_how_9z2yrcnq_.dbf deleted
auxiliary instance file /u01/app/oracle/aux/TEST/datafile/o1_mf_sysaux_9z2yptms_.dbf deleted
auxiliary instance file /u01/app/oracle/aux/TEST/datafile/o1_mf_undotbs1_9z2yq9of_.dbf deleted
auxiliary instance file /u01/app/oracle/aux/TEST/datafile/o1_mf_system_9z2yp0mk_.dbf deleted
auxiliary instance file /u01/app/oracle/aux/TEST/controlfile/o1_mf_9z2yos1l_.ctl deleted
auxiliary instance file tspitr_ktDA_70244.dmp deleted
Finished recover at 17-AUG-14
Categories: DBA Blogs

Building a MariaDB Galera Cluster with Docker

Pythian Group - Thu, 2014-08-28 08:13

There’s been a lot of talk about Docker for running processes in isolated userspace (or the cloud for that matter) lately. Virtualization is a great way to compartmentalise applications and processes, however the overhead of virtualization isn’t always worth it – in fact, without directly attached storage, IO degradation can seriously impact performance. The solution? Perhaps Docker… with its easy-to-use CLI as well as its lightweight implementation of cgroups and kernel namespaces.

Without further ado, I present a step-by-step guide on how to build a MariaDB 5.5 Galera Cluster on Ubuntu 14.04. The same guide can probably be applied to MariaDB versions 10+; however, I’ve stuck with 5.5 since the latest version of MariaDB Galera Cluster is still in beta.

So we start off with modifying the “ufw” firewall policy to accept forwarded packets and perform a “ufw” service restart for good measure:

root@workstation:~# vi /etc/default/ufw

DEFAULT_FORWARD_POLICY="ACCEPT"

root@workstation:~# service ufw restart
ufw stop/waiting
ufw start/running

I’m assuming you already have docker installed – this is available as a package within the Ubuntu repositories and also available in the Docker repositories (see http://docs.docker.com/installation/ubuntulinux/). You’ll also need to have LXC installed (“apt-get install lxc” should suffice) in order to attach to the Linux Containers / Docker Images.

The next step is pulling the Docker / Ubuntu repository in order to customize an image for our purposes

root@workstation:~# docker pull ubuntu
Pulling repository ubuntu
c4ff7513909d: Pulling dependent layers 
3db9c44f4520: Download complete 
c5881f11ded9: Download complete 
c4ff7513909d: Download complete 
463ff6be4238: Download complete 
822a01ae9a15: Download complete 
75204fdb260b: Download complete 
511136ea3c5a: Download complete 
bac448df371d: Download complete 
dfaad36d8984: Download complete 
5796a7edb16b: Download complete 
1c9383292a8f: Download complete 
6cfa4d1f33fb: Download complete 
f127542f0b61: Download complete 
af82eb377801: Download complete 
93c381d2c255: Download complete 
3af9d794ad07: Download complete 
a5208e800234: Download complete 
9fccf650672f: Download complete 
fae16849ebe2: Download complete 
b7c6da90134e: Download complete 
1186c90e2e28: Download complete 
0f4aac48388f: Download complete 
47dd6d11a49f: Download complete 
f6a1afb93adb: Download complete 
209ea56fda6d: Download complete 
f33dbb8bc20e: Download complete 
92ac38e49c3e: Download complete 
9942dd43ff21: Download complete 
aa822e26d727: Download complete 
d92c3c92fa73: Download complete 
31db3b10873e: Download complete 
0ea0d582fd90: Download complete 
cc58e55aa5a5: Download complete

After the download is complete, we can check the Ubuntu images available for customization with the following command:

root@workstation:~# docker images
 REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
 ubuntu              14.04.1             c4ff7513909d        12 days ago         225.4 MB
 ubuntu              trusty              c4ff7513909d        12 days ago         225.4 MB
 ubuntu              14.04               c4ff7513909d        12 days ago         225.4 MB
 ubuntu              latest              c4ff7513909d        12 days ago         225.4 MB
 ubuntu              utopic              75204fdb260b        12 days ago         230.1 MB
 ubuntu              14.10               75204fdb260b        12 days ago         230.1 MB
 ubuntu              precise             822a01ae9a15        12 days ago         108.1 MB
 ubuntu              12.04               822a01ae9a15        12 days ago         108.1 MB
 ubuntu              12.04.5             822a01ae9a15        12 days ago         108.1 MB
 ubuntu              12.10               c5881f11ded9        9 weeks ago         172.2 MB
 ubuntu              quantal             c5881f11ded9        9 weeks ago         172.2 MB
 ubuntu              13.04               463ff6be4238        9 weeks ago         169.4 MB
 ubuntu              raring              463ff6be4238        9 weeks ago         169.4 MB
 ubuntu              13.10               195eb90b5349        9 weeks ago         184.7 MB
 ubuntu              saucy               195eb90b5349        9 weeks ago         184.7 MB
 ubuntu              lucid               3db9c44f4520        4 months ago        183 MB
 ubuntu              10.04               3db9c44f4520        4 months ago        183 MB

Now that we’ve downloaded our images, let’s create a custom Dockerfile for our customized MariaDB / Galera Docker image. I’ve added a brief description for each line of the file:

root@workstation:~# vi Dockerfile
 # # MariaDB Galera 5.5.39/Ubuntu 14.04 64bit
 FROM ubuntu:14.04
 MAINTAINER Pythian Nikolaos Vyzas <vyzas@pythian.com>

 RUN echo "deb http://archive.ubuntu.com/ubuntu trusty main universe" > /etc/apt/sources.list # add the universe repo
 RUN apt-get -q -y update # update apt
 RUN apt-get -q -y install software-properties-common # install software-properties-common for key management
 RUN apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db # add the key for Mariadb Ubuntu repos
 RUN add-apt-repository 'deb http://ftp.cc.uoc.gr/mirrors/mariadb/repo/5.5/ubuntu trusty main' # add the MariaDB repository for 5.5
 RUN apt-get -q -y update # update apt again
 RUN echo mariadb-galera-server-5.5 mysql-server/root_password password root | debconf-set-selections # configure the default root password during installation
 RUN echo mariadb-galera-server-5.5 mysql-server/root_password_again password root | debconf-set-selections # confirm the password (as in the usual installation)
 RUN LC_ALL=en_US.utf8 DEBIAN_FRONTEND=noninteractive apt-get -o Dpkg::Options::='--force-confnew' -qqy install mariadb-galera-server galera mariadb-client # install the necessary packages
 # upload the locally created my.cnf (obviously this can go into the default MariaDB path)
 ADD ./my.cnf /etc/mysql/my.cnf
 RUN service mysql restart # startup the service - this will fail since the nodes haven't been configured on first boot
 # open the ports required to connect to MySQL and for Galera SST / IST operations
 EXPOSE 3306 4444 4567 4568

We’ll also need our base configuration for MariaDB. I’ve included the base configuration variables for Galera – obviously there are more, however these are good enough for starting up the service:

root@workstation:~# vi my.cnf
 [mysqld]
 wsrep_provider=/usr/lib/galera/libgalera_smm.so
 wsrep_cluster_address=gcomm://
 wsrep_sst_method=rsync
 wsrep_cluster_name=galera_cluster
 binlog_format=ROW
 default_storage_engine=InnoDB
 innodb_autoinc_lock_mode=2
 innodb_locks_unsafe_for_binlog=1

So far so good, we have Docker installed and our Dockerfile as well as our “my.cnf” file ready to go. Now it’s time to build our Docker image, check that the image exists, and start up 3x separate Docker images for each of our Galera nodes:

root@workstation:~# docker build -t ubuntu_trusty/mariadb-galera .
root@workstation:~# docker images |grep mariadb-galera
 ubuntu_trusty/mariadb-galera   latest              afff3aaa9dfb        About a minute ago   412.5 MB
docker run --name mariadb1 -i -t -d ubuntu_trusty/mariadb-galera /bin/bash
docker run --name mariadb2 -i -t -d ubuntu_trusty/mariadb-galera /bin/bash
docker run --name mariadb3 -i -t -d ubuntu_trusty/mariadb-galera /bin/bash

We’ve started up our Docker images; now let’s verify that they are in fact up and retrieve the process information we need to connect. We’ll need two pieces of information, the IP address and the Docker image name, which can be retrieved using the combination of the “docker ps” and “docker inspect” commands:

root@workstation:~# docker ps
 CONTAINER ID        IMAGE                                 COMMAND             CREATED             STATUS              PORTS                                    NAMES
 b51e74933ece        ubuntu_trusty/mariadb-galera:latest   /bin/bash           About an hour ago   Up About an hour    3306/tcp, 4444/tcp, 4567/tcp, 4568/tcp   mariadb3
 03109c7018c0        ubuntu_trusty/mariadb-galera:latest   /bin/bash           About an hour ago   Up About an hour    3306/tcp, 4444/tcp, 4567/tcp, 4568/tcp   mariadb2
 1db2a9a520f8        ubuntu_trusty/mariadb-galera:latest   /bin/bash           About an hour ago   Up About an hour    3306/tcp, 4444/tcp, 4567/tcp, 4568/tcp   mariadb1
root@workstation:~# docker ps |cut -d' ' -f1 |grep -v CONTAINER | xargs docker inspect |egrep '"ID"|IPAddress'
 "ID": "b51e74933ece2f3f457ec87c3a4e7b649149e9cff2a4705bef2a070f7adbafb0",
 "IPAddress": "172.17.0.3",
 "ID": "03109c7018c03ddd8448746437346f080a976a74c3fc3d15f0191799ba5aae74",
 "IPAddress": "172.17.0.4",
 "ID": "1db2a9a520f85d2cef6e5b387fa7912890ab69fc0918796c1fae9c1dd050078f",
 "IPAddress": "172.17.0.2",

Time to use lxc-attach to connect to our Docker images using the Docker image name, add the mounts to “/etc/mtab” to keep them MariaDB friendly, and customize the “gcomm://” address as we would for a usual Galera configuration (the Docker image name is generated when the instance fires up, so make sure to use your own instance name in the following commands):

root@workstation:~# lxc-attach --name b51e74933ece2f3f457ec87c3a4e7b649149e9cff2a4705bef2a070f7adbafb0
 root@b51e74933ece:~# cat /proc/mounts > /etc/mtab
 root@b51e74933ece:~# service mysql restart
 * Starting MariaDB database mysqld                            [ OK ]
 * Checking for corrupt, not cleanly closed and upgrade needing tables.

root@b51e74933ece:~# vi /etc/mysql/my.cnf
 #wsrep_cluster_address=gcomm://
 wsrep_cluster_address=gcomm://172.17.0.2,172.17.0.3,172.17.0.4

root@b51e74933ece:~# exit
 exit

root@workstation:~# lxc-attach --name 03109c7018c03ddd8448746437346f080a976a74c3fc3d15f0191799ba5aae74
 root@03109c7018c0:~# cat /proc/mounts > /etc/mtab
 root@03109c7018c0:~# vi /etc/mysql/my.cnf
 #wsrep_cluster_address=gcomm://
 wsrep_cluster_address=gcomm://172.17.0.2,172.17.0.3,172.17.0.4
 root@03109c7018c0:~# service mysql start
 * Starting MariaDB database server mysqld                            [ OK ]
 * Checking for corrupt, not cleanly closed and upgrade needing tables.
 root@03109c7018c0:~# mysql -uroot -proot
 Welcome to the MariaDB monitor.  Commands end with ; or \g.
 Your MariaDB connection id is 30
 Server version: 5.5.39-MariaDB-1~trusty-wsrep mariadb.org binary distribution, wsrep_25.10.r4014

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show status like 'wsrep_cluster%';
 +--------------------------+--------------------------------------+
 | Variable_name            | Value                                |
 +--------------------------+--------------------------------------+
 | wsrep_cluster_conf_id    | 2                                    |
 | wsrep_cluster_size       | 2                                    |
 | wsrep_cluster_state_uuid | 42bc375b-2bc0-11e4-851c-1a7627c0624c |
 | wsrep_cluster_status     | Primary                              |
 +--------------------------+--------------------------------------+
 4 rows in set (0.00 sec)

MariaDB [(none)]> exit
 Bye
 root@03109c7018c0:~# exit
 exit

root@workstation:~# lxc-attach --name 1db2a9a520f85d2cef6e5b387fa7912890ab69fc0918796c1fae9c1dd050078f
 root@1db2a9a520f8:~# cat /proc/mounts > /etc/mtab
 root@1db2a9a520f8:~# vi /etc/mysql/my.cnf
 root@1db2a9a520f8:~# service mysql start
 * Starting MariaDB database server mysqld                                                                                                                                                     [ OK ]
 root@1db2a9a520f8:~# mysql -uroot -proot
 Welcome to the MariaDB monitor.  Commands end with ; or \g.
 Your MariaDB connection id is 34
 Server version: 5.5.39-MariaDB-1~trusty-wsrep mariadb.org binary distribution, wsrep_25.10.r4014

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show status like 'wsrep_cluster%';
 +--------------------------+--------------------------------------+
 | Variable_name            | Value                                |
 +--------------------------+--------------------------------------+
 | wsrep_cluster_conf_id    | 3                                    |
 | wsrep_cluster_size       | 3                                    |
 | wsrep_cluster_state_uuid | 42bc375b-2bc0-11e4-851c-1a7627c0624c |
 | wsrep_cluster_status     | Primary                              |
 +--------------------------+--------------------------------------+
 4 rows in set (0.00 sec)

MariaDB [(none)]> exit
 Bye
 root@1db2a9a520f8:~# exit
 exit

Now be honest… Wasn’t that easier than creating multiple virtual machines and configuring the OS for each?

Enjoy your new MariaDB Galera Cluster and happy Dockering!

Categories: DBA Blogs

How to Configure an Azure Point-to-Site VPN – Part 3

Pythian Group - Thu, 2014-08-28 07:58

This blog post is the last of the series, and it will demonstrate how to configure a Point-to-Site VPN step-by-step. In my first blog post, I demonstrated how to configure a virtual network and a dynamic routing gateway. This was followed by another post about how to deal with the certificate. Today we will learn how to configure the VPN client.

CONFIGURE THE VPN CLIENT
1. In the Management Portal, navigate to the virtual network page; in the “quick glance” section you have the links to download the VPN package.

Choose the one appropriate to your architecture (x86 or x64).

Screen Shot 2014-07-31 at 14.10.48

2. After the download completes successfully, copy the file to your servers and execute the setup.
Screen Shot 2014-07-31 at 14.49.34

3. Click Yes when it asks if you want to install the VPN client, and let it run.
Screen Shot 2014-07-31 at 15.09.26

4. After successful installation, it will be visible in your network connections.
Screen Shot 2014-07-31 at 15.46.07

5. In Windows 2012 you can click on the network icon in the notification area (close to the clock), and it will show the right-side bar with all the network connections. You can connect from there.
The other option is to right-click the connection in the “Network Connections” window (previous step) and click “Connect / Disconnect”.

6. A window will be shown, click Connect.

Screen Shot 2014-07-31 at 15.58.23

7. Now check the box next to “Do not show this message again for this Connection” and click “Continue”.

If everything is ok, the connection will succeed.

Screen Shot 2014-07-31 at 16.07.04

8. To confirm that you are connected, execute the command “ipconfig /all” in the command line, and you should see an entry for the VPN with an IP assigned.

Screen Shot 2014-07-31 at 16.24.01

9. After a while, you will also be able to see the connection in your vNet dashboard. As you can see in the image, you have data in/out in the vNet.

Screen Shot 2014-07-31 at 16.26.39

After this last part, you are done with the point-to-site VPN configuration. You can test the connectivity by executing the “ping” command and also by using the “telnet” client to test whether some specific port is open and reachable.

The point-to-site VPN is recommended if you want to connect users/devices to your Azure infrastructure, for a few different reasons. If you need to connect all or part of your on-premises infrastructure, the way to go is to configure a Site-to-Site VPN. Stay tuned for a blog post on how it works.

Thank you for reading!

Categories: DBA Blogs

Partner Webcast – Oracle Internet of Things Platform: Java 8 connecting the world

The Internet of Things Revolution is gaining speed. There are more and more devices, data and connections, thus more and more complexity to handle. But in the first place it brings complete...

Categories: DBA Blogs

force_match => TRUE option of DBMS_SQLTUNE.IMPORT_SQL_PROFILE

Bobby Durrett's DBA Blog - Wed, 2014-08-27 14:13

Yesterday and today I’ve read or heard two people mention the force_match => TRUE parameter value for DBMS_SQLTUNE.IMPORT_SQL_PROFILE and how it forces a profile to work on all SQL statements that are the same except for their literal values.  So, I ran a quick test using the coe_xfr_sql_profile.sql utility that comes with the SQLT scripts that are available for download on Oracle’s support site.
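
For context, coe_xfr_sql_profile.sql ultimately builds a call to DBMS_SQLTUNE.IMPORT_SQL_PROFILE, which is where force_match lives. Here is a minimal sketch of a direct call (the profile name and the single hint are illustrative, not taken from the generated script):

DECLARE
  h sys.sqlprof_attr;
BEGIN
  -- hint list the profile will attach to matching statements (illustrative only)
  h := sys.sqlprof_attr('FULL(@"SEL$1" "TEST"@"SEL$1")');
  dbms_sqltune.import_sql_profile(
    sql_text    => 'SELECT sum(blocks) from test where owner=''SYS''',
    profile     => h,
    name        => 'my_forced_profile',
    replace     => TRUE,
    force_match => TRUE);  -- match statements that differ only in literal values
END;
/

With force_match => TRUE the profile is keyed on the normalized SQL text, so statements that differ only in their literals pick it up.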

I’ve mentioned in earlier posts how we use coe_xfr_sql_profile.sql to force plans on particular SQL statements using the sql_id of the SQL statement and the plan_hash_value of the plan:

July 2013 post

October 2013 post

March 2014 post

May 2014 post

Yesterday I read this post by David Kurtz where he mentions force_match: post

Today I heard Karen Morton mention force_match in her webinar which should soon be posted here: url

So, after the webinar completed I built a test case to see how the force_match=>TRUE option works.  I created a test table and ran a query with a literal in the where clause and got its plan showing its sql_id and plan_hash_value:

ORCL:SYSTEM>create table test as select * from dba_tables;
ORCL:SYSTEM>SELECT sum(blocks) from test
  2  where owner='SYS';

SUM(BLOCKS)
-----------
      34633

ORCL:SYSTEM>select * from
  2  table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------
SQL_ID  10g08ytt2m5mu, child number 0
-------------------------------------
SELECT sum(blocks) from test where owner='SYS'

Plan hash value: 1950795681

---------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    29 (100)| 
|   1 |  SORT AGGREGATE    |      |     1 |    30 |            |
|*  2 |   TABLE ACCESS FULL| TEST |   992 | 29760 |    29   (0)|
----------------------------------------------------------------

Then I ran coe_xfr_sql_profile.sql to create a profile that forces the plan on the given sql_id:

SQL> @coe_xfr_sql_profile.sql 10g08ytt2m5mu 1950795681

Then, using vi I edited the output of coe_xfr_sql_profile.sql:

vi coe_xfr_sql_profile_10g08ytt2m5mu_1950795681.sql

I searched for force_match and changed the line to read like this:

force_match => TRUE

instead of

force_match => FALSE

There are comments in the script explaining the meaning of these two values but I don’t want to plagiarize the script by including them here.  Next I ran the edited script:

sqlplus system/password < coe_xfr_sql_profile_10g08ytt2m5mu_1950795681.sql

Then I ran a test showing that not only the original query with the where clause literal ‘SYS’ would use the profile but the same query with a different literal ‘SYSTEM’ would use the  created profile.

ORCL:SYSTEM>SELECT sum(blocks) from test
  2  where owner='SYS';

SUM(BLOCKS)
-----------
      34633

ORCL:SYSTEM>select * from
  2  table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------
SQL_ID  10g08ytt2m5mu, child number 0
-------------------------------------
SELECT sum(blocks) from test where owner='SYS'

Plan hash value: 1950795681

----------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    29 (100)|
|   1 |  SORT AGGREGATE    |      |     1 |    30 |            |
|*  2 |   TABLE ACCESS FULL| TEST |    81 |  2430 |    29   (0)|
----------------------------------------------------------------

Note
-----
  - SQL profile coe_10g08ytt2m5mu_1950795681 used for this statement

ORCL:SYSTEM>SELECT sum(blocks) from test
  2  where owner='SYSTEM';

SUM(BLOCKS)
-----------
        520

ORCL:SYSTEM>
ORCL:SYSTEM>select * from
  2  table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------
SQL_ID  806ncj0a5fgus, child number 0
-------------------------------------
SELECT sum(blocks) from test where owner='SYSTEM'

Plan hash value: 1950795681

----------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    29 (100)|
|   1 |  SORT AGGREGATE    |      |     1 |    30 |            |
|*  2 |   TABLE ACCESS FULL| TEST |    81 |  2430 |    29   (0)|
----------------------------------------------------------------

Note
-----
  - SQL profile coe_10g08ytt2m5mu_1950795681 used for this statement

Note that a different sql_id = 806ncj0a5fgus represents the second statement but the same plan_hash_value = 1950795681.  Also note that the SQL profile has the same name in both plans = coe_10g08ytt2m5mu_1950795681.

Now that I’m aware of the force_match=>TRUE option of DBMS_SQLTUNE.IMPORT_SQL_PROFILE I can use SQL profiles to force plans on queries that have different literal values, but are otherwise identical.  This adds a whole new set of problems that can be resolved without modifying the existing code which can really help in a performance firefight.

- Bobby

Categories: DBA Blogs

My Speaking Schedule for Oracle Open World 2014

Galo Balda's Blog - Wed, 2014-08-27 12:22

A quick post to let you know about the two presentations that I’ll be doing at Oracle Open World 2014.

Session ID:         UGF4482
Session Title:     “Getting Started with SQL Pattern Matching in Oracle Database 12c”
Venue / Room:  Moscone South – 301
Date and Time:  9/28/14, 13:30 – 14:15

Session ID:          CON4493
Session Title:      “Regular Expressions in Oracle Database 101”
Venue / Room:   Moscone South – 303
Date and Time:   10/2/14, 13:15 – 14:00

As usual, you might have to check before the session to make sure the room has not changed.

I hope to see you there.


Filed under: 12C, Open World, Oracle, Regular Expressions, Row Pattern Matching, SQL Tagged: 12C, Open World, Oracle, Regular Expressions, Row Pattern Matching, SQL
Categories: DBA Blogs

Hands-On Programming with R by Garrett Grolemund

Surachart Opun - Wed, 2014-08-27 02:42
R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX platforms, Windows and MacOS.
The R language is useful for becoming a data scientist as well as a computer scientist. Here I mention a book about data science with R: Hands-On Programming with R: Write Your Own Functions and Simulations, by Garrett Grolemund. It explains how to solve the logistical problems of data science, and how to write your own functions and simulations with R. In the book, readers learn through practical data analysis projects (Weighted Dice, Playing Cards, Slot Machine) and gain a deeper understanding of R. In addition, Appendixes A-E help with installing and updating R and R packages, as well as loading data and debugging R code.
Garrett Grolemund maintains shiny.rstudio.com, the development center for the Shiny R package.
Free Sampler. Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

SQL Server Replication Quick Tips

Pythian Group - Tue, 2014-08-26 07:56

There is a time in every SQL Server DBA's career when an email comes in with a “replication is not working, please check” message. This article is intended to provide quick tips on how to handle common replication errors and performance problems in a one-way transactional replication topology.

Oh boy, there is a data problem:


You check Replication Monitor and get a:

“Transaction sequence number: 0x0003BB0E000001DF000600000000, Command ID: 1”

The row was not found at the Subscriber when applying the replicated command. (Source: MSSQLServer, Error number: 20598)

Note the sequence number, which will be used in the following scripts. The command ID is also important to note, as not necessarily the whole sequence number has issues; it might be tied to just one command.

Go to the distributor database and run the following command to get the list of articles involved in this issue:

select * from dbo.MSarticles
where article_id in (
select article_id from MSrepl_commands
where xact_seqno = 0x0003BB0E000001DF000600000000)

To get the whole list of commands, you can run the query below:

exec sp_browsereplcmds
@xact_seqno_start = '0x0003BB0E000001DF000600000000',
@xact_seqno_end = '0x0003BB0E000001DF000600000000'

With this last query you can get to the exact command that is failing (by searching for the command number in the commandID column).

You will notice that a transactional replication will typically (depending on setup) use insert, delete, and update stored procedures to replicate the data, so the command you will see over here will look something like:

{CALL [sp_MSdel_dboMyArticle] (118)}

That is the stored procedure generated to process delete statements over the dbo.MyArticle table, and in this case it is trying to delete ID 118. Based on the error reported, you will now realize that the issue is that replication is trying to delete the row with ID 118 from MyArticle and it is not there, so it is trying to delete a non-existent record.

Options:

  1. You can check the publisher for this record and manually insert it in the subscriber; this will cause the replication command to succeed and will fix the issue (see the quick check sketched after this list).
  2. You can skip the command. For this specific example you can skip it, as there is no need to delete something that has already been deleted, by removing the command from the MSrepl_commands table. (Beware: only do this when you know what you are doing, as manually removing transactions can result in an unstable replication.) In this example you would use something like
    Delete from MSrepl_commands
    where xact_seqno = 0x0003BB0E000001DF000600000000 and commandID=1
  3. Reinitialize. This option is the least famous; you should try to fix the issue before doing this. However, if after skipping the command you still get new errors everywhere, something definitely went wrong and there is no easy way to guarantee that your subscription is up to date and stable. This can be an indicator that someone or something messed around with the data: there was some type of modification at the subscription and this is causing issues with the replication. Remember, most likely a one-way transactional replication is intended to keep a copy of the data so it can be queried; no modification should be made to the data, as this won’t replicate back to the publisher.
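
For option 1, a quick way to confirm the mismatch is to run the same lookup on both sides before touching anything. A minimal sketch, where the table comes from the example above and the key column name ID is an assumption:

-- run at the publisher and at the subscriber; the row should exist at the publisher only
SELECT * FROM dbo.MyArticle WHERE ID = 118;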

Query timeouts:

After checking the replication monitor you get a message like:

Query timeout expired
The process is running and is waiting for a response from the server
Initializing…

and then terminating with this error…
Agent ‘MyAgent’ is retrying after an error, YY retries attempted

This can be due to several reasons:

  • Your transaction is taking a long time and needs some tuning. If your transaction is touching too much data or is using a bad query plan, it can result in a long-running query; check your T-SQL and see if the execution plan is optimal.
  • There is a problem with the network. If you normally don’t have this issue and it just happened out of the blue, check the network; sometimes a network failure or a saturated endpoint can affect transfer rates, impacting your replication.
  • Server performance. Either the publisher or the subscriber can have a performance problem; too much CPU or memory usage can eventually impact a replication transaction, causing it to time out.
  • The query just needs some more time to complete. If this is the case you can tweak the timeout setting to give the transaction some more time so it can process properly. To do this:
  1. Right click the Replication folder
  2. Click Distributor Properties and select General
  3. Click 'Profile Defaults'
  4. Choose 'Distribution Agents' on the left
  5. Click 'New' to create a new default agent profile
  6. Choose 'Default Agent Profile' from the list displayed (the new profile will be created as a copy of it)
  7. Pick a name for your new profile and update the QueryTimeout value in the right column
  8. Save
  9. You can choose to use this profile across all your replication sets; however, I would recommend applying it only to the agent that requires this change
  10. To individually assign the profile, open Replication Monitor and then in the left pane click your replication set
  11. In the right pane, select your desired agent, right click and change the profile to the new one you just created

Mini Hack on expired subscriptions

When a replication is marked as expired, it will tell you that you need to reinitialize.

To activate it "under the hood", check the last error in your replication monitor; it will show you the last sequential number that it tried to process. Then run this command (using the corresponding seq_no):

update MSsubscriptions
set status=2
where subscription_seqno=0x0002AADE00005030000100000002

The status column means:

0 = Inactive.

1 = Subscribed.

2 = Active.

You can change it to Active and it will try to process again. Why would you use this? If the subscription expired but your distribution cleanup job hasn't run yet, the subscription can try to reprocess everything again; if the issue was related to a network timeout and your network is now back up, you can try this, as it will start from the last sequential number. You can also use this to reproduce the last error reported: it will fail and eventually expire again, but you will have a better idea of why it failed in the first place.
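If you want to confirm the status before and after the update, a quick check on the distributor looks like this; a sketch only, assuming the default distribution database name and the same sequence number as above:

select publisher_db, subscriber_db, status, subscription_seqno
from distribution.dbo.MSsubscriptions
where subscription_seqno = 0x0002AADE00005030000100000002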

Multi-threading or "Streams"

A slow replication, and by slow I mean one where you know there is a delay between a command leaving the distributor and arriving at the subscriber, can be confirmed with performance counters or by quickly inserting a tracer token (http://technet.microsoft.com/en-us/library/ms151846%28v=sql.105%29.aspx).
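As a rough sketch of the tracer token approach (the publication name is a placeholder), you post a token at the publisher in the publication database and then read back the measured latencies:

-- Post a tracer token for the publication
exec sys.sp_posttracertoken @publication = 'MyPublication';

-- List the tokens posted so far and note the tracer id you want to inspect
exec sys.sp_helptracertokens @publication = 'MyPublication';

-- Show publisher-to-distributor and distributor-to-subscriber latency for that token
exec sys.sp_helptracertokenhistory @publication = 'MyPublication', @tracer_id = 1;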

You can improve the performance by adding streams. Normally, the default setting writes the replication transactions sequentially, one by one; with streams you add more threads, so if you specify 4 streams you will be processing 4 transactions at a time, meaning a faster turnaround. This can work beautifully, but it can also generate deadlocks and inconsistencies. I would recommend starting low, adding 1 stream at a time, and stopping when you start seeing problems. Do not go crazy treating this as a turbo button and add 30 streams; and, like most features, test it in QA first!

To Enable this option follow these steps:

  1. Open Replication Monitor, expand the Publisher and select the Publication in the left pane.
  2. In the right pane, under "All Subscriptions", you will see a list of all the Subscribers.
  3. Right-click the Subscriber you want to modify and click "View Details". A new window will appear with the distribution agent session details.
  4. Now click "Action" in the menu bar at the top and select "Distribution Agent Job Properties"; this will open the corresponding job properties.
  5. Go to "Steps" in the left pane, highlight "Run Agent" in the right pane, and click Edit.
  6. A new window will pop up; scroll to the right end of the command section and append this parameter: -SubscriptionStreams 2
  7. Save the settings and restart the Distribution Agent job.

You might encounter some issues when implementing this; you can read this KB for further info:

http://support.microsoft.com/kb/953199

Conclusion

There are many tips on how to fix a replication. Sometimes it is easier to just reinitialize, but that is not always an option, for example when critical systems depend on the subscription being up to date or your database is so huge that reinitializing would take days to complete. When possible, try to troubleshoot instead of just restarting the replication from scratch, as it will give you a lot more insight into what is going on.

Categories: DBA Blogs

12.1.0.2 Introduction to Attribute Clustering (The Division Bell)

Richard Foote - Tue, 2014-08-26 00:03
One of the really cool new features introduced in 12.1.0.2 is Attribute Clustering. This new table based attribute allows you to very easily cluster data in close physical proximity based on the content of specific columns. As I’ve discussed many times, indexes love table data that is physically clustered in a similar manner to the index […]
Categories: DBA Blogs

Arizona Oracle User Group (AZORA)

Bobby Durrett's DBA Blog - Mon, 2014-08-25 11:10

I saw this fun blog post about the Arizona Oracle User Group getting organized: blog post

I’m definitely interested in being involved.  Please pass this on to any Oracle DBAs, developers, etc. that you know in the Phoenix area.

- Bobby

Categories: DBA Blogs

Integrated Application Heartbeat for Oracle GoldenGate

DBASolved - Mon, 2014-08-25 06:51

Over the last two weeks I’ve been working on a heartbeat monitoring solution for a client.  This is not the normal heartbeat solution provided by Oracle as described in note 1299679.1, yet it is very similar.  The approach that I configured is similar to a traditional heartbeat setup but uses the same extracts and replicats already being used by the database/application.  For simplicity, I like to call this approach an Integrated Application Heartbeat.

In order to set up this style of heartbeat monitoring, the following items are needed:

1.  Identify the heartbeat table in the source database.  In this configuration, let's call this table SETTINGS (keep in mind nothing will change with the table; no DDL changes).
2.  Identify and generate DDL for the target heartbeat table.  Let’s call this table GG_STATUS_HB.
3.  Identify and generate DDL for the target heartbeat history table.  Let’s call this table GG_STATUS_HB_HIST
4.  Two triggers for updating information in the target heartbeat tables (1 example below)
5.  A few macros.  One for each of the processes in the configuration.
6.  Either a crontab job or a DBMS_JOBs process

Now that the components have been identified, let's take a look at what needs to be done.

The source side heartbeat table has already been identified.  In the database, the table name is SETTINGS.  The SETTINGS table has a single column for a primary key.  This table is updated using a crontab job that runs the following SQL.  The <schema> variable is there because this table (SETTINGS) exists in more than one schema.

update <schema>.settings set id=id where rownum < 2;
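If you prefer a database job over crontab (item 6 above), a minimal DBMS_SCHEDULER sketch that runs the same update every minute could look like the following; the job name, schema and interval are assumptions, not part of the original setup.

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'GG_HB_SETTINGS_UPD',   -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN UPDATE myschema.settings SET id = id WHERE ROWNUM < 2; COMMIT; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=1',
    enabled         => TRUE,
    comments        => 'Integrated application heartbeat driver');
END;
/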

On the target side, the target heartbeat and heartbeat history tables need to be created.  These tables will not match the columns in the SETTINGS table.  This means that the data in the SETTINGS table still has to be replicated as-is, in addition to the table being used as a heartbeat table.  The DDL to create these two tables is as follows:

CREATE TABLE <schema>.GG_STATUS_HB
(
SITE_ID                    NUMBER(10),
DB_SCHEMA                  VARCHAR2(30),
CSN_TS                     TIMESTAMP(6),
CSN_NB                     NUMBER(18),
EXT_GROUP                  VARCHAR2(10),
EXT_TIME                   TIMESTAMP(6),
EXT_LAG                    NUMBER,
PMP_GROUP                  VARCHAR2(10),
PMP_TIME                   TIMESTAMP(6),
PMP_LAG                    NUMBER,
TGT_DB_NAME                VARCHAR2(15),
REP_GROUP                  VARCHAR2(10),
REP_TIME                   TIMESTAMP(6),
REP_LAG                    NUMBER,
TOTAL_LAG                  NUMBER,
TOTAL_LAG_MIN              NUMBER,
UPDATE_TS                  TIMESTAMP(6),
CONSTRAINT GG_STATUS_HB_PK PRIMARY KEY (SITE_ID) ENABLE
);
CREATE TABLE <schema>.GG_STATUS_HB_HIST
(
SITE_ID                    NUMBER(10),
DB_SCHEMA                  VARCHAR2(30),
CSN_TS                     TIMESTAMP(6),
CSN_NB                     NUMBER(18),
EXT_GROUP                  VARCHAR2(10),
EXT_TIME                   TIMESTAMP(6),
EXT_LAG                    NUMBER,
PMP_GROUP                  VARCHAR2(10),
PMP_TIME                   TIMESTAMP(6),
PMP_LAG                    NUMBER,
TGT_DB_NAME                VARCHAR2(15),
REP_GROUP                  VARCHAR2(10),
REP_TIME                   TIMESTAMP(6),
REP_LAG                    NUMBER,
TOTAL_LAG                  NUMBER,
TOTAL_LAG_MIN              NUMBER,
UPDATE_TS                  TIMESTAMP(6),
CONSTRAINT GG_STATUS_HB_HIST_PK PRIMARY KEY (SITE_ID, DB_SCHEMA, CSN_TS, CSN_NB, UPDATE_TS) ENABLE
);

In mapping these tables, I found it easier to create macros to handle the mappings.  Macros are especially handy since the SETTINGS table is in multiple schemas and they need to be mapped through existing GoldenGate processes.  For more on macros and passing parameters, I’ve written this post to highlight macros (here).

Examples of the macros that are used in this configuration are as follows:

Macro for Extract process:
This macro is used to capture and map the extract name, time the transaction was extracted, the timestamp of the commit and the System Change Number.

--Heartbeat Extract Macro Library - Version 1.01
--Initial write - 08-13-2014 - BLC
-- ***************************************
-- Common macros for this library
-- called internally
-- ***************************************
MACRO #hb_ext_details
BEGIN
EXT_GROUP = @GETENV("GGENVIRONMENT","GROUPNAME"), &
EXT_TIME = @DATE("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV("JULIANTIMESTAMP")), &
CSN_TS = @GETENV("GGHEADER","COMMITTIMESTAMP"), &
TRANS_CSN = @GETENV("TRANSACTION","CSN")
END;
-- *****************************************
-- Application Specific Macros for Heartbeat
-- *****************************************
MACRO #opshb_info
BEGIN
       #hb_ext_details()
END;

Macro for the Pump process:
The macro captures the pump name and the time the transaction passed through the pump.

--Heartbeat Pump Macro Library - Version 1.01
--Initial write - 08-13-2014 - BLC
-- ***************************************
-- Common macros for this library
-- called internally
-- ***************************************

MACRO #hb_pmp_details
BEGIN
PMP_GROUP=@GETENV("GGENVIRONMENT","GROUPNAME"), &
PMP_TIME=@DATE("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV("JULIANTIMESTAMP"))
END;

-- *****************************************
-- Application Specific Macros for Heartbeat
-- *****************************************

MACRO #opshb_info
BEGIN
        #hb_pmp_details()
END;

Macro for the Replicat Process:
This macro does the mapping of all the tokens that have been passed from the other GoldenGate processes to the target tables.

--Heartbeat Replicat Macro Library - Version 1.01
--Initial write - 08-13-2014 - BLC
-- ***************************************
-- Common macros for this library
-- called internally
-- ***************************************
MACRO #opshb_rep_details
BEGIN
SITE_ID=@TOKEN("NHIN_STORE_ID"), &
DB_SCHEMA=@TOKEN("SRC_DB_SCHEMA"), &
HOST_NAME=@GETENV("GGFILEHEADER", "HOSTNAME"), &
CSN_TS=@TOKEN("SRC_CSN_TS"), &
CSN_NB=@TOKEN("SRC_TRANS_CSN"), &
EXT_NAME=@TOKEN("EXT_GROUP"), &
EXT_TIME=@TOKEN("EXT_TIME"), &
PMP_GROUP=@TOKEN("PMP_GROUP"), &
PMP_TIME=@TOKEN("PMP_TIME"), &
TGT_DB_NAME=@GETENV("DBENVIRONMENT","DBNAME"), &
REP_GROUP=@GETENV ("GGENVIRONMENT", "GROUPNAME"), &
REP_TIME=@DATE("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV ("JULIANTIMESTAMP")), &
UPDATE_TS=@DATENOW()
END;
-- *****************************************
-- Application Specific Macros for Heartbeat
-- *****************************************
MACRO #hb_info
PARAMS (#src_schema)
BEGIN
MAP #src_schema.SETTINGS,TARGET <target schema>.GG_STATUS_HB, &
KEYCOLS (SITE_ID), &
INSERTMISSINGUPDATES, &
COLMAP ( #opshb_rep_details() );
MAP #src_schema.SETTINGS,TARGET <target schema>.GG_STATUS_HB_HIST, &
KEYCOLS (SITE_ID, DB_SCHEMA, CSN_TS, CSN_NB, UPDATE_TS), &
INSERTALLRECORDS, HANDLECOLLISIONS &
COLMAP ( #opshb_rep_details() );
END;

For each of the target heartbeat tables, a trigger is needed to calculate the lag and timestamp information.  The triggers that I used/created are based on the Oracle-supplied examples in note 1299679.1.  To keep this blog somewhat short, I'm just going to include one sample of the triggers used.

Trigger Example:


CREATE OR REPLACE TRIGGER <target schema>.GG_STATUS_HB_TRIG
BEFORE INSERT OR UPDATE ON <target schema>.GG_STATUS_HB
FOR EACH ROW
BEGIN
SELECT TRUNC(TO_NUMBER(SUBSTR((:NEW.SRC_EXT_TIME - :NEW.SRC_CSN_TS),1, INSTR(:NEW.SRC_EXT_TIME - :NEW.SRC_CSN_TS,' ')))) * 86400
+ TO_NUMBER(SUBSTR((:NEW.SRC_EXT_TIME - :NEW.SRC_CSN_TS), INSTR((:NEW.SRC_EXT_TIME - :NEW.SRC_CSN_TS),' ')+1,2)) * 3600
+ TO_NUMBER(SUBSTR((:NEW.SRC_EXT_TIME - :NEW.SRC_CSN_TS), INSTR((:NEW.SRC_EXT_TIME - :NEW.SRC_CSN_TS),' ')+4,2) ) * 60
+ TO_NUMBER(SUBSTR((:NEW.SRC_EXT_TIME - :NEW.SRC_CSN_TS), INSTR((:NEW.SRC_EXT_TIME - :NEW.SRC_CSN_TS),' ')+7,2))
+ TO_NUMBER(SUBSTR((:NEW.SRC_EXT_TIME - :NEW.SRC_CSN_TS), INSTR((:NEW.SRC_EXT_TIME - :NEW.SRC_CSN_TS),' ')+10,6)) / 1000000
INTO :NEW.EXT_LAG
FROM DUAL;
SELECT TRUNC(TO_NUMBER(SUBSTR((:NEW.SRC_PMP_TIME - :NEW.SRC_EXT_TIME),1, INSTR(:NEW.SRC_PMP_TIME - :NEW.SRC_EXT_TIME,' ')))) * 86400
+ TO_NUMBER(SUBSTR((:NEW.SRC_PMP_TIME - :NEW.SRC_EXT_TIME), INSTR((:NEW.SRC_PMP_TIME - :NEW.SRC_EXT_TIME),' ')+1,2)) * 3600
+ TO_NUMBER(SUBSTR((:NEW.SRC_PMP_TIME - :NEW.SRC_EXT_TIME), INSTR((:NEW.SRC_PMP_TIME - :NEW.SRC_EXT_TIME),' ')+4,2) ) * 60
+ TO_NUMBER(SUBSTR((:NEW.SRC_PMP_TIME - :NEW.SRC_EXT_TIME), INSTR((:NEW.SRC_PMP_TIME - :NEW.SRC_EXT_TIME),' ')+7,2))
+ TO_NUMBER(SUBSTR((:NEW.SRC_PMP_TIME - :NEW.SRC_EXT_TIME), INSTR((:NEW.SRC_PMP_TIME - :NEW.SRC_EXT_TIME),' ')+10,6)) / 1000000
INTO :NEW.PMP_LAG
FROM DUAL;
SELECT TRUNC(TO_NUMBER(SUBSTR((:NEW.TGT_REP_TIME - :NEW.SRC_PMP_TIME),1, INSTR(:NEW.TGT_REP_TIME - :NEW.SRC_PMP_TIME,' ')))) * 86400
+ TO_NUMBER(SUBSTR((:NEW.TGT_REP_TIME - :NEW.SRC_PMP_TIME), INSTR((:NEW.TGT_REP_TIME - :NEW.SRC_PMP_TIME),' ')+1,2)) * 3600
+ TO_NUMBER(SUBSTR((:NEW.TGT_REP_TIME - :NEW.SRC_PMP_TIME), INSTR((:NEW.TGT_REP_TIME - :NEW.SRC_PMP_TIME),' ')+4,2) ) * 60
+ TO_NUMBER(SUBSTR((:NEW.TGT_REP_TIME - :NEW.SRC_PMP_TIME), INSTR((:NEW.TGT_REP_TIME - :NEW.SRC_PMP_TIME),' ')+7,2))
+ TO_NUMBER(SUBSTR((:NEW.TGT_REP_TIME - :NEW.SRC_PMP_TIME), INSTR((:NEW.TGT_REP_TIME - :NEW.SRC_PMP_TIME),' ')+10,6)) / 1000000
INTO :NEW.REP_LAG
FROM DUAL;
SELECT TRUNC(TO_NUMBER(SUBSTR((:NEW.TGT_REP_TIME - :NEW.SRC_CSN_TS),1, INSTR(:NEW.TGT_REP_TIME - :NEW.SRC_CSN_TS,' ')))) * 86400
+ TO_NUMBER(SUBSTR((:NEW.TGT_REP_TIME - :NEW.SRC_CSN_TS), INSTR((:NEW.TGT_REP_TIME - :NEW.SRC_CSN_TS),' ')+1,2)) * 3600
+ TO_NUMBER(SUBSTR((:NEW.TGT_REP_TIME - :NEW.SRC_CSN_TS), INSTR((:NEW.TGT_REP_TIME - :NEW.SRC_CSN_TS),' ')+4,2) ) * 60
+ TO_NUMBER(SUBSTR((:NEW.TGT_REP_TIME - :NEW.SRC_CSN_TS), INSTR((:NEW.TGT_REP_TIME - :NEW.SRC_CSN_TS),' ')+7,2))
+ TO_NUMBER(SUBSTR((:NEW.TGT_REP_TIME - :NEW.SRC_CSN_TS), INSTR((:NEW.TGT_REP_TIME - :NEW.SRC_CSN_TS),' ')+10,6)) / 1000000
INTO :NEW.TOTAL_LAG
FROM DUAL;
SELECT round((:NEW.TOTAL_LAG/60),1) INTO :NEW.TOTAL_LAG_MIN FROM DUAL;
SELECT SYSTIMESTAMP INTO :NEW.UPDATE_TS FROM DUAL;
END;
/

After all these items are in place, the next thing to do is update the parameter files.  As with any macros, the parameter files have to reference the macro library via an INCLUDE statement.  Then the mapping for the SETTINGS table has to be added to each parameter file before restarting the process, as sketched below.  More information on macros can be found here.
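As a rough illustration (the include path is a placeholder, not the client's actual configuration), the extract parameter file would reference the macro library and assign the tokens on the SETTINGS table along these lines:

-- Fragment of an extract parameter file (names are hypothetical)
INCLUDE ./dirprm/hb_ext_macros.prm

TABLE <schema>.SETTINGS, TOKENS ( #opshb_info() );
TABLE <schema>.*;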

The biggest change that has to be made to existing processes comes in the pump process.  The pump has to be configured as PASSTHRU for all tables except the SETTINGS table.  In order to assign tokens to the SETTINGS table, the pump needs to be put in NOPASSTHRU mode for that table.  To do this, the pump parameter file needs to be updated with something similar to this approach:

NODYNAMICRESOLUTION
WILDCARDRESOLVE IMMEDIATE
NOPASSTHRU
TABLE <schema>.SETTINGS, TOKENS ( #opshb_info() );

DYNAMICRESOLUTION
WILDCARDRESOLVE DYNAMIC
PASSTHRU
TABLEEXCLUDE <schema>.SETTINGS;

TABLE <schema>.*;

With everything in place and the processes restarted, the crontab job should be updating the heartbeat on the scheduled interval.  Reviewing the GG_STATUS_HB table gives you a quick view of the application heartbeat.  Looking at the GG_STATUS_HB_HIST table gives you a sense of how much lag is happening within your GoldenGate environment over time, without the overhead of a separate heartbeat process.
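A quick check might look like the sketch below, using only columns from the DDL shown earlier; the ordering is just an example.

-- Most recent heartbeat and lag (in minutes) per site
SELECT site_id, db_schema, ext_group, pmp_group, total_lag_min, update_ts
FROM <target schema>.GG_STATUS_HB
ORDER BY total_lag_min DESC;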

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Don’t go directly to Maximum Protection!

The Oracle Instructor - Mon, 2014-08-25 04:14

With a Data Guard Configuration in Maximum Performance protection mode, don’t go to Maximum Protection directly, because that leads to a restart of the primary database:

 Attention!

DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxPerformance
  Databases:
  prima  - Primary database
    physt  - Physical standby database
      physt2 - Physical standby database (receiving current redo)

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> edit configuration set protection mode as maxprotection;
Operation requires shutdown of instance "prima" on database "prima"
Shutting down instance "prima"...
Database closed.
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "prima" on database "prima"
Starting instance "prima"...
ORACLE instance started.
Database mounted.
Database opened.

Instead, go to Maximum Availability first and then to Maximum Protection:

DGMGRL> edit configuration set protection mode as maxperformance;
Succeeded.
DGMGRL> edit configuration set protection mode as maxavailability;
Succeeded.
DGMGRL> edit configuration set protection mode as maxprotection;
Succeeded.

The demo was done with 12c, involving a cascading standby database, but the behavior is the same in 11g already. The odd thing about it is that DGMGRL restarts the primary without warning. I had wanted to share that with the Oracle community for years but somehow never got around to it.


Tagged: Data Guard, High Availability
Categories: DBA Blogs

ASM Commands : 2 -- Migrating a DiskGroup to New Disk(s)

Hemant K Chitale - Sun, 2014-08-24 08:52
In 11gR2 Grid Infrastructure and RAC

After the previous demonstration of adding a new DiskGroup, I now demonstrate migrating the DiskGroup to a new pair of disks.

First, I create a table in the Tablespace on that DiskGroup.

[root@node1 ~]# su - oracle
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:17:28 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: hemant/hemant

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> create table new_tbs_tbl
2 tablespace new_tbs
3 as select * from dba_objects
4 /

Table created.

SQL> select segment_name, bytes/1048576
2 from user_segments
3 where tablespace_name = 'NEW_TBS'
4 /

SEGMENT_NAME
--------------------------------------------------------------------------------
BYTES/1048576
-------------
NEW_TBS_TBL
9


SQL> select file_name, bytes/1048576
2 from dba_data_files
3 where tablespace_name = 'NEW_TBS'
4 /

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1048576
-------------
+DATA3/racdb/datafile/new_tbs.256.855792859
100


SQL>


Next, I verify that the DiskGroup is currently on disk asmdisk.7 and that the two new disks that I plan to migrate the DiskGroup to are available as asmdisk.8 and asmdisk.9  (yes, unfortunately, they are on /fra, instead of /data1 or /data2 because I have run out of disk space in /data1 and /data2).
This I do from node1 :

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
-sh-3.2$ exit
logout

[root@node1 ~]#
[root@node1 ~]# su - grid
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:22:32 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> l
1 select d.name, d.path
2 from v$asm_disk d, v$asm_diskgroup g
3 where d.group_number=g.group_number
4* and g.name = 'DATA3'
SQL> /

NAME
------------------------------
PATH
--------------------------------------------------------------------------------
DATA3_0000
/data1/asmdisk.7


SQL>
SQL> !sh
sh-3.2$ ls -l /fra/asmdisk*
-rwxrwxr-x 1 grid oinstall 1024000000 Aug 24 22:06 /fra/asmdisk.8
-rwxrwxr-x 1 grid oinstall 1024000000 Aug 24 22:07 /fra/asmdisk.9
sh-3.2$


Note how the ownership and permissions are set for the two new disks (see my previous post).

I now add the two new disks.

sh-3.2$ exit
exit

SQL> show parameter power

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
asm_power_limit integer 1
SQL> alter diskgroup data3 add disk '/fra/asmdisk.8', '/fra/asmdisk.9';

Diskgroup altered.

SQL>
SQL> select * from v$asm_operation;

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE
------------ ----- ---- ---------- ---------- ---------- ---------- ----------
EST_MINUTES ERROR_CODE
----------- --------------------------------------------
3 REBAL RUN 1 1 1 101 60
1


SQL>


With ASM_POWER_LIMIT set to 1, Oracle ASM automatically starts a REBALANCE operation.  However, since I did *not* drop the existing asmdisk.7, Oracle will still continue to use it.

After a while, I confirm that the REBALANCE has completed.  I can now drop asmdisk.7.  Unfortunately, this will trigger a new REBALANCE !

SQL> l
1* select * from v$asm_operation
SQL> /

no rows selected

SQL>
SQL> l
1 select d.name, d.path
2 from v$asm_disk d, v$asm_diskgroup g
3 where d.group_number=g.group_number
4* and g.name = 'DATA3'
SQL> /

NAME
------------------------------
PATH
--------------------------------------------------------------------------------
DATA3_0000
/data1/asmdisk.7

DATA3_0002
/fra/asmdisk.9

DATA3_0001
/fra/asmdisk.8


SQL>
SQL> alter diskgroup data3 drop disk '/data1/asmdisk.7';
alter diskgroup data3 drop disk '/data1/asmdisk.7'
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15054: disk "/DATA1/ASMDISK.7" does not exist in diskgroup "DATA3"


SQL> alter diskgroup data3 drop disk 'DATA3_0000';

Diskgroup altered.

SQL>
SQL> select * from v$asm_operation;

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE
------------ ----- ---- ---------- ---------- ---------- ---------- ----------
EST_MINUTES ERROR_CODE
----------- --------------------------------------------
3 REBAL RUN 1 1 2 102 120
0


SQL>
SQL> l
1* select * from v$asm_operation
SQL>
SQL> /

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE
------------ ----- ---- ---------- ---------- ---------- ---------- ----------
EST_MINUTES ERROR_CODE
----------- --------------------------------------------
3 REBAL RUN 1 1 47 101 95
0


SQL> /

no rows selected

SQL>


NOTE: Note how I must specify the disk NAME (not the PATH) for the DROP. When I added disks asmdisk.8 and asmdisk.9, I could have given them meaningful names as well; Oracle has automatically named them.

Ideally, what I should have done is use the ADD and DROP commands together in a single statement.  That way, only a single REBALANCE pass would have been required, as sketched below.
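A sketch of that single-pass approach; the disk names and the optional REBALANCE POWER clause are my additions rather than part of the original demo:

alter diskgroup data3
  add disk '/fra/asmdisk.8' name data3_disk8,
           '/fra/asmdisk.9' name data3_disk9
  drop disk DATA3_0000
  rebalance power 4;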

After a while, I run my validation queries on node2.


[root@node2 ~]# su - grid
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:42:39 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select d.name, d.path
from v$asm_disk d, v$asm_diskgroup g
where d.group_number=g.group_number
and g.name = 'DATA3' 2 3 4
5
SQL> l
1 select d.name, d.path
2 from v$asm_disk d, v$asm_diskgroup g
3 where d.group_number=g.group_number
4* and g.name = 'DATA3'
SQL> /

NAME
------------------------------
PATH
--------------------------------------------------------------------------------
DATA3_0002
/fra/asmdisk.9

DATA3_0001
/fra/asmdisk.8


SQL>
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$ exit
logout

[root@node2 ~]# su - oracle
-sh-3.2$
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:44:10 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: hemant/hemant

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select count(*) from new_tbs_tbl;

COUNT(*)
----------
72460

SQL>


I have now accessed the table, tablespace, diskgroup and disks from node2 successfully. Disk asmdisk.7 is no longer part of the DiskGroup.

I can physically remove disk asmdisk.7 from the storage.


SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$ exit
logout

[root@node1 ~]# cd /data1
[root@node1 data1]# ls -l asmdisk.7
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 24 22:39 asmdisk.7
[root@node1 data1]# rm asmdisk.7
rm: remove regular file `asmdisk.7'? y
[root@node1 data1]#
[root@node1 data1]# su - grid
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:50:18 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> set pages 60
SQL> col name format a15
SQL> col path format a20
SQL> select group_number, name, path
2 from v$asm_disk
3 order by 1,2;

GROUP_NUMBER NAME PATH
------------ --------------- --------------------
0 /crs/voting.disk
0 /data1/votedisk.1
0 /data2/votedisk.2
0 /fra/votedisk.3
1 DATA1_0000 /data1/asmdisk.1
1 DATA1_0001 /data2/asmdisk.4
2 DATA2_0000 /data1/asmdisk.2
2 DATA2_0001 /data2/asmdisk.5
2 DATA2_0002 /data2/asmdisk.6
3 DATA3_0001 /fra/asmdisk.8
3 DATA3_0002 /fra/asmdisk.9
4 DATA_0000 /crs/ocr.configurati
on

5 FRA_0000 /fra/fradisk.3
5 FRA_0001 /fra/fradisk.2
5 FRA_0002 /fra/fradisk.1
5 FRA_0003 /fra/fradisk.4

16 rows selected.

SQL>

The disk asmdisk.7 is no longer part of the storage. (Remember: all my disks here are on NFS).
.
.
.
Categories: DBA Blogs