
DBA Blogs

EMEA Exadata, Manageability & Hardware Partner Forum

Oracle EMEA Exadata, Manageability, Servers & Storage Partner Community Forum ...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Long parse time with interval partitioning and subpartitions

Bobby Durrett's DBA Blog - Mon, 2015-02-09 13:52

In an earlier post I described how some queries with long parse times were causing long library cache lock waits.  Friday I applied a patch from Oracle that resolved the long parse times.  Here are the conditions which may relate to this bug:

  1. Interval partitioned table
  2. Partitioned by range
  3. Sub-partitioned by list
  4. Open ended range on partitioning column in where clause
  5. Tens of thousands of sub-partitions
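To make these conditions concrete, here is a sketch of a table shaped like the one described (all names and values are hypothetical):

create table sales_fact
( clndr_dt  date,
  region_cd varchar2(10),
  amount    number
)
partition by range (clndr_dt) interval (numtodsinterval(1,'DAY'))
subpartition by list (region_cd)
subpartition template
( subpartition sp_east  values ('EAST'),
  subpartition sp_west  values ('WEST'),
  subpartition sp_other values (default)
)
( partition p0 values less than (date '2014-01-01') );

With daily intervals and a subpartition template like this, the sub-partition count grows by the template size every day, so tens of thousands of sub-partitions accumulate quickly.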

Prior to applying the patch I did an explain plan on two versions of the problematic query.  One version specified an open-ended range on the partitioning column and the other closed off the range.

Slow version:

...
T392658.CLNDR_DT >= TO_DATE('2014-11-17' , 'YYYY-MM-DD') and 
...
Explained.

Elapsed: 00:00:46.20

Fast version:

...
T392658.CLNDR_DT >= TO_DATE('2014-11-17' , 'YYYY-MM-DD') and
T392658.CLNDR_DT <= TO_DATE('2014-11-26' , 'YYYY-MM-DD') and
...
Explained.

Elapsed: 00:00:00.09

The queries are the same except for the extra date condition which closes off the date range.  Note that the explain plan took 46 seconds with the open-ended range and less than a tenth of a second with the closed-off range.

With the patch the slow version is just as fast as the fast one.

This is bug 20061582 in Oracle’s system.

From my conversations with Oracle, it sounds like in certain cases the optimizer reviews information for many or all of the sub-partitions, and since we have 20,000 of them for this table in production, the parse can take minutes.  I also messed with the dates on the open-ended condition and found that if I made the date early enough the parse time issue went away.  So, it seems there is some set of conditions, which I don’t know how to easily reproduce, that leads the optimizer to look at sub-partition information and slows the parse.
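A quick dictionary query shows how many sub-partitions a table has accumulated (owner and table name here are hypothetical):

select count(*)
from dba_tab_subpartitions
where table_owner = 'MYAPP'
and table_name = 'MY_INTERVAL_TABLE';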

This is on 11.2.0.4 on an Exadata system.  I was able to reproduce the long parse times on non-Exadata Linux 11.2.0.4 and 12.1.0.2 systems so it does not occur only on Exadata.

This issue surprised me because I thought that the optimizer would not look at partition or sub-partition statistics in a query that has a range condition which spanned more than one partition.  In the past I have always seen the global or table level stats used in these situations.  But, now I know that there are some cases where the optimizer will dig into the sub-partition stats even though the query covers more than one partition.

Looking at the plan of my slow-parsing query before and after the patch, I get a clue that the optimizer is looking at sub-partition stats:

Partition range without the patch:

KEY(SQ)|KEY(SQ)

Partition range with the patch:

     1 |   136

Evidently KEY(SQ) relates to some sort of filtering of the sub-partitions which causes the long parse time.  The manuals describe KEY(SQ) in a section titled “Dynamic Pruning with Subqueries”, so maybe the problem queries have some sub-query that the optimizer was using to choose which sub-partitions the query needed.

If you have an interval partitioned table with tens of thousands of sub-partitions, and parse times run into minutes when open-ended ranges are specified on the partitioning column, your system could be hitting this same issue.

– Bobby

Categories: DBA Blogs

Database Flashback -- 2

Hemant K Chitale - Sun, 2015-02-08 09:05
Continuing my series on Oracle Database Flashback.
(My first post on this topic was a week ago).


I create a Restore Point :

[oracle@localhost Hemant]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Feb 8 22:55:28 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>create restore point Feb08_15;

Restore point created.

SYS>
SYS>show parameter db_flashback_retention_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target        integer     1440
SYS>

Oracle has captured a Restore Point. Note that this is *not* a Guaranteed Restore Point.  The default Flashback Retention Target is 1440 minutes (24 hours): Oracle maintains Flashback Logs with the *target* of being able to flash back to 24 hours ago.  If the FRA is too small to hold all the Flashback Logs created over 24 hours, some Flashback Logs may be deleted.
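For contrast, a Guaranteed Restore Point pins the Flashback Logs in the FRA regardless of the retention target. It would be created like this (a sketch only; I have not created one in this session):

SYS>create restore point Feb08_15_gtd guarantee flashback database;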

At any time, I can query my Flashback-ability status.

SYS>  
SYS>select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
            14569609 01-FEB-15             1440       24576000                        0

SYS>

I had enabled Flashback on 01-Feb and there has been no activity in this database since then.  So, I currently have Flashback Logs going all the way back to the time they were first created.

Therefore: it is not a hard-and-fast rule that you can flash back exactly to the Flashback Retention Target.  You might be able to flash back further in time on an inactive database.  On the other hand, on a very active database with an inadequate FRA, you might NOT be able to flash back as far as the Flashback Retention Target.
.
.
.
Categories: DBA Blogs

Slimming Down Oracle RAC 12c’s Resource Footprint

Pythian Group - Sun, 2015-02-08 09:00

I’ve been working on setting up a demo for my upcoming presentation on application continuity at RMOUG Training Days later this month. The challenge is to get a multi-node cluster, plus a load generator and a host OS, to fit on a memory-constrained laptop.

According to the Oracle grid installation guide, 4GB per virtual host is the minimum requirement. However, with a few tweaks, I’ve been able to get the full stack to run in 2GB of memory. For anyone else out there installing 12c clusters into virtual machines, here are a few tips.

But first the disclaimer: these changes are mostly unsupported by Oracle, and intended for test and demo databases. They can potentially cause unexpected behaviour, hangs, or crashes. Don’t try this in production!

  • Grid Infrastructure management repository database (GIMR): This is a full Oracle database that stores operating system workload metrics generated by the cluster health monitor, for use by Oracle QoS Management and for troubleshooting. Being a full database, it has a large memory and CPU footprint. I originally installed Oracle 12.1.0.1 skipping the database on install, and upgraded to 12.1.0.2 without it. However, it looks like it’s no longer possible to skip the installation in the GUI. My colleague Gleb suggests adding -J-Doracle.install.mgmtDB=false on the installer command line to skip it.
  • Cluster health monitor (CHM): this tool collects a myriad of workload-related metrics to store in the GIMR. And it uses a surprisingly high amount of CPU: it was the top CPU consumer in my VM before removal. It can be disabled fairly easily, with a hat tip to rocworks:

    $ crsctl stop res ora.crf -init
    # crsctl delete res ora.crf -init
  • Trace File Analyzer Collector (TFA): collects log and trace files from all nodes and products into a single location. Unfortunately it’s written in Java with its own Java Virtual Machine, again requiring a large memory footprint for the heap, etc. It can be removed with a single command, though note that next time you run rootcrs.pl (when patching, for example) it will reinstall itself.

    # tfactl uninstall
  • Cluster Verification Utility (CVU): As you install Oracle Grid Infrastructure, the CVU tool automatically runs, pointing out configuration issues that may affect system operation (such as running under 4GB of RAM). In Oracle 12.1.0.2, it also gets scheduled to run automatically every time the cluster is started, and periodically after that. The CVU and its checks consume CPU and RAM, and are better run manually when such resources are limited. It’s also a quick removal:

    $ srvctl cvu stop
    $ srvctl cvu disable
  • OC4J: Every Oracle 12c grid infrastructure install contains OC4J, Oracle’s old Java J2EE web application server, since replaced with WebLogic. And no, please don’t make me install WebLogic too now, Oracle! I’m honestly not sure what it’s used for, but I’ve been able to disable it without any obvious ill effects:

    $ srvctl stop oc4j
    $ srvctl disable oc4j
  • ASM memory target: as of 12c, the ASM instance has a default memory target of 1 gigabyte, a big jump from the 256MB of Oracle 11g. And if you set a lower target, you’ll find it’s ignored unless it’s overridden with a hidden parameter. I’ve set it to 750MB with good results, and it can possibly be set even lower for light-utilization workloads:

    $ sqlplus "/ as sysasm"
    alter system set "_asm_allow_small_memory_target"=true scope=spfile;
    alter system set memory_target=750m scope=spfile;
    alter system set memory_max_target=750m scope=spfile;
    exit
    # service ohasd stop
    # service ohasd start
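After the restart, a quick check confirms the smaller target took effect (a sketch):

    $ sqlplus "/ as sysasm"
    show parameter memory_target
    exit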

A non-memory issue I’ve run into is the VKTM (virtual keeper of time) background process using large amounts of CPU time in both ASM and database instances. I’ve noticed it to be especially pronounced in virtual environments, and in Oracle Enterprise Linux 6. I’ve ended up disabling it completely without obvious ill effects, but as always, don’t try this on your “real” production clusters.

alter system set "_disable_highres_ticks"=TRUE scope=spfile;

(Hat tip to MOS community discussion 3252157, also this IBM slide deck)

Additionally, Jeremy Schneider has taken on the biggest remaining GI memory user, the Oracle cluster synchronization service daemon (OCSSD). This is an important cluster management process, and Jeremy figured out a way to unlock its memory in the gdb debugger, allowing it to be swapped out. My own tests were less successful: the process wasn’t swapped out even after trying his changes. But his blog post is worth a read, and others may have more success than I did.

I also noted that during the link phase of installation and patching, the ld process alone takes over 1GB of RAM. So either shut down clusterware or add swap and wait while linking.

So to wrap up, I’ve managed to get a full Oracle GI 12.1.0.2 stack including database to run in a virtual machine with 2GB RAM. Readers, any other tips to put the goliath that is Oracle GI on a diet?

Categories: DBA Blogs

Partner Webcast – Enterprise Mobility: Remote Data Synchronization in Oracle Mobile Application Framework

Digital disruption – you have probably heard this concept quite a few times recently, and for a reason. Disruption means changing old ways of doing things, especially in business. Many...

We share our skills to maximize your revenue!
Categories: DBA Blogs

The Next-Generation of Oracle Engineered Systems: New Markets, New Opportunities for Partners

The Data Center of the Future: Low Cost and Engineered for Innovation

On January 21st Larry Ellison announced the next generation of Oracle Engineered Systems as the cornerstone of the Oracle...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Steps to Blackout Agent of Cloud Control 12c

Pakistan's First Oracle Blog - Wed, 2015-02-04 18:09
1) Set the environment for the Cloud Control agent. You can get the agent name from the /etc/oratab file.

myserver: $ . oraenv
ORACLE_SID = [ORCL] ? agent12c

2) Check which targets are being monitored by the cloud control agent on this server:

myserver: $ emctl config agent listtargets
Oracle Enterprise Manager Cloud Control 12c Release 4 
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
[MYSERVER, host]
[MYSERVER:3872, oracle_emd]
[ORCL, oracle_database]

3) Check if there is any existing blackout of agent on this server:

myserver: $ emctl status blackout
Oracle Enterprise Manager Cloud Control 12c Release 4 
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
No Blackout registered.

4) Start the blackout:

myserver: $ emctl start blackout orcl_down_20150204 ORCL:oracle_database
Oracle Enterprise Manager Cloud Control 12c Release 4 
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
Blackout orcl_down_20150204 added successfully
EMD reload completed successfully

5) Again check the status of the blackout:

myserver: $ emctl status blackout
Oracle Enterprise Manager Cloud Control 12c Release 4 
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
Blackoutname = orcl_down_20150204
Targets = (ORCL:oracle_database,)
Time = ({2015-02-04|16:51:37,|} )
Expired = False

6) Stop the blackout:

myserver: $ emctl stop blackout orcl_down_20150204
Oracle Enterprise Manager Cloud Control 12c Release 4 
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
Blackout orcl_down_20150204 stopped successfully
EMD reload completed successfully

7) Again check the status of blackout:

myserver: $ emctl status blackout
Oracle Enterprise Manager Cloud Control 12c Release 4 
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
No Blackout registered.
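As an aside: when the whole host is going down for maintenance, the agent can blackout every target on the node at once with a node-level blackout. This is a sketch based on the emctl help syntax; the blackout name is made up:

myserver: $ emctl start blackout host_maint_20150204 -nodeLevel
myserver: $ emctl stop blackout host_maint_20150204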
Categories: DBA Blogs

Just a few days left to submit abstracts for GLOC 2015

Grumpy old DBA - Wed, 2015-02-04 14:09
The Cleveland based Great Lakes Oracle Conference 2015 is going to be a big event.  Hoping to pass 350 in attendance, it’s going to be two days of track-based sessions and a half day of workshops on Monday.

Full details here GLOC 2015

We have a great set of abstracts in already, and we want you to consider submitting one before we stop accepting them (February 9th, 2015 is the last day).  Please consider attending even if you do not submit a presentation proposal!

Call for abstracts is GLOC 2015 abstract call

Thanks! John
Categories: DBA Blogs

How To Approach Different Oracle Database Performance Problems

Jump Start Your Oracle Database Tuning Effort
Every Oracle Database Administrator will tell you no two performance problems are the same. But a seasoned Oracle DBA recognizes there are similarities...patterns. Fast problem pattern recognition allows us to minimize diagnosis time, so we can focus on developing amazing solutions.

I tend to group Oracle performance problems into four patterns. Quickly exploring these four patterns is what this article is all about.


You Can Not Possibly List Every Problem And Solution
When I teach, some Oracle Database Administrators want me to outline every conceivable problem along with its solution. Not only is the thought of this exhausting, it's not possible. Even my Stori product uses pattern matching. One of the keys to becoming a fantastic performance analyst is the ability to quickly look at a problem and then decide which diagnostic approach is best. For example, if you don't know the problem SQL (assuming there is one), tracing is not likely to be your best approach.

The Four Oracle Database Performance Patterns
Here are the four performance patterns I tend to group problems into.

The SQL Is Known
Many times there is a well-known SQL statement that is responsible for the poor performance. While I will always do a quick Oracle Time Based Analysis (see below) and verify the accused SQL, I will directly attack this problem by tuning with SQL-specific diagnostic and tuning tools.

But... I will also ask a senior application user if the users are using the application correctly. Sometimes new application users try to use a new application like their old application. It's like trying to drive a car by moving your feet as if you were riding a bicycle... it's not going to work, and it's dangerous!

Business Process Specific
I find that when the business is seriously affected by application performance issues, that's when the "limited budget" is suddenly not so limited. When managers and their businesses are affected, they want action.

When I'm approached to help solve a problem, I always ask how the business is being affected. If I keep hearing about a specific business process or application module I know two things.

First, there are many SQL statements involved. Second, the problem is bounded by a business process or application. This is when I start the diagnostic process with an Oracle Time Based Analysis approach, which will result in multiple solutions to the same problem.

As I teach in my online seminar How To Tune Oracle With An AWR Report, users feel performance through time. So, if our analysis is time based, we can create a quantitative link between our analysis and their experience. If our analysis creates solutions that reduce time, then we can expect the user experience to improve. This, combined with my "3 Circle" approach, yields spot-on solutions very quickly.

While an Oracle Time Based Analysis is amazing, because Oracle does not instrument CPU consumption we can't answer the question, "What's Oracle doing with all that CPU?" If you want to drill into this topic check out my online seminar, Detailing Oracle CPU Consumption: The Missing Link.

It's Just Slow
How many times have I experienced this... It's Just Slow!


If what the user is attempting to explain is true, the performance issue is affecting a wide range of business processes. The problem is probably not a single issue (but could be), and clearly the key SQL is not known. Again, this is a perfect problem scenario to apply an Oracle Time Based Analysis.

The reason I say this is because an OTBA will look at the problem from multiple perspectives, categorize Oracle time, and develop solutions to reduce those big categories of time. If you also do a Unit Of Work Time Based Analysis, you can even anticipate the impact of your solutions! Do an OraPub website search HERE or search my blog for UOWTBA.

Random Incident That Quickly Appears And Vanishes
This is the most difficult problem to fix. Mainly because the problem "randomly" appears and can't be duplicated. (Don't even bother calling Oracle Support to help in this situation.) Furthermore, it's too quick for an AWR report to show its activity, and you don't want to impact the production system by gathering tons of detailed performance statistics.

Even a solid Oracle Time Based Analysis will likely not help in this situation. Again, the problem is performance data collection and retention. The instrumented AWR or Statspack data does not provide enough detail. What we need is step-by-step activity... like a timeline.

Because this type of problem scares both DBAs and business managers, you will likely need to answer questions like this:

  • What is that blip all about?
  • Did this impact users?
  • Has it happened before?
  • Will it happen again?
  • What should we do about it?

The only way I know how to truly diagnose a problem like this is to do a session-level time-line analysis. Thankfully, this is possible using the Oracle Active Session History data. Both v$active_session_history and dba_hist_active_sess_history are absolutely key in solving problems like this.

ASH samples Oracle Database session activity once each second (by default). This is very different than measuring how long something takes, which is the data an AWR report is based upon. Because sampling is non-continuous, a lot of detail can be collected, stored and analyzed.
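For example, a minimal time-line style query against ASH could look like the sketch below (the time window is made up; note that querying v$active_session_history requires the Diagnostics Pack license):

select to_char(sample_time,'HH24:MI:SS') sample_tm,
       session_id, session_serial#, sql_id, event
from v$active_session_history
where sample_time between timestamp '2015-02-04 15:15:00'
                      and timestamp '2015-02-04 15:35:00'
order by sample_time, session_id;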

A time-line type of analysis is so important, I enhanced my ASH tools in my OraPub System Monitor (OSM) toolkit to provide this type of analysis. If you want to check them out, download the OSM toolkit HERE, install it and read the osm/interactive/ash-readme.txt file.

As an example, using these tools you can construct an incident time-line like this:

HH:MM:SS.FFF User/Process Notes
------------ ------------- -----------------
15:18:28.796 suspect (837) started the massive update (see SQL below)

15:28:00.389 user (57) application hung (row lock on TM_SHEET_LINE_EXPLOR)
15:28:30.486 user (74) application hung (row lock on TM_SHEET_LINE_EXPLOR)
15:29:30.??? - row locks becomes the top wait event (16 locked users)
15:29:50.749 user (83) application hung (row lock on TM_SHEET_LINE_EXPLOR)

15:30:20.871 user (837) suspect broke out of update (implied)
15:30:20.871 user (57) application returned
15:30:20.871 user (74) application returned
15:30:20.871 user (83) application returned

15:30:30.905 smon (721) first smon action since before 15:25:00 (os thread startup)
15:30:50.974 user (837) first wait for undo - suspect broke out of update
15:30:50.974 - 225 active session, now top event (wait for a undo record)

15:33:41.636 smon (721) last PQ event (PX Deq: Test for msg)
15:33:41.636 user (837) application returned to suspect. Undo completed
15:33:51.670 smon (721) last related event (DFS lock handle)

Without ASH, seemingly random problems would be a virtually impossible nightmare scenario for an Oracle DBA.

Summary
It's true. You need the right tool for the job. And the same is true when diagnosing Oracle Database performance. What I've done above is group probably 90% of the problems we face as Oracle DBAs into four categories. And each of these categories needs a special kind of tool and/or diagnosis method.

Once we recognize the problem pattern and get the best tool/method involved to diagnose the problem, then we will know the time spent developing amazing solutions is time well spent.

Enjoy your work!

Craig.


Categories: DBA Blogs

Exadata Vulnerability

Pakistan's First Oracle Blog - Mon, 2015-02-02 19:49
This Exadata vulnerability is related to glibc vulnerability. A heap-based buffer overflow was found in glibc's __nss_hostname_digits_dots() function, which is used by the gethostbyname() and gethostbyname2() glibc function calls.

A remote attacker able to make an application call either of these functions could use this flaw to execute arbitrary code with the permissions of the user running the application.

In order to check if your Exadata system suffers from this vulnerability, use:

[root@server ~]# ./ghostest-rhn-cf.sh
vulnerable
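On a generic Linux host, the remediation is to update glibc and reboot so every running process picks up the patched library. A sketch for a plain RHEL-style server; do not apply this blindly to Exadata nodes, which need the packages approved in the MOS note below:

[root@server ~]# yum update glibc
[root@server ~]# reboot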

The solution and action plan for this vulnerability is available from My Oracle Support in the following document:

glibc vulnerability (CVE-2015-0235) patch availability for Oracle Exadata Database Machine (Doc ID 1965525.1)
Categories: DBA Blogs

Database Flashback -- 1

Hemant K Chitale - Sun, 2015-02-01 09:25
A first post on Database Flashback.

Enabling Database Flashback in 11.2 non-RAC

SQL*Plus: Release 11.2.0.2.0 Production on Sun Feb 1 23:13:17 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select version, status, database_status
2 from v$instance;

VERSION           STATUS       DATABASE_STATUS
----------------- ------------ -----------------
11.2.0.2.0        OPEN         ACTIVE

SYS>select flashback_on, database_role
2 from v$database;

FLASHBACK_ON       DATABASE_ROLE
------------------ ----------------
NO                 PRIMARY

SYS>
SYS>show parameter db_recovery_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      /home/oracle/app/oracle/flash_
                                                 recovery_area
db_recovery_file_dest_size           big integer 3852M
SYS>
SYS>select * from v$flash_recovery_area_usage;

FILE_TYPE            PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
-------------------- ------------------ ------------------------- ---------------
CONTROL FILE                          0                         0               0
REDO LOG                              0                         0               0
ARCHIVED LOG                        .95                       .94               5
BACKUP PIECE                      28.88                       .12               5
IMAGE COPY                            0                         0               0
FLASHBACK LOG                         0                         0               0
FOREIGN ARCHIVED LOG                  0                         0               0

7 rows selected.

SYS>

So, the above output shows that the database is OPEN but Flashback is not enabled.
Let me enable Flashback now.
SYS>alter database flashback on;

Database altered.

SYS>select flashback_on from v$database;

FLASHBACK_ON
------------------
YES

SYS>select * from v$flash_recovery_area_usage;

FILE_TYPE            PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
-------------------- ------------------ ------------------------- ---------------
CONTROL FILE                          0                         0               0
REDO LOG                              0                         0               0
ARCHIVED LOG                        .95                       .94               5
BACKUP PIECE                      28.88                       .12               5
IMAGE COPY                            0                         0               0
FLASHBACK LOG                       .41                         0               2
FOREIGN ARCHIVED LOG                  0                         0               0

7 rows selected.

SYS>

Immediately after enabling Flashback, Oracle shows usage of the FRA for Flashback Logs.  Note: although 11.2 allows you to enable Flashback in an OPEN database, I would suggest doing so when the database is not active.
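If you want a longer flashback window, the retention target can be raised; the value is in minutes. A sketch (2880 minutes = 48 hours):

SYS>alter system set db_flashback_retention_target=2880;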

Categories: DBA Blogs

12c Dataguard: Restore Data File From Service

Oracle in Action - Fri, 2015-01-30 23:15


Starting with Oracle Database 12c, in a Data Guard environment, you can restore data files on a primary (standby) database by connecting to a standby (primary) database over the network.

RMAN restores database files, over the network, from the physical standby (primary) database by using the FROM SERVICE clause of the RESTORE command. The FROM SERVICE clause provides the service name of the physical standby (primary) database from which the files must be restored. During the restore operation, RMAN creates backup sets, on the physical standby (primary) database, of the files that need to be restored and then transfers these backup sets to the target database over the network.

Optionally, you can use SECTION SIZE to restore files from the source database as multisection backup sets. You can also compress the transferred files by specifying USING COMPRESSED BACKUPSET.
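For example, combining both options in one restore over the network might look like this (a sketch; the SECTION SIZE value is arbitrary):

RMAN> restore datafile 12 from service 'london'
      section size 500m using compressed backupset;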

Prerequisites for restoring files from a remote host:

  • The password file on the source database and the target database must be the same.
  • The tnsnames.ora file in the target database must contain an entry that corresponds to the remote database.
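For illustration, the tnsnames.ora entry on the primary pointing at this post's standby might look like the following sketch (host name and port are assumptions):

LONDON =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = host02)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = london))
  )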

In this post, I will demonstrate the restore of a data file on the primary from the standby, using the FROM SERVICE clause of the RMAN RESTORE command.

Current scenario:

  • Primary CDB : Boston
  • Physical Standby CDB : London
  • PDB : Dev1

– Create a new tablespace called sample in PDB dev1 on primary (boston)

BOSTON>alter session set container=dev1;
        create tablespace sample
        datafile       '/u01/app/oracle/oradata/boston/dev1/sample01.dbf'
         size 5m;

– Verify that parameter standby_file_management = auto
on standby database  (london)

LONDON>sho parameter standby_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      AUTO

– Verify that datafile for tablespace sample has been created on physical standby  (london)

LONDON>select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/london/system01.dbf
/u01/app/oracle/oradata/london/sysaux01.dbf
/u01/app/oracle/oradata/london/undotbs01.dbf
/u01/app/oracle/oradata/london/pdbseed/system01.dbf
/u01/app/oracle/oradata/london/users01.dbf
/u01/app/oracle/oradata/london/pdbseed/sysaux01.dbf
/u01/app/oracle/oradata/london/dev1/system01.dbf
/u01/app/oracle/oradata/london/dev1/sysaux01.dbf
/u01/app/oracle/oradata/london/dev1/SAMPLE_SCHEMA_users01.dbf
/u01/app/oracle/oradata/london/dev1/example01.dbf
/u01/app/oracle/oradata/london/dev1/sample01.dbf

– Create table hr.employees2 in new tablespace sample on primary

BOSTON>sho con_name

CON_NAME
------------------------------
DEV1

BOSTON>create table hr.employees2 tablespace sample
       as select * from hr.employees;
      select count(*) from hr.employees2;

  COUNT(*)
----------
       107

– To simulate loss of datafile, rename  sample01.dbf to sample01.sav on primary host

BOSTON>!mv /u01/app/oracle/oradata/boston/dev1/sample01.dbf /u01/app/oracle/oradata/boston/dev1/sample01.sav

– Restart primary – error while opening as datafile is missing

BOSTON>conn / as sysdba

       shu abort;
       startup
Database mounted.
ORA-01157: cannot identify/lock data file 12 - see DBWR trace file
 ORA-01110: data file 12: '/u01/app/oracle/oradata/boston/dev1/sample01.dbf'

– Take the missing datafile offline  on primary and then open primary database

BOSTON>alter session set container=dev1;
       alter tablespace sample datafile offline;
       alter session set container=cdb$root;
       alter database open;

BOSTON>sho pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 DEV1                           MOUNTED

BOSTON>alter pluggable database dev1 open;
-- Connect to primary (boston)  using RMAN
[oracle@host01 ~]$ . oraenv
ORACLE_SID = [boston] ?

[oracle@host01 ~]$ rman target /

-- Restore datafile from physical standby database (london) over network

RMAN> restore tablespace dev1:sample from service 'london';

Starting restore at 23-JAN-15
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service london
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00012 to /u01/app/oracle/oradata/boston/dev1/sample01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 23-JAN-15

– Recover the restored tablespace using archivelogs available
locally on primary database (boston)

RMAN> recover tablespace dev1:sample;

Starting recover at 23-JAN-15
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 23-JAN-15

– Bring tablespace online

BOSTON>alter session set container=dev1;
       alter tablespace sample datafile online;
       select count(*) from hr.employees2;

  COUNT(*)
----------
       107

I hope this post was useful.
Your comments and suggestions are always welcome.

References:

https://docs.oracle.com/database/121/RCMRF/rcmsynta2008.htm#RCMRF149

http://docs.oracle.com/database/121/BRADV/rcmadvre.htm#BRADV681




Categories: DBA Blogs

OTNYathra 2015

Oracle in Action - Fri, 2015-01-30 05:23


The Oracle ACE Directors and Java Champions will be organizing an evangelist event called ‘OTNYathra 2015’ during February 2015, during which a series of 7 conferences will be held across 7 major cities of India over a period of 2 weeks.  This event will bring the Oracle community together, spread the knowledge and increase the networking opportunities in the region. The detailed information about the event can be viewed at http://www.otnyathra.com.

I will be presenting a session on Adaptive Query Optimization on 13th Feb 2015 at FMDI, Sector 17B, IFFCO Chowk, Gurgaon.

Thanks to Sir Murali Vallath and his team for organizing it and giving me an opportunity to present.

Hope to see you there!!

 




Categories: DBA Blogs

Log Buffer #408, A Carnival of the Vanities for DBAs

Pythian Group - Thu, 2015-01-29 21:45

This Log Buffer Edition covers innovative blog posts from the fields of Oracle, MySQL and SQL Server. Enjoy!!!


Oracle:

A user reported an ORA-01114 and an ORA-27069 in a 3rd party application running against an Oracle 11.1 database.

Oracle SOA Suite 12c: Multithreaded instance purging with the Java API.

Oracle GoldenGate for Oracle Database has introduced several features in Release 12.1.2.1.0.

Upgrade to 12c and Plugin – one fast way to move into the world of Oracle Multitenant.

The Oracle Database Resource Manager (the Resource Manager) is an infrastructure that provides granular control of database resources allocated to users, applications, and services. The Oracle Database Resource Manager (RM) enables you to manage multiple workloads that are contending for system and database resources.

SQL Server:

Database ownership is an old topic for SQL Server pros.

Using T-SQL to Perform Z-Score Column Normalization in SQL Server.

The APPLY operator allows you to join a record set with a function, and apply the function to every qualifying row of the table (or view).

Dynamic Management Views (DMVs) are a significant and valuable addition to the DBA’s troubleshooting armory, laying bare previously unavailable information regarding the under-the-covers activity of your database sessions and transactions.

Grant Fritchey reviews Midnight DBA’s Minion Reindex, a highly customizable set of scripts that take on the task of rebuilding and reorganizing your indexes.

MySQL:

It’s A New Year – Take Advantage of What MySQL Has To Offer.

MySQL High Availability and Disaster Recovery.

MariaDB Galera Cluster 10.0.16 now available.

Multi-threaded replication with MySQL 5.6: Use GTIDs!

MySQL and the GHOST: glibc gethostbyname buffer overflow.

Categories: DBA Blogs

From 0 to Cassandra – An Exhaustive Approach to Installing Cassandra

Pythian Group - Thu, 2015-01-29 21:44

All around the Internet you can find lots of guides on how to install Cassandra on almost every Linux distro around. But normally all of this information is based on the packaged versions, and omits some parts that are deemed essential for proper Cassandra functioning.

Note: If you are adding a machine to an existing cluster, please approach this guide with caution and replace the configurations recommended here with the ones you already have on your cluster, especially the Cassandra configuration.

Without further conversation, let’s start!

Essentials

Start your machine and install the following:

  • ntp (Packages are normally ntp, ntpdate and ntp-doc)
  • wget (Unless you have your packages copied over via other means)
  • vim (Or your favorite text editor)

Retrieve the following packages

Installation
Set up NTP

This can be more or less dependent on your system, but the following commands should do it (you can check this guide also):

~$ chkconfig ntpd on
~$ ntpdate pool.ntp.org
~$ service ntpd start
Set up Java (Let’s assume we are doing this in /opt)

Extract Java and install it:

~$ tar xzf [java_file].tar.gz
~$ update-alternatives --install /usr/bin/java java /opt/java/bin/java 1 

Check that it is installed:

~$ java -version
java version '1.7.0_75'
Java(TM) SE Runtime Environment (build 1.7.0_75-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.75-b04, mixed mode)

Let’s put JNA into place

~$ mv jna-VERSION.jar /opt/java/lib
Set up Cassandra (Let’s assume we are doing this in /opt)

 

Extract Cassandra:

~$ tar xzf [cassandra_file].tar.gz

Create Cassandra Directories:

~$ mkdir /var/lib/cassandra
~$ mkdir /var/lib/cassandra/commitlog
~$ mkdir /var/lib/cassandra/data
~$ mkdir /var/lib/cassandra/saved_caches
~$ mkdir /var/log/cassandra
Configuration
Linux configuration
~$ vim /etc/security/limits.conf

Add the following:

root soft memlock unlimited
root hard memlock unlimited
root - nofile 100000
root - nproc 32768
root - as unlimited

On CentOS, RHEL and OEL, also set the following in /etc/security/limits.d/90-nproc.conf:

* - nproc 32768

Add the following to the sysctl file:

~$ vim /etc/sysctl.conf
vm.max_map_count = 131072

Finally (Reboot also works):

~$ sysctl -p

Firewall: the following ports must be open:

# Internode Ports
7000    Cassandra inter-node cluster communication.
7001    Cassandra SSL inter-node cluster communication.
7199    Cassandra JMX monitoring port.
# Client Ports
9042    Cassandra client port (Native).
9160    Cassandra client port (Thrift).

Note: Some/most guides tell you to disable swap. I think of swap as an acrobat’s safety net: it should never have to be put to use, but it exists in case of need. As such, I never disable it, and I set a low swappiness (around 10). You can read more about it here and here.

Cassandra configuration

Note: Cassandra has a LOT of settings; these are the ones you should always set if you are going live. Lots of them depend on hardware and/or workload. Maybe I’ll write a post about them in the near future. In the meantime, you can read about them here.

~$ vim /opt/cassandra/conf/cassandra.yaml

Edit the following fields:

cluster_name: <Whatever you would like to call it>
data_file_directories: /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog

saved_caches_directory: /var/lib/cassandra/saved_caches

# Assuming this is your first node, this should be reachable by other nodes
seeds: "<IP>"

# This is where you listen for intra node communication
listen_address: <IP>

# This is where you listen to incoming client connections
rpc_address: <IP>

endpoint_snitch: GossipingPropertyFileSnitch

Edit the snitch property file:

~$ vim  /opt/cassandra/conf/cassandra-rackdc.properties:

Add the DC and the RACK the server is in. Ex:

dc=DC1
rack=RAC1

Finally make sure your logs go to /var/log/cassandra:

~$ vim /opt/cassandra/conf/logback.xml
Testing

Start Cassandra

~$ service cassandra start
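Note: the tarball install above does not create an init script by itself, so if the service command is not available, you can start Cassandra directly (a sketch, assuming the layout used in this guide):

~$ /opt/cassandra/bin/cassandra -p /var/run/cassandra.pid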

You should see no errors here; wait a bit, then:

~$ grep  JNA /var/log/cassandra/system.log
INFO  HH:MM:SS JNA mlockall successful

Then check the status of the ring:

~$ nodetool status
Datacenter: DC1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Owns   Host ID                               Token                                    Rack
UN  185.10.49.136  140.59 KB  100.0%  5c3c697f-8bfd-4fb2-a081-7af1358b313f  0                                        RAC

Creating a keyspace and a table, and inserting some data:

~$ cqlsh xxx.yy.zz.ww

cqlsh> CREATE KEYSPACE sandbox WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'DC1' : 1};
Should give no errors
cqlsh> USE sandbox;
cqlsh:sandbox> CREATE TABLE data (id uuid, data text, PRIMARY KEY (id));
cqlsh:sandbox> INSERT INTO data (id, data) VALUES (c37d661d-7e61-49ea-96a5-68c34e83db3a, 'testing');
cqlsh:sandbox> SELECT * FROM data;

id                                   | data
--------------------------------------+---------
c37d661d-7e61-49ea-96a5-68c34e83db3a | testing

(1 rows)

And we are done, you can start using your Cassandra node!

Categories: DBA Blogs

GGSCI Commands : History and Command (!) recall

DBASolved - Wed, 2015-01-28 11:08

Ever get tired of typing commands?  Ever wish there was a way to recall commands from GGSCI?  Well, you are in luck!  There are two ways this can be done (listed below).

Commands:
1. History
2. ! (command)

The first option, History, is a command that allows you to view a list of the most recent commands executed in the GGSCI interface since the session started.  

Example:

GGSCI> history

Output:

GGSCI Command History

    3: stats ext ext
    4: stats exti
    5: info all
    6: info manager
    7: edit mgr
    8: edit manager
    9: edit params mgr
   10: set editor vi
   11: edit params mgr
   12: history

Notice that the last 10 commands are listed with the history command.  This list can be tailored by using a number after the history command.

Example:

GGSCI> history [n]

Output:
GGSCI Command History

    9: edit params mgr
   10: set editor vi
   11: edit params mgr
   12: history
   13: history 5

By using a number with the history command, the last N commands will be displayed.  In the example above, I wanted the last five commands, so the five most recent commands are displayed, including the history command just run.

Now that we can see a list of the last commands run, how can we execute one of them from the list?  This is quite simple: just use the command (!) option.  An exclamation point (!) in Oracle GoldenGate acts as command recall/execution.

Example:

GGSCI> !

In the example above, the command (!) will execute the last command in the list of commands stored in the session history.  By adding a number to the command (!), we can tell GGSCI to execute that command.

Example:

GGSCI> ! 5

In the example above, I’m asking GGSCI to rerun the “info all” command that can be seen in the history earlier in the post.  The output from running the command is a listing of the current status of the Oracle GoldenGate environment.

Output:

GGSCI (oel.acme.com) 15> ! 5
info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
JAGENT      RUNNING
EXTRACT     RUNNING     EXT         00:00:00      00:00:02  
EXTRACT     RUNNING     EXTI        00:00:06      00:00:09  Description "Integrated Extract"
EXTRACT     RUNNING     PMP         00:00:00      00:00:00
EXTRACT     RUNNING     PMPI        00:00:00      00:00:05  Description "Integrated Pump"

Both of these commands can be used to cut down on typing in the GGSCI utility and make working a bit easier. 

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Log Buffer #407, A Carnival of the Vanities for DBAs

Pythian Group - Tue, 2015-01-27 08:32

This Log Buffer Edition keeps the aim high and brings few of the best blog posts from Oracle, SQL Server and MySQL.

Oracle:

3 Modes for Moving Data to the BI Applications DW from a Source Application Database.

JSON for APEX Developers.

Neelakanth Nadgir posted a useful utility that prints out various statistics about the ZFS Adaptive Replacement Cache (ARC).

Obtaining Bonus Depreciation Methods for Oracle Fixed Assets.

Existing News – Java Cloud Service just got an update – WebLogic Server 12.1.3

SQL Server:

Tracking Database DDL Changes with SQLVer.

While a diminished level of control is certainly a factor to consider when contemplating migration of on-premises systems to Microsoft Azure, especially when dealing with PaaS resources such as Azure SQL Database, you have a range of techniques at your disposal that allow you to control and monitor both the status of and access to your Azure-resident services.

A Custom Execution Method – Level 19 of the Stairway to Integration Services

When a SQL Server database is operating smoothly and performing well, there is no need to be particularly aware of the transaction log, beyond ensuring that every database has an appropriate backup regime and restore plan in place.

SQL Server Reporting Services Basics: Deploying Reports

MySQL:

Mo’ Data, Mo’ Problems

FromDual.en: FromDual Backup Manager for MySQL 1.2.1 has been released

Tracking MySQL query history in long running transactions

MySQL Cluster 7.4.3 RELEASE CANDIDATE now available

Streaming MariaDB backups in the cloud.

Categories: DBA Blogs

Indexing Points to Remember

Pakistan's First Oracle Blog - Mon, 2015-01-26 18:54
Indexing depends upon the queries in the application.

There is no one-size-fits-all break-even point for indexed versus table scan access. If only a few rows are being accessed, the index will be preferred.

If almost all the rows are being accessed, the full table scan will be preferred. In between these two extremes, your “mileage” will vary.

A concatenated index is more useful if it also supports queries where not all columns are specified. For instance, SURNAME, FIRSTNAME is more useful than FIRSTNAME, SURNAME because queries against SURNAME only are more likely to occur than queries against FIRSTNAME only.
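For example (hypothetical names), the single index below serves both of these queries, because SURNAME is its leading column:

create index emp_name_ix on employees (surname, firstname);

select * from employees where surname = 'SMITH';
select * from employees where surname = 'SMITH' and firstname = 'JOHN';

An index on (FIRSTNAME, SURNAME) would only serve the second query efficiently.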

Global indexes provide better performance for queries that must span all partitions.
Categories: DBA Blogs

Partner Webcast – Oracle Mobile Security Suite for Mobile Applications

As the workforce becomes increasingly mobile, the need for organizations to control access and protect company data on mobile devices has become more critical than ever. BYOD (Bring Your Own Device)...

We share our skills to maximize your revenue!
Categories: DBA Blogs