
The Oracle Instructor

Explain, Exemplify, Empower

New 12c Default: Controlfile Autobackup On – But only for Multitenant

Wed, 2014-09-24 10:39

This is a little discovery from my present Oracle Database 12c New Features course in Copenhagen: The default setting for Controlfile Autobackup has changed to ON – but apparently only for Multitenant:

$ rman target sys/oracle_4U@cdb1

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Sep 24 13:28:39 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CDB1 (DBID=832467154)

RMAN> select cdb from v$database;

using target database control file instead of recovery catalog
CDB
---
YES

RMAN> show controlfile autobackup;

RMAN configuration parameters for database with db_unique_name CDB1 are:
CONFIGURE CONTROLFILE AUTOBACKUP ON; # default

Above you see the setting for a container database (CDB). Now an ordinary (Non-CDB) 12c Database:

$ rman target sys/oracle_4U@orcl

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Sep 24 13:33:27 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1386527354)

RMAN> select cdb from v$database;

using target database control file instead of recovery catalog
CDB
---
NO

RMAN> show controlfile autobackup;

RMAN configuration parameters for database with db_unique_name ORCL are:
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default

I really wonder why we have this difference! Is that still so with 12.1.0.2? Don’t believe it, test it! :-)
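By the way, should you want the new default for your Non-CDBs as well, nothing stops you from setting it explicitly – the usual one-liner in RMAN does it:

RMAN> configure controlfile autobackup on;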


Tagged: 12c New Features, Backup & Recovery, Multitenant, RMAN
Categories: DBA Blogs

#Oracle Certification: Always go for the most recent one!

Tue, 2014-09-23 11:14

Quite often I encounter attendees in my Oracle University courses who strive to become OCP or sometimes even OCM and who ask me whether they should rather go for an older version's certificate before they take on the most recent one. The reasoning behind those questions is mostly that it may be easier to do it with the older version. My advice is then always: Go for the most recent version! No Oracle Certification exam is easy, but the older version's certificate is already outdated. The now most recent one will also become outdated sooner than you may think :-)

OCP 12c upgrade

For that reason I really appreciate the option to upgrade from 9i/10g/11g OCA directly to 12c OCP as discussed in this posting. There is just no point in becoming a new 11g OCP now when 12c is there, in my opinion. What do you think?


Tagged: Oracle Certification
Categories: DBA Blogs

Oracle EMEA Customer Support Services Excellence Award 2014

Wed, 2014-09-17 13:54

The corporation announced today that I got the Customer Services Excellence Award 2014 in the category ‘Customer Champion’ for the EMEA region. It is an honor to be listed there together with these excellent professionals that I proudly call colleagues.

CSS Excellence Award 2014


Categories: DBA Blogs

Windows 7 error “key not valid for use in specified state”

Sun, 2014-08-31 00:37

When you see that error upon trying to install or upgrade something on your Windows 7 64-bit machine, chances are that it is caused by a Windows Security update that you need to uninstall. There is probably no point in messing around with the registry or the application that you want to upgrade. Instead, remove the Windows update KB2918614 like this:

Open the Control Panel, then click Windows Update.

Click Update History and then Installed Updates.

Scroll down to Microsoft Windows, look for KB2918614 (I had already removed it before taking the screenshot) and uninstall it.

Finally, hide that update so you don't get it installed again later on.
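If you prefer the command line, removing the update should also work with the Windows Update Standalone Installer from an elevated command prompt – just a sketch, not the way I did it myself:

wusa /uninstall /kb:2918614 /norestart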

I'm using a corporate notebook that receives automatic Windows security updates from time to time and encountered that problem while trying to upgrade VirtualBox to version 4.3.12. It is not a VirtualBox issue, though; other installs or upgrades may fail for the same reason. For me, this was a serious problem, because I rely on virtual machines for many demonstrations. Kudos to the virtualbox.org forums! They helped me resolve that problem within a day. Thank you once again, guys! :-)


Tagged: #KB2918614
Categories: DBA Blogs

Don’t go directly to Maximum Protection!

Mon, 2014-08-25 04:14

With a Data Guard Configuration in Maximum Performance protection mode, don’t go to Maximum Protection directly, because that leads to a restart of the primary database:

 Attention!

DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxPerformance
  Databases:
  prima  - Primary database
    physt  - Physical standby database
      physt2 - Physical standby database (receiving current redo)

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> edit configuration set protection mode as maxprotection;
Operation requires shutdown of instance "prima" on database "prima"
Shutting down instance "prima"...
Database closed.
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "prima" on database "prima"
Starting instance "prima"...
ORACLE instance started.
Database mounted.
Database opened.

Instead, go to Maximum Availability first and then to Maximum Protection. In the listing below, the configuration is first set back to Maximum Performance so that we start from the same point as above:

DGMGRL> edit configuration set protection mode as maxperformance;
Succeeded.
DGMGRL> edit configuration set protection mode as maxavailability;
Succeeded.
DGMGRL> edit configuration set protection mode as maxprotection;
Succeeded.

The demo was done with 12c, involving a cascading standby database, but the behavior is already the same in 11g. The odd thing about it is that DGMGRL restarts the primary without any warning. I had wanted to share that with the Oracle community for years but somehow never got around to it.
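One related remark: Maximum Availability and Maximum Protection require SYNC redo transport to the standby. In the demo above that was already in place; if it is not, the broker property can be adjusted beforehand – roughly sketched here with the database names from this configuration:

DGMGRL> edit database physt set property LogXptMode='SYNC';
DGMGRL> edit database prima set property LogXptMode='SYNC';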


Tagged: Data Guard, High Availability
Categories: DBA Blogs

Why Write-Through is still the default Flash Cache Mode on #Exadata X-4

Wed, 2014-08-06 12:41

The Flash Cache Mode still defaults to Write-Through on Exadata X-4 because most customers are better served that way – not because Write-Back is buggy or unreliable. Chances are that Write-Back is not required, so we simply save flash capacity that way. So when you see this

CellCLI> list cell attributes flashcachemode
         WriteThrough

it is most likely for the best :-)
Let me explain: Write-Through means that write I/O coming from the database layer first goes to the spinning drives, where it is mirrored according to the redundancy of the diskgroup in which the written file is placed. Afterwards, the cells may populate the Flash Cache if they think it will benefit subsequent reads, but no mirroring is required there. In case of hardware failure, the mirroring has already been done sufficiently on the spinning drives, as the picture shows:

Flash Cache Mode WRITE-THROUGH

That changes with the Flash Cache Mode set to Write-Back: Now writes go primarily to the flash cards, and popular objects may never get aged out onto the spinning drives at all. At the very least, that age-out may happen significantly later, so the writes on flash must now be mirrored. Again, the redundancy of the diskgroup in which the object in question is placed determines the number of mirrored writes. The two pictures assume normal redundancy. In other words: Write-Back reduces the usable capacity of the Flash Cache at least by half.

Flash Cache Mode WRITE-BACK

Only databases with performance issues caused by write I/O will benefit from Write-Back; the most likely symptom of that would be high numbers of the Free Buffer Waits wait event. And Flash Logging is done with both Write-Through and Write-Back. So there is a good reason to turn on the Write-Back Flash Cache Mode only on demand. I explained it in much the same way during my present Oracle University Exadata class in Frankfurt, by the way :-)
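Should you conclude that write I/O really is your bottleneck and decide to enable Write-Back, the attribute is changed per cell with CellCLI. Very roughly sketched below – please check the relevant MOS note for the exact supported rolling procedure on your storage server software version:

CellCLI> drop flashcache
CellCLI> alter cell shutdown services cellsrv
CellCLI> alter cell flashCacheMode=WriteBack
CellCLI> alter cell startup services cellsrv
CellCLI> create flashcache all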


Tagged: exadata
Categories: DBA Blogs

Common Roles get copied upon plug-in with #Oracle Multitenant

Fri, 2014-08-01 08:51

What happens when you unplug a pluggable database that has local users who have been granted common roles? Those common roles get copied to the target container database upon plug-in of the PDB!

Before Unplug of the PDB

The picture above shows the situation before the unplug command. It has been implemented with these commands:

 

SQL> connect / as sysdba
Connected.
SQL> create role c##role container=all;

Role created.

SQL> grant select any table to c##role container=all;

Grant succeeded.

SQL> connect sys/oracle_4U@pdb1 as sysdba
Connected.
SQL> grant c##role to app;

Grant succeeded.



SQL> grant create session to app;

Grant succeeded.

The local user app has now been granted the common role c##role. Let's assume that the application depends on the privileges inside the common role. Now pdb1 is unplugged and plugged into cdb2:

SQL> shutdown immediate
Pluggable Database closed.
SQL> connect / as sysdba
Connected.
SQL> alter pluggable database pdb1 unplug into '/home/oracle/pdb1.xml';

Pluggable database altered.

SQL> drop pluggable database pdb1;

Pluggable database dropped.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@EDE5R2P0 ~]$ . oraenv
ORACLE_SID = [cdb1] ? cdb2
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1 is /u01/app/oracle
[oracle@EDE5R2P0 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Tue Jul 29 12:52:19 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> create pluggable database pdb1 using '/home/oracle/pdb1.xml' nocopy;

Pluggable database created.

SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL> connect app/app@pdb1
Connected.
SQL> select * from scott.dept;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON

SQL> select * from session_privs;

PRIVILEGE
----------------------------------------
CREATE SESSION
SELECT ANY TABLE

SQL> connect / as sysdba
Connected.

SQL> select role,common from cdb_roles where role='C##ROLE';

ROLE
--------------------------------------------------------------------------------
COM
---
C##ROLE
YES

As seen above, the common role has been copied upon plug-in, as the picture illustrates:

After plug-in of the PDB

Not surprisingly, the local user app together with the local privilege CREATE SESSION was moved to the target container database. But it is not so obvious that the common role is then copied to the target CDB as well. This is something I found out during delivery of a recent Oracle University LVC about 12c New Features, thanks to a question from one attendee. My guess was that it would lead to an error upon unplug, but this test case proves it doesn't. I thought that behavior might be of interest to the Oracle Community. As always: Don't believe it, test it! :-)
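By the way, since the role is copied and not moved, it should of course still exist in the source container database as well. A quick check back in the root of cdb1 (after connecting there as SYSDBA) would be something like this:

SQL> select role, common from cdb_roles where role='C##ROLE';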


Tagged: 12c New Features, Multitenant
Categories: DBA Blogs

Restore datafile from service: A cool #Oracle 12c Feature

Wed, 2014-07-02 09:02

You can restore a datafile directly from a physical standby database to the primary. Over the network. With compressed backupsets. How cool is that?

Here's a demo from my present class Oracle Database 12c: Data Guard Administration. prima is the primary database on host01, physt is a physical standby database on host03. There is an Oracle Net configuration on both hosts that enables host01 to tnsping physt and host03 to tnsping prima.
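Just to illustrate that prerequisite, a tnsnames.ora entry for the standby on host01 could look roughly like this – the hostname is from the demo, while port and service name are assumptions:

physt =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = host03)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = physt))
  )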

 

[oracle@host01 ~]$ rman target sys/oracle@prima

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Jul 2 16:43:39 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PRIMA (DBID=2084081935)

RMAN> run
{
set newname for datafile 4 to '/home/oracle/stage/users01.dbf';
restore (datafile 4 from service physt) using compressed backupset;
catalog datafilecopy '/home/oracle/stage/users01.dbf';
}

executing command: SET NEWNAME

Starting restore at 02-JUL-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=47 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service physt
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00004 to /home/oracle/stage/users01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 02-JUL-14

cataloged datafile copy
datafile copy file name=/home/oracle/stage/users01.dbf RECID=8 STAMP=851877850

Note that this does not require any backups to have been taken on the physical standby database.
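If the goal were to actually replace a damaged datafile on the primary with that copy, the usual follow-up steps would apply – sketched here, not part of the demo above:

RMAN> sql 'alter database datafile 4 offline';
RMAN> switch datafile 4 to copy;
RMAN> recover datafile 4;
RMAN> sql 'alter database datafile 4 online';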


Tagged: 12c New Features, Backup & Recovery, Data Guard
Categories: DBA Blogs

Data Guard 12c New Features: Far Sync & Real-Time Cascade

Wed, 2014-06-11 08:10

UKOUG Oracle Scene has published my article about two exciting Data Guard 12c New Features:

http://viewer.zmags.com/publication/62b883ad#/62b883ad/44

Far Sync Instance enables Zero Data Loss across large distances

Hope you find it useful :-)


Tagged: 12c New Features, Data Guard
Categories: DBA Blogs