
DBA Blogs

Log Buffer #413, A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2015-03-09 21:15

This Log Buffer Edition scours the Internet and brings you some of the freshest blog posts from Oracle, SQL Server and MySQL.

Oracle:

Most of Kyle’s servers tend to be Linux VMs on VMware ESX without any graphical desktop set up, so it can be disconcerting trying to install Oracle with its graphical “runInstaller” being the gateway we have to cross to achieve installation.

Working around heartbeat issues caused by tracing or by regexp

APEX 5 EA Impressions: Custom jQuery / jQuery UI implementations

Introduction to the REST Service Editor, Generation (PART 2)

Due to recent enhancements and importance within Oracle’s storage portfolio, StorageTek Storage Archive Manager 5.4 (SAM-QFS) has been renamed to Oracle Hierarchical Storage Manager (Oracle HSM) 6.0.

SQL Server:

There are different techniques to optimize the performance of SQL Server queries, but wouldn’t it be great if we had some recommendations before we started planning or optimizing queries so that we didn’t have to start from scratch every time? This is where you can use the Database Engine Tuning Advisor utility to get recommendations based on your workload.

Data Mining Part 25: Microsoft Visio Add-Ins

Stairway to Database Source Control Level 3: Working With Others (Centralized Repository)

SQL Server Hardware will provide the fundamental knowledge and resources you need to make intelligent decisions about choice, and optimal installation and configuration, of SQL Server hardware, operating system and the SQL Server RDBMS.

Questions About SQL Server Transaction Log You Were Too Shy To Ask

MySQL:

The post shows how you can easily read the VCAP_SERVICES PostgreSQL credentials within your Java code using the Maven repo. This assumes you’re using the ElephantSQL PostgreSQL service. A single connection won’t be ideal, but for demo purposes it might be all you need.

MariaDB 5.5.42 Overview and Highlights

How to test if CVE-2015-0204 FREAK SSL security flaw affects you

Using master-master for MySQL? To be frank, we need to get rid of that architecture. We skip the active-active setup and show why master-master, even for failover purposes, is the wrong decision.

Resources for Highly Available Database Clusters: ClusterControl Release Webinar, Support for Postgres, New Website and More

Categories: DBA Blogs

Recovering an Oracle Database with Missing Redo

Pythian Group - Mon, 2015-03-09 21:14
Background

I ran into a situation where we needed to recover from an old online backup which (due to some issues with the RMAN “KEEP” command) was missing the archived redo log backups/files needed to make the backup consistent.  The client wasn’t concerned about data that changed during the backup; they were interested in checking some very old data from long before this online backup had started.

Visualizing the scenario using a timeline (not to scale):

  |-------|------------------|---------|------------------|
  t0      t1                 t2        t3                 t4
          Data is added      Backup    Backup             Present
                             starts    ends

The client thought that some data had become corrupted and wasn’t sure when, but knew that it wasn’t recent, so the flashback technologies were not an option.  Hence they wanted a restore of the database onto a new temporary server as of time t1, which was in the distant past.

An online (hot) backup was taken between t2 and t3 and was considered to be old enough, or close enough, to t1; however, the problem was that all archived redo log backups were missing. The client was certain that the particular data they were interested in would not have changed during the online backup.

Hence the question is: without the necessary redo data to make the online backup consistent (between times t2 and t3) can we still open the database to extract data from prior to when the online backup began?  The official answer is “no” – the database must be made consistent to be opened.  And with an online backup the redo stream is critical to making the backed up datafiles consistent.  So without the redo vectors in the redo stream, the files cannot be made consistent with each other and hence the database cannot be opened.  However the unofficial, unsupported answer is that it can be done.

This article covers the unsupported and unofficial methods for opening a database with consistency corruption so that certain data can be extracted.

Other scenarios can lead to the same situation.  Basically this technique can be used to open the Oracle database any time the datafiles cannot be made consistent.

 

Demo Setup

To illustrate the necessary steps I’ve set up a test 12c non-container database called NONCDB.  And to simulate user transactions against it I ran a light workload using the Swingbench Order Entry (SOE) benchmark from another computer in the background.

Before beginning any backups or recoveries I added two simple tables to the SCOTT schema and some rows to represent the “old” data (with the words “OLD DATA” in the C2 column):

SQL> create table scott.parent (c1 int, c2 varchar2(16), constraint parent_pk primary key (c1)) tablespace users;

Table created.

SQL> create table scott.child (c1 int, c2 varchar2(16), foreign key (c1) references scott.parent(c1)) tablespace soe;

Table created.

SQL> insert into scott.parent values(1, 'OLD DATA 001');

1 row created.

SQL> insert into scott.parent values(2, 'OLD DATA 002');

1 row created.

SQL> insert into scott.child  values(1, 'OLD DETAILS A');

1 row created.

SQL> insert into scott.child  values(1, 'OLD DETAILS B');

1 row created.

SQL> insert into scott.child  values(1, 'OLD DETAILS C');

1 row created.

SQL> insert into scott.child  values(2, 'OLD DETAILS D');

1 row created.

SQL> commit;

Commit complete.

SQL>

 

Notice that I added a PK-FK referential integrity constraint and placed each table in a different tablespace so they could be backed up at different times.

These first entries represent my “old data” from time t1.

 

The Online Backup

The next step is to perform the online backup.  For simulation purposes I’m adjusting the steps a little bit to try to represent a real life situation where the data in my tables is being modified while the backup is running.  Hence my steps are:

  • Run an online backup of all datafiles except for the USERS tablespace.
  • Add some more data to my test tables (hence data going into the CHILD table is after the SOE tablespace backup and the data into the PARENT table is before the USERS tablespace backup).
  • Record the current archived redo log and then delete it to simulate the lost redo data.
  • Backup the USERS tablespace.
  • Add some post backup data to the test tables.

The actual commands executed in RMAN are:

$ rman

Recovery Manager: Release 12.1.0.2.0 - Production on Thu Feb 26 15:59:36 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target

connected to target database: NONCDB (DBID=1677380280)

RMAN> backup datafile 1,2,3,5;

Starting backup at 26-FEB-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=46 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/u01/app/oracle/oradata/NONCDB/datafile/SOE.dbf
input datafile file number=00001 name=/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_system_b2k8dsno_.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_sysaux_b2k8f3d4_.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_undotbs1_b2k8fcdm_.dbf
channel ORA_DISK_1: starting piece 1 at 26-FEB-15
channel ORA_DISK_1: finished piece 1 at 26-FEB-15
piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T155942_bgz9ol3g_.bkp tag=TAG20150226T155942 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:11:16
Finished backup at 26-FEB-15

Starting Control File and SPFILE Autobackup at 26-FEB-15
piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/autobackup/2015_02_26/o1_mf_s_872698259_bgzb0647_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 26-FEB-15

RMAN> alter system switch logfile;

Statement processed

RMAN> commit;

Statement processed

RMAN> alter system switch logfile;

Statement processed

RMAN> insert into scott.parent values (3, 'NEW DATA 003');

Statement processed

RMAN> insert into scott.child  values (3, 'NEW DETAILS E');

Statement processed

RMAN> commit;

Statement processed

RMAN> select sequence# from v$log where status='CURRENT';

 SEQUENCE#
----------
        68

RMAN> alter system switch logfile;

Statement processed

RMAN> alter database backup controlfile to '/tmp/controlfile_backup.bkp';

Statement processed

RMAN> backup datafile 4;

Starting backup at 26-FEB-15
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00004 name=/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_users_b2k8gf7d_.dbf
channel ORA_DISK_1: starting piece 1 at 26-FEB-15
channel ORA_DISK_1: finished piece 1 at 26-FEB-15
piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T165814_bgzdrpmk_.bkp tag=TAG20150226T165814 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 26-FEB-15

Starting Control File and SPFILE Autobackup at 26-FEB-15
piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/autobackup/2015_02_26/o1_mf_s_872701095_bgzdrrrh_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 26-FEB-15

RMAN> alter database backup controlfile to '/tmp/controlfile_backup.bkp';

Statement processed

RMAN> insert into scott.parent values (4, 'NEW DATA 004');

Statement processed

RMAN> insert into scott.child  values (4, 'NEW DETAILS F');

Statement processed

RMAN> commit;

Statement processed

RMAN> exit


Recovery Manager complete.
$

 

Notice in the above steps that since I’m using Oracle Database 12c I’m able to execute normal SQL commands from RMAN – this is a new RMAN 12c feature.

 

Corrupting the Backup

Now I’m going to corrupt my backup by removing one of the archived redo logs needed to make the backup consistent:

SQL> set pages 999 lines 120 trims on tab off
SQL> select 'rm '||name stmt from v$archived_log where sequence#=68;

STMT
------------------------------------------------------------------------------------------------------------------------
rm /u01/app/oracle/fast_recovery_area/NONCDB/archivelog/2015_02_26/o1_mf_1_68_bgzcnv04_.arc

SQL> !rm /u01/app/oracle/fast_recovery_area/NONCDB/archivelog/2015_02_26/o1_mf_1_68_bgzcnv04_.arc

SQL>

 

Finally I’ll remove the OLD data to simulate the data loss (representing t4):

SQL> select * from scott.parent order by 1;

        C1 C2
---------- ----------------
         1 OLD DATA 001
         2 OLD DATA 002
         3 NEW DATA 003
         4 NEW DATA 004

SQL> select * from scott.child order by 1;

        C1 C2
---------- ----------------
         1 OLD DETAILS A
         1 OLD DETAILS B
         1 OLD DETAILS C
         2 OLD DETAILS D
         3 NEW DETAILS E
         4 NEW DETAILS F

6 rows selected.

SQL> delete from scott.child where c2 like 'OLD%';

4 rows deleted.

SQL> delete from scott.parent where c2 like 'OLD%';

2 rows deleted.

SQL> commit;

Commit complete.

SQL>

 

Attempting a Restore and Recovery

Now let’s try to recover from our backup on a secondary system so we can see if we can extract that old data.

After copying over all of the files, the first thing to do is to try a restore as per normal:

$ rman target=/

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Mar 2 08:40:12 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup nomount;

Oracle instance started

Total System Global Area    1577058304 bytes

Fixed Size                     2924832 bytes
Variable Size                503320288 bytes
Database Buffers            1056964608 bytes
Redo Buffers                  13848576 bytes

RMAN> restore controlfile from '/tmp/controlfile_backup.bkp';

Starting restore at 02-MAR-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=12 device type=DISK

channel ORA_DISK_1: copied control file copy
output file name=/u01/app/oracle/oradata/NONCDB/controlfile/o1_mf_b2k8d9nq_.ctl
output file name=/u01/app/oracle/fast_recovery_area/NONCDB/controlfile/o1_mf_b2k8d9v5_.ctl
Finished restore at 02-MAR-15

RMAN> alter database mount;

Statement processed
released channel: ORA_DISK_1

RMAN> restore database;

Starting restore at 02-MAR-15
Starting implicit crosscheck backup at 02-MAR-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=12 device type=DISK
Crosschecked 4 objects
Finished implicit crosscheck backup at 02-MAR-15

Starting implicit crosscheck copy at 02-MAR-15
using channel ORA_DISK_1
Crosschecked 2 objects
Finished implicit crosscheck copy at 02-MAR-15

searching for all files in the recovery area
cataloging files...
cataloging done

using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/NONCDB/datafile/o1_mf_system_b2k8dsno_.dbf
channel ORA_DISK_1: restoring datafile 00002 to /u01/app/oracle/oradata/NONCDB/datafile/o1_mf_sysaux_b2k8f3d4_.dbf
channel ORA_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/NONCDB/datafile/o1_mf_undotbs1_b2k8fcdm_.dbf
channel ORA_DISK_1: restoring datafile 00005 to /u01/app/oracle/oradata/NONCDB/datafile/SOE.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T155942_bgz9ol3g_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T155942_bgz9ol3g_.bkp tag=TAG20150226T155942
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:01:46
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/NONCDB/datafile/o1_mf_users_b2k8gf7d_.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T165814_bgzdrpmk_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T165814_bgzdrpmk_.bkp tag=TAG20150226T165814
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 02-MAR-15

RMAN>

 

Notice that it did restore the datafiles from both the SOE and USERS tablespaces; however, we know that those are inconsistent with each other.

Attempting to do the recovery should give us an error due to the missing redo required for consistency:

RMAN> recover database;

Starting recover at 02-MAR-15
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 67 is already on disk as file /u01/app/oracle/fast_recovery_area/NONCDB/archivelog/2015_02_26/o1_mf_1_67_bgzcn05f_.arc
archived log for thread 1 with sequence 69 is already on disk as file /u01/app/oracle/fast_recovery_area/NONCDB/archivelog/2015_02_26/o1_mf_1_69_bgzdqo9n_.arc
Oracle Error:
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_system_bh914cx2_.dbf'

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 03/02/2015 08:44:21
RMAN-06053: unable to perform media recovery because of missing log
RMAN-06025: no backup of archived log for thread 1 with sequence 68 and starting SCN of 624986 found to restore

RMAN>

 

As expected we got the dreaded ORA-01547, ORA-01194, ORA-01110 errors meaning that we don’t have enough redo to make the recovery successful.

 

Attempting a Recovery

Now the crux of the situation. We’re stuck with the common inconsistency error which most seasoned DBAs should be familiar with:

Oracle Error:
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_system_bh914cx2_.dbf'

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 03/02/2015 08:44:21
RMAN-06053: unable to perform media recovery because of missing log
RMAN-06025: no backup of archived log for thread 1 with sequence 68 and starting SCN of 624986 found to restore

 

And of course we need to be absolutely positive that we don’t have the missing redo somewhere – for example, in an RMAN backup piece on disk or on tape from an archived log backup that can be restored, or possibly still in one of the current online redo logs.  DBAs should explore all possible options for retrieving the missing redo vectors in some form or another before proceeding.
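As an illustration only (using the missing sequence number from this demo environment), a few quick checks along those lines might look like the following – exactly where to look will vary from site to site:

RMAN> list backup of archivelog from sequence 68;

SQL> select group#, sequence#, archived, status from v$log;
SQL> select name, deleted from v$archived_log where sequence# = 68;

If any of these turn up the redo for the missing sequence, a normal supported recovery is the right path and none of what follows is necessary.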

However, if we’re absolutely certain of the following we can continue:

  1. We definitely can’t find the missing redo anywhere.
  2. We absolutely need to extract data from prior to the start of the online backup.
  3. Our data definitely wasn’t modified during the online backup.

 

The natural thing to check first when trying to open the database after an incomplete recovery is the fuzziness and PIT (Point In Time) of the datafiles from SQL*Plus:

SQL> select fuzzy, status, checkpoint_change#,
  2         to_char(checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as checkpoint_time,
  3         count(*)
  4    from v$datafile_header
  5   group by fuzzy, status, checkpoint_change#, checkpoint_time
  6   order by fuzzy, status, checkpoint_change#, checkpoint_time;

FUZZY STATUS  CHECKPOINT_CHANGE# CHECKPOINT_TIME        COUNT(*)
----- ------- ------------------ -------------------- ----------
NO    ONLINE              647929 26-FEB-2015 16:58:14          1
YES   ONLINE              551709 26-FEB-2015 15:59:43          4

SQL>

 

The fact that there are two rows returned and that not all files have FUZZY=NO indicates that we have a problem and that more redo is required before the database can be opened with the RESETLOGS option.
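If you want to see exactly which files are still fuzzy, rather than just the grouped counts, a per-file variation of the same query (shown purely for convenience) works too:

SQL> select file#, fuzzy, status, checkpoint_change# from v$datafile_header order by file#;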

But our problem is that we don’t have that redo and we’re desperate to open our database anyway.

 

Recovering without Consistency

Again, recovering without consistency is not supported and should only be attempted as a last resort.

Opening the database with the data in an inconsistent state is actually pretty simple.  We simply need to set the “_allow_resetlogs_corruption” hidden initialization parameter and set the undo management to “manual” temporarily:

SQL> alter system set "_allow_resetlogs_corruption"=true scope=spfile;

System altered.

SQL> alter system set undo_management='MANUAL' scope=spfile;

System altered.

SQL> shutdown abort;
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1577058304 bytes
Fixed Size                  2924832 bytes
Variable Size             503320288 bytes
Database Buffers         1056964608 bytes
Redo Buffers               13848576 bytes
Database mounted.
SQL>

 

Now, will the database open? The answer is still: “probably not”.  Giving it a try we get:

SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00600: internal error code, arguments: [2663], [0], [551715], [0], [562781], [], [], [], [], [], [], []
Process ID: 4538
Session ID: 237 Serial number: 5621


SQL>

 

Doesn’t look good, right?  Actually the situation is not that bad.

To put it simply this ORA-00600 error means that a datafile has a recorded SCN that’s ahead of the database SCN.  The current database SCN is shown as the 3rd argument (in this case 551715) and the datafile SCN is shown as the 5th argument (in this case 562781).  Hence a difference of:

562781 - 551715 = 11066

In this example, that’s not too large of a gap.  But in a real system, the difference may be more significant.  Also if multiple datafiles are ahead of the current SCN you should expect to see multiple ORA-00600 errors.

The solution to this problem is quite simple: roll forward the current SCN until it exceeds the datafile SCN.  The database automatically generates a number of internal transactions on each startup, hence the way to roll forward the database SCN is simply to perform repeated shutdowns and startups.  Depending on how big the gap is, it may be necessary to shutdown abort and startup repeatedly – the gap between the 5th and 3rd arguments to the ORA-00600 will decrease each time.  Eventually the gap will reach zero and the database will open:

SQL> connect / as sysdba
Connected to an idle instance.
SQL> shutdown abort
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1577058304 bytes
Fixed Size                  2924832 bytes
Variable Size             503320288 bytes
Database Buffers         1056964608 bytes
Redo Buffers               13848576 bytes
Database mounted.
Database opened.
SQL>

 

Now presumably we want to query or export the old data so the first thing we should do is switch back to automatic undo management using a new undo tablespace:

SQL> create undo tablespace UNDOTBS2 datafile size 50M;

Tablespace created.

SQL> alter system set undo_tablespace='UNDOTBS2' scope=spfile;

System altered.

SQL> alter system set undo_management='AUTO' scope=spfile;

System altered.

SQL> shutdown abort
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1577058304 bytes
Fixed Size                  2924832 bytes
Variable Size             503320288 bytes
Database Buffers         1056964608 bytes
Redo Buffers               13848576 bytes
Database mounted.
Database opened.
SQL>

 

Finally the database is opened (although the data is inconsistent) and the “old” data can be queried:

SQL> select * from scott.parent;

        C1 C2
---------- ----------------
         1 OLD DATA 001
         2 OLD DATA 002
         3 NEW DATA 003

SQL> select * from scott.child;

        C1 C2
---------- ----------------
         1 OLD DETAILS A
         1 OLD DETAILS B
         1 OLD DETAILS C
         2 OLD DETAILS D

SQL>

 

As we can see, all of the “old” data (rows that begin with “OLD”) from before the backup began (before t2) is available.  And, as would be expected, only part of the data inserted during the backup (rows where C1=3) is there – that’s our data inconsistency.

We’ve already seen that we can SELECT the “old” data.  We can also export it:

$ expdp scott/tiger dumpfile=DATA_PUMP_DIR:OLD_DATA.dmp nologfile=y

Export: Release 12.1.0.2.0 - Production on Mon Mar 2 09:39:11 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_02":  scott/******** dumpfile=DATA_PUMP_DIR:OLD_DATA.dmp nologfile=y
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 640 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "SCOTT"."CHILD"                             5.570 KB       4 rows
. . exported "SCOTT"."PARENT"                            5.546 KB       3 rows
Master table "SCOTT"."SYS_EXPORT_SCHEMA_02" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_02 is:
  /u01/app/oracle/admin/NONCDB/dpdump/OLD_DATA.dmp
Job "SCOTT"."SYS_EXPORT_SCHEMA_02" successfully completed at Mon Mar 2 09:39:46 2015 elapsed 0 00:00:34

$

 

At this point we’ve either queried or extracted that critical old data, which was the point of the exercise, and we should immediately discard the restored database.  Remember, it has data inconsistency, which may extend to internal tables, and hence it shouldn’t be used for anything beyond querying or extracting that “old” data.  Frequent crashes or other bizarre behavior of this restored database should be expected.  So get in, get the data, get out, and get rid of it!

 

Conclusion

If “desperate times call for desperate measures” and you’re in the situation described in detail above – you need the data, are missing the necessary redo vectors, and are not concerned about the relevant data having been modified during the backup – then there are options.

The “more redo needed for consistency” error stack should be familiar to most DBAs:

ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent

And they may also be somewhat familiar with the “_allow_resetlogs_corruption” hidden initialization parameter.  However don’t let the resulting ORA-00600 error make the recovery attempt seem unsuccessful:

ORA-00600: internal error code, arguments: [2663], [0], [551715], [0], [562781], [], [], [], [], [], [], []

This error can be overcome and the database can likely still be opened so the necessary data can be queried or extracted.

Note: this process has been tested with Oracle Database 10g, Oracle Database 11g, and Oracle Database 12c.

Categories: DBA Blogs

Partner Webcast – Oracle Private Cloud: Database as a Service (DBaaS) using Oracle Enterprise Manager 12c

Large enterprises today have hundreds and thousands of databases of various versions, configurations and patch levels. Another challenge is around time to provision new databases. When an end...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Blog third anniversary

Bobby Durrett's DBA Blog - Thu, 2015-03-05 09:31

My first blog post was March 5, 2012, three years ago today.

I have enjoyed blogging.  Even though I am talking about topics related to my work, blogging does not feel like work. The great thing about blogging is that it’s completely in my control.  I control the content and the timetable. I pay a small amount each year for hosting and for the domain name, but the entertainment value alone is worth the price of the site.  But, it also has career value because this blog has given me greater credibility both with my employer and outside the company.  Plus, I think it makes me better at my job because blogging forces me to put into words the technical issues that I am working on.

It’s been three good years of blogging.  Looking forward to more in the future.

– Bobby

Categories: DBA Blogs

Oracle Information Security Partner Community Forum - March 26-27, 2015

FEBRUARY 2015 ...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Joined twitter

Bobby Durrett's DBA Blog - Wed, 2015-03-04 17:27

I joined twitter.  I don’t really know how to use it.  I’m set up as Bobby Durrett, @bobbydurrettdba if that means anything to you. :)

– Bobby

Categories: DBA Blogs

Oracle University Instructors on the Cruise Ship

The Oracle Instructor - Wed, 2015-03-04 14:14

Oracle User Group Norway Annual Conference

I’m really looking forward to speaking at the Oracle User Group Norway Spring Seminar 2015, together with my dear colleague Joel Goodman! For sure it’s one of the highlights this year in terms of Oracle Events.

Joel will present about Oracle Automatic Parallel Execution on MAR-12, 6pm and about Oracle 12c Automatic Data Optimization and Heat Map on MAR-13, 9:30am

Yours sincerely will talk about The Data Guard Broker – Why it is recommended on MAR-12, 6pm and about The Recovery Area – Why it is recommended on MAR-13, 8:30am

Joel Goodman & Uwe Hesse

The OUGN board has again gathered an amazing lineup of top-notch speakers for this event, so I will gladly take the opportunity to improve my knowledge :-)


Tagged: #ougn2015
Categories: DBA Blogs

Oracle APEX_WEB_SERVICE REST API call

Kubilay Çilkara - Wed, 2015-03-04 12:15
In this post I will try to show you how I used Oracle Apex and the APEX_WEB_SERVICE PL/SQL package to quickly send a request to a public Internet API and how I handled the response. The code below was written during a 'Hackday' and hasn't been extensively tested.

My use case is integrating Oracle Apex with the public Mendeley REST API for Mendeley Catalog Search.

The idea was to build an application in Oracle Apex to query the Mendeley REST API Catalog with a keyword. The Mendeley REST API gives a JSON response, so I used PL/JSON to parse it.  I hear that in Oracle 12c JSON is going to be a native data type. My Oracle Apex host is running Oracle 11g, so I had to use PL/JSON for ease.

To cut it short, here is how the Mendeley Catalog Search application on Oracle Apex looks. (Click the image to go to the app, or visit http://apex2.enciva.co.uk/apex/f?p=864:2)






To integrate with Mendeley REST API from Oracle Apex, I used one PL/SQL function and one procedure.

I used the function to obtain the Mendeley REST API Client Credentials Authorisation flow token and the procedure to make the API request to Mendeley Catalog Search and to handle the response.

Here is the MENDELEY_CALL PL/SQL function I created:

This function returns the Client Credentials Authorisation Flow token from the Mendeley REST API

create or replace function mendeley_call (p_id in varchar2)
return varchar2
is
v_token varchar2(1000);
token varchar2(1000);
jtoken json;
v_grant_type varchar2(400) := 'client_credentials';
v_client_id varchar2(500) := p_id;
v_client_secret varchar2(500) := '<put_your_mendeley_client_secret_here>';
v_scope varchar2(300) := 'all';
begin
/*----------Setting Headers----------------------------------------*/                                      
apex_web_service.g_request_headers(1).name := 'Content-Type';
apex_web_service.g_request_headers(1).Value := 'application/x-www-form-urlencoded; charset=utf-8';
/*-----------------------------------------------------------------*/

token := apex_web_service.make_rest_request
    (
      p_url         => 'https://api.mendeley.com/oauth/token'
    , p_http_method => 'POST'
    , p_parm_name   => apex_util.string_to_table('grant_type:client_id:client_secret:scope')
    , p_parm_value  => apex_util.string_to_table(v_grant_type||':'||v_client_id||':'||v_client_secret||':'
||v_scope)
    , p_wallet_path => 'file:/home/app/oracle/product/11.2.0/dbhome_1/owm/wallets/oracle'
    , p_wallet_pwd  => '<put_your_oracle_wallet_password_here>'
    );
-- debug
-- dbms_output.put_line(token);
jtoken := json(token);
v_token := json_ext.get_string(jtoken,'access_token');
-- debug
-- dbms_output.put_line(v_token);
return v_token;
EXCEPTION
WHEN OTHERS THEN
   raise_application_error(-20001,'An error was encountered - '||SQLCODE||' -ERROR- '||SQLERRM);
end;
/


Here is the anonymous procedure which I put into a PL/SQL region on the Oracle Apex page:

This procedure incorporates the function above and makes the request and handles the response from the Mendeley REST API

Note how the procedure calls the function MENDELEY_CALL (above) to load the variable v_token. 

DECLARE
  v_token  VARCHAR2(599) := mendeley_call('<put_your_mendeley_client_id_here>');
  v_search VARCHAR2(500);
  mendeley_document NCLOB;
  v_status VARCHAR2(100);
  obj json_list;
  v_id VARCHAR2(100);
  v_title NVARCHAR2(1000);
  v_abstract NCLOB;--varchar2(32000);
  v_link     VARCHAR2(1000);
  v_source   VARCHAR2(500);
  v_type     VARCHAR2(100);
  v_pct_hit  VARCHAR2(10);
  v_rows     NUMBER(10);
  v_batch_id NUMBER(10);
BEGIN
  -- Oracle Wallet
  utl_http.set_wallet('file:/home/app/oracle/product/11.2.0/dbhome_1/owm/wallets/oracle', 
'my_secret_password');
  -- Set Authorisation headers and utf8
  -- the following line is necessary if you need to use languages other than Latin and
  -- you will use APEX_WEB_SERVICE package 
  utl_http.set_body_charset('UTF-8');
  -- build the Authorisation header
  apex_web_service.g_request_headers(1).name  := 'Content-Type';
  apex_web_service.g_request_headers(1).value := 'application/jsonrequest';
  apex_web_service.g_request_headers(2).name  := 'Authorization';
  apex_web_service.g_request_headers(2).value := 'Bearer '||v_token||'';
  
  -- Make the request and load the response into a CLOB 
  mendeley_document := apex_web_service.make_rest_request 
      ( 
        p_url => 'https://api.mendeley.com:443/search/catalog' 
      , p_http_method => 'GET' 
      , p_parm_name => apex_util.string_to_table('title:limit') 
      , p_parm_value => apex_util.string_to_table('Mendeley:10') 
      );
  -- Load the response to JSON_LIST PL/JSON object
  obj := json_list(mendeley_document);
  -- Start extracting values from the JSON and write some HTML
  -- Traverse over JSON_LIST extract elements you like
  FOR i IN 1..obj.count
  LOOP
    v_id       := json_ext.get_string(json(obj.get(i)),'id');
    v_title    := json_ext.get_string(json(obj.get(i)),'title');
    v_abstract := json_ext.get_string(json(obj.get(i)),'abstract');
    v_link     := json_ext.get_string(json(obj.get(i)),'link');
    v_source   := json_ext.get_string(json(obj.get(i)),'source');
    v_type     := json_ext.get_string(json(obj.get(i)),'type');
    -- write extracted data
   dbms_output.put_line(v_title||' ==> '||v_abstract);
   END LOOP;
 END;

This shows how easy it is, in this case using one function and one procedure, to make a REST API request to an external Web Service from Oracle Apex.
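If you want to sanity-check just the token function on its own from SQL*Plus before wiring everything into an Apex page process, a minimal test (with a placeholder client id) could look like this:

set serveroutput on
declare
  v_token varchar2(1000);
begin
  -- the client id below is a placeholder, not a real credential
  v_token := mendeley_call('<put_your_mendeley_client_id_here>');
  dbms_output.put_line('Token length: '||length(v_token));
end;
/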
Categories: DBA Blogs

Parallel Execution -- 2 PX Servers

Hemant K Chitale - Tue, 2015-03-03 09:51
I've posted a couple of examples (here and here) of Parallel Execution servers for Parallel Query.

How do we identify usage of Parallel Execution ?

I will update this post (and, possibly, subsequent post(s)) with a few methods.

The first one (as I've shown in my previous posts) is to look at the column PX_SERVERS_EXECUTIONS in either V$SQLSTATS or V$SQL.  This can identify the number of PX Servers used for an SQL (Query or DML).  However, there is a caveat when the SQL undergoes multiple executions -- the statistic on PX_SERVERS_EXECUTIONS may be cumulative (i.e. additive) across all the executions of the SQL.  UPDATE 13-Mar-15 : See the new post here.
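For example, a quick (purely illustrative) way to list statements in the shared pool that have used PX servers is:

select sql_id, executions, px_servers_executions
  from v$sqlstats
 where px_servers_executions > 0
 order by px_servers_executions desc;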

Another method is to look at the V$PX_PROCESS and V$PX_SESSION views.

Let me demonstrate this second method using the same SQL query from my previous blog post.

[oracle@localhost ~]$ sqlplus hemant/hemant

SQL*Plus: Release 11.2.0.2.0 Production on Tue Mar 3 23:34:37 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

HEMANT>select distinct sid from v$mystat;

SID
----------
197

HEMANT>select count(*) from v$px_process;

COUNT(*)
----------
0

HEMANT>select count(*) from v$px_session;

COUNT(*)
----------
0

HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
1

HEMANT>show parameter cpu

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 4
parallel_threads_per_cpu integer 4
resource_manager_cpu_allocation integer 4
HEMANT>set serveroutput off
HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select count(*) from v$px_process;

COUNT(*)
----------
16

SYS>select qcsid, req_degree, degree, count(*)
2 from v$px_session
3 group by qcsid, req_degree, degree
4 /

QCSID REQ_DEGREE DEGREE COUNT(*)
---------- ---------- ---------- ----------
197 1
197 16 16 16

SYS>

The query by the SYS user is from a different session while the "select /*+ PARALLEL */ count(*) from Large_Table;" is being executed by HEMANT.  This query is on V$PX_SESSION and shows only when the Parallel Query sessions are active -- i.e. running HEMANT's  parallel count(*) query.  (If I query V$PX_SESSION after the parallel count(*) completes, I won't get the information).

The above output demonstrates
(a) that there were no PX servers before I began the parallel count(*) query and there were 16 at the end -- 16 PX servers had been started and had not yet shut down by the time I queried V$PX_PROCESS (they will shut down after a while  ** note below).
(b) that my parallel count(*) query (executed by SID 197, which is the Query Co-ordinator -- represented by QCSID) DID request and use 16 PX server sessions (as evidenced in the output from the query on V$PX_SESSION).  Thus, what I claimed on the basis of PX_SERVERS_EXECUTIONS in my previous post is correct.

** Note : A few minutes later, I can see that the PX Servers have shut down.

HEMANT>select count(*) from v$px_process
2 /

COUNT(*)
----------
0

HEMANT>


Later, I will demonstrate how to join V$PX_PROCESS and V$PX_SESSION.
I will also demonstrate how you manage the number of PX Servers.

.
.
.

Categories: DBA Blogs

Different plan_hash_value same plan

Bobby Durrett's DBA Blog - Mon, 2015-03-02 15:38

I mentioned this same effect in an earlier post about SQL profiles: link

I get different plan_hash_value values for a query each time I run an explain plan or run the query.  I see this in queries whose plan includes a system generated temporary segment like this:

|   1 |  TEMP TABLE TRANSFORMATION   |                             |
...
|  72 |    TABLE ACCESS STORAGE FULL | SYS_TEMP_0FD9D668C_764DD84C |

For some reason the system generated temporary table name gets included in the plan_hash_value calculation.  This makes plan_hash_value a less than perfect way to compare two plans to see if they are the same.
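One rough workaround (just a sketch, not something my testselect package does) is to compare the plan rows themselves after masking the generated temp table name, for example:

select id, operation, options,
       regexp_replace(object_name, '^SYS_TEMP_.*$', 'SYS_TEMP_*') object_name
  from v$sql_plan
 where sql_id = '&sql_id'
 order by id;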

Last week I was using my testselect package to test the effect of applying a patch to fix bug 20061582.  I used testselect to grab 1160 select statements from production and got their plans with and without the patch applied on a development database.  I didn’t expect many if any plans to change based on what the patch does.  Surprisingly, 115 out of the 1160 select statements had a changed plan, but all the ones I looked at had the system generated temporary table names in their plan.

Now, I am going to take the queries that have different plans with and without the patch and execute them both ways.  I have a feeling that the plan differences are mainly due to system generated temp table names and their execution times will be the same with and without the patch.

I’ve run across other limitations of plan hash value as I mentioned in an earlier post: link

I’m still using plan_hash_value to compare plans but I have a list of things in my head that reminds me of cases where plan_hash_value fails to accurately compare two plans.

– Bobby

P.S. After posting this I realized that I didn’t know how many of the 115 select statements with plans that differed with and without the patch had system generated temp tables.  Now I know.  114 of the 115 have the string “TEMP TABLE TRANSFORMATION” in their plans.  So, really, there is only one select statement for which the patch may have actually changed its plan.

P.P.S. I reapplied the patch and verified that the one sql_id didn’t really change plans with the patch.  So, that means all the plan changes were due to the system generated name.  Also, all the execution times were the same except for one query that took 50 seconds to parse without the patch and 0 with the patch.  So, one of the queries with the system generated temp table name happened to benefit from the patch.  Very cool!

P.P.P.S This was all done on an 11.2.0.4 Exadata system.

Categories: DBA Blogs

Parallel Execution -- 1b The PARALLEL Hint and AutoDoP (contd)

Hemant K Chitale - Mon, 2015-03-02 09:38
Continuing the previous thread, having restarted the database again, with the same CPU_COUNT and missing I/O Calibration statistics  ....

The question this time is : What if the table level DoP is specifically 1 ?

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Mon Mar 2 23:22:28 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>show parameter cpu

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 4
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 4
SYS>show parameter parallel

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
fast_start_parallel_rollback string LOW
parallel_adaptive_multi_user boolean FALSE
parallel_automatic_tuning boolean FALSE
parallel_degree_limit string CPU
parallel_degree_policy string MANUAL
parallel_execution_message_size integer 16384
parallel_force_local boolean FALSE
parallel_instance_group string
parallel_io_cap_enabled boolean FALSE
parallel_max_servers integer 135
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_min_time_threshold string AUTO
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_servers_target integer 64
parallel_threads_per_cpu integer 2
recovery_parallelism integer 0
SYS>
SYS>select * from dba_rsrc_io_calibrate;

no rows selected

SYS>
SYS>connect hemant/hemant
Connected.
HEMANT>set serveroutput off
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
4

HEMANT>alter table large_table parallel 1;

Table altered.

HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
1

HEMANT>select /*+ PARALLEL */ count(*) from LARGE_TABLE;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 47m7qyrj6uzqn, child number 0
-------------------------------------
select /*+ PARALLEL */ count(*) from LARGE_TABLE

Plan hash value: 2085386270

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2622 (100)| | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4802K| 2622 (1)| 00:00:32 | Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| LARGE_TABLE | 4802K| 2622 (1)| 00:00:32 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: skipped because of IO calibrate statistics are missing


27 rows selected.

HEMANT>select px_servers_executions from v$sqlstats where sql_id='47m7qyrj6uzqn';

PX_SERVERS_EXECUTIONS
---------------------
8

HEMANT>

Aaha ! Again ! The same SQL statement, the same SQL_ID, the same Execution Plan (Plan Hash Value) and Oracle chose to use 8 PX Servers for the query !  Again, ignoring the table level DoP (of 1)

So, once again, we see that Oracle actually computes a DoP that looks like it is CPU_COUNT x PARALLEL_THREADS_PER_CPU. Let's verify this.

HEMANT>connect / as sysdba
Connected.
SYS>alter system set parallel_threads_per_cpu=4;

System altered.

SYS>alter system flush shared_pool;

System altered.

SYS>connect hemant/hemant
Connected.
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
1

HEMANT>set serveroutput off
HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8b0ybuspqu0mm, child number 0
-------------------------------------
select /*+ PARALLEL */ count(*) from Large_Table

Plan hash value: 2085386270

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 1311 (100)| | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4802K| 1311 (1)| 00:00:16 | Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| LARGE_TABLE | 4802K| 1311 (1)| 00:00:16 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: skipped because of IO calibrate statistics are missing


27 rows selected.

HEMANT>select px_servers_executions from v$sqlstats where sql_id='8b0ybuspqu0mm';

PX_SERVERS_EXECUTIONS
---------------------
16

HEMANT>

YES SIR ! Oracle chose to use 16 PX Servers this time. So that does look like CPU_COUNT x PARALLEL_THREADS_PER_CPU.  Have you also noticed the COST ?  The COST has also dropped to half.  So, the COST is also computed based on the number of PX Servers that it expects to be able to grab and use.

.
.
.


Categories: DBA Blogs

Webcast - Oracle Database 12c High Availability New Features

Organizations today are dependent on IT to run efficient operations, quickly analyze information and compete more effectively. Consequently, it is essential that their IT infrastructure and databases...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Even More Oracle Database Health Checks with ORAchk 12.1.0.2.1 and 12.1.0.2.3 (Beta)

As we have discussed before, it can be a challenge to quantify how well your database is meeting operational expectations and identify areas to improve performance. Database health checks are...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Log Buffer #412, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-02-27 10:58

This Log Buffer Edition makes its way through the realms of Oracle, SQL Server and MySQL and brings you a selection of blog posts.

Oracle:

Introducing Oracle Big Data Discovery Part 3: Data Exploration and Visualization

FULL and NO_INDEX Hints

Base64 Encode / Decode with Python (or WebLogic Scripting Tool) by Frank Munz

Why I’m Excited About Oracle Integration Cloud Service – New Video

Reminder: Upgrade Database 12.1.0.1 to 12.1.0.2 by July 2015

SQL Server:

An article about how we underestimate the power of joins and degrade our query performance by not using proper joins

Most large organizations have implemented one or more big data applications. As more data accumulates, internal users and analysts execute more reports and forecasts, which leads to additional queries and analysis, and more reporting.

How do you develop and deploy your database?

A database must be able to maintain and enforce the business rules and relationships in data in order to maintain the data model.

Error handling with try-catch-finally in PowerShell for SQL Server

MySQL:

MySQL Enterprise Monitor 3.0.20 has been released

MySQL Cluster 7.4 is GA!

Connector/Python 2.1.1 Alpha released with C Extension

Worrying about the ‘InnoDB: detected cycle in LRU for buffer pool (…)’ message?

MySQL Cluster 7.4 GA: 200 Million QPS, Active-Active Geographic Replication and more

Categories: DBA Blogs

Parallel Execution -- 1 The PARALLEL Hint and AutoDoP

Hemant K Chitale - Thu, 2015-02-26 09:29
The behaviour of the PARALLEL Hint has changed subtly but significantly in 11.2.  From the documentation :
Beginning with Oracle Database 11g Release 2 (11.2.0.1), the PARALLEL and NO_PARALLEL hints are statement-level hints and supersede the earlier object-level hints .......... If you omit integer, then the database computes the degree of parallelism.

Further down, the documentation states :
This hint overrides the value of the PARALLEL_DEGREE_POLICY initialization parameter. 
and 
PARALLEL: The statement always is run parallel, and the database computes the degree of parallelism, which can be 2 or greater.

It is important to note that the Statement Level PARALLEL Hint *overrides* the PARALLEL_DEGREE_POLICY.
So, even if PARALLEL_DEGREE_POLICY is set to MANUAL, implying that automatic degree of parallelism is disabled, the PARALLEL Hint, itself, allows Oracle to auto-compute a DoP.
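For reference, and purely as an illustration, the two statement-level forms of the hint look like this:

select /*+ PARALLEL */    count(*) from Large_Table;   -- DoP computed by the database
select /*+ PARALLEL(8) */ count(*) from Large_Table;   -- DoP explicitly requested as 8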

What further complicates understanding of the behaviour is a reading of Oracle Support Note 1269321.1 that implies that if I/O calibration statistics are missing, Oracle does not use the automatic degree of parallelism feature.
So, one would assume that if I/O Calibration is not done, with the "automatic degree of parallelism" feature not being used, the PARALLEL Hint would not compute any Auto DoP !
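(As a side note, whether calibration statistics exist can also be checked via V$IO_CALIBRATION_STATUS; gathering them is done with the DBMS_RESOURCE_MANAGER.CALIBRATE_IO procedure. A quick check, for reference:)

SQL> select status, calibration_time from v$io_calibration_status;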

Let's run a simple test case :

HEMANT>select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

HEMANT>
HEMANT>show parameter parallel;

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
fast_start_parallel_rollback string LOW
parallel_adaptive_multi_user boolean FALSE
parallel_automatic_tuning boolean FALSE
parallel_degree_limit string CPU
parallel_degree_policy string MANUAL
parallel_execution_message_size integer 16384
parallel_force_local boolean FALSE
parallel_instance_group string
parallel_io_cap_enabled boolean FALSE
parallel_max_servers integer 40
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_min_time_threshold string AUTO
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_servers_target integer 16
parallel_threads_per_cpu integer 2
recovery_parallelism integer 0
HEMANT>
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
4

HEMANT>select /*+ PARALLEL */ count(*) from LARGE_TABLE;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 47m7qyrj6uzqn, child number 0
-------------------------------------
select /*+ PARALLEL */ count(*) from LARGE_TABLE

Plan hash value: 2085386270

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 10488 (100)| | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4802K| 10488 (1)| 00:02:06 | Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| LARGE_TABLE | 4802K| 10488 (1)| 00:02:06 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: skipped because of IO calibrate statistics are missing


27 rows selected.

HEMANT>select px_servers_executions from v$sqlstats where sql_id='47m7qyrj6uzqn';

PX_SERVERS_EXECUTIONS
---------------------
2

HEMANT>
HEMANT>select * from dba_rsrc_io_calibrate;

no rows selected

HEMANT>

Apparently, I/O Calibration statistics are not present ("are missing"). And yet, Oracle chose to use 2 PX Servers (not 4, not 1) for the query.  Isn't this confusing ?  Is AutoDoP used or is it not used ?

Let's make a change somewhere (else ?)

HEMANT>show parameter cpu_count

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 1

HEMANT>

Let's try on a server with more CPUs.  I reconfigure the same VM to run with 4 "CPUs" (cores) and restart the VM and database instance.

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Thu Feb 26 23:18:47 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>show parameter cpu

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 4
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 4
SYS>show parameter parallel

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
fast_start_parallel_rollback string LOW
parallel_adaptive_multi_user boolean FALSE
parallel_automatic_tuning boolean FALSE
parallel_degree_limit string CPU
parallel_degree_policy string MANUAL
parallel_execution_message_size integer 16384
parallel_force_local boolean FALSE
parallel_instance_group string
parallel_io_cap_enabled boolean FALSE
parallel_max_servers integer 135
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_min_time_threshold string AUTO
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_servers_target integer 64
parallel_threads_per_cpu integer 2
recovery_parallelism integer 0
SYS>

We can now see that CPU_COUNT, PARALLEL_MAX_SERVERS and PARALLEL_SERVERS_TARGET have all gone up.

SYS>connect hemant/hemant
Connected.
HEMANT>set serveroutput off
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
4

HEMANT>select /*+ PARALLEL */ count(*) from LARGE_TABLE;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 47m7qyrj6uzqn, child number 0
-------------------------------------
select /*+ PARALLEL */ count(*) from LARGE_TABLE

Plan hash value: 2085386270

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2622 (100)| | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4802K| 2622 (1)| 00:00:32 | Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| LARGE_TABLE | 4802K| 2622 (1)| 00:00:32 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: skipped because of IO calibrate statistics are missing


27 rows selected.

HEMANT>
HEMANT>select px_servers_executions from v$sqlstats where sql_id='47m7qyrj6uzqn';

PX_SERVERS_EXECUTIONS
---------------------
8

HEMANT>

Aaha! The same SQL statement, the same SQL_ID, the same execution plan (plan hash value), and yet Oracle chose to use 8 (not 1, not 4, not 2) PX servers for the query!

So, there IS some way that AutoDoP is being used even when I/O calibration statistics are missing. Is this AutoDoP simply a function of CPU_COUNT x PARALLEL_THREADS_PER_CPU?
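As a quick sanity check (this is only my hypothesis written as a query, not a documented formula), the product of the two parameters can be computed directly and compared with the PX server counts observed above:

SELECT TO_NUMBER(c.value) * TO_NUMBER(t.value) AS expected_default_dop
FROM   v$parameter c, v$parameter t
WHERE  c.name = 'cpu_count'
AND    t.name = 'parallel_threads_per_cpu';

-- returns 2 on the 1-CPU VM and 8 on the 4-CPU VM, matching
-- the PX_SERVERS_EXECUTIONS values seen in the two runs.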

UPDATE: Herlindo QV also confirmed to me on Twitter: "it seems that way and in RAC the formula looks like (CPU_COUNT x PARALLEL_THREADS_PER_CPU) x RAC nodes" and, later, when I asked him to check with PARALLEL_FORCE_LOCAL=TRUE: "right, restricts parallel to just one instance. The tricky part is how to choose the best DOP for each unique scenario".
.
.
.

Categories: DBA Blogs

Virtual CPUs with Google Compute Engine

Pythian Group - Wed, 2015-02-25 12:12

Continuing my series on virtual CPU behavior in Amazon Web Services, Amazon Web Services HVM instances, and Microsoft Azure, I'm taking a look at Google Compute Engine (GCE). GCE is a relative newcomer to the public cloud world, having become generally available in December 2013. It does have some interesting features, including transparent maintenance through live migration and automatic sustained-use discounts without upfront commitments.

Unlike Amazon or Microsoft, Google is very upfront about their vCPU definition.

For the n1 series of machine types, a virtual CPU is implemented as a single hyperthread on a 2.6GHz Intel Sandy Bridge Xeon or Intel Ivy Bridge Xeon (or newer) processor. This means that the n1-standard-2 machine type will see a whole physical core.

I still believe calling such a hyperthread a “virtual CPU” is misleading. When creating a virtual machine in a non-cloud VM platform, 1 virtual CPU = 1 physical core. Plain and simple. But when using a cloud platform, I need 2 virtual CPUs to get that same physical core.

[Image: cpu-comparison]

Anyways, off to run some CPU tests. n1-standard-4 is a close match to the m3.xlarge instances previously tested, so I’ll try that.

Getting set up on Google Compute Engine

I already signed up for Google Compute Engine's free trial and created a project I'm calling marc-cputest. Then I installed the gcloud compute command-line tools.

[marc@quartz ~]$ gcloud auth login --no-launch-browser
Go to the following link in your browser:

https://accounts.google.com/o/oauth2/auth?redirect_uri=urn%3Aietf%(redacted)&access_type=offline

Enter verification code: (redacted)
Saved Application Default Credentials.

You are now logged in as [fielding@pythian.com].
Your current project is [None].  You can change this setting by running:
  $ gcloud config set project PROJECT
[marc@quartz ~]$ gcloud config set project marc-cputest
[marc@quartz ~]$ gcloud config set compute/zone us-central1-a
[marc@quartz ~]$ gcloud compute instances create cpu-test-n4 --image centos-6 --machine-type "n1-standard-4" --zone us-central1-a
Created [https://www.googleapis.com/compute/v1/projects/marc-cputest/zones/us-central1-a/instances/cpu-test-n4].
NAME        ZONE          MACHINE_TYPE  INTERNAL_IP    EXTERNAL_IP   STATUS
cpu-test-n4 us-central1-a n1-standard-4 10.240.222.194 104.154.75.96 RUNNING
[marc@quartz ~]$ gcloud compute ssh cpu-test-n4
WARNING: You do not have an SSH key for Google Compute Engine.
WARNING: [/usr/bin/ssh-keygen] will be executed to generate a key.
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/marc/.ssh/google_compute_engine.
Your public key has been saved in /home/marc/.ssh/google_compute_engine.pub.
The key fingerprint is:
(redacted)
Updated [https://www.googleapis.com/compute/v1/projects/marc-cputest].
Warning: Permanently added '104.154.75.96' (RSA) to the list of known hosts.
Warning: Permanently added '104.154.75.96' (RSA) to the list of known hosts.

OK, instance all set and connected. As a CentOS 6 image it doesn’t allow SSH root logins by default, so attempting to set up a gcloud environment as a root user will get you “permission denied” errors on SSH. Serves me right for trying to run these tools as root in the first place :-).

Looking around

Checking what they got us:

[marc@cpu-test-n4 ~]$ egrep '(processor|model name|cpu MHz|physical id|siblings|core id|cpu cores)' /proc/cpuinfo
processor       : 0
model name      : Intel(R) Xeon(R) CPU @ 2.60GHz
cpu MHz         : 2600.000
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 2
processor       : 1
model name      : Intel(R) Xeon(R) CPU @ 2.60GHz
cpu MHz         : 2600.000
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 2
processor       : 2
model name      : Intel(R) Xeon(R) CPU @ 2.60GHz
cpu MHz         : 2600.000
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 2
processor       : 3
model name      : Intel(R) Xeon(R) CPU @ 2.60GHz
cpu MHz         : 2600.000
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 2

Google has redacted the exact CPU model numbers, but has clearly marked this as a 2-core system with core IDs 0 and 1.

The single-CPU case
[marc@cpu-test-n4 ~]$ taskset -pc 0 $$
pid 1558's current affinity list: 0-3
pid 1558's new affinity list: 0
[marc@cpu-test-n4 ~]$  dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null
2170552320 bytes (2.2 GB) copied, 14.3427 s, 151 MB/s
[marc@cpu-test-n4 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 29.3081 s, 74.1 MB/s
2170552320 bytes (2.2 GB) copied, 29.3065 s, 74.1 MB/s

We get a nice boost in raw CPU numbers compared to the 120 MB/s I saw in AWS. With two processes sharing this CPU, each one sees a little less than half the single-process throughput.

Sharing the cores (or trying to)
[marc@cpu-test-n4 ~]$ taskset -pc 0,1 $$
pid 1558's current affinity list: 0
pid 1558's new affinity list: 0,1
[marc@cpu-test-n4 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
[1] 1803
[2] 1805
[marc@cpu-test-n4 ~]$ 2170552320 bytes (2.2 GB) copied, 14.6959 s, 148 MB/s
2170552320 bytes (2.2 GB) copied, 14.7139 s, 148 MB/s

This is interesting: we see almost full-speed throughput even though processors 0 and 1 are reported to share a core. With processors 0 and 2 the situation is the same.

[marc@cpu-test-n4 ~]$ taskset -pc 0,2 $$
pid 1558's current affinity list: 0,1
pid 1558's new affinity list: 0,2
[marc@cpu-test-n4 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
[1] 1830
[2] 1833
[marc@cpu-test-n4 ~]$ 2170552320 bytes (2.2 GB) copied, 14.6683 s, 148 MB/s
2170552320 bytes (2.2 GB) copied, 14.6692 s, 148 MB/s

Is the CPU scheduler ignoring my taskset commands? I ran mpstat with 2-second samples during the test to see the actual CPU usage:

[marc@cpu-test-n4 ~]$ mpstat -P ALL 2
...
06:08:44 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
06:08:46 PM  all   46.31    0.00    3.75    0.00    0.00    0.00    0.00    0.00   49.94
06:08:46 PM    0   93.50    0.00    6.50    0.00    0.00    0.00    0.00    0.00    0.00
06:08:46 PM    1   92.00    0.00    8.00    0.00    0.00    0.00    0.00    0.00    0.00
06:08:46 PM    2    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
06:08:46 PM    3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
...
06:08:52 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
06:08:54 PM  all   46.75    0.00    3.25    0.00    0.00    0.00    0.00    0.00   50.00
06:08:54 PM    0   93.47    0.00    6.53    0.00    0.00    0.00    0.00    0.00    0.00
06:08:54 PM    1    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
06:08:54 PM    2   93.50    0.00    6.50    0.00    0.00    0.00    0.00    0.00    0.00
06:08:54 PM    3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00

So the taskset commands are working: when we ask for CPUs 0 and 1, we get them, but the throughput shows that cores aren't actually being shared. It means that the CPUs in the virtual machine are not statically bound to hardware threads as they were under AWS. I'd call that a win, as it gives more consistent performance even when the guest operating system is forced to make poor CPU scheduling decisions, as in this case.

[marc@cpu-test-n4 ~]$ taskset -pc 0-3 $$
pid 1558's current affinity list: 0,2
pid 1558's new affinity list: 0-3
[marc@cpu-test-n4 ~]$ for i in {1..4}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 22.9823 s, 94.4 MB/s
2170552320 bytes (2.2 GB) copied, 22.9914 s, 94.4 MB/s
2170552320 bytes (2.2 GB) copied, 22.9915 s, 94.4 MB/s
2170552320 bytes (2.2 GB) copied, 23.1333 s, 93.8 MB/s

This is more the throughput we would expect with two shared cores.

Lessons learned

Over the course of these tests, I’ve discovered a few things:

  • Like its competitors, Google maps each virtual CPU to a hyperthread rather than a whole core, but it is very upfront about this behavior.
  • Actual throughput for a simple gzip workload is excellent.
  • Google Compute Engine has an abstraction layer in front of CPUs that dynamically schedules tasks between CPU threads, in addition to the regular scheduler in the virtual machine. In my testing, it allocates tasks efficiently across CPU cores, even when the OS scheduler is configured suboptimally.
Categories: DBA Blogs

Exadata X5 – A Practical Point of View of the New Hardware and Licensing

Pythian Group - Wed, 2015-02-25 12:10

Oracle recently announced its latest iteration of Exadata – X5-2. It includes a refresh of the hardware to the most recent Xeon® E5-2699 v3 CPUs. These new CPUs boost the total cores count in a full rack to 288. This is higher than the current 8 socket “big machine” version X4-8, which has only 240 cores.

But the most exciting part is the all-flash version of Exadata. In the previous generation – X4 – Oracle had to switch from 15K to 10K RPM drives in order to boost capacity from 600 GB to 1200 GB per hard drive and keep disk space higher than the flash cache size. At the time of the X4 announcement, we were already wondering why Oracle was still offering high-speed disks rather than switching to all flash, and now we know why: that type of high-performance flash wasn't quite ready.

Maintaining high I/O rates over long periods of time required some changes to the ILOM in order to manage cooling fan speeds based on the many individual temperature sensors inside the flash cards. Removing the SAS controller and using the new NVMe connectivity resulted in much higher bandwidth per drive – 3.2 GBytes/sec vs. the old 1.2 GBytes/sec over SAS.

With temperature and bandwidth sorted out, we now have a super-high-performance option (EF – Extreme Flash) for Exadata, which delivers a stunning 263 GB/sec uncompressed scan speed in a full rack. The performance gap between the High Capacity and Extreme Flash options is now much larger, so the high-performance option in Exadata X5 is finally viable. In Exadata X4 it made so little difference that it was pointless.

[Image: x4 vs x5]

The one thing I wonder about with the X5 announcement is why the X5-2 storage server still uses the rather old and outdated 8-core CPUs. I've seen many cases where a Smart Scan on an HCC table is CPU bound on the storage server even when reading from spinning disk. I am going to guess that there's some old CPU inventory to clean up. But that may not end up being such a problem (see the "all columnar" flash cache feature).

But above all, the most important change was the incremental licensing option. With 36 cores per server, even the 1/8th rack configuration ran into multi-million-dollar license costs, and in many cases was simply too much for the problem at hand.

The new smallest configuration is:

  • 1/8th rack, with 2 compute nodes
  • 8 cores enabled per compute node (16 total)
  • 256 GB RAM per node (upgradable to 768 GB per node)
  • 3 storage servers with only half the cores, disks and flash enabled

Then you can license additional cores as you need them, two cores at a time, similar to how ODA capacity-on-demand licensing works. You cannot reduce the number of licensed cores.
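To put rough numbers on it (my own arithmetic, assuming the usual 0.5 processor core factor for x86 Xeon): the new 16-core minimum works out to 16 x 0.5 = 8 processor licenses for the database and for each option, whereas licensing every core in the two compute nodes of a 1/8th rack would have meant 2 x 36 x 0.5 = 36 licenses. From the minimum you then grow in increments of 2 cores, i.e. one additional processor license at a time.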

The licensing rules changes go even further. Now you can mix & match compute and storage servers to create even more extreme options. Some non-standard examples:

  • Extreme Memory – more compute nodes with max RAM, reduced licensed cores
  • Extreme Storage – replace compute node with storage nodes, reduced licensed cores

[Image: x5 custom configurations (the original post includes a link to a video)]

In conclusion, Oracle Exadata X5 configuration options and the changes to licensing allow an architect to craft a system that meets almost any need and allows for easy, small-step increments in the future, potentially without any hardware changes.

There are many more exciting changes in Oracle 12c, Exadata X5 and the new storage server software which I may cover in the future as I explore them in detail.

Categories: DBA Blogs

Log Buffer #411, A Carnival of the Vanities for DBAs

Pythian Group - Wed, 2015-02-25 12:00

This Log Buffer Edition brings you some blog posts from Oracle, SQL Server and MySQL.

Oracle:

Suppose you have a global zone with multiple zpools that you would like to convert into a native zone.

The digital revolution is creating abundance in almost every industry—turning spare bedrooms into hotel rooms, low-occupancy commuter vehicles into taxi services, and free time into freelance time

Every time I attend a conference, the Twitter traffic about said conference is obviously higher.  It starts a couple weeks or even months before, builds steadily as the conference approaches, and then hits a crescendo during the conference.

Calling All WebLogic Users: Please Help Us Improve WebLogic Documentation!

Top Two Cloud Security Concerns: Data Breaches and Data Loss

SQL Server:

This article describes a way to identify the user who truncated the table & how you can recover the data.

When SQL Server 2014 was released, it included Hekaton, Microsoft’s much talked about memory-optimized engine that brings In-Memory OLTP into play.

Learn how you can easily spread your backup across multiple files.

Daniel Calbimonte has written a code comparison for MariaDB vs. SQL Server as it pertains to how to comment, how to create functions and procedures with parameters, how to store query results in a text file, how to show the top n rows in a query, how to use loops, and more.

The article shows a simple way to schedule index rebuilds and reorganizations, using a scheduled job, for a SQL Server instance with 106 databases used by one application.

MySQL:

How to setup a PXC cluster with GTIDs (and have async slaves replicating from it!)

vCloud Air and business-critical MySQL

MySQL Dumping and Reloading the InnoDB Buffer Pool

How to benchmark MongoDB

MySQL Server on SUSE 12

Categories: DBA Blogs

SQL Server 2014 Cumulative Update 6

Pythian Group - Wed, 2015-02-25 11:59

Hello everyone,

Just a quick note to let you know that this week, while most of North America was enjoying a break, Microsoft released the 6th cumulative update for SQL Server 2014. This update contains fixes for 64 different issues, distributed as follows:

[Image: SQL 2014 Cumulative Update 6 fix distribution]

As the name implies, this is a cumulative update, which means it is not necessary to install the previous five if you don't already have them. Please remember to test any update thoroughly before applying it to production.
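If you want a quick way to confirm which build an instance is running before and after patching, something along these lines does the job (a simple check, nothing specific to this CU):

SELECT SERVERPROPERTY('ProductVersion') AS build,
       SERVERPROPERTY('ProductLevel')   AS product_level,
       SERVERPROPERTY('Edition')        AS edition;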

The cumulative update and the full release notes can be found here: https://support.microsoft.com/kb/3031047/en-us?wa=wsignin1.0


Categories: DBA Blogs

Partner Webcast – Oracle Business Process Management 12c : The Game Changer for your Business

The Oracle Business Process Management Suite 12c (BPM) is one of the most complete and feature-rich BPM suites on the market. There have been a wide variety of changes...

We share our skills to maximize your revenue!
Categories: DBA Blogs