
Pythian Group

Official Pythian Blog - Love Your Data

How to Migrate a Database Using GoldenGate

Tue, 2014-12-30 09:06

There are many ways to migrate a database from one server to another, such as Data Pump, RMAN, etc. Combining Data Pump with GoldenGate for a cross-platform migration can minimize your downtime to as little as three minutes.

This method can be used for any size database from MB to TB level. Here is a simple sample to demonstrate this idea.

As a prerequisite, I assume that GoldenGate has already been configured on both the source and target databases. To simulate an OLTP database, my source database “SOURCE” has a job that keeps inserting records into the table HOWIE.TEST, as shown below.

CREATE PROCEDURE howie.insert_test
IS
BEGIN
   insert into test values(test_seq.nextval,sysdate);
   commit;
END;
/ 
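
The insert job itself isn't shown in the post; a minimal sketch of one way it could be scheduled (the job name and the one-minute interval are my assumptions, matching the timestamps below) looks like this:

BEGIN
   -- Hypothetical job: calls the procedure above once a minute to simulate OLTP activity
   DBMS_SCHEDULER.create_job(
      job_name        => 'HOWIE.INSERT_TEST_JOB',
      job_type        => 'STORED_PROCEDURE',
      job_action      => 'HOWIE.INSERT_TEST',
      repeat_interval => 'FREQ=MINUTELY;INTERVAL=1',
      enabled         => TRUE);
END;
/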

SQL> SELECT * FROM HOWIE.TEST ORDER BY ID;

        ID MOD_DATE
---------- -------------------
         1 12/13/2014 21:19:17
         2 12/13/2014 21:24:03
         3 12/13/2014 21:31:11
         
		 .....................
           
        21 12/15/2014 19:14:25
        22 12/15/2014 19:15:25
        23 12/15/2014 19:16:25

23 rows selected.

2nd step, start the capture process on the source database and stop the Replicat process on the target database.

SOURCE:

GGSCI (11gGG1) 4> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT1        44:56:46      00:00:01

TARGET:

GGSCI (11gGG2) 6> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED
REPLICAT    STOPPED     REP1        00:00:00      00:00:53
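
The GGSCI commands for this step aren't shown above; using the group names from the INFO ALL output, they would look roughly like this (a sketch, not the original commands):

On the source (11gGG1):
GGSCI> START EXTRACT EXT1

On the target (11gGG2):
GGSCI> STOP REPLICAT REP1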

3rd step, export the source database using Data Pump with FLASHBACK_SCN.

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
     284867
	 
[ggadmin@11gGG1 11.2.0]$ expdp directory=DATA_PUMP_DIR dumpfile=source.dmp logfile=source.log schemas=HOWIE flashback_scn=284867




4th step, transfer the dump file to the target server.

[ggadmin@11gGG1 11.2.0]$ scp /u01/app/oracle/admin/SOURCE/dpdump/source.dmp 11gGG2:/u01/app/oracle/admin/TARGET/dpdump/

5th step, import the dumpfile into the target database.

[ggadmin@11gGG2 11.2.0]$ impdp directory=DATA_PUMP_DIR dumpfile=source.dmp logfile=source.log schemas=HOWIE

6th step, verify the data in the target database

SQL> SELECT * FROM HOWIE.TEST ORDER BY ID;

        ID MOD_DATE
---------- -------------------
         1 12/13/2014 21:19:17
         2 12/13/2014 21:24:03
         3 12/13/2014 21:31:11

		 ...............

    	21 12/15/2014 19:14:25
        22 12/15/2014 19:15:25
        23 12/15/2014 19:16:25

23 rows selected.

7th step, start the Replicat process on the target database using ATCSN.

GGSCI (11gGG2) 8> start rep rep1 atcsn 284867

Sending START request to MANAGER ...

8th step, confirm the data has been synced

SQL> SELECT * FROM HOWIE.TEST ORDER BY ID;

        ID MOD_DATE
---------- -------------------
         1 12/13/2014 21:19:17
         2 12/13/2014 21:24:03
         3 12/13/2014 21:31:11
         4 12/13/2014 21:44:33
         5 12/13/2014 21:45:33
         6 12/13/2014 21:46:33
         7 12/13/2014 21:47:33

		 ...............

        60 12/15/2014 19:53:33
        61 12/15/2014 19:54:33
        62 12/15/2014 19:55:33
        63 12/15/2014 19:56:33

63 rows selected.

Action Plan Summary (both the source and target databases are 11g)

Step 1: Source: configure GoldenGate for the capture process. Target: configure GoldenGate for the Replicat process.
Step 2: Source: start the capture process. Target: do not start Replicat yet.
Step 3: Source: start the export from the source database (note the SCN when the export starts).
Step 4: Source: export completed; start SCP of the dump file to the target server.
Step 5: Target: SCP completed; start the import on the target database using the dump file.
Step 6: Target: import finished.
Step 7: Target: start Replicat using ATCSN.
Step 8: Target: Replicat applies all outstanding changes.
Step 9: Source: when the capture lag is zero, stop the capture process. Target: wait until Replicat has applied all changes and its lag is zero, then stop Replicat (see the GGSCI sketch below).
Step 10: Both: redirect database connections to the target database.
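
Step 9 above depends on checking lag before stopping the processes; a rough sketch of the GGSCI commands that report and act on it, using the group names from this example:

On the source:
GGSCI> LAG EXTRACT EXT1
GGSCI> STOP EXTRACT EXT1

On the target:
GGSCI> LAG REPLICAT REP1
GGSCI> STOP REPLICAT REP1
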
Categories: DBA Blogs

Merging Apps Patches in Oracle EBS R12.2

Tue, 2014-12-30 09:04

It’s public knowledge that the traditional patching tool in Oracle EBS, “adpatch”, has been replaced with the “adop” utility. It’s also known that the adop utility automatically merges patches when more than one patch is specified in the command line arguments. So what’s the need for a blog post on merging patches when it’s taken care of automatically?

This blog is for people who like to dig a little deeper into EBS to shave some downtime off their upgrades. We save some downtime if we merge the patches ahead of time instead of letting adop do it during the upgrade window. This is especially true when you are applying big patches like 12.2.4.

Merging patches is done using the same utility as in earlier versions, “admrgpch”, except that a few extra steps are needed after merging the patches.

In EBS 12.2, after merging the patches using admrgpch, we also need to copy the actual unzipped patches that we merged into the destination directory. This is required because the adop utility seems to look for these patches during the prepare phase. If you don’t copy the unzipped patch directories, you can still apply the merged patch, but when you run adop phase=prepare during the next patching cycle, it will fail because it looks for the actual patch directories inside the merged patch directory.

Here is what a sample merging procedure looks like in EBS R12.2:

# merge patches 111111 & 222222
$ pwd
  /u01/EBS/fs_ne/EBSapps/patch
$ ls
  111111 222222 
$ mkdir dest
$ admrgpch -s /u01/EBS/fs_ne/EBSapps/patch -d /u01/EBS/fs_ne/EBSapps/patch/dest
$ cd dest
$ pwd
  /u01/EBS/fs_ne/EBSapps/patch/dest
$ ls
  fnd u_merged.drv 

# After admrgpch is finished, we need to copy patch directories into the dest dir

$ cd ..
$ mv 111111 /u01/EBS/fs_ne/EBSapps/patch/dest
$ mv 222222 /u01/EBS/fs_ne/EBSapps/patch/dest
$ cd /u01/EBS/fs_ne/EBSapps/patch/dest
$ ls
  111111 222222 fnd u_merged.drv

# Now you can apply the merged patch using adop phase=apply
Categories: DBA Blogs

Log Buffer #403, A Carnival of the Vanities for DBAs

Tue, 2014-12-30 09:02

As 2014 draws to its end, this Log Buffer edition looks back proudly at some of the blog posts from this week, covering what’s happening in and around the database field.

Oracle:

Fusion Applications provides web services that allow external systems to integrate with Fusion Applications.

OEM 12c Release 4 has several new EM CLI verbs, including manage_agent_partnership.

To reflect the Oracle Retail enterprise applications newest code base, the 14.1 release of the Oracle Retail application enterprise includes new End User documents, considerable updates to existing End User documentation sets, and a wide range of new White Papers and Technical Papers.

If you programmatically change data in your Oracle MAF application then you need to ensure the UI reflects those data changes.

Configuring MDS Customisation Layer and Layer Value Combination in ADF.

SQL Server:

15 Quick Short Interview Questions Useful When Hiring SQL Developers.

Business Intelligence architect, Analysis Services Maestro, and author Bill Pearson exposes the DAX SUM() and SUMX() functions, comparing and contrasting the two.

SQL Server Data Aggregation for Data with Different Sampling Rates.

SSRS continues to use SET FMTONLY ON even though it has many problems. How can we cope?

When a hospital’s mission-critical database fails at Christmas, disaster for the hospital – and its hapless DBA – seems certain.

MySQL:

Does your dataset consist of InnoDB tables with large BLOB data such that the data is stored in external BLOB pages?

InnoDB crash recovery speed in MySQL 5.6.

Somebody wanted to know how to find any non-unique indexes in MySQL’s information_schema.

What is a data type?

File carving methods for the MySQL DBA.

Categories: DBA Blogs

OLTP type 64 compression and ‘enq: TX – allocate ITL entry’ on Exadata

Mon, 2014-12-22 11:33

Recently we’ve seen a strange problem with deadlocks in a client database on Exadata, Oracle version 11.2.0.4. Wait event analysis showed that sessions were waiting on the “enq: TX – allocate ITL entry” event. It was strange because there were at most two sessions making DMLs, and at least two ITL slots are available in the affected tables’ blocks. I made some block dumps and found that the affected blocks contain OLTP-compressed data, Compression Type = 64 (DBMS_COMPRESSION Constants – Compression Types). The table actually has the “compress for query high” attribute, but direct path inserts have never been used, so I wasn’t expecting any compressed data here. Compression Type 64 is a very specific type: Oracle migrates data out of HCC compression units into Type 64 compression blocks when HCC-compressed data is updated. We made some tests and were able to reproduce Type 64 compression without direct path operations. Here is one of the test cases. An MSSM tablespace has been used, but the problem is reproducible with ASSM too.

create table z_tst(num number, rn number, name varchar2(200)) compress for query high partition by list(num)
(
partition p1 values(1),
partition p2 values(2));

Table created.

insert into z_tst select mod(rownum , 2) + 1, rownum, lpad('1',20,'a') from dual connect by level <= 2000;

2000 rows created.

commit;

Commit complete.

select dbms_compression.get_compression_type(user, 'Z_TST', rowid) comp, count(*)  cnt from Z_tst
group by dbms_compression.get_compression_type(user, 'Z_TST', rowid);

      COMP        CNT
---------- ----------
        64       2000

select  dbms_rowid.rowid_block_number(rowid) blockno, count(*) cnt from z_tst a
group by dbms_rowid.rowid_block_number(rowid);

   BLOCKNO        CNT
---------- ----------
      3586        321
      2561        679
      3585        679
      2562        321

select name, value from v$mystat a, v$statname b where a.statistic# = b.statistic# and lower(name) like '%compress%' and value != 0;

NAME                                                    VALUE
-------------------------------------------------- ----------
heap block compress                                        14
HSC OLTP Compressed Blocks                                  4
HSC Compressed Segment Block Changes                     2014
HSC OLTP Non Compressible Blocks                            2
HSC OLTP positive compression                              14
HSC OLTP inline compression                                14
EHCC Block Compressions                                     4
EHCC Attempted Block Compressions                          14

alter system dump datafile 16 block min 2561 block max 2561;

We can see that all rows are compressed with compression type 64. From the session statistics we can see that HCC was in place before the data was migrated into OLTP compressed blocks. I think this is not expected behavior and there should not be any compression involved at all. Let’s take a look into the block dump:

Block header dump:  0x04000a01
 Object id on Block? Y
 seg/obj: 0x6bfdc  csc: 0x06.f5ff8a1  itc: 2  flg: -  typ: 1 - DATA
     fsl: 0  fnx: 0x0 ver: 0x01

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0055.018.0002cd54  0x00007641.5117.2f  --U-  679  fsc 0x0000.0f5ffb9a
0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
bdba: 0x04000a01
data_block_dump,data header at 0x7fbb48919a5c
===============
tsiz: 0x1fa0
hsiz: 0x578
pbl: 0x7fbb48919a5c
     76543210
flag=-0----X-
ntab=2
nrow=680
frre=-1
fsbo=0x578
fseo=0x5b0
avsp=0x6
tosp=0x6
        r0_9ir2=0x1
        mec_kdbh9ir2=0x1
                      76543210
        shcf_kdbh9ir2=----------
                  76543210
        flag_9ir2=--R-LNOC      Archive compression: N
                fcls_9ir2[3]={ 0 32768 32768 }
                perm_9ir2[3]={ 0 2 1 }

It’s a bit odd that avsp (available space) and tosp (total space) = 6 bytes. So there is no free space in the block at all, whereas I was expecting to see the 10% pctfree default here since it’s OLTP compression.
Let’s try to update two different rows in the same type 64 compressed block:

select rn from z_tst where DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) = 3586 and rownum <= 4;

        RN
----------
      1360
      1362
      1364
      1366
From the first session:
update z_tst set name = 'a' where rn = 1360;
From the second:
update z_tst set name = 'a' where rn = 1362;
-- waiting here

Second session waits on “enq: TX – allocate ITL entry” event.
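
To confirm the blocking from a third session, a quick check against V$SESSION can be used (a sketch; note the actual wait event name uses a plain hyphen):

-- Shows blocked sessions, what they wait on, and who blocks them
select sid, event, blocking_session, sql_id
  from v$session
 where event = 'enq: TX - allocate ITL entry';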

Summary

In some cases, HCC and subsequent OLTP (type 64) compression can take place even without direct path operations (probably a bug).

An OLTP (type 64) compressed block, in contrast to a regular OLTP compressed block, can have no free space at all after data load.

In case of DML operations, the whole type 64 compressed block gets locked (probably a bug).

It is better not to set HCC attributes on segments until the real HCC compression operation is actually performed (a quick way to spot such segments is sketched below).
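
As a rough way to spot partitioned segments that carry an HCC attribute, something like the following could be used (a sketch; a similar query against DBA_TABLES covers non-partitioned tables):

select table_owner, table_name, partition_name, compression, compress_for
  from dba_tab_partitions
 where compress_for in ('QUERY LOW', 'QUERY HIGH', 'ARCHIVE LOW', 'ARCHIVE HIGH');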

 

Categories: DBA Blogs

Watch: HBase vs. Cassandra

Mon, 2014-12-22 09:45

Every data platform has its value, and deciding which one will work best for your big data objectives can be tricky—Alex Gorbachev, Oracle ACE Director, Cloudera Champion of Big Data, and Chief Technology Officer at Pythian, has recorded a series of videos comparing the various big data platforms and presents use cases to help you identify which ones will best suit your needs.

“When we look at HBase and Cassandra, they can look very similar,” Alex says. “They’re both part of the NoSQL ecosystem.” Although they’re capable of handling very similar workloads, Alex explains that there are also quite a few differences. “Cassandra is designed from the ground up to handle very high, concurrent, write-intensive workloads.” HBase on the other hand, has its limitations in scalability, and may require a bit more thinking to achieve the same quality of service, Alex explains. Watch his video HBase vs. Cassandra for specific use cases.

Note: You may recognize this series, which was originally filmed back in 2013. After receiving feedback from our viewers that the content was great, but the video and sound quality were poor, we listened and re-shot the series.

Find the rest of the series here

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Big Data expertise.

Categories: DBA Blogs

Log Buffer #402, A Carnival of the Vanities for DBAs

Fri, 2014-12-19 09:15

This Log Buffer edition hits the ball out of the park, smashing yet another record by surfacing a unique collection of blog posts from various database technologies. Enjoy!

Oracle:

EM12c and the Optimizer Statistics Console.

SUCCESS and FAILURE Columns in DBA_STMT_AUDIT_OPTS.

OBIEE and ODI on Hadoop : Next-Generation Initiatives To Improve Hive Performance.

Oracle 12.1.0.2 Bundle Patching.

Performance Issues with the Sequence NEXTVAL Call.

SQL Server:

GUIDs GUIDs everywhere, but how is my data unique?

Questions About T-SQL Transaction Isolation Levels You Were Too Shy to Ask.

Introduction to Advanced Transact SQL Stairway and Using the CROSS JOIN Operator.

Introduction to Azure SQL Database Scalability.

What To Do When the Import and Export Wizard Fails.

MySQL:

Orchestrator 1.2.9 GA released.

Making HAProxy 1.5 replication lag aware in MySQL.

Monitor MySQL Performance Interactively With VividCortex.

InnoDB’s multi-versioning handling can be Achilles’ heel.

Memory summary tables in Performance Schema in MySQL 5.7.

Categories: DBA Blogs

Performance Issues with the Sequence NEXTVAL Call

Thu, 2014-12-18 08:51

Is SELECTing from a sequence your Oracle Performance Problem? The answer to that question is: it might be!

You wouldn’t expect a sequence select to be a significant problem but recently we saw that it was—and in two different ways. The issue came to light when investigating a report performance issue on an Oracle 11.2.0.4 non-RAC database. Investigating the original report problem required an AWR analysis and a SQL trace (actually a 10046 level 12 trace – tracing the bind variables was of critical importance in troubleshooting the initial problem with the report).

 

First problem: if SQL_ID = 4m7m0t6fjcs5x appears in the AWR reports

SELECTing a sequence value using the NEXTVAL function is supposed to be a fairly lightweight process. The sequence’s last value is stored in memory and a certain definable number of values are pre-fetched and cached in memory (default is CACHE=20). However when those cached values are exhausted the current sequence value must be written to disk (so duplicate values aren’t given upon restarts after instance crashes). And that’s done via an update on the SYS.SEQ$ table. The resulting SQL_ID and statement for this recursive SQL is:

SQL_ID   = 4m7m0t6fjcs5x

SQL Text = update seq$ set increment$=:2, minvalue=:3, maxvalue=:4, cycle#=:5, order$=:6,
           cache=:7, highwater=:8, audit$=:9, flags=:10 where obj#=:1

 

This is recursive SQL and consequently it and the corresponding SQL_ID is consistent between databases and even between Oracle versions.

Hence seeing SQL_ID 4m7m0t6fjcs5x as one of the top SQL statements in the AWR report indicates a possible problem. In our case it was the #1 top statement in terms of cumulative CPU. The report would select a large number of rows and was using a sequence value and the NEXTVAL call to form a surrogate key.

So what can be done about this? Well like most SQL tuning initiatives one of the best ways to tune a statement is to run it less frequently. With SQL_ID 4m7m0t6fjcs5x that’s easy to accomplish by changing the sequence’s cache value.

In our case, seeing SQL_ID 4m7m0t6fjcs5x as the top SQL statement quickly led us to check the sequence settings, and we saw that almost all sequences had been created with the NOCACHE option. Therefore no sequence values were being cached and an update to SEQ$ was necessary after every single NEXTVAL call. Hence the problem.

Caching sequence values adds the risk of skipped values (or a sequence gap due to the loss of the cached values) when the instance crashes. (Note, no sequence values are lost when the database is shutdown cleanly.)  However in this case, since the sequence is just being used as a surrogate key this was not a problem for the application.

Changing the sequences’ CACHE setting to 100 completely eliminated the problem, increased the overall report performance, and removed SQL_ID 4m7m0t6fjcs5x from the list of top SQL in AWR reports.
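
For reference, a sketch of how the low-cache sequences could be found and adjusted (the sequence name below is hypothetical):

-- Find sequences created with NOCACHE (cache_size = 0)
select sequence_owner, sequence_name, cache_size
  from dba_sequences
 where cache_size = 0;

-- Increase the cache for a given sequence
alter sequence app_owner.report_id_seq cache 100;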

Lesson learned: if you ever see SQL_ID 4m7m0t6fjcs5x in any of the top SQL sections in an AWR or STATSPACK report, double check the sequence CACHE settings.

 

Next problem: significant overhead of tracing the sequence update

Part of investigating a bind variable SQL regression problem with the report required a SQL trace. The report was instrumented with:

alter session set events '10046 trace name context forever, level 12';
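
(The matching command to switch the trace back off afterwards, not shown in the original instrumentation, would be:)

alter session set events '10046 trace name context off';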

 

The tracing made the report run over six times longer. This caused the report to overrun its allocated execution window and caused other job scheduling and SLA problems.

Normally we’d expect some overhead from a SQL trace due to the synchronous writes to the trace file, but over a 500% increase was far more than expected. From the developer’s viewpoint the report was essentially just executing a single query. The reality was slightly more complicated than that, as the top-level query accessed a view. Still, the view was not overly complex, and hence the developer believed that the report was not query intensive: it was not executing many queries, just the original top-level call and the view SQL.

Again the issue is largely related to the sequence, recursive SQL from the sequence, and specifically statement 4m7m0t6fjcs5x.

Starting with an AWR SQL report of SQL_ID 4m7m0t6fjcs5x from two report executions, one with and one without SQL trace enabled showed:

Without tracing:
Elapsed Time (ms):      278,786
CPU Time (ms):          278,516
Executions:             753,956
Buffer Gets:          3,042,991
Disk Reads:                   0
Rows:                   753,956

With tracing:
Elapsed Time (ms):    2,362,227
CPU Time (ms):        2,360,111
Executions:             836,182
Buffer Gets:          3,376,096
Disk Reads:                   5
Rows:                   836,182

 

So when the report ran with tracing enabled, it ran 4m7m0t6fjcs5x 836K times instead of the 753K times during the previous non-traced run: a 10.9% increase due to underlying application data changes between the runs. Yet CPU and elapsed times both went from roughly 278K ms to 2.36M ms: an 847% increase!

The question was then: could this really be due to the overhead of tracing or something else? And should all of those recursive SQL update statements materialize as CPU time in the AWR reports? To confirm this and prove it to the developers a simplified sequence performance test was performed on a test database:

The simplified test SQL was:

create sequence s;
declare
   x integer;
begin
   for i in 1 .. 5000000
   loop
      x := s.nextval;
   end loop;
end;
/

 

From AWR SQL reports on SQL_ID 4m7m0t6fjcs5x:

Without tracing:

Stat Name                                Statement   Per Execution % Snap
---------------------------------------- ---------- -------------- -------
Elapsed Time (ms)                            10,259            0.0     7.1
CPU Time (ms)                                 9,373            0.0     6.7
Executions                                  250,005            N/A     N/A
Buffer Gets                                 757,155            3.0    74.1
Disk Reads                                        0            0.0     0.0
Parse Calls                                       3            0.0     0.3
Rows                                        250,005            1.0     N/A


With tracing:

Stat Name                                Statement   Per Execution % Snap
---------------------------------------- ---------- -------------- -------
Elapsed Time (ms)                            81,158            0.3    20.0
CPU Time (ms)                                71,812            0.3    17.9
Executions                                  250,001            N/A     N/A
Buffer Gets                                 757,171            3.0    74.4
Disk Reads                                        0            0.0     0.0
Parse Calls                                       1            0.0     0.1
Rows                                        250,001            1.0     N/A

 

Same number of executions and buffer gets as would be expected but 7.66 times the CPU and 7.91 times the elapsed time just due to the SQL trace!  (Similar results to the 8.47 times increase we saw with the actual production database report execution.)

And no surprise, the resulting trace file is extremely large. As we would expect, since the sequence was created with the default CACHE value of 20 it’s recording each UPDATE with the set of binds followed by 20 NEXTVAL executions:

=====================
PARSING IN CURSOR #140264395012488 len=100 dep=0 uid=74 oct=47 lid=74 tim=1418680119565405 hv=152407152 ad='a52802e0' sqlid='dpymsgc4jb33h'
declare
   x integer;
begin
   for i in 1 .. 5000000
   loop
      x := s.nextval;
   end loop;
end;
END OF STMT
PARSE #140264395012488:c=0,e=256,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=0,tim=1418680119565401
=====================
PARSING IN CURSOR #140264395008592 len=26 dep=1 uid=74 oct=3 lid=74 tim=1418680119565686 hv=575612948 ad='a541eed8' sqlid='0k4rn80j4ya0n'
Select S.NEXTVAL from dual
END OF STMT
PARSE #140264395008592:c=0,e=64,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=3499163060,tim=1418680119565685
EXEC #140264395008592:c=0,e=50,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=3499163060,tim=1418680119565807
=====================
PARSING IN CURSOR #140264395000552 len=129 dep=2 uid=0 oct=6 lid=0 tim=1418680119566005 hv=2635489469 ad='a575c3a0' sqlid='4m7m0t6fjcs5x'
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:1
END OF STMT
PARSE #140264395000552:c=0,e=66,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,plh=1935744642,tim=1418680119566003
BINDS #140264395000552:
 Bind#0
  oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=a52eb120  bln=22  avl=02  flg=09
  value=1
 Bind#1
  oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=a52eb132  bln=22  avl=02  flg=09
  value=1
 Bind#2
  oacdty=02 mxl=22(15) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=a52eb144  bln=22  avl=15  flg=09
  value=9999999999999999999999999999
 Bind#3
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0001 frm=00 csi=00 siz=48 off=0
  kxsbbbfp=7f91d96ca6b0  bln=22  avl=01  flg=05
  value=0
 Bind#4
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0001 frm=00 csi=00 siz=0 off=24
  kxsbbbfp=7f91d96ca6c8  bln=22  avl=01  flg=01
  value=0
 Bind#5
  oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=a52eb156  bln=22  avl=02  flg=09
  value=20
 Bind#6
  oacdty=02 mxl=22(05) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=a52eb168  bln=22  avl=05  flg=09
  value=5000021
 Bind#7
  oacdty=01 mxl=32(32) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=01 csi=178 siz=32 off=0
  kxsbbbfp=a52eb17a  bln=32  avl=32  flg=09
  value="--------------------------------"
 Bind#8
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0001 frm=00 csi=00 siz=48 off=0
  kxsbbbfp=7f91d96ca668  bln=22  avl=02  flg=05
  value=8
 Bind#9
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0001 frm=00 csi=00 siz=0 off=24
  kxsbbbfp=7f91d96ca680  bln=22  avl=04  flg=01
  value=86696
EXEC #140264395000552:c=1000,e=798,p=0,cr=1,cu=2,mis=0,r=1,dep=2,og=4,plh=1935744642,tim=1418680119566897
STAT #140264395000552 id=1 cnt=0 pid=0 pos=1 obj=0 op='UPDATE  SEQ$ (cr=1 pr=0 pw=0 time=233 us)'
STAT #140264395000552 id=2 cnt=1 pid=1 pos=1 obj=79 op='INDEX UNIQUE SCAN I_SEQ1 (cr=1 pr=0 pw=0 time=23 us cost=0 size=69 card=1)'
CLOSE #140264395000552:c=0,e=3,dep=2,type=3,tim=1418680119567042
FETCH #140264395008592:c=1000,e=1319,p=0,cr=1,cu=3,mis=0,r=1,dep=1,og=1,plh=3499163060,tim=1418680119567178
STAT #140264395008592 id=1 cnt=1 pid=0 pos=1 obj=86696 op='SEQUENCE  S (cr=1 pr=0 pw=0 time=1328 us)'
STAT #140264395008592 id=2 cnt=1 pid=1 pos=1 obj=0 op='FAST DUAL  (cr=0 pr=0 pw=0 time=1 us cost=2 size=0 card=1)'
CLOSE #140264395008592:c=0,e=1,dep=1,type=3,tim=1418680119567330
EXEC #140264395008592:c=0,e=19,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=3499163060,tim=1418680119567378
FETCH #140264395008592:c=0,e=14,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=1,plh=3499163060,tim=1418680119567425
CLOSE #140264395008592:c=0,e=1,dep=1,type=3,tim=1418680119567458
...
< Repeats #140264395008592 18 more times due to CACHE=20 >

 

From the trace, it’s apparent that not only is there the overhead of updating the SEQ$ table but maintaining the I_SEQ1 index as well. A tkprof on the test shows us the same information:

declare
   x int;
begin
   for i in 1..5000000 loop
      x := s.nextval;
   end loop;
end;

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          2          0           0
Execute      1    241.55     247.41          0     250003          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2    241.56     247.41          0     250005          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 74

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  log file sync                                   1        0.01          0.01
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        0.00          0.00
********************************************************************************

SQL ID: 0k4rn80j4ya0n Plan Hash: 3499163060

Select S.NEXTVAL
from
 dual


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute 5000000     35.37      30.49          0          0          0           0
Fetch   5000000     50.51      45.81          0          0     250000     5000000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   10000001     85.88      76.30          0          0     250000     5000000

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 74     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SEQUENCE  S (cr=1 pr=0 pw=0 time=910 us)
         1          1          1   FAST DUAL  (cr=0 pr=0 pw=0 time=2 us cost=2 size=0 card=1)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  latch free                                      1        0.00          0.00
********************************************************************************

SQL ID: 4m7m0t6fjcs5x Plan Hash: 1935744642

update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,
  cache=:7,highwater=:8,audit$=:9,flags=:10
where
 obj#=:1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute 250000     71.81      81.15          0     250003     507165      250000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   250000     71.81      81.15          0     250003     507165      250000

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 2)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  Disk file operations I/O                        1        0.00          0.00
  log file switch (checkpoint incomplete)         1        0.19          0.19
  log file switch completion                      4        0.20          0.75
********************************************************************************
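
For completeness, a tkprof invocation along these lines could have produced the report above (the trace file name is made up; sys=yes, the default, keeps recursive SYS statements such as the SEQ$ update in the output):

$ tkprof SOURCE_ora_12345.trc seq_test_tkprof.txt sys=yes sort=exeela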

So clearly we can see a lot of additional overhead when performing a SQL trace of the many calls to the sequence NEXTVAL function. Of course the overhead is due to recursive SQL and the synchronous write of the trace file. It just wasn’t obvious that a simple query could generate that much recursive DML and trace data.

 

Combining the two issues

The next question is: what is the effect of the sequence’s CACHE setting, and what is the difference between a LEVEL 8 and a LEVEL 12 trace? Using a similar PL/SQL test block, but with only 100,000 executions, on a lab database showed the following results measuring CPU time (in seconds):

Cache Size     No Trace    10046 level 8    10046 level 12
----------   ----------   --------------   ---------------
         0        31.94            58.71             94.57
        20         7.53            15.29             20.13
       100         4.85            13.36             13.50
      1000         3.93            10.61             11.93
     10000         3.70            10.96             12.20

Hence we can see that with even an extremely high CACHE setting for the sequence, the 10046 trace adds roughly 300% to 400% overhead for this one particular statement. And that the caching sweet-spot seems to be around 100.

 

Conclusions

We often take the Oracle sequence for granted and assume that it’s an optimized and efficient internal structure, and for the most part it is. But depending on how it’s implemented, it can be problematic.

If we ever see SQL_ID 4m7m0t6fjcs5x as one of our worst performing SQL statements, we should double check the sequence configuration and usage. Was the CACHE value set low by design, or inadvertently? Is the risk of a sequence gap after an instance crash worth the overhead of a low CACHE value? Perhaps the settings need to be reconsidered and changed?

And a caution about enabling a SQL trace: it’s something we expect to add some overhead, but not 3x to 10x, which may make tracing unreasonable. Of course, the tracing overhead will depend on the actual workload. But for workloads that are sequence NEXTVAL heavy, don’t underestimate the underlying recursive SQL, as the overhead can be significant, and much more than one may think.

Categories: DBA Blogs

ORA-28043: Invalid Bind Credentials for DB-OID Connection

Thu, 2014-12-18 08:46

Have you ever encountered this error connecting to a DB using global authentication against OID? Was re-registration a temporary workaround, but the issue came back after some time? Check out this solution for ORA-28043: invalid bind credentials for DB-OID connection.

During a long project which included changing human accounts’ authentication method from local to global on several databases, users started to report ORA-28043 after a couple of days.

$ sqlplus rambo@orcl

SQL*Plus: Release 11.2.0.3.0 Production on Tue Nov 4 07:28:03 2014 

Copyright (c) 1982, 2011, Oracle. All rights reserved. 

Enter password: 

ERROR: 

ORA-28043: invalid bind credentials for DB-OID connection 

Since some of these were production assets, we tried to restore the service as soon as possible. The fastest workaround we found was to re-register the DBs using DBCA:

$ dbca -silent -configureDatabase -sourceDB orcl -unregisterWithDirService true -dirServiceUserName cn=orcladmin -dirServicePassword ****** -walletPassword ******

Preparing to Configure Database

6% complete

13% complete

66% complete

Completing Database Configuration

100% complete

Look at the log file "/e00/oracle/cfgtoollogs/dbca/orcl/orcl.log" for further details.

$ dbca -silent -configureDatabase -sourceDB orcl -registerWithDirService true -dirServiceUserName cn=orcladmin -dirServicePassword ****** -walletPassword ******

Preparing to Configure Database

6% complete

13% complete

66% complete

Completing Database Configuration

100% complete

Look at the log file "/e00/oracle/cfgtoollogs/dbca/orcl/orcl.log" for further details.

Good news: the service was restored quickly. Bad news: the issue came back after a couple of days. We started a deeper investigation, which included opening an SR in My Oracle Support. Luckily, we found the real culprit for this error very quickly: PASSWORD EXPIRATION. These were the commands they provided us to verify that the wallet couldn’t bind to the directory:

$ mkstore -wrl . -list 

Oracle Secret Store Tool : Version 11.2.0.3.0 - Production 

Copyright (c) 2004, 2011, Oracle and/or its affiliates. All rights reserved. 

Enter wallet password:xxx 

Oracle Secret Store entries: 

ORACLE.SECURITY.DN 

ORACLE.SECURITY.PASSWORD 

$ mkstore -wrl . -viewEntry ORACLE.SECURITY.DN -viewEntry ORACLE.SECURITY.PASSWORD 

Oracle Secret Store Tool : Version 11.2.0.3.0 - Production 

Copyright (c) 2004, 2011, Oracle and/or its affiliates. All rights reserved. 

Enter wallet password: xxx 

ORACLE.SECURITY.DN = cn=ORCL,cn=OracleContext,DC=ppl,DC=com 

ORACLE.SECURITY.PASSWORD = Z8p9a1j1 

$ ldapbind -h oidserver -p 3060 -D cn=ORCL,cn=OracleContext,DC=ppl,DC=com -w Z8p9a1j1 

ldap_bind: Invalid credentials 

ldap_bind: additional info: Password Policy Error :9000: GSL_PWDEXPIRED_EXCP :Your Password has expired. Please contact the Administrator to change your password. 

Oracle’s recommendation was to set the “pwdmaxage” attribute to 0. We achieved this by changing the value from the GUI, under Security/Password Policy/Password Expiry Time.

Note that for OID versions older than 10.0.4, changing the parameter’s value to zero doesn’t work due to Bug 3334767. Instead, you can place a very large value.
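
The same change can also be made from the command line with ldapmodify; a rough sketch follows (the password policy DN varies by OID version and realm, so treat the one below as an assumption):

$ cat pwdmaxage.ldif
dn: cn=default,cn=pwdPolicies,cn=Common,cn=Products,cn=OracleContext,DC=ppl,DC=com
changetype: modify
replace: pwdmaxage
pwdmaxage: 0

$ ldapmodify -h oidserver -p 3060 -D cn=orcladmin -w ****** -f pwdmaxage.ldif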

Categories: DBA Blogs

Watch: Hadoop vs. Riak

Mon, 2014-12-15 09:14

Every data platform has its value, and deciding which one will work best for your big data objectives can be tricky—Alex Gorbachev, Oracle ACE Director, Cloudera Champion of Big Data, and Chief Technology Officer at Pythian, has recorded a series of videos comparing the various big data platforms and presents use cases to help you identify which ones will best suit your needs.

“Riak and Hadoop are quite different data platforms,” Alex says. “Hadoop is actually the system that would process the data that Riak is collecting.” Learn how the two systems are complementary rather than competitive by watching Alex’s video Hadoop vs. Riak.

Note: You may recognize this series, which was originally filmed back in 2013. After receiving feedback from our viewers that the content was great, but the video and sound quality were poor, we listened and re-shot the series.

Find the rest of the series here

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Big Data expertise.

Categories: DBA Blogs

Log Buffer #401, A Carnival of the Vanities for DBAs

Fri, 2014-12-12 09:00

This Log Buffer Edition goes right through the fields of salient database blog posts and comes out with something worth reading.


Oracle:

Extract SQL full text from SQL Monitor html.

Disruption: Are Hot Brands Breaking the Rules?

Understanding Flash: Unpredictable Write Performance.

The caveats of running .sql scripts with GUI tools.

File Encoding in the Next Generation Outline Extractor.

SQL Server:

Arshad Ali discusses how to use CTE and the ranking function to access or query data from previous or subsequent rows.

SSRS – Report for Stored Procedure with Multiple Values Passed.

Continuous Delivery for Databases: Microservices, Team Structures, and Conway‘s Law.

Scripting SQL Server databases with SMO using EnforceScriptingOptions.

How to troubleshoot SSL encryption issues in SQL Server.

MySQL:

MySQL 5.7: only_full_group_by Improved, Recognizing Functional Dependencies, Enabled by Default!

MaxScale, manual control, external monitors and notification methods.

MySQL 5.7: only_full_group_by Improved, Recognizing Functional Dependencies, Enabled by Default!

Recover MySQL root password without restarting MySQL (no downtime!)

Oracle DBAs have had the luxury of their V$ variables for a long time while we MySQL DBAs pretended we were not envious.

Categories: DBA Blogs

Call for Papers for the O’Reilly MySQL Conference

Tue, 2014-12-09 14:35

The call for papers for the O’Reilly MySQL Conference is now open, and closes October 25th.  Submit your proposal now at http://en.oreilly.com/mysql2011/user/proposal/propose/cfp/126!

Categories: DBA Blogs