
Feed aggregator

ASM Commands : 2 -- Migrating a DiskGroup to New Disk(s)

Hemant K Chitale - Sun, 2014-08-24 08:52
In 11gR2 Grid Infrastructure and RAC

After the previous demonstration of adding a new DiskGroup, I now demonstrate migrating the DiskGroup to a new pair of disks.

First, I create a table in the Tablespace on that DiskGroup.

[root@node1 ~]# su - oracle
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:17:28 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: hemant/hemant

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> create table new_tbs_tbl
2 tablespace new_tbs
3 as select * from dba_objects
4 /

Table created.

SQL> select segment_name, bytes/1048576
2 from user_segments
3 where tablespace_name = 'NEW_TBS'
4 /

SEGMENT_NAME
--------------------------------------------------------------------------------
BYTES/1048576
-------------
NEW_TBS_TBL
9


SQL> select file_name, bytes/1048576
2 from dba_data_files
3 where tablespace_name = 'NEW_TBS'
4 /

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1048576
-------------
+DATA3/racdb/datafile/new_tbs.256.855792859
100


SQL>


Next, I verify that the DiskGroup is currently on disk asmdisk.7 and that the two new disks that I plan to migrate the DiskGroup to are available as asmdisk.8 and asmdisk.9  (yes, unfortunately, they are on /fra, instead of /data1 or /data2 because I have run out of disk space in /data1 and /data2).
This I do from node1 :

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
-sh-3.2$ exit
logout

[root@node1 ~]#
[root@node1 ~]# su - grid
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:22:32 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> l
1 select d.name, d.path
2 from v$asm_disk d, v$asm_diskgroup g
3 where d.group_number=g.group_number
4* and g.name = 'DATA3'
SQL> /

NAME
------------------------------
PATH
--------------------------------------------------------------------------------
DATA3_0000
/data1/asmdisk.7


SQL>
SQL> !sh
sh-3.2$ ls -l /fra/asmdisk*
-rwxrwxr-x 1 grid oinstall 1024000000 Aug 24 22:06 /fra/asmdisk.8
-rwxrwxr-x 1 grid oinstall 1024000000 Aug 24 22:07 /fra/asmdisk.9
sh-3.2$


Note how the ownership and permissions are set for the two new disks (see my previous post).

I now add the two new disks.

sh-3.2$ exit
exit

SQL> show parameter power

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
asm_power_limit integer 1
SQL> alter diskgroup data3 add disk '/fra/asmdisk.8', '/fra/asmdisk.9';

Diskgroup altered.

SQL>
SQL> select * from v$asm_operation;

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE
------------ ----- ---- ---------- ---------- ---------- ---------- ----------
EST_MINUTES ERROR_CODE
----------- --------------------------------------------
3 REBAL RUN 1 1 1 101 60
1


SQL>


With ASM_POWER_LIMIT set to 1, Oracle ASM automatically starts a REBALANCE operation.  However, since I did *not* drop the existing asmdisk.7, Oracle will still continue to use it.
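
As an aside (not something I ran in this demonstration), the rebalance could be speeded up rather than left at the default power of 1; a minimal sketch:

-- Raise the rebalance power for this diskgroup only
alter diskgroup data3 rebalance power 4;

-- Or change the default for the ASM instance
alter system set asm_power_limit = 4;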

After a while, I confirm that the REBALANCE has completed.  I can now drop asmdisk.7.  Unfortunately, this will trigger a new REBALANCE !

SQL> l
1* select * from v$asm_operation
SQL> /

no rows selected

SQL>
SQL> l
1 select d.name, d.path
2 from v$asm_disk d, v$asm_diskgroup g
3 where d.group_number=g.group_number
4* and g.name = 'DATA3'
SQL> /

NAME
------------------------------
PATH
--------------------------------------------------------------------------------
DATA3_0000
/data1/asmdisk.7

DATA3_0002
/fra/asmdisk.9

DATA3_0001
/fra/asmdisk.8


SQL>
SQL> alter diskgroup data3 drop disk '/data1/asmdisk.7';
alter diskgroup data3 drop disk '/data1/asmdisk.7'
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15054: disk "/DATA1/ASMDISK.7" does not exist in diskgroup "DATA3"


SQL> alter diskgroup data3 drop disk 'DATA3_0000';

Diskgroup altered.

SQL>
SQL> select * from v$asm_operation;

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE
------------ ----- ---- ---------- ---------- ---------- ---------- ----------
EST_MINUTES ERROR_CODE
----------- --------------------------------------------
3 REBAL RUN 1 1 2 102 120
0


SQL>
SQL> l
1* select * from v$asm_operation
SQL>
SQL> /

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE
------------ ----- ---- ---------- ---------- ---------- ---------- ----------
EST_MINUTES ERROR_CODE
----------- --------------------------------------------
3 REBAL RUN 1 1 47 101 95
0


SQL> /

no rows selected

SQL>


NOTE : I must specify the Disk NAME (not the PATH) for the DROP. When I added disks asmdisk.8 and asmdisk.9, I could have given them meaningful names as well; since I did not, Oracle named them automatically.

Ideally, what I should have done is use the ADD and DROP commands together in a single statement.  That way, only a single-pass REBALANCE would be required.
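
For illustration only (a sketch of what that single statement could look like, with example NAME values -- not what I actually ran), the ADD and DROP can be combined, optionally naming the new disks and setting the rebalance power in the same pass:

alter diskgroup data3
  add  disk '/fra/asmdisk.8' name data3_disk8,
            '/fra/asmdisk.9' name data3_disk9
  drop disk data3_0000
  rebalance power 4;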

After a while, I run my validation queries on node2.


[root@node2 ~]# su - grid
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:42:39 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select d.name, d.path
from v$asm_disk d, v$asm_diskgroup g
where d.group_number=g.group_number
and g.name = 'DATA3' 2 3 4
5
SQL> l
1 select d.name, d.path
2 from v$asm_disk d, v$asm_diskgroup g
3 where d.group_number=g.group_number
4* and g.name = 'DATA3'
SQL> /

NAME
------------------------------
PATH
--------------------------------------------------------------------------------
DATA3_0002
/fra/asmdisk.9

DATA3_0001
/fra/asmdisk.8


SQL>
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$ exit
logout

[root@node2 ~]# su - oracle
-sh-3.2$
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:44:10 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: hemant/hemant

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select count(*) from new_tbs_tbl;

COUNT(*)
----------
72460

SQL>


I have now accessed the table, tablespace, diskgroup and disks from node2 successfully. Disk asmdisk.7 is no longer part of the DiskGroup.

I can now physically remove disk asmdisk.7 from the storage.


SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$ exit
logout

[root@node1 ~]# cd /data1
[root@node1 data1]# ls -l asmdisk.7
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 24 22:39 asmdisk.7
[root@node1 data1]# rm asmdisk.7
rm: remove regular file `asmdisk.7'? y
[root@node1 data1]#
[root@node1 data1]# su - grid
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:50:18 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> set pages 60
SQL> col name format a15
SQL> col path format a20
SQL> select group_number, name, path
2 from v$asm_disk
3 order by 1,2;

GROUP_NUMBER NAME PATH
------------ --------------- --------------------
0 /crs/voting.disk
0 /data1/votedisk.1
0 /data2/votedisk.2
0 /fra/votedisk.3
1 DATA1_0000 /data1/asmdisk.1
1 DATA1_0001 /data2/asmdisk.4
2 DATA2_0000 /data1/asmdisk.2
2 DATA2_0001 /data2/asmdisk.5
2 DATA2_0002 /data2/asmdisk.6
3 DATA3_0001 /fra/asmdisk.8
3 DATA3_0002 /fra/asmdisk.9
4 DATA_0000 /crs/ocr.configurati
on

5 FRA_0000 /fra/fradisk.3
5 FRA_0001 /fra/fradisk.2
5 FRA_0002 /fra/fradisk.1
5 FRA_0003 /fra/fradisk.4

16 rows selected.

SQL>

The disk asmdisk.7 is no longer part of the storage. (Remember : All my disks here are on NFS).
.
.
.
Categories: DBA Blogs

Transactional Data Caching for ADF Mobile MAF Application

Andrejus Baranovski - Sun, 2014-08-24 00:20
I have described the basic idea of data caching in an ADF Mobile MAF application in my previous post - Data Caching Implementation for ADF Mobile MAF Application. Today I would like to extend a similar solution to handle transactional data caching. The sample application displays a list of tasks; the user can update a task's status and description. These updates are saved locally until the next synchronisation with the server.

The updated sample application, with the MAF mobile and server-side implementations, can be downloaded from here - MAFMobileLocalApp_v4.zip. The user can select a task on his mobile device:


The user updates the task status and description; these changes are stored locally in the SQLite DB, and no extra call is made to the Web Service at this time:


Changes for multiple tasks can be synchronised in a batch. The user can go and change a different task without submitting the previous changes to the Web Service, and update its status:


Finally, when the user decides to synchronise his changes with the Web Service, he can use the refresh option:


When synchronisation happens, we can see activity in the server-side log. ADF BC executes an update operation for each submitted task; here is the update for the first task:


Next, the update for the second task is called:


The MAF task flow is updated for this use case and is based on the task flow from the application described in the previous post. We are no longer calling the Web Service from the task flow method call; this call is done programmatically, during refresh:


The Web Service method in ADF BC is updated to check whether the row exists - if yes, the data is updated. If the row doesn't exist, no error is thrown; the task list will be refreshed and the latest changes will be displayed to the user on the mobile device. In a real application, you may want to inform the user about the missing task:


The custom method is exposed as an ADF BC Web Service method:


To understand how the sample application works on the MAF mobile side, we should refer to the task flow diagram. Edited changes are submitted to the local SQLite DB through the updateTaskLocal method call. This call invokes a Java method, where changes are stored locally in the list of tasks. In addition, a record about the latest task changes is stored in a special table with the change history:


We check in the change history whether the same task was already updated - if not, we simply store the task ID. This allows us, later during synchronisation, to retrieve the changed task details and update the task through the Web Service first, before fetching data. If the task was already updated, there is no need to insert the task ID again:


During data synchronisation, we check whether there are any rows changed and stored locally. Only after this check is completed is the task list data refreshed and loaded from the Web Service:


Changes for each updated task are synchronised through the Web Service. First, the task ID is retrieved from the change history table. Using this ID, the actual changes are fetched from the SQLite DB and sent over the Web Service to the server-side ADF BC for update. Once all the changes are synchronised, the local change history is cleared:


The Web Service call to update data through ADF BC is done programmatically, using the MAF mobile helper class AdfmfJavaUtilities. We construct a list of parameters with names, values and types - the invoked method doesn't even need to be defined in the Page Definition, as it will be referenced directly from the bindings:

★ How BIG is Oracle OpenWorld?

Eddie Awad - Sat, 2014-08-23 15:19

Oracle OpenWorld

Here is how big it was in 2013. Compare it to its size in 2012. It is safe to assume that it will be bigger in 2014!

I will attend this year’s event by invitation from the Oracle ACE Program. Prior to the start of the conference, I will be attending a two-day product briefing with product teams at Oracle HQ. It’s like a mini OpenWorld but only for Oracle ACE Directors.

During the briefing, Oracle product managers talk about the latest and greatest product news. They also share super secret information that is not yet made public. I will report this information to you here on awads.net and via Twitter, unless of course it is protected by a non-disclosure agreement.

See you there!


DRM Free Downloads - Is That OK?

Bradley Brown - Fri, 2014-08-22 22:01
I talked to a prospect today that was using a competitive platform to deliver their video products.  We talked extensively about differentiation of InteliVideo over their current platform (from brand to analytics to customers to apps and so on).  But one thing that came up had to do with DRM free downloads.  You might think that's a good thing if you don't understand what it means...as they did.  I said "so you're OK with your customers downloading the MP4 and putting it on YouTube or emailing it to their friends or throwing it on a thumb drive."  He said, "no, our customers can't do that."  So I showed him the language on their website (DRM free downloads), to which he replied, "right, it's protected."  When I looked puzzled, he said "what is DRM?"  I said DRM stands for Digital Rights Management.  You can think of it as protection.  So that wording is saying "you get this video that has NO protection, use it as you wish."  Of course there's likely wording in the click-through contract that says the customer is only supposed to use the content on their own devices.  But...you've just handed them a risky piece of content.
So...in my humble opinion, I would say that "no" - DRM free downloads are not OK.

How the Big Boys Do It!

Bradley Brown - Fri, 2014-08-22 16:03
We have a number of VERY large (video on demand) customers.  We've broken our customer-base into 3 categories:

1. Infopreneurs - these are companies who do something and have video, but video is not their primary business.  For example, RunBare.  Their primary business is teaching people how to run in bare feet.  They do have a video on this topic, along with books and other items.  Another example is BDB Software, which is my original side consulting business.  I started it to sell online Oracle Application Express training.  So all that business does is video, yet I'm the CEO at InteliVideo.  That's my primary business - so BDB is just a side business.  You could say that Infopreneurs are wannabes.

2. Enterprise Content Providers (ECP) - these are companies whose primary business revolves around selling digital goods, education, etc. online.  These are the "big boys" of our industry.  In this blog post I'm going to talk more about how an Infopreneur can become an ECP.

3. Corporations - these are companies that use video content to educate their own employees.  They typically aren't interested in selling their videos, they are interested in protecting their videos.

So how do the big boys become so big?  Like any business, it's typically over an extended period of time.  Even those who focus on infomercials are not usually overnight successes.  Take P90X or Beachbody.  They've been selling DVDs for MANY years.  Infomercials contribute to their success, but the fact that they figured out how to do affiliate marketing is another large piece of their success.

So how do they do it?  Creative marketing, marketing, and more marketing - lots of it!  Promotions, A/B split tests, refining the message and analyzing what works and building on it.  Said another way, they are professional marketers!  You might ask - social marketing or pay per click marketing or SEO or.... - the answer is yes.

You want to get big?  Well then...it's time to focus.  On what?  One guess!

Commit Puzzle

Bobby Durrett's DBA Blog - Fri, 2014-08-22 11:54

This graph represents commit time compared to CPU utilization and redo log write time.  I’ve included only the hourly intervals with more than 1,000,000 commits.  At these peaks the number of commits ranges from 1 to 1.6 million commits per hour, so each point on the graph represents roughly the same commit rate.  I’m puzzled by why the commit time bounces around, peaking above 5 milliseconds, when I can’t see any peaks in I/O or CPU that correspond to the commit time peaks.

commitvscpuio

I derived CPU% from DBA_HIST_OSSTAT.  I got the other values by getting wait events from DBA_HIST_SYSTEM_EVENT.  Commit time is log file sync wait time.  Redo write time is log file parallel write wait time.  I converted the wait times to milliseconds so they fit nicely on the chart with CPU%.
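
For anyone who wants to reproduce the commit-time line of the chart, here is a minimal sketch (not the exact script in the zip below) that differences the cumulative log file sync figures in DBA_HIST_SYSTEM_EVENT per snapshot:

-- average log file sync time (ms) per AWR snapshot interval
select sn.end_interval_time,
       (e.time_waited_micro
          - lag(e.time_waited_micro) over
              (partition by e.instance_number order by e.snap_id)) / 1000
       / nullif(e.total_waits
          - lag(e.total_waits) over
              (partition by e.instance_number order by e.snap_id), 0) avg_commit_ms
from   dba_hist_system_event e
join   dba_hist_snapshot     sn
on     sn.snap_id         = e.snap_id
and    sn.dbid            = e.dbid
and    sn.instance_number = e.instance_number
where  e.event_name = 'log file sync'
order by sn.end_interval_time;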

I thought I would pass this along as a puzzle that I haven’t figured out.

Here is a zip of the script I used to get the data, its raw output, and the spreadsheet I used to make the chart: zip

- Bobby

P.S.  This is on HP-UX 11.31, Itanium, Oracle 11.2.0.3

P.P.S  Did some more work on this today.  Looks like the high commit time periods have short spikes of long redo log writes even though the average over the hour is still low.  I’m looking at DBA_HIST_SYSTEM_EVENT to get a histogram of the log file parallel write waits and there are a number in the 1024 bucket when the log file sync time is high on average.

END_INTERVAL_TIME   LFPW_MILLI LFPW_COUNT AVG_COMMIT_MS AVG_WRITE_MS
------------------- ---------- ---------- ------------- ------------
21-AUG-14 11.00 AM           1     268136    9.14914833   2.45438987
21-AUG-14 11.00 AM           2     453913    9.14914833   2.45438987
21-AUG-14 11.00 AM           4     168370    9.14914833   2.45438987
21-AUG-14 11.00 AM           8      24436    9.14914833   2.45438987
21-AUG-14 11.00 AM          16       5675    9.14914833   2.45438987
21-AUG-14 11.00 AM          32       6122    9.14914833   2.45438987
21-AUG-14 11.00 AM          64       3369    9.14914833   2.45438987
21-AUG-14 11.00 AM         128       2198    9.14914833   2.45438987
21-AUG-14 11.00 AM         256       1009    9.14914833   2.45438987
21-AUG-14 11.00 AM         512        236    9.14914833   2.45438987
21-AUG-14 11.00 AM        1024         19    9.14914833   2.45438987
21-AUG-14 11.00 AM        2048          0    9.14914833   2.45438987
21-AUG-14 02.00 PM           1     522165    2.97787777   1.64840599
21-AUG-14 02.00 PM           2     462917    2.97787777   1.64840599
21-AUG-14 02.00 PM           4     142612    2.97787777   1.64840599
21-AUG-14 02.00 PM           8      17014    2.97787777   1.64840599
21-AUG-14 02.00 PM          16       4656    2.97787777   1.64840599
21-AUG-14 02.00 PM          32       5241    2.97787777   1.64840599
21-AUG-14 02.00 PM          64       1882    2.97787777   1.64840599
21-AUG-14 02.00 PM         128        820    2.97787777   1.64840599
21-AUG-14 02.00 PM         256        149    2.97787777   1.64840599
21-AUG-14 02.00 PM         512         10    2.97787777   1.64840599
21-AUG-14 02.00 PM        1024          2    2.97787777   1.64840599
21-AUG-14 02.00 PM        2048          0    2.97787777   1.64840599

There were 19 waits over half a second in the first hour and only 2 in the second hour.  Maybe all the log file sync waits pile up waiting for those long writes.  Here is a graph that compares the number of waits over half a second – the 1024 ms bucket – to the average log file sync and log file parallel write times for the hour:

longwrites

You can see that the average redo write time goes up a little but the commit time goes up more.  Maybe commit time is more affected by a few long spikes than by a lot of slightly longer write times.
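
For reference, a hedged sketch of how the bucketed counts above could be pulled per snapshot, assuming the histogram data comes from DBA_HIST_EVENT_HISTOGRAM (the bucket counts are cumulative, so they are differenced across snapshots):

select sn.end_interval_time,
       h.wait_time_milli,
       h.wait_count
         - lag(h.wait_count) over
             (partition by h.instance_number, h.wait_time_milli
              order by h.snap_id) waits_in_interval
from   dba_hist_event_histogram h
join   dba_hist_snapshot        sn
on     sn.snap_id         = h.snap_id
and    sn.dbid            = h.dbid
and    sn.instance_number = h.instance_number
where  h.event_name = 'log file parallel write'
order by sn.end_interval_time, h.wait_time_milli;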

Found a cool blog post that seems to explain exactly what we are seeing: blog post

 

Categories: DBA Blogs

Best of OTN - Week of August 17th

OTN TechBlog - Fri, 2014-08-22 11:43
Architect Community
The Top 3 most popular OTN ArchBeat video interviews of all time:
  1. Oracle Coherence Community on Java.net | Brian Oliver and Randy Stafford [October 24, 2013]
    Brian Oliver (Senior Principal Solutions Architect, Oracle Coherence) and Randy Stafford (Architect At-Large, Oracle Coherence Product Development) discuss the evolution of the Oracle Coherence Community on Java.net and how developers can actively participate in product development through Coherence Community open projects. Visit the Coherence Community at: https://java.net/projects/coherence.

  2. The Raspberry Pi Java Carputer and Other Wonders | Simon Ritter [February 13, 2014]
    Oracle lead Java evangelist Simon Ritter talks about his Raspberry Pi-based Java Carputer IoT project and other topics he presented at QCon London 2014.

  3. Hot Features in Oracle APEX 5.0 | Joel Kallman [May 14, 2014]
    Joel Kallman (Director, Software Development, Oracle) shares key points from his Great Lakes Oracle Conference 2014 session on new features in Oracle APEX 5.0.

Friday Funny from OTN Architect Community Manager Bob Rhubart:
Comedy legend Steve Martin entertains dogs in this 1976 clip from the Carol Burnett Show.

Database Community

OTN Database Community Home Page - See all tech articles, downloads etc. related to Oracle Database for DBA's and Developers.

Java Community

JavaOne Blog - JRuby and JVM Languages at JavaOne!  In this video interview, Charles shared the JRuby features he presented at the JVM Language Summit. He'll be at JavaOne - read the blog to see all the sessions.

Java Source Blog - IoT: Wearables! Wearables are a subset of the Internet of Things that has gained a lot of attention. Learn More.

I love Java FaceBook - Java Advanced Management Console demo - Watch as Jim Weaver, Java Technology Ambassador at Oracle, walks through a demonstration of the new Java Advanced Management Console (AMC) tool.

Systems Community

OTN Garage Blog - Why Wouldn't Root Be Able to Change a Zone's IP Address in Oracle Solaris 11? - Read and learn the answer.

OTN Garage FaceBook - Securing Your Cloud-Based Data Center with Oracle Solaris 11 - Overview of the security precautions a sysadmin needs to take to secure data in a cloud infrastructure, and how to implement them with the security features in Oracle Solaris 11.


SQL Server techniques that mitigate data loss

Chris Foot - Fri, 2014-08-22 10:58

Natural disasters, cybercriminals and costly internal mistakes can cause companies to lose critical information. If the appropriate business continuity strategies aren't employed, organizations could lose customers or sustain government penalties. 

These concerns have motivated database administrators to use programs with strong recovery tools, such as replication, mirroring and failover clustering.

SQL Server Pro contributor and DH2i Chief Technology Officer Thanh Ngo outlined a number of SQL Server functions that improve DR and ensure applications are highly available. Some of the most useful features he named are detailed below.

AlwaysOn Availability Groups

This particular feature allows a set of user databases to fail over as a cohesive unit. The primary replica of the group is available for read-write tasks. From there, one to eight secondary replicas can be created for read-only access, but they must be configured to allow it. 

Availability Groups is supported by the concept of database mirroring, which enhances a database's security and accessibility. The feature essentially copies transactions from the principal server to its mirrored counterpart. Ngo outlined how mirroring functions through one of the following modes (see the sketch after this list):

  • High safety mode with automatic failover: Transactions carried out by two or more partners are synchronized while a "witness partner" orchestrates automated failover. 
  • High safety mode without automatic failover: The same operation detailed above is executed, but without the presence of a witness partner.
  • High performance mode: A primary database commits a transaction without waiting for the mirrored counterpart to write the log to disk.
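
As a hedged illustration of switching between the safety modes above (the database name is hypothetical and a mirroring session is assumed to already exist), the mode is changed on the principal with ALTER DATABASE:

-- Assumes mirroring is already configured for [SalesDB] (hypothetical name)
-- High-safety (synchronous) mode; add a witness separately for automatic failover
ALTER DATABASE SalesDB SET PARTNER SAFETY FULL;

-- High-performance (asynchronous) mode
ALTER DATABASE SalesDB SET PARTNER SAFETY OFF;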

Replication 
Ngo also covered replication, which consists of a primary server (also known as the publisher) allocating data to one or more secondary databases (known as subscribers). Replication can be executed in one of three ways: 

  • Transactional allows for real-time data availability because it enables the publisher to distribute information to subscribers immediately, or at regular intervals. 
  • Snapshot copies and provisions data in the primary server and sends the cloned data to the secondary database once it's created.
  • Merge supports bi-directional replication, meaning all changes made in both the subscriber and publisher databases are synchronized automatically. 

A contemporary need 
With these techniques in mind, if a company chooses to outsource its database administration needs, it should detail which government standards it needs to abide by. From there, DBAs can carry out a thorough risk assessment of how much customer data is vulnerable – a task MSPmentor contributor CJ Arlotta asserted is critical. 

Employing SQL Server's replication and Availability Group features helps ensure data is retained even if a database breach occurs. 

The post SQL Server techniques that mitigate data loss appeared first on Remote DBA Experts.

Remote DML with DBMS_PARALLEL_EXECUTE

Dominic Brooks - Fri, 2014-08-22 10:20

An example of sucking data into a table over a db link using DBMS_PARALLEL_EXECUTE.

This particular example is based on something I needed to do in the real world, copying data from one database into another over a db link. Datapump is not available to me. Tables in question happen to be partitioned by a date-like number (boo!) hence some of the specific actions in the detail.

I think it’s a good example of how to use dbms_parallel_execute but also it might be interesting to see how we might combine that functionality with parallel sessions each operating on a single numeric partition.

For setup, let’s create a suitable source table on a remote db.
In this example, I’m recreating the entries in dba_objects for every day for a couple of years.

CREATE TABLE remote_px_test
(dt,owner,object_name,subobject_name,object_id,data_object_id,object_type,created,last_ddl_time,timestamp,status,temporary,generated,secondary,namespace,edition_name)
PARTITION BY RANGE(dt) INTERVAL(1)
(PARTITION p_default VALUES LESS THAN (20120101))
AS
WITH days AS
(SELECT TO_NUMBER(TO_CHAR(TO_DATE(20120101,'YYYYMMDD') + ROWNUM - 1,'YYYYMMDD')) dt
 FROM   dual
 CONNECT BY ROWNUM <= (TRUNC(SYSDATE) - TO_DATE(20120101,'YYYYMMDD')))
SELECT d.dt, o.*
FROM   dba_objects o
CROSS JOIN days d;
SELECT /*+ parallel(16) */ COUNT(*) FROM remote_px_test;
209957272

SELECT round(sum(bytes)/power(1024,3)) FROM user_segments WHERE segment_name = 'REMOTE_PX_TEST';
31

First step is to see how long it takes to do a parallel INSERT SELECT over a db link.

The benefits of parallelisation in such an operation is severely limited because we have a single session over the db link.

Back to the target database.

First create an empty destination table, same as remote.

CREATE TABLE remote_px_test
(dt,owner,object_name,subobject_name,object_id,data_object_id,object_type,created,last_ddl_time,timestamp,status,temporary,generated,secondary,namespace,edition_name)
PARTITION BY RANGE(dt) INTERVAL(1)
(PARTITION p_default VALUES LESS THAN (20100101))
AS
WITH days AS
(SELECT TO_NUMBER(TO_CHAR(TO_DATE(20120101,'YYYYMMDD') + ROWNUM - 1,'YYYYMMDD')) dt
 FROM   dual
 WHERE 1=0)
SELECT d.dt, o.*
FROM   dba_objects o
CROSS JOIN days d;

Now, let’s see how long it takes to do an INSERT SELECT over a db link.
Time is often not a good measure but in this case I’m primarily interested in how long it takes to copy a whole bunch of tables from A to B over a db link.

insert /*+ append */ into remote_px_test l
select * 
from   remote_px_test@d1 r;

209,957,272 rows inserted.

commit;

This executed in 20 minutes.

As mentioned, you could parallelise bits of it either side but the benefit is limited, it might even make things worse thanks to BUFFER SORT operation.

Next let’s compare to method with DBMS_PARALLEL_EXECUTE.

We want some parallel threads to work on independent partitions, doing direct path inserts, concurrently.

First I’m just going to create a view on the SOURCE DB to make my chunking on daily partition interval simpler.

I could create this on the TARGET DB with references to the dictionary tables over db link but it could be significantly slower depending on the number of partitioned tables and whether predicates are being pushed.

CREATE OR REPLACE VIEW vw_interval_partitions
AS
SELECT table_name, partition_name, partition_position, hi
FROM   (SELECT table_name, partition_name, partition_position
        ,      to_char(
                 extractvalue(
                   dbms_xmlgen.getxmltype
                  ('select high_value from user_tab_partitions x'
                 ||' where x.table_name   = '''||t.table_name||''''
                 ||' and   x.partition_name = '''|| t.partition_name|| ''''),'//text()')) hi
        FROM   user_tab_partitions t);

Secondly, I’m going to create a little helper package which will generate the dynamic SQL for our inserts into specific partitions (PARTITION FOR clause not able to use binds).

		
CREATE OR REPLACE PACKAGE sid_data_pkg
AS
  --
  PROCEDURE sid_ipt (
    i_table_name                 IN     VARCHAR2,
    i_table_owner                IN     VARCHAR2,
    i_column_name                IN     VARCHAR2,
    i_dblink                     IN     VARCHAR2,
    i_start_id                   IN     NUMBER,
    i_end_id                     IN     NUMBER
  );
  --
END sid_data_pkg;
/

CREATE OR REPLACE PACKAGE BODY sid_data_pkg
AS
  PROCEDURE sid_ipt (
    i_table_name                 IN     VARCHAR2,
    i_table_owner                IN     VARCHAR2,
    i_column_name                IN     VARCHAR2,
    i_dblink                     IN     VARCHAR2,
    i_start_id                   IN     NUMBER,
    i_end_id                     IN     NUMBER
  )
  AS
    --
    l_cmd CLOB;
    --
  BEGIN
     --
     l_cmd :=
     q'{INSERT /*+ APPEND */}'||chr(10)||
     q'{INTO   }'||i_table_name||chr(10)||
     q'{PARTITION FOR (}'||i_start_id||')'||chr(10)||
     q'{SELECT *}'||chr(10)||
     q'{FROM   }'||CASE WHEN i_table_owner IS NOT NULL THEN i_table_owner||'.' END
                 ||i_table_name
                 ||CASE WHEN i_dblink IS NOT NULL THEN '@'||i_dblink END
                 ||chr(10)||
     q'{WHERE  }'||i_column_name||' < '||i_end_id||chr(10)||
     CASE WHEN i_start_id IS NOT NULL THEN q'{AND   }'||i_column_name||' >= '||i_start_id END;
     --
     --DBMS_OUTPUT.PUT_LINE(l_cmd);
     --
     EXECUTE IMMEDIATE l_cmd;
     --
     COMMIT;
     --
  END sid_ipt;
  --
END sid_data_pkg;
/

Next, truncate our target table again.

Then create our parallel execute task:

begin
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'PX_TEST_TASK');
end;
/

Create the chunks of work to be executed concurrently:

declare
 l_chunk_sql varchar2(1000);
begin
  l_chunk_sql := q'{select (hi - 1) AS partval, hi }'||chr(10)||
                 q'{from   vw_interval_partitions@d1 v }'||chr(10)||
                 q'{where  table_name = 'REMOTE_PX_TEST' }'||chr(10)||
                 q'{order  by partition_position }';
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(task_name => 'PX_TEST_TASK',sql_stmt => l_chunk_sql, by_rowid => false);
end;
/

Check our task and our chunks:

select * from dba_parallel_execute_tasks;

TASK_OWNER TASK_NAME    CHUNK_TYPE   STATUS  TABLE_OWNER TABLE_NAME NUMBER_COLUMN TASK_COMMENT JOB_PREFIX SQL_STMT LANGUAGE_FLAG EDITION APPLY_CROSSEDITION_TRIGGER FIRE_APPLY_TRIGGER PARALLEL_LEVEL JOB_CLASS
---------- ------------ ------------ ------- ----------- ---------- ------------- ------------ ---------- -------- ------------- ------- -------------------------- ------------------ -------------- ---------
ME_DBA     PX_TEST_TASK NUMBER_RANGE CHUNKED 
select * from dba_parallel_execute_chunks order by chunk_id;

  CHUNK_ID TASK_OWNER TASK_NAME    STATUS     START_ROWID END_ROWID START_ID END_ID   JOB_NAME START_TS END_TS ERROR_CODE ERROR_MESSAGE
---------- ---------- ------------ ---------- ----------- --------- -------- -------- -------- -------- ------ ---------- -------------
      3053 ME_DBA     PX_TEST_TASK UNASSIGNED                       20120100 20120101 
      3054 ME_DBA     PX_TEST_TASK UNASSIGNED                       20120101 20120102 
        ...
      4017 ME_DBA     PX_TEST_TASK UNASSIGNED                       20140821 20140822 

 965 rows selected 

Then we run our parallel tasks thus, each executing the helper package and working on individual partitions:

set serveroutput on
DECLARE
  l_task     VARCHAR2(24) := 'PX_TEST_TASK';
  l_sql_stmt VARCHAR2(1000);
BEGIN
  --
  l_sql_stmt := q'{begin sid_data_pkg.sid_ipt ('REMOTE_PX_TEST','ME_DBA','DT','D1',:start_id,:end_id); end;}';
  --
  DBMS_PARALLEL_EXECUTE.RUN_TASK(l_task, l_sql_stmt, DBMS_SQL.NATIVE,parallel_level => 16);
  --
  dbms_output.put_line(DBMS_PARALLEL_EXECUTE.TASK_STATUS(l_task));
  --
end;
/

This executed in 2 minutes and returned code 6 which is FINISHED (without error).

Status of individual chunks can be checked via DBA_PARALLEL_EXECUTE_CHUNKS.
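
For completeness, a quick sketch of the kind of check and clean-up that might follow (not part of the run above):

-- summarise chunk outcomes for the task
select status, count(*)
from   dba_parallel_execute_chunks
where  task_name = 'PX_TEST_TASK'
group  by status;

-- drop the task once finished with it
begin
  dbms_parallel_execute.drop_task(task_name => 'PX_TEST_TASK');
end;
/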


On ECAR data and ed tech purgatory

Michael Feldstein - Fri, 2014-08-22 09:24

Recently I wrote a post about many ed tech products being stuck in pilots without large-scale adoption.

In our consulting work Michael and I often help survey institutions to discover what technologies are being used within courses, and typically the only technologies that are used by a majority of faculty members or in a majority of courses are the following:

  • AV presentation in the classroom;
  • PowerPoint usage in the classroom (obviously connected with the projectors);
  • Learning Management Systems (LMS);
  • Digital content at lower level than a full textbook (through open Internet, library, publishers, other faculty, or OER); and
  • File sharing applications. [snip]

This stuck process ends up as an ed tech purgatory – with promises and potential of the heaven of full institutional adoption with meaningful results to follow, but also with the peril of either never getting out of purgatory or outright rejection over time.

With the Chronicle’s Almanac coming out this week, there is an interesting chart that on the surface might contradict the above information, showing ~20 technologies with above 50% adoption.

 Educause Center for Analysis and Research

Note: Data are drawn from responses by a subset of more than 500 of the nearly 800 institutions that participated in a survey conducted from June to October 2013. Reported statistics are either an estimated proportion of the population or an estimated median.
Source: Educause Center for Analysis and Research [ECAR]

The difference, however, is that ECAR (through The Chronicle) asked how many institutions have different ed tech products and our survey asked how many courses within an institution use different ed tech products.

There are plenty of technologies being piloted but few hitting the mainstream, and adoption within an institution is one of the key indicators to watch.

The post On ECAR data and ed tech purgatory appeared first on e-Literate.

Log Buffer #385, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-08-22 08:00

This Log Buffer edition combs through top-notch Oracle, MySQL and SQL Server blog posts from around the globe.

Oracle:

You want to test the Oracle Java Cloud? You can get your own 30 day test account & instance. Or you can get a Java account with our permanent test system.

Some ramblings about Oracle R Distribution 3.1.1.

Scott is demystifying Oracle Unpivot.

Java 8 for Tablets, Pis, and Legos at Silicon Valley JUG – 8/20/2014

A new version of Oracle BPM Suite 11.1.1.7 with Adaptive Case Management (ACM) is now available.

SQL Server:

Data Mining: Part 14 Export DMX results with Integration Services

Should you be planning to move from Exchange to Office 365? If so, why?

Stairway to T-SQL DML Level 12: Using the MERGE Statement

From SQL Server Management Studio it’s hard to look through the first few rows of a whole lot of tables in a database.

Special Characters can lead to many problems. Identifying and reporting on them can save a lot of headache down the road.

MySQL:

MariaDB Galera Cluster 5.5.39 now available

A closer look at the MySQL ibdata1 disk space issue and big tables

How-To: Guide to Database Migration from MS Access using MySQL Workbench

Using resource monitoring to avoid user service overload

How to use MySQL Global Transaction IDs (GTIDs) in production

Categories: DBA Blogs

Oracle Priority Service Infogram for 21-AUG-2014

Oracle Infogram - Thu, 2014-08-21 18:19

OpenWorld
Each week leading up to OpenWorld we will be publishing various announcements, schedules, tips, etc. related to the event.
This week:
The OpenWorld Facebook page.
RDBMS
Oracle Restart to autostart your oracle database, listener and services on linux, from the Amis Technology Blog.
Removing passwords from Oracle scripts: Wallets and Proxy Users, from the DBA Survival Blog.
Restore datafile from service: A cool #Oracle 12c Feature, from The Oracle Instructor.
Performance
A list of excellent past postings on Execution Plans from Oracle Scratchpad.
Fusion
New Whitepaper: Building a Custom Fusion Application, from Fusion Applications Developer Relations.
SOA
Setup GMail as mail provider for SOA Suite 12c – configure SMTP certificate in trust store, from AMIS Technology Blog.
APEX
From grassroots – oracle: APEX Printer Friendliness using CSS Media Queries.

Big Data Appliance
Rittman Mead and Oracle Big Data Appliance, from….Rittman Mead, of course.
WLS
Setting V$SESSION for a WLS Datasource, from The WebLogic Server Blog.

Adapt - Learn New Things

Floyd Teter - Thu, 2014-08-21 16:56
Nothing lasts forever.  Sand-piles crumble.  Companies rise and fall.  Relationships change.  Markets come and go.  It’s just the nature of things.  Adapt or die.  Personally, I like this feature of life…can’t imagine anything worse than stagnation.  There’s nothing better to me than exploring and learning new things.

As Oracle continues their push into cloud-based enterprise applications, we’re seeing some of that fundamental change play out in the partner community; with both partner companies and with the individuals who make up the partners.  This is especially true with the technology-based services partners.  Companies have merged or faded away.  Individuals, many of whom have stellar reputations within the partner and customer communities, have accepted direct employment with Oracle or moved into other technology eco-systems.

Why the big change?  Frankly, it’s because the cloud doesn’t leave much room for the traditional offerings of those technology-based services partners.  As we shift from on-premise to cloud, the need for those traditional offerings is drying up.  Traditional installations, patching, heavy custom development…all those things seem headed the way of the buggy-whip in the enterprise applications space.  It’s time to adapt.

As a guy involved in providing services and complementary products myself, I’ve watched this change unfold over the past few years with more than a little self-interest and excitement - hey, ya gotta pay the bills, right?  As a result, I’ve identified three adaptation paths for individuals involved with services in the Oracle technology-based eco-system:

1.  Leave.  Find another market where your skills transfer well and take the leap.  This isn’t a bad option at all, especially if you’ve developed leadership and/or “soft skills”.  Believe it or not, there’s a big world outside the Oracle eco-system.

2.  Play the long tail.  The base of traditional, on-premise work will not disappear overnight.  It’s shrinking, granted, but it’s a huge base even while shrinking.  I also think there will be a big uptick in small, lightweight projects with traditional footprints that will complement large cloud-based enterprise application systems (for further information, see “Oracle Apex”).

3.  Learn new things.  Apply your background to build skills in new technologies.  If you’re an Oracle DBA or systems administrator (two skill sets that are rapidly merging into one), dig into Oracle Enterprise Manager…and remember that private clouds will continue to flourish with Oracle’s larger customers.  If you’re a developer, begin building skills in middle-tier integration - connecting cloud offerings in new and creative ways is very much an in-demand skill.  Or get smart with building lightweight complementary applications (ADF, BPM, SOA) - especially mobile (MAF).  If you’re a business analyst or functional type, get familiar with Functional Setup Manager and the Oracle Composers.  Maybe get a solid understanding of User Experience and how you can apply UX in your job.  As a solution architect, I’ve spent a great deal of time learning how the various Oracle Cloud solutions work together from data, integration, business process, and information perspectives…and if I can do it, so can you!

Obviously, my approach has been to explore and learn new things relevant to the market changes.  The opportunities I saw for myself consisted of connecting things together and adding value around the edges.  It’s been a hoot so far and I’m nowhere near done yet.  YMMV.


With Oracle OpenWorld coming up as a huge opportunity to learn new things, it seemed timely to share these thoughts now.  So there you have it.  My worm’s eye view of how the Oracle partner market (or at least the Oracle technology-based services partner market) is changing.  Maybe I nailed it.  Maybe I’m all wet.  Either way, let me know what you think.

Partner Webcast - Oracle Data Integration Competency Center (DICC): A Niche Market for services

Market success now depends on data integration speed. This is why we collected all best practices from the most advanced IT leaders, simply to prove that a Data Integration competency center should...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Introduction to MongoDB Geospatial feature

Tugdual Grall - Thu, 2014-08-21 15:30
This post is a quick and simple introduction to the Geospatial feature of MongoDB 2.6, using a simple dataset and queries. Storing Geospatial Information: As you know, you can store any type of data, but if you want to query it you need to use coordinates and create an index on them. MongoDB supports three types of indexes for Geospatial queries: 2d Index : uses simple coordinates (longitude, …

Up and Running with HCM 9.2 on PT 8.54 via PUM "Demo Image"

Jim Marion - Thu, 2014-08-21 11:57

Yes, you read that correctly. PUM is the new demo image. According to MOS Doc ID 1464619.1 "As of HCM 9.2.006, the PeopleSoft Image and PeopleSoft Demo Image have been combined into one PeopleSoft Update Image. You can use the PeopleSoft Update Image for both your patching and demonstration purposes." If you are current on your PUM images, you probably already knew that. If you are like me, however, and haven't downloaded a demo image for a while, then you may have been looking for demo images on the old MOS demo image page.

Since I use the image for prototyping and demonstrations, I turned off SES and reduced the memory requirements to 2048 MB. It is working great at this lower memory setting.

There are a lot of new and great features in the PT 8.54 PUM:

  • Attribute based branding,
  • Component Branding (add your own CSS and JavaScript to components without hacking a delivered HTML definition)
  • REST Query Access Service,
  • Mobile Application Platform (MAP), and
  • Fluid homepages

Tip: Access the fluid homepages by visiting the URL http://<yourserver>:8000/psc/ps/EMPLOYEE/HRMS/c/NUI_FRAMEWORK.PT_LANDINGPAGE.GBL. For example, if you have a hosts entry mapping your PUM image to the hostname hcmdb.example.com, then use the URL http://hcmdb.example.com:8000/psc/ps/EMPLOYEE/HRMS/c/NUI_FRAMEWORK.PT_LANDINGPAGE.GBL.

Panduit Delivers on the Digital Business Promise

WebCenter Team - Thu, 2014-08-21 11:53
How a 60-Year-Old Company Transformed into a Modern Digital Business

Connecting with audiences through a robust online experience across multiple channels and devices is a nonnegotiable requirement in today’s digital world. Companies need a digital platform that helps them create, manage, and integrate processes, content, analytics, and more.

Panduit, a company founded nearly 60 years ago, needed to simplify and modernize its enterprise application and infrastructure to position itself for long-term growth. Learn how it transformed into a digital business using Oracle WebCenter and Oracle Business Process Management.

Join this webcast for an in-depth look at how these Oracle technologies helped Panduit:
  • Increase self-service activity on their portal by 75%
  • Improve number and quality of sales leads through increased customer interactions and registration over the web and mobile
  • Create multichannel self-service interactions and content-enabled business processes
Register now for this webcast.

Presented by:

Andy Kershaw
Senior Director, Oracle WebCenter, Oracle BPM and Oracle Social Network Product Management, Oracle

Vidya Iyer
IT Delivery Manager, Panduit

Patrick Garcia
IT Solutions Architect, Panduit

Quiz night

Jonathan Lewis - Thu, 2014-08-21 11:05

Here’s a script to create a table, with index, and collect stats on it. Once I’ve collected stats I’ve checked the execution plan to discover that a hint has been ignored (for a well-known reason):

create table t2
as
select
        mod(rownum,200)         n1,
        mod(rownum,200)         n2,
        rpad(rownum,180)        v1
from
        all_objects
where
        rownum <= 3000
;

create index t2_i1 on t2(n1);

begin
        dbms_stats.gather_table_stats(
                user,
                't2',
                method_opt => 'for all columns size 1'
        );
end;
/

explain plan for
select  /*+ index(t2) */
        n1
from    t2
where   n2 = 45
;

select * from table(dbms_xplan.display);

----------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost  |
----------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    15 |   120 |    15 |
|*  1 |  TABLE ACCESS FULL| T2   |    15 |   120 |    15 |
----------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("N2"=45)

Of course we don’t expect the optimizer to use the index because we didn’t declare n1 to be not null, so there may be rows in the table which do not appear in the index. The only option the optimizer has for getting the right answer is to use a full tablescan. So the question is this – how come Oracle will obey the hint in the following SQL statement:


explain plan for
select
        /*+
                leading (t2 t1)
                index(t2) index(t1)
                use_nl(t1)
        */
        t2.n1, t1.n2
from
        t2      t2,
        t2      t1
where
        t2.n2 = 45
and     t2.n1 = t1.n1
;

select * from table(dbms_xplan.display);

-------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Rows  | Bytes | Cost  |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |       |   225 |  3600 |  3248 |
|   1 |  NESTED LOOPS                         |       |   225 |  3600 |  3248 |
|   2 |   NESTED LOOPS                        |       |   225 |  3600 |  3248 |
|*  3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T2    |    15 |   120 |  3008 |
|   4 |     INDEX FULL SCAN                   | T2_I1 |  3000 |       |     8 |
|*  5 |    INDEX RANGE SCAN                   | T2_I1 |    15 |       |     1 |
|   6 |   TABLE ACCESS BY INDEX ROWID         | T2    |    15 |   120 |    16 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T2"."N2"=45)
   5 - access("T2"."N1"="T1"."N1")

I ran this on 11.2.0.4, but it does the same on earlier versions.

Update:

This was clearly too easy – posted at 18:04, answered correctly at 18:21. At some point in its evolution the optimizer acquired a rule that allowed it to infer unwritten “is not null” predicates from the join predicate.
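
To see the inferred predicate in isolation (a quick sketch against the same t2 table), write the “is not null” by hand in the original single-table query and the index hint can then be obeyed:

explain plan for
select  /*+ index(t2) */
        n1
from    t2
where   n2 = 45
and     n1 is not null
;

select * from table(dbms_xplan.display);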

 

 

 


Identifying Deadlocks Using the SQL Server Error Log

Chris Foot - Thu, 2014-08-21 09:30

Deadlocking in SQL Server can be one of the more time-consuming issues to resolve. The script below can reduce the time it takes to gather the necessary information and troubleshoot the cause of the deadlocks. Using this script requires your SQL Server version to be 2005 or newer and for Trace Flag 1222 to be enabled to capture the deadlocking information in the error log.
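
If trace flag 1222 is not already enabled, a minimal sketch for turning it on globally (this lasts until the next restart; add -T1222 as a startup parameter to make it persistent):

-- Write deadlock graphs to the error log for all sessions
DBCC TRACEON (1222, -1);

-- Confirm the trace flag is active
DBCC TRACESTATUS (1222);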

The first portion of the script collects the data written to the error log and parses it for the information needed. With this data, the script can return many different data points for identifying the root cause of your deadlocks. It begins with a query to return the number of deadlocks in the current error log.

select
distinct top 1 deadlockcount
from @results
order by deadlockcount desc

The next script will allow you to review all of the deadlock information in the current error log. It will output the raw InputBuffer details, but if the queries running in your environment have extraneous tabs or spaces, you can modify the commented portion to remove them.

select 
deadlockcount, logdate, processinfo, 
logtext
--,rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(logtext,'               ',' '),'       ',' '),'     ',' '),'   	',' '),'    ',' '),'  ',' '),'  ',' '),'	',' '))) as logtext_cleaned
from @results
order by id

An important piece of information when identifying and resolving deadlocks is the resource locks. This next query returns all of the error log records containing details for the locks associated with deadlocks. In some situations, the object and/or index name may not be included in this output.

select distinct
logtext
from @results 
where 
logtext like '%associatedobjectid%'

In order to find the objects involved in the deadlock occurrences, run the next query with results to text. Then, copy the output into a new query window and remove the ‘union’ from the end. When run, it will return the object and index names.

select distinct
'SELECT OBJECT_NAME(i.object_id) as objectname, i.name as indexname
      FROM sys.partitions AS p
      INNER JOIN sys.indexes AS i ON i.object_id = p.object_id AND i.index_id = p.index_id
      WHERE p.partition_id = '+convert(varchar(250),REVERSE(SUBSTRING(REVERSE(logtext),0,CHARINDEX('=', REVERSE(logtext)))))+'
	  union
	  '
from @results 
where logtext like '   keylock hobtid=%'
union
select distinct
'SELECT OBJECT_NAME(i.object_id) as objectname, i.name as indexname
      FROM sys.partitions AS p
      INNER JOIN sys.indexes AS i ON i.object_id = p.object_id AND i.index_id = p.index_id
      WHERE p.partition_id = '+convert(varchar(250),REVERSE(SUBSTRING(REVERSE(logtext),0,CHARINDEX('=', REVERSE(logtext)))))+'
	  union
	  '
from @results
where logtext like '   pagelock fileid=%'

In my experience, situations can arise where there are a large number of deadlocks but only a few queries involved. This portion of the script will return the distinct queries participating in the deadlocks. The commented lines can be modified to remove extra tabs and spaces. To avoid issues caused by the InputBuffer data being on multiple lines, you should cross-reference these results with the results of the next query.

select
max(deadlockcount) as deadlockcount, max(id) as id, 
logtext
--rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(logtext,'               ',' '),'       ',' '),'     ',' '),'   	',' '),'    ',' '),'  ',' '),'  ',' '),'	',' '))) as logtext_cleaned
from @results
where logtext not in (
'deadlock-list',
'  process-list',
'    inputbuf',
'    executionStack',
'  resource-list',
'    owner-list',
'    waiter-list'
)
and logtext not like '     owner id=%'
and logtext not like '     waiter id=%'
and logtext not like '   keylock hobtid=%'
and logtext not like '   pagelock fileid%'
and logtext not like ' deadlock victim=%'
and logtext not like '   process id=%'
and logtext not like '     frame procname%'
group by 
logtext
--rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(logtext,'               ',' '),'       ',' '),'     ',' '),'   	',' '),'    ',' '),'  ',' '),'  ',' '),'	',' ')))
order by id asc, deadlockcount asc

This query will return the execution stack and InputBuffer details for each deadlock.

select 
deadlockcount, logdate, processinfo, logtext
--rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(logtext,'               ',' '),'       ',' '),'     ',' '),'   	',' '),'    ',' '),'  ',' '),'  ',' '),'	',' '))) as logtext_cleaned
from @executionstack 
WHERE logtext not like '%process id=%'
and logtext not like '%executionstack%'
order by id asc

For documentation purposes, this query will return the distinct InputBuffer output for the deadlock victims. If the InputBuffer data is on multiple lines, you should cross-reference these results with the results of the next query.

select max(d.deadlockcount) as deadlockcount, max(d.executioncount) executioncount, max(d.id) as id, logtext
--rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(d.logtext,'               ',' '),'       ',' '),'     ',' '),'   	',' '),'    ',' '),'  ',' '),'  ',' '),'	',' '))) as logtext_cleaned
from @executionstack d
right join (
	select e.executioncount
	from @results r
	join (
		select deadlockcount, logtext, convert(varchar(250),REVERSE(SUBSTRING(REVERSE(logtext),0,CHARINDEX('=', REVERSE(logtext))))) victim
		from @results
		where logtext like ' deadlock victim=%'
	) v on r.deadlockcount=v.deadlockcount
	left join (
		select id, logtext, substring(logtext, charindex('=', logtext)+1,50) processidstart,
		substring(substring(logtext, charindex('=', logtext)+1,50),0, charindex(' ', substring(logtext, charindex('=', logtext)+1,50))) processid
		from @results
		where logtext like '   process id=%'
	) p on r.id=p.id
	join @executionstack e on r.id=e.id
	where v.victim=p.processid
) q on d.executioncount=q.executioncount
where d.logtext not like '   process id=%'
and d.logtext <> '    executionStack'
and d.logtext not like '     frame%'
group by logtext
--rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(logtext,'               ',' '),'       ',' '),'     ',' '),'   	',' '),'    ',' '),'  ',' '),'  ',' '),'	',' ')))
order by id asc, deadlockcount asc, executioncount asc

This query will return the execution stack and InputBuffer details for each victim.

select d.deadlockcount, d.logdate, d.processinfo, logtext
--rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(d.logtext,'               ',' '),'       ',' '),'     ',' '),'   	',' '),'    ',' '),'  ',' '),'  ',' '),'	',' '))) as logtext_cleaned
from @executionstack d
right join (
	select e.executioncount
	from @results r
	join (
		select deadlockcount, logtext, convert(varchar(250),REVERSE(SUBSTRING(REVERSE(logtext),0,CHARINDEX('=', REVERSE(logtext))))) victim
		from @results
		where logtext like ' deadlock victim=%'
	) v on r.deadlockcount=v.deadlockcount
	left join (
		select id, logtext, substring(logtext, charindex('=', logtext)+1,50) processidstart,
		substring(substring(logtext, charindex('=', logtext)+1,50),0, charindex(' ', substring(logtext, charindex('=', logtext)+1,50))) processid
		from @results
		where logtext like '   process id=%'
	) p on r.id=p.id
	join @executionstack e on r.id=e.id
	where v.victim=p.processid
	--order by r.id
) q on d.executioncount=q.executioncount
where d.logtext not like '   process id=%'
and d.logtext <> '    executionStack'
order by d.id asc

The script, which can be downloaded here, includes all of these queries for you to use. Each one is independent, so if you are only interested in the results for a single query, the other sections can be commented out.

Any feedback you have is always appreciated. In my opinion, that is one of the best parts about writing T-SQL! Don’t forget to check back for my next post, in which I will be using the AdventureWorks2008R2 database to provide an in-depth deadlock analysis.

The post Identifying Deadlocks Using the SQL Server Error Log appeared first on Remote DBA Experts.