
DBA Blogs

Impressions from #oug_ire 2014

The Oracle Instructor - Thu, 2014-03-13 01:53

The Oracle User Group Ireland had its annual conference for 2014 at the Convention Centre in Dublin – which is a very nice location:

Convention Centre in Dublin

There were many well-known Oracle experts speaking, as you can see from the agenda. Oracle University was represented by Joel Goodman and myself – we really enjoyed talking there and listening to the other speakers :-)

Joel Goodman and Uwe Hesse at the OUG Ireland Conference 2014

The first presentation I attended was delivered by Tom Kyte, who in the absence of Maria Colgan gave an Introduction to the New Oracle Database In-Memory Option. This extra-charged option will probably be available with 12.1.0.2, which we expect to be released this calendar year. Safe Harbor etc. as usual. One key point that I took away here was that it is extremely easy to implement the In-Memory Option: just set one initialization parameter and designate the use on the segment level with ALTER TABLE, for example. In particular, the storage structure on disk remains unchanged and the whole thing is completely transparent to the application layer.
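
For illustration, a minimal sketch of what that is expected to look like (Safe Harbor applies; the syntax is as announced, while the table name and size here are made up):

alter system set inmemory_size = 2G scope=spfile;  -- the one initialization parameter (takes a restart)
alter table sales inmemory;                        -- designate use at the segment level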

Next talk was delivered by fellow Oakie Tim Hall: An Oracle DBA’s Guide to WebLogic Server.

Tim Hall

He tried to pack all the information that he wished he knew before he dealt with WebLogic for the first time into that presentation. Very useful indeed for starters, I suppose. There was no single key point here to highlight, instead we got a collection of small but helpful tips that I will definitely come back to when implementing WebLogic myself.

Next I went to Simon Haslam, who talked about What’s So Special about the Oracle Database Appliance?

Simon Haslam

Some key points here: the ODA is definitely to be considered for rapid RAC deployment, especially for small to midsized projects with less demand for extremely good I/O performance. Also, it is probably a clever move to install it as a virtualized platform based on Oracle VM, even if at first only databases are supposed to run there. This is easy to do at the initial installation (but an effort to change after the fact) and has no negative performance impact.

Afterwards I did my own two presentations: Real-Time Query for Data Guard 11g in Action and Data Guard 12c New Features in Action

Uwe Hesse presenting about Data Guard

Many thanks to my dear colleague Mina Sagha Zadeh, who took the picture and introduced my talk in the most charming and flattering way I have ever experienced :-)

All in all it was a fantastic event in Dublin, kudos to the Oracle User Group Ireland for organizing and running it so well – great job!


Tagged: OUG Ireland 2014
Categories: DBA Blogs

I think you should submit an abstract

This is not your typical posting from me. But I just received a LinkedIn message and it got me motivated enough to write this.

A colleague, who has been working with Oracle for over 15 years, sent me a message about the perils of working for a consulting company that has kept him on the road for about a year now. He's had enough and is looking for something else that will keep him close to home. While this type of life can be painful, it can also be plentiful.

I challenged him to submit an abstract to IOUG/Collaborate next year. While this year's big IOUG conference is only a few weeks away (April 7-11) in Las Vegas, he should start thinking about submitting an abstract now.

He has tons of experience. I wrote, "You have tons of experience that younger folks need to hear about. Perhaps taking just one topic/problem that you found really interesting and had a chance to dig into this past year. People love that kind of stuff."

Most people think that their abstract will not get accepted. I've been involved with Oracle User Groups since I started working with Oracle technology in 1989! And I can tell you from experience that I always rank a well-written abstract with a well-thought-out outline and a very small yet deep scope very high. In fact, I recruit people for IOUG in my track that I know will do this.

So I wrote him, "There are a lot of papers accepted in the DBA area, especially when they are about real life experiences that you turn into a rock'n conference presentation!"

Think about your DBA experiences; the real specific ones that you feel proud about, the ones that you're really interested in. That's what people love to hear about, because they feel your passion!

I feel so strongly about this, I'll offer this: I will pre-review your abstract and help make it a better one.

Hope to see your abstract at next year's IOUG!

All the best in your Oracle performance endeavors,

Craig.


If you enjoy my blog, I suspect you'll get a lot out of my online or in-class training. Go to www.orapub.com. I also offer on-site training and consulting services.

P.S. If you want me to respond to a comment or you have a question, please feel free to email me directly at craig@orapub .com. Another option is to send an email to OraPub's general email address, which is currently info@ orapub. com.
  

Categories: DBA Blogs

Performance Tuning with AWR & ASH in #em12c

DBASolved - Wed, 2014-03-12 10:42

Oracle monitoring and performance tuning have grown over the years.  With every release of the Oracle database, the monitoring capabilities provided have evolved.  In Oracle 11g, monitoring with Statspack has given way to Automatic Workload Repository (AWR) and Active Session History (ASH) for near real-time monitoring of items within the Oracle database.  Adding AWR and ASH to your performance tool kit can yield huge savings of time in tuning endeavors.

Note: AWR and ASH were also in Oracle 10g, but not as robust as they currently are in Oracle 11g.

Note: If you are interested in how to run AWR and ASH from the command line check out this post by a good friend of mine: here

Instead of focusing on the command line to run AWR and ASH, let's take a look at how these tools are used through Oracle Enterprise Manager 12c.

Note: The AWR and ASH tools are covered under the Diagnostic and Tuning Management Packs for the database and OEM.  Before using these tools, make sure you have access to these management packs.

Automatic Workload Repository (AWR)

The Automatic Workload Repository (AWR) is used to automate database statistics gathering by collecting, processing, and maintaining performance statistics for database problem identification and self-tuning.  A key part of AWR is snapshots.  AWR snapshots are gathered from memory every hour by default, and comparing snapshots over time can be used to identify performance problems.
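
If the defaults do not suit you, the snapshot interval and retention can be changed with the DBMS_WORKLOAD_REPOSITORY package; a small sketch with illustrative values:

begin
  -- 30-minute snapshots, retained for 30 days (both values in minutes)
  dbms_workload_repository.modify_snapshot_settings(interval => 30, retention => 43200);
  -- take an ad-hoc snapshot right now
  dbms_workload_repository.create_snapshot();
end;
/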

In order to use AWR, there are two initialization parameters that are relevant:

  • STATISTICS_LEVEL – set to TYPICAL (the default) or ALL to enable statistics gathering for AWR. Setting it to BASIC will disable statistics gathering.
  • CONTROL_MANAGEMENT_PACK_ACCESS – set to DIAGNOSTIC+TUNING (the default) or DIAGNOSTIC to enable diagnostic monitoring.  Setting it to NONE will disable many database features, including ADDM (not covered in this post).  A quick way to check and set both parameters is shown below.
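
A minimal sketch from SQL*Plus:

show parameter statistics_level
show parameter control_management_pack_access

alter system set statistics_level = TYPICAL scope=both;
alter system set control_management_pack_access = 'DIAGNOSTIC+TUNING' scope=both;
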
Automatic Workload Repository (AWR) Report

With AWR enabled for the database, an AWR report can be run from within OEM.  While on a database home page within OEM, the AWR Report can be accessed from the Performance menu (Performance –> AWR –> AWR Report) (Image 1).

Image 1:

Before you can run an AWR Report, OEM will ask you to log in to the database.  Like with anything in OEM, you can use Named Credentials or create a new one (Image 2).

Image 2:

After logging in you will be asked if you want to run the report “By Baseline” or “By Snapshot”.  Most of the time, “By Snapshot” will be used (Image 3).  This step corresponds to the command-line version of the AWR Report when it asks which snapshots you would like to use.  Then click the Generate Report button to build the report.

Tip: Use the magnifying glass to look up the snapshots.  This will open a separate window.

Image 3:

Once the report is generated, it can be read in the OEM interface or saved to a file for later reading or emailing to others.

Reading AWR Report

One difference between the AWR Report at the command line and in the OEM interface is that at the command line a text-based report can be chosen, while from OEM the report is automatically saved in HTML format.  Despite the difference in formats, the report is read the same way no matter what the format is.
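
For reference, the command-line version is driven by a script shipped under the Oracle home; a minimal sketch of invoking it:

SQL> @?/rdbms/admin/awrrpt.sql

The script then prompts for the report type (html or text), the number of days of snapshots to list, the begin and end snapshot IDs, and the report name.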

The header of the report (Image 4) provides general information about the environment the report was gathered from.  It also provides information about the snapshots used to generate the report and the time difference between the snapshots.

Image 4:

The next section in the report is the Report Summary.  In the Report Summary, there are a few items of interest.  The first is the “Instance Efficiency Percentages” section (Image 5).  In this section, you can get a sense of how well the instance is managing its memory operations.  Ideally, the target should be 100%.

Note: The only time I’m concerned about this area is if any of the values are below 90%. Often this is an indication that the SGA may be too small.

Image 5:

In the same section, I also look at the “Top 5 Timed Foreground Events” section (Image 6).  This section of the report provides a list of the wait events that are currently consuming the most wait time and the wait class they belong to.  From this, I can see what I should be tuning the environment to prevent.  In the image provided, I can see that I have a wait dealing with high water mark contention (enq: HW – contention).

From looking at these waits, it appears that I have an object that is having a problem with extents being allocated.  What I need to do next is see if I can find the object that is causing the issue.

Image 6:

Since I know that I have a high water mark contention issue, I need to look at what objects are being written to in the database.  In the “Segment Statistics” section of the report, I want to look at the “Segments by Physical Writes” sub-section (Image 7).  From looking at this section, I can see that the problem is on a LOB object named SYS_LOB0000253792C00002$$.

Image 7:

From the AWR Report, I can quickly see the object that is causing the problem with the wait.  I can then attempt to correct the problem and remove the wait by acting on the LOB object in an appropriate way.  In this case, that may be adding extents, looking at moving the LOB to a less accessed tablespace, etc.

Active Session History (ASH)

Although I have identified the object that is causing the wait, I want to find the session that is causing it.  This is where I can use the Active Session History (ASH) Report.  Before diving into the ASH Report, let’s take a look at what Active Session History (ASH) actually is.

Active Session History provides a detailed history of session activity by sampling database activity every second.  These samples are taken from memory and then stored in persistent storage (tables).  The sampling that ASH does is at the session level rather than the instance level.  The statistics that are captured are only for active sessions and are directly related to the work being performed by those sessions rather than the entire database.
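
The recent samples are also exposed through V$ACTIVE_SESSION_HISTORY (older samples land in DBA_HIST_ACTIVE_SESS_HISTORY), so you can query them ad hoc as well; a sketch pulling the last 10 minutes of one wait event:

select sample_time, session_id, sql_id, event
from v$active_session_history
where sample_time > systimestamp - interval '10' minute
and event = 'enq: HW - contention'
order by sample_time;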

Active Session History (ASH) Reports are used to analyze transient performance problems within the database during a specific window of time.

Let’s take a look at how the ASH Report is run and what it provides.

Active Session History (ASH) Report

In Oracle Enterprise Manager 12c (OEM), the ASH Report is accessed through the “Top Activity” menu (Performance –> Top Activity) (Image 8) for that database. 

Image 8:

Once on the “Top Activity” page, below the graph that is displayed there is a button labeled “Run ASH Report” (Image 9). 

Image 9:

Clicking on the “Run ASH Report” button will bring up the page where you provide specific information about the ASH Report that you would like to run (Image 10).  Since ASH samples data every second, you can select a beginning date, ending date, and associated times that you would like the report to cover.

Note: Most times, when I generate an ASH report, I try to keep to the same timeframe as the AWR report.  This provides some consistency to the findings.

Image 10:

Once I have selected the timeframe I want to use, the report can be generated.  Just like the AWR Report, the ASH Report can be viewed inside of OEM or saved to a file for later reference or sharing with others.

Reading the ASH Report

Let’s read the report now.  As you recall, I found an issue with a high-water mark contention wait.  What I will be looking for in the ASH Report is similar information and, possibly, who is behind the wait.

Just like the AWR Report, the ASH Report has a header section where I can find all the information related to the database, the database instance, and the timing of the report (Image 11).  This section is general in nature, but it also provides similar information on SGA usage, just like the AWR Report does.

Image 11:

The next thing I like to look at in the ASH Report is the “Top User Events” section (Image 12).  If I have selected the same timeframe as the AWR Report, I expect to see similar wait events to those I found earlier.  As you can see, the high-water mark contention wait is at the top of the list again.

Image 12:

Now that I have confirmed that the high-water mark contention wait is the top wait for the sessions during this timeframe, I want to confirm that it is the same object I found in the AWR Report.  Information on the object causing the wait can be found in the “Top Event P1/P2/P3 Values” section (Image 13).

Image 13:

In order to confirm that it is the same object, I need to search DBA_EXTENTS with the values from P2 (data file) and P3 (block number).  Depending on how fast the system is, I can quickly identify the object being accessed that is causing the wait.
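
A sketch of that lookup, with &P2 and &P3 standing in for the values from the report:

select owner, segment_type, segment_name
from dba_extents
where file_id = &P2
and &P3 between block_id and block_id + blocks - 1;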

In any tuning that is being done, I want to find the SQL statement that the session is using.  This information can also be found in the ASH Report along with the top sessions and if there are any sessions being blocked.

Summary

Overall, these two reports (AWR and ASH) provide a wide range of information for anyone to use in diagnosing and tuning performance.   These reports can be run from the command line or from OEM, depending on your comfort level with the tools; however, I recommend that any DBA looking to keep a database in good working order and top performance learn how to use and read these reports.  It will save a lot of time in resolving issues.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Database, OEM, Performance
Categories: DBA Blogs

Partner Webcast – Oracle Cloud Opportunities: The broadest SaaS Portfolio in the Market

Cloud computing is a significant advancement in the delivery of information technology and services and we see a strong industry trend going there. By providing on demand access to a shared pool...

We share our skills to maximize your revenue!
Categories: DBA Blogs

What is the difference between logical and physical corruption in Oracle?

Pythian Group - Wed, 2014-03-12 07:49

When we talk about logical corruption, there are two different failure states that fall under this label:

  1. Accidental or incorrect modification of application data by a user or application.

    In this scenario, a user or application, either by misadventure or resulting from an application bug, changes data in a database to incorrect or inappropriate values. An example would be an engineer who performs an update, but forgets to formulate the predicate such that it updates only a single record, and instead accidentally updates (and commits) changes to thousands of records. When we perform an assessment of a client’s systems, we look carefully at how the client is managing retention of database undo data, archived redo logs and the recycle bin. Many clients assume that physical backups serve all aspects of recoverability for Oracle. On the contrary, effective management of these components can greatly reduce the complexity, RPO and RTO in repairing this type of fault.

  2. Logical (and physical) corruption of data blocks. Block corruptions come in two types:

    Physical corruptions (media corrupt blocks) are blocks that have sustained obvious physical damage. When Oracle detects an inconsistency between the CSN in the block header and the CSN in the block footer, or the expected header and footer structures are not present or are mangled, then the Oracle session raises an exception upon read of the block (ORA-01578: ORACLE data block corrupted…). The call to Oracle fails, and the exception is written to the Oracle alert log and trace files. Physical corruptions are generally the result of infrastructure problems, and can be introduced in a variety of ways. Some possible sources of physical corruption are storage array cache corruption, array firmware bugs, filesystem bugs and array controller battery failure combined with power outage. One can imagine at least a dozen other possible sources of such corruption. Physically corrupt blocks can be repaired using Oracle Recovery Manager’s BLOCKRECOVER command, which restores and recovers the block in place in the file without interrupting any other sessions operating against the database (see the RMAN sketch after this list).

    Logically corrupt blocks are blocks that have good header and footer CSNs, but that have some other kind of internal inconsistency. For instance, one of the block header structures, which tracks the number of locks associated with rows in the block, differs from the actual number of locks present. Another example would be if the header information on available space differs from the true available space on the block. Upon encountering these types of faults, the calling session generally will raise ORA-00600 (“internal error”) with additional arguments that allow us to diagnose the specific type of defect, and the call will fail. The exception will be written to the alert log and trace files. Like physical corruption, there are a wide range of possible ways that the fault could have been introduced, including all of the ways listed above for physical corruption. However, logically corrupt blocks are much more likely to have been introduced as a result of a failure in the Oracle software, or as a result of an Oracle bug or cache corruption.

    By default, Oracle has features that seek to perform sanity checks on blocks before they are written. However, for highly risk-averse enterprises, additional checks, including checks for logical inconsistencies and block checksum verification, can be enabled. These features consume additional resources, so they should be used judiciously; a sketch of the relevant settings also appears below.
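
For the physically corrupt case, a sketch of the RMAN repair described above (the datafile and block numbers here are illustrative):

RMAN> VALIDATE DATABASE;
RMAN> BLOCKRECOVER CORRUPTION LIST;
RMAN> BLOCKRECOVER DATAFILE 7 BLOCK 1234;

VALIDATE populates V$DATABASE_BLOCK_CORRUPTION; BLOCKRECOVER CORRUPTION LIST repairs every block listed there, while the DATAFILE/BLOCK form targets a single block. And a sketch of enabling the additional checks just mentioned (both add CPU overhead, so test before enabling in production):

alter system set db_block_checksum = FULL scope=both;
alter system set db_block_checking = MEDIUM scope=both;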

Categories: DBA Blogs

Find ILOM(s) for #exadata

DBASolved - Wed, 2014-03-12 06:58

Oracle Exadata can be a complex piece of equipment that is housed in a data center or some other remote location. Once it is all configured and attached to the network, managing it can be done using standard SSH tools.  At times, though, there is a need to manage the Exadata or any other engineered system with Integrated Lights Out Management (ILOM).

What is ILOM? 

Integrated Lights Out Manager (ILOM) uses a dedicated service processor (SP), hardware and software, to provide a way to manage an Exadata machine component, i.e. compute nodes, storage cell nodes, etc.  The ILOM is pre-installed on these components.

What is ILOM used for? 

ILOM enables you to actively manage and monitor nodes independently of the operating system, providing you with a reliable way to manage Exadata without direct access.

Using ILOM, you can proactively manage the following:

  • Identify hardware errors and faults
  • Remotely control the power of the node
  • View the graphical and non-graphical console of the host
  • View current status of sensors and indicators of the system
  • Identify the hardware configuration of the system
  • Receive alerts that are generated about system events

With the highlights/benefits of ILOM listed, how do you find the addresses for the ILOMs?  Normally, this information is provided to you when the Exadata is initially configured.  From time to time you will need to look up the addresses to access the ILOM.

Addresses for any of the Exadata components can be found in the “dbm.dat” file.  This file is located under /opt/oracle.Support/onecommand.  In order to access the “dbm.dat” file, you will need root access.

$ cd /opt/oracle.Support/onecommand
$ cat dbm.dat

Image 1 shows the ILOM excerpt from the dbm.dat file.

Image 1:

Once you have identified the hostname/IP address for ILOM, then it can be used in a web browser to access the ILOM web interface (Image 2).

Image 2:

Once at the login page for the ILOM, it can be accessed using the “root” user and password.
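
The ILOM also exposes a command-line interface over SSH; a short sketch (the hostname is hypothetical and would come from your dbm.dat):

$ ssh root@exa01dbadm01-ilom
-> show /SYS
-> start /SP/console

Here show /SYS displays overall hardware status and fault state, while start /SP/console attaches you to the host console, which is handy when the operating system is unreachable over the network.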

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Exadata
Categories: DBA Blogs

Finished reading 12c Concepts manual

Bobby Durrett's DBA Blog - Tue, 2014-03-11 17:47

I decided to read the entire Oracle 12c Concepts manual.  It’s kind of overkill to read the entire manual if you just want to learn the new 12c features but it has been a long time since I’ve read the entire Concepts manual on the current version of Oracle.  I’m not sure that I remember correctly, but I think the last version I read entirely was the 9i version.  I definitely read the Oracle 7 concepts and several other manuals when I was first getting started.

The 12c manual has things in it that seem virtually unchanged from what I remember from earlier versions.  But, it also has a lot more technical depth than I remember.  It was boring at times reading over sections filled with things I’ve worked on for almost 20 years now.  But, these monotonous sections still had reminders of things I probably don’t have in the front of my brain because I don’t use them every day.  Then there were whole sections about features that are either new in 12c or new to me such as the multitenant features.

I’ve heard many people say that a new DBA (database administrator) should start with the Concepts manual.  Reading it again just confirms this often stated advice.  Concepts has pointers to all these other manuals where you can find details on the things you found interesting or needed for your job.  For example, in the sections relating to partitioned tables there are references to the VLDB and Partitioning manual.  A new DBA who will be working on partitioned tables might not realize that this manual existed, but the Concepts manual will send you there.

I realize that a lot of people don’t like to read manuals, but if you had to force yourself to sit down and read one Oracle manual cover to cover so you could be a better DBA you should read Concepts.  It’s the starting point and after 19 years and counting as an Oracle DBA I still got a lot out of reading it.

- Bobby

Categories: DBA Blogs

final version of my Hotsos 2014 presentation

Grumpy old DBA - Sat, 2014-03-08 12:38
This is the "really really" final version because it was updated and some additional references added even after the presentation was delivered.

Hoping to do it at Open World 2014 so ... I guess this is the early version of that presentation eh?

You can find it here: Three approaches to shared pool monitoring Hotsos 2014
Categories: DBA Blogs

So you think you have disabled ASMM/AMM oracle automatic SGA sizing behavior?

Grumpy old DBA - Sat, 2014-03-08 12:34
I polled my session attendees at Hotsos 2014 ( Three approaches to shared pool monitoring ) and asked them who was using the automatic stuff versus who was doing it all manually.

My guess is that there were about 100 people in the room, and it looked like automatic "won" but not by much.  Maybe 55 percent automatic and 45 percent manual ... so there are still a whole bunch of people nervous about using that functionality.

My presentation was updated ( after delivering it ) to add in some additional doc id's and warnings ( both pro and con ) about manual memory settings.

Specifically worth noting is the unobvious fact that EVEN IF you have disabled oracle from doing SGA resizing operations automatically ... well it may do them anyhow.  Makes me a little grumpy eh?

So you can set SGA_TARGET to zero ( and/or MEMORY_TARGET to zero ) and still have oracle make decisions for you?  It turns out that at some point you have to set yet another hidden underscore parameter to tell the oracle software "yeah thanks but I really really mean it" ...

_MEMORY_IMM_MODE_WITHOUT_AUTOSGA

Oracle doc id 1269139.1 has a pretty good writeup on how to really turn off AMM/ASMM.
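
From that note, a sketch of what actually turning it off looks like (it is an underscore parameter, so check with Oracle Support before setting it in your environment):

alter system set "_memory_imm_mode_without_autosga" = FALSE scope=spfile;
-- restart the instance for the change to take effect
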
Categories: DBA Blogs

Sqlplus script to check for row chaining

Bobby Durrett's DBA Blog - Fri, 2014-03-07 15:10

Here is the sqlplus scripting I use to check whether row chaining is degrading a query’s performance:

VARIABLE monitored_sid number;

begin

SELECT sid into :monitored_sid 
from v$session 
where audsid=USERENV('SESSIONID');

end;
/

select b.value "table fetch continued rows"
from
V$SESSTAT b
where b.SID = :monitored_sid
and b.STATISTIC# = (SELECT STATISTIC# FROM V$STATNAME 
WHERE NAME='table fetch continued row');

I create a test script with the problem query and put this code after it to see how much chaining affected it.

Here is an example of a fast query:

Elapsed: 00:00:15.87

table fetch continued rows
--------------------------
                     19723

Here is an example of a slow query:

Elapsed: 00:03:17.46

table fetch continued rows
--------------------------
                  23775056

This was the same query for two different date ranges.  The second range had a lot of row chaining and the query was much slower.
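
If you also want to see which tables are chained, note that CHAIN_CNT in DBA_TABLES is only populated by ANALYZE, not by DBMS_STATS; here is a sketch (the owner and table names are made up, and ANALYZE ... COMPUTE overwrites optimizer statistics, so on production consider ANALYZE ... LIST CHAINED ROWS instead):

analyze table myowner.mytable compute statistics;

select table_name, num_rows, chain_cnt
from dba_tables
where owner = 'MYOWNER'
and chain_cnt > 0;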

- Bobby

Categories: DBA Blogs

Log Buffer #362, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-03-07 09:11

This Log Buffer Edition rounds up, yet again, some cool blog posts from across the planet. Sit back, relax and enjoy.

Oracle:

Marc has completed his review of the SQL enhancements regarding RMAN in 12c.

This weekend we finalized the latest update for our ADF runtime performance audit tool – Red Samurai Performance Audit Tool v 2.4.

This isn’t a SQL Developer post per-se, however if you are using the Oracle Developer Day VirtualBox 12c image, you will find this useful.

aud$ and ASSM – a good combination.

Why You Don’t Want to Code Validation in Before Commit?

SQL Server:

Basit A. Farooq talks about Different approaches of counting number of rows in a table.

The Top 5 SQL Server Features You Aren’t Using And Why.

A CLR Application: Deleting Three Billion Rows.

A simple query to find the database backup status. The SELECT statement returns ServerName, dbname, BackupStartDate, BackupFinishDate, BackupAge, Size, status and Type. The query will only run on SQL Server 2005 and later versions.

Characters you should never ever use in an object or schema name!

MySQL:

Migration of MTR suites to use InnoDB.

50 tips to boost MySQL Performance Webinar follow up.

MySQL Workbench is a great tool when someone asks you to look at their schemas.

MySQL for Excel Main Features Video.

Many-table joins in MySQL 5.6.

Categories: DBA Blogs

Blog second anniversary

Bobby Durrett's DBA Blog - Wed, 2014-03-05 09:52

Today is the second anniversary of my first post on this blog.

Some time ago I played with a free blog that came with a Yahoo email account that my wife and I share, but I didn't get very far with that.  Finally, for this blog, I decided to spend a small amount of money to get my own hosting and domain name.

I’m using iPage for the hosting.  I’ve been happy with their service.  There were a couple of billing issues, but I opened tickets with them and they were quickly resolved in a fair way.

I evaluated a few blog sites and chose WordPress because it allowed me to do what was most important.  WordPress’s software enabled me to upload sqlplus scripts and their logs along with PowerPoint, Microsoft Word, and PDF files.  WordPress also enabled me to include output from sqlplus and have the columns line up nicely like this:

SQL> select table_name, blocks
  2  from dba_tables
  3  where owner='SYS'
  4  order by table_name;

TABLE_NAME                         BLOCKS
------------------------------ ----------
ACCESS$                               759
ALERT_QT
APPLY$_CHANGE_HANDLERS                  0
APPLY$_CONF_HDLR_COLUMNS                0
APPLY$_CONSTRAINT_COLUMNS               0
APPLY$_DEST_OBJ                         0
APPLY$_DEST_OBJ_CMAP                    0
APPLY$_DEST_OBJ_OPS                     0
APPLY$_ERROR                            0

I’m pretty happy with the results.  I’ve had some nice interaction with the Oracle community and I’ve used links to my posts with coworkers on the job.  Plus, I actually query the site myself when I forget how to do something I wrote about.

I.e., if I forget where to put the plus sign on an outer join, I search for it on my blog!
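
For example, with the classic EMP and DEPT demo tables, the (+) goes on the side that is allowed to have no matching row; a quick sketch:

select e.ename, d.dname
from emp e, dept d
where e.deptno = d.deptno (+);  -- keeps employees that have no matching department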

Overall it has been an enjoyable experience and I think helpful to me and my employer as I try to communicate with others the things I’m learning about Oracle database technology, especially in the performance tuning area.

- Bobby

Categories: DBA Blogs

Combat with an enqueue wait event (enq: HW–Contention)

DBASolved - Wed, 2014-03-05 07:24

Recently I’ve been combating a high water mark enqueue wait (enq: HW – contention) on a single node within an Exadata I’m supporting.   I first noticed the wait when I was looking at the performance page for the node in Oracle Enterprise Manager 12c.  What I noticed was a lot of brown spikes (Image 1).  These spikes correspond to a Configuration wait.

Image 1:


When I clicked on Configuration in the legend on the side of the graph, I was taken to the Active Session Waiting: Configuration page (Image 2).  On this page, I can clearly see that the wait event I’m waiting on is the “enq: HW – contention” wait.

Image 2:


Now that I know what wait event I’m waiting on, I needed to figure out what object was causing the wait.  Everyone has a different approach to how they performance tune; I find it easier to identify objects causing a wait by using the Active Session History (ASH) report.  ASH reports can be run directly from the OEM Active Session Waiting page (the page with the graphs) or from the command line.   Since I like the pretty graphs that OEM provides, I prefer to run ASH from the command line and use the graphs to help identify what direction my tuning is going.

Active Session History (ASH) Report

The Active Session History (ASH) report is handy in helping to identify what object(s) a wait is waiting on.  In order to run the ASH Report from the command line, there are a few requirements.  They are:

  • Set ORACLE_HOME
  • Move to the directory where you would like to save the report

Once these requirements have been met, the ASH Report can be run from the command line using the ashrpt.sql script.

                SQL> @$ORACLE_HOME/rdbms/admin/ashrpt.sql

When the script starts to run, it will ask for input on what format you would like the report in. For searchability purposes (from the command line), I go with the “Text” option.   Next, it asks for a timeframe (Image 3); provide a timeframe as per the layout listed.

Image 3:

Lastly, it will ask you what you would like to name the report.  I normally prefix it with my initials and an underscore (bc_) (Image 4). 

Image 4:

Once I press enter, the report runs and the output will be stored in the directory you started SQL*Plus from.

Reading the ASH Report

In reading the ASH Report, I want to identify the primary object that is holding the “enq: HW – contention” wait.   In order to do this, I need to look at the “Top Event P1/P2/P3 Values” section of the report (Image 5).  Reviewing this section, I see that the top event is the “enq: HW – contention” wait, the same as I saw from the OEM performance pages.  Next, I need to find the values of P2 and P3.

Image 5:

The values for P2 and P3 tell me what data file (P2) to look at for the block (P3) holding the wait.  The P2 and P3 values for this wait are currently “85” and “16018”.

Finding the Object

Armed with the values for P2 and P3 from the ASH Report, I can now go and locate the object via SQL.  The SQL used to identify the object is as follows:

select /*+ parallel(a, 4) */ a.owner, a.segment_type, a.segment_name
from DBA_EXTENTS a
where a.FILE_ID = &P2
and &P3 between a.block_id and a.block_id + a.blocks - 1;

Note: I’m using a parallel hint; this is to speed up the identification of the block.  It can help in VLDBs.

Once the script is done running, it returns the owner, segment_type, and segment_name of the object holding the enq: HW – contention wait (Image 6).  In this case, the wait is on a system-generated LOB index.

Image 6:

Resolving the Wait

At this point, I have identified the following:

  • The wait: enq: HW – contention
  • The object holding the wait

Next, I want to resolve the wait as fast as I can.  Knowing that the wait is a high watermark (enq: HW – contention) wait event, a basic understanding of high watermarks and how extents are allocated is needed.  The basic reason the ‘enq: HW – contention’ wait arises is that additional extents are not being allocated in a timely manner for the data being inserted into the object.

In order to resolve this wait, additional extents need to be allocated.  The environment I’m working in uses bigfile data files with automatic segment space management (ASSM).  There are many different ways to allocate additional extents, and the fact that this database is using ASSM makes the extent allocation simpler.  To allocate another extent for the object identified, use the following:

             SQL> alter index <owner>.SYS_IL0000253792C00002$$ allocate extent;

The above alter statement is syntactically correct; however, the problem is that I was trying to allocate an extent on a system-generated index.  The error message below states that this is not allowed:

Error starting at line : 32 in command -
alter index <OWNER>.SYS_IL0000253792C00002$$ allocate extent
Error report -
SQL Error: ORA-22864: cannot ALTER or DROP LOB indexes
22864. 00000 -  "cannot ALTER or DROP LOB indexes"
*Cause:    An attempt was made to ALTER or DROP a LOB index.
*Action:   Do not operate directly on the system-defined LOB index.
           Perform operations on the corresponding LOB column.

In order to add extents, I needed to identify the object that is using the system-generated LOB index.  This information can be found in DBA_LOBS (SQL below).

select owner, table_name, column_name, segment_name, index_name
from DBA_LOBS
where index_name = 'SYS_IL0000253792C00002$$';

The SQL returns the owner, the table_name, and the LOB column within the table that is using the system-generated LOB index (Image 7).   Using this information, I can now allocate additional extents to the LOB.

Image 7:

In order to allocate an extent on the object identified, the following was used:

alter table <owner>.<table> modify lob(<column>) (allocate extent (size 5M));

Once this completed, I was expecting the wait (enq: HW – contention) to go away.  After some time, I saw no improvement.  This led me to open a service request (SR) with Oracle Support to try to resolve it.

Research

Opening the SR led me to dig a bit and do some research on extent allocation and LOBs while waiting on Oracle’s answer.  Prior to Oracle Database 11g (11.2.0.1), there was a known bug (6376915) related to high watermark enqueue contention for ASSM LOB segments (Note ID: 6376915.8).  This bug was actually fixed in release 11.2.0.1, but the fix needs to be “turned on” in later releases.  To turn the fix on in 11.2.0.2+, an event needs to be set in the spfile.

SQL> ALTER SYSTEM SET EVENT='44951 TRACE NAME CONTEXT FOREVER, LEVEL <1 .. 1024>' scope=spfile;

Setting this event to a level between 1 and 1024 identifies the number of chunks to be cleaned up each time a reclamation operation is performed.  In turn, this reduces the number of requests against the high watermark enqueue.

Once the event has been set in the spfile, the database instance needs to be rebooted to make the change effective. 

Conclusion

The steps in resolving this enqueue event are not difficult; however, when extents need to be allocated manually it becomes very interesting.  The downside to setting the event, per Oracle Support, is the need to reboot the database instance to make the event active.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Database, Exadata, Performance
Categories: DBA Blogs

Index Rebuild, the Need vs the Implications Support Note 989093.1 (Getting Better)

Richard Foote - Tue, 2014-03-04 23:49
Once upon a time, Oracle Support had a note called Script: Lists All Indexes that Benefit from a Rebuild (Doc ID 122008.1), which, let’s just say, I didn’t view in a particularly positive light :) Mainly because it gave dubious advice, which included that indexes should be rebuilt if: Deleted entries represent 20% or more of […]
Categories: DBA Blogs

Pythian Raises $6 Million to Help Retailers Eliminate Data Security Breaches

Pythian Group - Tue, 2014-03-04 15:46

Pythian announced today that it raised $6 million to expand the market reach of Adminiscope, its data security solution, to help retailers eliminate data security breaches. As global corporations continue to collect, store, and monetize massive amounts of sensitive data, they’re increasingly turning to Pythian to secure their infrastructures, especially from internal threats. Investors include the Business Development Bank of Canada (BDC) and Royal Bank of Canada (RBC).

“It’s clear there’s an appetite out in the marketplace for greater data security measures among all organizations, especially retailers who have to protect sensitive customer financial information,” said Paul Vallée, Founder and Executive Chairman at Pythian. “The recent challenges continue to underscore the need for solutions like Adminiscope, and strengthen our resolve to ensure our clients operate with the highest level of security and accountability. Our technology offers organizations and their customers unwavering confidence in their data’s security. With this investment, we will bring our solution to more retailers large and small, and help them increase consumer confidence and protect their brand.”

Adminiscope, a software-as-a-service (SaaS) privileged session management solution, secures access to enterprise systems, cloud infrastructures, databases, servers, and applications, and enables stakeholders to monitor and record in real time all privileged activity performed across an IT infrastructure. Database Trends and Applications magazine recently named Adminiscope to its list of Trend-Setting Products for 2014 in the data management and analysis space.

See the full press release here.

Categories: DBA Blogs

Finished reading multitenant parts of db admin manual

Bobby Durrett's DBA Blog - Tue, 2014-03-04 14:27

I’m trying to get up to speed on 12c, especially the multitentant features.  So, I decided to read the sections of the manuals that related to using multitentant.  Here is the name of the manual I was looking at:

Oracle® Database Administrator’s Guide 12c Release 1 (12.1)

It isn’t great reading.  It is kind of slow-paced and really more of a reference, but since multitenant is a large new feature I wanted to go over the practical details of using it.  Some of the sections had a command-line sqlplus version and a web-based Cloud Control version, so I skipped the cloud version.  I’m more of a command-line and v$ view kind of DBA.

Here are the chapter numbers and titles of the sections I read:

36 Overview of Managing a Multitenant Environment
37 Creating and Configuring a CDB
38 Creating and Removing PDBs with SQL*Plus
40 Administering a CDB with SQL*Plus
42 Administering PDBs with SQL*Plus
43 Viewing Information About CDBs and PDBs with SQL*Plus
44 Using Oracle Resource Manager for PDBs with SQL*Plus
46 Using Oracle Scheduler with a CDB

I haven’t really used what I’ve learned yet but I feel like I have a good overview and know where to go to get more information.

I’m also working on reading through the 12c Concepts manual on my Kindle, but I haven’t gotten to the multitenant part.  Once I’m done with that, I hope to get serious about studying for the 12c OCP upgrade exam.  It looks like you can sign up for the tests now, but I don’t have any practice exam software or a book, and I usually get one of these before I take the test.  I see one book that is supposed to be available in June, so I may have to wait until then.  I have plenty to do between now and then anyway, so I may not be ready to take the test before the book is available; that should work out.

In any case, certification or no, it was helpful to read the db admin guide multitenant sections to get a feel for how it works.  Probably chapter 44 was the most interesting because it talked about using resource manager to divide resource usage between the pluggable databases that are sharing the same instance.

- Bobby

Categories: DBA Blogs

managed to make it to Dallas sunday morning ... Hotsos 2014 is rolling

Grumpy old DBA - Mon, 2014-03-03 19:01
It was a little dicey Sunday morning with the weather in Ohio, but I caught a very early flight and made it down to Dallas well before noon.  Lots of turbulence on the airplane ride and we bounced around some, but all in all not bad.  Lots of people traveling ran into hitches one way or another, but by the end of the day Monday most everyone seems to be here now.

Even better, I had a pretty good turnout for my session, maybe even close to 1/2 the attendees, dunno exactly, but the room was pretty full.  The presentation went pretty well; I only blanked out once and then jumped into the next bullet point.  Actually, all in all, for as many times as I had practiced this thing and thought I was going to fail at various points, I think it went over pretty well.

Managed one really cheesy comparison of the shared pool to a picture that I will probably get in trouble for eventually.

Besides some of the people I already knew, I have met several other nice speakers, and I am now much more relaxed with my presentation over with.  I even managed to update my presentation slightly with some material from this morning’s sessions by Kerry Osborne and Tim Gorman.
Categories: DBA Blogs

Good blog post on learning Oracle performance

Bobby Durrett's DBA Blog - Mon, 2014-03-03 17:44

Saw this blog post about how Cary Millsap learned about Oracle performance tuning and I thought it was interesting:

http://carymillsap.blogspot.com/2014/02/how-did-you-learn-so-much-stuff-about.html

It is interesting because I’ve learned so much from Cary Millsap and Jeff Holt’s book and it is cool to see how Cary got to the position he is in as a well-known Oracle performance expert.  I guess I’ve always found the performance tuning part of my Oracle DBA job the most interesting so it is intriguing to see someone else who feels the same way about it and how they pursued their interest and made a living doing it.

Cary’s blog post contains good advice to someone who wants to be an Oracle DBA/performance analyst.  Read the Concepts manual and other relevant manuals.  Read books by the performance experts.  Find opportunities to work on real performance problems.  Pretty much these are the steps I’ve taken, although I’ve always worked as an employee for a company instead of being a consultant.  Still, working for a large company has been a good situation for me because we have a variety of systems and performance is consistently a problem that needs work.

- Bobby

Categories: DBA Blogs

Effective Customer Communications

Pythian Group - Mon, 2014-03-03 15:44

How well do you communicate with your customers? Are you listening, does your customer hear what you are saying? Every day we interact with our customers, team mates, colleagues and managers. Is your message getting delivered?

1. Actively listen.

It is easy to be on a call, but no one can see you “nod”. Make sure to acknowledge the person by replying with “I see” or paraphrase what they said. Side benefit: you look impressive!

2. Ask targeted questions.

Once you’ve practiced #1, you can now ask intelligent, pointed questions. Consider what your customer said, and more importantly what they didn’t say.

3. Show respect.

Respect your audience. Stop talking, try not to interrupt and focus. Avoid distractions such as typing on your computer or checking email. Be in the moment. When you demonstrate respect for your audience you show them you respect yourself as well as them!

4. Tell the truth.

“The whole truth, and nothing but the truth…” Are you on a call with a very upset customer because the DB crashed, and we were at fault? Escalate the call to your manager, and then listen. If we are at fault, we’ll always admit it. It’s counterproductive to try to shift blame. Even in times of stress, try not to lose sight of the fact that we’re in the solutions business. The customer can handle the truth, and they will respect you more for delivering it.

5. Understand what your customer values.

What systems or DBs are most important to your customer? Are they cost driven? React appropriately.

6. Be candid.

Be straight forward — don’t cover things up. State facts and avoid excuses. This, in turn, builds trust and a sense of partnership.

7. Be consistent.

“Say what you mean, and mean what you say.” Nothing could be truer when dealing with customers. Enough said.

8. Dedicate yourself.

Relationships aren’t built overnight. Dedicate yourself to the process, and you will reap the benefits in the long term. Patience is key.

9. Ask for feedback.

How do we get better if we don’t know where we need to improve? Asking for feedback will open the doors of honesty, so be open to the response that you’ve requested.

10. Be Persistent.

Still not sure on the requirements? Seek clarification by rephrasing your questions and using paraphrasing. Before the end of the conversation, be sure that you are clear on next steps. Ask until you know.

11. Build rapport.

Find common ground and let the customer know you can relate. Compliment them and focus on areas of agreement. They have lives outside of the office, so try to get to know them.

12. Smile.

When you smile while you’re talking, your customer can hear it in your voice.

13. You’re an expert — act like one.

Don’t undermine your expertise by asking questions that can be answered internally. Write a note and ask your team.

14. Be flexible.

Be creative in finding solutions. The only limits are the ones we place on ourselves by thinking small. Customers’ needs change; we need to change and adapt with them. Don’t get stuck in the past.

15. Maintain constant communication.

Don’t restrict yourself to only talking to your customer when there is bad news. Share the good news, too!

16. Be Careful What You Say.

Customers often take things literally. Avoid words such as outage or crash, etc. Continue to be honest, but these words and ones like them trigger panic. Just use different ones.

Effective communication takes work and patience. Commit yourself to improving every day, not just with your customers but with your peers, managers & even your family! 

What do you think?  Have something to add, let me know what works for you!

Please note: Not all of these ideas were my own, but some are. Still, I must give credit to that class I took called “Encouraging Upward Communication”. Author/teacher unknown.

Categories: DBA Blogs

Automating DataPump Export

Pythian Group - Mon, 2014-03-03 09:20

What’s the most elaborate thing you have done with DataPump?

So there I was, given the requirement to export multiple partitions for multiple tables where each partition has its own dump file having the format “tablename_partitionanme.dmp”, pondering how this can be done efficiently.

With the following metadata and requirements, what approach would you take?

If you are curious about the approach I used, then read on.

TABLE_OWNER                    TABLE_NAME                     PARTITION_NAME
------------------------------ ------------------------------ ------------------------------
MDINH                          A_TAB                          P001
MDINH                          A_TAB                          P002
MDINH                          A_TAB                          P003
MDINH                          A_TAB                          P004
MDINH                          A_TAB                          P005
MDINH                          B_TAB                          P001
MDINH                          B_TAB                          P002
MDINH                          B_TAB                          P003
MDINH                          B_TAB                          P004
MDINH                          B_TAB                          P005
Here’s the demo:
$ nohup sqlplus "/ as sysdba" @exp_api.sql > exp_api.log 2>&1 &

$ cat exp_api.log
nohup: ignoring input

SQL*Plus: Release 11.2.0.4.0 Production on Wed Feb 26 20:28:07 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

ARROW:(SYS@db01):PRIMARY> -- DataPump Export (EXPDP) Fails With Errors ORA-39001 ORA-39000 ORA-31641 ORA-27054 ORA-27037 When The Dump File Is On NFS Mount Point (Doc ID 1518979.1)
ARROW:(SYS@db01):PRIMARY> -- Work around for the above mentioned error
ARROW:(SYS@db01):PRIMARY> alter system set events '10298 trace name context forever, level 32';

System altered.

Elapsed: 00:00:00.00
ARROW:(SYS@db01):PRIMARY> declare
  2      h1 number;
  3      dir_name varchar2(30);
  4  begin
  5      dir_name := 'DPDIR';
  6      for x in (
  7          select table_owner, table_name, partition_name
  8          from   dba_tab_partitions
  9          where  table_owner = 'MDINH' and table_name in ('A_TAB','B_TAB') and regexp_like(partition_name,'[0-4]$')
 10          order  by table_owner, table_name, partition_position
 11      ) loop
 12
 13          h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'TABLE');
 14
 15          dbms_datapump.add_file (
 16              handle    => h1,
 17              filename  => x.table_name||'_'||x.partition_name||'.dmp',
 18              reusefile => 1,
 19              directory => dir_name,
 20              filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
 21
 22          dbms_datapump.add_file (
 23              handle    => h1,
 24              filename  => 'exp_'||x.table_name||'_'||x.partition_name||'.log',
 25              directory => dir_name,
 26              filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
 27
 28          dbms_datapump.set_parameter (
 29              handle => h1,
 30              name   => 'INCLUDE_METADATA',
 31              value  => 0);
 32
 33          dbms_datapump.metadata_filter (
 34              handle => h1,
 35              name   => 'SCHEMA_EXPR',
 36              value  => 'IN ('''||x.table_owner||''')');
 37
 38          dbms_datapump.metadata_filter (
 39              handle => h1,
 40              name   => 'NAME_EXPR',
 41              value  => 'IN ('''||x.table_name||''')');
 42
 43          dbms_datapump.data_filter (
 44              handle      => h1,
 45              name        => 'PARTITION_LIST',
 46              value       => x.partition_name,
 47              table_name  => x.table_name,
 48              schema_name => x.table_owner);
 49
 50          dbms_datapump.start_job (handle => h1);
 51          dbms_datapump.detach (handle => h1);
 52      end loop;
 53  end;
 54  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:10.92
ARROW:(SYS@db01):PRIMARY> alter system set events '10298 trace name context off';

System altered.

Elapsed: 00:00:00.00
ARROW:(SYS@db01):PRIMARY> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Review export log:
$ ls -l exp*.log
-rw-r--r--. 1 oracle oinstall 2888 Feb 26 20:28 exp_api.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_A_TAB_P001.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_A_TAB_P002.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_A_TAB_P003.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_A_TAB_P004.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_B_TAB_P001.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_B_TAB_P002.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_B_TAB_P003.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_B_TAB_P004.log
Review export dump:
$ ls -l *.dmp
-rw-r-----. 1 oracle oinstall   90112 Feb 26 20:28 A_TAB_P001.dmp
-rw-r-----. 1 oracle oinstall   98304 Feb 26 20:28 A_TAB_P002.dmp
-rw-r-----. 1 oracle oinstall  188416 Feb 26 20:28 A_TAB_P003.dmp
-rw-r-----. 1 oracle oinstall 1069056 Feb 26 20:28 A_TAB_P004.dmp
-rw-r-----. 1 oracle oinstall   90112 Feb 26 20:28 B_TAB_P001.dmp
-rw-r-----. 1 oracle oinstall   98304 Feb 26 20:28 B_TAB_P002.dmp
-rw-r-----. 1 oracle oinstall  188416 Feb 26 20:28 B_TAB_P003.dmp
-rw-r-----. 1 oracle oinstall 1069056 Feb 26 20:28 B_TAB_P004.dmp
Review job status:
$ grep "successfully completed" exp*.log
exp_api.log:PL/SQL procedure successfully completed.
exp_A_TAB_P001.log:Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Wed Feb 26 20:28:09 2014 elapsed 0 00:00:01
exp_A_TAB_P002.log:Job "SYS"."SYS_EXPORT_TABLE_03" successfully completed at Wed Feb 26 20:28:10 2014 elapsed 0 00:00:02
exp_A_TAB_P003.log:Job "SYS"."SYS_EXPORT_TABLE_04" successfully completed at Wed Feb 26 20:28:11 2014 elapsed 0 00:00:02
exp_A_TAB_P004.log:Job "SYS"."SYS_EXPORT_TABLE_05" successfully completed at Wed Feb 26 20:28:13 2014 elapsed 0 00:00:02
exp_B_TAB_P001.log:Job "SYS"."SYS_EXPORT_TABLE_06" successfully completed at Wed Feb 26 20:28:14 2014 elapsed 0 00:00:02
exp_B_TAB_P002.log:Job "SYS"."SYS_EXPORT_TABLE_07" successfully completed at Wed Feb 26 20:28:16 2014 elapsed 0 00:00:02
exp_B_TAB_P003.log:Job "SYS"."SYS_EXPORT_TABLE_08" successfully completed at Wed Feb 26 20:28:17 2014 elapsed 0 00:00:03
exp_B_TAB_P004.log:Job "SYS"."SYS_EXPORT_TABLE_09" successfully completed at Wed Feb 26 20:28:19 2014 elapsed 0 00:00:02
Review exported partition:
$ grep "exported" exp*.log
exp_A_TAB_P001.log:. . exported "MDINH"."A_TAB":"P001"                      6.351 KB       9 rows
exp_A_TAB_P002.log:. . exported "MDINH"."A_TAB":"P002"                      14.89 KB      90 rows
exp_A_TAB_P003.log:. . exported "MDINH"."A_TAB":"P003"                      101.1 KB     900 rows
exp_A_TAB_P004.log:. . exported "MDINH"."A_TAB":"P004"                      963.3 KB    9000 rows
exp_B_TAB_P001.log:. . exported "MDINH"."B_TAB":"P001"                      6.351 KB       9 rows
exp_B_TAB_P002.log:. . exported "MDINH"."B_TAB":"P002"                      14.89 KB      90 rows
exp_B_TAB_P003.log:. . exported "MDINH"."B_TAB":"P003"                      101.1 KB     900 rows
exp_B_TAB_P004.log:. . exported "MDINH"."B_TAB":"P004"                      963.3 KB    9000 rows
Example of completed log:
$ cat exp_B_TAB_P001.log
Starting "SYS"."SYS_EXPORT_TABLE_06":
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 8 MB
. . exported "MDINH"."B_TAB":"P001"                      6.351 KB       9 rows
Master table "SYS"."SYS_EXPORT_TABLE_06" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_06 is:
  /tmp/B_TAB_P001.dmp
Job "SYS"."SYS_EXPORT_TABLE_06" successfully completed at Wed Feb 26 20:28:14 2014 elapsed 0 00:00:02
SQL Scripts:

exp_api.sql:

set timing on echo on
-- DataPump Export (EXPDP) Fails With Errors ORA-39001 ORA-39000 ORA-31641 ORA-27054 ORA-27037 When The Dump File Is On NFS Mount Point (Doc ID 1518979.1)
-- Work around for the above mentioned error
alter system set events '10298 trace name context forever, level 32';
declare
    h1 number;
    dir_name varchar2(30);
begin
    dir_name := 'DPDIR';
    for x in (
        select table_owner, table_name, partition_name
        from   dba_tab_partitions
        where  table_owner = 'MDINH' and table_name in ('A_TAB','B_TAB') and regexp_like(partition_name,'[0-4]$')
        order  by table_owner, table_name, partition_position
    ) loop

        h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'TABLE');

        dbms_datapump.add_file (
            handle    => h1,
            filename  => x.table_name||'_'||x.partition_name||'.dmp',
            reusefile => 1, -- REUSE_DUMPFILES=Y
            directory => dir_name,
            filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);

        dbms_datapump.add_file (
            handle    => h1,
            filename  => 'exp_'||x.table_name||'_'||x.partition_name||'.log',
            directory => dir_name,
            filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

        -- CONTENT = DATA_ONLY    
        dbms_datapump.set_parameter (
            handle => h1,
            name   => 'INCLUDE_METADATA',
            value  => 0);

        dbms_datapump.metadata_filter (
            handle => h1,
            name   => 'SCHEMA_EXPR',
            value  => 'IN ('''||x.table_owner||''')');

        dbms_datapump.metadata_filter (
            handle => h1,
            name   => 'NAME_EXPR',
            value  => 'IN ('''||x.table_name||''')');

        dbms_datapump.data_filter (
            handle      => h1,
            name        => 'PARTITION_LIST',
            value       => x.partition_name,
            table_name  => x.table_name,
            schema_name => x.table_owner);

        dbms_datapump.start_job (handle => h1);
        dbms_datapump.detach (handle => h1);
    end loop;
end;
/
alter system set events '10298 trace name context off';
exit
Reference: DBMS_DATAPUMP
Categories: DBA Blogs