
DBA Blogs

\d in Vertica

Pakistan's First Oracle Blog - Fri, 2015-09-04 23:13
The '\d' meta-command is a quick, neat way to list down important and oft-needed information like names of databases, schemas, users, tables, projections etc. We can also use patterns with '\d' to narrow down the results. Let's see it in action:

Connect with Vertica vsql:

vsql -U dbadmin -w vtest -h <host> -p 5433 -d vtest

 Welcome to vsql, the Vertica Analytic Database interactive terminal.
Type:  \h or \? for help with vsql commands
\g or terminate with semicolon to execute query
\q to quit
vtest=> \dn

List of schemas
Name     |  Owner  | Comment
v_internal   | dbadmin |
v_catalog    | dbadmin |
v_monitor    | dbadmin |
public       | dbadmin |
TxtIndex     | dbadmin |
store        | dbadmin |
online_sales | dbadmin |
mytest       | mytest  |
(8 rows)
vtest=> \dn mytest

List of schemas
Name  | Owner  | Comment
mytest | mytest |
(1 row)
vtest=> \dn my*

List of schemas
Name  | Owner  | Comment
mytest | mytest |
(1 row)
vtest=> \dn v

List of schemas
Name | Owner | Comment
(0 rows)
vtest=> \dn *v*

List of schemas
Name    |  Owner  | Comment
v_internal | dbadmin |
v_catalog  | dbadmin |
v_monitor  | dbadmin |
(3 rows)
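If you prefer plain SQL over the meta-commands, roughly the same listing can be pulled from the system catalog. The following is a minimal sketch that assumes the standard v_catalog.SCHEMATA view:

-- roughly what \dn reports, taken straight from the catalog
SELECT schema_name, schema_owner
FROM v_catalog.schemata
ORDER BY schema_name;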

Likewise, you can list down other information, like projections:
vtest=> \dj

List of projections
Schema    |            Name             |  Owner  |       Node       | Comment
mytest       | ptest                       | mytest  | v_vtest_node0002 |
mytest       | testtab_super               | mytest  |                  |

To list down views:
vtest=> \dv
No relations found.

If you connect with the mytest user and run:
vtest=> \dt

List of tables
Schema |  Name   | Kind  | Owner  | Comment
mytest | testtab | table | mytest |
(1 row)

Following are more '\d' options from help:

\d [PATTERN]   describe tables (list tables if no argument is supplied)
PATTERN may include system schema name, e.g. v_catalog.*
\df [PATTERN]  list functions
\dj [PATTERN]  list projections
\dn [PATTERN]  list schemas
\dp [PATTERN]  list table access privileges
\ds [PATTERN]  list sequences
\dS [PATTERN]  list system tables. PATTERN may include system schema name
such as v_catalog, v_monitor, or v_internal.
Example: v_catalog.a*
\dt [PATTERN]  list tables
\dtv [PATTERN] list tables and views
\dT [PATTERN]  list data types
\du [PATTERN]  list users
\dv [PATTERN]  list views
Categories: DBA Blogs

Partner Webcast – Oracle Cloud Platform: Database Cloud Update

Oracle is expanding the portfolio of PaaS solutions available, helping enterprise IT and partners rapidly build and deploy rich applications - or extend Oracle Cloud SaaS apps - using an...

We share our skills to maximize your revenue!
Categories: DBA Blogs

The Importance of Data Virtualization

Kubilay Çilkara - Fri, 2015-09-04 11:37
It’s been a long time since the way data was stored and accessed has been addressed. We went from scrolls to books to mainframes and this last method hasn’t budged all that much over the last decade or so. This is despite the fact that we keep creating more and more information, which means that better ways for storing and finding it would make a world of difference. Fortunately, this is where data virtualization comes into play. If your company isn’t currently using this type of software, you’re missing out on a better way of leveraging your organization’s data.

Problems with Traditional Methods

No matter what line of work you’re in, your company is creating all kinds of information each and every business day. We’re not just talking about copy for your website or advertising materials either. Information is created with each and every transaction and just about any time you interact with a customer.

You’re also not just creating information in these moments. You need to access stored data too. If you don’t do this well—and many, many companies don’t—you’re going to end up with a lot of unnecessary problems. Of course, you’re also wasting a lot of money on all that info you’re creating but not using.

The main problem with traditional methods of data storage and retrieval is that they rely on movement and replication processes and intermediary servers and connectors for integrating via a point-to-point system. This worked for a long time. For a few companies, it may seem like it’s still working to some degree.

However, there’s a high cost to this kind of data movement process. If you’re still trying to shoulder it, chances are your overhead looks a lot worse than competitors that have moved on. That’s not the only problem worth addressing, though. There’s also the huge growth of data that’s continuing to head north. We’ve touched on this, but the problem is so prevalent that there’s a term for it throughout every industry: Big Data. Obviously, it refers to the fact that there is just so much information out there, but the fact this term exists also shows how much it affects all kinds of companies.

Big Data is also a statement about its creation. Its sheer proliferation is massive. Every year, the amount increases at an exponential rate. There’s just no way the old methods for storing and finding it will work.

Finally, customers expect that you’ll be able to find whatever information you need to meet their demands. Whether it’s actual information they want from you or it’s just information you need to carry out their transaction, most won’t understand why this isn’t possible. After all, they use Google and other search engines every day. If those platforms can find just about anything for free, why can’t your company do the same?

The Solution Is Data Virtualization

If all of the above sounded a bit grim, don’t worry. There’s a very simple solution waiting for you in data virtualization. This type of software overcomes the challenges that come with your data being stored all over your enterprise. You never again have to run a million different searches to collect it all or come up with some halfway decent workaround. Finally, there’s a type of platform made specifically for your company’s data gathering needs. This software isn’t just effective. It’s convenient too. You work through one, single interface and can have access to the entirety of your company’s data store. What it does is effectively separate the external interface from the implementation going on inside.

How It Works

Every data virtualization platform is going to have its own way of working, so it’s definitely worth researching this before putting your money behind any piece of software. However, the basic idea remains the same across most titles. With virtualization software, the data doesn’t have to be physically moved because this technology uses metadata to create a virtual perspective of the sources you need to get to. Doing so provides a faster and much more agile way of getting to and combining data from all the different sources available:

  • Distributed
  • Mainframe
  • Cloud
  • Big Data

As you can probably tell, this makes it much easier to get your hands on important information than the way you’re currently doing it.

Finding the Right Title for Your Company

Although they all serve the same purpose and the vast majority will follow the outline we just went through, there are still enough virtualization software titles out there that it’s worth thinking about what your best option will look like. As with any type of software, you don’t want to invest your money anywhere it’s not going to do the most good. The good news is that this software has been around long enough that there have been best practices established for how it should work. First, you want to look for a title that will actually transform the way your mainframe works by leveraging what it can do for every search. This is just a good use of your resources and gives you more bang for your buck as far as your mainframe is concerned. Software that uses it for virtualization purposes is going to work even better than a distribution-based application and won’t cost any more.

However, it’s also going to work a lot better for that price too. A lot of companies also love that this way of carrying out a search is more secure as well. The last thing you want is to pay for a piece of software that’s actually going to leave you worse off.

Secondly, although this may sound complex, you can and should find a solution that keeps things simple. Data integration can be straightforward with the method we just described without any need for redundant processes that would slow down your ability to scale up.

With data virtualization, there is no downside. Furthermore, it’s becoming more and more of a necessity. Companies are going to have to invest in this software as Big Data continues to get bigger.

Mike Miranda writes about enterprise software and covers products offered by software companies like Rocket Software, on topics such as Terminal Emulation, Legacy Modernization, Enterprise Search, Big Data and Enterprise Mobility.

Categories: DBA Blogs

Don’t Settle When It Comes to Enterprise Search Platforms

Kubilay Çilkara - Fri, 2015-09-04 11:32
No company should try to operate for very long without an enterprise search platform. With so many options out there, though, you could be confused about which one is worth a spot in your organization’s budget. Let’s look at two very common workarounds some have tried, and then we will talk about why you must go with a reputable developer when you make your final decision.
Leveraging Web Search Engines
One way a lot of companies have handled their need for an enterprise search platform is to simply use one like Google or some other web engine. On paper, this may seem to make a lot of sense. After all, who doesn’t trust Google? Most of us use it every single day and would be lost without the search engine giant. If you can leverage its power for your company’s needs, that would seem like a no-brainer, right?
Unfortunately, if you try this, you’ll learn the hard way that Google leaves a lot to be desired when it comes to this type of an environment. That’s not to say it won’t work; it just won’t work well and definitely not nearly as well as you want for a search engine working your company’s internal systems. Google and other web search engines are fantastic at what they’re designed to do. They are not designed to work in an enterprise search environment though. For that, you need a true enterprise search platform.

The major problem is that web search engines are all about sheer volume, which makes a lot of sense once you think about it. When you want to find a pizza parlor in your city, you want to know about every single option and then some. Google’s ability to harvest this much information and present it quickly is one of the reasons it’s so popular. Any search engine that can’t deliver this kind of volume is going to alienate users and soon be left on the scrap heap.

What web search engines like Google don’t do well, though, is carry out deep, detail-oriented searches. Again, this makes sense. Google is driven largely by keywords and backlinks. These are “surface searches” that weren’t created to do some of the tasks you need an enterprise search platform for.

Your employees may do some very simple searches, but they probably also have some pretty demanding queries too. Enterprise search platforms are designed to handle these types of searches and drill down to the last detail in any file or piece of data they come across. If your employees aren’t able to do these types of searches on a regular basis, the search software you invested in will be a waste of money. Worse, it could land you with all kinds of other problems because your people will think they’re doing the best possible search they can and concluding that what they want can’t be found.

Also, don’t forget that your company stores information in different places. Yes, it may all be on the same server, but in the digital world, there are various “silos” that hold onto information. Each silo is its own environment with its own rules. When you try using a web search engine to look through all your company’s silos, what will most likely happen is that it will have to go through one at a time. This is far from ideal, to say the least.

If you have a good number of silos, your employees will most likely give up. They won’t want to walk the search engine from one silo to the next like they’re holding onto the leash of a bloodhound. The whole point of a search engine is that it’s supposed to cut down on the exhaustive amount of “manual” work you’d otherwise have to do to find the data you need.

Silos aren’t all the same, so you want a search program that can go in and out of the type you have without requiring the employee to reconfigure their query.

Open Source Software

Another very popular method of acquiring enterprise search software is to go with an open source title. Once again, on paper, this seems like a very logical route to take. For one thing, the software is free. You can’t beat that price, especially when it comes to an enterprise-level platform. This usually seems like an unbeatable value for small- and medium-sized businesses that don’t have a lot of leeway where their budget is concerned.

That’s just one benefit that proponents of open source search engines tout though. There’s also the fact that you can modify the software as you see fit. This gives the user the ability to basically mold the code into a result that will fit their company’s needs like a glove.

There are a couple problems though. The first is that you get what you pay for. If your open source software doesn’t deliver, you have no one to complain to. You can always do your research to find a title that others have given positive reviews to. At the end of the day, though, don’t expect much in the way of support. There are plenty of forums to go through to find free advice, but that’s not the type of thing most professionals want to wade through.

When you go with a true, professional version, your employees will never be far from help if something goes wrong. Most companies these days have email and phone lines you can use, but many also have chat boxes you can open up on their website. None of these will be available for your company if you go with open source software.

Also, you can definitely modify the search engine software, but this isn’t necessarily unique to open-source platforms. Professional search platforms can be modified a number of ways, allowing the user to streamline their software and fine-tune the result so they get relevant results again and again. This type of architecture, known as a pipeline, is becoming more and more the standard in this industry. Enterprise platforms come with all kinds of different search features, but that can also be a problem if they start getting in the way of one another. To ensure there are never too many cooks in the kitchen, pipeline architecture is used to line them all up, one in front of the other. By doing so, you’ll have a much easier time getting the search results you want, especially because you can just reconfigure these features as you see fit whenever you like.

Ongoing Updates Are Yours

One very important aspect of professional enterprise search platforms that is worth pointing out is that most developers are constantly putting out updates for their product. This is the same thing web search engines do, of course. Google, Yahoo and Bing all release upgrades constantly. The difference, however, is that enterprise platforms get upgrades that are specific to their purposes. While there are updates for open source software, expect sporadic results. The developer of your favourite title could give up and go on to another project, leaving you to look for someone else to continue creating great updates.

If you have a skilled developer who is familiar with open source search engines on your team, this may be an attractive option. Still, most will find this route is just too risky. Most of us also don’t have that kind of developer on staff and it wouldn’t be worth it to hire someone on specifically for this reason (it’d be much more affordable to just buy professional software). Also, remember that, even if you do have this kind of talent within your ranks, you’ll soon become completely beholden to them if you start trusting them with this kind of job. Having someone who is completely responsible for your search engine being able to work and not having someone else on staff who can offer support or replace them is not a good idea.

Scalability Is a Given
Every company understands how important scalability is. This is especially true when it comes to software though. The scalability of a program can really make or break its value. Either it will turn into a costly mistake that greatly holds your business back or it will become the type of agile asset you actually take for granted, it’s so helpful.

Open source platforms are only as scalable as their code allows, so if the person who first made it didn’t have your company’s needs in mind, you’ll be in trouble. Even if they did, you could run into a problem where you find out that scaling up actually reveals some issues you hadn’t encountered before. This is the exact kind of event you want to avoid at all costs.

Now that you realize the importance of going with a reputable developer, your next step is picking which one to choose. You definitely won’t lack for options these days, so just take your time to ensure you go with the best one for your business.
Mike Miranda writes about enterprise software and covers products offered by software companies like Rocket Software, on topics such as Terminal Emulation, Legacy Modernization, Enterprise Search, Big Data and Enterprise Mobility.
Categories: DBA Blogs

Log Buffer #439: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-09-04 07:41

This Log Buffer Edition covers some nifty blog posts from Oracle, SQL Server and MySQL.


Oracle:

  • Real Application Testing report On Premise vs. Oracle Public Cloud
  • Foreign Keys and Library Cache Locks
  • IN/EXISTS bugs by Jonathan Lewis.
  • Creating a trace file from EM12c is quite easy and doesn’t require a DBA offering up the world to allow a developer or support person to perform this action.
  • Oracle Cloud : First Impressions

SQL Server:

  • Hidden Tricks To SQL Server Table Cleanup
  • An Introduction to the OpenPOWER Foundation
  • Configuring Service Broker Architecture
  • Understand the Limitations of SQL Server Dynamic Data Masking
  • Creating Dashboards for Mobile Devices with Datazen – Part 3


MySQL:

  • Second day with InnoDB transparent page compression
  • Amazon RDS Migration Tool
  • A new client utility called mysqlpump that performs logical backups, producing a set of SQL statements that can be run to reproduce the original schema objects and table data.
  • How MySQL-Sandbox is tested, and tests MySQL in the process
  • Orchestrator 1.4.340: GTID, binlog servers, Smart Mode, fail-overs and lots of goodies


Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

Categories: DBA Blogs

COLLABORATE16 IOUG – Call For Papers

Pythian Group - Thu, 2015-09-03 11:48

There’s so many ways to proceed
To get the knowledge you need
One of the best
Stands out from the rest
COLLABORATE16 – indeed!

Why not be part of the show
By sharing the stuff that you know
Got something to say
For your colleagues each day
Call for papers –> let’s go

I believe many of you would agree that regardless of how insignificant you believe your corner of the Oracle technology may be, everyone has something to say. I attended my first show in Anaheim CA USA in 1990 and started presenting at shows the year after in Washington DC USA. It’s not hard to get over the hump, moving from “I would love to present a paper at a show but I just don’t have the koyich” to “wow, that was fun.” The only way you will ever get the strength is to do it (and do it and do it …).

Some suggestions for getting started …

  1. Co-present with a colleague
  2. Collaborate through paper and slides development WITH your colleague rather than parcel off portions to one another then merge at the end.
  3. Be cautious of trying to cover too much in too little time (I once attended a session at IOUW [a pre-cursor to COLLABORATE] where the presenter had over 400 slides to cover in 45 minutes).
  4. Ask for assistance from seasoned presenters (mentor/protégé type relationship).
  5. Go slowly at first and set yourself some realistic but aggressive goals.

The experience of presenting at shows is rewarding and I for one do it as much as I can … Ensuring Your Physical Standby is Usable and Time to Upgrade to 12c (posting of 2015 presentation details pending).

The confidence gained, personal koyich, and rewards of presenting at events are lifelong and can help propel your career into the ionosphere. Speaking of confidence, 20 months ago I started playing bridge. Now look where my experience presenting at shows and writing for Oracle Press got me … check this out :).

Surprises surprises abound
With the new confidence found
Presenting is great
Get now on your plate
In no time you’ll be so renowned


Discover more about our expertise in the world of Oracle.

Categories: DBA Blogs

Oracle EBS R12.2: Restarting Online Patching Enablement patch

Pythian Group - Thu, 2015-09-03 11:33

If you are in the process of upgrading to Oracle E-Business Suite 12.2.4, you will have gone through this critical phase in the upgrade, which is to apply the Online Patching Enablement patch:


It’s very common to run into errors with this patch on the first try and to have to apply it a couple of times in order to get all issues fixed and get online patching enabled. The recommended command to apply this patch is:

adpatch options=hotpatch,forceapply

When the time comes to re-apply the patch to fix problems, if you use the same command, you will notice that the patch completes as "normal" in no time and nothing happens in the back end. This is because of a specific Adpatch feature: ADPATCH by default skips jobs that are marked as "run successfully" in previous runs or as part of another patch, so we have to force it to re-run those jobs. This can be done using the command below:

adpatch options=hotpatch,forceapply,nocheckfile

Sometimes we run into cases where the Online Patching Enablement patch completes as "normal" and the online patching feature gets enabled, yet we see that a schema or two have failed to get EBR enabled. As soon as the APPS schema gets EBR enabled by this patch, Adpatch is disabled and we are forced to use the adop utility from then on, even though other custom schemas failed to get enabled. In this scenario, we can still re-apply the Online Patching Enablement patch using Adpatch after setting the environment variable below:


I see that the online patching enablement exercise is a unique experience for every customer. Do post your experiences with this online patching enablement patch in the comments section. I'd love to hear your story!
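By the way, a quick way to see which schemas did (and did not) end up editions-enabled after the patch is to check DBA_USERS. This is only a minimal sketch: EDITIONS_ENABLED is available from 11gR2 onward, and the schema list below is just an example placeholder.

-- 'Y' means the schema is editions (EBR) enabled
select username, editions_enabled
from   dba_users
where  username in ('APPS','APPLSYS')   -- example schemas; substitute your own, including custom schemas
order  by username;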

Discover more about Pythian’s expertise in the world of Oracle.

Categories: DBA Blogs

VMware Debuts SQL Server DBaaS Platform

Pythian Group - Thu, 2015-09-03 11:14


Yesterday at VMworld, VMware announced its entry into the managed database platform market with the introduction of vCloud Air SQL. This new service is an on-demand, managed service offering of Microsoft SQL Server. It’s meant to further the adoption of hybrid operations, since it can be used to extend on-premises SQL Server use into the cloud.

Currently the two major players in this space are Amazon RDS and Azure SQL. Both of those offerings are significantly more mature and feature-rich than VMware’s service as outlined in the early access User Guide.

The beta version of vCloud Air SQL has a number of initial limitations such as:

  • SQL Server licensing is not included or available, meaning that the vCloud Air SQL platform uses a “bring your own license” (BYOL) model. This requires that you have an enterprise agreement with software assurance in order to leverage license mobility for existing instances.
  • SQL 2014 is not currently offered, only SQL 2008 & SQL 2012 are supported at this time.
  • SQL Server Instances are limited to 150GB
  • Service tiers are limited to three choices at launch and altering the service tier of an existing instance is not supported at this time.

Although there are a number of limitations, reviewing the early access beta documentation reveals some interesting details about this service offering:

  • “Instant” Snapshot capabilities appear to be superior to any competitors managed service offerings. These features will be appealing to organizations leveraging DevOps and automated provisioning methodologies.
  • Persistent storage is solid state (SSD) based and will likely be more performant than competing HDD offerings.
  • A new cloud service named vCloud Air SQL DR is planned as a companion product. This service will integrate with an organization’s existing on-premises SQL Server instances. Once integrated, it will provide a variety of cloud based disaster recovery options leveraging Asynchronous replication topologies.

If you want to try this new service, VMware is offering a $300 credit for first-time vCloud Air users.

Discover more about Pythian’s expertise in SQL Server.



Categories: DBA Blogs

Autoconfig in Oracle EBS R12.2

Pythian Group - Thu, 2015-09-03 08:28

All seasoned Oracle Apps DBAs know that Autoconfig is the master utility that can configure the whole E-Business Suite instance. In E-Business Suite releases 11i, 12.0 and 12.1, running Autoconfig recreated all the relevant configuration files used by the Apache server. If the context file has the correct settings, then the configuration files should include the correct settings after running Autoconfig. This is not the case anymore in Oracle E-Business Suite 12.2. Some of the Apache config files are now under Fusion Middleware control, namely httpd.conf, admin.conf and ssl.conf. All other Apache config files are still under Autoconfig control. But these 3 critical config files include the main config pieces like the web port, SSL port, etc.

So if you have to change the port used by the EBS instance, you have to log into the WebLogic admin console, change the port there, and then sync the context XML file using the supplied sync utility. This utility will get the current port values from the WebLogic console and update the XML with the new port values. Once the context XML file is in sync, we have to run Autoconfig to sync the other config files and the database profile values so they pick up the new web port.
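As a quick sanity check after Autoconfig completes, you can confirm that the profile options embedding the web entry point picked up the new port. This is a minimal sketch run as the APPS user; APPS_FRAMEWORK_AGENT and ICX_FORMS_LAUNCHER are the internal names of the framework agent and forms launcher profile options.

-- both URLs should show the new web port after Autoconfig
select fnd_profile.value('APPS_FRAMEWORK_AGENT') framework_agent,
       fnd_profile.value('ICX_FORMS_LAUNCHER')   forms_launcher
from   dual;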

Similarly, if you want to change the JVM arguments or the classpath, you have to run another utility to make those changes from the command line, or log into the WebLogic admin console to do so. Interestingly, a few of the changes done in the WebLogic admin console or Fusion Middleware Control are automatically synchronized with the context XML file by a script that runs in the background all the time. But Apache config file changes are not picked up by this script, so Apache changes have to be synchronized manually.

There are a few My Oracle Support notes that can help you understand these utilities a little more, such as 1676430.1 and 1905593.1. But understand that Autoconfig is a different ball game in Oracle E-Business Suite R12.2.

Discover more about Pythian’s expertise in the world of Oracle.

Categories: DBA Blogs

Amazon RDS Migration Tool

Pythian Group - Wed, 2015-09-02 15:06

Amazon has just released their RDS Migration Tool, and Pythian has recently undertaken training to use it for our clients. I wanted to share my initial thoughts on the tool, give some background on its internals, and provide a walk-through on the functionality it will be most commonly used for.

There are many factors to consider when evaluating cloud service providers, including cost, performance, and high availability and disaster recovery options. One of the most critical and overlooked elements of any cloud offering though, is the ease of migration. Often, weeks are spent evaluating all of the options only to discover after the choice is made that it will take hours of expensive downtime to complete the migration, and that there is no good rollback option in the case of failure.

In order to reduce the friction inherent in the move to a DBaaS offering, Amazon has developed an RDS Migration tool. This is an in-depth look at this new tool, which will be available after September 1, 2015. Contact Pythian to start a database migration.

With the introduction of the RDS Migration tool, Amazon has provided a powerful engine capable of handling much more than basic migration tasks. It works natively with Oracle, SQL Server, Sybase, MySQL, PostgreSQL, Redshift (target only), Aurora (target only), and provides an ODBC connector for all other source systems. The engine is powerful enough to handle fairly complex transformations and replication topologies; however, it is a migration tool and isn’t intended for long-term use.


Amazon’s RDS Migration Tool architecture is very simple. It consists of your source system, an AWS VM with the Migration Tool installed on it, and the target RDS instance.

Each migration is broken up into Tasks. Within a Task, a source and target database are defined, along with the ability to transform the data, filter the tables or data being moved, and perform complex transformations.

Tasks can be scheduled to run at particular times, can be paused and resumed, and can alert on success or failure. It’s important to note that if a task is paused while a table is loading, that table will be reloaded completely from the beginning when the task resumes.

Within a running task, the following high-level steps are performed:
• Data is pulled from the source using a single thread per table
• Data is converted into a generic data type
• All transformations are applied
• Data is re-converted into the target system’s datatype and inserted
• After the initial load, if specified, the tool monitors for updates to data and applies them in near real-time

While processing the data, each table has a single thread reading from it, and any updates are captured using the source system’s native change data capture utility. Changes are not applied until after the initial load is completed. This is done to avoid overloading the source system, where it’s assumed client applications will still be running.

Performance Considerations

There are several factors which might limit the performance seen when migrating a database.

Network Bandwidth
Probably the biggest contributor to performance issues across data centers, there is no magic button when moving to RDS. If the database is simply too big or too busy for the network to handle the data being sent across, then other options may need to be explored or used in conjunction with this tool.

Some workarounds to consider when network performance is slow include:
• Setup AWS Direct Connect
• Use a bulk-load utility, and then use the tool to catch up on transactions
• Only migrate data from a particular point in time

RDS Migration Tool Server CPU
The migration tool converts all data into a common data type before performing any transformations, then converts them into the target database’s data type. This is obviously very heavy on the server’s CPU, and this is where the main performance bottlenecks on the server are seen.

Capacity of Source database
This tool uses a single SELECT statement to migrate the data, and then returns for any changed data after the initial bulk load is completed. On a busy system, this can be a lot of undo and redo data to migrate, and the source system needs to be watched closely to ensure the log files don’t grow out of control.

Capacity of Target database
In the best case scenario, this will be the limiter as it means all other systems are moving very fast. Amazon does recommend disabling backups for the RDS system while the migration is running to minimize logging.


The following walkthrough looks at the below capabilities of this tool in version 1.2:

• Bulk Data Migration to and from the client’s environment and Amazon RDS
• Near Real-Time Updates to data after the initial load is completed
• The ability to transform data or add auditing information on the fly
• Filtering capabilities at the table or schema level

You will need to have setup network access to your databases for the RDS Migration Tool.

1. After confirming access with your account manager, access the tool by opening the AWS console, selecting EC2, and choosing AMIs.
AWS Console

2. Select the correct AMI and build your new VM. Amazon recommends an M4.large or M4.xlarge.

3. After building the new VM, you will need to install the connectors for your database engine. In this example, we’ll be using Oracle Instant Client and MySQL ODBC Connector 5.2.7.

  • For the SQL Server client tools, you will need to stop the Migration services before installing.

4. Access the Migration Tool

  • Within VM: http://localhost/AmazonRDSMigrationConsole/
  • Public URL: https://[VM-DNS]/AmazonRDSMigrationConsole/
    • Username/Password is the Administrator login to the VM

5. The first screen after logging in displays all of your current tasks and their statuses.
RDS Migration Tool Home Screen

6. Clicking on the Tasks menu in the upper-left corner will bring up a drop-down menu to access Global Settings. From here, you can set Notifications, Error Handling, Logging, etc…
RDS Migration Tool Global Settings

7. Back on the Tasks menu, click the Manage Databases button to add the source and target databases. As mentioned earlier, this walkthrough will be an Oracle to Aurora migration. Aurora targets are a MySQL database for the purposes of this tool.
RDS Migration Tool Manage Databases Pop-Up

8. After defining your connections, close the Manage Databases pop-up and select New Task. Here, you can define if the task will perform a bulk-load of your data and/or if it will attempt to apply changes made.
RDS Migration Tool New Task

9. After closing the New Task window, simply drag & drop the source and target connectors into the task.

10. By selecting Task Settings, you can now define task level settings such as number of threads, truncate or append data, and define how a restart is handled when the task is paused. You can also override the global error handling and logging settings here.

  • The best practice recommendation is to find the largest LOB value in your source database and set that as the max LOB size in the task. Setting this value allows the task to optimize LOB handling, and will give the best performance.

RDS Migration Tool Task Settings

11. Select the Table Selection button to choose which tables will be migrated. The tool uses wildcard searches to allow any combination of tables to exclude or include. For example, you can:

  • Include all tables in the database
  • Include all tables in a schema or set of schemas
  • Exclude individual tables and bring over all remaining tables
  • Include individual tables and exclude all remaining tables

The tool has an Expand List button which will display all tables that will be migrated.

In this screenshot, all tables in the MUSER08 schema that start with T1 will be migrated, while all tables that start with T2 will be excluded EXCEPT for the T20, T21, T22, & T23 tables.
RDS Migration Tool Table Selection

12. After defining which tables will be migrated, select an individual table and choose the Table Settings button. Here you can add transformations for the individual tables, add new columns or remove existing ones, and filter the data that is brought over.

In this screenshot, the T1 table records will only be brought over if the ID is greater than or equal to 50 and the C1 column is LIKE ‘Migrated%’
RDS Migration Tool Table Settings
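For reference, the filter defined in this screenshot is logically equivalent to the following predicate against the source table (schema, table and column names as used in this walkthrough):

-- rows of MUSER08.T1 that the example filter would migrate
SELECT *
FROM   muser08.t1
WHERE  id >= 50
AND    c1 LIKE 'Migrated%';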

13. Select the Global Transformations button. Like the table selection screen, you use wildcards to define which tables these transformations will be applied to.
You can:

  • Rename the schema
  • Rename the table
  • Rename columns
  • Add new columns
  • Drop existing columns
  • Change the column data types

In this screenshot, a new column named MigratedDateTime will be created on all tables and populated with the current DateTime value.
RDS Migration Tool Global Transformations
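Conceptually, this global transformation amounts to something like the following on each migrated table in the MySQL-compatible target. The tool does this for you during the load, so this is only an illustration against a hypothetical table t1:

-- add the audit column and stamp each row with the load time
ALTER TABLE t1 ADD COLUMN MigratedDateTime DATETIME;
UPDATE t1 SET MigratedDateTime = NOW();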

14. Finally, save the task and choose Run. This will kick off the migration process and bring up the Monitoring window. From here, you can see the current task’s status, notifications, and errors, as well as get an idea of the remaining time.
RDS Migration Tool Monitoring Window

Categories: DBA Blogs

You work for a software company. You don’t? Think again.

Pythian Group - Tue, 2015-09-01 09:26

If someone asks what business your company is in, you might say transportation, or networks, or retail, or a hundred other possibilities. But what you should be saying is that you are also in the software business.

At its core, your company is a software company. Or it should be.

Why? Because your competitors are growing in numbers, emerging from nowhere and aggressively using digital strategies to succeed over you.

To be successful, you must continually innovate and differentiate your company, no matter what your industry. You must do things better, faster, and cheaper. And you must engage your customers and your partners in new and meaningful ways. It doesn’t matter whether you’re a bank, a pharmaceutical company, or a logistics provider. Think like a startup and use software to stay one step ahead.

This connection can be easy if your business is already using software to provide differentiating product features or services. If you sell goods online, or you deliver content, or you offer software as a service, you know you’re in the software business. You probably focus on being more responsive and being agile, that is delivering new features faster than ever before and using data to gain business insights and to optimize the user experience.

For those companies who don’t initially think of themselves as software companies, it’s a little more interesting. In time, they will realize that software is what is differentiating them.

For example, Redline Communications thinks of itself as a wireless infrastructure company that delivers wireless networks in remote locations. In actuality, it uses software to add new features to its network components. It also uses software to expand a network’s capacity on demand, and to troubleshoot problems. Redline might manufacture hardware but it is solidly in the software business.

Pythian is often described as an IT services company, but it is undoubtedly a software company. Nobody at Pythian touches a customer’s critical production system or data without going through a software portal called Adminiscope that secures access and records all activity. Pythian doesn’t sell software, but it is absolutely in the software business.

Then there are the companies that would not traditionally be classified as technology businesses at all, but have clearly made the connection. And that doesn’t mean just having an online presence. Take retailer Neiman Marcus, a company that has consciously entered the software space with the development of apps like “Snap. Find. Shop.”, a tool that lets users take photos of a product they want and helps them track it down. They know they need to engage customers more personally, and the way to do that is through software that enables them to interact with customers, to understand and respond to buying behaviors and preferences.

KAR Auction Services, who you might know as a car auction company, has stated publicly that they no longer want to be a car auction company that uses technology but “a technology company that sells cars”. They know that software will drive the future of their business.

It is increasingly difficult to sell, deliver or support any product or service without using software. It is increasingly difficult to truly understand your business without being data driven, the byproduct of software. It is increasingly difficult to recruit employees without using software. Your customers and your employees expect you to be agile and responsive, and software helps you meet those expectations, and then measures, monitors, analyzes, and integrates data to keep you ahead of the game.

In today’s hyper-competitive world, your company must use software and technology to become agile in order to respond to ever-changing customer needs. Then you must remain as aggressive by measuring, monitoring, evaluating, and responding to data about your products and services as well as their impact on your customers and their environment. Whether it’s customer feedback about product features, or changing market trends, you need to be ready to react and iterate your products and processes at lightning speed. Software is the one thing that’s going to enable that. 

So what does it mean to use software to be competitive? It means departing from tradition. It means empowering IT to go beyond cutting costs to transform the business. It means empowering everyone in the company to innovate around software. It means encouraging radical disruptive ideas on how to change the business. And it means putting a digital strategy at the heart of your planning. And this is certainly what your competition is doing.

Categories: DBA Blogs

Plan change monitor prevents user impact from bad plan

Bobby Durrett's DBA Blog - Mon, 2015-08-31 13:07

This morning a colleague of mine noticed an alert from our plan change monitor and prevented any impact on the users by forcing a SQL’s plan to its normal fast plan before the users could be affected.  I want to share the monitor script and describe how we use its output.

I know this is long but I want to paste the SQL text of the monitor script here.  Review my comments on each step.  I can not remember the details of the script myself even though I wrote it but the comments hopefully will help:

set linesize 160
set pagesize 1000
set echo on
set termout on
set trimspool on
set define off

-- status active
-- this plan isn't the most commonly
-- run plan.
-- average this plan in v$sql > 10 x average of
-- most common plan

-- status active sessions
-- sql_id and plan_hash_value
-- elapsed and executions
-- max elapsed for the sql
-- eliminate pl/sql which has
-- plan hash value of 0

drop table active_sql;

create table active_sql as
select
vs.sql_id,
sq.plan_hash_value,
max(sq.ELAPSED_TIME) elapsed,
max(sq.executions) executions
from
v$session vs,
v$sql sq
where
vs.sql_id=sq.sql_id and
vs.SQL_CHILD_NUMBER=sq.child_number and
vs.status='ACTIVE' and
sq.plan_hash_value <> 0
group by
vs.sql_id,
sq.plan_hash_value;

-- to get the most frequently
-- used plan first get the number
-- of exections by plan
-- for each of the active sqls

drop table plan_executions;

create table plan_executions as
select
ss.sql_id,
ss.plan_hash_value,
sum(ss.executions_delta) total_executions
from
dba_hist_sqlstat ss,
active_sql a
where
ss.sql_id=a.sql_id
group by
ss.sql_id,
ss.plan_hash_value;

-- use the previous table to get
-- the plans that are most frequently 
-- used. note that two plans could
-- have the same number of executions
-- but this is unlikely.

drop table most_frequent_executions;

create table most_frequent_executions as
select
pe1.sql_id,
pe1.plan_hash_value
from plan_executions pe1
where
pe1.total_executions =
(select max(pe2.total_executions)
from plan_executions pe2
where pe2.sql_id=pe1.sql_id);

-- handle special case of two plans with
-- same number of executions.
-- pick one with highest plan value
-- just to eliminate dups.

drop table most_frequent_nodups;

create table most_frequent_nodups as
select
mfe1.sql_id,
mfe1.plan_hash_value
from most_frequent_executions mfe1
where
mfe1.plan_hash_value =
(select max(mfe2.plan_hash_value)
from most_frequent_executions mfe2
where mfe2.sql_id=mfe1.sql_id);

-- get list of active sql that 
-- are not running the most
-- frequently executed plan

drop table not_most_freq;

create table not_most_freq as
select * from active_sql
where
(sql_id,plan_hash_value) not in
(select sql_id,plan_hash_value from most_frequent_nodups);

-- working on this part of the logic:
-- average this plan in v$sql > 10 x average of
-- most common plan

-- get average elapsed of most
-- frequently executed plans
-- add 1 to handle case of 0 executions

drop table avg_elapsed_most_f;

create table avg_elapsed_most_f as
select
ss.sql_id,
ss.plan_hash_value,
sum(ss.elapsed_time_delta)/(sum(ss.executions_delta)+1) avg_elapsed
from
dba_hist_sqlstat ss,
most_frequent_nodups nd
where
ss.sql_id = nd.sql_id and
ss.plan_hash_value = nd.plan_hash_value
group by
ss.sql_id,
ss.plan_hash_value;

-- get list of the sqls that are running
-- the plan that isn't most frequently 
-- executed and has an average elapsed 
-- more than 10 times the average of 
-- the most frequently executed plan
-- add 1 to executions to prevent 
-- divide by zero

drop table more_than_10x;

create table more_than_10x as
select
n.sql_id,
n.plan_hash_value
from
not_most_freq n,
avg_elapsed_most_f m
where
(n.elapsed/(n.executions+1)) > 10 * m.avg_elapsed and
n.sql_id=m.sql_id;

spool planchangemonitor.log

select name db_name from v$database;

-- The listed sql_id and plan_hash_value items correspond to 
-- sql statements that have plans that may be
-- inefficient and need to be investigated.
-- The session id and username are included if a
-- session is currently running the sql with the plan.

select
'CHANGED '||'PLAN' flag,
m.sql_id,
m.plan_hash_value,
s.sid,
s.username
from
more_than_10x m,
v$session s,
v$sql q
where
m.sql_id=s.sql_id(+) and
m.plan_hash_value=q.plan_hash_value(+) and
s.sql_id=q.sql_id and
s.sql_child_number=q.child_number
order by
m.sql_id,
s.sid;

spool off

If I remember correctly I think the script looks for sessions running a plan whose current run time is 10 times that of the most frequently executed plan. This script is not perfect. The join to v$sql is not perfect and in some cases you can get duplicates.  People could quibble about the approach.  Why 10 times the previous run time?  I thought about more complex approaches but I just needed to get something in place.  But, on one database with a lot of small transactions we have made good use of this script, despite its flaws.

This morning my colleague noticed emails from the script that runs this SQL listing a query whose plan had changed.  The output looked like this:

FLAG         SQL_ID        PLAN_HASH_VALUE        SID USERNAME
------------ ------------- --------------- ---------- ---------
CHANGED PLAN 75ufmwrcmsuwz      2484041482         35 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        394 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        395 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        446 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        463 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        464 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        544 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        613 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        631 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        665 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        678 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        738 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        746 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        750 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        752 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482       1333 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482       1416 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482       1573 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482       1943 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482       1957 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482       3038 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482       3445 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482       3816 YOURUSER

I changed the real user to YOURUSER.  This output indicates that 23 sessions were all running the same SQL – sql_id=75ufmwrcmsuwz – and that this SQL was running on a new plan that was causing the SQL to run at least 10 times the normal run time.  In fact it was about 30 times as long.

To resolve the issue my colleague used our script to find the history of plans for 75ufmwrcmsuwz.

     select ss.sql_id,
  2  ss.plan_hash_value,
  3  sn.END_INTERVAL_TIME,
  4  ss.executions_delta,
  5  ELAPSED_TIME_DELTA/(executions_delta*1000) "Elapsed Average ms",
  6  CPU_TIME_DELTA/(executions_delta*1000) "CPU Average ms",
  7  IOWAIT_DELTA/(executions_delta*1000) "IO Average ms",
  8  CLWAIT_DELTA/(executions_delta*1000) "Cluster Average ms",
  9  APWAIT_DELTA/(executions_delta*1000) "Application Average ms",
 10  CCWAIT_DELTA/(executions_delta*1000) "Concurrency Average ms",
 11  BUFFER_GETS_DELTA/executions_delta "Average buffer gets",
 12  DISK_READS_DELTA/executions_delta "Average disk reads",
 13  ROWS_PROCESSED_DELTA/executions_delta "Average rows processed"
 14  from DBA_HIST_SQLSTAT ss,DBA_HIST_SNAPSHOT sn
 15  where ss.sql_id = '75ufmwrcmsuwz'
 16  and ss.snap_id=sn.snap_id
 17  and executions_delta > 0
 18  and ss.INSTANCE_NUMBER=sn.INSTANCE_NUMBER
 19  order by ss.snap_id,ss.sql_id;

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
75ufmwrcmsuwz      2297146707 30-AUG-15 AM              830         587.207673     49.9638554    541.617188                  0                      0                      0          5234.01928         125.083133              332.66747
75ufmwrcmsuwz      2297146707 30-AUG-15 AM             1178         578.205867     49.3972835    532.377174                  0                      0                      0          4870.22326         126.048387              324.50764
75ufmwrcmsuwz      2297146707 30-AUG-15 AM             1433         631.484713     49.1486392    585.826676                  0                      0                      0          4624.11305         125.446615              299.57083
75ufmwrcmsuwz      2297146707 30-AUG-15 PM             1620         592.593823     49.5987654     546.29731                  0                      0                      0          4744.17284         121.961728             312.735185
75ufmwrcmsuwz      2297146707 30-AUG-15 PM             1774         534.412339      51.059752     485.46836                  0                      0                      0          4983.44983         119.564825             326.067644
75ufmwrcmsuwz      2297146707 30-AUG-15 PM             1757         447.385105     44.9345475    404.415659                  0                      0                      0          4525.13147         107.277746             293.739328
75ufmwrcmsuwz      2297146707 30-AUG-15 PM             1626         431.718507      45.904059    388.200416                  0                      0                      0          4462.93296         118.027675             300.724477
75ufmwrcmsuwz      2297146707 30-AUG-15 PM             1080         375.905966      44.212963    334.434835                  0                      0                      0          4766.81574         109.157407             310.712037
75ufmwrcmsuwz      2297146707 30-AUG-15 PM              707         368.289475     44.3140028    327.166223                  0                      0                      0          4894.20509         108.050919             315.565771
75ufmwrcmsuwz      2297146707 30-AUG-15 PM              529         341.483588     39.6408318     305.47356                  0                      0                      0          4381.19849         96.2646503             288.030246
75ufmwrcmsuwz      2297146707 30-AUG-15 PM              356         380.733635     41.5168539    342.034876                  0                      0                      0           4553.4691         105.272472             292.283708
75ufmwrcmsuwz      2297146707 30-AUG-15 PM              313         435.449406     37.1565495    402.636489                  0                      0                      0          4144.30351         92.8690096             264.923323
75ufmwrcmsuwz      2297146707 30-AUG-15 PM              214         516.455509     44.5794393    477.020692                  0                      0                      0          4567.67757         114.415888             289.607477
75ufmwrcmsuwz      2297146707 30-AUG-15 PM              182         720.749681     44.3956044    684.439467                  0                      0                      0          3811.83516         95.2362637             239.027473
75ufmwrcmsuwz      2297146707 30-AUG-15 PM               83          1043.1503     43.7349398    1008.41358                  0                      0                      0          3575.96386         114.289157             250.120482
75ufmwrcmsuwz      2484041482 30-AUG-15 PM                6         25314.6558     4311.66667    22971.4913                  0                      0                      0          78533.8333         69813.3333             157.833333
75ufmwrcmsuwz      2484041482 31-AUG-15 AM               96         25173.7346     5105.20833    21475.9516                  0                      0                      0          135242.802         62433.3125             118.395833
75ufmwrcmsuwz      2484041482 31-AUG-15 AM               39         26877.0626     5540.51282    22977.6229                  0                      0                      0          139959.308         68478.1795             93.7179487
75ufmwrcmsuwz      2484041482 31-AUG-15 AM               38          26993.419     5998.15789    22768.4285                  0                      0                      0          153843.342              74492             149.342105
75ufmwrcmsuwz      2484041482 31-AUG-15 AM               29         25432.5074     4952.06897    22288.7966                  0                      0                      0          112813.552         69803.0345             187.689655
75ufmwrcmsuwz      2484041482 31-AUG-15 AM               34         27281.7339     4541.47059    24543.1609                  0                      0                      0             95144.5         69187.3824             135.676471
75ufmwrcmsuwz      2484041482 31-AUG-15 AM              146         30512.9976     5421.43836    26984.2559                  0                      0                      0          115531.801         71886.6644             136.321918
75ufmwrcmsuwz      2484041482 31-AUG-15 AM              405         24339.6641     4853.40741    20794.0472                  0                      0                      0           115490.01         62004.4642             229.106173
75ufmwrcmsuwz      2484041482 31-AUG-15 AM              838         27552.3731     4903.06683    23661.2101                  0                      0                      0          111654.558         58324.9511             176.058473
75ufmwrcmsuwz      2484041482 31-AUG-15 AM             1653         30522.8358     4550.56261    26526.2183                  0                      0                      0           93818.418         49865.4701             137.212341

Based on this output my colleague chose 2297146707 as the good plan.  She ran coe_xfr_sql_profile.sql which is found in SQLT’s utl directory like this:

coe_xfr_sql_profile.sql 75ufmwrcmsuwz 2297146707

Then she ran the generated script:


This forced the plan back to its earlier efficient plan and my coworker did all of this early this morning before we reached our peak time of usage and before the bad plan could affect our users.
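If you want to double-check that the forced plan is actually in place, the profile created by coe_xfr_sql_profile.sql can be queried from the data dictionary. This is a minimal sketch; the generated profiles are typically named coe_<sql_id>_<plan_hash_value>.

-- confirm the profile exists and is enabled
select name, status, created
from   dba_sql_profiles
where  name like 'coe_75ufmwrcmsuwz%'
order  by created desc;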


P.S. This example is from a database running on HP-UX Itanium.

Categories: DBA Blogs

RMAN -- 8 : Using a Recovery Catalog Schema

Hemant K Chitale - Sun, 2015-08-30 06:21
Besides retaining information about backups in the controlfile, Oracle allows the use of an (external) Recovery Catalog schema.  This schema is queryable via SQL in the same manner as querying any user / application schema.
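For example, once a target database has been registered (as I do below), the RC_% views in the catalog schema can be queried with ordinary SQL. A minimal sketch against the catalog owner created in this post:

-- backups of datafile 1 for database ORCL, as recorded in the catalog
select bdf.completion_time, bdf.datafile_blocks, bdf.blocks_read
from   rcat_owner.rc_backup_datafile bdf,
       rcat_owner.rc_database d
where  bdf.db_key = d.db_key
and    d.name     = 'ORCL'
and    bdf.file#  = 1
order  by bdf.completion_time;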

Let's start with a database that already has backups present but created without a Recovery Catalog Schema.

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release Production on Sun Aug 30 19:48:30 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select to_char(completion_time,'DD-MON HH24:MI') Completed_at, datafile_blocks, blocks_read
2 from v$backup_datafile
3 where file#=1
4 order by completion_time;

COMPLETED_AT          DATAFILE_BLOCKS BLOCKS_READ
--------------------- --------------- -----------
01-AUG 22:10 107648 107648
10-AUG 15:14 107648 107648
10-AUG 19:58 107648 107648
30-AUG 16:59 107648 107648


I now create a Catalog Schema and register this database into that schema.  There are 4 steps to this.  The first is to create the database that will hold the Catalog Schema (here, we presume that the database has been created before today's steps).  The next is to create the database account for the Catalog Schema (I create the account RCAT_OWNER).  The third step is to log in to the Catalog Schema with RMAN and run the CREATE CATALOG command.  The final step is to create an RMAN connection from the TARGET database to the Catalog and REGISTER the database.

[oracle@localhost ~]$ sqlplus system/oracle@rcat

SQL*Plus: Release Production on Sun Aug 30 19:52:15 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYSTEM>create user rcat_owner identified by rcat_owner
2 default tablespace users quota unlimited on users;

User created.

SYSTEM>grant create session to rcat_owner;

Grant succeeded.

SYSTEM>grant recovery_catalog_owner to rcat_owner;

Grant succeeded.

Disconnected from Oracle Database 11g Enterprise Edition Release - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ rman catalog rcat_owner@rcat

Recovery Manager: Release - Production on Sun Aug 30 19:53:26 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

recovery catalog database Password:
connected to recovery catalog database

RMAN> create catalog;

recovery catalog created

RMAN> exit

Recovery Manager complete.
[oracle@localhost ~]$ rman target / catalog rcat_owner/rcat_owner@rcat

Recovery Manager: Release - Production on Sun Aug 30 19:55:59 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)
connected to recovery catalog database

RMAN> register database;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete


When my (TARGET) database ("ORCL") is registered, Oracle automatically does a RESYNC CATALOG. Can I now see my backups in the Catalog ?

RMAN> list backup of datafile 1;

List of Backup Sets

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
381 Full 863.89M DISK 00:03:38 10-AUG-15
BP Key: 391 Status: AVAILABLE Compressed: YES Tag: TAG20150810T195515
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/o1_mf_nnndf_TAG20150810T195515_bwk493jn_.bkp
List of Datafiles in backup set 381
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14186110 10-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
385 Full 238.12M DISK 00:01:09 30-AUG-15
BP Key: 395 Status: AVAILABLE Compressed: YES Tag: TAG20150830T165804
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_30/o1_mf_nnndf_TAG20150830T165804_by5kdwrb_.bkp
List of Datafiles in backup set 385
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14198051 30-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf


Why does it show only the latest two backups ? What about the preceding two backups that I could see in V$BACKUP_DATAFILE ? Let me check those backups without a CATALOG connection.

RMAN> exit

Recovery Manager complete.
[oracle@localhost ~]$ rman target /

Recovery Manager: Release - Production on Sun Aug 30 19:59:59 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN> list backup of datafile 1;

using target database control file instead of recovery catalog

List of Backup Sets

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
254 Full 733.27M DISK 00:04:51 01-AUG-15
BP Key: 266 Status: AVAILABLE Compressed: YES Tag: TAG20150801T220612
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_01/o1_mf_nnndf_TAG20150801T220612_bvsnlnpn_.bkp
List of Datafiles in backup set 254
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14157609 01-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
262 Full 733.23M DISK 00:03:17 10-AUG-15
BP Key: 274 Status: AVAILABLE Compressed: YES Tag: TAG20150810T151144
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/o1_mf_nnndf_TAG20150810T151144_bwjmojs6_.bkp
List of Datafiles in backup set 262
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14158847 10-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
271 Full 863.89M DISK 00:03:38 10-AUG-15
BP Key: 283 Status: AVAILABLE Compressed: YES Tag: TAG20150810T195515
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/o1_mf_nnndf_TAG20150810T195515_bwk493jn_.bkp
List of Datafiles in backup set 271
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14186110 10-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
275 Full 238.12M DISK 00:01:09 30-AUG-15
BP Key: 287 Status: AVAILABLE Compressed: YES Tag: TAG20150830T165804
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_30/o1_mf_nnndf_TAG20150830T165804_by5kdwrb_.bkp
List of Datafiles in backup set 275
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14198051 30-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf


When I disconnect from the Catalog Schema and do a local only (TARGET) connection, I can see 4 backups of the datafile.  So, what gives ?  Let me try an SQL query on V$BACKUP_DATAFILE.
(As an aside: note above how there can be a discrepancy in the listings shown by LIST BACKUP in the two scenarios: (a) without a Recovery Catalog connection and (b) with a Recovery Catalog that was created after the last RESETLOGS.)  This is something to remember.

RMAN> exit

Recovery Manager complete.
[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release Production on Sun Aug 30 20:01:54 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select to_char(completion_time,'DD-MON HH24:MI') Completed_at, datafile_blocks, blocks_read,
2 resetlogs_change#, resetlogs_time
3 from v$backup_datafile
4 where file#=1
5 order by completion_time
6 /

--------------------- --------------- ----------- ----------------- ---------
01-AUG 22:10 107648 107648 14082620 04-JUL-15
10-AUG 15:14 107648 107648 14082620 04-JUL-15
10-AUG 19:58 107648 107648 14185666 10-AUG-15
30-AUG 16:59 107648 107648 14185666 10-AUG-15


Notice that I have added two columns (RESETLOGS_CHANGE# and RESETLOGS_TIME) in the query.  Now, I see that the two older backups were from an *older* incarnation of the database.  They have a different RESETLOGS_CHANGE# / RESETLOGS_TIME.  So, those backups of the older incarnation are not cataloged into the Catalog Schema !

Can I do anything about this?  It seems that a full RESYNC doesn't resync backups of previous incarnations.  Can I reset my RETENTION POLICY and then do a RESYNC?

Disconnected from Oracle Database 11g Enterprise Edition Release - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ rman target / catalog rcat_owner/rcat_owner@rcat

Recovery Manager: Release - Production on Sun Aug 30 20:08:06 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)
connected to recovery catalog database

RMAN> show all;

RMAN configuration parameters for database with db_unique_name HEMANTDB are:
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/home/oracle/app/oracle/product/11.2.0/dbhome_2/dbs/snapcf_orcl.f'; # default

RMAN> configure retention policy to recovery window of 36 days;

old RMAN configuration parameters:
new RMAN configuration parameters:
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

RMAN> resync catalog;

starting full resync of recovery catalog
full resync complete

RMAN> list backup of datafile 1;

List of Backup Sets

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
381 Full 863.89M DISK 00:03:38 10-AUG-15
BP Key: 391 Status: AVAILABLE Compressed: YES Tag: TAG20150810T195515
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/o1_mf_nnndf_TAG20150810T195515_bwk493jn_.bkp
List of Datafiles in backup set 381
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14186110 10-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
385 Full 238.12M DISK 00:01:09 30-AUG-15
BP Key: 395 Status: AVAILABLE Compressed: YES Tag: TAG20150830T165804
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_30/o1_mf_nnndf_TAG20150830T165804_by5kdwrb_.bkp
List of Datafiles in backup set 385
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14198051 30-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf


No, extending the Recovery Window still doesn't help.  Can I try something else ?  What about the CATALOG command ?

[oracle@localhost ~]$ ls -l /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_01/
total 894280
-rw-rw---- 1 oracle oracle 7786496 Aug 1 22:06 o1_mf_annnn_TAG20150801T220605_bvsnlfrc_.bkp
-rw-rw---- 1 oracle oracle 2421248 Aug 1 22:06 o1_mf_annnn_TAG20150801T220605_bvsnlk3q_.bkp
-rw-rw---- 1 oracle oracle 56320 Aug 1 22:06 o1_mf_annnn_TAG20150801T220605_bvsnll9n_.bkp
-rw-rw---- 1 oracle oracle 3595776 Aug 1 22:06 o1_mf_annnn_TAG20150801T220605_bvsnlmdx_.bkp
-rw-rw---- 1 oracle oracle 165888 Aug 1 22:12 o1_mf_annnn_TAG20150801T221209_bvsnxs75_.bkp
-rw-rw---- 1 oracle oracle 16896 Aug 1 22:14 o1_mf_annnn_TAG20150801T221404_bvso1f28_.bkp
-rw-rw---- 1 oracle oracle 768901120 Aug 1 22:11 o1_mf_nnndf_TAG20150801T220612_bvsnlnpn_.bkp
-rw-rw---- 1 oracle oracle 110075904 Aug 1 22:11 o1_mf_nnndf_TAG20150801T220612_bvsnvwjj_.bkp
-rw-rw---- 1 oracle oracle 21782528 Aug 1 22:12 o1_mf_nnndf_TAG20150801T220612_bvsnx9v4_.bkp
[oracle@localhost ~]$ ls -l /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/
total 1771844
-rw-rw---- 1 oracle oracle 786944 Aug 10 15:11 o1_mf_annnn_TAG20150810T151143_bwjmohc7_.bkp
-rw-rw---- 1 oracle oracle 28672 Aug 10 15:16 o1_mf_annnn_TAG20150810T151601_bwjmxk5v_.bkp
-rw-rw---- 1 oracle oracle 4140032 Aug 10 19:55 o1_mf_annnn_TAG20150810T195509_bwk48xxv_.bkp
-rw-rw---- 1 oracle oracle 526336 Aug 10 19:55 o1_mf_annnn_TAG20150810T195509_bwk4911j_.bkp
-rw-rw---- 1 oracle oracle 311296 Aug 10 19:55 o1_mf_annnn_TAG20150810T195509_bwk4923s_.bkp
-rw-rw---- 1 oracle oracle 32256 Aug 10 19:59 o1_mf_annnn_TAG20150810T195901_bwk4j5lt_.bkp
-rw-rw---- 1 oracle oracle 768851968 Aug 10 15:15 o1_mf_nnndf_TAG20150810T151144_bwjmojs6_.bkp
-rw-rw---- 1 oracle oracle 110075904 Aug 10 15:15 o1_mf_nnndf_TAG20150810T151144_bwjmvycy_.bkp
-rw-rw---- 1 oracle oracle 21938176 Aug 10 15:15 o1_mf_nnndf_TAG20150810T151144_bwjmx1sv_.bkp
-rw-rw---- 1 oracle oracle 905863168 Aug 10 19:58 o1_mf_nnndf_TAG20150810T195515_bwk493jn_.bkp
[oracle@localhost ~]$

RMAN> catalog start with '/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_01/';

searching for all files that match the pattern /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_01/
no files found to be unknown to the database

RMAN> list backup of datafile 1;

List of Backup Sets

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
381 Full 863.89M DISK 00:03:38 10-AUG-15
BP Key: 391 Status: AVAILABLE Compressed: YES Tag: TAG20150810T195515
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/o1_mf_nnndf_TAG20150810T195515_bwk493jn_.bkp
List of Datafiles in backup set 381
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14186110 10-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
385 Full 238.12M DISK 00:01:09 30-AUG-15
BP Key: 395 Status: AVAILABLE Compressed: YES Tag: TAG20150830T165804
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_30/o1_mf_nnndf_TAG20150830T165804_by5kdwrb_.bkp
List of Datafiles in backup set 385
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14198051 30-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

RMAN> catalog start with '/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/';

searching for all files that match the pattern /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/
no files found to be unknown to the database

Well, apparently, even the CATALOG command refuses to catalog backup pieces from an older incarnation !

So, it seems that :
1.  If you've done a RESETLOGS recently and you then create a Catalog Schema after the RESETLOGS, backups of the preceding incarnation are not visible in the Catalog, even though the controlfile may still show them (disconnecting from the Catalog does allow you to view the previous backups in RMAN !)
2. The CATALOG command also will not include the previous backups; it will only accept backups of the current incarnation (the LIST INCARNATION aside below shows how to check which incarnation is current).
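
A hedged aside that is not part of the original test: the standard RMAN command LIST INCARNATION shows which incarnations a repository knows about, so running it once with a TARGET-only connection and once with the CATALOG connection makes the mismatch described above easy to see.

RMAN> list incarnation of database;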

This testing has been done with    Has the behaviour changed in / / ?


Categories: DBA Blogs

asmcmd> a better “du”

Pythian Group - Fri, 2015-08-28 14:28

I discovered ASM with a RAC running on Linux Itanium, and that was a big adventure. At that time there was no asmcmd. In 2005, Oracle released Oracle 10gR2 and asmcmd came into the picture, and we figured out how to make it work with a 10gR1 ASM. We were very excited to have a command line for ASM until… we tried it! Let’s call a spade a spade, it was very poor…

Ten years later, Oracle has released 11gR1, 11gR2, and 12cR1; asmcmd has been improved, but the “ASM shell” remains very weak, especially the “du” command:

Used_MB Mirror_used_MB
 556178 556178
ASMCMD> du .
Used_MB Mirror_used_MB
 556178 556178
ASMCMD> du *
Used_MB Mirror_used_MB
 556265 556265

Why doesn’t “du *” act as it does in any Unix shell? How do I know the size of each subdirectory in my current directory?


Nowadays, we often have dozens of instances running on the same server sharing the same ASM:

[oracle@higgins ~]$ ps -ef | grep pmon | wc -l
[oracle@higgins ~]$

So should I run one “du” per database (directory) to know the size used by each database? What if I keep one month of archivelogs in my FRA? Should I wait for the month of February, to have only 28 “du” commands to run, if I want to know the size of the archivelogs generated each day (and only in a non-leap year!)?


This is why I wrote this piece of code to have a “du” under ASM that makes my life easier every day:

[oracle@higgins ~]$ cat
#!/bin/bash
# du of each subdirectory in a directory for ASM

D=$1

if [[ -z $D ]]
then
 echo "Please provide a directory !"
 exit 1
fi

(for DIR in `asmcmd ls ${D}`
 do
     echo ${DIR} `asmcmd du ${D}/${DIR} | tail -1`
 done) | awk -v D="$D" ' BEGIN {  printf("\n\t\t%40s\n\n", D " subdirectories size")           ;
                                  printf("%25s%16s%16s\n", "Subdir", "Used MB", "Mirror MB")   ;
                                  printf("%25s%16s%16s\n", "------", "-------", "---------")   ;}
                               {  printf("%25s%16s%16s\n", $1, $2, $3)                         ;
                                  use += $2                                                    ;
                                  mir += $3                                                    ;}
                         END   {  printf("\n\n%25s%16s%16s\n", "------", "-------", "---------");
                                  printf("%25s%16s%16s\n\n", "Total", use, mir)                 ;} '
[oracle@higgins ~]$
Let's see it in action with some real-life examples:
[oracle@higgins ~]$. oraenv
The Oracle base remains unchanged with value /oracle
[oracle@higgins ~]$./ DATA

DATA subdirectories size

Subdir  Used MB Mirror MB
------  ------- --------
DB01/    2423    2423
DB02/    2642    2642
DB03/    321201  321201
DB04/    39491   39491
DB05/    180753  180753
DB06/    4672    4672
DB07/    1431    1431
DB08/    2653    2653
DB09/    70942   70942
DB10/    96001   96001
DB11/    57322   57322
DB12/    70989   70989
DB13/    4639    4639
DB14/    40800   40800
DB15/    13397   13397
DB16/    15279   15279
DB17/    19020   19020
DB18/    8886    8886
DB19/    4671    4671
DB20/    14994   14994
DB21/    502245  502245
DB22/    4839    4839
DB23/    10169   10169
DB24/    7772    7772
DB25/    7828    7828
DB26/    112109  112109
DB27/    5564    5564
DB28/    16895   16895
------  ------- ---------
Total   1639627 1639627
[oracle@higgins ~]$


Another one with many archivelogs directories :
[oracle@higgins ~]$./ FRA/THE_DB/ARCHIVELOG/

 FRA/THE_DB/ARCHIVELOG/ subdirectories size

 Subdir       Used MB Mirror MB
 ------        ------ ---------
 2015_02_19/    114   114
 2015_02_20/    147   147
 2015_02_21/    112   112
 2015_02_22/    137   137
 2015_02_23/    150   150
 2015_02_24/    126   126
 2015_02_25/    135   135
 2015_02_26/    130   130
 2015_02_27/    129   129
 2015_02_28/    119   119
 2015_03_01/    146   146
 2015_03_02/    150   150
 2015_03_03/    128   128
 2015_03_04/    134   134
 2015_03_05/    44    44
 2015_05_27/    28    28
 2015_05_28/    95    95
 2015_05_29/    76    76
 2015_05_30/    187   187
 2015_05_31/    78    78
 2015_06_01/    111   111
 2015_06_02/    105   105
 2015_06_03/    43    43
 2015_06_04/    142   142
 2015_06_05/    42    42
 2015_06_06/    84    84
 2015_06_07/    70    70
 2015_06_08/    134   134
 2015_06_09/    77    77
 2015_06_10/    143   143
 2015_06_11/    2     2
 2015_06_21/    14    14
 2015_06_22/   14918 14918
 ------       ------- ---------
 Total         18250   18250

[oracle@higgins ~]$

This example is a nice one, as it shows that 2015 is not a leap year and that some archivelogs are still on disk even though they probably shouldn’t be. That is good information, since v$log_history no longer contains it:

SQL> select trunc(FIRST_TIME), count(*) from v$log_history group by trunc(FIRST_TIME) order by 1 ;

--------- ----------
22-JUN-15 402


Hope it will also make your life easier,

Have a good day :)

Categories: DBA Blogs

Simplify Oracle Tracing with Creative Scripting

Pythian Group - Fri, 2015-08-28 14:26

Running a SQL trace is something that all DBAs do to varying degrees. Let’s say you are working on optimizing a SQL statement, and experimenting with some different hints for indexes and optimizer directives. This kind of effort typically goes something like this:

  • modify the SQL statement
  • enable tracing
  • run the statement
  • disable tracing
  • disconnect
  • retrieve the trace file
  • use a profiler to process the trace file
    this might be Method-R mrskew, Oracle tkprof, or something of your own.
  • delete the trace file if no longer needed

That process is OK if all you need to do is look at a couple of trace files, but quickly becomes tedious for any serious optimization effort as there will be many iterations of this process.  This is the kind of job that just cries out for some simple automation.

Let’s walk though automating much of this process using Sqlplus, ssh and some profiling tools.

First let’s consider the environment:

  • Oracle 11.2 database on a remote server
  • Workstation has 11.2 client software installed
  • ssh is setup for connecting to the oracle user on the database server
  • some profiling tools are available

Let’s get started with the script that is the subject of our ‘tuning’ effort.

-- sql2trace.sql
select * from dual;

As you can see there is not really going to be any tuning done in this article; it is all about the process.

The following script, tracefile_identifier_demo.sql, is used to set up the trace environment by collecting some information about the database host and the process owner, and then setting the tracefile_identifier parameter.  The values for these are then used to set sqlplus define variables.

-- tracefile_identifier_demo.sql

-- column variables to capture host, owner and tracefile name
col tracehost new_value tracehost noprint
col traceowner new_value traceowner noprint
col tracefile new_value tracefile noprint

set term off head off feed off

-- get oracle owner
select username traceowner from v$process where pname = 'PMON';

-- get host name
select host_name tracehost from v$instance;

-- set tracefile identifier
alter session set tracefile_identifier = 'MYTRACEFILE';

select value tracefile from v$diag_info where name = 'Default Trace File';

set term on head on feed on

-- do your tracing here
alter session set events '10046 trace name context forever, level 12';

-- run your SQL here

alter session set events '10046 trace name context off';

-- disconnect to ensure all trace data flushed
-- the disconnect must be done in the called script
-- otherwise the values of the defined vars are lost

-- now get the trace file, or other processing
--@@mrskew '&&traceowner@&&tracehost' '&&tracefile'
@@tkprof '&&traceowner@&&tracehost' '&&tracefile'

This article began as an idea to write about tracefile_identier, hence the script name.

Most of this script is quite straightforward:

  • set column command initiated define variables to capture host, process owner and tracefile name
  • collect the data
  • enable tracing
  • run the target script
  • disable tracing
  • call the tkprof.sql script to run tkprof

The interesting bit is found in tkprof.sql.

-- tkprof.sql

col ssh_target new_value ssh_target noprint
col scp_filename new_value scp_filename noprint

set term off feed off verify off echo off

select '&&1' ssh_target from dual;
select '&&2' scp_filename from dual;

set feed on term on verify on

-- disconnect to ensure all trace data is flushed (see the note below)
disconnect

host ssh &&ssh_target 'cat &&scp_filename' | tkprof /dev/stdin ./tkprof.out sort=exeqry sys=no
host cat ./tkprof.out

There are a couple of things to take notice of in tkprof.sql.  Did you notice the disconnect statement?  There are a couple of points of interest about that.  Prior to 11g it was necessary to disconnect from Oracle to ensure that all cursors were closed and all STAT and row source operation rows were written to the trace file.  Disconnecting the session is not necessary in Oracle 11g+.

Another interesting bit about this disconnect statement is its placement.  At first the disconnect statement was in the main script.  The problem was that the define variables would all lose their values prior to calling the tkprof.sql script, and so the call would fail; hence the disconnect command lives in the called script.

Finally the trace output is retrieved via ssh and piped to tkprof.  Notice that there is no need to actually copy the file; rather, the contents of the file are simply sent to STDOUT and piped to tkprof.

The tkprof command does not read from STDIN.  If, for instance, you try this: cat somefile | tkprof - ./tkprof.out sort=exeqry, tkprof will exit with an error that an input file is needed.  That problem is circumvented by using the file /dev/stdin.
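
As a standalone illustration of the same /dev/stdin trick (the host name and trace file path below are hypothetical, not taken from this demo):

ssh oracle@dbhost 'cat /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_12345_MYTRACEFILE.trc' | tkprof /dev/stdin ./tkprof.out sort=exeqry sys=no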

Put it all together and it looks like this:

11:34:11 JKSTILL@oravm > @tracefile_identifier_demo

Session altered.

Elapsed: 00:00:00.00


1 row selected.

Elapsed: 00:00:00.00

Session altered.

Elapsed: 00:00:00.00

TKPROF: Release - Development on Thu Aug 27 11:34:18 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Host key fingerprint is de:ad:be:ed:a2:d6:63:4b:rx:77:fd:1c:e1:36:2b:88
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|         .  .    |
|        S  +.    |
|        ..ox.o   |
|       o+.F.* o  |
|      99+o.o.= . |
|     .  |

TKPROF: Release - Development on Thu Aug 27 11:34:18 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Trace file: /dev/stdin
Sort options: exeqry
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call

SQL ID: a5ks9fhw2v9s1 Plan Hash: 272002086

select *

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          2          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          2          0           1

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 90
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  TABLE ACCESS FULL DUAL (cr=2 pr=0 pw=0 time=22 us cost=2 size=2 card=1)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  log file sync                                   1        0.00          0.00
  SQL*Net message from client                     2        0.00          0.00

SQL ID: 06nvwn223659v Plan Hash: 0

alter session set events '10046 trace name context off'

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          0          0           0

Misses in library cache during parse: 0
Parsing user id: 90


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          2          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        6      0.00       0.00          0          2          0           1

Misses in library cache during parse: 0

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       3        0.00          0.00
  SQL*Net message from client                     3        0.00          0.00
  log file sync                                   1        0.00          0.00


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          3           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        1      0.00       0.00          0          0          3           1

Misses in library cache during parse: 0

    2  user  SQL statements in session.
    1  internal SQL statements in session.
    3  SQL statements in session.
Trace file: /dev/stdin
Trace file compatibility:
Sort options: exeqry
       1  session in tracefile.
       2  user  SQL statements in trace file.
       1  internal SQL statements in trace file.
       3  SQL statements in trace file.
       3  unique SQL statements in trace file.
     218  lines in trace file.
       0  elapsed seconds in trace file.

The same process was used to run the trace data through the Method-R mrskew command:

-- mrskew.sql

col ssh_target new_value ssh_target noprint
col scp_filename new_value scp_filename noprint

set term off feed off verify off echo off

select '&&1' ssh_target from dual;
select '&&2' scp_filename from dual;

set feed on term on verify on
host ssh &&ssh_target 'cat &&scp_filename' | mrskew

The results of calling mrskew.sql  rather than tkprof.sql:

CALL-NAME                    DURATION       %  CALLS      MEAN       MIN       MAX
---------------------------  --------  ------  -----  --------  --------  --------
SQL*Net message from client  0.003733   74.1%      3  0.001244  0.001004  0.001663
log file sync                0.001300   25.8%      1  0.001300  0.001300  0.001300
SQL*Net message to client    0.000008    0.2%      3  0.000003  0.000002  0.000003
PARSE                        0.000000    0.0%      2  0.000000  0.000000  0.000000
FETCH                        0.000000    0.0%      2  0.000000  0.000000  0.000000
CLOSE                        0.000000    0.0%      2  0.000000  0.000000  0.000000
EXEC                         0.000000    0.0%      2  0.000000  0.000000  0.000000
---------------------------  --------  ------  -----  --------  --------  --------
TOTAL (7)                    0.005041  100.0%     15  0.000336  0.000000  0.001663

These scripts can all be found at

If you have ideas about how to improve these, please feel free to clone the repo, make some changes and issue a pull request.

If you don’t know what all of that means, might I suggest this article?  Git for Beginners

The next time you have some tracing to do, why not give this method a try?  Doing so will save you time and make you more productive.


Categories: DBA Blogs

Pillars of PowerShell: SQL Server – Part 2

Pythian Group - Fri, 2015-08-28 14:24

This is the seventh and final post in the series on the Pillars of PowerShell. The previous posts in the series are:

  1. Interacting
  2. Commanding
  3. Debugging
  4. Profiling
  5. Windows OS
  6. SQL Server – Part 1

In this final post I am going to touch on SQL Server Management Objects (SMO) with PowerShell. SMO is one of the most widely used methods and, to me, offers the most versatile way of working with SQL Server. It can be a bit tedious to work with, since you are going to be using raw .NET objects instead of cmdlets, but it offers so much more compared to SQLPS. In this post I am just going to touch on the basics of loading SMO, and how you can connect to an instance of SQL Server (or multiple instances). I am going to end by showing you a function I published a few years ago and use fairly frequently to this day.

Loading SMO

As with SQLPS, you have to load SMO into your PowerShell session before you can utilize it. SMO is what is referred to as an “assembly”, basically a collection of types and other objects that form a logical unit of functionality for interacting with various parts of SQL Server. With SQL Server 2012 and above you can import the SQLPS module and it will automatically import the associated version of SMO. However, because SQLPS loads in more than just SMO, it can take time for that to complete before your script will continue. In that regard, you can shave off some time by just loading SMO directly without all the overhead of the SQLPS module. You will commonly see the following line of code used to load SMO into your session:

[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo')

Generally this command is going to load the highest version registered in the GAC on your machine. In the screenshot you may see the version is “13.0.0”; this is from the SQL Server Management Studio preview (July 2015) that is installed on my machine. Now, with PowerShell things change over time, and using LoadWithPartialName is actually the version 1 method of loading SMO. This method is no longer supported, but still works for now. In PowerShell 2.0 a cmdlet was added to do this for you, called Add-Type. If you were to just type in Add-Type ‘Microsoft.SqlServer.Smo’ when you have multiple versions, you are going to get an error similar to this:


In this situation you have to specify the assembly you want to load, so there is a bit more to doing this with SMO. You can load an assembly by specifying the file itself or by the assembly name along with 4 bits of information:

  1. Name
  2. Version
  3. Culture
  4. PublicKeyToken

To date, Microsoft always uses the same Culture and PublicKeyToken on almost all of the assemblies that come out of Redmond. So the only thing lacking is the version, which is going to be in the format of a 4-part version number. If you have worked with SQL Server and you are familiar with the build numbers, you simply need to know that “10” is SQL Server 2008, “11” is SQL Server 2012, “12” is SQL Server 2014, and “13” is going to be SQL Server 2016. So, if I want to load the SQL Server 2012 SMO into my session I simply use this command:

Add-Type -AssemblyName "Microsoft.SqlServer.Smo, Version=, Culture=neutral, PublicKeyToken=89845dcd8080cc91"
The first connection…

To connect to a single instance of SQL Server with Windows Authentication you can use the following:

$srvObject = New-Object Microsoft.SqlServer.Management.Smo.Server "MyServer"

Once you hit enter, it will make a connection to your instance and then the variable $srvObject will contain properties and methods that you can use to manipulate the server-level objects of your instance. If you recall from the previous pillars in this series, this is where Get-Member comes in real handy for exploring. As an example let’s say you wanted to get similar information to what SELECT @@VERSION returns in T-SQL. You simply need to know the properties that hold this information and pipe the object to select:

$srvObject | select Product, VersionString, Edition, OSVersion 

In PowerShell it is good to start out with the mindset “if I write it for one server, might as well write it to handle multiple”. What I mean by this is you get to the point of developing a script into a tool. If I wanted to turn the above bit of code into something I can reuse, and run for one instance or 50 instances it just takes a bit of work and you are there before you know it:

function Get-SqlVersion {
 param (
 [string[]]$server
 )
 $allServers = @()
 $props = @{ServerName="";Product="";Version="";Edition="";OSVersion=""}
 foreach ($s in $server) {
 $srvObject = New-Object Microsoft.SqlServer.Management.Smo.Server $s

 $cserver = New-Object psobject -Property $props
 $cserver.ServerName = $s
 $cserver.Product = $srvObject.Product
 $cserver.Version = $srvObject.VersionString
 $cserver.Edition = $srvObject.Edition
 $cserver.OSversion = $srvObject.OSVersion
 $allServers += $cserver
 }
 $allServers
}
Now, don’t let this scare you, as it is simpler than it looks. You could just put two lines inside the foreach loop that create your server object and select the properties, and you would be done. It is best, though, when you start to write functions, that the output of your function is an object. So that is the only additional step I take, using New-Object psobject to create a PowerShell object with the properties ServerName, Product, Version, Edition, and OSVersion. In the event you expand on this function in the future and want to pipe its output to another cmdlet or a custom bit of code, it will be in a more formal object type for you to work against.

Golden Nugget

One of the things I got annoyed with fairly quickly when troubleshooting an instance of SQL Server was having to search through the error log(s). You could be dealing with the default of 6 logs for an instance or up to 99 of them. Now there is some T-SQL code out there of people iterating through each log for you, but I just prefer to use PowerShell. I published this code on my personal blog back in December of 2014. You can find the write-up and code here: Search-SqlErrorLog. It will be good practice for you to try and understand it on your own, but I include help information just in case.

This is one of the few times I wrote a function that only works with one server at a time. You can do some one-liner tricks with the pipeline to easily call it for multiple servers:

"server1","server2" | foreach {Search-SqlErrorLog -server $_ -all -value "^backup"}

The output of this function provides the number of the log it was found in, the date, the process (if noted in the log), and the text found matching the value you provided (which can accept regular expressions; the “^” means the start of the string):


The End

I hope you learned something new in this series on PowerShell, and good scripting to you all.

Categories: DBA Blogs

Migration of Oracle Database to Amazon RDS using Golden Gate

Pythian Group - Fri, 2015-08-28 14:15

Amazon RDS is a web service used to manage databases, like Oracle, in the cloud. Small- and medium-sized enterprises with databases of normal load, volume, and SLA, can certainly leverage the ease and cost efficiency Amazon RDS offers.

There are two other methods that are widely used to migrate databases with minimal downtime: Oracle Data Guard and Oracle GoldenGate. AWS RDS doesn’t support Data Guard, but luckily it does support Oracle GoldenGate. There are some version constraints though.

The following components are involved when migrating a database from on-premises to AWS RDS:

— Source database on premises
— Oracle GoldenGate Hub on EC2 instance
— Target database on AWS RDS

Now there could be different topologies for the above 3 components, but we are just using this topology for simplicity. For details on this topology, refer to this very fine and simple Appendix: Using Oracle GoldenGate with Amazon RDS.

Generally and roughly, the steps used to migrate an on-premises Oracle database to AWS RDS could be as follows:

— Create target database targetdb in AWS RDS with same parameters as that of the source database sourcedb.

— Create same tablespaces on targetdb in AWS RDS as they exist in source database sourcedb.

— Create same non default users on targetdb in AWS RDS as they exist in source database sourcedb.

— Create same non default roles on targetdb in AWS RDS as they exist in source database sourcedb and assign these roles to users on targetdb.

— Export data/objects from the non-default schemas of sourcedb as of a specific SCN (a hedged expdp sketch follows this list)

— Import data/objects into targetdb database

— Configure GoldenGate extract process on sourcedb , for configuration see this

— Configure GoldenGate replicate processes on targetdb , for configuration see this

— Set up Oracle GoldenGate (GG) Hub on EC2 , for configuration see this

— Start GG extract process on sourcedb

— Start the GG Replicat process on targetdb, starting after that SCN, so that it catches up with all changes generated on the sourcedb database during the exp/imp time.

— Then plan the cut-off time for applications to switch to the new AWS RDS database after stopping the Replicat process on targetdb.

— Cleanup of sourcedb.
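
As a hedged sketch of the SCN-consistent export/import step above (the APP schema, the DATA_PUMP_DIR directory object, and the SCN value are illustrative assumptions, and the dump files are assumed to have been made available to the target's directory object; none of this is taken from the post):

-- on sourcedb: capture the SCN that the Replicat will later start after
SQL> select current_scn from v$database;

$ expdp system@sourcedb schemas=APP directory=DATA_PUMP_DIR dumpfile=app_%U.dmp logfile=app_exp.log flashback_scn=1234567890

$ impdp system@targetdb schemas=APP directory=DATA_PUMP_DIR dumpfile=app_%U.dmp logfile=app_imp.log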

These are just the skeleton steps and they need refining and proper planning. It’s always good to first thoroughly test such action plans. But as you can see, Oracle GoldenGate is a viable tool to migrate databases to AWS RDS. Pythian has a full range of skills, experience, and capabilities to oversee such migrations, as it’s our daily routine to use GoldenGate to do migrations. And yes, even if AWS RDS is a cloud service, you still need a DBA :)

Categories: DBA Blogs

Three Hidden Azure SQL Database Gotchas

Pythian Group - Fri, 2015-08-28 13:35

Azure SQL Database is Microsoft’s Database as a Service (DBaaS) platform offering. It allows end users to leverage the power of SQL Server in the cloud without the expense and complexity of building a private infrastructure. Additionally, this offering simplifies database maintenance tasks while providing seamless high availability and disaster recovery capabilities.

Although DBaaS offerings are still crawling out of their infancy, with the correct planning and use cases, implementing an Azure SQL Database solution can be a relatively straightforward process. However, as this platform continues to mature, you can expect to encounter some “Ghosts in the Machine”. Hopefully this post will allow you to avoid some of these unexpected behaviors.

  1. What’s in a name?

Azure SQL Servers all share the same public domain, and access is controlled through IP white-lists and user credentials. Until recently, Azure SQL Database dynamically allocated server names composed of long random strings, both for security purposes and because each Azure server name must be unique globally. However, Microsoft recently provided the ability to allocate specific server names specified by the end user, i.e.

This feature is a more than welcome addition, particularly for organizations who wish to pre-configure connection strings for cloud implementations.

The hidden gotcha resides in the implementation of this feature. Once you create a server with a user-defined name, the Azure cloud reserves that name for you within the Azure fabric. If for any reason you remove the server, you will be unable to recreate it using the same name for at least 5 days. When you attempt to recreate the server, you will receive the message “Specified server name is already used” as depicted below:


Microsoft is aware of this limitation, however, at this time, the only way to correct the situation is to contact Microsoft Support and have them remove the Azure fabric metadata manually.

Additionally, it should be noted that you can only specify a specific Azure SQL Database Server name in the preview portal. This feature is not available in the standard portal or via the New-AzureSqlDatabaseServer Cmdlet in PowerShell.

2. You can change the performance tier at any time, unless you can’t.

One of the fantastic benefits of leveraging Azure SQL Database is the ability to switch service tiers at any time, without service disruption in order to leverage pay per minute costing efficiencies.

Unfortunately, another hidden gotcha may rear its ugly head during the switching process. Organizations that utilize BCP processes against an Azure SQL instance need to be wary when performing a service level switch. BCP operations often simply “Hang” when switching between service levels. The only resolution for this issue is to terminate the process and re-initiate once the tier switch has been completed.

3. I know you’re there, but I can’t see you.

Just like all cloud offerings, Azure SQL Database continues to mature and improve. However, you need to be prepared for some management inconsistencies. The preview portal is aptly named, and although some functions are only available within the preview portal, you may need to frequently revert to the standard portal for a more consistent experience.

As an example, I have a client who switched databases between the standard and premium tiers and vice versa. These databases no longer display in the preview portal at all. However, they do appear correctly in the standard portal, as shown in the CIA-level redacted screen captures below.



Categories: DBA Blogs

Trust and confidence from Pythian

Pythian Group - Fri, 2015-08-28 13:25

Recently I “inherited” some new responsibilities at work. It’s not the first time during my 11 or so years out of the last 16 at Pythian. Throughout my employment at Pythian, I have been continually given new titles based on new roles I have taken on. For me, besides the enjoyment I have been lucky to have at Pythian, this trust and confidence are two of the biggest contributors to one’s longevity with a company.

For Pythian and me, it all started one spring afternoon in about 1998. Paul and Steve had been doing the Pythian-thing for a year or more, and were looking for assistance getting “off-the-ground” so to speak. That endeavour was part of the reason for our new association and it’s been a magic carpet ride since. I did leave at one point for almost 6 years, but returned in early 2011. Between 1998 and 2011, the size of the company changed, but it was still the same old company.

I now manage the day-to-day operations of the consulting group and take pride in the work I do. Touché all you people out there in Pythian-land.

Categories: DBA Blogs

Creating an Oracle Database Cloud Service

Pythian Group - Fri, 2015-08-28 13:10

Back in late June of 2015, Larry Ellison launched several public cloud services, and one of those services was the public DBaaS. Today, I had the opportunity to try out this new service. This blog post will examine how to create it and how to connect to it with sqlcli. As with any cloud service, it all happens in the background, saving you from doing tedious configuration steps to start using your service.


In my case, it took about 30 minutes from when I clicked on “Create Service” to when I could start using my database.

So the first thing that you have to do, obviously, is access the Oracle Cloud My Services application.  If you do not currently have access, speak with your sales rep or cloud administrator, but remember that this application is not free. Once you have access, click on the Oracle Database Cloud Service link and the following page will come up. Click on “Create Service” :

Once you have done that, we need to choose the type of service we will solicit and the billing frequency. As I have talked about in previous posts, it all depends on your business needs and abilities. The difference here between choosing a “Cloud Service” and a “Cloud Service – Virtual Image” is that in the first option, the database and the database instance are created for you, whereas in the “Virtual Image“, you will need to create it yourself, so choose carefully. One of the good things that comes with the first option is that the cloud patching option comes with it, but in the “Virtual Image“, you have to do this yourself.

As of the writing of this post, Oracle offers two database versions – and I chose the latter.



In the Edition section, we get to choose the type of service we will get when choosing the Cloud Software Edition. Unlike the previous one, here we will choose the bells and whistles that you will be licensed to use in this database. I won’t include the differences between the two here, but you can view them in in the PaaS section, under Database. In my case, I just chose the regular Enterprise Edition :

In the details section, we can set the characteristics of the database service. It is important to select the “Compute Shape” correctly, as this is critical to your usage billing. It is also good to know that one OCPU (Oracle CPU) is equivalent to a 3.0 GHz 2012 Intel Xeon with HyperThreading enabled. Also, you will have to add a public SSH key to access your compute node. You can learn how here: how to create one. This is where you will also set the usable storage, your system or administrator password for the database, the name of the SID, the version (in this case, the version chosen earlier), and the name of the PDB. Last, but not least, you will choose your backup destination. In my case, I just chose a local one, but you can choose the Oracle Database Backup Service if you have one.
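
The post links out for the SSH key instructions; as a hedged example (the key file name is just an illustration, not from the post), a suitable key pair can be generated on Linux or macOS like this, after which the contents of the .pub file are pasted into the service creation form:

$ ssh-keygen -t rsa -b 2048 -f ~/.ssh/oracle_cloud_db
$ cat ~/.ssh/oracle_cloud_db.pub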



Last, but not least, you will get a confirmation of the service you are about to create. I didn’t copy this particular screenshot when I created it, but here is a similar one, so you get the gist.


Once you click on create, you can select the service and see the details of the creation process, as well as some other information, like the Public IP, port, etc.

Once the DB and VM are allocated, you need to go back to the Oracle Cloud My Services application  and go to the Oracle Compute Cloud Service console. This is to enable the security rule that will allow us to connect to port 1521 for this DB.



In the page that comes up, go to the Network section, and you will see a set of Security Rules, which you will find disabled.

In my case, I enabled the “dbaas/test-orcl/db/ora_p2_dblistener” rule.


In this particular case – and I want to emphasize this – I am not concerned with security, so I also enabled the Security List for Inbound/Outbound Policy traffic.



Once I had done this, I was ready to connect to my DB via sqlcli, just as I would connect to any other DB:

Renes-iMac:bin Rene$ ./sql system@***.***.****.****:1521:ORCL

SQLcl: Release RC on Fri Aug 21 11:41:42 2015

Copyright (c) 1982, 2015, Oracle. All rights reserved.

Password? (**********?) ************
Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Oracle Label Security option 

SQL&gt; select name from v$database;


SQL&gt; set lines 200 pages 9999



---------- --------------- ---------

SQL&gt; alter session set container=PDB1;

Session altered.


As you can see, it is quite easy to request a database service and start using it. You will have to start building your case to use the public cloud, but once you do, you can see that using a database in the cloud is no different from using one on-premises.

Note– This was originally published on

Categories: DBA Blogs