
Pythian Group

Official Pythian Blog - Love Your Data

Log Buffer #412, A Carnival of the Vanities for DBAs

Fri, 2015-02-27 10:58

This Log Buffer Edition makes its way through the realms of Oracle, SQL Server and MySQL and brings you some of the week's blog posts.


Introducing Oracle Big Data Discovery Part 3: Data Exploration and Visualization


Base64 Encode / Decode with Python (or WebLogic Scripting Tool) by Frank Munz

Why I’m Excited About Oracle Integration Cloud Service – New Video

Reminder: Upgrade Database to by July 2015

SQL Server:

An article about how we underestimate the power of joins and degrade our query performance by not using proper joins

Most large organizations have implemented one or more big data applications. As more data accumulates, internal users and analysts execute more reports and forecasts, which leads to additional queries and analysis, and more reporting.

How do you develop and deploy your database?

A database must be able to maintain and enforce the business rules and relationships in data in order to maintain the data model.

Error handling with try-catch-finally in PowerShell for SQL Server


MySQL Enterprise Monitor 3.0.20 has been released

MySQL Cluster 7.4 is GA!

Connector/Python 2.1.1 Alpha released with C Extension

Worrying about the ‘InnoDB: detected cycle in LRU for buffer pool (…)’ message?

MySQL Cluster 7.4 GA: 200 Million QPS, Active-Active Geographic Replication and more

Categories: DBA Blogs

Virtual CPUs with Google Compute Engine

Wed, 2015-02-25 12:12

Continuing my series on virtual CPU behavior in Amazon Web Services, Amazon Web Services HVM instances, and Microsoft Azure, I'm taking a look at Google Compute Engine (GCE). GCE is a relative newcomer to the public cloud world, having become generally available in December 2013. It does have some interesting features, including transparent maintenance through live migration, and automatic sustained-use discounts without upfront commitments.

Unlike Amazon or Microsoft, Google is very upfront about their vCPU definition.

For the n1 series of machine types, a virtual CPU is implemented as a single hyperthread on a 2.6GHz Intel Sandy Bridge Xeon or Intel Ivy Bridge Xeon (or newer) processor. This means that the n1-standard-2 machine type will see a whole physical core.

I still believe calling such a hyperthread a “virtual CPU” is misleading. When creating a virtual machine in a non-cloud VM platform, 1 virtual CPU = 1 physical core. Plain and simple. But when using a cloud platform, I need 2 virtual CPUs to get that same physical core.


Anyways, off to run some CPU tests. n1-standard-4 is a close match to the m3.xlarge instances previously tested, so I’ll try that.

Getting set up on Google Compute Engine

I had already signed up for Google Compute Engine's free trial and created a project I'm calling marc-cpu-test, so the next step was installing the gcloud compute command-line tools.

[marc@quartz ~]$ gcloud auth login --no-launch-browser
Go to the following link in your browser:

Enter verification code: (redacted)
Saved Application Default Credentials.

You are now logged in as [].
Your current project is [None].  You can change this setting by running:
  $ gcloud config set project PROJECT
[marc@quartz ~]$ gcloud config set project marc-cputest
[marc@quartz ~]$ gcloud config set compute/zone us-central1-a
[marc@quartz ~]$ gcloud compute instances create cpu-test-n4 --image centos-6 --machine-type "n1-standard-4" --zone us-central1-a
Created [].
cpu-test-n4 us-central1-a n1-standard-4 RUNNING
[marc@quartz ~]$ gcloud compute ssh cpu-test-n4
WARNING: You do not have an SSH key for Google Compute Engine.
WARNING: [/usr/bin/ssh-keygen] will be executed to generate a key.
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/marc/.ssh/google_compute_engine.
Your public key has been saved in /home/marc/.ssh/
The key fingerprint is:
Updated [].
Warning: Permanently added '' (RSA) to the list of known hosts.
Warning: Permanently added '' (RSA) to the list of known hosts.

OK, instance all set and connected. As a CentOS 6 image it doesn’t allow SSH root logins by default, so attempting to set up a gcloud environment as a root user will get you “permission denied” errors on SSH. Serves me right for trying to run these tools as root in the first place :-).

Looking around

Checking out what we got:

[marc@cpu-test-n4 ~]$ egrep '(processor|model name|cpu MHz|physical id|siblings|core id|cpu cores)' /proc/cpuinfo
processor       : 0
model name      : Intel(R) Xeon(R) CPU @ 2.60GHz
cpu MHz         : 2600.000
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 2
processor       : 1
model name      : Intel(R) Xeon(R) CPU @ 2.60GHz
cpu MHz         : 2600.000
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 2
processor       : 2
model name      : Intel(R) Xeon(R) CPU @ 2.60GHz
cpu MHz         : 2600.000
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 2
processor       : 3
model name      : Intel(R) Xeon(R) CPU @ 2.60GHz
cpu MHz         : 2600.000
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 2

Google has redacted the exact CPU model numbers, but has clearly marked this as a 2-core system with core IDs 0 and 1.
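To make the sibling relationships easier to read, here's a quick sketch (my own, not from the original post) that groups logical CPUs by their (physical id, core id) pair. The heredoc reproduces the n1-standard-4 topology above; on a live system you can feed /proc/cpuinfo in directly instead.

```shell
# Group logical CPUs by (physical id, core id) to show which "virtual CPUs"
# are hyperthread siblings. Sample data matches the n1-standard-4 output.
cat <<'EOF' > cpuinfo.sample
processor       : 0
physical id     : 0
core id         : 0
processor       : 1
physical id     : 0
core id         : 0
processor       : 2
physical id     : 0
core id         : 1
processor       : 3
physical id     : 0
core id         : 1
EOF
awk -F':' '
  /^processor/   { cpu  = $2 + 0 }
  /^physical id/ { phys = $2 + 0 }
  /^core id/     { key = "socket " phys " core " ($2 + 0)
                   cpus[key] = cpus[key] " cpu" cpu }
  END { for (k in cpus) print k ":" cpus[k] }
' cpuinfo.sample | sort
```

For the sample above this prints one line per physical core, listing the two hyperthreads that share it (cpu0 with cpu1, cpu2 with cpu3).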

The single-CPU case
[marc@cpu-test-n4 ~]$ taskset -pc 0 $$
pid 1558's current affinity list: 0-3
pid 1558's new affinity list: 0
[marc@cpu-test-n4 ~]$  dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null
2170552320 bytes (2.2 GB) copied, 14.3427 s, 151 MB/s
[marc@cpu-test-n4 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 29.3081 s, 74.1 MB/s
2170552320 bytes (2.2 GB) copied, 29.3065 s, 74.1 MB/s

We get a nice boost in raw CPU numbers compared to the 120 MB/s I saw in AWS. With two processes sharing this CPU, we see a tiny bit less than half the throughput for each.

Sharing the cores (or trying to)
[marc@cpu-test-n4 ~]$ taskset -pc 0,1 $$
pid 1558's current affinity list: 0
pid 1558's new affinity list: 0,1
[marc@cpu-test-n4 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
[1] 1803
[2] 1805
[marc@cpu-test-n4 ~]$ 2170552320 bytes (2.2 GB) copied, 14.6959 s, 148 MB/s
2170552320 bytes (2.2 GB) copied, 14.7139 s, 148 MB/s

This is interesting: we see almost full-speed throughput in spite of processors 0 and 1 being said to share a physical core. The situation is the same with processors 0 and 2:

[marc@cpu-test-n4 ~]$ taskset -pc 0,2 $$
pid 1558's current affinity list: 0,1
pid 1558's new affinity list: 0,2
[marc@cpu-test-n4 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
[1] 1830
[2] 1833
[marc@cpu-test-n4 ~]$ 2170552320 bytes (2.2 GB) copied, 14.6683 s, 148 MB/s
2170552320 bytes (2.2 GB) copied, 14.6692 s, 148 MB/s

Is the CPU scheduler ignoring my taskset commands? I ran mpstat with 2-second samples during the test to see actual CPU usage:

[marc@cpu-test-n4 ~]$ mpstat -P ALL 2
06:08:44 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
06:08:46 PM  all   46.31    0.00    3.75    0.00    0.00    0.00    0.00    0.00   49.94
06:08:46 PM    0   93.50    0.00    6.50    0.00    0.00    0.00    0.00    0.00    0.00
06:08:46 PM    1   92.00    0.00    8.00    0.00    0.00    0.00    0.00    0.00    0.00
06:08:46 PM    2    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
06:08:46 PM    3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
06:08:52 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
06:08:54 PM  all   46.75    0.00    3.25    0.00    0.00    0.00    0.00    0.00   50.00
06:08:54 PM    0   93.47    0.00    6.53    0.00    0.00    0.00    0.00    0.00    0.00
06:08:54 PM    1    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
06:08:54 PM    2   93.50    0.00    6.50    0.00    0.00    0.00    0.00    0.00    0.00
06:08:54 PM    3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00

So the taskset commands are working: when we ask for CPUs 0 and 1, we get them, but the throughput shows that cores aren't actually being shared. This means that the CPUs in the virtual machine are not statically bound to hardware threads as seen under AWS. I'd call that a win, as it gives more consistent performance even if the guest operating system is forced to make poor CPU scheduling decisions, as in this case.

[marc@cpu-test-n4 ~]$ taskset -pc 0-3 $$
pid 1558's current affinity list: 0,2
pid 1558's new affinity list: 0-3
[marc@cpu-test-n4 ~]$ for i in {1..4}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 22.9823 s, 94.4 MB/s
2170552320 bytes (2.2 GB) copied, 22.9914 s, 94.4 MB/s
2170552320 bytes (2.2 GB) copied, 22.9915 s, 94.4 MB/s
2170552320 bytes (2.2 GB) copied, 23.1333 s, 93.8 MB/s

This is more like the throughput we would expect with two shared cores.

Lessons learned

Over the course of these tests, I’ve discovered a few things:

  • Like its competitors, Google shares physical cores between virtual CPUs, but it is very upfront about this behavior.
  • Actual throughput for a simple gzip workload is excellent.
  • Google Compute Engine has an abstraction layer in front of CPUs that dynamically schedules tasks between CPU threads, in addition to the regular scheduler in the virtual machine. In my testing, it allocates tasks efficiently across CPU cores, even when the OS scheduler is configured suboptimally.
Categories: DBA Blogs

Exadata X5 – A Practical Point of View of the New Hardware and Licensing

Wed, 2015-02-25 12:10

Oracle recently announced its latest iteration of Exadata – X5-2. It includes a refresh of the hardware to the most recent Xeon® E5-2699 v3 CPUs. These new CPUs boost the total core count in a full rack to 288. This is higher than the current 8-socket “big machine” version, the X4-8, which has only 240 cores.
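As a sanity check on those core counts, the arithmetic below uses my own assumptions rather than anything from the announcement: a full X5-2 rack has 8 two-socket database servers and the E5-2699 v3 is an 18-core part, while a full X4-8 rack has two 8-socket servers with 15-core E7-8895 v2 CPUs.

```shell
# Where the headline core counts come from (assumed rack layouts):
echo "X5-2 full rack: $((8 * 2 * 18)) cores"   # 8 servers x 2 sockets x 18 cores = 288
echo "X4-8 full rack: $((2 * 8 * 15)) cores"   # 2 servers x 8 sockets x 15 cores = 240
```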

But the most exciting part is the all-flash version of Exadata. In the previous generation – X4 – Oracle had to switch from 15K-RPM drives to 10K-RPM drives in order to boost capacity from 600 GB to 1200 GB per hard drive and keep disk space higher than the flash cache size. At the time of the X4 announcement, we were already wondering why Oracle was still offering high-speed disks rather than switching to all flash, and now we know why: that type of high-performance flash wasn't quite ready.

Maintaining high IO rates over long periods of time required some changes to the ILOM in order to manage cooling fan speeds based on many individual temperature sensors inside the flash cards (details). Removing the SAS controller and using the new NVMe connectivity resulted in much higher bandwidth per drive – 3.2 GBytes/sec vs. the old 1.2 GBytes/sec SAS.

With temperature and bandwidth sorted out, we now have a super-high-performance option (EF – Extreme Flash) for Exadata, which delivers a stunning 263 GB/sec uncompressed scan speed in a full rack. The difference in performance between the High Capacity option and the EF flash option is now much greater, making the high-performance option in Exadata X5 viable. In Exadata X4 it made so little difference that it was pointless.

x4 vs x5

The one thing I wonder about with the X5 announcement is why the X5-2 storage server still uses the very old and quite outdated 8-core CPUs. I've seen many cases where a Smart Scan on an HCC table is CPU-bound on the storage server even when reading from spinning disk. I am going to guess that there's some old CPU inventory to clean up. But that may not end up being such a problem (see the “all columnar” flash cache feature).

But above all, the most important change is the incremental licensing option. With 36 cores per server, even the 1/8th-rack configuration ran to multiple millions of dollars in licenses, and in many cases was too much for the problem at hand.

The new smallest configuration is:

  • 1/8th rack, with 2 compute nodes
  • 8 cores enabled per compute node (16 total)
  • 256 GB RAM per node (upgradable to 768 GB per node)
  • 3 storage servers with only half the cores, disks and flash enabled

Then you can license additional cores as you need them, two cores at a time, similar to how the ODA licensing option works. You cannot reduce licensed cores.
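A rough sketch of how that incremental model behaves over time (the demand numbers are hypothetical; the 16-core floor is the 2-node, 8-cores-per-node minimum above):

```shell
# Licensed cores start at the 1/8th-rack minimum and only ever grow,
# two at a time; a later drop in demand does not reduce the license count.
licensed=16                 # 2 compute nodes x 8 cores minimum
for needed in 18 22 20; do
  while [ "$licensed" -lt "$needed" ]; do
    licensed=$((licensed + 2))
  done
  echo "needed=$needed licensed=$licensed"
done
```

Note how the final step (demand back down to 20) leaves the licensed count at 22: the ratchet only turns one way.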

The licensing rule changes go even further. Now you can mix and match compute and storage servers to create even more extreme options. Some non-standard examples:

  • Extreme Memory – more compute nodes with max RAM, reduced licensed cores
  • Extreme Storage – replace compute node with storage nodes, reduced licensed cores

x5 custom
Link to video

In conclusion, the Oracle Exadata X5 configuration options and the changes they bring to licensing allow an architect to craft a system that meets a wide range of needs and allows for easy, small-step increments in the future, potentially without any hardware changes.

There are many more exciting changes in Oracle 12c, Exadata X5 and the new storage server software which I may cover in the future as I explore them in detail.

Categories: DBA Blogs

Log Buffer #411, A Carnival of the Vanities for DBAs

Wed, 2015-02-25 12:00

This Log Buffer Edition brings you some blog posts from Oracle, SQL Server and MySQL.


Suppose you have a global zone with multiple zpools that you would like to convert into a native zone.

The digital revolution is creating abundance in almost every industry—turning spare bedrooms into hotel rooms, low-occupancy commuter vehicles into taxi services, and free time into freelance time.

Every time I attend a conference, the Twitter traffic about said conference is obviously higher.  It starts a couple weeks or even months before, builds steadily as the conference approaches, and then hits a crescendo during the conference.

Calling All WebLogic Users: Please Help Us Improve WebLogic Documentation!

Top Two Cloud Security Concerns: Data Breaches and Data Loss

SQL Server:

This article describes a way to identify the user who truncated a table and how you can recover the data.

When SQL Server 2014 was released, it included Hekaton, Microsoft’s much talked about memory-optimized engine that brings In-Memory OLTP into play.

Learn how you can easily spread your backup across multiple files.

Daniel Calbimonte has written a code comparison for MariaDB vs. SQL Server as it pertains to how to comment, how to create functions and procedures with parameters, how to store query results in a text file, how to show the top n rows in a query, how to use loops, and more.

This article shows a simple way we managed to schedule index rebuilds and reorganizations for a SQL instance with 106 databases used by one application, using a scheduled job.


How to setup a PXC cluster with GTIDs (and have async slaves replicating from it!)

vCloud Air and business-critical MySQL

MySQL Dumping and Reloading the InnoDB Buffer Pool

How to benchmark MongoDB

MySQL Server on SUSE 12

Categories: DBA Blogs

SQL Server 2014 Cumulative Update 6

Wed, 2015-02-25 11:59

Hello everyone,

Just a quick note to let you know that this week, while most of North America was enjoying a break, Microsoft released the 6th cumulative update for SQL Server 2014. This update contains fixes for 64 different issues, distributed as follows:

SQL 2014 Cumulative Update 6

As the name implies, this is a cumulative update, which means it is not necessary to install the previous five if you don't have them. Please remember to test any update thoroughly before applying it to production.

The cumulative update and the full release notes can be found here:



Categories: DBA Blogs

Detecting Source of MySQL Queries with Comments

Tue, 2015-02-17 19:22

As a MySQL DBA I already know the data changes that happen on my system. I have logs for that.

However, it’s a common problem that, several years into the life of an application, the current developers won’t know where in the codebase queries come from. It’s often hard for them to find the location in the code, especially when queries are formed dynamically: the pattern I show them to optimize doesn’t match anything in the code.

I stumbled on a trick a couple years ago that has been invaluable in tracking down these problematic queries: query comments.

Here’s an example:

Generally, when a query shows up in a slow query log, it looks something like this:

# Time: 150217 10:26:01
# User@Host: comments[comments] @ localhost []  Id:    13
# Query_time: 0.000231  Lock_time: 0.000108 Rows_sent: 3  Rows_examined: 3
SET timestamp=1424186761;
select * from cars;

That logging shows me who executed the query (the comments user), the server it was executed from (localhost in this case), and what time it was executed (who, where, when).

What this doesn’t tell me is where in my codebase this query lives. For example, if I want to change that select * to a more targeted select column1, column2, I may not know where to find it.

This is where comments help.

Comments can be any of three styles:

select * from cars; # comment to end of line here
select * from cars; -- comment to end of line here
select * from /* inline comment here */ cars;

When you add a comment to your query, it is captured along with the query. Here’s an example.

On the command line:

mysql> select * from cars; # valerie - command line - testing comments

In the slow log:

# Time: 150217 11:25:24
# User@Host: comments[comments] @ localhost []  Id:     3
# Query_time: 0.000246  Lock_time: 0.000119 Rows_sent: 3  Rows_examined: 3
SET timestamp=1424190324;
select * from cars # valerie - command line - testing comments;

This is especially useful if you are forming queries dynamically and can add the name of the piece of code, function, user, etc. to the comment. Those comments would look something like this in the slow log:

# Time: 150217 11:32:04
# User@Host: comments[comments] @ localhost []  Id:     3
# Query_time: 0.000225  Lock_time: 0.000104 Rows_sent: 3  Rows_examined: 3
SET timestamp=1424190724;
select * from cars # userid 7695424 - index.php?search=true - display function;
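One way to automate this when queries are formed dynamically is to append the tag at query-generation time. Here's a minimal sketch (my own, not from the post; the origin string is a hypothetical example) using the inline comment style shown earlier, so the tag survives regardless of where the statement terminator lands:

```shell
# Append a tracking comment naming the query's origin before sending it
# to the server; the comment is carried through into the slow query log.
tag_query() {
  printf '%s /* origin: %s */' "$1" "$2"
}

tag_query "select * from cars" "userid 7695424 - index.php?search=true - display function"
# prints: select * from cars /* origin: userid 7695424 - index.php?search=true - display function */
```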


Categories: DBA Blogs

Pythian’s Annual MySQL Community Dinner at Pedro’s

Tue, 2015-02-17 08:54

Once again, Pythian is organizing an event that by now may be considered a tradition: The MySQL community dinner at Pedro’s! This dinner is open to all MySQL community members since many of you will be in town for Percona Live that week. Here are the details:

What: The MySQL Community Dinner

When: Monday April 13, 2015 –  7:00 PM at Pedro’s (You are welcome to show up later, too!)

Where: Pedro’s Restaurant and Cantina – 3935 Freedom Circle, Santa Clara, CA 95054

Cost: Tickets are $35 USD, Includes Mexican buffet, non-alcoholic drinks, taxes, and gratuities (Pedro’s Dinner Menu 2015)

How: RSVP through Eventbrite


Laine Campbell

Derek Downey

Gillian Gunson

Miklos Szel

Marco Tusa

Mark Filipi

Alkin Tezuysal

Brian Cain

Brian Kelly

Joe Muraski

Patrick Pearson

Looking forward to seeing you all at the event!


Categories: DBA Blogs

Log Buffer #409, A Carnival of the Vanities for DBAs

Mon, 2015-02-16 10:29

This Log Buffer Edition sheds light on some of the nifty blog posts of the week from Oracle, SQL Server and MySQL.


Patch Set Update: Hyperion Data Relationship Management

The Hitchhiker’s Guide to the EXPLAIN PLAN Part 33: The mother of all SQL antipatterns?

MongoDB as a Glassfish Security Realm

E-Business Suite customers must ensure that their database remains on a level that is covered by Error Correction Support (ECS)

EM12c: How to Retrieve Passwords from the Named Credentials

SQL Server:

How does a View work on a Table with a Clustered Columnstore Index ?

How do you develop and deploy your database?

Microsoft Azure Storage Queues Part 3: Security and Performance Tips

Stairway to SQL Server Security Level 6: Execution Context and Code Signing

Centralize Your Database Monitoring Process


New Galera Cluster version is now released! It includes patched MySQL server 5.6.21 and Galera replication provider 3.9

Shinguz: Nagios and Icinga plug-ins for MySQL 1.0.0 have been released

The next release of MongoDB includes the ability to select a storage engine, the goal being that different storage engines will have different capabilities/advantages, and users can select the one most beneficial to their particular use case. Storage engines are cool.

The MySQL grant syntax allows you to specify dynamic database names using the wildcard characters.

Oracle‘s 10 commitments to MySQL – a 5 year review

Categories: DBA Blogs

Log Buffer #410, A Carnival of the Vanities for DBAs

Mon, 2015-02-16 10:28

This Log Buffer Edition spreads the love of databases just before Valentine’s Day. Lovely blog posts from Oracle, SQL Server and MySQL are here for you to love.


Creating a Mobile-Optimized REST API Using Oracle Service Bus by Steven Davelaar.

GROUP BY – wrong results in

Using Edition-Based Redefinition to Bypass Those Pesky Triggers

It’s easy to make mistakes, or overlook defects, when constructing parallel queries – especially if you’re a developer who hasn’t been given the right tools to make it easy to test your code.

If you have a sorted collection of elements, how would you find the index of a specific value?

SQL Server:

How to use the IsNothing Inspection Function in SSRS

What better way to learn how to construct complex CHECK CONSTRAINTs, use the SQL 2012 window frame capability of the OVER clause and LEAD analytic function, as well as how to pivot rows into columns using a crosstab query?

SQL Server’s GROUP BY clause provides you a way to aggregate your SQL Server data and to group data on a single column, multiple columns, or even expressions. Greg Larsen discusses how to use the GROUP by clause to summarize your data.

Surely, we all know how T-SQL Control-of-flow language works? In fact it is surprisingly easy to get caught out.

Resilient T-SQL code is code that is designed to last, and to be safely reused by others.


NoSQL databases are gaining popularity. MongoDB, one of the most established among them, uses a JSON data model and offers great scalability and ease of use thanks to its dynamic schemas.

Is upgrading RDS like a shit-storm that will not end?

Over the last few MySQL releases the size of the MySQL package has increased, and it looks like the trend is continuing.

This article details the proper method of load balancing either a Percona XTRADB Cluster (PXC) or MariaDB Cluster.

One common job for a DBA is working with a Development Team member on a batch loading process that is taking more time than expected.

Categories: DBA Blogs

Slimming Down Oracle RAC 12c’s Resource Footprint

Sun, 2015-02-08 09:00

I’ve been working on setting up a demo for my upcoming presentation on application continuity at RMOUG training days later this month. The challenge is to get a multi-node cluster, plus a load generator, and a host OS, to fit on a memory-constrained laptop.

According to the Oracle grid installation guide, 4GB per virtual host is the minimum requirement. However, with a few tweaks I’ve been able to get the full stack to run in 2GB of memory. For anyone else out there installing 12c clusters into virtual machines, here are a few tips.

But first the disclaimer: these changes are mostly unsupported by Oracle, and intended for test and demo databases. They can potentially cause unexpected behaviour, hangs, or crashes. Don’t try this in production!

  • Grid Infrastructure management repository database (GIMR): This is a full Oracle database that stores operating-system workload metrics generated by the cluster health monitor, for use by Oracle QoS management and troubleshooting. Being a full database, it has a large memory and CPU footprint. I originally installed Oracle skipping the database on install, and upgraded to without it. However, it looks like it’s no longer possible to skip the installation in the GUI. My colleague Gleb suggests adding -J-Doracle.install.mgmtDB=false on the installer command line to skip it.
  • Cluster health monitor (CHM): this tool collects a myriad of workload-related metrics to store in the GIMR, and it uses a surprisingly high amount of CPU: it was the top CPU consumer in my VM before removal. It can be disabled fairly easily, with a hat tip to rocworks:

    $ crsctl stop res ora.crf -init
    # crsctl delete res ora.crf -init
  • Trace File Analyzer Collector (TFA): collects log and trace files from all nodes and products into a single location. Unfortunately it’s written in Java with its own Java Virtual Machine, again requiring a large memory footprint for the heap etc. It can be removed with a single command, though note that next time you run (patching for example) it will reinstall itself.

    # tfactl uninstall
  • Cluster Verification Utility (CVU): As you install Oracle Grid Infrastructure, the CVU tool automatically runs, pointing out configuration issues that may affect system operation (such as running with under 4GB of RAM). In Oracle, it also gets scheduled to run automatically every time the cluster is started, and periodically after that. The CVU and its checks use CPU and RAM resources, and are better run manually when such resources are limited. It’s also a quick removal:

    $ srvctl cvu stop
    $ srvctl cvu disable
  • OC4J: Every Oracle 12c grid infrastructure install contains OC4J, Oracle’s old Java J2EE web application server, since replaced by WebLogic. (And no, please don’t make me install WebLogic too now, Oracle!) I’m honestly not sure what it’s used for, but I’ve been able to disable it without any obvious ill effects:

    $ srvctl stop oc4j
    $ srvctl disable oc4j
  • ASM memory target: as of 12c, the ASM instance has a default memory target of 1 GB, a big jump from the 256 MB of Oracle 11g. And if you set a lower target, you’ll find it’s ignored unless it’s overridden with a hidden parameter. I’ve set it to 750 MB with good results, and it can possibly be set even lower for light-utilization workloads:

    $ sqlplus "/ as sysasm"
    alter system set "_asm_allow_small_memory_target"=true scope=spfile;
    alter system set memory_target=750m scope=spfile;
    alter system set memory_max_target=750m scope=spfile;
    # service ohasd stop
    # service ohasd start

A non-memory issue I’ve run into is the VKTM (virtual keeper of time) background process using large amounts of CPU time in both ASM and database instances. I’ve noticed it to be especially pronounced in virtual environments and in Oracle Enterprise Linux 6. I’ve ended up disabling it completely without obvious ill effects, but as always, don’t try this on your “real” production clusters.

alter system set "_disable_highres_ticks"=TRUE scope=spfile;

(Hat tip to MOS community discussion 3252157, also this IBM slide deck)

Additionally, Jeremy Schneider has taken on the biggest remaining GI memory user, the Oracle cluster synchronization service daemon (OCSSD). This is an important cluster management process, and Jeremy figured out a way to unlock its memory in the gdb debugger, allowing it to be swapped out. My own tests were less successful: the process wasn’t swapped out even after trying his changes. But his blog post is worth a read, and others may have more success than I did.

I also noted that during the link phase of installation and patching, the ld process alone takes over 1GB of RAM. So either shut down clusterware or add swap, and wait while it links.

So to wrap up, I’ve managed to get a full Oracle GI stack including database to run in a virtual machine with 2GB RAM. Readers, any other tips to put the goliath that is Oracle GI on a diet?

Categories: DBA Blogs

Log Buffer #408, A Carnival of the Vanities for DBAs

Thu, 2015-01-29 21:45

This Log Buffer Edition covers innovative blog posts from the various fields of Oracle, MySQL and SQL Server. Enjoy!


A user reported an ORA-01114 and an ORA-27069 in a 3rd party application running against an Oracle 11.1 database.

Oracle SOA Suite 12c: Multithreaded instance purging with the Java API.

Oracle GoldenGate for Oracle Database has introduced several features in Release

Upgrade to 12c and Plugin – one fast way to move into the world of Oracle Multitenant.

The Oracle Database Resource Manager (the Resource Manager) is an infrastructure that provides granular control of database resources allocated to users, applications, and services, enabling you to manage multiple workloads that are contending for system and database resources.

SQL Server:

Database ownership is an old topic for SQL Server pro’s.

Using T-SQL to Perform Z-Score Column Normalization in SQL Server.

The APPLY operator allows you to join a record set with a function, and apply the function to every qualifying row of the table (or view).

Dynamic Management Views (DMVs) are a significant and valuable addition to the DBA’s troubleshooting armory, laying bare previously unavailable information regarding the under-the-covers activity of your database sessions and transactions.

Grant Fritchey reviews Midnight DBA’s Minion Reindex, a highly customizable set of scripts that take on the task of rebuilding and reorganizing your indexes.


It’s A New Year – Take Advantage of What MySQL Has To Offer.

MySQL High Availability and Disaster Recovery.

MariaDB Galera Cluster 10.0.16 now available.

Multi-threaded replication with MySQL 5.6: Use GTIDs!

MySQL and the GHOST: glibc gethostbyname buffer overflow.

Categories: DBA Blogs

From 0 to Cassandra – An Exhaustive Approach to Installing Cassandra

Thu, 2015-01-29 21:44

All around the Internet you can find lots of guides on how to install Cassandra on almost every Linux distro around. But normally all of this information is based on the packaged versions, and omits some parts that are deemed essential for proper Cassandra functioning.

Note: If you are adding a machine to an existing cluster, please approach this guide with caution and replace the configurations recommended here with the ones you already have on your cluster, especially the Cassandra configuration.

Without further conversation, let’s start!


Start your machine and install the following:

  • ntp (Packages are normally ntp, ntpdate and ntp-doc)
  • wget (Unless you have your packages copied over via other means)
  • vim (Or your favorite text editor)

Retrieve the following packages

Set up NTP

This can be more or less dependent on your system, but the following commands should do it (you can also check this guide):

~$ chkconfig ntpd on
~$ ntpdate
~$ service ntpd start
Set up Java (Let’s assume we are doing this in /opt)

Extract Java and install it:

~$ tar xzf [java_file].tar.gz
~$ update-alternatives --install /usr/bin/java java /opt/java/bin/java 1 

Check that it is installed:

~$ java -version
java version "1.7.0_75"
Java(TM) SE Runtime Environment (build 1.7.0_75-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.75-b04, mixed mode)

Let’s put JNA into place

~$ mv jna-VERSION.jar /opt/java/lib
Set up Cassandra (Let’s assume we are doing this in /opt)


Extract Cassandra:

~$ tar xzf [cassandra_file].tar.gz

Create Cassandra Directories:

~$ mkdir /var/lib/cassandra
~$ mkdir /var/lib/cassandra/commitlog
~$ mkdir /var/lib/cassandra/data
~$ mkdir /var/lib/cassandra/saved_caches
~$ mkdir /var/log/cassandra
Configuration

Linux configuration
~$ vim /etc/security/limits.conf

Add the following:

root soft memlock unlimited
root hard memlock unlimited
root - nofile 100000
root - nproc 32768
root - as unlimited

On CentOS, RHEL and OEL, also set the following in /etc/security/limits.d/90-nproc.conf:

* - nproc 32768

Add the following to the sysctl file:

~$ vim /etc/sysctl.conf
vm.max_map_count = 131072

Finally (Reboot also works):

~$ sysctl -p
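Before starting Cassandra, it helps to sanity-check that the limits.conf values above actually took effect for the Cassandra user's session (this is what Cassandra itself relies on when it calls mlockall via JNA). A minimal Python sketch, assuming a POSIX system with the stdlib `resource` module:

```python
import resource

def current_limits():
    """Return the soft/hard limits Cassandra cares about, as set in limits.conf."""
    names = {
        "memlock": resource.RLIMIT_MEMLOCK,  # should be unlimited for JNA mlockall
        "nofile": resource.RLIMIT_NOFILE,    # should be >= 100000
        "nproc": resource.RLIMIT_NPROC,      # should be >= 32768
        "as": resource.RLIMIT_AS,            # should be unlimited
    }
    return {name: resource.getrlimit(rl) for name, rl in names.items()}

def is_unlimited(limit):
    """True when a limit value means 'unlimited'."""
    return limit == resource.RLIM_INFINITY

if __name__ == "__main__":
    for name, (soft, hard) in current_limits().items():
        print(name,
              "soft:", "unlimited" if is_unlimited(soft) else soft,
              "hard:", "unlimited" if is_unlimited(hard) else hard)
```

Run it as the user that will own the Cassandra process; if memlock is not unlimited, the mlockall check later in this guide will fail.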

Firewall: the following ports must be open:

# Internode Ports
7000    Cassandra inter-node cluster communication.
7001    Cassandra SSL inter-node cluster communication.
7199    Cassandra JMX monitoring port.
# Client Ports
9042    Cassandra client port (Native).
9160    Cassandra client port (Thrift).
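After opening these ports, reachability can be verified from another node. A small stdlib-only sketch (the port list comes from the table above; the host you pass is up to you):

```python
import socket

# Ports from the table above
CASSANDRA_PORTS = (7000, 7001, 7199, 9042, 9160)

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_node(host):
    """Map each Cassandra port to whether it is reachable on the given host."""
    return {port: port_open(host, port) for port in CASSANDRA_PORTS}
```

Running `check_node("<IP>")` from a peer before joining it to the ring catches most firewall misconfigurations early.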

Note: Some/most guides tell you to disable swap. I think of swap as an acrobat's safety net: it should never have to be used, but it is there in case of need. As such, I never disable it and instead set a low swappiness (around 10). You can read more about it here and here.

 Cassandra configuration

Note: Cassandra has a LOT of settings, these are the ones you should always set if you are going live. Lots of them depend on hardware and/or workload. Maybe I’ll write a post about them in the near future. In the meantime, you can read about them here.

~$ vim /opt/cassandra/conf/cassandra.yaml

Edit the following fields:

cluster_name: <Whatever you would like to call it>
data_file_directories: /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog

saved_caches_directory: /var/lib/cassandra/saved_caches

# Assuming this is your first node, this should be reachable by other nodes
seeds: "<IP>"

# This is where you listen for intra node communication
listen_address: <IP>

# This is where you listen to incoming client connections
rpc_address: <IP>

endpoint_snitch: GossipingPropertyFileSnitch
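When provisioning several nodes it helps to script these edits instead of repeating them in vim. A minimal sketch that rewrites simple top-level `key: value` settings in a cassandra.yaml-style file; it deliberately uses only the stdlib (no YAML library) and does not handle nested options such as `seeds`:

```python
import re

def set_yaml_options(text, options):
    """Replace top-level 'key: value' lines in a cassandra.yaml-style text.

    Only simple scalar options are handled; nested keys and lists are left alone.
    Options not already present are appended at the end.
    """
    for key, value in options.items():
        pattern = re.compile(r"^(%s):.*$" % re.escape(key), re.MULTILINE)
        text, n = pattern.subn(r"\1: %s" % value, text)
        if n == 0:  # option not present yet: append it
            text += "\n%s: %s" % (key, value)
    return text

sample = "cluster_name: Test Cluster\nrpc_address: localhost\n"
print(set_yaml_options(sample, {"cluster_name": "prod",
                                "listen_address": "10.0.0.5"}))
```

For anything beyond flat scalar settings, a real YAML parser is the safer choice.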

Edit the snitch property file:

~$ vim  /opt/cassandra/conf/

Add the DC and the RACK the server is in. Ex:


Finally make sure your logs go to /var/log/cassandra:

~$ vim /opt/cassandra/conf/logback.xml

Start Cassandra

~$ service cassandra start

You should see no errors here. Wait a bit, then:

~$ grep  JNA /var/log/cassandra/system.log
INFO  HH:MM:SS JNA mlockall successful

Then check the status of the ring:

~$ nodetool status
Datacenter: DC1
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address  Load       Owns    Host ID                               Token                                    Rack
UN  <IP>     140.59 KB  100.0%  5c3c697f-8bfd-4fb2-a081-7af1358b313f  0                                        RAC
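For monitoring, this `nodetool status` output is easy to parse with a few lines of stdlib Python. A sketch (the two-letter prefix is status plus state, e.g. `UN` = Up/Normal):

```python
def parse_nodetool_status(output):
    """Extract (status, state, address) tuples from `nodetool status` output.

    Node lines start with a two-letter code: U/D (Up/Down) followed by
    N/L/J/M (Normal/Leaving/Joining/Moving).
    """
    nodes = []
    for line in output.splitlines():
        parts = line.split()
        if parts and len(parts[0]) == 2 and parts[0][0] in "UD" and parts[0][1] in "NLJM":
            nodes.append((parts[0][0], parts[0][1], parts[1]))
    return nodes
```

A cron job that alerts when any node reports something other than `("U", "N", ...)` is a cheap first line of defence.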

Create a keyspace and a table, and insert some data:

~$ cqlsh xxx.yy.zz.ww

cqlsh> CREATE KEYSPACE sandbox WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'DC1' : 1};
Should give no errors
cqlsh> USE sandbox;
cqlsh:sandbox> CREATE TABLE data (id uuid, data text, PRIMARY KEY (id));
cqlsh:sandbox> INSERT INTO data (id, data) values (c37d661d-7e61-49ea-96a5-68c34e83db3a, 'testing');
cqlsh:sandbox> SELECT * FROM data;

 id                                   | data
--------------------------------------+---------
 c37d661d-7e61-49ea-96a5-68c34e83db3a | testing

(1 rows)

And we are done, you can start using your Cassandra node!

Categories: DBA Blogs

Log Buffer #407, A Carnival of the Vanities for DBAs

Tue, 2015-01-27 08:32

This Log Buffer Edition keeps the aim high and brings a few of the best blog posts from Oracle, SQL Server and MySQL.


Oracle:

3 Modes for Moving Data to the BI Applications DW from a Source Application Database.

JSON for APEX Developers.

Neelakanth Nadgir posted a useful utility that prints out various statistics about the ZFS Adaptive Replacement Cache (ARC).

Obtaining Bonus Depreciation Methods for Oracle Fixed Assets.

Existing News – Java Cloud Service just got an update – WebLogic Server 12.1.3

SQL Server:

Tracking Database DDL Changes with SQLVer.

While a diminished level of control is certainly a factor to consider when contemplating migration of on-premises systems to Microsoft Azure, especially when dealing with PaaS resources such as Azure SQL Database, you have a range of techniques at your disposal that allow you to control and monitor both the status of and access to your Azure-resident services.

A Custom Execution Method – Level 19 of the Stairway to Integration Services

When a SQL Server database is operating smoothly and performing well, there is no need to be particularly aware of the transaction log, beyond ensuring that every database has an appropriate backup regime and restore plan in place.

SQL Server Reporting Services Basics: Deploying Reports


MySQL:

Mo’ Data, Mo’ Problems

FromDual.en: FromDual Backup Manager for MySQL 1.2.1 has been released

Tracking MySQL query history in long running transactions

MySQL Cluster 7.4.3 RELEASE CANDIDATE now available

Streaming MariaDB backups in the cloud.

Categories: DBA Blogs

Oracle Database 12c Patching: DBMS_QOPATCH, OPATCH_XML_INV, and datapatch

Mon, 2015-01-19 19:36

Oracle Database 12c brings us many new features, including the long-needed ability to run OPatch and query the patch software installed in the Oracle Home programmatically through SQL, using the new DBMS_QOPATCH package.

If you’re a DBA working in an environment where patching consistently among databases is important, this is a welcome enhancement. Sometimes one-off critical bug fix patches are important in the environment. Other times regular and consistent application of the quarterly PSUs is important to the organization (i.e. for regulatory compliance). For the sake of illustration I’m going to use the latter as my use case for the duration of this article, as I’ve regularly run into the situation where clients need PSUs applied properly, regularly, and consistently to their databases.

As a recap, many Oracle patches, including PSUs, require a two-step process to implement properly. First, the software change needs to be applied to the associated Oracle Home via the OPatch utility. Second, it needs to be installed into all databases running in that Oracle Home (possibly updating internal packages and objects).

With Oracle 11g the latest patch is applied within the database using:

SQL> @catbundle.sql psu apply

Oracle 12c introduces a new utility called “datapatch” which replaces the need to run the 11g command shown above.

As with Oracle 11g, you first install the patch into the Oracle Home with all services and databases down. But with Oracle Database 12c, after restarting the database you simply run datapatch from the OPatch directory:

./datapatch -verbose


Given that patching is a two-step process, some common DBA questions are:

1) Has a patch been installed in a given database?
2) Has a patch been installed into the $ORACLE_HOME via Opatch but not one or more of the databases?
3) Is the patch in the database but not the $ORACLE_HOME?

You may be thinking that #3 isn’t very likely but the reality is that it does happen. Circumstances that lead to a database being at a higher patch level than the Oracle Home include:

A) If the database was created from an RMAN duplicate from a home with a patch to one without.
B) A data guard standby switch-over or fail-over where the primary Oracle home was patched but the standby home was missed.
C) A 12c PDB that was unplugged from a patched CDB and plugged into an unpatched CDB.
D) The patch was uninstalled via Opatch from the Oracle home without uninstalling from the database (unlikely).

Since patching is a two-step process, historically answering questions such as these has also been a two-step process, requiring the DBA to query the Oracle Home inventory and the database, and to manually correlate and reconcile the results. This is cumbersome, prone to human error, and not scalable due to the manual component.

But with Oracle 12c and the new DBMS_QOPATCH package, both can be queried programmatically and joined using SQL, allowing the DBA to easily report on questions like 1, 2, and 3 above via SQL queries, custom extensions to monitoring tools, reporting programs, etc. Or just to report accurately on patch deployments for security compliance reports, general database security assessments, or database health checks.

This is actually quite a significant breakthrough with regards to patch management!
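At its core, the reconciliation needed to answer those three questions is simple set arithmetic over two patch lists: what OPatch reports for the Oracle Home versus what the database registry records. A small illustration (the patch IDs are made up):

```python
def reconcile(home_patches, db_patches):
    """Compare the Oracle Home inventory against the database patch registry."""
    home, db = set(home_patches), set(db_patches)
    return {
        "fully_applied": home & db,  # question 1: installed in both places
        "home_only": home - db,      # question 2: in $ORACLE_HOME but not the DB
        "db_only": db - home,        # question 3: in the DB but not $ORACLE_HOME
    }

# Hypothetical example: one PSU everywhere, one applied to the home only
result = reconcile(home_patches=[19303936, 19769480], db_patches=[19303936])
print(result)
```

The SQL queries later in this article express exactly this logic with a join and two MINUS queries.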


How it works

The first thing to note is that in Oracle Database 12c there are some new Directory Objects installed by default:

SQL> select owner, directory_name, directory_path from dba_directories
  2  where directory_name like 'OPATCH%' order by 2;

OWNER        DIRECTORY_NAME       DIRECTORY_PATH
------------ -------------------- --------------------------------------------------------------------------------
SYS          OPATCH_INST_DIR      /u01/app/oracle/product/12.1.0/dbhome_1/OPatch
SYS          OPATCH_LOG_DIR       /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch
SYS          OPATCH_SCRIPT_DIR    /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch



Two of those point to a new QOpatch directory which contains a batch file which runs the OPatch utility:

SQL> !ls /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch
qopatch_log.log  qopiprep.bat

SQL> !cat /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch/qopiprep.bat
# $Header: rdbms/src/client/tools/qpinv/qopiprep.bat /unix/2 2013/04/04 20:59:27 tbhukya Exp $
# qopiprep.bat
# Copyright (c) 2012, 2013, Oracle and/or its affiliates. All rights reserved.
#    NAME
#      qopiprep.bat - bat file for preprocessor
#      bat file for external table preprocessor.
#    NOTES
#      .
#    tbhukya     04/03/13 - Bug 16226172 : Forward merge of fix 16076845
#    tbhukya     09/23/12 - Creation
export PATH

# Option: "-retry 0" avoids retries in case of locked inventory.

# Option: "-invPtrLoc" is required for non-central-inventory
# locations. $OPATCH_PREP_LSINV_OPTS which may set by users
# in the environment to configure special OPatch options
# ("-jdk" is another good candidate that may require configuration!).

# Option: "-all" gives information on all Oracle Homes
# installed in the central inventory.  With that information, the
# patches of non-RDBMS homes could be fetched.

$ORACLE_HOME/OPatch/opatch lsinventory -xml  $ORACLE_HOME/QOpatch/xml_file.xml -retry 0 -invPtrLoc $ORACLE_HOME/oraInst.loc >> $ORACLE_HOME/QOpatch/stout.txt
`echo "UIJSVTBOEIZBEFFQBL" >> $ORACLE_HOME/QOpatch/xml_file.xml`
echo `cat $ORACLE_HOME/QOpatch/xml_file.xml`
rm $ORACLE_HOME/QOpatch/xml_file.xml
rm $ORACLE_HOME/QOpatch/stout.txt


The description text in this Oracle provided file states that it’s a batch file used for external table preprocessing. The preprocessor clause was added in Oracle 11g release 2 (and back-ported to Oracle11gR1) to allow for execution of an external script/file to process data before reading it in the database via an external table. A further explanation of this feature is beyond the scope of this article.

Consequently we know an external table is involved and it’s not too hard to find and understand that:

SQL> select owner, table_name from dba_external_tables where table_name like 'OPATCH%' order by 1,2;

OWNER        TABLE_NAME
------------ --------------------
SYS          OPATCH_XML_INV
SQL> select dbms_metadata.get_ddl('TABLE','OPATCH_XML_INV','SYS') from dual;


      READSIZE 8388608
      preprocessor opatch_script_dir:'qopiprep.bat'
      BADFILE opatch_script_dir:'qopatch_bad.bad'
      LOGFILE opatch_log_dir:'qopatch_log.log'
        xml_inventory    CHAR(100000000)
       ( "OPATCH_SCRIPT_DIR":'qopiprep.bat'



Hence we can understand the underlying mechanics of this new feature. It’s based on an external table using a directory object running an external batch file (through the external table preprocessor option) which then runs the actual OPatch utility.

And we can query that external table directly if we want though we’ll get an XMLTYPE result:

SQL> select * from OPATCH_XML_INV;

<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <InventoryInstance> <ora
cleHome> <UId>OracleHome-0243b34c-d8db-43f7-946b-589110de0ef9</UId> <targetTypeI
d>oracle_home</targetTypeId> <inventoryLocation>/u01/app/oraInventory</inventory
(output truncated)



Oracle gives us a wrapper API to this new external table (OPATCH_XML_INV) with DBMS_QOPATCH to make working with the output more manageable.

For example, we can start with some basic information on the inventory using DBMS_QOPATCH.GET_OPATCH_INSTALL_INFO:

SQL> set heading off long 50000
SQL> select dbms_qopatch.get_opatch_install_info from dual;




But as we can see, the output still isn’t easy on the eyes. Fortunately the XML stylesheet is also provided via the DBMS_QOPATCH.GET_OPATCH_XSLT function. Combining the two gives a much more readable output:

SQL> select xmltransform(dbms_qopatch.get_opatch_install_info, dbms_qopatch.get_opatch_xslt) from dual;

Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Inventory         : /u01/app/oraInventory



The DBMS_QOPATCH package provides many other useful functions and procedures. For example, to see if and when a specified patch was installed, or just to see the list of all of the patches installed by bug number:

SQL> select xmltransform(dbms_qopatch.is_patch_installed('19303936'), dbms_qopatch.get_opatch_xslt) from dual;

Patch Information:
         19303936:   applied on 2014-12-20T13:54:54-07:00

SQL> select xmltransform(dbms_qopatch.get_opatch_bugs, dbms_qopatch.get_opatch_xslt) from dual;

  Bugs fixed:
          19157754  18885870  19303936  19708632  19371175  18618122  19329654
19075256  19074147  19044962  19289642  19068610  18988834  19028800  19561643
19058490  19390567  18967382  19174942  19174521  19176223  19501299  19178851
18948177  18674047  19723336  19189525  19001390  19176326  19280225  19143550
18250893  19180770  19155797  19016730  19185876  18354830  19067244  18845653
18849537  18964978  19065556  19440586  19439759  19024808  18952989  18990693
19052488  19189317  19409212  19124589  19154375  19279273  19468347  19054077
19048007  19248799  19018206  18921743  14643995



Or to run the equivalent of “opatch lsinventory” but from SQL instead of the OS:

SQL> select xmltransform(dbms_qopatch.get_opatch_lsinventory, dbms_qopatch.get_opatch_xslt) from dual;

Oracle Querayable Patch Interface 1.0

Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Inventory         : /u01/app/oraInventory
Installed Top-level Products (1):
Oracle Database 12c                          
Installed Products ( 135)

Oracle Database 12c                               
Java Development Kit                              
(output truncated)


Putting it all together

The examples above are interesting, but to make it truly useful for patch application querying, monitoring, and reporting, we need to join the output of DBMS_QOPATCH (which shows us what’s installed into the Oracle Home) with the new 12c view DBA_REGISTRY_SQLPATCH (which shows us what patches are applied to the database).

Prior to Oracle 12c to list the PSUs installed into the Oracle home we might use:

SQL> !$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | egrep -i 'DATABASE PSU|DATABASE PATCH SET UPDATE'
19303936 19303936 Sat Dec 20 13:54:54 MST 2014 DATABASE PATCH SET UPDATE (OCT2014)



I obtained that result by shelling out from a sqlplus session running on the database server. But a centralized SQL-based monitoring or reporting tool likely won’t have that ability: a client-server database connection won’t be able to run OPatch easily, and hence the DBMS_QOPATCH API is required.

To extract the same information from the DBMS_QOPATCH package we need to parse the resulting XMLTYPE output:

SQL> with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
  2  select x.*
  3    from a,
  4         xmltable('InventoryInstance/patches/*'
  5            passing a.patch_output
  6            columns
  7               patch_id number path 'patchID',
  8               patch_uid number path 'uniquePatchID',
  9               description varchar2(80) path 'patchDescription',
 10               applied_date varchar2(30) path 'appliedDate',
 11               sql_patch varchar2(8) path 'sqlPatch',
 12               rollbackable varchar2(8) path 'rollbackable'
 13         ) x;

---------- ----------
------------------------------ -------- --------
  19303936   18116864
Database Patch Set Update : (19303936)
2014-12-20T13:54:54-07:00      true     true



This is a great result: Information on the PSUs installed into the Oracle Home through SQL and returned to us in an easy to read and easy to work with tabular form!
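The same parsing can also be done client-side if a tool fetches the raw XMLTYPE returned by GET_OPATCH_LSINVENTORY. A sketch with the stdlib XML parser; the element names mirror the paths used in the XMLTABLE query above (InventoryInstance/patches/* with patchID, uniquePatchID and patchDescription children) and should be treated as assumptions derived from that query:

```python
import xml.etree.ElementTree as ET

def patches_from_lsinventory(xml_text):
    """Return (patch_id, patch_uid, description) tuples from the inventory XML."""
    root = ET.fromstring(xml_text)  # root is <InventoryInstance>
    rows = []
    for patch in root.findall("patches/*"):  # one element per installed patch
        rows.append((
            int(patch.findtext("patchID")),
            int(patch.findtext("uniquePatchID")),
            patch.findtext("patchDescription"),
        ))
    return rows
```

Either way, in the database with XMLTABLE or in a client with an XML parser, the result is the same tabular patch list.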

To make this information even more useful we need to join to the new DBA_REGISTRY_SQLPATCH view which replaces the DBA_REGISTRY_HISTORY view with respect to which patches have been applied in the database:

SQL> select patch_id, patch_uid, version, status, description
  2  from dba_registry_sqlpatch
  3  where bundle_series = 'PSU';

---------- ---------- -------------------- ---------------
  19303936   18116864             SUCCESS
Database Patch Set Update : (19303936)



Joining these two outputs allows us to easily write queries that report on the PSU patches deployed and ones only partially deployed (whether in the Oracle Home but not the database or vice versa):

SQL> --
SQL> --   List of PSUs applied to both the $OH and the DB
SQL> --
SQL> with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
  2  select x.patch_id, x.patch_uid, x.rollbackable, s.status, x.description
  3    from a,
  4         xmltable('InventoryInstance/patches/*'
  5            passing a.patch_output
  6            columns
  7               patch_id number path 'patchID',
  8               patch_uid number path 'uniquePatchID',
  9               description varchar2(80) path 'patchDescription',
 10               rollbackable varchar2(8) path 'rollbackable'
 11         ) x,
 12         dba_registry_sqlpatch s
 13   where x.patch_id = s.patch_id
 14     and x.patch_uid = s.patch_uid
 15     and s.bundle_series = 'PSU';

---------- ---------- -------- ---------------
  19303936   18116864 true     SUCCESS
Database Patch Set Update : (19303936)

SQL> --
SQL> --   PSUs installed into the $OH but not applied to the DB
SQL> --
SQL> with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
  2  select x.patch_id, x.patch_uid, x.description
  3    from a,
  4         xmltable('InventoryInstance/patches/*'
  5            passing a.patch_output
  6            columns
  7               patch_id number path 'patchID',
  8               patch_uid number path 'uniquePatchID',
  9               description varchar2(80) path 'patchDescription'
 10         ) x
 11  minus
 12  select s.patch_id, s.patch_uid, s.description
 13    from dba_registry_sqlpatch s;

no rows selected

SQL> --
SQL> --   PSUs applied to the DB but not installed into the $OH
SQL> --
SQL> with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
  2  select s.patch_id, s.patch_uid, s.description
  3    from dba_registry_sqlpatch s
  4  minus
  5  select x.patch_id, x.patch_uid, x.description
  6    from a,
  7         xmltable('InventoryInstance/patches/*'
  8            passing a.patch_output
  9            columns
 10               patch_id number path 'patchID',
 11               patch_uid number path 'uniquePatchID',
 12               description varchar2(80) path 'patchDescription'
 13         ) x;

no rows selected


Simple queries such as those three are what can be incorporated into monitoring scripts and reports.

Some other DBMS_QOPATCH functions worth trying include:

set heading off long 50000 pages 9999 lines 180 trims on tab off
select xmltransform(dbms_qopatch.get_opatch_count, dbms_qopatch.get_opatch_xslt) from dual;
select xmltransform(dbms_qopatch.get_opatch_list, dbms_qopatch.get_opatch_xslt) from dual;
select xmltransform(dbms_qopatch.get_pending_activity, dbms_qopatch.get_opatch_xslt) from dual;

set serverout on
exec dbms_qopatch.get_sqlpatch_status;


Back-porting to Oracle Database 11g

An interesting question is: “can we back-port this approach to 11g?” And the answer is: “absolutely”!

First of all, we need to create the directory object, external table, and OS batch script. For simplicity I’m keeping the name and structure of each the same as in 12c but of course you can adjust them if desired.

Using DBMS_METADATA.GET_DDL on an Oracle 12c database I get the necessary DDL to add the directory objects and external tables to an 11g database (note that I’ve updated the Oracle Home path but that’s the only change):

CREATE OR REPLACE DIRECTORY "OPATCH_LOG_DIR" AS '/u01/app/oracle/product/11.2.0/dbhome_1/QOpatch';
CREATE OR REPLACE DIRECTORY "OPATCH_SCRIPT_DIR" AS '/u01/app/oracle/product/11.2.0/dbhome_1/QOpatch';

      READSIZE 8388608
      preprocessor opatch_script_dir:'qopiprep.bat'
      BADFILE opatch_script_dir:'qopatch_bad.bad'
      LOGFILE opatch_log_dir:'qopatch_log.log'
        xml_inventory    CHAR(100000000)
       ( "OPATCH_SCRIPT_DIR":'qopiprep.bat'


Next I need to copy the preprocessor batch script into the 11g home (which can be copied from another machine if necessary):

!cp -r /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch /u01/app/oracle/product/11.2.0/dbhome_1/QOpatch


Now I don’t want to install the DBMS_QOPATCH package into a different version of the database. It’s Oracle-supplied “wrapped” code, meaning I can’t modify it, and it probably has 12c dependencies, meaning it would be invalid in Oracle 11g. Instead I performed a SQL trace on the DBMS_QOPATCH.GET_OPATCH_LSINVENTORY package executing on the 12c database, and from the resulting trace file I can see that all it’s doing is:



Not exactly complicated. Hence using the directory objects and external table created above I can execute the following on the 11g database:

SQL> select version from v$instance;


SQL> select XMLTYPE(XML_INVENTORY) patch_output from OPATCH_XML_INV;

<?xml version="1.0" encoding="US-ASCII"?>
<INVENTORY isStandAlone="false">
(output truncated)


So as we can see, it’s working perfectly back-ported to 11g.

However, again the XML output isn’t really useful by itself; I’m more interested in which PSUs have been installed. It’s easy to check that using opatch if I’m on the server:

SQL> !$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | egrep -i 'DATABASE PSU|DATABASE PATCH SET UPDATE'
18522509   18522509  Sun Sep 21 20:58:00 MDT 2014   DATABASE PATCH SET UPDATE (INCLUDES CPU
18031668   18031668  Sun Sep 21 20:57:48 MDT 2014   DATABASE PATCH SET UPDATE (INCLUDES CPU
17478514   17478514  Sun Sep 21 20:57:38 MDT 2014   DATABASE PATCH SET UPDATE (INCLUDES CPU


But again, what if I’m not on the server and want to do this programmatically through SQL, build it into a monitoring query or a security compliance report, etc.?

Again we can query the XMLTYPE data and get exactly what we want. Specifically:

SQL> with a as (select XMLTYPE(XML_INVENTORY) patch_output from OPATCH_XML_INV)
  2  select x.* from a, xmltable(
  3     'INVENTORY/HOST/HOME/ONEOFF_LIST/INTERIM_PATCH/oneoff_inventory/base_bugs/bug'
  4     passing a.patch_output columns
  5     --row_number for ordinality,
  6     bug_number number path '@number',
  7     bug_description varchar2(256) path '@description'
  8  ) x
  9  where regexp_like(bug_description, '(DATABASE PSU|DATABASE PATCH SET UPDATE)','i')
 10  order by bug_number;

---------- ------------------------------------------------------------



Voila! By copying the technique used by the Oracle 12c database, we’ve now performed an OPatch query from SQL against an 11g database. It took only about two minutes of setup.
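The same PSU filter can be reproduced client-side against the 11g inventory XML. The element and attribute names below mirror the XQuery path used above (INVENTORY/HOST/HOME/ONEOFF_LIST/INTERIM_PATCH/oneoff_inventory/base_bugs/bug, with number and description attributes), so treat them as assumptions derived from that query:

```python
import re
import xml.etree.ElementTree as ET

PSU_PATTERN = re.compile(r"DATABASE PSU|DATABASE PATCH SET UPDATE", re.IGNORECASE)

def psu_bugs(xml_text):
    """Return (bug_number, description) for PSU-related bugs in the 11g inventory XML."""
    root = ET.fromstring(xml_text)  # root is <INVENTORY>
    path = "HOST/HOME/ONEOFF_LIST/INTERIM_PATCH/oneoff_inventory/base_bugs/bug"
    return sorted(
        (int(bug.get("number")), bug.get("description"))
        for bug in root.findall(path)
        if PSU_PATTERN.search(bug.get("description", ""))
    )
```

The regular expression is the same one used in the REGEXP_LIKE predicate of the SQL version.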

Again this can be super handy if PSU patching is a regular activity for you or if you have to produce security compliance reports.

Unfortunately, though, Oracle 11g records less data in DBA_REGISTRY_HISTORY than 12c does in DBA_REGISTRY_SQLPATCH:

SQL> select comments, action_time from dba_registry_history
  2  where bundle_series like '%PSU' order by action_time;

COMMENTS                       ACTION_TIME
------------------------------ ------------------------------
PSU                 23-SEP-14 AM



Hence joining the two is more challenging but certainly possible. The data (in this case the string “”) is present in both the DBA_REGISTRY_HISTORY and my XQuery output from my back-ported OPATCH_XML_INV external table output.



The new DBMS_QOPATCH API is a fantastic improvement in Oracle Database 12c when it comes to patch management (querying, reporting, and monitoring). And by understanding the concepts and components added in 12c, they can be manually back-ported to 11g. Both using DBMS_QOPATCH and the back-port to 11g are actually pretty easy. In fact the hardest part for me (not being very familiar with XQuery) was coming up with the query to parse the XMLTYPE data.



References:

Oracle Recommended Patches — Oracle Database (Doc ID 756671.1)

Quick Reference to Patch Numbers for Database PSU, SPU(CPU), Bundle Patches and Patchsets (Doc ID 1454618.1)

How do you apply a Patchset,PSU or CPU in a Data Guard Physical Standby configuration (Doc ID 278641.1)

Categories: DBA Blogs

Log Buffer #405, A Carnival of the Vanities for DBAs

Mon, 2015-01-19 19:34

This Log Buffer Edition rides on the wave of new ideas in the database realms. From Oracle technologies through MySQL to SQL Server, things are piping hot.


Oracle:

The Call for Papers for the Rittman Mead BI Forum 2015 is currently open, with abstract submissions open to January 18th 2015.

The replicat process is the apply process within the Oracle GoldenGate environment.


“log file sync” and the MTTR Advisor.

Working with XML files and APEX – Part 3: Detail elements in a row with OUTER JOIN.

SQL Server:

Get started testing your database code with the tSQLt framework.

Archiving Hierarchical, Deleted Transaction Using XML.

As a part of his “Function / Iterator Pairs” mini-series, Business Intelligence architect, Analysis Services Maestro, and author Bill Pearson introduces the DAX COUNT() and COUNTX() functions, discussing similarities and differences.

Free eBook: Fundamentals of SQL Server 2012 Replication

Importance of Statistics and How It Works in SQL Server – Part 1


MySQL:

Is Zero downtime even possible on RDS?

Monitor MySQL Database Users with VividCortex

Django with time zone support and MySQL

Using Perl to send tweets stored in a MySQL database to twitter

When ONLY_FULL_GROUP_BY Won’t See the Query Is Deterministic.

Categories: DBA Blogs

Log Buffer #406, A Carnival of the Vanities for DBAs

Thu, 2015-01-15 20:32

This Log Buffer Edition covers blog posts from various bloggers of Oracle, SQL Server and MySQL.


Oracle:

Sync tables: generate MERGE using Unique constraint.

What Hardware and Software Do YOU Want Oracle to Build?

There were a number of new features introduced in Ops Center 12.2.2. One of the shiny ones is an expansion of the backup and recovery capabilities to include Proxy Controllers.

Want to Be a Better Leader? Answer One Question.

Managing a remote Oracle Database instance with “Geographic Edition”.

SQL Server:

Learn how you can use SQLCop to prevent your developers from writing stored procedures that are named sp_ something.

Data Cleaning in SQL 2012 with Data Quality Services.

Stairway to PowerPivot and DAX – Level 9: Function / Iterator Function Pairs: The DAX MAX() and MAXX() Functions.

Options to Improve SQL Server Bulk Load Performance.

Dynamically Create Tables Based on an Access Table


MySQL:

Stored Procedures: critiques and defences.

JSON UDF functions 0.3.3 have been released.

Business Scalability, Operational Efficiency and Competitive Edge with MariaDB MaxScale 1.0 GA.

MySQL 5.7 labs and the HTTP Plugin – inserting, updating and deleting records in MySQL via HTTP.

Hyper-threading – how does it double CPU throughput?

Categories: DBA Blogs

Performance Problems with Dynamic Statistics in Oracle 12c

Tue, 2015-01-06 09:55

I’ve been running some tests recently with the new Oracle In-Memory option and was faced with an unexpected performance problem. Here is a test case:

create table tst_1 as
with q as (select 1 from dual connect by level <= 100000)
select rownum id, 12345 val, mod(rownum,1000) ref_id  from q,q
where rownum <= 200000000;

Table created.

create table tst_2 as select rownum ref_id, lpad(rownum,10, 'a') name, rownum || 'a' name2
from dual connect by level <= 1000;

Table created.

begin
  dbms_stats.gather_table_stats(
    ownname          => user,
    tabname          => 'TST_1',
    method_opt       => 'for all columns size 1',
    degree           => 8);
  dbms_stats.gather_table_stats(
    ownname          => user,
    tabname          => 'TST_2',
    method_opt       => 'for all columns size 1');
end;
/

PL/SQL procedure successfully completed.

alter table tst_1 inmemory;

Table altered.

select count(*) from tst_1;


Waiting for in-memory segment population:

select segment_name, bytes, inmemory_size from v$im_segments;

SEGMENT_NAME         BYTES INMEMORY_SIZE
--------------- ---------- -------------
TST_1           4629463040    3533963264

Now let’s make a simple two table join:

select name, sum(val) from tst_1 a, tst_2 b where a.ref_id = b.ref_id and name2='50a'
group by name;

Elapsed: 00:00:00.17

The query runs pretty fast, and the execution plan has the brand new vector transformation:

Execution Plan
Plan hash value: 213128033

| Id  | Operation                         | Name                     | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT                  |                          |     1 |    54 |  7756  (21)| 00:00:01 |
|   1 |  TEMP TABLE TRANSFORMATION        |                          |       |       |            |          |
|   2 |   LOAD AS SELECT                  | SYS_TEMP_0FD9D66FA_57B2B |       |       |            |          |
|   3 |    VECTOR GROUP BY                |                          |     1 |    24 |     5  (20)| 00:00:01 |
|   4 |     KEY VECTOR CREATE BUFFERED    | :KV0000                  |     1 |    24 |     5  (20)| 00:00:01 |
|*  5 |      TABLE ACCESS FULL            | TST_2                    |     1 |    20 |     4   (0)| 00:00:01 |
|   6 |   HASH GROUP BY                   |                          |     1 |    54 |  7751  (21)| 00:00:01 |
|*  7 |    HASH JOIN                      |                          |     1 |    54 |  7750  (21)| 00:00:01 |
|   8 |     VIEW                          | VW_VT_377C5901           |     1 |    30 |  7748  (21)| 00:00:01 |
|   9 |      VECTOR GROUP BY              |                          |     1 |    13 |  7748  (21)| 00:00:01 |
|  10 |       HASH GROUP BY               |                          |     1 |    13 |  7748  (21)| 00:00:01 |
|  11 |        KEY VECTOR USE             | :KV0000                  |   200K|  2539K|  7748  (21)| 00:00:01 |
|* 12 |         TABLE ACCESS INMEMORY FULL| TST_1                    |   200M|  1716M|  7697  (21)| 00:00:01 |
|  13 |     TABLE ACCESS FULL             | SYS_TEMP_0FD9D66FA_57B2B |     1 |    24 |     2   (0)| 00:00:01 |

Predicate Information (identified by operation id):

   5 - filter("NAME2"='50a')
   7 - access("ITEM_5"=INTERNAL_FUNCTION("C0") AND "ITEM_6"="C2")
  12 - inmemory(SYS_OP_KEY_VECTOR_FILTER("A"."REF_ID",:KV0000))

   - vector transformation used for this statement

After seeing such impressive performance, I decided to run the query in parallel:

select /*+ parallel(8) */ name, sum(val) from tst_1 a, tst_2 b
where a.ref_id = b.ref_id and name2='50a'
group by name;

Elapsed: 00:01:02.55

The query elapsed time suddenly jumped from 0.17 seconds to almost 1 minute and 3 seconds. But the second execution runs in 0.6 seconds.
The new plan is:

Execution Plan
Plan hash value: 3623951262

| Id  | Operation                           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
|   0 | SELECT STATEMENT                    |          |     1 |    29 |  1143  (26)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR                     |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)               | :TQ10001 |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY                    |          |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE                      |          |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,01 | PCWP |            |
|   5 |      PX SEND HASH                   | :TQ10000 |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,00 | P->P | HASH       |
|   6 |       HASH GROUP BY                 |          |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,00 | PCWP |            |
|*  7 |        HASH JOIN                    |          |   200K|  5664K|  1142  (26)| 00:00:01 |  Q1,00 | PCWP |            |
|   8 |         JOIN FILTER CREATE          | :BF0000  |     1 |    20 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|*  9 |          TABLE ACCESS FULL          | TST_2    |     1 |    20 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|  10 |         JOIN FILTER USE             | :BF0000  |   200M|  1716M|  1069  (21)| 00:00:01 |  Q1,00 | PCWP |            |
|  11 |          PX BLOCK ITERATOR          |          |   200M|  1716M|  1069  (21)| 00:00:01 |  Q1,00 | PCWC |            |
|* 12 |           TABLE ACCESS INMEMORY FULL| TST_1    |   200M|  1716M|  1069  (21)| 00:00:01 |  Q1,00 | PCWP |            |

Predicate Information (identified by operation id):

   7 - access("A"."REF_ID"="B"."REF_ID")
   9 - filter("NAME2"='50a')
  12 - inmemory(SYS_OP_BLOOM_FILTER(:BF0000,"A"."REF_ID"))

   - dynamic statistics used: dynamic sampling (level=AUTO)
   - Degree of Parallelism is 8 because of hint

We can see a Bloom filter instead of a key vector, but this is not the issue. The problem comes from the "dynamic statistics used: dynamic sampling (level=AUTO)" note.
In the 10046 trace file I found nine dynamic sampling queries, and one of them was this:

SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring
  optimizer_features_enable(default) no_parallel result_cache(snapshot=3600)
  */ SUM(C1)
 (SELECT /*+ qb_name("innerQuery")  */ 1 AS C1 FROM (SELECT /*+
  "B"."NAME2"='50a') "VW_VTN_EE607F02#1" WHERE ("VW_VTN_377C5901#0"."ITEM_2"=
  "VW_VTN_EE607F02#1"."ITEM_4")) innerQuery

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1     43.92      76.33          0          5          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3     43.92      76.33          0          5          0           0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 64     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  RESULT CACHE  56bn7fg7qvrrw1w8cmanyn3mxr (cr=0 pr=0 pw=0 time=0 us)
         0          0          0   SORT AGGREGATE (cr=0 pr=0 pw=0 time=8 us)
         0          0          0    HASH JOIN  (cr=0 pr=0 pw=0 time=4 us cost=159242 size=2600000 card=200000)
 200000000  200000000  200000000     TABLE ACCESS INMEMORY FULL TST_1 (cr=3 pr=0 pw=0 time=53944537 us cost=7132 size=800000000 card=200000000)
         0          0          0     TABLE ACCESS FULL TST_2 (cr=0 pr=0 pw=0 time=3 us cost=4 size=9 card=1)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  asynch descriptor resize                        1        0.00          0.00
  Disk file operations I/O                        1        0.00          0.00
  CSS initialization                              1        0.00          0.00
  CSS operation: action                           1        0.00          0.00
  direct path write temp                       6267        0.02         30.37

Vector transformation is disabled, the inefficient table order is fixed by the ORDERING hint, and we end up waiting for hash table creation on the huge TST_1 table.
The dynamic statistics feature has been greatly improved in Oracle 12c with support for joins and GROUP BY predicates, which is why such a join runs at parse time. The new functionality is described in the "Dynamic Statistics (previously known as dynamic sampling)" section of the document Understanding Optimizer Statistics with Oracle Database 12c.

Let’s make a simpler test:

select /*+ parallel(2) */ ref_id, sum(val) from tst_1 a group by ref_id;

Execution Plan
Plan hash value: 2527371111

| Id  | Operation                         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
|   0 | SELECT STATEMENT                  |          |  1000 |  9000 |  7949  (58)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR                   |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)             | :TQ10001 |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY                  |          |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE                    |          |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,01 | PCWP |            |
|   5 |      PX SEND HASH                 | :TQ10000 |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,00 | P->P | HASH       |
|   6 |       HASH GROUP BY               |          |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,00 | PCWP |            |
|   7 |        PX BLOCK ITERATOR          |          |   200M|  1716M|  4276  (21)| 00:00:01 |  Q1,00 | PCWC |            |
|   8 |         TABLE ACCESS INMEMORY FULL| TST_1    |   200M|  1716M|  4276  (21)| 00:00:01 |  Q1,00 | PCWP |            |

   - dynamic statistics used: dynamic sampling (level=AUTO)
   - Degree of Parallelism is 2 because of hint

We can see a "dynamic statistics used" note again. This is a simple query without predicates, against a single table with pretty accurate statistics. From my point of view, there is no reason for dynamic sampling here at all.
Automatic dynamic sampling was introduced in 11g Release 2. A description of this feature can be found in the document Dynamic sampling and its impact on the Optimizer:
"From Oracle Database 11g Release 2 onwards the optimizer will automatically decide if dynamic sampling will be useful and what dynamic sampling level will be used for SQL statements executed in parallel. This decision is based on size of the tables in the statement and the complexity of the predicates."
It looks like the algorithm has changed in 12c, and dynamic sampling is now triggered in a broader set of use cases.
This behavior can be disabled at the statement, session or system level using the fix control for bug 7452863. For example:
ALTER SESSION SET "_fix_control"='7452863:0';
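The other scopes can be sketched as follows. The OPT_PARAM hint syntax for fix controls and the v$session_fix_control view are standard Oracle features, not output from this test, so treat this as a sketch rather than a verified transcript:

```sql
-- Statement level: disable the fix for a single query via the OPT_PARAM hint
select /*+ OPT_PARAM('_fix_control' '7452863:0') parallel(8) */
       name, sum(val)
from   tst_1 a, tst_2 b
where  a.ref_id = b.ref_id and name2 = '50a'
group  by name;

-- System level: affects all sessions
ALTER SYSTEM SET "_fix_control" = '7452863:0';

-- Verify the current setting for the session
select bugno, value, description
from   v$session_fix_control
where  bugno = 7452863;
```

A value of 0 means the fix (and with it this dynamic sampling behavior) is disabled for the given scope.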


Dynamic statistics has been enhanced in Oracle 12c, but the feature is now used more often, which can lead to longer parse times in more cases than before.

Categories: DBA Blogs

Troubleshooting a Multipath Issue

Tue, 2015-01-06 09:37

Multipathing allows you to configure multiple paths from servers to storage arrays, providing I/O failover and load balancing. Linux uses the device mapper kernel framework to support multipathing.

In this post I will explain the steps taken to troubleshoot a multipath issue, which should provide a glimpse into the tools and technology involved. The problem was reported on a RHEL 6 system where backup software was complaining that the device from which /boot is mounted does not exist.

Following is the device; you can see that the device name is a WWID.

# df
Filesystem                                      1K-blocks  Used Available Use% Mounted on
/dev/mapper/3600508b1001c725ab3a5a49b0ad9848ep1    198337 61002    127095  33% /boot

File /dev/mapper/3600508b1001c725ab3a5a49b0ad9848ep1 is missing under /dev/mapper.

# ll /dev/mapper/
total 0
crw-rw---- 1 root root 10, 58 Jul 9 2013 control
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpatha -> ../dm-1
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathap1 -> ../dm-2
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathb -> ../dm-0
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathc -> ../dm-3
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathcp1 -> ../dm-4
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathcp2 -> ../dm-5
lrwxrwxrwx 1 root root 7 Jul 9 2013 vgroot-lvroot -> ../dm-6
lrwxrwxrwx 1 root root 7 Jul 9 2013 vgroot-lvswap -> ../dm-7

From /etc/fstab, we find that the UUID of the device is specified.

UUID=6dfd9f97-7038-4469-8841-07a991d64026 /boot ext4 defaults 1 2

From blkid, we can see the device associated with the UUID. The blkid command prints the attributes of all block devices in the system.

# blkid
/dev/mapper/mpathcp1: UUID="6dfd9f97-7038-4469-8841-07a991d64026" TYPE="ext4"

Remounting the /boot mount point shows the user-friendly name /dev/mapper/mpathcp1.

# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/mpathcp1 198337 61002 127095 33% /boot

So far, we can see that the system boots with the WWID as the device name, but the name is later converted into a user-friendly name. In the multipath configuration, user_friendly_names is enabled.

# grep user_friendly_names /etc/multipath.conf
user_friendly_names yes

As per Red Hat documentation,

"When the user_friendly_names option in the multipath configuration file is set to yes, the name of a multipath device is of the form mpathn. For the Red Hat Enterprise Linux 6 release, n is an alphabetic character, so that the name of a multipath device might be mpatha or mpathb. In previous releases, n was an integer."

As the system mounts the right disk after booting up, the problem should be with the user-friendly name configuration in the initramfs. Extracting the initramfs file and checking the multipath configuration shows that the user_friendly_names parameter is enabled.

# cat initramfs/etc/multipath.conf
defaults {
user_friendly_names yes

Now the interesting point is that /etc/multipath/bindings is missing in the initramfs, but the file is present on the system. The /etc/multipath/bindings file maps user-friendly aliases to WWIDs.

# cat /etc/multipath/bindings
# Multipath bindings, Version : 1.0
# NOTE: this file is automatically maintained by the multipath program.
# You should not need to edit this file in normal circumstances.
# Format:
# alias wwid
mpathc 3600508b1001c725ab3a5a49b0ad9848e
mpatha 36782bcb0005dd607000003b34ef072be
mpathb 36782bcb000627385000003ab4ef14636
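The alias lookup that multipath effectively performs against this file can be sketched with awk. This is only a toy illustration (multipathd does this internally); the bindings content is copied from above and /tmp/bindings-demo is a made-up path:

```shell
# Recreate the bindings file from above and resolve a WWID to its alias,
# skipping comment lines, mimicking the user_friendly_names lookup.
cat > /tmp/bindings-demo <<'EOF'
# Multipath bindings, Version : 1.0
# Format:
# alias wwid
mpathc 3600508b1001c725ab3a5a49b0ad9848e
mpatha 36782bcb0005dd607000003b34ef072be
mpathb 36782bcb000627385000003ab4ef14636
EOF

awk -v wwid=3600508b1001c725ab3a5a49b0ad9848e \
    '!/^#/ && $2 == wwid { print $1 }' /tmp/bindings-demo
```

This prints mpathc. Without the file, there is no alias to print, which is exactly why the boot-time device name falls back to the WWID.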

An initramfs can be created using the dracut command.

# dracut -v -f test.img 2.6.32-131.0.15.el6.x86_64 2> /tmp/test.out

Building a test initramfs file shows that a newly created initramfs includes /etc/multipath/bindings.

# grep -ri bindings /tmp/test.out
I: Installing /etc/multipath/bindings

So this is what is happening:
When the system boots, multipath in the initramfs looks for /etc/multipath/bindings to get the aliases to use for user-friendly names. It cannot find the file, so it falls back to the WWID. After the system boots, /etc/multipath/bindings is present and the device names are changed to user-friendly names.

It looks like the /etc/multipath/bindings file was created after kernel installation and initrd generation. This might have happened because the multipath configuration was done after the kernel was installed. Even if the system root device is not on multipath, it is possible for multipath to be included in the initrd; for example, this can happen if the system root device is on LVM. This should be the reason why multipath.conf was included in the initramfs but /etc/multipath/bindings was not.

To solve the issue we can rebuild the initrd and restart the system. Re-installing the existing kernel or installing a new kernel would also fix the issue, as the initrd would be rebuilt in both cases.

# dracut -v -f 2.6.32-131.0.15.el6.x86_64
Categories: DBA Blogs

WARNING!!! Maximum Load 800 lbs

Mon, 2015-01-05 13:45

Is it just my recently discovered dumbness, or am I really on to something here? I just returned home from a ritzy shopping mall, having had to ride down an elevator with a brimming shopping trolley filled with items which my companion miraculously managed to buy from every sale in every corner of that mall. Anyway, before I recall that shopping bill and weep copiously, I just wanted to share something which always bugs me when I use elevators.

Inside an elevator, there are only two things to do: either stare at the faces of the other riders and enjoy the embarrassment of eyes shifting from face to face, or look at the walls. Like others, I always select the latter option. And one thing every elevator says to me is something like 'Maximum Load 800 lbs', or some other number.

Now that not only baffles me but also creates a passive panic. I quickly calculate my weight, then frantically and stealthily glance at the other bodies inside, and then guess the total weight in there. More often than not, it turns out that the weight of the bodies plus their heavyset trollies exceeds, or teeters on the edge of, the warning lbs.

At this very moment, the other elevator hobbits notice my ashen face, violently trembling body and sunken eyes, and follow them to the warning. Some get the point and follow the ritual of getting panicked, some try to recall the emergency ambulance number after watching us, and some just don't give a flying heck.

My question here is why on Earth they write it inside the elevator, when it's too late to do anything about it. Do they really mean it seriously, or have they just assumed that the weight will never be reached? I personally know a group of people, including me, who can easily break that weight record. They all live nearby and use the same shopping center, and the same elevator of course. There is every chance that one day they will all arrive at the same time and use that elevator. What happens then? Or am I just paranoid beyond cure? Maybe the elevator people should write the warning outside and place a weighing machine at the entry to the elevator. The machine would calculate the weight as people moved in, and would close the door as soon as the limit was approached.

Interestingly enough, I have seen databases used like elevators by applications and data loaders. Applications and data loaders throng the database without any consideration of design, scalability, performance or capacity management. Unlike elevators, I have seen databases break down, or come to a grinding halt, due to runaway or, in other words, overweight applications.

That panics me in the same way as elevators do.

Categories: DBA Blogs

Log Buffer #404, A Carnival of the Vanities for DBAs

Fri, 2015-01-02 09:35

With the new year already in fast gear, bloggers are leaving no stone unturned to come up with innovative ideas. This Log Buffer edition keeps pace with them, as always.


While playing with 12c, Scott tried out the DEFAULT column syntax that now allows sequences.

This is an age-old question, and of course the answer depends on how you say "SQL".

Happy New Year! Upgraded Grid Infrastructure to and applied the Oct 2014 PSU. Had an error during as well, due to the ASM spfile being on disk instead of on ASM diskgroup.

If you (already) created your first Oracle Service Bus 12c application/project with SOAP webservices and tried to deploy it to your IntegratedWeblogic server you might be familiar with this error.

Using Drag-Drop functionality in af:treeTable to move data between nodes.

SQL Server:

Hadoop has been making a lot of noise in the Big Data world.

Let's look at two different ways of creating an HDInsight Cluster: creating one through the Azure Management Portal, and creating one through Windows Azure PowerShell.

Why you need test driven development.

SQL Server Data Import System to Alert For Missed Imports.

Create stunning visualizations with Power View in 20 minutes or less!


So assume you just uploaded the certificate you use to identify yourself to the MySQL server to GitHub or some other place it doesn't belong… and there is no undelete.

MySQL Plugin for Oracle Enterprise Manager on VirtualBox: installation gotchas.

MariaDB slave restore using GTID & xtrabackup bug.

How small changes impact complex systems – MySQL example.

In this post, Louis talks about MHA GTID behavior; we test different cases and find something that differs from previous versions.

Categories: DBA Blogs