
Pythian Group

Official Pythian Blog - Love Your Data

Pythian at Oracle OpenWorld 2014

Fri, 2014-09-05 14:41

Calling all Pythian fans, clients, and partners! It’s that time of year again, with Oracle OpenWorld 2014 fast approaching. Pythian is excited to be participating once again with our rockstar team of experts in all things Oracle, including Database 12c, Oracle Applications (EBS, GoldenGate), engineered systems, MySQL, and more. We are thrilled to have multiple Pythian folks presenting the sessions listed below, with more attending in tow, including our newest friends and colleagues formerly of BlackbirdIT. Keep a lookout for our signature black “Love Your Data” t-shirts.

We’re also excited to again be co-hosting the Annual Bloggers Meetup with our good friends at the Oracle Technology Network. Keep your eyes peeled for a blog post from Alex Gorbachev, Pythian’s CTO, providing details including contest fun & reviews of past years of mayhem and madness.

It’s not Oracle OpenWorld without a conference within a conference. Cue OakTable World and an action-packed agenda for all the hardcore techies out there. Catch up with Alex and Jeremiah on Tuesday.

Vasu Balla will also be attending the Oracle DIS Partner Council Meeting and the Oracle EBS ATG Customer Advisory Board, helping share Pythian’s thought leadership.


Attention Pythian partners and clients: if you’re attending, please reach out to us for details on social happenings you won’t want to miss!

Pythian’s dynamic duo of Emilia (Partner Program Mgr / 613 355 5038) and Vanessa (Dir. of BD / 613 897 9444) are orchestrating this year’s efforts. We’ll be live-tweeting up-to-the-minute show action from @pythianpartners, which is also the best way to get hold of any of the Pythian team.

See you there! #oow14 #pythianlife



Pythian Sessions at Oracle OpenWorld 2014

Thou Shalt Not Steal: Securing Your Infrastructure in the Age of Snowden
Presented by Paul Vallee 
Sunday, Sep 28, 9:00 AM – 9:45 AM – Moscone South – 310

Session ID UGF9199: “In June 2013, Edward Snowden triggered the most costly insider security leak in history, forcing organizations to completely rethink how they secure their infrastructure. In this session, the founder of Pythian discusses how he supervises more than 200 database and system administrators as they perform work on some of the world’s most valuable and mission-critical data infrastructures.”


24/7 Availability with Oracle Database Application Continuity
Presented by Jeremiah Wilton (@oradebug) and Marc Fielding (@mfild)
Sunday, Sep 28, 9:00 AM – 9:45 AM – Moscone South – 309

Session ID UGF2563: “Oracle Real Application Clusters (Oracle RAC) enables databases to survive hardware failures that would otherwise cause downtime. Transparent application failover and fast application notification can handle many failure scenarios, but in-flight transactions still require complex application-level state tracking. With application continuity, Java applications can now handle failure scenarios transparently to applications, without data loss. In this session, see actual code and a live demonstration of application continuity during a simulated failure.”


Time to Upgrade to Oracle Database 12c
Presented by Michael Abbey (@MichaelAbbeyCAN)
Sunday, Sep 28, 9:00 AM – 9:45 AM – Moscone South – 307

Session ID UGF2870: “Oracle Database 12c has been out for more than a year now. There is a handful of off-the-shelf features of Oracle Database 12c that can serve the growing requirements of all database installations, regardless of the applications they support and the options for which an installation is licensed. This session zeros in on the baseline enhancements to the 12c release, concentrating on the likes of the Oracle Recovery Manager (Oracle RMAN) feature of Oracle Database; pluggable databases; and a handful of new opportunities to perform many resource-intensive operations by splitting work among multiple separate processes.”


Oracle RMAN in Oracle Database 12c: The Next Generation
Presented by René Antunez (@grantunez)
Sunday, Sep 28, 10:00 AM – 10:45 AM – Moscone South – 309

Session ID UGF1911: “The Oracle Recovery Manager (Oracle RMAN) feature of Oracle Database has evolved since being released in Oracle8i Database. With the newest version of Oracle Database, 12c, Oracle RMAN has great new features that will enable you to reduce your downtime in case of a disaster. In this session, you will learn about the new features introduced in Oracle Database 12c and how you can take advantage of them from the first day you upgrade to this version.”


Experiences Using SQL Plan Baselines in Production
Presented by Nelson Calero (@ncalerouy)
Sunday, Sep 28, 12:00 PM – 12:45 PM – Moscone South – 250

Session ID UGF7945: “This session shows how to use the Oracle Database SQL Plan Baselines functionality, with examples from real-life usage in production (mostly Oracle Database 11g Release 2) and how to troubleshoot it. SQL Plan Baselines is a feature introduced in Oracle Database 11g to manage SQL execution plans to prevent performance regressions. The presentation explains concepts and presents examples, and you will encounter some edge cases.”


Getting Started with Database as a Service with Oracle Enterprise Manager 12c
Presented by René Antunez
Sunday, Sep 28, 3:30 PM – 4:15 PM – Moscone South – 307

Session ID UGF1941: “With the newest version of Oracle Database 12c, with Oracle Multitenant, we are moving toward an era of provisioning databases to our clients faster than ever, even leaving out the DBA and enabling the developers and project leads to provision their own databases. This presentation gives you insight into how to get started with database as a service (DBaaS) and the latest version of Oracle Enterprise Manager, 12c, and get the benefit of this upcoming database era.”


Using the Oracle Multitenant Option to Efficiently Manage Development and Test Databases
Presented by Marc Fielding (@mfild) and Alex Gorbachev (@alexgorbachev)
Wednesday, Oct 1, 12:45 PM – 1:30 PM – Moscone South – 102

Session ID CON2560: “The capabilities of Oracle Multitenant for large-scale database as a service (DBaaS) environments are well known, but it provides important benefits for nonproduction environments as well. Developer productivity can be enhanced by providing individual developers with their own separate pluggable development databases, done cost-effectively by sharing the resources of a larger database instance. Data refreshes and data transfers are simple and fast. In this session, learn how to implement development and testing environments with Oracle Multitenant; integrate with snapshot-based storage; and automate the process of provisioning and refreshing environments while still maintaining high availability, performance, and cost-effectiveness.”


Oracle Database In-Memory: How Do I Choose Which Tables to Use It For?
Presented by Christo Kutrovsky (@kutrovsky)
Wednesday, Oct 1, 4:45 PM – 5:30 PM – Moscone South – 305

Session ID CON6558: “Oracle Database In-Memory is the most significant new feature in Oracle Database 12c. It has the ability to make problems disappear with a single switch. It’s as close as possible to the fast=true parameter everyone is looking for. Question is, How do you find which tables need this feature the most? How do you find the tables that would get the best benefit? How do you make sure you don’t make things worse by turning this feature on for the wrong table? This highly practical presentation covers techniques for finding good candidate tables for in-memory, verifying that there won’t be a negative impact, and monitoring the improvements afterward. It also reviews the critical inner workings of Oracle Database In-Memory that can help you better understand where it fits best.”


Customer Panel: Private Cloud Consolidation, Standardization, & Automation
Presented by Jeremiah Wilton (@oradebug)
Thursday, Oct 2, 12:00 PM – 12:45 PM – Moscone South – 301

Session ID CON10038: “Attend this session to hear a panel of distinguished customers discuss how they transformed their IT into agile private clouds by using consolidation, standardization, and automation. Each customer presents an overview of its project and key lessons learned. The panel is moderated by members of Oracle’s private cloud product management team.”


Achieving Zero Downtime During Oracle Application and System Migrations – Co-presented with Oracle
Presented by Gleb Otochkin (@sky_vst) and Luke Davies (@daviesluke)
Thursday, Oct 2, 10:45 AM – 11:30 AM – Moscone West – 3018

Session ID CON7655: “Business applications—whether mobile, on-premises, or in the cloud—are the lifeline of any organization. Don’t let even planned outage events such as application upgrades or database/OS migrations hinder customer sales and acquisitions or adversely affect your employees’ productivity. In this session, hear how organizations today are using Oracle GoldenGate for Oracle Applications such as Oracle E-Business Suite and the PeopleSoft, JD Edwards, Siebel, and Oracle ATG product families in achieving zero-downtime application upgrades and database, hardware, and OS migrations. You will also learn how to use Oracle Data Integration products for real-time, operational reporting without degrading application performance. That’s Oracle AppAdvantage, and you can have it too.”


Categories: DBA Blogs

Loose Coupling and Discovery of Services With Consul — Part 2

Fri, 2014-09-05 08:33
Creating a Consul Client Docker Image

In my previous post, I demonstrated how to create a cluster of Consul servers using a pre-built Docker image. I was able to do this because our use case was simple: run Consul.

In this post, however, we will be creating one or more Consul clients that will register services they provide, which can then be queried using Consul’s DNS and / or HTTP interfaces. As we are now interested in running Consul and an application providing a service, things are going to get a bit more complicated.

Before proceeding, I’m going to need to explain a little bit about how Docker works. Docker images, such as the progrium/consul image used in the previous post, are built using instructions from a special file called a Dockerfile. There are two related instructions that can be specified in this file which control the container’s running environment, that is, the process or shell that is run in the container. They are ENTRYPOINT and CMD.

There can be only one ENTRYPOINT instruction in a Dockerfile, and it has two forms: an array of strings, which is treated like an exec, or a simple string, which is executed via ‘/bin/sh -c’. When you specify an ENTRYPOINT, the whole container runs as if it were just that executable.

The CMD instruction is a bit different. It too can only be specified once, but it has three forms: the first two are the same as ENTRYPOINT, but the third form is an array of strings which will be passed as parameters to the ENTRYPOINT instruction. It’s important to note that parameters specified in an ENTRYPOINT instruction cannot be overridden, but ones in CMD can. Therefore, the main purpose of CMD is to provide defaults for an executing container.
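To make the interplay concrete, consider this minimal, hypothetical Dockerfile (purely illustrative; not part of the image we are building here):

```dockerfile
# ENTRYPOINT fixes the executable; CMD supplies default arguments
# that can be overridden on the `docker run` command line.
FROM ubuntu:14.04
ENTRYPOINT ["/bin/echo", "hello"]
CMD ["world"]
```

With this image, `docker run <image>` would print `hello world`, while `docker run <image> consul` would print `hello consul`: the CMD default was overridden, but the ENTRYPOINT still ran (it can only be replaced with the `--entrypoint` flag).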

It’s probably becoming clear to you by now that Docker images are designed to run one process or shell. We want to run two processes, however: the Consul agent and an application. Thankfully, there is an image available called phusion/baseimage that provides runit for service supervision and management, which will make it easy for me to launch Consul and another service (such as nginx) within my containers.

SIDEBAR: There is quite a bit of debate about the intended / recommended use of Docker, and whether the process run in the container should be your application or an init process that will spawn, manage and reap children. If you’re interested in reading more about the pros and cons of each of these approaches, please refer to Jérôme Petazzoni’s post, phusion’s baseimage-docker page, and / or Google the topics of ‘separation of concerns’ and ‘microservices’.

Now that I’ve provided some background, let’s get into the specifics of the Docker image for my Consul clients. I’ll begin with the full contents of the Dockerfile and then describe each section in detail.

FROM phusion/baseimage:latest

# Disable SSH
RUN rm -rf /etc/service/sshd /etc/my_init.d/

# Install nginx
RUN add-apt-repository -y ppa:nginx/stable && \
  apt-get update && \
  apt-get install -y nginx zip && \
  chown -R www-data:www-data /var/lib/nginx

# Clean up apt
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# Define mountable directories.
VOLUME ["/data", "/etc/nginx/sites-enabled", "/var/log/nginx"]

# Add runit configuration for nginx
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
ADD files/nginx/nginx-runit /etc/service/nginx/run
RUN chmod 755 /etc/service/nginx/run

# Install consul
RUN curl -s -L -O
RUN unzip -d /usr/bin
RUN chmod 555 /usr/bin/consul

# Add service configuration
ADD files/consul/consul.json /etc/consul.d/consul.json
RUN chmod 644 /etc/consul.d/consul.json

# Add runit configuration for consul
ADD files/consul/consul-runit /etc/service/consul/run
RUN chmod 755 /etc/service/consul/run

# Expose nginx ports
EXPOSE 80 443

# Expose consul ports
EXPOSE 53/udp 8300 8301 8301/udp 8302 8302/udp 8400 8500

ENV HOME /root

ENTRYPOINT [ "/sbin/my_init" ]

The first section specifies that my image will be based on that of phusion/baseimage, and that I am the maintainer of my image. So far so good.

Next, I am removing the SSHD service from the container. This is part of phusion’s image and is not something I am interested in using for the purposes of my demonstration.

The next step installs nginx and should look fairly straightforward. I have taken the liberty of installing zip at the same time, as I will be using it later on to install Consul.

The VOLUME instruction lets me define mount points that can be used for mounting volumes in the container, passed as arguments to the docker run command. I am not actually using this in my demonstration; it is just there to make you aware of the capability.

Next I am telling nginx not to daemonize itself, and am adding an nginx configuration for runit. The ADD instruction adds a local file to the image in the specified path. The runit configuration I am adding is pretty simple and looks like this:

exec /usr/sbin/nginx -c /etc/nginx/nginx.conf 2>&1

Now that I am done with nginx, I want to install and configure Consul. I simply retrieve the binary package and extract it into /usr/bin in the image. I then use another ADD instruction to supply a configuration file for Consul. This file is in JSON format and tells Consul to register a service named ‘nginx’.

{
	"service": {
		"name": "nginx",
		"port": 80
	}
}
I then use an ADD instruction to supply a runit configuration for Consul in the same manner I did for nginx. Its content is as follows:

if [ -f "/etc/" ]; then
  source /etc/
fi

# Make sure to use all our CPUs, because Consul can block a scheduler thread
export GOMAXPROCS=`nproc`

# Get the public IP
BIND=`ifconfig eth0 | grep "inet addr" | awk '{ print substr($2,6) }'`

exec /usr/bin/consul agent \
  -config-dir="/etc/consul.d" \
  -data-dir="/tmp/consul" \
  -bind=$BIND \
  $CONSUL_FLAGS \
  >>/var/log/consul.log 2>&1

With all of the hard stuff out of the way, I now define the nginx and Consul ports to EXPOSE to other containers running on the host, and to the host itself.

And last but not least, I set the HOME environment variable to /root and set the init process of /sbin/my_init as the ENTRYPOINT of my container.

This creates a good foundation. If we were to run the image as is, we would end up with nginx running and listening on port 80, and Consul would be running as well. However, we haven’t provided Consul with any details of the cluster to join. As you have probably guessed, that’s what CONSUL_FLAGS is for, and we’ll see it in action in the next section.

Creating Consul Clients With Vagrant

So far we’ve gone to the trouble of creating a Docker image that will run Consul and nginx, and we’ve supplied configuration to Consul that will have it register nginx as a service. Now we’ll want to create some clients with Vagrant and see querying of services in action.

Let’s start by modifying our Vagrantfile. Just as was done with the Consul servers, we’ll want to create an array for the nginx members and tell Vagrant to use the Docker provider. This time, however, some additional configuration will be necessary. The full Vagrantfile is now going to look like this:


VAGRANTFILE_API_VERSION = "2"

# Cluster address to join, supplied via the environment, e.g. JOIN_IP=<ip> vagrant up
JOIN_IP = ENV['JOIN_IP']

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # A list of containers to define.
  # These will be the Consul cluster members.
  consul_members = [ "consul1", "consul2", "consul3" ]
  consul_members.each do |member|
    config.vm.define member do |consul_config|

      # Use Docker provider
      consul_config.vm.provider "docker" do |docker|
        docker.name  = member
        docker.image = 'progrium/consul'
        docker.cmd   = [ "-server", "-node=#{member}", "-join=#{JOIN_IP}" ]
      end
    end
  end

  # Create an nginx container running the consul agent
  nginx_members = [ "nginx1", "nginx2", "nginx3" ]
  nginx_members.each do |member|
    config.vm.define member do |nginx_config|
      nginx_config.vm.provider "docker" do |docker|
        docker.name        = member
        docker.build_args  = [ "-t", "bfraser/consul-nginx", "--rm=true" ]
        docker.build_dir   = "."
        docker.cmd         = [ "/bin/bash" ]
        docker.create_args = [ "--dns=#{JOIN_IP}", "-t", "-i" ]
        docker.env         = { "CONSUL_FLAGS" => "-node=#{member} -join=#{JOIN_IP}" }
      end
    end
  end
end

Note that this time we are not using docker.image to supply the name of an existing Docker image to use for our containers. Instead, we are going to use docker.build_args and docker.build_dir to build our own.

docker.build_args = [ "-t", "bfraser/consul-nginx", "--rm=true" ]

This is a list of extra arguments to pass to the docker build command. Specifically, I am naming the image bfraser/consul-nginx and telling Docker to remove intermediate containers after a successful build.

docker.build_dir = "."

This should be fairly self-explanatory: I am simply telling Docker to use the current working directory as the build directory. However, I have some files (including the Vagrantfile) that I do not want to be part of the resulting image, so it is necessary to tell Docker to ignore them. This is accomplished with a file called .dockerignore and mine looks like this:
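The original listing was not preserved here; a minimal version (hypothetical beyond the Vagrantfile entry explicitly mentioned above) might be:

```
Vagrantfile
.vagrant
```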


Next, I am using docker.cmd to pass /bin/bash as an extra parameter to the image’s ENTRYPOINT, which allows me to have a shell in the container. A little later, I will show you how this can be useful.

The next line:

docker.create_args = [ "--dns=#{JOIN_IP}", "-t", "-i" ]

is a list of extra arguments to pass to the ‘docker run‘ command. Specifically, I am providing a custom DNS server and instructing Docker to allocate a TTY and keep STDIN open even if not attached to the container.

Lastly, I am supplying a hash to docker.env which will expose an environment variable named CONSUL_FLAGS to the container. The environment variable contains additional parameters to be used when starting Consul.

With this configuration in place, we can now use Vagrant to create three additional containers, this time running Consul and nginx.

$ JOIN_IP= vagrant up --provider=docker

This time if we check the output of ‘consul members‘ we should see our host and six containers: three Consul servers and three nginx servers functioning as Consul clients.

$ consul members -rpc-addr=
Node     Address           Status  Type    Build  Protocol
nginx1  alive   client  0.3.0  2
nginx2  alive   client  0.3.0  2
laptop  alive   server  0.3.0  2
consul2   alive   server  0.3.0  2
consul3  alive   server  0.3.0  2
consul1   alive   server  0.3.0  2
nginx3  alive   client  0.3.0  2
Querying Services

As I mentioned in ‘Where Does Consul Fit In?’ in my original post, Consul is a tool for enabling discovery of services, and it provides two interfaces for doing so: DNS and HTTP. In this section, I’ll show you how we can use each of these interfaces to query for details of services being provided.

First, let’s use the HTTP interface to query which services are being provided by members of the Consul cluster.

$ curl | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    25  100    25    0     0  12722      0 --:--:-- --:--:-- --:--:-- 25000
{
    "consul": [],
    "nginx": []
}

This returns JSON-encoded data which shows that ‘consul’ and ‘nginx’ services are being provided. Great, now let’s query for details of the ‘nginx’ service.

$ curl | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   362  100   362    0     0   210k      0 --:--:-- --:--:-- --:--:--  353k
[
    {
        "Address": "",
        "Node": "nginx1",
        "ServiceID": "nginx",
        "ServiceName": "nginx",
        "ServicePort": 80,
        "ServiceTags": null
    },
    {
        "Address": "",
        "Node": "nginx2",
        "ServiceID": "nginx",
        "ServiceName": "nginx",
        "ServicePort": 80,
        "ServiceTags": null
    },
    {
        "Address": "",
        "Node": "nginx3",
        "ServiceID": "nginx",
        "ServiceName": "nginx",
        "ServicePort": 80,
        "ServiceTags": null
    }
]

We can see here that there are three nodes providing the nginx service, and we have details of the IP address and port they are listening on. Therefore, if we were to open one of those addresses in a web browser, we would see the ‘Welcome to nginx!’ page.

Notice how the REST endpoint changed between the first and second curl requests, from /v1/catalog/services to /v1/catalog/service/nginx. Consul provides extensive documentation of the various REST endpoints available via the HTTP API.
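Because the API simply returns JSON, it is easy to glue into other tooling. As a hedged sketch (the file name and addresses below are made up for illustration; a real script would curl the endpoint instead of reading a saved file), here is how the response shape above could be turned into address:port pairs:

```shell
# Illustration only: a saved /v1/catalog/service/nginx response, with
# hypothetical addresses from Docker's default bridge network range.
cat > /tmp/nginx-catalog.json <<'EOF'
[
  {"Node": "nginx1", "Address": "172.17.0.4", "ServiceID": "nginx",
   "ServiceName": "nginx", "ServicePort": 80, "ServiceTags": null},
  {"Node": "nginx2", "Address": "172.17.0.5", "ServiceID": "nginx",
   "ServiceName": "nginx", "ServicePort": 80, "ServiceTags": null}
]
EOF

# Print one "address:port" line per service provider.
python3 - <<'EOF'
import json

with open("/tmp/nginx-catalog.json") as f:
    for entry in json.load(f):
        print("%s:%d" % (entry["Address"], entry["ServicePort"]))
EOF
```

The output here would be `172.17.0.4:80` and `172.17.0.5:80`, ready to template into, say, a load balancer’s backend list.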

While the HTTP API is the most powerful method of interacting with Consul, if we are only interested in querying for information about nodes and services, it is also possible to use its DNS server for simple name lookups. Querying for details of the nginx service via the DNS interface is as simple as running the following:

$ dig @ -p 8600 nginx.service.consul

; <<>> DiG 9.9.5-3-Ubuntu <<>> @ -p 8600 nginx.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 3084
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;nginx.service.consul.		IN	A

nginx.service.consul.	0	IN	A
nginx.service.consul.	0	IN	A
nginx.service.consul.	0	IN	A

;; Query time: 1 msec
;; WHEN: Sat Aug 16 22:35:51 EDT 2014
;; MSG SIZE  rcvd: 146

As you can see, while it is certainly possible to develop a client to tightly integrate with Consul through its API, it is also easy to take advantage of its DNS interface and not have to write a client at all.

Attaching To A Docker Container

I have one last tip, which is especially useful if you are new to Docker: how to attach to your containers.

I mentioned earlier in this post that I was including the following line in my Vagrantfile:

docker.cmd = [ "/bin/bash" ]

What this does is pass /bin/bash as an extra parameter to the image’s ENTRYPOINT instruction, resulting in the /sbin/my_init process spawning a bash shell.

I also instructed Vagrant, via the docker.create_args line, to have Docker allocate a TTY and keep STDIN open even if not attached to the container. This means I can attach to my containers and interact with them through a bash shell as follows (note: press the ‘Enter’ key following the command to get the prompt):

$ docker attach nginx1


Once you are done working with the container, you can detach from it by pressing ^P^Q (that’s CTRL-P followed by CTRL-Q).


With that, we have reached the end of my demonstration. Thanks for sticking with me!

First I described the importance of loose coupling and service discovery in modern service-oriented architectures, and how Consul is one tool that can be used for achieving these design goals.

Then I detailed, by way of a demonstration, how Vagrant and Docker can be used to form a Consul cluster, and how to create a custom Docker image that will run both your application and a Consul agent.

And, last but not least, I showed how you can make use of Consul’s HTTP API and DNS interface to query for information about services provided.

Hopefully you have found these posts useful and now have some ideas about how you can leverage these technologies for managing your infrastructure. I encourage you to provide feedback, and would be very interested in any tips, tricks or recommendations you may have!

Categories: DBA Blogs

Log Buffer #387, A Carnival of the Vanities for DBAs

Fri, 2014-09-05 08:04

The benefits of blogs transcend any one technology: they let bloggers record their valued experiences, and they give readers first-hand practical information. This Log Buffer Edition shares those benefits from Oracle, SQL Server, and MySQL.

Oracle:


Cloud Application Foundation is the innovator’s complete and integrated modern cloud application infrastructure, built using best of breed components, such as Oracle WebLogic Server 12c, the industry’s best application server for building and deploying enterprise Java EE applications.

Migrating Existing PeopleSoft Attachments into the Managed Attachments Solution.

How to identify SQL performing poorly on an APEX application?

How can you keep your Oracle Applications and systems running at peak performance? What will it take to get more out of your Oracle Premier Support coverage?

Projects Create Accounting Performance Issues With Database Version.

SQL Server:

If your log restores aren’t happening when they’re meant to, you want to know about it. You’ll be relying on restoring from logs should anything happen to your databases, and if you can’t restore to a certain point in time, you risk losing valuable data.

Agile data warehousing can be challenging. Pairing the right methodologies and tools can help.

Introduction to Azure PowerShell Modules for the SQL Server DBA.

The Clustered columnstore index generates “unable to find index entry” error and a memory dump after few DMLs on the table.

With any application organizations face consistent key challenges such as high efficiency and business value, complex configuration, and low total cost of ownership. Extending applications to the cloud in hybrid scenarios addresses many of these challenges.

MySQL:


Analyzing Twitter Data using Datasift, MongoDB and Pig

Cloud storage for MySQL Enterprise Backup (MEB) users

Tracing down a problem, finding sloppy code.


MySQL Enterprise Backup (MEB) is a highly efficient tool for taking backups of your MySQL databases.

Categories: DBA Blogs

Fixing Windows RAC Listener to Listen On IP Address of Hostname

Wed, 2014-09-03 08:31

Recently, there was an issue after a node was added to an existing Windows RAC cluster. After everything was set up, the local listener was not listening on the IP address corresponding to the node’s hostname.

The listener.ora was configured properly with the following line:


The above setting works properly on Linux, but does not on Windows.

The listener.log showed the following error message, when listener started:

TNS-12542: TNS:address already in use
TNS-12560: TNS:protocol adapter error
TNS-00512: Address already in use
64-bit Windows Error: 48: Unknown error

Status of the listener was

lsnrctl status

LSNRCTL for 64-bit Windows: Version - Production on 29-AUG-2014 04:16:42

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Alias                     LISTENER
Version                   TNSLSNR for 64-bit Windows: Version - Production
Start Date                19-AUG-2014 01:47:12
Uptime                    10 days 2 hr. 29 min. 33 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   P:\oracle\app\11.2.0\grid\network\admin\listener.ora
Listener Log File         P:\oracle\app\11.2.0\grid\log\diag\tnslsnr\DEV-02\listener\alert\log.xml
Listening Endpoints Summary...
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+asm2", status READY, has 1 handler(s) for this service...
Service "DEV.WORLD" has 1 instance(s).
  Instance "dev2", status READY, has 1 handler(s) for this service...
Service "DEVXDB.WORLD" has 1 instance(s).
  Instance "dev2", status READY, has 1 handler(s) for this service...
The command completed successfully

The node’s vip address was

To force the local listener to listen on the IP address corresponding to the node’s hostname, the listener.ora file was changed as follows:
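As a hedged sketch of one common form of this fix (not necessarily the exact change made in this incident), the IP=FIRST qualifier tells the listener to bind to the first IP address that the hostname resolves to:

```
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = DEV-02)(PORT = 1521)(IP = FIRST))
  )
```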



After the listener was restarted using

srvctl stop listener -n DEV-02

srvctl start listener -n DEV-02

The listener started listening on the IP address corresponding to the hostname (DEV-02).

lsnrctl status

LSNRCTL for 64-bit Windows: Version - Production on 01-SEP-2014 17:52:46

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Alias                     LISTENER
Version                   TNSLSNR for 64-bit Windows: Version - Production
Start Date                01-SEP-2014 17:52:46
Uptime                    0 days 0 hr. 1 min. 3 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   P:\oracle\app\11.2.0\grid\network\admin\listener.ora
Listener Log File         P:\oracle\app\11.2.0\grid\log\diag\tnslsnr\DEV-02\listener\alert\log.xml
Listening Endpoints Summary...
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+asm2", status READY, has 1 handler(s) for this service...
Service "DEV.WORLD" has 1 instance(s).
  Instance "dev2", status READY, has 1 handler(s) for this service...
Service "DEVXDB.WORLD" has 1 instance(s).
  Instance "dev2", status READY, has 1 handler(s) for this service...
The command completed successfully
Categories: DBA Blogs

Loose Coupling and Discovery of Services With Consul — Part 1

Wed, 2014-09-03 08:16

Today I would like to demonstrate the use of Consul to help achieve loose coupling and discovery of services, two important principles of service-oriented architecture (SOA) present in modern, elastic infrastructures.

If you have ever designed an architecture to be deployed in a cloud computing environment, then you are probably already familiar with the idea that you should design for failure. Having the expectation that one or more components of your infrastructure may fail at any given time forces you to design your services in such a way that they are loosely coupled from one another. This usually means running multiple instances of web, application, and database servers, and making use of load balancers, message queues and / or APIs for communication between them.

The environment is scaled horizontally, by adding more instances to the pool. Because of this, instances need to be able to advertise the service they provide, and discover providers of other services.

Where Does Consul Fit In?

Consul’s introduction page does an excellent job of describing the features it provides and what its basic architecture looks like, but I’ll provide a brief summary here.

Consul is a tool for discovering and configuring services in your infrastructure. At its core, it provides service discovery, health checking, and a key/value store. It supports multiple data centers without having to add a layer of abstraction. Nodes that provide services run a Consul agent which talks to one or more Consul servers. Services or nodes can be discovered by querying any of the Consul servers or agents. Data is exposed through DNS and / or HTTP interfaces.

Its capabilities overlap with a number of different types of software including: service discovery systems such as ZooKeeper and etcd; configuration management systems such as Puppet or Chef; and monitoring systems such as Nagios or Sensu. Again, there is an excellent page on the Consul website that describes how it compares with these tools.

The Demo Environment

For my Consul demonstration, I elected to use a combination of Docker and Vagrant. Docker because it makes it easy for me to run multiple lightweight containers on the same machine, and Vagrant because it gives me a great deal of flexibility in building and controlling the containers I will be creating. If you are unfamiliar with either of these tools, allow me to provide a brief overview.


Per Wikipedia, Docker “… is an open-source project that automates the deployment of applications inside software containers”. It consists of: Docker Engine, the container runtime that runs on the host operating system; and Docker Hub, a cloud service similar to GitHub which allows users to share containers and automate workflows.

Docker makes use of the following features of the Linux kernel: cgroups (or control groups), which make it possible to limit and isolate resource usage (CPU, memory, etc) of process groups; and namespace isolation, where process groups’ view of operating environment resources such as process trees, network, and file systems are isolated from one another.


Vagrant is a tool for building complete, reproducible development environments, with a focus on automation. It started out as essentially a ‘wrapper’ for VirtualBox, but now supports VMware, Docker, kvm/libvirt and Amazon EC2 as providers, as well as a number of provisioners including Puppet, Chef, and Docker.

Vagrant was created by Mitchell Hashimoto, who initially maintained it in his free time. After it went on to become wildly successful, he formed HashiCorp so he could work on the product full time. HashiCorp has since released other products including Packer, Serf, Consul, and most recently Terraform.

Installing The Tools

Docker

Ubuntu 14.04 LTS includes Docker in its universe repository (under a different package name than the upstream project), and it can therefore be installed this way:

$ sudo apt-get update
$ sudo apt-get install

Keep in mind that this may not be the latest version of Docker. If you would like to try the latest version, you will need to add the Docker repository key and add the repository to apt’s sources list. A script has been made available to automate this process for you, so all that is necessary is to run the following:

$ curl -sSL | sudo sh

On Fedora, Docker is provided by the package named docker-io. If you are running Fedora 19 and have the (unrelated) docker package installed, you will need to remove it before continuing.

$ sudo yum -y remove docker

With Fedora 21 and later, the docker package’s functionality is provided by another package named wmdocker, and it does not conflict with docker-io. Use the following command to install Docker.

$ sudo yum -y install docker-io

Whichever operating system you are running Docker on, you will likely want to be able to run Docker commands as your regular, non-privileged user, without having to elevate privileges to root. Therefore, you will probably want to make yourself a member of the docker group so you have access to the socket file used by the various Docker commands.

$ sudo usermod -a -G docker your_name

For Vagrant, you will need at least version 1.6.0 as that is when the Docker provider was introduced. This demonstration was tested with version 1.6.3. To install Vagrant, visit its download page and obtain the appropriate package for your operating system. You can install the package on Ubuntu using the following command:

$ sudo dpkg -i vagrant_1.6.3_x86_64.deb

That’s it. In the next section, we will install Consul and continue with setting up our cluster.

Setting Up The Consul Cluster

Let’s begin by establishing the Consul cluster, which will be used by nodes to register themselves as providers of services, and can be queried to discover which services are being provided.

The first step is to download and install Consul on the host which will be running the Docker containers.

$ wget
$ unzip -d /usr/local/bin/

Now we can start a Consul server that will bind to the IP address of the docker0 interface on your host.

$ consul agent -server -bootstrap -bind= -client= -data-dir /tmp/consul

This server will elect itself leader of the cluster (since it is currently the only member), and is what the rest of the Consul servers will connect to when joining. Ultimately, the architecture will look like this:

Consul Demo Architecture

Before we can bring up the rest of the Consul servers, however, we’ll need to do some prep work. As mentioned above, the intent is to use Vagrant with the Docker provider to create this environment. Therefore, let’s begin by creating a working directory to contain our Vagrantfile.

$ mkdir -p vagrant/consul && cd vagrant/consul
$ vagrant init

The contents of Vagrantfile should look like this:


VAGRANTFILE_API_VERSION = "2"

# Assign the JOIN_IP environment variable to a local variable of the same name
JOIN_IP = ENV['JOIN_IP']

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # An array of containers to define.
  # These will be the Consul cluster members.
  consul_members = [ "consul1", "consul2", "consul3" ]

  consul_members.each do |member|
    config.vm.define member do |consul_config|

      # Use the Docker provider
      consul_config.vm.provider "docker" do |docker| = member
        docker.image = 'progrium/consul'
        docker.cmd = [ "-server", "-node=#{member}", "-join=#{JOIN_IP}" ]
      end
    end
  end
end

What should be self-evident is that Vagrant is assigning the value of an environment variable called JOIN_IP to a local variable with the same name, and then enumerating through an array of three Consul members, which it will create using the Docker provider.

What might not be so obvious, however, are the docker.image and docker.cmd lines, so I will explain them in greater detail.

docker.image = 'progrium/consul'

This line tells Docker to launch (or ‘run’) an instance of the progrium/consul image as found on the Docker Hub Registry. It is a small BusyBox-based image used to run Consul. The project’s source page can be found on GitHub if you’re interested in learning more. The next line:

docker.cmd = [ "-server", "-node=#{member}", "-join=#{JOIN_IP}" ]

is an array of strings which is used to build a custom command to run on the container. Vagrant will perform string interpolation on the member and JOIN_IP variables, replacing them with the current member’s name and the IP address that was provided via the JOIN_IP environment variable. The end result is that the container runs a command such as this:

/bin/consul agent -config-dir=/config -server -node=consul1 -join=
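The interpolation step can be illustrated outside of Ruby as well. Here is a small Python sketch (purely illustrative, not part of Vagrant; the IP address is invented example data) of how each member's command array is composed:

```python
# Illustration only: how Vagrant's string interpolation builds each
# container's command array from the member name and JOIN_IP value.
def build_cmd(member, join_ip):
    return ["-server", f"-node={member}", f"-join={join_ip}"]

for member in ["consul1", "consul2", "consul3"]:
    print(build_cmd(member, ""))
# ['-server', '-node=consul1', '-join=']
# ['-server', '-node=consul2', '-join=']
# ['-server', '-node=consul3', '-join=']
```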

Let’s see it in action by telling Vagrant to create our containers.

$ JOIN_IP= vagrant up --provider=docker
Bringing machine 'consul1' up with 'docker' provider...
Bringing machine 'consul2' up with 'docker' provider...
Bringing machine 'consul3' up with 'docker' provider...
==> consul2: Creating the container...
    consul2:   Name: consul2
    consul2:  Image: progrium/consul
    consul2:    Cmd: -server -node=consul2 -join=
    consul2: Volume: /home/bfraser/vagrant/consul:/vagrant
    consul2: Container created: d85fbfacdb45cabc
==> consul2: Starting container...
==> consul2: Provisioners will not be run since container doesn't support SSH.
==> consul1: Creating the container...
    consul1:   Name: consul1
    consul1:  Image: progrium/consul
    consul1:    Cmd: -server -node=consul1 -join=
    consul1: Volume: /home/bfraser/vagrant/consul:/vagrant
==> consul3: Fixed port collision for 22 => 2222. Now on port 2200.
==> consul3: Creating the container...
    consul3:   Name: consul3
    consul3:  Image: progrium/consul
    consul3:    Cmd: -server -node=consul3 -join=
    consul3: Volume: /home/bfraser/vagrant/consul:/vagrant
    consul1: Container created: 413dfa1a63c94bcc
==> consul1: Starting container...
==> consul1: Provisioners will not be run since container doesn't support SSH.
    consul3: Container created: fb54d80e8ce58a46
==> consul3: Starting container...
==> consul3: Provisioners will not be run since container doesn't support SSH.

This created three containers and substituted the node name and join address as expected. Now let’s see what Docker reports.

$ docker ps
CONTAINER ID        IMAGE                    COMMAND                CREATED              STATUS              PORTS                                                                          NAMES
fb54d80e8ce5        progrium/consul:latest   /bin/start -server -   About a minute ago   Up About a minute   53/udp, 8300/tcp, 8301/tcp, 8301/udp, 8302/tcp, 8302/udp, 8400/tcp, 8500/tcp   consul3             
413dfa1a63c9        progrium/consul:latest   /bin/start -server -   About a minute ago   Up About a minute   53/udp, 8300/tcp, 8301/tcp, 8301/udp, 8302/tcp, 8302/udp, 8400/tcp, 8500/tcp   consul1             
d85fbfacdb45        progrium/consul:latest   /bin/start -server -   About a minute ago   Up About a minute   53/udp, 8300/tcp, 8301/tcp, 8301/udp, 8302/tcp, 8302/udp, 8400/tcp, 8500/tcp   consul2             

There are in fact three containers running. Other details are provided, but they aren’t relevant at this point; I just wanted to show you how to view the status of your newly created Docker containers.

Now if we check Consul, we should see each of the containers listed as members of the cluster (note: we can’t just run consul members as we need to tell it which IP address to use as the RPC address).

$ consul members -rpc-addr=
Node     Address           Status  Type    Build  Protocol
laptop  alive   server  0.3.0  2
consul2   alive   server  0.3.0  2
consul1   alive   server  0.3.0  2
consul3   alive   server  0.3.0  2

As you can see, we have successfully created a cluster of Consul servers, comprised of an initial server running on the host and three servers running as Docker containers created by Vagrant. In my next post, we will see how to add Consul clients providing services (‘providers’), register the services, and query available services from clients (‘consumers’).

Categories: DBA Blogs

Data Profile: Better Knowing Your Data

Tue, 2014-09-02 08:40

Have you ever needed to start a new data analysis project or create a report for business users, querying a database you have never worked with before? Or simply needed to know the data distribution of a database to create better indexing strategies?

Working as a consultant, I constantly face this challenge of having to work with a customer’s database that I don’t know very deeply. For instance, does that “Gender” column store data as “M” and “F”, or as “Male” and “Female”? Or do they even use a bit column for that? (Yeah, I’ve seen that a lot already.) Does that “Surname” column accept NULL values? If so, what percentage of the table contains NULL for that specific column? For a date/time column, what are the minimum and maximum values, so I can create my “Time” dimension in a data warehouse?

This data discovery process, where I need an overview of the data, usually takes a lot of time and a lot of query writing, running queries of the DISTINCT, MIN, MAX, AVG kind and analyzing the result of each one. Even with a lot of really good code and third-party tools out there to complement it, it is a cumbersome task, and sometimes the customer is not willing to wait while I learn everything about their environment before seeing results.
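Each of those questions boils down to a simple aggregate over one column. As a rough illustration of what the profiling output amounts to, here is a small Python sketch (function name and sample data invented for the example, not part of SQL Server):

```python
# Minimal sketch of per-column profiling: null ratio, distinct values, min/max.
# This mirrors the DISTINCT/MIN/MAX-style queries described above.
def profile_column(values):
    """Summarize one column of data."""
    non_null = [v for v in values if v is not None]
    return {
        "null_pct": round(100.0 * (len(values) - len(non_null)) / len(values), 1),
        "distinct": sorted(set(non_null)),
        "min": min(non_null) if non_null else None,
        "max": max(non_null) if non_null else None,
    }

# Invented sample: a "Gender" column with one NULL.
genders = ["M", "F", "F", None, "M"]
print(profile_column(genders))
# {'null_pct': 20.0, 'distinct': ['F', 'M'], 'min': 'F', 'max': 'M'}
```

The Data Profiling Task automates exactly this kind of summary for every column you select, without the query writing.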

Today I want to show you a not-so-new feature that we have in SQL Server that will help with the data discovery process. The feature is the Data Profiler Task in SQL Server Integration Services and the Data Profile Viewer.

Now, that’s the time when you ask me, “Data what?!”

It’s easy, you’ll see. One of the several tasks in SQL Server Integration Services that you never use, and never took the time to google what it is used for, is called the Data Profiling Task. This task allows you to select a table and the kinds of data analysis you want to perform on that table and its columns. When you run the SSIS package, it will analyze the table and generate an XML file. Once you have the XML file, all you need to do is open it using the Data Profile Viewer, which will take care of creating a nice user interface for you to analyze the XML, as you can see in Figure 1.



Figure 1: Data Profile Viewer

Cool, now let’s see how to create our own analysis.

Step 1: Open SQL Data Tools or SQL BIDS if you’re using SQL Server 2008 R2 or below

Step 2: Create a new SSIS project

Step 3: Add the Data Profiling Task on your project


Step 4: Double-click the Data Profiling Task so we can configure it. In the General tab we have to set the Destination, that is, the location where you want to save the XML file. You can choose to save directly to the file system using a File Connection, or store the output in an XML variable inside your package in case you want to do something else with the XML, such as storing it in a database. Let’s leave the default FileConnection option for the Destination Type and click New File Connection in the Destination option.


Step 5: Now we can choose the file location. In my example I am using one of the most used folders ever on Windows: the “tmp” folder, sometimes also called “temp” or just “stuff”. (Note: the author doesn’t recommend storing everything in folders called temp, nor saving everything on the desktop.)


Step 6: OK, we’re back in the main window. We now have to choose which kinds of analysis we want to run, plus the database and the table. We have two options: the first is to use the Profile Requests tab and choose the analyses, tables, and columns one by one. The other option, and also the simplest one, is to use the Quick Profile tab. Using this option we can define one specific table and which analyses we want to run on it. If you want to run the analysis on multiple tables, you will have to click the Quick Profile option and choose them one by one (nothing in this world is perfect).


As you can see in the image above, I have chosen the Production.Product table of the AdventureWorks2012 database. In the Compute option you have to choose which data analyses you want to run. The names of the options pretty much explain what they’ll do, but if you want a detailed explanation of each one you can check the product documentation on this link:

Now all you have to do is run the SSIS package to create the XML file. Once it’s done, you can use the Data Profile Viewer tool to open the XML and analyze its results.


The Data Profile Viewer is a simple tool that doesn’t need much explanation; just try it for yourself and you’ll certainly like the result.

I hope this can help you to save some time when you need to quickly learn more about the data inside a database. If you have any questions or want to share what your approach is when you need to complete this task, just leave a comment!


Categories: DBA Blogs

Oracle Database: Script to Purge aud$ Table Using dbms_audit_mgmt Package

Tue, 2014-09-02 07:59

With the increase in database security requirements, database auditing is part of most production databases nowadays. The growth of the Oracle database audit table “aud$” always adds spice to a DBA’s life. Since this table’s growth directly impacts database performance, it has a special place in every DBA’s heart.

Traditionally we follow many methods to purge the old data from this table, and most of these methods require application downtime. Oracle introduced a built-in package named “DBMS_AUDIT_MGMT”, which gives a DBA much more control over the management of audit records.

I tried to test this package on my 11gR2 test database. Although this can be done on earlier Oracle RDBMS versions, the required patches (Patch 6989148 and Patch 6996030) must be applied on those versions, while on later versions the package is installed by default. I configured the audit_trail parameter to db_extended and enabled database auditing. Later I moved the db audit table and its associated LOB segments to a dedicated tablespace named “AUDTBS”. I confirmed the audit functionality was still running fine after the tablespace change.

Thanks to MOS notes 1362997.1 and 1508787.1, I successfully tested this purge procedure. I configured the database scheduler jobs so that they run once every 12 hours and purge data from the aud$ table that is older than 7 days. Here is the script (purge_job.sql) I used to configure the required jobs.

Script: purge_job.sql ==> Run this script as SYS database user account.

prompt Start of the script
set serveroutput on

prompt Change the default purge tablespace based on our customization
update dam_config_param$ set string_value='AUDTBS' where audit_trail_type#=1 and param_id=22;
commit;

prompt First step: init cleanup (if not already initialized)
begin
  if not dbms_audit_mgmt.is_cleanup_initialized(dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD) then
    dbms_output.put_line('Calling DBMS_AUDIT_MGMT.INIT_CLEANUP');
    dbms_audit_mgmt.init_cleanup(
      audit_trail_type         => dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD,
      default_cleanup_interval => 12);
  else
    dbms_output.put_line('Cleanup for STD was already initialized');
  end if;
end;
/

prompt Revert back to the default value again
update dam_config_param$ set string_value='SYSAUX' where audit_trail_type#=1 and param_id=22;
commit;

prompt Set last archive timestamp to older than 7 days
begin
  dbms_audit_mgmt.set_last_archive_timestamp(
    audit_trail_type  => dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD,
    last_archive_time => sysdate - 7);
end;
/

prompt Set up a purge job
begin
  dbms_audit_mgmt.create_purge_job(
    audit_trail_type           => dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD,
    audit_trail_purge_interval => 12,
    audit_trail_purge_name     => 'Standard_Audit_Trail_PJ',
    use_last_arch_timestamp    => TRUE);
end;
/

prompt Call DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP regularly to advance the last archive timestamp
create or replace procedure set_archive_retention
  (retention in number default 7) as
begin
  dbms_audit_mgmt.set_last_archive_timestamp(
    audit_trail_type  => dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD,
    last_archive_time => sysdate - retention);
end;
/

begin
  DBMS_SCHEDULER.create_job (
    job_name            => 'advance_archive_timestamp',
    job_type            => 'STORED_PROCEDURE',
    job_action          => 'SET_ARCHIVE_RETENTION',
    number_of_arguments => 1,
    start_date          => SYSDATE,
    repeat_interval     => 'freq=hourly;interval=12',
    enabled             => false,
    auto_drop           => FALSE);
  DBMS_SCHEDULER.set_job_argument_value (
    job_name          => 'advance_archive_timestamp',
    argument_position => 1,
    argument_value    => 7);
  DBMS_SCHEDULER.enable('advance_archive_timestamp');
end;
/

begin
  DBMS_SCHEDULER.run_job (
    job_name            => 'advance_archive_timestamp',
    use_current_session => FALSE);
end;
/

prompt End of the script

To verify the purge status and configured jobs status execute the following queries.

SQL> select min(NTIMESTAMP#) from aud$;
SQL> select LAST_START_DATE, LAST_RUN_DURATION, NEXT_RUN_DATE from dba_scheduler_jobs where job_name='STANDARD_AUDIT_TRAIL_PJ';

We can definitely customize this script based on requirements, which differ for each database. But testing is required on a cloned database before configuring these purge jobs on the production database.


Categories: DBA Blogs

Azure Storage: Creating, Maintaining, and Deleting SQL Server Backups

Tue, 2014-09-02 07:50

This post covers how to create, monitor, maintain, and automatically delete SQL Server backups.

What is Azure Storage and why should you use it?

Microsoft Windows Azure is Microsoft’s cloud offering for offsite storage. It offers the ability to seamlessly provision massive storage, virtual servers, SQL Server database instances, and many other options without having to worry about hardware or maintenance in house.

Many companies are currently using Azure as offsite storage for their nightly Production backups. A company chooses one of 15 datacenters that Microsoft has around the world. This datacenter automatically and transparently maintains three copies of each backup file, and also replicates to a second datacenter in a different geographic location. The replication is not real-time, but in general there will always be six copies of each backup file available in case of an emergency.

In the event the Primary datacenter fails, Microsoft will decide when or if to failover to the Secondary datacenter. However, in the coming months they plan to roll out an API which would allow individual clients to make that decision.

SQL Server 2012 SP1 CU6+ is required.

The current pricing is about $90 per month per TB of storage used.

Accessing the Azure front end
To access the Azure front end:

  • Open Internet Explorer and navigate to
  • You will be prompted to log in with a Microsoft MSDN Account. The Azure administrator in your company should have granted this account access.
  • Click on the Azure Portal icon to bring up the Azure Main Page for your account.
  • Click on the Storage icon on the left.
  • Drill down into your storage account to open the Storage Main Page.
  • Click on the Containers tab.
  • Drill down into your container. This is a list of all of the backups being written to the Production container.

The backups are ordered by their name, and unfortunately there is no way to sort by Date Modified or other field. You can see more backups than are listed on the front page by clicking on the arrow at the bottom left.

Checking the status of an Azure backup file
To check the status of a backup to Azure storage, you have two options:
1. The Azure Portal
2. SQL Server Management Studio

To use the Azure Portal, navigate to the container the backup is being written to, and find it in the list of files. If the size is 1 TB, then the backup is currently running. Using the Azure Portal, you can only see if a backup file is being created. You cannot see how much time is remaining until it is done.

To use SQL Server Management Studio, open SSMS and connect to the instance running the backups. Run the following command to get percent done & estimated time to completion:

SELECT r.session_id,
CONVERT(NUMERIC(6,2),r.percent_complete) AS [PercentComplete],
CONVERT(VARCHAR(20),DATEADD(ms,r.estimated_completion_time,GetDate()),20) AS [ETA Completion Time],
CONVERT(NUMERIC(10,2),r.total_elapsed_time/1000.0/60.0) AS [Elapsed Min],
CONVERT(NUMERIC(10,2),r.estimated_completion_time/1000.0/60.0) AS [ETA Min],
CONVERT(NUMERIC(10,2),r.estimated_completion_time/1000.0/60.0/60.0) AS [ETA Hours],
CONVERT(VARCHAR(1000),(SELECT SUBSTRING(text,r.statement_start_offset/2, CASE WHEN r.statement_end_offset = -1 THEN 1000 ELSE (r.statement_end_offset-r.statement_start_offset)/2 END) FROM sys.dm_exec_sql_text(r.sql_handle))) AS [SQL Statement]
FROM sys.dm_exec_requests r
WHERE r.command LIKE 'BACKUP%' OR r.command LIKE 'RESTORE%'
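The estimate reported by `sys.dm_exec_requests` is essentially a proportional extrapolation from the work done so far. As a rough back-of-the-envelope check (a simplification for illustration, not SQL Server's internal algorithm), the remaining time can be estimated like this:

```python
# Simplified illustration (not SQL Server's internal algorithm): extrapolate
# the remaining minutes from percent complete and elapsed minutes.
def eta_minutes(percent_complete, elapsed_min):
    if percent_complete <= 0:
        raise ValueError("no progress reported yet")
    total_min = elapsed_min * 100.0 / percent_complete
    return total_min - elapsed_min

# A backup that is 25% done after 10 minutes has roughly 30 minutes to go.
print(eta_minutes(25.0, 10.0))  # 30.0
```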

Backing up to an Azure storage container
In order to write a backup file to a Microsoft Azure storage container, two items are needed:

      • A SQL Server credential to connect to the Azure account
        • This requires the Account Name and a Storage Key.
      • The container URL

To find the Azure Account Name and Storage Key, navigate to the Azure Main Page. At the bottom of the page, there is a “Manage Access Keys” icon. Clicking on this icon brings up the Account Name and the Primary & Secondary Access Keys.

1. Copy the Account Name and one of the Access Keys. DO NOT REGENERATE THE KEYS.
2. Copy the URL.
3. Open SQL Server Management Studio and connect to the RLPRODMSSQL01 instance.
4. From a new query window, run the following command:

CREATE CREDENTIAL AzureBackups
WITH IDENTITY = 'Account Name',
SECRET = 'Storage Key'

5. Run this same command on all instances that will back up to this container.
6. Run the following command to back up a database to the Azure Storage container:

BACKUP DATABASE [YourDatabase]
TO URL = 'Container URL + Backup File Name'
WITH CREDENTIAL = 'AzureBackups'

Restoring from an Azure Storage Container
To restore from an Azure Storage Container, two items are needed:

1. A credential to connect to the Azure Account (See steps 1-4 of Backing up to an Azure Storage Container)
2. The backup file URL

To get the backup file URL, navigate to the container where the backup file is stored. The URL is to the left of the backup name.

1. Copy the URL.
2. Run the following command on the instance you want to restore the database onto:

RESTORE DATABASE [YourDatabase]
FROM URL = 'Backup File URL'
WITH CREDENTIAL = 'AzureBackups'

Deleting SQL Server backups from Azure storage

In SQL Server 2012, Azure storage is not fully integrated with Maintenance Plans and deleting old backups is a manual process. This causes issues, because there is no way to quickly delete a batch of backups, and if this is forgotten for a few days then the cost of storage begins to rise quickly.

I have written the code below to create an executable that will connect to the Azure storage container and delete any backups older than x days.

In addition, the code can check for any backups that have a “locked lease”, break the lease, and then delete them.

The parameters for the executable are:

  • Parameter 1 – MS Azure Account Name (string)
  • Parameter 2 – MS Azure Storage Key (string)
  • Parameter 3 – Azure Container Name (string)
  • Parameter 4 – Number of days backups to retain (positive integer)
  • Parameter 5 – File type to delete (.bak, .trn, etc.)
  • Parameter 6 – Delete backups with locked lease? (True/False)
    • Note that a True value for parameter 6 will cause the executable to ignore parameters 4 and 5.
    • This is meant to be run after a failed backup job.
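The retention logic driven by parameters 4 and 5 boils down to a date-and-suffix filter over the blob listing. Here is a minimal Python sketch of that filter (function name, blob names, and dates are invented sample data):

```python
from datetime import datetime, timedelta

# Sketch of the retention filter (parameters 4 and 5): select blobs at least
# `days` old whose name ends with `file_type`. Sample data is invented.
def blobs_to_delete(blobs, days, file_type, now):
    cutoff = now - timedelta(days=days)
    return [name for name, modified in blobs
            if modified <= cutoff and name.endswith(file_type)]

now = datetime(2014, 9, 2)
blobs = [
    ("prod_20140901.bak", datetime(2014, 9, 1)),   # recent, kept
    ("prod_20140820.bak", datetime(2014, 8, 20)),  # old .bak, deleted
    ("prod_20140820.trn", datetime(2014, 8, 20)),  # old but wrong type, kept
]
print(blobs_to_delete(blobs, 7, ".bak", now))  # ['prod_20140820.bak']
```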

In order to work, the executable will need the Windows Azure Storage Client Library.

Alternatively, you can download and run the executable using this ZIP file.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Blob;

namespace DeleteAzureBackups
{
    class Program
    {
        static void Main(string[] args)
        {
            if (args.Length != 6) { Console.WriteLine("Please run with correct number of parameters. Type ? for help."); return; }

            if (args[0] == "?" || args[0] == "help" || args[0] == "h")
            {
                Console.WriteLine("Pythian Azure Backup Delete Utility");
                Console.WriteLine("Parameter 1 : String : MS Azure Account Name");
                Console.WriteLine("Parameter 2 : String : MS Azure Account Key");
                Console.WriteLine("Parameter 3 : String : Container Name");
                Console.WriteLine("Parameter 4 : Positive Integer : Number of days to retain backups");
                Console.WriteLine("Parameter 5 : String : File type to delete (.bak, .trn, etc...)");
                Console.WriteLine("Parameter 6 : True/False : Delete backups with locked leases (will ignore Parameters 4 & 5)");
                return;
            }

            // Account name and key.
            string accountName = args[0].ToLower(); //Account Name
            string accountKey = args[1];             //Account Key
            string containerName = args[2];          //Container Name
            int numberOfDays = Int16.Parse(args[3]); //Number of days before deleting
            string fileType = args[4];
            bool deleteLockedBlobs = bool.Parse(args[5]);

            try
            {
                CloudBlobContainer container = openConnection(accountName, accountKey, containerName);

                if (!deleteLockedBlobs)
                { deleteAzureBackups(container, numberOfDays, fileType); }
                else
                { unlockLeasedBlobs(container); }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
            }
        }

        static CloudBlobContainer openConnection(string accountName, string accountKey, string containerName)
        {
            try
            {
                //Get a reference to the storage account, with authentication credentials
                StorageCredentials credentials = new StorageCredentials(accountName, accountKey);
                CloudStorageAccount storageAccount = new CloudStorageAccount(credentials, true);

                //Create a new client object.
                CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

                // Retrieve a reference to a container.
                CloudBlobContainer container = blobClient.GetContainerReference(containerName);

                return container;
            }
            catch (StorageException)
            {
                Console.WriteLine("Failed to create connection to MS Azure Storage.");
                return null;
            }
        }

        static void deleteAzureBackups(CloudBlobContainer container, int numberOfDays, string fileType)
        {
            DateTimeOffset now = DateTimeOffset.Now;

            foreach (IListBlobItem item in container.ListBlobs(null, false))
            {
                CloudPageBlob blob = (CloudPageBlob)item;

                //If the blob was last modified more than x days ago and matches
                //the requested file type, then it gets deleted.
                if ((now - blob.Properties.LastModified.Value).Days >= numberOfDays &&
                    blob.Name.Substring(blob.Name.Length - fileType.Length) == fileType)
                {
                    deleteBlob(blob);
                }
            }
        }

        static void unlockLeasedBlobs(CloudBlobContainer container)
        {
            foreach (IListBlobItem item in container.ListBlobs(null, false))
            {
                CloudPageBlob blob = (CloudPageBlob)item;

                if (blob.Properties.LeaseStatus == LeaseStatus.Locked)
                {
                    try
                    {
                        Console.WriteLine("Breaking lease on {0} blob.", blob.Name);
                        blob.BreakLease(new TimeSpan(), null, null, null);
                        Console.WriteLine("Successfully broke lease on {0} blob.", blob.Name);
                        deleteBlob(blob);
                    }
                    catch (StorageException)
                    {
                        Console.WriteLine("Failed to break lease on {0} blob.", blob.Name);
                    }
                }
            }
        }

        static void deleteBlob(CloudPageBlob blob)
        {
            try
            {
                Console.WriteLine("Attempting to delete {0}", blob.Name);
                blob.Delete();
                Console.WriteLine("Successfully deleted {0}", blob.Name);
            }
            catch (StorageException)
            {
                Console.WriteLine("Failed to delete {0}.", blob.Name);
            }
        }
    }
}

Categories: DBA Blogs

Log Buffer #386, A Carnival of the Vanities for DBAs

Fri, 2014-08-29 08:16

If you ever wanted an easy-peazy way to get a few of the best blog posts of the week from Oracle, SQL Server, and MySQL, then the Log Buffer Editions are the place to be.


Oracle:

The Product Management team has released a knowledge article for Enterprise Performance Management (EPM) containing details for EPM support with Internet Explorer (IE) 11.

As if anyone needs to be reminded, there’s a ridiculous amount of hype surrounding clouds and big data. There’s always oodles of hype around any new technology that is not well understood.

By mapping an external table to some text file, you can view the file contents as if it were data in a database table.

Vikram has discovered a utility adopreports utility in R12.2.

As a lot of the new APEX 5 features are “by developers for developers”, this one is also a nifty little thing that make our lives easier.

SQL Server:

Data Mining: Part 15 Processing Data Mining components with SSIS.

SQL Server AlwaysOn Availability Groups Fail the Initial Failover Test.

Stairway to PowerPivot and DAX – Level 6: The DAX SUM() and SUMX() Functions.

Questions about T-SQL Expressions You Were Too Shy to Ask

SQL Server Service Engine fails to start after applying CU4 for SQL Server 2008 SP1.


MySQL:

MySQL for Visual Studio 1.2.x recently became a GA version. One of the main features included in this version was the new MySQL ASP.NET MVC Wizard.

Resources for Database Clusters: Performance Tuning for HAProxy, Support for MariaDB 10, Technical Blogs & More.

Trawling the binlog with FlexCDC and new FlexCDC plugins for MySQL

InnoDB provides a custom mutex and rw-lock implementation.

You probably already know that Sphinx supports MySQL binary network protocol. But, if you haven’t heard– Sphinx can be accessed with the regular ol’ MySQL API.

Categories: DBA Blogs

Tungsten Replicator: MariaDB Master-Master and Master-Slave Topologies

Thu, 2014-08-28 12:45

A common concern in the MySQL community is how best to implement high availability for MySQL. There are various built-in mechanisms to accomplish this, such as Master/Master and Master/Slave replication using binary logs, as well as FOSS solutions such as Galera and Tungsten, just to name a few. Oftentimes, IT managers and DBAs alike opt to avoid implementing a third-party solution, due to the added administrative overhead, without fully evaluating the available solutions. In today’s blog post, I would like to describe the process of configuring a Master/Slave topology and switching to a Master/Master topology with Tungsten Replicator.

Tungsten Replicator is a well-known tool that has gained much acclaim in the area of MySQL enterprise database implementations; however, many teams tend to stay away from it to avoid over-complicating the replication topology. I have listed and described all of the steps required to configure a replication topology for 1 to N nodes (today’s how-to guide covers a 2-node implementation, but I will describe the additional steps that would be required to implement these topologies for N nodes).

The 2 nodes I will be using are vm128-142 and vm129-117. The first part of the document contains the steps that need to be performed on both nodes, and the latter describes the steps to be performed on either one of the two. As soon as Tungsten Replicator has been installed on both nodes with the same configuration files, the switch is as simple as “one, two, three”: all it requires is running the script that configures the topology of your choice. The main topologies that are available are:

  • Master – Slave: Replication flowing from 1 .. N nodes using Tungsten Replicator
  • Master – Master: Bi-directional replication for 1 .. N nodes
  • Star Topology: A central node acts as a hub and all spokes are Master nodes
  • Fan-in Topology: A single slave node with replication from 1 .. N Master nodes

(Check out the Tungsten Replicator documentation for further details.)

So, let’s continue with the actual steps required (please note I’m using the “root” account with SSH passwordless authentication for the purposes of this article; it is best to define a dedicated user on production systems). The parameters and values that require customization for your system / topology are noted in the comments, and configuration file contents are indented in the text:

### The following commands should be executed on all nodes (vm128-142 & vm129-117 in this how-to)

su - root
cd /root # or alternatively to a place like /opt/ or /usr/local/
vi /etc/yum.repos.d/MariaDB.repo

 # MariaDB 5.5 CentOS repository list - created 2014-08-25 16:59 UTC
 name = MariaDB
 baseurl =

vi /etc/security/limits.conf

 # add the following line
 * - nofile 65535
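After re-logging in, the nofile change above can be sanity-checked from the shell. A minimal sketch (the limit only takes effect for new login sessions, so re-connect before checking):

```shell
# Print the current soft limit on open files for this shell session.
# After the limits.conf change above (and a fresh login) this should
# report 65535.
soft_limit=$(ulimit -Sn)
echo "soft nofile limit: $soft_limit"
```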

yum update

yum install wget MariaDB-server MariaDB-client ruby openssh-server rsync 
yum install java-1.7.0-openjdk- 
yum install
ln -s /usr/bin/innobackupex /usr/bin/innobackupex-1.5.1

tar -xzvf download.php\?file\=tungsten-replicator-2.2.1-403.tar.gz
rm download.php\?file\=tungsten-replicator-2.2.1-403.tar.gz
cd tungsten-replicator-2.2.1-403/

vi cookbook/

 # (C) Copyright 2012,2013 Continuent, Inc - Released under the New BSD License
 # Version 1.0.5 - 2013-04-03

 #export NODE3=host3
 #export NODE4=host4
 #export NODE5=host5
 #export NODE6=host6
 #export NODE7=host7
 #export NODE8=host8

vi cookbook/

 # (C) Copyright 2012,2013 Continuent, Inc - Released under the New BSD License
 # Version 1.0.5 - 2013-04-03

 # User defined values for the cluster to be installed.

 cookbook_dir=$(dirname $0 )

 # Where to install Tungsten Replicator
 export TUNGSTEN_BASE=/opt/tungsten-replicator/installs/cookbook

 # Directory containing the database binary logs
 export BINLOG_DIRECTORY=/var/lib/mysql

 # Path to the script that can start, stop, and restart a MySQL server
 export MYSQL_BOOT_SCRIPT=/etc/init.d/mysql

 # Path to the options file
 export MY_CNF=/etc/my.cnf

 # Database credentials
 export DATABASE_USER=tungsten
 export DATABASE_PASSWORD=tungsten
 export DATABASE_PORT=3306

 # Name of the service to install
 export TUNGSTEN_SERVICE=cookbook

 # Replicator ports
 export RMI_PORT=10000
 export THL_PORT=2112

 # If set, replicator starts after installation
 [ -z "$START_OPTION" ] && export START_OPTION=start

 # Options used by the "direct slave " installer only
 # Modify only if you are using ''

 # Variables used when removing the cluster
 # Each variable defines an action during the cleanup

 # Local values defined by the user.
 # If ./cookbook/ exists,
 # it is loaded at this point

 if [ -f $cookbook_dir/ ]
 then
 . $cookbook_dir/
 fi

service iptables stop 

 # or open ports listed below:
 # 3306 (MySQL database)
 # 2112 (Tungsten THL)
 # 10000 (Tungsten RMI)
 # 10001 (JMX management)
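Rather than stopping iptables entirely, the four ports listed above could be opened individually. A hypothetical sketch that just prints the rules for review (the rule form is an assumption; adapt the chain name and any source restrictions to your environment):

```shell
# Emit one ACCEPT rule per MySQL/Tungsten port instead of disabling
# the firewall wholesale. Review the output, then pipe it to sh to apply.
for port in 3306 2112 10000 10001; do
  echo "iptables -A INPUT -p tcp --dport $port -j ACCEPT"
done
```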

vi /etc/my.cnf.d/server.cnf

 # These groups are read by MariaDB server.
 # Use it for options that only the server (but not clients) should see
 # See the examples of server my.cnf files in /usr/share/mysql/

 # this is read by the standalone daemon and embedded servers

 # this is only for the mysqld standalone daemon
 server-id=1 # make server-id unique per server

 # this is only for embedded server

 # This group is only read by MariaDB-5.5 servers.
 # If you use the same .cnf file for MariaDB of different versions,
 # use this group for options that older servers don't understand

 # These two groups are only read by MariaDB servers, not by MySQL.
 # If you use the same .cnf file for MySQL and MariaDB,
 # you can put MariaDB-only options here
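Since server-id must be unique per server (as the comment in the file above notes), one approach is to derive it from the node’s IP address rather than editing each server.cnf by hand. A sketch, assuming the last octet of the address is unique across the cluster (the address below is a stand-in for this node’s real IP):

```shell
# Derive a unique server-id from the last octet of this node's IP.
# "192.168.128.142" is a placeholder; in practice you might substitute
# the node's actual address (e.g. from `hostname -I`).
ip="192.168.128.142"
server_id=${ip##*.}
echo "server-id=$server_id"
# prints: server-id=142
```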


service mysql start
mysql -uroot -p -e"CREATE USER 'tungsten'@'%' IDENTIFIED BY 'tungsten';"
mysql -uroot -p -e"GRANT ALL PRIVILEGES ON *.* TO 'tungsten'@'%' WITH GRANT OPTION;"
mysql -uroot -p -e"FLUSH PRIVILEGES;"
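The three mysql invocations above each prompt for the root password; they can be collapsed into a single call via a here-document. A minimal sketch that just prints the SQL (pipe the output into `mysql -uroot -p` to apply it, so the password is asked for only once):

```shell
# Print the replication-user bootstrap SQL as one unit; feed it to
# `mysql -uroot -p` to run all three statements in a single session.
cat <<'SQL'
CREATE USER 'tungsten'@'%' IDENTIFIED BY 'tungsten';
GRANT ALL PRIVILEGES ON *.* TO 'tungsten'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
SQL
```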

ssh-keygen -t rsa
cat ~/.ssh/ >> ~/.ssh/authorized_keys
cat ~/.ssh/ | ssh vm129-117 'cat >> ~/.ssh/authorized_keys' # from vm128-142
cat ~/.ssh/ | ssh vm128-142 'cat >> ~/.ssh/authorized_keys' # from vm129-117
chmod 600 authorized_keys
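With more than two nodes, the per-host key-copy commands above multiply quickly. A hypothetical sketch that just prints the commands for a node list so they can be reviewed and run (the `id_rsa.pub` file name is an assumption based on the default `ssh-keygen -t rsa` output):

```shell
# Generate the key-exchange commands for every node in the cluster
# rather than typing them per host; review the output, then run it.
nodes="vm128-142 vm129-117"
for node in $nodes; do
  echo "cat ~/.ssh/id_rsa.pub | ssh $node 'cat >> ~/.ssh/authorized_keys'"
done
```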

cookbook/validate_cluster # this is the command used to validate the configuration

vi cookbook/

 # (C) Copyright 2012,2013 Continuent, Inc - Released under the New BSD License
 # Version 1.0.5 - 2013-04-03

 CURDIR=`dirname $0`
 if [ -f $CURDIR/ ]
 export NODE1=
 export NODE2=
 export NODE3=
 export NODE4=
 export NODE5=
 export NODE6=
 export NODE7=
 export NODE8=

 # indicate which servers will be masters, and which ones will have a slave service
 # in case of all-masters topologies, these two arrays will be the same as $ALL_NODES
 # These values are used for automated testing

 #for master/slave replication
 export MASTERS=($NODE1)

## The following commands should be performed on just one of the nodes
## In my case either vm128-142 OR vm129-117

cookbook/install_master_slave # to install master / slave topology
cookbook/show_cluster # here we see master - slave replication running

 Topology: 'MASTER_SLAVE'
 # node
 cookbook [master] seqno: 1 - latency: 0.514 - ONLINE
 # node
 cookbook [slave] seqno: 1 - latency: 9.322 - ONLINE

cookbook/clear_cluster # run this to destroy the current Tungsten cluster 

cookbook/install_all_masters # to install master - master topology 
cookbook/show_cluster # and here we've switched over to master - master replication

 Topology: 'ALL_MASTERS'
 # node
 alpha [master] seqno: 5 - latency: 0.162 - ONLINE
 bravo [slave] seqno: 5 - latency: 0.000 - ONLINE
 # node
 alpha [slave] seqno: 5 - latency: 9.454 - ONLINE
 bravo [master] seqno: 5 - latency: 0.905 - ONLINE

Categories: DBA Blogs

12c: How to Restore/Recover a Small Table in a Large Database

Thu, 2014-08-28 09:35

As a DBA, you will receive requests from developers or users indicating that they deleted some data from a small table in a large database a few hours prior. They will probably want you to recover the data as soon as possible, and it will likely be a critical production database. Flashback will not be enabled, and the recycle bin will have been purged. Restoring a full database using RMAN might take you over 10 hours, and you will need a spare server with big storage. It looks like it’s going to be a difficult and time-consuming task for you.

In Oracle Database 12c, there is a method available which allows us to recover the table more efficiently, and at a lower cost. The method is to create a second database (often called a stub database) using the backup of the first database. In this situation, we restore the SYSTEM, SYSAUX, and UNDO tablespaces and the individual tablespaces that contain the data that we want to restore. After the restore is complete, we alter any tablespaces that we did not restore offline. We then apply the archived redo logs to the point in time that we want to restore the table to. Having restored the database to the appropriate point in time, we then use Oracle Data Pump to export the objects, and then import them into the original database, again using Oracle Data Pump. Oracle Database 12c introduces new functionality in RMAN that supports point-in-time restore of individual database tables and individual table partitions.

Here is an example of when I tested this new feature:

1. The database TEST has 9 tablespaces and a schema called Howie. I created a table called TEST1 with 19377 records in the tablespace DATA_HOWIE.

SQL> select * from v$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME        STATUS       DATABASE_STATUS
--------------- ---------------- ---------------- ------------ ---------------
              1 TEST             12cServer1       OPEN         ACTIVE

SQL> select tablespace_name from dba_tablespaces order by tablespace_name;


9 rows selected.

SQL> conn howie
Enter password:
SQL> create table test1 as select * from dba_objects;

Table created.

SQL> select count(*) from test1;
  COUNT(*)
----------
     19377

SQL> select table_name,tablespace_name from user_tables where table_name='TEST1';

TABLE_NAME                                                                                                                       TABLESPACE_NAME
-------------------------------------------------------------------------------------------------------------------------------- ------------------------------
TEST1                                                                                                                            DATA_HOWIE

2. The database is in archivelog mode, and I took a full backup of the database.

[oracle@12cServer1 RMAN]$ rman target /

Recovery Manager: Release - Production on Sun Aug 17 20:16:17 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TEST (DBID=2146502230)

RMAN> run
{
allocate channel d1 type disk format '/u01/app/oracle/RMAN/rmn_%d_t%t_p%p';
backup
incremental level 0
tag backup_level0
filesperset 1
database
plus archivelog;
release channel d1;
}

3. The data in the table howie.test1 has been deleted.

SQL> select sysdate,current_scn from v$database;

SYSDATE             CURRENT_SCN
------------------- -----------
08/17/2014 21:01:15      435599

SQL> delete test1;

19377 rows deleted.

SQL> commit;

Commit complete.

4. I ran the following script to recover the data to an alternative table, howie.test1_temp, as of the point in time “08/17/2014 21:01:15”:

[oracle@12cServer1 RMAN]$ rman target /

Recovery Manager: Release - Production on Sun Aug 17 21:01:35 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TEST (DBID=2146502230)

RMAN> recover table howie.test1
until time "to_date('08/17/2014 21:01:15','mm/dd/yyyy hh24:mi:ss')"
auxiliary destination '/u01/app/oracle/aux'
remap table howie.test1:test1_temp;

5. The script above takes care of everything, and you will see that the data has been restored to howie.test1_temp:

SQL> select count(*) from TEST1_TEMP;
  COUNT(*)
----------
     19377

SQL> select count(*) from TEST1;
  COUNT(*)
----------
         0

Let’s take a look at the RMAN recovery log and find out how it works.

1. Creation of the auxiliary instance

Creating automatic instance, with SID='ktDA'

initialization parameters used for automatic instance:
#No auxiliary parameter file used

2. Restore of the control file for the auxiliary instance

contents of Memory Script:
# set requested point in time
set until  time "to_date('08/17/2014 21:01:15','mm/dd/yyyy hh24:mi:ss')";
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log
sql 'alter system archive log current';

3. A list of datafiles that will be restored, followed by their restore and recovery in the auxiliary instance

contents of Memory Script:
# set requested point in time
set until  time "to_date('08/17/2014 21:01:15','mm/dd/yyyy hh24:mi:ss')";
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  3 online";
sql clone "alter database datafile  2 online";
# recover and open database read only
recover clone database tablespace  "SYSTEM", "UNDOTBS1", "SYSAUX";
sql clone 'alter database open read only';

contents of Memory Script:
# set requested point in time
set until  time "to_date('08/17/2014 21:01:15','mm/dd/yyyy hh24:mi:ss')";
# online the datafiles restored or switched
sql clone "alter database datafile  8 online";
# recover and open resetlogs
recover clone database tablespace  "DATA_HOWIE", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;

4. Export of tables from the auxiliary instance via Oracle Data Pump

Performing export of tables...
EXPDP> Starting "SYS"."TSPITR_EXP_ktDA_BAkw":
EXPDP> Estimate in progress using BLOCKS method...
EXPDP> Total estimation using BLOCKS method: 3 MB
EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
EXPDP> . . exported "HOWIE"."TEST1"                             1.922 MB   19377 rows
EXPDP> Master table "SYS"."TSPITR_EXP_ktDA_BAkw" successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TSPITR_EXP_ktDA_BAkw is:
EXPDP>   /u01/app/oracle/aux/tspitr_ktDA_70244.dmp
EXPDP> Job "SYS"."TSPITR_EXP_ktDA_BAkw" successfully completed at Sun Aug 17 21:03:53 2014 elapsed 0 00:00:14
Export completed

5. Import of tables, constraints, indexes, and other dependent objects into the target database from the Data Pump export file

contents of Memory Script:
# shutdown clone before import
shutdown clone abort
executing Memory Script

Oracle instance shut down

Performing import of tables...
IMPDP> Master table "SYS"."TSPITR_IMP_ktDA_lube" successfully loaded/unloaded
IMPDP> Starting "SYS"."TSPITR_IMP_ktDA_lube":
IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
IMPDP> . . imported "HOWIE"."TEST1_TEMP"                        1.922 MB   19377 rows
IMPDP> Job "SYS"."TSPITR_IMP_ktDA_lube" successfully completed at Sun Aug 17 21:04:19 2014 elapsed 0 00:00:19
Import completed

6. Clean-up of the auxiliary instance

Removing automatic instance
Automatic instance removed
auxiliary instance file /u01/app/oracle/aux/TEST/datafile/o1_mf_temp_9z2yqst6_.tmp deleted
auxiliary instance file /u01/app/oracle/aux/KTDA_PITR_TEST/onlinelog/o1_mf_3_9z2yrkqm_.log deleted
auxiliary instance file /u01/app/oracle/aux/KTDA_PITR_TEST/onlinelog/o1_mf_2_9z2yrj35_.log deleted
auxiliary instance file /u01/app/oracle/aux/KTDA_PITR_TEST/onlinelog/o1_mf_1_9z2yrh2r_.log deleted
auxiliary instance file /u01/app/oracle/aux/KTDA_PITR_TEST/datafile/o1_mf_data_how_9z2yrcnq_.dbf deleted
auxiliary instance file /u01/app/oracle/aux/TEST/datafile/o1_mf_sysaux_9z2yptms_.dbf deleted
auxiliary instance file /u01/app/oracle/aux/TEST/datafile/o1_mf_undotbs1_9z2yq9of_.dbf deleted
auxiliary instance file /u01/app/oracle/aux/TEST/datafile/o1_mf_system_9z2yp0mk_.dbf deleted
auxiliary instance file /u01/app/oracle/aux/TEST/controlfile/o1_mf_9z2yos1l_.ctl deleted
auxiliary instance file tspitr_ktDA_70244.dmp deleted
Finished recover at 17-AUG-14
Categories: DBA Blogs

Building a MariaDB Galera Cluster with Docker

Thu, 2014-08-28 08:13

There’s been a lot of talk lately about Docker for running processes in isolated userspace (or in the cloud, for that matter). Virtualization is a great way to compartmentalise applications and processes; however, the overhead of virtualization isn’t always worth it. In fact, without directly attached storage, IO degradation can seriously impact performance. The solution? Perhaps Docker, with its easy-to-use CLI and its lightweight implementation of cgroups and kernel namespaces.

Without further ado, I present a step-by-step guide on how to build a MariaDB 5.5 Galera Cluster on Ubuntu 14.04. The same guide can probably be applied to MariaDB versions 10+; however, I’ve stuck with 5.5, since the latest version of MariaDB Galera Cluster is still in beta.

So we start off by modifying the “ufw” firewall policy to accept forwarded packets, and then perform a “ufw” service restart for good measure:

root@workstation:~# vi /etc/default/ufw

 # change DEFAULT_FORWARD_POLICY="DROP" to:
 DEFAULT_FORWARD_POLICY="ACCEPT"

root@workstation:~# service ufw restart
ufw stop/waiting
ufw start/running

I’m assuming you already have Docker installed – it is available as a package within the Ubuntu repositories and also in the Docker repositories (see the Docker installation documentation). You’ll also need to have LXC installed (“apt-get install lxc” should suffice) in order to attach to the Linux Containers / Docker images.

The next step is pulling the Docker / Ubuntu repository in order to customize an image for our purposes:

root@workstation:~# docker pull ubuntu
Pulling repository ubuntu
c4ff7513909d: Pulling dependent layers 
3db9c44f4520: Download complete 
c5881f11ded9: Download complete 
c4ff7513909d: Download complete 
463ff6be4238: Download complete 
822a01ae9a15: Download complete 
75204fdb260b: Download complete 
511136ea3c5a: Download complete 
bac448df371d: Download complete 
dfaad36d8984: Download complete 
5796a7edb16b: Download complete 
1c9383292a8f: Download complete 
6cfa4d1f33fb: Download complete 
f127542f0b61: Download complete 
af82eb377801: Download complete 
93c381d2c255: Download complete 
3af9d794ad07: Download complete 
a5208e800234: Download complete 
9fccf650672f: Download complete 
fae16849ebe2: Download complete 
b7c6da90134e: Download complete 
1186c90e2e28: Download complete 
0f4aac48388f: Download complete 
47dd6d11a49f: Download complete 
f6a1afb93adb: Download complete 
209ea56fda6d: Download complete 
f33dbb8bc20e: Download complete 
92ac38e49c3e: Download complete 
9942dd43ff21: Download complete 
aa822e26d727: Download complete 
d92c3c92fa73: Download complete 
31db3b10873e: Download complete 
0ea0d582fd90: Download complete 
cc58e55aa5a5: Download complete

After the download is complete, we can check the Ubuntu images available for customization with the following command:

root@workstation:~# docker images
 REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
 ubuntu              14.04.1             c4ff7513909d        12 days ago         225.4 MB
 ubuntu              trusty              c4ff7513909d        12 days ago         225.4 MB
 ubuntu              14.04               c4ff7513909d        12 days ago         225.4 MB
 ubuntu              latest              c4ff7513909d        12 days ago         225.4 MB
 ubuntu              utopic              75204fdb260b        12 days ago         230.1 MB
 ubuntu              14.10               75204fdb260b        12 days ago         230.1 MB
 ubuntu              precise             822a01ae9a15        12 days ago         108.1 MB
 ubuntu              12.04               822a01ae9a15        12 days ago         108.1 MB
 ubuntu              12.04.5             822a01ae9a15        12 days ago         108.1 MB
 ubuntu              12.10               c5881f11ded9        9 weeks ago         172.2 MB
 ubuntu              quantal             c5881f11ded9        9 weeks ago         172.2 MB
 ubuntu              13.04               463ff6be4238        9 weeks ago         169.4 MB
 ubuntu              raring              463ff6be4238        9 weeks ago         169.4 MB
 ubuntu              13.10               195eb90b5349        9 weeks ago         184.7 MB
 ubuntu              saucy               195eb90b5349        9 weeks ago         184.7 MB
 ubuntu              lucid               3db9c44f4520        4 months ago        183 MB
 ubuntu              10.04               3db9c44f4520        4 months ago        183 MB

Now that we’ve downloaded our images, let’s create a custom Dockerfile for our customized MariaDB / Galera Docker image. I’ve added a brief description for each line of the file:

root@workstation:~# vi Dockerfile
 # # MariaDB Galera 5.5.39/Ubuntu 14.04 64bit
 FROM ubuntu:14.04
 MAINTAINER Pythian Nikolaos Vyzas <>

 RUN echo "deb trusty main universe" > /etc/apt/sources.list # add the universe repo
 RUN apt-get -q -y update # update apt
 RUN apt-get -q -y install software-properties-common # install software-properties-common for key management
 RUN apt-key adv --recv-keys --keyserver hkp:// 0xcbcb082a1bb943db # add the key for Mariadb Ubuntu repos
 RUN add-apt-repository 'deb trusty main' # add the MariaDB repository for 5.5
 RUN apt-get -q -y update # update apt again
 RUN echo mariadb-galera-server-5.5 mysql-server/root_password password root | debconf-set-selections # configure the default root password during installation
 RUN echo mariadb-galera-server-5.5 mysql-server/root_password_again password root | debconf-set-selections # confirm the password (as in the usual installation)
 RUN LC_ALL=en_US.utf8 DEBIAN_FRONTEND=noninteractive apt-get -o Dpkg::Options::='--force-confnew' -qqy install mariadb-galera-server galera mariadb-client # install the necessary packages
 ADD ./my.cnf /etc/mysql/my.cnf # upload the locally created my.cnf (obviously this can go into the default MariaDB path
 RUN service mysql restart # startup the service - this will fail since the nodes haven't been configured on first boot
 EXPOSE 3306 4444 4567 4568 # open the ports required to connect to MySQL and for Galera SST / IST operations

We’ll also need our base configuration for MariaDB. I’ve included the basic configuration variables for Galera – obviously there are more, but these are good enough to start up the service:

root@workstation:~# vi my.cnf

So far so good: we have Docker installed, and our Dockerfile as well as our “my.cnf” file are ready to go. Now it’s time to build our Docker image, check that the image exists, and start up three separate containers, one for each of our Galera nodes:

root@workstation:~# docker build -t ubuntu_trusty/mariadb-galera .
root@workstation:~# docker images |grep mariadb-galera
 ubuntu_trusty/mariadb-galera   latest              afff3aaa9dfb        About a minute ago   412.5 MB
docker run --name mariadb1 -i -t -d ubuntu_trusty/mariadb-galera /bin/bash
docker run --name mariadb2 -i -t -d ubuntu_trusty/mariadb-galera /bin/bash
docker run --name mariadb3 -i -t -d ubuntu_trusty/mariadb-galera /bin/bash
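Since the three `docker run` invocations differ only by container name, they can also be generated in a loop. A sketch that just prints the commands for review (pipe the output to sh to actually launch the containers):

```shell
# Print one `docker run` command per Galera node rather than
# typing each invocation by hand.
for i in 1 2 3; do
  echo "docker run --name mariadb$i -i -t -d ubuntu_trusty/mariadb-galera /bin/bash"
done
```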

We’ve started up our Docker containers; now let’s verify that they are in fact up and retrieve the information we need to connect. We’ll need two pieces of information, the IP address and the full container ID, which can be retrieved using a combination of the “docker ps” and “docker inspect” commands:

root@workstation:~# docker ps
 CONTAINER ID        IMAGE                                 COMMAND             CREATED             STATUS              PORTS                                    NAMES
 b51e74933ece        ubuntu_trusty/mariadb-galera:latest   /bin/bash           About an hour ago   Up About an hour    3306/tcp, 4444/tcp, 4567/tcp, 4568/tcp   mariadb3
 03109c7018c0        ubuntu_trusty/mariadb-galera:latest   /bin/bash           About an hour ago   Up About an hour    3306/tcp, 4444/tcp, 4567/tcp, 4568/tcp   mariadb2
 1db2a9a520f8        ubuntu_trusty/mariadb-galera:latest   /bin/bash           About an hour ago   Up About an hour    3306/tcp, 4444/tcp, 4567/tcp, 4568/tcp   mariadb1
root@workstation:~# docker ps |cut -d' ' -f1 |grep -v CONTAINER | xargs docker inspect |egrep '"ID"|IPAddress'
 "ID": "b51e74933ece2f3f457ec87c3a4e7b649149e9cff2a4705bef2a070f7adbafb0",
 "IPAddress": "",
 "ID": "03109c7018c03ddd8448746437346f080a976a74c3fc3d15f0191799ba5aae74",
 "IPAddress": "",
 "ID": "1db2a9a520f85d2cef6e5b387fa7912890ab69fc0918796c1fae9c1dd050078f",
 "IPAddress": "",
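The egrep pipeline above can be exercised without a running Docker daemon by feeding it saved `docker inspect` output. A hypothetical sketch (the JSON fragment and the 172.17.0.4 address are made up for illustration):

```shell
# Pull the bare IP out of docker-inspect-style JSON lines with sed,
# using a canned sample instead of a live daemon.
sample='"ID": "b51e74933ece2f3f457ec87c3a4e7b649149e9cff2a4705bef2a070f7adbafb0",
"IPAddress": "172.17.0.4",'
echo "$sample" | grep '"IPAddress"' \
  | sed 's/.*"IPAddress": "\([^"]*\)".*/\1/'
# prints: 172.17.0.4
```

If your Docker version supports it, `docker inspect` with the `--format` flag can extract the same field directly, avoiding the text munging.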

Time to use lxc-attach to connect to our Docker containers using the full container ID, add the mounts to “/etc/mtab” to keep them MariaDB-friendly, and customize the “gcomm://” address as we would for a usual Galera configuration (the container ID is generated when the instance fires up, so make sure to use your own IDs in the following commands):

root@workstation:~# lxc-attach --name b51e74933ece2f3f457ec87c3a4e7b649149e9cff2a4705bef2a070f7adbafb0
 root@b51e74933ece:~# cat /proc/mounts > /etc/mtab
 root@b51e74933ece:~# service mysql restart
 * Starting MariaDB database mysqld                            [ OK ]
 * Checking for corrupt, not cleanly closed and upgrade needing tables.

root@b51e74933ece:~# vi /etc/mysql/my.cnf

root@b51e74933ece:~# exit

root@workstation:~# lxc-attach --name 03109c7018c03ddd8448746437346f080a976a74c3fc3d15f0191799ba5aae74
 root@03109c7018c0:~# cat /proc/mounts > /etc/mtab
 root@03109c7018c0:~# vi /etc/mysql/my.cnf
 root@03109c7018c0:~# service mysql start
 * Starting MariaDB database server mysqld                            [ OK ]
 * Checking for corrupt, not cleanly closed and upgrade needing tables.
 root@03109c7018c0:~# mysql -uroot -proot
 Welcome to the MariaDB monitor.  Commands end with ; or \g.
 Your MariaDB connection id is 30
 Server version: 5.5.39-MariaDB-1~trusty-wsrep binary distribution, wsrep_25.10.r4014

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show status like 'wsrep_cluster%';
 +--------------------------+--------------------------------------+
 | Variable_name            | Value                                |
 +--------------------------+--------------------------------------+
 | wsrep_cluster_conf_id    | 2                                    |
 | wsrep_cluster_size       | 2                                    |
 | wsrep_cluster_state_uuid | 42bc375b-2bc0-11e4-851c-1a7627c0624c |
 | wsrep_cluster_status     | Primary                              |
 +--------------------------+--------------------------------------+
 4 rows in set (0.00 sec)

MariaDB [(none)]> exit
 root@03109c7018c0:~# exit

root@workstation:~# lxc-attach --name 1db2a9a520f85d2cef6e5b387fa7912890ab69fc0918796c1fae9c1dd050078f
 root@1db2a9a520f8:~# cat /proc/mounts > /etc/mtab
 root@1db2a9a520f8:~# vi /etc/mysql/my.cnf
 root@1db2a9a520f8:~# service mysql start
 * Starting MariaDB database server mysqld                                                                                                                                                     [ OK ]
 root@1db2a9a520f8:~# mysql -uroot -proot
 Welcome to the MariaDB monitor.  Commands end with ; or \g.
 Your MariaDB connection id is 34
 Server version: 5.5.39-MariaDB-1~trusty-wsrep binary distribution, wsrep_25.10.r4014

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show status like 'wsrep_cluster%';
 +--------------------------+--------------------------------------+
 | Variable_name            | Value                                |
 +--------------------------+--------------------------------------+
 | wsrep_cluster_conf_id    | 3                                    |
 | wsrep_cluster_size       | 3                                    |
 | wsrep_cluster_state_uuid | 42bc375b-2bc0-11e4-851c-1a7627c0624c |
 | wsrep_cluster_status     | Primary                              |
 +--------------------------+--------------------------------------+
 4 rows in set (0.00 sec)

MariaDB [(none)]> exit
 root@1db2a9a520f8:~# exit

Now be honest… Wasn’t that easier than creating multiple virtual machines and configuring the OS for each?

Enjoy your new MariaDB Galera Cluster and happy Dockering!

Categories: DBA Blogs

How to Configure an Azure Point-to-Site VPN – Part 3

Thu, 2014-08-28 07:58

This blog post is the last of the series, and it will demonstrate how to configure a Point-to-Site VPN step by step. In my first blog post, I demonstrated how to configure a virtual network and a dynamic routing gateway. That was followed by another post about how to deal with the certificate. Today we will learn how to configure the VPN client.

1. In the Management Portal, navigate to the virtual network page; in the “quick glance” section you have the links to download the VPN package.

Choose the one appropriate to your architecture (x86 or x64).


2. After the download completes successfully, copy the file to your servers and execute the setup.

3. Click Yes when it asks if you want to install the VPN, and let it run.

4. After successful installation, it will be visible in your network connections.

5. In Windows 2012 you can click the network icon in the notification area (close to the clock), and it will show the right-side bar with all the network connections. You can connect from there.
The other option is to right-click the connection in the “Network Connections” window (previous step) and click “Connect / Disconnect”.

6. A window will be shown, click Connect.


7. Now check the box next to “Do not show this message again for this Connection” and click “Continue”.

If everything is ok, the connection will succeed.


8. To confirm that you are connected, execute the command “ipconfig /all” in the command line; you should see an entry for the VPN with an IP assigned.


9. After a while, you will also be able to see the connection in your vNet dashboard, which shows the data in/out for the vNet.


After this last part, you are done with the point-to-site VPN configuration. You can test the connectivity by executing the “ping” command and also using the “telnet” client to test if some specific port is opened and reachable.

The point-to-site VPN is recommended if you want to connect users/devices to your Azure infrastructure, for a few different reasons. If you need to connect all or part of your on-premises infrastructure, the way to go is to configure a Site-to-Site VPN. Stay tuned for a blog post on how that works.

Thank you for reading!

Categories: DBA Blogs

SQL Server Replication Quick Tips

Tue, 2014-08-26 07:56

There comes a time in every SQL Server DBA’s career when an email arrives with a “replication is not working, please check” message. This article is intended to provide quick tips on how to handle common replication errors and performance problems in a one-way transactional replication topology.

Oh boy, there is a data problem:


You check Replication Monitor and see an error like:

“Transaction sequence number: 0x0003BB0E000001DF000600000000, Command ID: 1”

The row was not found at the Subscriber when applying the replicated command. (Source: MSSQLServer, Error number: 20598)

Note that the transaction sequence number will be used in the following scripts. The Command ID is also important to note, as the issue is not necessarily with the whole sequence number; it might be tied to just one command.

Go to the distribution database and run the following command to get the list of articles involved in this issue:

select * from dbo.MSarticles
where article_id in (
select article_id from MSrepl_commands
where xact_seqno = 0x0003BB0E000001DF000600000000)

To get the whole list of commands, you can run the query below:

exec sp_browsereplcmds
@xact_seqno_start = '0x0003BB0E000001DF000600000000',
@xact_seqno_end = '0x0003BB0E000001DF000600000000'

With this last query you can get to the exact command that is failing (by searching for the Command ID in the command_id column).

You will notice that a transactional replication will typically (depending on setup) use insert, delete, and update stored procedures to replicate the data, so the command you see here will look something like:

{CALL [sp_MSdel_dboMyArticle] (118)}

That is the stored procedure generated to process delete statements over the dbo.MyArticle table; in this case it is trying to delete ID 118. Based on the error reported, you will now realize that replication is trying to delete the row with ID 118 from MyArticle at the subscriber, and the row is not there; it is trying to delete a non-existent record.


  1. You can check the publisher for this record and manually insert it at the subscriber; this will cause the replication command to succeed and will fix the issue.
  2. You can skip the command. For this specific example the command can safely be skipped, as there is no need to delete something that has already been deleted, by removing it from the MSrepl_commands table. (Beware: only do this when you know what you are doing; manually removing transactions can result in an unstable replication.) In this example you would use something like:
    Delete from MSrepl_commands
    where xact_seqno = 0x0003BB0E000001DF000600000000 and command_id = 1
  3. Reinitialize. This option is the last resort; you should try to fix the issue before doing this. However, if after skipping the command you still get new errors everywhere, something definitely went wrong and there is no easy way to guarantee that your subscription is up to date and stable. This can be an indicator that someone or something modified the data at the subscription, causing issues with the replication. Remember, a one-way transactional replication is most likely intended to provide a copy of the data so it can be queried; no modification should be made to that data, as it won’t replicate back to the publisher.

Query timeouts:

After checking the replication monitor, you get a message like:

Query timeout expired
The process is running and is waiting for a response from the server

and then it terminates with this error:
Agent ‘MyAgent’ is retrying after an error, YY retries attempted

This can be due to several reasons:

  • Your transaction is taking a long time and needs some tuning. If your transaction touches too much data or uses a bad query plan, it can result in a long-running query; check your T-SQL and see if the execution plan is optimal.
  • There is a problem with the network. If you normally don't have this issue and it just happened out of the blue, check the network; sometimes a network failure or a saturated endpoint can increase transfer times, affecting your replication.
  • Server performance. Either the publisher or the subscriber can have a performance problem; excessive CPU or memory usage can eventually impact a replication transaction, causing it to time out.
  • The query just needs more time to complete. If this is the case, you can tweak the timeout setting to give the transaction more time to process properly. To do this:
  1. Right click the Replication folder
  2. Click Distributor Properties and select General
  3. Click ‘Profile Defaults’
  4. Choose ‘Distribution Agents’ on left
  5. Click ‘New’ to create a new default agent profile
  6. Choose ‘Default Agent Profile’ from the list displayed, (to copy this)
  7. Pick a name for your new profile and update the QueryTimeout value in the right column
  8. Save
  9. Choose to use this profile across all your replication sets; however, I would recommend applying it only to the agent that requires this change
  10. To individually assign the profile, open Replication Monitor and then in the left pane click your replication set
  11. In the right pane, select your desired agent, right click and change the profile to the new one you just created
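If you prefer to script steps 1–9 instead of clicking through the UI, the replication agent profile stored procedures can do the same thing. A hedged sketch, run at the Distributor; the profile name and timeout value here are made up for illustration:

```sql
-- Create a custom Distribution Agent profile (agent_type 3 = Distribution Agent)
-- and raise its QueryTimeout. Run at the Distributor.
DECLARE @profile_id INT;

EXEC sp_add_agent_profile
    @profile_id   = @profile_id OUTPUT,
    @profile_name = N'LongQueryTimeout',
    @agent_type   = 3,
    @description  = N'Distribution Agent profile with a longer query timeout';

EXEC sp_change_agent_parameter
    @profile_id      = @profile_id,
    @parameter_name  = N'QueryTimeout',
    @parameter_value = 3600;  -- seconds
```

You still need to assign the new profile to the agent afterwards, as in steps 10 and 11.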

Mini Hack on expired subscriptions

When a replication is marked as expired, it will tell you that you need to reinitialize.

To activate it “under the hood”, check the last error in your replication monitor; it will show you the last sequence number it tried to process. Then run this command (using the corresponding seq_no):

update MSsubscriptions
set status=2
where subscription_seqno=0x0002AADE00005030000100000002

The status column means:

0 = Inactive
1 = Subscribed
2 = Active

You can change it to Active and it will try to process again. Why would you use this? If the subscription expired but your distribution cleanup job hasn't run yet, it can try to reprocess everything again. If the issue was related to a network timeout and your network is now back up, you can try this, as it will start from the last sequence number. You can also do this to reproduce the last error reported; it will fail and eventually expire again, but you will have a better idea of why it failed in the first place.
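Before flipping the status, you may want to see what state the subscriptions are actually in. A minimal check, run in the distribution database, using the status values listed above:

```sql
-- Run in the distribution database: list subscriptions that are not Active
-- (0 = Inactive, 1 = Subscribed, 2 = Active)
SELECT publisher_database_id, article_id, subscriber_id,
       subscription_seqno, status
FROM dbo.MSsubscriptions
WHERE status <> 2;
```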

Multi-threading or “Streams”

By a slow replication I mean one where you know your commands are experiencing a delay going from the distributor to the subscription; you can check this with performance counters or by quickly inserting a token.

You can improve performance by adding streams. Normally, the default setting writes replication transactions sequentially, one by one; with streams you can add more threads. Say you specify 4 streams: you will be processing 4 transactions at a time, meaning a faster turnaround. This can work beautifully, but it can also generate deadlocks and inconsistencies. I would recommend starting low, adding just 1 stream at a time, and stopping when you start seeing problems. Do not go crazy and treat this as a turbo button by adding 30 streams, and like most features, test it in QA first!

To Enable this option follow these steps:

  1. Open Replication Monitor, expand the Publisher and select the Publication in the left pane.
  2. In the right pane, under “All Subscriptions”, you will see a list of all the Subscribers.
  3. Right-click the Subscriber you want to modify and click “View Details”. A new window will appear with the distribution agent session details.
  4. Now click “Action” in the menu bar at the top and select “Distribution Agent Job Properties”; this will open the corresponding job properties.
  5. Go to “Steps” in the left pane, highlight “Run Agent” in the right pane, and click Edit.
  6. A new window will pop up; scroll to the right end of the command section and append this parameter: “-SubscriptionStreams 2”
  7. Save the settings and restart the Distribution Agent job.
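After step 6, the job step command ends up looking something like the line below (server, database, and publication names are placeholders; only -SubscriptionStreams is the addition):

```
-Subscriber [SUBSCRIBER] -SubscriberDB [MyDB] -Publisher [PUBLISHER] -Distributor [DISTRIBUTOR] -PublisherDB [MyPubDB] -Publication [MyPublication] -Continuous -SubscriptionStreams 2
```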

You might encounter some issues when implementing this; you can read this KB for further info:


There are many tips on how to fix a replication. Sometimes it is easier to just reinitialize, but that is not an option when critical systems depend on the subscription being up to date, or when your database is so huge that reinitializing would take days to complete. When possible, try to troubleshoot instead of restarting the replication from scratch, as it will give you a lot more insight into what is going on.

Categories: DBA Blogs

Log Buffer #385, A Carnival of the Vanities for DBAs

Fri, 2014-08-22 08:00

This Log Buffer edition combs through top notch blog posts from Oracle, MySQL and SQL Server postings around the globe.


Oracle:

You want to test the Oracle Java Cloud? You can get your own 30-day test account & instance. Or you can get a Java account with our permanent test system.

Some ramblings about Oracle R Distribution 3.1.1.

Scott is demystifying Oracle Unpivot.

Java 8 for Tablets, Pis, and Legos at Silicon Valley JUG – 8/20/2014

A new version of Oracle BPM Suite with Adaptive Case Management (ACM) is now available.

SQL Server:

Data Mining: Part 14 Export DMX results with Integration Services

Should you be planning to move from Exchange to Office 365? If so, why?

Stairway to T-SQL DML Level 12: Using the MERGE Statement

From SQL Server Management Studio it’s hard to look through the first few rows of a whole lot of tables in a database.

Special Characters can lead to many problems. Identifying and reporting on them can save a lot of headache down the road.


MySQL:

MariaDB Galera Cluster 5.5.39 now available

A closer look at the MySQL ibdata1 disk space issue and big tables

How-To: Guide to Database Migration from MS Access using MySQL Workbench

Using resource monitoring to avoid user service overload

How to use MySQL Global Transaction IDs (GTIDs) in production

Categories: DBA Blogs

Microsoft PowerBI: News from WPC

Wed, 2014-08-20 14:24

During the Worldwide Partner Conference (WPC) that happened last month, Microsoft made public some of the new functionalities that we can expect to see soon on Microsoft PowerBI.

If you were on another planet in the last few months and didn't hear about Power BI for Office 365, I've included a list of the tools that comprise the Microsoft BI stack in the cloud. Keep in mind they are different tools and not a single product, each serving a different purpose. As of the writing of this article, the tools that comprise the entire solution are:

  • Power Pivot
  • Power View
  • Power Query
  • Power Map
  • Power Q&A
  • Power BI Windows Store App

Some of these were already available as Excel add-ins, or built into the Excel product, for a long time and have now been repackaged in this cloud solution, while others, like Power Q&A, are cloud-only.

So, what was the big news from WPC 2014? During the conference we watched a demonstration of what to expect in the coming months, so today I'll discuss the announcements I believe were the most important. If you want to see the entire presentation, you can find it here.


New visualizations!

One of the key issues we were facing with the current version of Power BI was the lack of some visualizations customers always ask for, the main one being the gauge. It is incredible how popular those little gauges have become over time. Decision makers love them, so it's really good to see they are finally available in Power BI.

Besides the gauge, in the image above taken from the WPC demonstration we can see other data visualizations, like the radar chart and the treemap.

Editing capabilities in the browser

Another important announcement was the ability to edit dashboards and change data visualizations in the browser, without using Excel. It doesn't seem like much, but this can be very important, and it indicates a shift in Microsoft's behaviour. If you look at all the Power BI functionalities, and in fact at the entire BI stack, Excel was always the central tool; you needed Excel for basically everything. Now we are seeing some nice features that you can manage and control inside the browser. Let's wait for the next steps.

It is important to mention that everything is HTML5 instead of Silverlight, meaning we can have the same experience on basically any device.

Partner Solution Pack

If I was asked to name just a single announcement that was made that could drastically change the market, it would be this one.

A Partner Solution Pack lets Microsoft partners create a bundled BI solution including the data, connectivity to the data sources, and all the interactive reports. That means that, as users, we can buy a solution pack from Salesforce, for instance, connect it with our Salesforce account, and it will automatically create a fully interactive dashboard with our own data.

Now, imagine the other way around: you, as a Microsoft Partner, now have the ability to create a complete BI solution for your customers and make it available on the cloud. Your customers can buy this package, connect it with their data, and make use of the solution in a matter of seconds.

The Partner Solution Pack, in my opinion, will create a huge market for all Microsoft partners and provide us, the users, with tons of good packages and the ability to have a BI solution while paying much less than it would cost to create everything from scratch.


But you may tell me that there are other tools in the market that can connect to partner applications and build a dashboard, so what would be the advantage of using Power BI over the existing tools?

The biggest advantage is Power BI Q&A. As you can see in the screenshot, every screen has a simple search box at the top of the page, which allows the user to query the data with simple natural-language questions. So, if the user wants to know the “opportunity size by month and by industry”, all they have to do is ask, and Power BI will find the data and choose the best visualization method. After that, you can pin the report to the dashboard, and that's it: you can now keep track of this important information on a daily basis, without writing a single line of code, and without filing a change request with the IT department and joining a huge queue of requests that would take months to be addressed.


I hope that in this article I was able to show you the potential these new functionalities can bring to your company. If you need more information about Power BI, or if you're as excited as I am and want to start using it right away, just contact us; our team will be glad to work with your company to either develop a BI solution that consumes your data, or plan the development of your own Partner Solution Pack so you can offer your customers a complete BI solution using Power BI for Office 365.

Click here to watch the entire WPC announcement and see the Power BI reports in action. The Power BI demonstration starts at the 21:10 mark.


Categories: DBA Blogs

SSAS Database Doesn’t Show Up in SharePoint 2013 Dashboard Designer

Wed, 2014-08-20 07:43

Howdy everyone,

Just a quick tip for everyone that is struggling to configure SharePoint Server 2013 PerformancePoint  to connect to a SQL Analysis Services 2012 or 2014 cube.

After a new SharePoint Server 2013 installation, I tried to create a new connection to my Analysis Services cube through the SharePoint Dashboard Designer, but no matter what, the Database option always showed up as empty and I could not select my Analysis Services database.



In the Windows server log event I could find the following message:

The data source provider for data sources of type ‘ADOMD.NET’ is not registered. Please contact an administrator.

PerformancePoint Services error code 10115.

The reason you receive this error message, believe it or not, is that even if you are using SQL Server 2012 or newer with SharePoint Server 2013, it will try to load the SQL Server 2008 version of the ADOMD.NET DLL.

If you install the SQL Server 2008 R2 ADOMD.NET component, which you can download from the following location: , and restart IIS, you will fix this issue and be able to successfully connect to your SQL Server 2012/2014 Analysis Services database.

Hope this helps.


Categories: DBA Blogs

SQL Server Error: 18056, Severity: 20, State: 29

Wed, 2014-08-20 07:40

Howdy everyone,

One of the most frequent error messages I come across when dealing with SQL Server, particularly SQL Server 2008 R2, is Error: 18056, Severity: 20, State: 29. Not only do I constantly see this error message in the SQL logs, but also in the Microsoft community forums. I often see inquiries like: “I have SQL Server 2008 R2 SP2 installed and I see the error below. I have the most recent Service Pack installed, but the problem remains.”

Error: 18056, Severity: 20, State: 29.
The client was unable to reuse a session with SPID XXXX, which had been reset for connection pooling. The failure ID is 29. This error may have been caused by an earlier operation failing. Check the error logs for failed operations immediately before this error message.

Now, this error message can be triggered by many things: for instance, a lack of user permissions in some database, or, in the worst case, a lack of resources on the server that makes it stop accepting new connection requests. However, in either of those cases you would probably have a lot of angry users calling you and complaining that they can't do their work. If that's not the case, you are probably just seeing a generic, internal error message that shouldn't be in the SQL error log in the first place.
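If you do want to check for a failed operation right before these entries, you can search the error log directly. sp_readerrorlog is an undocumented but widely used wrapper around xp_readerrorlog; the parameters below are the log number (0 = current), the log type (1 = SQL Server), and a search string:

```sql
-- Search the current SQL Server error log for error 18056 entries,
-- then read the surrounding messages for the real cause.
EXEC sp_readerrorlog 0, 1, N'18056';
```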

The first step to addressing this issue, or simply investigating if it’s something more serious, is to update to the latest Cumulative Update.

Microsoft released a fix that makes SQL Server stop logging this generic error message in some cases, when it’s not important for you to investigate the issue. This fix is documented in the following Knowledge Base article:

I hope this helps you get rid of this error message.


Categories: DBA Blogs

On the Road with Laerte

Mon, 2014-08-18 09:21

For the month of October, Microsoft PowerShell MVP, Laerte Junior will be touring Brazil and Europe for various SQL Server-related speaking engagements.

“Thankfully, I am working at a company that fully supports their employees to speak and participate in community events.” Laerte says. “I can travel to Europe for 5 SQL Server conferences, and then go to the USA to attend the MVP Global Summit and SQL PASS Summit.”

While most European speaking sessions have been confirmed, we’ll be updating the schedule as the topics become available. You can follow Laerte on his personal blog at

Date | Location | Event | Topic
September 27, 2014 | São Paulo, Brazil | SQL Saturday #325 | Criando suas próprias soluções usando PowerShell (Creating Your Own Solutions Using PowerShell)
October 1, 2014 | Schelle, Belgium | SQL Server Days | Mastering PowerShell for SQL Server
October 2, 2014 | Utrecht, Holland | SQL Saturday #336 | Full-day pre-conference training session: Mastering PowerShell for SQL Server
October 3/4, 2014 | Utrecht, Holland | SQL Saturday #336 | TBD
October 11, 2014 | Sofia, Bulgaria | SQL Saturday #311 | Writing Your Solutions Using PowerShell
October 18, 2014 | Oporto, Portugal | SQL Saturday #341 | Criando suas próprias soluções usando PowerShell
October 24, 2014 | Barcelona, Spain | SQL Saturday #338 | TBD

Will you be attending any of these sessions? If so, which ones?

Categories: DBA Blogs

Log Buffer #384, A Carnival of the Vanities for DBAs

Fri, 2014-08-15 07:32

This Log Buffer Edition starts with some great posts from the Oracle arena, then passes through the world of SQL Server, and stops in the MySQL field.


Oracle:

OAG/OES Integration for Web API Security: skin and guts by Andre Correa

Showing Foreign Key Names in your Data Modeler Diagrams

walkmod : A Tool to Apply Coding Conventions

Oracle VM Virtual Appliances for E-Business Suite 12.1.3 Now Available

RMAN Catalog requires Enterprise Edition (EE) since Oracle Database

SQL Server:

Restore Gene : Automating SQL Server Database Restores

With a hybrid cloud, can you get the freedom and flexibility of a public cloud with the security and bandwidth of a private cloud?

A clear understanding of SQL Data Types and domains is a fundamental requirement for the Database Developer, but it is not elementary.

Automating SQL Server Agent Notification

Adding Custom Reports to SQL Server Management Studio


MySQL:

The Road to MySQL 5.6 — A DBA Perspective

Virtual servers for MySQL are popular, but are they the answer? Should we be containing our instances instead?

Jeremy Cole recently blogged about the feature SET GLOBAL sql_log_bin.

Which SQL queries take all the time? Using MaxScale to answer that age old question.

SBR vs RBR when using On Duplicate Key Update for High Availability

Categories: DBA Blogs