
Pythian Group

Love Your Data
Updated: 17 hours 17 min ago

Oracle E-Business Suite: Virtual Host Names

Tue, 2015-10-06 07:56

The ability to use virtual host names with Oracle E-Business Suite is a feature I have been waiting on for a long time. When I finally saw a post on Steven Chan’s blog about it, I was very excited. But when I got to review the MOS note “Configuring Oracle E-Business Suite Release 12.x Using Logical Host Names”, I was left disappointed.

In my opinion, the main advantage of using virtual host names shows in a DR failover scenario. By using virtual hostnames we can set up the servers in both the primary and secondary datacenters to use the same virtual hostname, even though their physical hostnames differ. This helps when we fail over services and databases to the secondary datacenter, as we don’t have to reconfigure the application to use new physical hostnames. Currently, when we install E-Business Suite to use a virtual hostname, the Concurrent Managers don’t work, as they internally use the physical hostname to communicate.
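To make this concrete, here is a sketch of what the hosts file entries might look like in each datacenter (the hostnames and IP addresses are hypothetical). The same virtual hostname resolves to the local physical server in each site, so the E-Business Suite configuration can stay unchanged across a failover:

```
# Primary datacenter, /etc/hosts
10.1.0.15   ebsapp01.example.com   ebsapp01   # physical hostname
10.1.0.15   ebs-prod.example.com   ebs-prod   # virtual hostname

# DR datacenter, /etc/hosts
10.2.0.27   ebsapp02.example.com   ebsapp02   # physical hostname
10.2.0.27   ebs-prod.example.com   ebs-prod   # virtual hostname
```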

The new MOS note describes this very feature of using virtual hostnames with Oracle E-Business Suite. So why am I disappointed? Because it leaves a very important use case out. In most cases where virtual hostnames are used, the servers keep their physical hostnames: running the hostname or uname command shows the actual physical hostname, and the virtual hostname is present only in DNS and the hosts file. This scenario is not covered by the MOS note. Instead, the note asks us to reconfigure the server so that the hostname or uname command returns the virtual hostname instead of the physical one.

I believe the need to reconfigure the server to use a virtual hostname defeats the main purpose of setting up virtual hostnames, making this MOS note useless :(

Thus, I will keep waiting for this to become an out-of-the-box feature. In the meantime, I have a custom in-house method for using virtual hostnames with E-Business Suite that I will blog about in the future.


Discover more about our expertise with Oracle.

Categories: DBA Blogs

Five Years, Five Top 5’s

Sun, 2015-10-04 22:59



On October 4th, 2010, I joined Pythian. At that time, Human Resources (HR) was a team of one, supporting 90 employees in 13 countries. Five years later, on October 4th, 2015, HR is now a team of fifteen, and today Pythian celebrates a milestone as we reach 400 employees in 36 countries!

What an incredible journey (…and yes, it’s all about the journey)!

The opportunity to work with stunning colleagues and to collectively build a team of passionate and talented HR professionals who continuously make me laugh, inspire me with their creativity, and challenge me to be a better leader is both a privilege and an honour. I am fortunate to work alongside them every day.

As members of Pythian’s HR team, we are business leaders. We care about (and contribute to) the success of Pythian. We solve the puzzles that can come with growing a business from 90 to 400 employees in five years. We deliver quality of service to our clients. We make a difference. We value our employees. We have fun.

To celebrate five incredible, challenging and rewarding years I share my Top 5’s.


Five Things I Appreciate Most About Pythian HR:

We are strategic.

We think global, not local.

We are productive, not busy.

We brainstorm without the “but”.

We are consistent, fair and we care.


Five Favorite Moments:

Each HR hire and expanding our HR team globally in 2014.

Celebrating our 2013 HR Initiative of the Year win with our world class HR team.

Reaching 200 employees in September 2012.

Celebrating our first Geek Day on May 25th, 2011, and observing how our global team spirit has grown.

Delivering our first BORG and Orientation session on January 4th 2011.


Five Favorite Programs We Have Built:

Love Our Community, the program that supports our global employees and their communities.

SEED, the Self Directed training and professional development fund.

Delphic Journey, the Performance Feedback program.

Delphic Iris (the newsletter) and Newsflash (the weekly update).

Pythianology, the internal speaker series that showcases our talents, passions and interests.


Favorite Leadership Lessons in 10 Words or less:

Leadership is an art, not a science.

Lead the way you want to be led.

If you don’t lead by example, you are not leading.

Embrace Self Awareness, but focus your efforts on Self-Regulation.

Words matter. Less is more.


Five Favorite Books from the Last 5 Years:

You Are the Placebo: Making Your Mind Matter

The Power of Habit

When All You Have Is Hope

Turn The Ship Around

Speaking As A Leader


Steve Jobs once said that “Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work.  And the only way to do great work is to love what you do”.

Great work, amazing colleagues, and the opportunity to do what I love…thank you Pythian for a fantastic five years!

Categories: DBA Blogs

Pythian’s Velocity of Innovation event gives attendees a view into the future of IT

Fri, 2015-10-02 12:20

This summer I had the opportunity to moderate a discussion that brought together IT’s most innovative thought leaders with some of North America’s top CIOs. And we gave the CIOs the chance to ask the panelists anything. Here’s what happened.

When three of the most respected information technology thought leaders came together with 20 of North America’s top CIOs for an hour and a half in New York City this summer, the result was a compelling discussion about the future of IT.

Pythian supplied the panel of experts at the Velocity of Innovation CIO event: Paul Vallé, Pythian’s founder and CEO; Gene Leganza, vice-president, principal analyst serving enterprise architecture professionals at Forrester Research; and Otto Toth, CTO at Huffington Post. And the attendees supplied the questions, covering an impressive range of topics.

More than an exchange between panel members, the 90-minute session fully engaged the audience in a discussion about everything from agile software development and the importance of being a data-driven business to such forward-looking matters as artificial intelligence and virtual reality.

This blog post is the first of a series that focuses on topics covered in the Velocity of Innovation discussions in New York, and concentrates on the first question addressed: putting innovation and agility into practice.

Question: Innovation and agility are common buzzwords to describe what everybody wants. What have you explicitly done to get better at innovating, or to make your organization or your clients’ organizations more agile? And how has this push for agility impacted either your enterprise architecture or your enterprise culture?

Otto Toth: Innovating quickly is not always in the best interest of the business, or it may not be easy to do properly. It can actually work against the business. Instead of moving faster, being very agile can slow down everything.

Everybody has a different perception of what agile should be, and we have our own definition. We started to build our own CMS three years ago. We started very agile, building everything simultaneously. Later, we decided to use Scala as the programming language, but it’s hard to find people who are trained in that language.

A year ago, I changed the process a little bit. Now we break the process down into small pieces and modules. After three years in development, we finally shipped the first module of our CMS. It is an interesting lesson in how too much agility can slow down the whole process.

Most users don’t really know what they want. Too many decision-makers just slow down the process. It’s better to have a few selective people or a core team who make the decisions and dream about features that are not available.

Gene Leganza: What makes an enterprise agile is being able to sense what’s going on in the marketplace and responding to it. What’s happening in your marketplace is about being able to deal with data. The ability to understand everything that everybody is doing and everything their friends are doing because of available data means you can be in tune with the marketplace as never before.

What does that mean for your organization? Data modelling and data integration are starting to migrate to the business side, where people are doing self-service analytics. If you try to keep up by jumping on some of this technology, you may shoot yourself in the foot. Instead of being on the leading edge, pay attention to what’s going on and when you figure out how to do it right, then do it as soon as possible. You avoid shooting yourself in the foot and you’re not that far behind the leaders of the dot-coms.

The flip side of agility is innovation. An interesting aspect of innovation is getting really hot talent into your environment. Getting the right talent and doing smart things and being leading edge are challenges. You have to figure out what level to drop in on, where you are in the industry. Are you a startup or are you a state organization that needs to be a fast follower?

Paul Vallé: Addressing what Otto Toth highlighted, tying up capital for three years is an enormous cost, in carrying costs, as well as competitive posture. Any solution you imagined three years ago has three years of technical debt when it launches. Three years is about the amount of time we expect the system to last before renewal. This is a key problem that the agile development methodology tries to tackle.

If you can create high bandwidth, you have access to a lot of knowledge and a lot of insights. Then you have low latency, and you’re able to respond to what you learned quickly and you’re able to inform your business decisions.

Your systems must be alive and you must continuously push them forward and tighten up your release cycles. You should try to release four or eight times a year, and eventually get to a point where you can do multiple releases per day, and even do per feature releases.

Per-feature releases are ideal in an agile world. They give you the high bandwidth, the low latency and the feedback loops. And per-user feature enablement lets you create an A/B testing capability for new revisions of your application. Moving to per-user feature enablement and per-feature releases is an enormous transformation, and has everything to do with agility and velocity.
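The per-user feature enablement Paul describes can be sketched with a deterministic hash (the names and percentages below are illustrative, not any particular product’s API). Hashing the user and feature together keeps each user’s variant stable across sessions, which is what makes A/B comparisons meaningful:

```python
import hashlib

def in_test_bucket(user_id: str, feature: str, rollout_pct: int) -> bool:
    """Assign a user to a feature's test bucket deterministically.

    Hashing user_id together with the feature name gives a stable
    bucket per (user, feature) pair, so the same user always sees
    the same variant of a given feature between releases.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a number in 0..99
    return bucket < rollout_pct

# The assignment is repeatable, and the rollout percentage bounds it:
assert in_test_bucket("user-42", "new-editor", 50) == in_test_bucket("user-42", "new-editor", 50)
assert in_test_bucket("user-42", "new-editor", 100)  # everyone is in at 100%
```

Ramping rollout_pct from 0 to 100 then turns a per-feature release into a gradual, reversible rollout rather than a single cutover.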

Audience comment: As a CTO manager, what resource do you use to be skeptical of smart engineers who want to build things from scratch?

Otto Toth: I have a very simple rule that if the service costs less than an engineer’s one-year salary, then don’t even think about developing it. Two weeks can turn into two years and 10 people and can result in nothing.

Gene Leganza: There’s a notion at the engineer level that agile means “There’s nothing between me and creating a solution. That’s what you hired me for and I have the smarts. And I don’t want any bureaucracy.” Then there’s agility at the enterprise level, which is about reducing risk and understanding how soon change can be in production.

What I tell people is it depends who you talk to. The higher up you go, the more people are going to be receptive to what improves the whole portfolio rather than one project.

And this is where architects come in, someone who has been hands-on, and has the credibility and knowledge to guide the organization more strategically.

Interested in being a part of a discussion like this one? VELOCITY OF INNOVATION is a series of thought-leadership events for senior IT management hosted by Pythian. Pythian invites leading IT innovators to participate in discussions about today’s disruptive technologies: big data, cloud, advanced analytics, DevOps, and more. These events are by invitation only.

If you are interested in attending an upcoming Velocity of Innovation event in a city near you, please contact us. To view our schedule of upcoming events, visit our Velocity of Innovation page.

Categories: DBA Blogs

Pythian at Oracle Open World 2015

Fri, 2015-10-02 11:45


Are you excited for this year’s Oracle Open World 2015? Pythian will once again be attending and participating in a number of marquee presentations and events.

This event is an amazing opportunity to hear from the finest minds in the Oracle space, from Pythian and across the industry. We will be sending our Oracle experts to deliver compelling presentations, packed with best practices, expert tips and advice that you won’t want to miss (see the schedule below). Make sure you take some time to connect with us and engage with our sought-after Oracle experts.


Oak Table World

Don’t miss the popular mini-conference, Oak Table World, in San Francisco on October 26th & 27th. There is a stacked agenda for this year’s conference, including our very own CTO Alex Gorbachev on stage at 3 PM on October 27th.


Bloggers Meet-up

Once again, we are thrilled to announce that we will be co-hosting the Annual Bloggers Meetup, organized in partnership with the Oracle Technology Network on October 28th (Save the date!). Be on the lookout for a blog post from Alex Gorbachev, Pythian’s CTO, providing details including the location, opportunities to connect, and activities.


Contact Us

Should you wish to schedule time with one of our experts at OOW, reach out to Emilia (Partner Program Mgr/ 613 355 5038) and she will be happy to set that up. Alternatively, catch all the action live from @pythianpartners while on site. This is the best way to get hold of any of the Pythian team attending the event.

Attention Pythian Partners & clients, if you’re attending please reach out to us for details on social happenings you won’t want to miss!

See you there! #oow15 #pythianlife



Pythian Sessions at Oracle OpenWorld 2015

Rene Antunez

Session 1 – Oracle Exadata 101: My First 100 Days with Oracle Exadata 

Date/Time: October 25, 3:30 pm – 4:15 pm | Moscone West—3011

Abstract: The biggest headline at the 2009 Oracle OpenWorld was when Oracle’s Larry Ellison announced that Oracle was entering the hardware business with a prebuilt database machine, engineered by Oracle. Since then, businesses around the world have started to use these engineered systems. This beginner- to intermediate-level session takes you through my first 100 days of administering an Oracle Exadata machine and all the roadblocks and success I had along this new path.


Session 2 – Private Cloud Provisioning Using Oracle Enterprise Manager 12c

Date/Time: October 25, 12:00 pm – 12:45 pm | Moscone South—305

Abstract: With the newest version of Oracle Database 12c and its multitenant option, we are moving toward an era of provisioning databases for our clients faster than ever, even leaving out the DBA and allowing developers and project leads to provision the databases they need themselves. This presentation guides you through the different ways you can provision data from one Oracle Database to another using Oracle Enterprise Manager 12c.


Session 3 – Oracle Recovery Manager 12c: It’s Alive and All About Recovery, Recovery, Recovery

Date/Time: October 25, 9:00 am – 9:45 am | Moscone South—304

Abstract: Oracle Recovery Manager (Oracle RMAN) has evolved since being released in Oracle 8i Database. With the newest version of Oracle Database 12c, Oracle RMAN has great new features that allow you to reduce your downtime in case of a disaster. In this session, learn about the new features that were introduced in Oracle Database 12c regarding recovery and configuration tips, and go over several demos so that if you are ever faced with this adversity, you will come out on top.


Session 4 – Database as a Service: What Is It and How Do You Use It? A Q&A with an Expert Panel 

Date/Time: October 29, 9:30 am – 10:15 am | Moscone South—102

Abstract: What is the best method for delivering, setting up, and using database as a service (DBaaS) utilizing Oracle software, hardware, or the cloud? Within a company (private cloud) there are several ways, outside of a company (public cloud) there are additional ways, or you can use a combination of both. For IT, Oracle Database, and technology professionals, platform as a service (PaaS) and DBaaS are technologies you need to know about and use to stay current in your profession. Attend this session to learn about technology and deployment choices available using Oracle technology and solutions. Subject matter experts from Oracle OpenWorld and Collaborate answer your questions so you can architect, plan, and implement a successful DBaaS project.


Christo Kutrovsky

Session 1 – Oracle Real Application Clusters Solving Common Scalability Problems 

Date/Time: October 25, 8:00 am – 8:45 am | Moscone South—304

Abstract: In this session, explore the promise of near-linear scalability that Oracle Real Application Clusters (Oracle RAC) makes. This session investigates several common problems in detail and presents solutions that are transparent to the application. Cache fusion contention, write-write contention, application partitioning and indexing are some of the topics discussed. Specific examples and their solutions are provided as well as performance comparisons.


Alex Gorbachev

Session 1 – Bridging Oracle Database and Hadoop 

Date/Time: October 25, 8:00 am – 8:45 am | Moscone South—303

Abstract: Modern big data solutions often incorporate Hadoop as one of the components and require the integration of Hadoop with other components including Oracle Database. This presentation explains how Hadoop integrates with Oracle products focusing specifically on the Oracle Database products. Explore various methods and tools available to move data between Oracle Database and Hadoop, how to transparently access data in Hadoop from Oracle Database, and review how other products, such as Oracle Business Intelligence Enterprise Edition and Oracle Data Integrator integrate with Hadoop.


Session 2 – Solving the “Dirty Secret” of Big Data with Oracle Big Data Preparation Cloud Service

Date/Time: October 27, 5:15 pm – 6:00 pm | Moscone South—310

Abstract: Big data promises many game-changing capabilities, but the “dirty secret” is that up to 90 percent of big data project time is spent preparing the data so that it can be usefully analyzed. Join this session to see a live demonstration of a new solution to this problem—Oracle Big Data Preparation Cloud Service. See how Oracle Big Data Preparation Cloud Service takes “noisy” data from a broad variety of sources in many different formats, both structured and unstructured, and uses sophisticated statistical, knowledge-based, and machine-learning algorithms to cleanse, enrich, and blend it into useful data streams, ready for further discovery, analysis, and reporting.


Maris Elsins

Session 1 – Mining Automatic Workload Repository: Alternative Methods for Identification of the Top SQLs 

Date/Time: October 25, 1:30 pm – 2:15 pm | Moscone South—308

Abstract: We can use reports from the Automatic Workload Repository feature to quickly identify the SQL_IDs of the top statements in the database. But what if the Automatic Workload Repository report reveals no “low-hanging fruit,” and the resource usage is evenly distributed among multiple statements? Where do we start? Is there a better way to identify the starting point for the tuning of a resource-bound system? This session explains when the Automatic Workload Repository reports are misleading and how to look at stored data from a different angle to determine the top consumers. This session includes a practical demonstration using scripts for Automatic Workload Repository mining that attendees can apply to their own challenging database performance tuning problems.


Nelson Calero

Session 1 – Oracle Exadata Maintenance Tasks 101 

Date/Time: October 25, 1:30 pm – 2:15 pm | Moscone West—3011

Abstract: You have been working as a DBA for years and are planning/starting to support an Oracle Exadata installation. There are several planned and unplanned maintenance tasks that you will face, each one involving many details. Don’t wait to master them through practice—in this session, the speaker shares their experience with Oracle Exadata installations during the last few years, and shares details of the most common maintenance tasks and best practices to make better use of your resources. This includes setting up ASR, replacing components (hard and flash drives, batteries, InfiniBand switches, and more), and patching.


Session 2 – Get More Out of Your Oracle Investment in Latin America

Date/Time: October 25, 2:30 pm – 3:15 pm | Moscone West—3010

Abstract: The LAOUC is the Latin America user group community that has an affiliated group of about 1,000 professionals. It is a self-governing community of Oracle users who volunteer to achieve common goals and objectives connected with Oracle technology. The group’s purpose is to promote membership involvement, education, and influence. Come to this session to learn more.


Discover more about our expertise in the world of Oracle.

Categories: DBA Blogs

Log Buffer #443: A Carnival of the Vanities for DBAs

Fri, 2015-10-02 10:45

This Log Buffer Edition finds and publishes blog posts from Oracle, SQL Server and MySQL.


Oracle:

  • SCRIPT execution errors when creating a DBaaS instance with local and cloud backups.
  • Designing Surrogate vs Natural Keys with Oracle SQL Developer.
  • EBS General Ledger – Accounting Hub Reporting Cloud Service.
  • Oracle Database Standard Edition 2 is available.
  • Disable “Build After Save at JDeveloper” 12c.

SQL Server:

  • Learn where to get the latest installation and learning resources for the ever-evolving components of the Hadoop ecosystem, and how those components may complement common everyday Microsoft SQL Server tasks.
  • Creating Dashboards for Mobile Devices with Datazen.
  • Install SQL Server Management Studio with Web Installer without Internet Access.
  • Using wildcard to find patterns in strings going from the basics to some more complicated conditions.
  • A general way to maintain History tables.


MySQL:

  • Capture database traffic using the Performance Schema.
  • Become a MySQL DBA blog series – Database Indexing.
  • Prepping your MySQL indexes for a character set change.
  • HowTo: Retrieve Direct Messages From Twitter and Store Them in MySQL.
  • Using Docker to Visualize MySQL Performance Schema.


Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL.

Categories: DBA Blogs

My Sales Journey: #5

Thu, 2015-10-01 13:49

I am a little sad to let go of the “Life of a Newly Minted Sales Professional” title today, but happy that I am starting to develop a bit of a patina. “My Sales Journey” reflects my state of mind right now, and feels more appropriate given that today marks the end of my first month at Pythian!

Last week we explored social media and where people are building their online habitats. Today, let’s journey inside the minds of these people to see what they think. Let me explain.

As a sales professional for a managed service, I talk to decision makers and stakeholders. My job is to engage them so that, hopefully, they will want to take the conversation one step further. For all this to happen, I need to know what their responsibilities, pain points and challenges are.

So today, let’s step inside the mind of a VP/Director/Executive:

They are responsible for ensuring the continual functioning of mission-critical operations. They have to stay relevant while bridging the gap between business and technology. They are in charge of driving innovation and improving profitability. They are also tasked with delivering flexible strategies that enable the business to deliver high-quality, valuable services.

Now that I know this at a broad level, I can be more effective with my message. This week has been all about creating and assembling content that speaks to these challenges and responsibilities. It can come in the form of white papers, relevant marketing, team resumes, cost analyses and slide decks.

Know their industry and purchasing cycles. This is a research heavy task but pays off when you have a carefully curated list that you can refer to. It becomes a powerful weapon in your sales arsenal.

Sleuth enough information to determine initial feasibility. Your first contact is twice as likely to be successful if you have qualified the type of messaging with their current challenge.

As you can tell, lots of learning happened in the last week. I hope some of these tips will help you with your outreach. Stay tuned next week when we step in to the mind of a manager who may be lower on the rung but can be a key person to land your message in front of the decision makers.

Now, it is Friday and time to kick back and relax with a team lunch! My second one in 4 weeks! Yessirrrr, that is how we roll at Pythian. See you next week!


Categories: DBA Blogs

SQL On The Edge #3 – Azure Elastic Database Jobs

Wed, 2015-09-30 10:15

Hello and welcome to episode number three of SQL On the Edge! In this episode we’re going to cover Azure SQL elastic database jobs. In the last few months, Microsoft has put a lot of effort in augmenting the elastic capabilities of the Azure SQL database service. We’re not only talking about the ability to scale UP but in this case about the capability to scale OUT.

The investments made on this front have produced:

– Elastic database tools for easier .NET development against multiple Azure SQL databases.
– Elastic database pools for better resource management.
– Elastic database jobs for easier job execution against groups of databases.
– Elastic database query for easier integrated querying of groups of databases.

Elastic database jobs are interesting because for a long time there wasn’t a native way to run jobs against Azure SQL databases. The solution we used with many of our clients was either a SQL Server on a VM for its SQL Agent scheduling capabilities, the Windows scheduler on a Windows Server VM, or the Azure job scheduler. These options, while serviceable, were not really optimal for the task.

Elastic database jobs provide capabilities that cover the gaps we had before:

– You can run a job against an entire database pool, a pre-defined shard set or a custom database list.
– Job execution is logged and history can be retrieved per database.
– Information can be collected and stored from the results of each job execution.
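Conceptually, this is the loop that elastic database jobs automate: run one statement against every database in a group and keep a per-database execution log. The sketch below illustrates the idea with throwaway SQLite in-memory databases standing in for a custom database list; it is not the Azure API, just the shape of the problem:

```python
import sqlite3

def run_job(databases, statement):
    """Run one statement against each named database connection,
    recording per-database success or failure, like a job history."""
    history = {}
    for name, conn in databases:
        try:
            conn.execute(statement)
            conn.commit()
            history[name] = "succeeded"
        except sqlite3.Error as exc:
            history[name] = f"failed: {exc}"
    return history

# Two in-memory databases stand in for a custom database list.
shards = [("shard1", sqlite3.connect(":memory:")),
          ("shard2", sqlite3.connect(":memory:"))]
history = run_job(shards, "CREATE TABLE IF NOT EXISTS audit (id INTEGER)")
# history → {"shard1": "succeeded", "shard2": "succeeded"}
```

The value of the managed service is everything around this loop: retries, scheduling, credential handling, and durable per-database history.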

For now, elastic database jobs are provided as a customer-hosted web solution with an accompanying SQL database. At the moment, PowerShell is also required to access the full functionality, as the Portal only exposes a small part of it.

Let’s go over the feature and the demo in the video, enjoy!


Discover more about our expertise in SQL Server.

Categories: DBA Blogs

SQL Server: Migrations/Upgrades – 10 Things to Think About

Wed, 2015-09-30 10:04

Here at Pythian we have planned and implemented dozens of migrations and upgrades for our customers. In this blog post I summarize the 10 most important things to consider when migrating or upgrading a SQL Server environment, based on that experience.
If you need to migrate or upgrade your SQL Server environment to a new server or environment, but you are afraid of long outages, losing data or degraded performance, here are the things to consider:

  1. To cloud or not to cloud: if you are considering moving to the cloud, you should research carefully the capabilities, constraints and limitations of the cloud solutions you are considering. For example, there are a few things you could do on a “regular” server with a SQL Server instance installed that you cannot implement with Amazon RDS or Azure SQL Database (DBaaS). I am not posting any links here because things are changing very fast in the cloud right now, so you need to make sure you are up to date.
  2. Physical to virtual: often, when moving from a physical box with locally attached disks to a VM environment with shared storage, SQL Server performance may degrade due to multiple potential factors. It is recommended to test the performance of the new environment prior to the Production cutover and compare it to the performance of the existing one. Following configuration best practices at all levels of the architecture is essential (see item 5 below).
  3. Minimal downtime during Production cutover: there are a few methods to copy an instance and its databases to the new environment without affecting the existing Production environment, and then cut over with minimal downtime.
    The options range from transferring backups on portable media, to setting up Database Mirroring or Log Shipping, to simply backing up and restoring databases.
    The solution depends on a few factors, such as:

    1. Current performance of your environment
    2. Database SLAs and how much downtime can be afforded
    3. Network bandwidth and capacity between existing and new environment (is there a connection between the environments at all?)
    4. Volume of data updates in your current environment
    5. Size of the databases
    6. Etc.
  4. Performance baseline comparison before and after: this is a step that is usually missed during migration and upgrade projects. Performance degradation when moving to a new environment is a common issue, but we need to understand exactly what the new bottlenecks are. For that reason, it’s important to record a performance baseline on the old environment before migrating and another after migration (not right away; let the instance “warm up” for a while, say a week). With the two baselines to compare, fixing a problem becomes much easier and faster because they indicate what to look for.
  5. Best Practices: Microsoft and other providers or partners provide lists of Best Practices. We need to ensure we follow Best Practices in all levels: Hardware, storage, VM, OS and SQL Server. Some examples of Best Practices:
    – VMWare: VMWare Best Practices
    – Storage: Disk partition alignment for SQL Server
    – SQL Server: Best Practices list
  6. Testing phase: before implementing the real cutover, it is strongly recommended that you perform a test migration without affecting the current Production environment, or as required. This step may take longer and add time to the schedule, but it will reduce or (hopefully) prevent issues during the cutover (e.g., missing components, or the application not being able to connect to SQL Server).
  7. Rollback and Plan B: when preparing for a migration or an upgrade of a critical Production environment, it’s always important to plan for a rollback in case something goes wrong. Even more important is to understand how much time a rollback would take, and the implications (technical and business wise).
  8. Upgrade plan: if you are planning to upgrade the SQL Server version or the application version at the same time as the migration, you should ensure ahead of time that all components can be upgraded, and decide whether any database should stay at an older compatibility level. The Upgrade Advisor can help you check things, but there is never 100% verification unless you test all parts of the application(s). Well, if there’s not a lot of code running, you may be able to trace everything and get 100% verification of the code, but this rarely happens.
  9. HA and DR planning: when migrating to a new environment, it is probably time to redesign the HA and DR plan in advance. Some of the native SQL Server HA/DR solutions: clustering, AlwaysOn Availability Groups (SQL 2012+), Database Mirroring (to be deprecated in future versions), Log Shipping, Replication and backups. There are also non-SQL Server solutions, such as VM or storage mirroring, as well as third-party and cloud solutions. You can combine some of the HA/DR solutions in order to meet higher SLAs as well.
  10. Refresh your monitoring abilities: since you are moving to a new environment, it's probably also time to refresh your monitoring tools. A good monitoring tool will notify you about the right issues at the right time, and preferably can also do pro-active monitoring and help you prevent outages. It is important to be able to monitor hardware and storage, network, Virtual Machine (if applicable), Operating System and SQL Server instance(s) so that you can always stay on top and understand in which layer there is a problem. You will need to make sure that someone gets the notifications and is able to connect in a timely manner, based on your system's SLAs, to fix issues.


Discover more about our expertise in SQL Server.

Categories: DBA Blogs

Import/Export Multiple SSIS Packages

Wed, 2015-09-30 09:39

While I was working on a migration project recently, I had the task of importing and then exporting multiple packages – about 120 – on to a new server. Usually, if there are only a few packages we can do this task manually, but in this case it was tedious and time consuming to import and then export them manually. Here's where the DTUTIL utility comes in handy; it has many options that you can use according to your need.

What I used is very simple and easy to use. I created two batch files using the commands below:

Exporting SSIS Packages to File:

dtutil /SQL ExportPackage /COPY FILE;C:\Packages\ExportPackage.dtsx

Importing SSIS Package to SQL:

dtutil /file C:\Packages\ImportPackage.dtsx /copy sql;ImportPackage
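The two batch files mentioned above presumably contained one such dtutil line per package. A minimal sketch of how those ~120 lines can be generated instead of typed by hand (the package names and paths here are placeholders, not the actual packages from the migration):

```shell
# Hypothetical sketch: generate a batch file containing one dtutil export
# command per package name listed in packages.txt.
printf '%s\n' Package1 Package2 Package3 > packages.txt
while read -r pkg; do
  printf 'dtutil /SQL %s /COPY FILE;C:\\Packages\\%s.dtsx\n' "$pkg" "$pkg"
done < packages.txt > export_all.bat
cat export_all.bat
```

In practice the list of package names can be pulled from `msdb.dbo.sysssispackages` rather than maintained by hand.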

Find out more details and a comprehensive list of options.


Discover more about our expertise in SQL Server.

Categories: DBA Blogs

Using Docker to Visualize MySQL Performance Schema

Wed, 2015-09-30 09:30

Last week, I was pleased to present at Percona Live Amsterdam 2015 regarding the importance of Performance Schema and how to begin approaching visualizing the data that is available to diagnose performance issues and fine tune your MySQL environment. You can download the slides from the Percona Live conference page.

The session highlighted using the ELK (Elasticsearch+Logstash+Kibana) stack to assist visualizing event data, in addition to graphite+grafana to visualize time-series data.

As many people who attend conferences are aware, the availability of internet access is sketchy at best. To combat this, the visualization stack that I created was strictly local utilizing Docker, specifically the Docker Toolbox.

The docker-compose.yml file that creates the stack is fairly straightforward:

mysql_data:
  container_name: mysql_data
  image: tianon/true
  volumes:
    - /var/lib/mysql
    - /var/lib/mysql-files

mysql:
  container_name: mysql
  environment:
    - MYSQL_DATABASE=world
  build: mysql
  volumes_from:
    - mysql_data
  volumes:
    - "./mysql/config:/etc/mysql/conf.d"
  expose:
    - "3306"
  ports:
    - "3306:3306"

sysbench:
  container_name: sysbench
  build: ./sysbench
  links:
    - "mysql"

elasticsearch_data:
  container_name: elasticsearch_data
  image: tianon/true
  volumes:
    - /usr/share/elasticsearch/data

elasticsearch:
  container_name: elasticsearch
  build: elasticsearch
  volumes_from:
    - elasticsearch_data
  ports:
    - "9200:9200"

graphite:
  container_name: graphite
  image: kamon/grafana_graphite
  ports:
    - "8000:80"
    - "8126:8126"

logstash:
  container_name: logstash
  build: logstash
  volumes:
    - "./logstash/scripts:/opt/logstash/scripts"
  links:
    - mysql
    - graphite
    - "elasticsearch:es"
  ports:
    - "9292:9292"

Since the session was about the upcoming changes to the Performance Schema in MySQL 5.7, I chose to use MySQL’s official 5.7 Docker image:

mysql_data:
  container_name: mysql_data
  image: tianon/true
  volumes:
    - /var/lib/mysql
    - /var/lib/mysql-files

mysql:
  container_name: mysql
  environment:
    - MYSQL_DATABASE=world
  build: mysql
  volumes_from:
    - mysql_data
  volumes:
    - "./mysql/config:/etc/mysql/conf.d"
  expose:
    - "3306"
  ports:
    - "3306:3306"

Pay attention to two aspects of this configuration.

First, I am creating a mysql_data container. Using a Data Volume Container is highly recommended when working with any data that should be persisted and not baked into the Docker image.

Second, I am building the container using the `build` command. Throughout the entire docker-compose.yml file, I am adding additional functionality to base containers provided by the Docker Hub. For the MySQL container, the Dockerfile looks like this:

FROM mysql:5.7

ADD world_innodb.sql /docker-entrypoint-initdb.d/world.sql

CMD ["mysqld"]

The rest of the docker-compose file follows similar guidelines, and I won’t explain each aspect here. You can find the Dockerfiles for each component of the stack in the dtest/visualize-mysql github repository.

To bring the stack up, the steps are straightforward:

  1. Install Docker Toolbox on your machine
  2. Create the docker-machine:
    $ docker-machine create -d virtualbox \
    --virtualbox-memory "2048" --virtualbox-cpu-count "2" \
    visualize
    $ eval $(docker-machine env visualize)
  3. Clone the dtest/visualize-mysql repository and change into the parent directory.
    $ git clone
    $ cd visualize-mysql
  4. Bring the environment up:
    $ docker-compose up -d

Verify that the containers are running, noting that the data volume containers should have exited with status 0:

$ docker-compose ps
       Name                     Command               State                            Ports
elasticsearch        /docker-entrypoint.sh -Des ...   Up>9200/tcp, 9300/tcp
elasticsearch_data   /true                            Exit 0
graphite             /usr/bin/supervisord             Up>80/tcp, 8125/udp,>8126/tcp
logstash             /app/bin/boot                    Up>9292/tcp
mysql                /docker-entrypoint.sh mysqld     Up>3306/tcp
mysql_data           /true                            Exit 0
sysbench             /etc/ -d            Up

You can then point your browser to the Kibana and Grafana dashboards. The Grafana dashboard requires login, and the default credentials are admin/admin.

The dashboards for the presentation are not automatically loaded, but you can find some basic dashboards in the dashboards directory of the repository. Unfortunately, I did not export them before discarding the machine, so I have lost some of the fine-tuning changes I had made. I do intend to recreate them and even try out the newer versions of the ELK stack available.

The goal here is to show how easy it is to set up a local environment using Docker and existing images on the Docker Hub to get started with your own projects.


Discover more about our expertise in MySQL.

Categories: DBA Blogs

Oracle 12c – Adaptive Query Optimization

Wed, 2015-09-30 07:51

Scenario: A user complains that a batch job that is critical for the month-end business processing is taking more time than expected. As a DBA you quickly check and identify the SQL statement that is running, compare the current execution plan with the previous plans, and come to the conclusion that it has picked up a wrong plan. Instead of a hash join, it is doing a nested loop, or it did not pick parallel threads and is instead doing a single-threaded full table scan.

At this juncture, you do not have control to change the execution plan unless you cancel the job, fix it and rerun it. A simple analogy is like getting stuck in traffic and cannot do anything about it, even though you know that there are better and faster alternative routes to reach your destination.

The above scenario was the case with 11g. Luckily, with 12c a new feature called Adaptive Query Optimization was introduced, which helps the optimizer adjust plans based on real-time data.

12c Optimizer: With Oracle 11g, the optimizer decides on an optimal execution plan for a query based on the conditions in the query, the statistical information of the underlying objects, and the initialization parameters that influence the optimizer. With 12c, a new adaptive approach to query optimization was introduced that adjusts execution plans based on information collected during run time. This new approach is extremely helpful when the existing statistics are not sufficient to generate an optimal plan. There are two aspects to Adaptive Query Optimization:

  1. Improving the initial execution of a query during runtime.
  2. Adaptive statistics, that provide additional information for subsequent executions.

I will be focusing on the first aspect in this article.

The two features the optimizer adjusts based on real-time data are:

Hash Join vs Nested Loops – The optimizer may pick a nested loop for a join operation based on the existing table statistics, but during run time, if it realizes that more data is being processed, it will switch to a hash join. This adaptive optimizer feature not only changes the plan at run time, but also stores the adaptive statistics in the database. This additional information will be useful for future executions.

Parallel Distribution – Hybrid Hash – When a SQL statement is executed in parallel mode, the optimizer decides whether to perform certain operations like sorts, joins and aggregations in parallel or as a single-threaded operation based on the statistics. With the new adaptive feature, the optimizer defers this decision until run time and, based on the run-time data, decides whether to pick parallel mode or not.

Good resources about this new feature can be found here and here.

Check some of our other Oracle 12c blogs


Discover more about our Oracle expertise. 

Categories: DBA Blogs

Attending Tech Conferences

Fri, 2015-09-25 11:04


Now that I am in management, and have been for some time, I have still been attending tech shows and presenting at them a few times a year. When I came back to Pythian in February 2011, we had a handful of personnel doing sessions at the shows. Since then the number of people submitting abstracts and appearing at these shows has skyrocketed. I cannot take the credit single-handedly for that growth, but I am pleased that I have been a catalyst getting more Pythianites on the global stage where they should be. Bravo to all the new techs appearing at the shows.

I just attended the EastCoastOracle show in Raleigh/Durham USA and sat in on a steady stream of sessions and found the cloud material especially intriguing. Simon Pane, Roopesh Ramklass, Subhajit Das Chaudhuri, and Gleb Otochkin shone and had substance and pizazz in their sessions. Pythian should be proud.

The Pythian name was all over the show and we were tied with another company for most sessions. There was avid interest in what we do and I did get some “You work for Pythian!” comments from attendees. Our name is out there and getting “outer” every show.

Every show I attend I am thankful for the opportunity and Pythian’s support. It sweetens the deal when I look back at the number of presenters we send now as compared to 5 years ago. I value and I’m very honoured that so many colleagues have reached out to me over the years to get over that last remaining hurdle to entering the conference presentation life …


Discover more about our expertise in Cloud and Oracle.

Categories: DBA Blogs

My Sales Journey: #4

Fri, 2015-09-25 10:53


Let me start today with some great news! This week I got my first meeting… Woohoo! It happened in totally unexpected ways. It happened on Twitter. Of the dozens of calls and emails that I had been sending out, it happened with a quick, to-the-point tweet. It made me take a step back and look at what I was doing differently.

Sales is about reaching people with a message and hoping they get interested enough to talk to you. You have to reach people where they live. Today, people live on many different hubs. We live on Twitter, Facebook, Hootsuite, Snapchat, email, LinkedIn or in an office. People also do these things depending on where they are geographically. New Yorkers are more likely to hang out on social media channels than Floridians, who may love phone calls and a good chat. Reach your people where they live.

Sales is also a very creative channel. It forces you to think out of the box at all times. To be successful you must not only be creative with your messaging, you also have to be creative about how you will deliver that message. Make your message brief, relevant and fun.

I am an entrepreneur first and foremost, which makes me naturally curious about business. Reaching out to all these people is all about getting to know their business. I am genuinely interested in knowing who they are and how their business works. It is about meeting cool folks doing cool things. Be genuine. Business will follow.

It's been a good week! Do you have a story or two to share about where you meet your people? Have you had success over social media? If so, please share. I want to be inspired!



Categories: DBA Blogs

Log Buffer #442: A Carnival of the Vanities for DBAs

Fri, 2015-09-25 07:24

This Log Buffer Edition collects and then showers around some of the information-rich blog posts from Oracle, SQL Server and MySQL.

Oracle:
  • Generic Java Server for Static content and Directory Listing using API mode or command line mode.
  • OEM agents on each host upload data to your management servers every few minutes.
  • exitcommit … or your career down the drain.
  • Encryption is the Easy Part; Managing those Keys is Difficult.
  • Managing the OBIEE BI Server Cache from ODI 12c.

SQL Server:

  • Email addresses are very prevalent in IT systems and often used as a natural primary key. The repetitive storage of email addresses in multiple tables is a bad design choice. Following is a design pattern to store email addresses in a single table and to retrieve them efficiently.
  • OpenStack: The Good and Not-So-Good Bits.
  • By defining server- and database-level audits, you can record just about any kind of event that occurs in SQL Server, which can be an invaluable source of security troubleshooting and forensic information when security breaches occur.
  • Koen Verbeeck shows how to easily extract metadata from files in your directories with Power Query.
  • Implementing John Conway’s Game of Life in Microsoft SQL Server.

MySQL:
  • Oracle HA, DR, data warehouse loading, and license reduction through edge apps.
  • Easy Load-balancing and High-availability using MySQL Router.
  • When hosting data on Amazon turns bloodsport.
  • MySQL 5.7 Labs — Using Loopback Fast Path With Windows 8/2012.
  • How to evaluate if MySQL table can be recovered.


Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

Categories: DBA Blogs

Meet Pythian at AWS re:Invent 2015

Thu, 2015-09-24 12:21


We’re pumped to be sponsoring our first AWS re:Invent show on October 6-9 at The Venetian in Las Vegas!

As an AWS Advanced Consulting Partner with AWS accredited experts on our team, we’ve helped our clients design, architect, build, migrate, and manage their workloads and applications on AWS.

If you’re going to be at the show next week, stop by our booth (#1255) or book a one-on-one session with one of our leading AWS technical experts to get answers to some of your biggest cloud challenges. From strategy development and workload assessment to tool selection and advanced cloud capabilities, we can help you at any stage of your cloud journey.

Don’t miss out! Schedule a one-on-one session with one of our technical experts:

Alex Gorbachev – CTO, Oracle ACE Director, Cloudera Champion of Big Data, specializes in cloud strategy and architecture, data and big data in the cloud, securing data in the cloud, and IoT strategy.

Aaron Lee – VP Transformation Services, specializes in cloud strategy and transformation solutions, DevOps, big data, advanced analytics, and application platforms and migrations to AWS

Dennis Walker – Director, Engineering, specializes in cloud solution architecture and DevOps solutions for AWS.

Categories: DBA Blogs

Creating a Test Lab Using VirtualBox / NAT networking

Tue, 2015-09-22 12:37

My job is almost completely reliant upon my ability to perform work in lab of virtual machines. Almost every action plan I write is tested locally. When I need to troubleshoot an issue for a client one of the most common first steps I’ll perform is attempting to recreate the issue in a virtual environment so I can work on it there without the risk of impacting client data.

I believe that having a place to test and grow your skills is an absolute necessity for anyone working in the IT field today regardless of your specialization, even if you’re an IT generalist. But every now and then I hear about individuals who have issues with their virtual machines or with the virtual environment provided by their employer, so I figured this was a good time to share my method of creating a virtual lab. More specifically, one that allows you to do virtual work on a commodity laptop, one that won’t break down if you lose connectivity, one that won’t be interfered with if you connect to a restrictive VPN.

We’re going to build 2 CentOS 6 virtual machines on a virtual NAT network using Oracle VirtualBox. Before you begin, all you’re going to need is the installer for VirtualBox for your host machine and an ISO of CentOS 6 (64 bit).

The first thing we’re going to do once VirtualBox is installed is set up a NAT network.  The most common application of NAT networking is likely the usage of home broadband internet ‘routers’. These take the single IP you get from your ISP and allow multiple machines to interface with it.

We’re going to use this concept to build a NAT network in your virtual environment that your virtual machines will connect to. By internalizing the network structure I will be able to interact with my virtual machines and allow my virtual machines to interact with one another regardless of whether my PC is connected to a network or not. This can be really handy if you’re having a networking problem, if you’re traveling, or if you’re inheriting a restrictive networking policy from a client’s soft VPN connection, etc. Regardless of what happens, if you can turn on your machine you can work in your lab of virtual machines.


For this tutorial I’ve installed VirtualBox version 5.0.4 on my local machine. A Nat Network is already set up by default, but you can verify this by going to File -> Preferences -> Network (on the left menu bar). On the Nat Networks tab you will likely see a network called ‘LocalNat’. You can click on the edit button (the little screwdriver icon) to see the network configuration. If you don’t have a Nat Network by default, create one by hitting the icon that looks like a network card with a ‘+’ symbol over it, then select it and click the edit button.


You will see the following options in the Nat Network Details window:

(Checkbox) Enable Network: Make sure this is checked.

Network Name: I’ve decided to stay with the name ‘LocalNat’.

Network CIDR: This is the IP/netmask of the Nat Network. For this tutorial I’ve chosen to stay with the default option of For those of you unfamiliar with slash subnet notation, /24 translates to a netmask of, or rather a class ‘C’ network. This means all devices (or in this case, virtual machines) that connect to this network must have an IP starting with 10.0.2.XX, but the XX can be anything you want it to be above the number 3. The reason for this is that the first IP ( is reserved for host resolution and the second IP ( is reserved for the network gateway.

(Checkbox) Support DHCP: Leave this unchecked. We will be assigning static IPs.

  (Checkbox) Support IPv6: Leave this unchecked. We only need IPv4.


There will also be a button for port forwarding. We will come back to that later after our first virtual machine has been created. Keep clicking OK until you’re back to VM VirtualBox Manager, as any networking configuration changes you have made will not be applied until you have done so.



The network is set up and we’re ready to create our first virtual machine! Click on New and this will take you to the ‘Create Virtual Machine’ window. The first option will be to name your virtual machine. In this case I am going to name my first virtual machine ‘CentVM1’ and will use type ‘Linux’ and version ‘Red Hat (64-bit)’.


NOTE: Often I hear about people having issues with the 64-bit options not being available in the version menu. If you only see 32 bit versions, learn how to enable the 64 bit options.

The next option will be to set the amount of memory that your virtual machine will use. Bear in mind that the amount of virtual memory you set will be used on your local machine whenever the virtual machine is turned on. For this tutorial I have set the memory to 1024MB (1G), meaning whenever the virtual machine is on, 1G of memory will be used on the local machine to support it.


The next step will give you the option to create a virtual hard disk for your virtual machine. Select ‘create a virtual hard disk now’ and click ‘create’.


The next option will allow you to select a hard disk type. In this example I will use a VDI (VirtualBox Disk Image).


The next option will allow you to choose whether you want the virtual hard disk to be dynamically allocated or a fixed size.

Dynamically allocated means that the file on your machine that represents the virtual hard disk will start at a small size, but will grow as needed until it meets the cap designated by the size of the drive. For example, if you specify for the drive to be 20G in size it will appear as 20G in the virtual machine, but the file that represents this on your host machine will be much smaller until the disk space is actually used. The advantage to this is that you save space initially until you actually use the hard disk space in the virtual machine. The downside to this is if you need to use more space, you may incur I/O overhead to allow your host machine to expand the size of the file on demand. The file can also easily become fragmented and distributed on the physical drive platter.

Fixed size is just what it sounds like. If you specify that the virtual machine is going to have a 20G hard disk, a 20G file is going to be created on the host machine and will likely not experience any expansion beyond that. The downside is that the file that represents the disk on your host machine will be 20G regardless of how much of the disk is actually being used. The upside is that you don’t have to worry about losing track of how much your virtual machine disk files may expand.

In this case I find dynamic allocation to be acceptable as I’m only going to use this VM for testing and development. However, you can use whichever you feel comfortable with using.


The next option allows you to set the size of the virtual hard disk. For a MySQL lab virtual machine I would recommend no less than 10G. In this case, I’m going to use 20G in case I need the extra space.


After clicking ‘create’ you will be brought back to the VM VirtualBox Manager. You will see the VM you just created, but we can’t start it up just yet! We need to attach it to the Nat Network we set up earlier and we need to load the CentOS 6 ISO. Right click on your new virtual machine (CentVM1) and click Settings.


In the settings window for your virtual machine, click ‘Storage’ on the left menu. In the storage tree section, click the ’empty’ CD-ROM designation under the IDE controller; then, to the right in the attributes section, click on the icon that looks like a CD next to the optical drive drop-down menu and click ‘Choose Virtual Optical Disk File’.


Browse to and select the CentOS 6 iso. When you return to the Virtual machines setting window you will see that the iso has been loaded into the virtual optical drive on the IDE controller.


Next, on the left menu section, click ‘Network’. Here you will see that you have a network adapter that is enabled and is attached to NAT by default. Use the ‘Attached to’ drop down menu and select ‘Nat Network’, then on the ‘Name’ drop down menu, select the Nat Network we created earlier (LocalNat). This will attach this network controller to your Nat Network.


Click OK to go back to the VM VirtualBox Manager. We are now ready to start up the virtual machine! I know we said earlier we were going to create 2 virtual machines, but we’ll take care of the second one later using cloning. For now, select your first virtual machine (CentVM1) and click start.



Install OS / Configure networking

You should install CentOS as you normally would in any other circumstance. However, I would suggest that you consider a minimal installation when given the option of what installation you would like to use. You don’t know what you will or won’t have available to you on a client system, so I strongly prefer to have a virtual machine where I can’t assume anything and will need to install packages on an as needed basis. This will allow you to troubleshoot and be ready for issues you may face on a client system.


The only other thing to note is that during the installation I selected to use the hostname cent1.localhost.

Once the installation is complete, we are going to have to use the VirtualBox virtual machine terminal until we have connectivity established. We can start by setting up networking. You’ll notice that if you run ‘ifconfig’ there aren’t any network interfaces enabled beyond the virtual machine’s loopback ( Let’s enable the controller by editing its config file. You can edit the file with vi by using the following command:

> vi /etc/sysconfig/network-scripts/ifcfg-eth0

You will want the file to contain the following:

DEVICE="eth0" #this is the default, do not change
HWADDR="08:00:27:F6:B3:84" #this is going to be the MAC address of the VM's network interface, do not change
NM_CONTROLLED="yes" #this is the default, do not change
ONBOOT="yes" #set to yes. This is what sets the network interface to start at boot
BOOTPROTO="none" #Since we are using a static IP, we don't want any boot networking protocol like dhcp or bootp to run
IPADDR= #The IP address of your virtual machine. I typically start my first machine with 6 and then move up from there. This is a matter of personal preference
NETMASK= #This netmask matches the /24 subnet notation we set for the Nat Network earlier
GATEWAY= #As stated earlier, VirtualBox reserves the .2 IP in the subnet for the gateway
DNS1= #The DNS server you want to use. Personal preference (Google's public DNS is shown here as an example)


Once you’re done modifying the configuration file for adapter eth0, you will want to restart networking with the following command:

> /etc/init.d/network restart

You should now see your static IP when running ifconfig and should be able to ping out.





At this point the machine is able to perform its basic functions. Note that you can’t SSH to the server yet, we’ll get to that. This is where you will want to put any finishing touches on the virtual machine and create your first snapshot. Before creating my first snapshot I will typically take the following steps.

  • Stop iptables and remove it entirely from chkconfig so it doesn’t start at boot. This is the Linux software firewall and hasn’t been needed for any work I’ve ever had to do in a lab. This is typically one of the big things that throws people off when they’re trying to establish inbound connectivity for the first time, so I recommend disabling it.
  • Disable SELinux.
  • Update the OS by running yum update.

Note that I did not install MySQL. The reason for this is I want a snapshot of just the operating system load and nothing else. I utilize snapshots a lot to quickly spin up different types of working environments, and I find it’s easier to restore an OS load to create a new environment than to deal with uninstalling and reinstalling packages.

For example, if I have an Oracle MySQL 5.6 VM installed and I want to switch to Percona 5.6, I would stop the VM, take a snapshot of my Oracle load for future reference, restore the OS Load snapshot, start the VM, and now I have a freshly installed CentOS VM where I can install Percona 5.6 and then create a new snapshot for it. This way if I ever need to use Oracle or Percona I can just load the appropriate snapshot and start the VM. No need to have more than 1 VM unless you’re emulating a replication or clustering environment.

So now that we have our first VM configured, let’s shut it down and snapshot it. Shut down your VM using the following command:

> shutdown -h now

Once shut down, go back to the VM VirtualBox Manager. Click on your VM and then click on snapshots.


Right now the only thing that you should see is the current state. No other states have been saved at this point, so create a new snapshot by clicking the menu icon that looks like a camera. This will take you to the option to create a snapshot of your virtual machine. When naming your snapshot and creating the snapshot description BE DESCRIPTIVE. You are going to want to know what is the state of the snapshot without having to load it. Once done, click ok and you will see that the new snapshot is created.




Setting up port forwarding

The last thing you need to do for this VM is make it accessible. You’ll notice that if you try to SSH to the machine at, you’re not going to get very far. The reason for this is that the IP is sitting behind a NAT network, just like your machine does if you are using a router between yourself and your ISP. If your local machine on your local network is, say,, no one outside your network is going to be able to ping your machine from the internet; they would have to connect to you using your public-facing IP address with a port that you have designated to forward from your router to your local machine. Remember how we mentioned NAT Network port forwarding earlier? We’re going to configure that now.

Click on File -> Preferences -> Network (on the left menu bar), select your NAT network (LocalNat) and click the screwdriver icon to the right to edit like we did before when confirming the networking settings for the NAT network. This will bring you back to the NAT networking details window, click on the ‘port forwarding’ button.

You’ll need to setup a rule to forward a port on your local machine to the port of your virtual machine. In this example we’re going to setup an SSH port forward. We’ll use the following options…

Name: CentVM1SSH

Protocol: TCP

Host IP: <blank>

Host Port: 61022

Guest IP:

Guest Port: 22


What we have done is created a port forwarding rule called ‘CentVM1SSH’ which states that any inbound connectivity on port 61022 on your local machine should be forwarded to port 22 on the virtual machine we just created. Now we should be able to SSH from our local machine to our VM using Don’t forget to turn your virtual machine back on before you try this! Also, be sure that you have clicked OK on all preferences windows and are back at the VirtualBox VM Manager before attempting, as new networking port forward rules will not be applied until you have done so.
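If you prefer scripting over clicking, the same rule can be created with VBoxManage's `natnetwork modify` subcommand. The sketch below only echoes the commands (so it runs anywhere, even without VirtualBox installed); the rule name, ports, and guest IP match the values used above:

```shell
# Dry-run sketch: CLI equivalent of the GUI port-forwarding rule above.
# Commands are echoed, not executed; drop the echos on a machine with VirtualBox.
fwd_rule='CentVM1SSH:tcp:[]:61022:[]:22'
echo "VBoxManage natnetwork modify --netname LocalNat --port-forward-4 \"$fwd_rule\""
echo 'ssh -p 61022 root@'
```

The rule format is `name:protocol:[host IP]:host port:[guest IP]:guest port`; leaving the host IP empty binds the forward on all host interfaces.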



Cool! As you can see from the example images above we can SSH directly to our VM using the ports we have forwarded.

Having 1 virtual machine is nice, but it’s not going to be enough. How often do you see single server solutions? We need another! However, I don’t want to go through all that configuration again, so let’s just clone this virtual machine. Make sure your first virtual machine is shut down and then follow these steps.


Cloning your virtual machine

In the VM VirtualBox Manager, right click on your first virtual machine and click on clone.


This will bring up the ‘Clone Virtual Machine’ window. Enter the name for your second virtual machine (CentVM2), MAKE SURE ‘Reinitialize the MAC address of all network cards’ IS CHECKED and then click next. If you do not reinitialize the MAC address, it means that the network card on your new virtual machine will have the same MAC address as the first one. This is bad.


The next options will be to either create a full clone or a linked clone. A linked clone is like a snapshot. I prefer to go with full clones so long as I have the disk space to support them.


The next option will ask if you want a clone of the machine in just its current state, or everything including the existing snapshots. I would suggest cloning the machine in its current state. We are going to have to update some networking settings on the new virtual machine and create a new OS load snapshot for it later.


After you click on the clone button, you’ll see a progress bar come up. On average, the cloning process for a small VM with just an OS load takes about 5 minutes. Once it’s done, you will see that you have 2 virtual machines at your disposal. However, there are going to be some issues with the second virtual machine that we will need to fix. Specifically, it has the hostname, IP, and MAC address of the first virtual machine in its configuration files. Let’s fix that!


First, we need to know the MAC address of the new virtual machine. We can get it by right-clicking on the second virtual machine, clicking on Settings, and then selecting Network from the menu on the left. Drop down the advanced options and note the MAC address.


Go back to the VM VirtualBox Manager and start up your second virtual machine.

You will want to edit the following files: /etc/sysconfig/network, /etc/sysconfig/network-scripts/ifcfg-eth0, and /etc/udev/rules.d/70-persistent-net.rules.


Change the ‘HOSTNAME’ entry to whatever you would like your new virtual machine host name to be.


Change the ‘HWADDR’ entry to the MAC address that was reinitialized for this virtual machine. This is the MAC address you noted earlier.

Change the ‘IPADDR’ entry to the IP address you would like for this machine.


Delete everything that comes below the comments at the top of the file. This file is used to store information to tie network adapters to their MAC addresses and will be repopulated on the next reboot.

Once this is done, reboot your virtual machine. The following output shows what the contents looked like on my second virtual machine after it was rebooted.

[root@cent2 ~]# cat /etc/sysconfig/network
[root@cent2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
[root@cent2 ~]# cat /etc/udev/rules.d/70-persistent-net.rules
# This file was automatically generated by the /lib/udev/write_net_rules
# program, run by the persistent-net-generator.rules rules file.
# You can modify it, as long as you keep each rule on a single
# line, and change only the value of the NAME= key. #Delete below this line before reboot
# PCI device 0x8086:0x100e (e1000) (custom name provided by external tool)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:7e:a7:a5", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
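For reference, here is a sketch of what the first two files might look like after editing; the hostname, MAC address, and IP address below are illustrative values, not ones to copy verbatim:

```
# /etc/sysconfig/network (illustrative)
NETWORKING=yes
HOSTNAME=cent2

# /etc/sysconfig/network-scripts/ifcfg-eth0 (illustrative)
DEVICE=eth0
HWADDR=08:00:27:7e:a7:a5
BOOTPROTO=static
IPADDR=192.168.56.102
NETMASK=255.255.255.0
ONBOOT=yes
```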

At this point you’re good to go. Shut down your second virtual machine. Take an OS load snapshot of it and you’re all set. You can power on both virtual machines and they should be able to communicate with one another with no additional configuration. Don’t forget to set up another port forward so you can SSH directly to your second virtual machine, and additional ports as needed to connect to services like MySQL.

This virtual machine setup is as compact as you would like it to be while remaining effective, and it will be available to you for as long as you can power your machine on. Enjoy! Oh, and also, everything written in this tutorial was tested using virtual machines.

Good day!


Discover more about Pythian and our technical expertise.

Categories: DBA Blogs

SQL On The Edge #2 – SQL 2016 Temporal Tables

Tue, 2015-09-22 12:31

Hello and welcome to our second episode of SQL On The Edge! On this episode we’re going to focus on a new feature of SQL Server 2016 called Temporal Tables.

What’s a Temporal table?

Temporal tables, also referred to as “system-versioned” tables, allow us to go back in time and see what the data looked like at some point in the past. This has several different uses, some of which are:

a) Auditing
b) Quickly fix mistakes
c) Data trending

A temporal table is implemented as two different tables by SQL Server and they’re displayed transparently as one table to the user. Each one of these tables also has two datetime2 columns to track the START of the validity of a record and the END of the validity. These tables are referred to as the current and the history table. Partitioning is supported for both the current and the history table to make it easier to work with large amounts of archived data.
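To make that concrete, here is a minimal sketch of the DDL and a point-in-time query; the table and column names are my own, and the syntax follows the SQL Server 2016 documentation:

```sql
-- Current table: a primary key, two period columns, and system versioning on.
CREATE TABLE dbo.Employee (
    EmployeeId   INT PRIMARY KEY,
    Salary       MONEY,
    SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime   DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));

-- See the data as it looked at a point in the past.
SELECT * FROM dbo.Employee
FOR SYSTEM_TIME AS OF '2015-09-01T00:00:00';
```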


Enabling support for a table to be temporal comes with some limitations, these are the main ones:
– A PK is required on the current table.
– The table can’t be TRUNCATED.
– No support for FILETABLE or FILESTREAM.

There are more; for the full list, refer to the documentation.

Creating and Querying

For examples of creating and querying a temporal table, let’s jump to the video. Enjoy!


Discover more about our expertise in SQL Server.

Categories: DBA Blogs

Windows Containers: Installing SQL Server

Mon, 2015-09-21 14:28

This blog post is a quick introduction to Containers in the Windows world, and a walk-through on installing SQL Server in a Windows Container.


As many of you have heard, Microsoft is jumping into containers with native support for Docker containers in Windows Server 2016. Containers are the current big thing in virtualization, Linux, and DevOps because they are very lightweight and allow you to quickly create a new environment without having to wait for a VM to be deployed and provisioned by the server team. I expect them to be just as useful and ubiquitous in the Windows world very soon.

Hypervisors are based on emulating hardware, so they can be very resource-intensive. At a minimum, each VM has to have an entire operating system, CPU, RAM, and some drives assigned before it’s usable, and that’s often overkill for a VM running a single application. Containers, by contrast, virtualize only the OS level and share the kernel libraries between them, and you don’t need to worry about the rest. Containers are small and lightweight enough that you can expect to run 4 to 6 times as many containers as VMs on one host.

This MSDN Blog Post goes into detail on containers and their differences from VMs.

It’s important to note that containers are meant to run a single application and do not have GUI interfaces. So, everything must be run via the command line or a remote connection.

Why use containers for SQL Server?
  1. You need to quickly create a set of SQL Server instances for development or testing.
  2. Your company runs a Software-as-a-Service and wants to separate clients into different environments while squeezing everything they can from their hardware.
  3. You want to be able to share development environments without everyone getting in each other’s way.
  4. Your VM or Server team just isn’t very good, and they take forever to get you what you need.
Installing SQL Server in a Windows Container

The following is a walk-through for installing SQL Server in a Windows Container. You might want to reference the Docker documentation for more details on the commands I use.

When you’re done with the tutorial, try to get multiple containers and their instances of SQL Server running on the same box.

Step 1: Create a New Server

The server should be running Windows Server 2016 Technical Preview 3 (or higher) Core with the Container role enabled.

I used Azure’s “Windows Server Container Preview” VM for this tutorial, which luckily has the Host OS all setup for me. Find out more details on setting up Windows to run containers.

* A quick note for anyone who hasn’t used Windows Server Core before: Open Task Manager and use File–Run New Task to get new CMD windows.

At this point, you should also create a new directory structure in your VM:


Creating an Azure VM


Step 2: Configure Azure Security Rules

If you’re using Azure, you need to define the Inbound Security Rule for this port. To get there in Azure:
From the VM’s main blade click: All Settings — Network Interfaces — [Interface Name] — Network Security Group — All Settings — Inbound Security Rules.

The default rule to allow RDP traffic will be there. Create another rule to allow SQL Server traffic. For reasons I don’t understand, setting the port to 1433 here doesn’t work. You need to open it up, and hope your firewall is up to date.

Creating an Inbound Security Rule


Step 3: Configure Windows Firewall

Run the following in PowerShell on your host to open the right port. I’ll be using the default port 1433:

if (!(Get-NetFirewallRule | where {$_.Name -eq "SQL Server 1433"})) {
    New-NetFirewallRule -Name "SQL Server 1433" -DisplayName "SQL Server 1433" -Protocol tcp -LocalPort 1433 -Action Allow -Enabled True
}


Step 4: Enable .Net 3.5 Framework

This is a hassle. The base Windows image that Microsoft provides does not have the .Net 3.5 Framework enabled, so you need to enable it in the container, which should be easy enough, and we’ll get to that. Unfortunately, for reasons that I do not understand, when attempting to install .Net 3.5 in the container, it doesn’t use Windows Update and fails. If you have a Windows .iso file (which I don’t), you can theoretically point the DISM command from Step 11 at it from within the container, and it should work.

The “fix” is to enable .Net 3.5 on the host, export the registry keys, and then import them into the Container’s registry. This tricks the SQL Server installer into thinking you have it enabled. Does SQL Server 2016 need anything in the .Net 3.5 SP1 Framework? Probably!

Seriously, I spent hours banging my head against this thing, and if you can figure out how to get out to Windows Update from your container, please let me know.

Enable and upgrade the .Net 3.5 Framework on the host server by running the following commands in PowerShell. You don’t need to do this if you have a Windows .iso file, because we’ll be installing it in the container later.

get-windowsfeature -name NET-Framework-Features | install-windowsfeature
get-windowsfeature -name NET-Framework-Core | install-windowsfeature


Using regedit, export the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5 registry keys and save them as C:\mssql\install\registrykeys\registry.reg
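If you’d rather stay on the command line than use the regedit GUI, reg.exe can do the same export. The snippet below just assembles the command; run the printed line in an elevated prompt on the host (the key and output path are the ones used in this walkthrough):

```shell
# Key created by enabling .Net 3.5 on the host; the output path is the one
# the dockerfile in this walkthrough expects.
KEY='HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5'
OUT='C:\mssql\install\registrykeys\registry.reg'
CMD="reg export \"${KEY}\" \"${OUT}\" /y"
echo "$CMD"    # run the printed command in an elevated prompt on the host
```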


Step 5: Download and Extract SQL Server Installation Files

Run the following commands in PowerShell on your host to download the SQL Server installers. Change the URLs as needed…

wget -uri '' -outfile 'SQLServer2016-x64-ENU.exe'
wget -uri '' -outfile ''

Run the executable and save the extracted files at C:\mssql\install. Afterwards, you should delete or move the .exe and .box files.

Step 6: Create SQL Server Configuration File

As mentioned earlier, containers don’t allow any GUI interfaces, so SQL Server has to be installed silently. In addition, not everything in SQL Server is supported on Windows Server Core.

I used this configuration file. If you use the same one, make sure you change the password (search for CHANGEME).

Please put your configuration file at C:\mssql\install\configurationfile.ini.
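If you would rather build the file yourself, a silent install needs surprisingly few settings. The sketch below is illustrative, not the exact file used here; verify each option against the SQL Server unattended-install documentation before relying on it:

```
; Illustrative minimal configurationfile.ini for a silent install
[OPTIONS]
ACTION="Install"
QUIET="True"
FEATURES=SQLENGINE
INSTANCENAME="MSSQLSERVER"
SECURITYMODE="SQL"
SAPWD="CHANGEME"
SQLSYSADMINACCOUNTS="BUILTIN\Administrators"
TCPENABLED="1"
```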


Step 7: Create your dockerfile

Docker uses the dockerfile as a configuration file and to ensure images are built exactly the same every time.

Take the below code and save it in a text file as c:\mssql\dockerfile

The lack of an extension is on purpose. This isn’t a new folder. If Windows insists on saving the file with a .txt extension, which happened to me a couple of times, use the PowerShell Rename-Item cmdlet and remove the extension.

#Define the base image we'll be building everything else off of...
FROM windowsservercore

#Give it a label
LABEL Description="SQL Server" Vendor="Microsoft" Version="13.00.500.53"

#These files and folders will be imported into the docker image and be available for us to use.
ADD install/SQLServer2016-x64-ENU /mssql/install
ADD install/configurationfile.ini /mssql/install/configurationfile.ini
ADD install/registrykeys/registry.reg /mssql/registrykeys/registry.reg


Step 8: Build Docker Image

At this point, everything you need to install SQL Server should be staged somewhere underneath the c:\mssql directory and you should have a reference to each file or the folder in your dockerfile.

To build the docker image, run this:

docker build -t mssql2016 c:\mssql

This command tells docker to build an image with a name of mssql2016, and that the dockerfile is located in the c:\mssql folder.

While it’s running, open up Task Manager and watch how much CPU & RAM it uses. Also, get some coffee and check your email. You’ve got time.


Step 9: Verify

After the build completes, run the below command to see all of the images you have available. It should be a magical rainbow of three choices.

docker images


Step 10: Run your image

This command will run your image

docker run -it --name mssqlContainer -p 1433:1433 mssql2016 cmd

Let’s walk through each of these parameters:

  • it | Runs an interactive pseudo-terminal.
  • name | The name of your running container.
  • p 1433:1433 | This binds port 1433 on the host to port 1433 on the container process.
    • In other words, any traffic coming into the host on port 1433 will be forwarded to the container’s port 1433.
  • mssql2016 | The name of the image you want to run.
  • cmd | The utility that you’ll be running.


Step 11: Enable .Net 3.5 Framework

As mentioned back in Step 4, this is a hassle.

We need to import the registry keys into the Container’s registry to trick the SQL Server installer into thinking we have .Net 3.5 SP1 installed. IN ADDITION, we need to enable as much as possible of the actual .Net 3.5 framework so it’s at least sort of usable. So, run the following commands to enable .Net 3.5 and import the registry keys.

DISM /online /enable-feature /featurename:NetFx3ServerFeatures
reg import C:\mssql\registrykeys\registry.reg


Step 12: Install SQL Server in a Windows Container

Navigate to C:\mssql\install and run the below command to install SQL Server using the values setup in your configuration file.

setup /IAcceptSQLServerLicenseTerms /ConfigurationFile=configurationfile.ini


Step 13: Fix the installation

At this point, the SQL Server instance should be up and running. Unfortunately, there’s a good chance the next time you start the container that the instance will not come up.

Here’s a blog post talking all about what happens. It appears to be due to how the container shuts down the underlying processes (or doesn’t).

The quick fix is to go against every best practice document and run SQL Server under LocalSystem.

sc config MSSQLSERVER obj= LocalSystem


Step 14: Connect to SQL Server

As a test of the instance, you can use OSQL from the command line to verify it’s up and running.
C:\Program Files\Microsoft SQL Server\130\Tools\Binn>osql -E

From your local machine, connect to the SQL Server instance. You should use your host server’s IP address (the Public IP address in Azure).
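If you have sqlcmd installed locally, the connection test might look like the following; the IP address is a placeholder for your host’s public address, and the password is whatever you set in place of CHANGEME:

```
sqlcmd -S <host-public-ip>,1433 -U sa -P <your-sa-password> -Q "SELECT @@VERSION"
```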

Congratulations! (but you’re not done yet)


Step 15: Save your work

Boy, it sure would be a shame if something happened to that nice, new SQL Server installation you’ve got.

Once you’re done playing with the instance, head back to the command window with access to your container. I recommend attempting to cleanly stop the container. From another command window on the host, run:

docker ps
## The output from docker ps will give you a ContainerID. Use it in the stop command.
docker stop [ContainerID]


Alternatively, just type exit as many times as necessary to get back to the host’s command line, and the container will shut down very poorly.

Type this to commit the new image to your repository:
docker commit [ContainerID] mssql2016:Installed

This will save your container locally and give it a new tag of Installed. This will also take some time.

To start it again, use:
docker run -it --name mssqlcontainer mssql2016:Installed cmd


Discover more about our expertise in SQL Server.

Categories: DBA Blogs

Log Buffer #441: A Carnival of the Vanities for DBAs

Fri, 2015-09-18 13:38

This Log Buffer Edition dives deep into the ocean of blogosphere and surfaces with some cool blog posts from Oracle, SQL Server and MySQL.


Oracle:

  • Lets Talk DB Perth
  • The Fundamental Challenge of Computer System Performance
  • Index Advanced Compression: Multi-Column Index
  • Middleware Diagnostics Advisor (MDA) Setup
  • Most people are now aware that in 12c, a VARCHAR2 has been extended from a maximum size of 4000 to 32767 bytes.

SQL Server:

  • Creating SQL databases on Azure Blob storage
  • Monitoring Availability Group Replica Synchronization
  • PowerShell Tool Time: Controlling Our Tools
  • Automated Patching of SQL Server IaaS VMs
  • SQLCMD Mode; Run all SQL files in a directory


MySQL:

  • Clarification on “Call me Maybe: MariaDB Galera Cluster”
  • MySQL Aurora CPU spikes
  • Oracle HA, DR, data warehouse loading, and license reduction through edge apps
  • Protecting MySQL Passwords With the sha256_password Plugin
  • Ever get called out for a MySQL issue only to realize that there was no issue?  It was a false alarm from the monitor.
Categories: DBA Blogs

My Sales Journey: #3

Fri, 2015-09-18 13:26


This week started off with the last onboarding session, with the CEO himself. It was my favorite for many reasons. It was passionate, the way it can be when you build something from nothing with blood, sweat, and tears, and it resonated with me, having been down that path myself. You hear it in the tone of their voice and the pride in their eyes. That BORG (boarding of recruited gurus!) is the inspiration for today’s post. Here are my takeaways from Week 3:

To be effective in a sales role, you need to speak like a CEO and learn your product inside out, and when people listen to you, they must hear how amazing your product or service is without you ever saying how amazing it is. Your passion will shine through your demonstrated knowledge.

Outreach needs organization, as it demands many different tasks be done at once. It is daunting, but I am sure it will get better as I learn how to get the best out of my day. I am lucky to have a setup that helps me focus: two big screens, headsets, phones, and most of all, a manager who is there to assist and is actively involved in making sure I succeed.

Being flustered is normal! You know that moment when you are making calls and leaving voice messages and suddenly you get a live one? You mumble your way through it and think, wow, I could have been so much better. I had one of those moments this week, and it made me go out, do research, and find better questions to ask. I chalk it up to my learning curve and move on.

As a rookie, your team is your support structure. Again, I am lucky to be surrounded by people who collaborate, take the time to help out, craft email templates, and show you how it’s done on the phone. Without all that, this fish jumping into unknown waters would certainly drown.

It’s been a good week! Share this with your rookies, or if you are a rookie, tell me how your week was. Looking forward to hearing from you.


Categories: DBA Blogs