
Pythian Group

Official Pythian Blog - Love Your Data

Service Reliability: How Systems Administration is Evolving – Part Two

Thu, 2014-06-19 08:06

In part one of this mini-series, I wrote about how the old way of systems administration is broken and shared a story from my personal experiences. Today I'll be talking about how it has evolved with Site Reliability Engineering (also known as Service Reliability Engineering, or SRE).

SRE is born

Interestingly, this is a concept I was pushing back around 2008-2010, but unfortunately my ideas fell on deaf ears. How do Netflix, Facebook, Dropbox, and Google provide such reliable service? First, I can tell you what they don't do: they don't throw more hardware or RAM at the problem.

So how do you do it?

First, everyone has to agree that service reliability is everyone's problem. It is the reason our jobs exist! Management must agree, because the way a company thinks and operates needs to change from the top down, and the focus needs to be on service reliability.

Operations and Development teams need to be aligned so that outages are everyone’s problem. Remember the story in my previous blog post? What do you think would have happened if I had forced the developers to reboot the servers every 12 hours, then every 24 hours? Would it have taken weeks or months to resolve the problem? Of course not.

You need your Ops team to know how to code and understand development. I'm not saying they have to actually be on the Dev team, but they need to understand coding, speak the language, and be able to script their way out of issues. Most importantly, they must be on the hook if things go wrong.

At Google and Facebook, their developers are on call. If a release suddenly causes a significant amount of paging/failures, they don’t hire more Ops employees. The developers fill the gap until the issue is fixed, which clearly happens very quickly.

No one wants to be paged all night, especially developers. If they're getting paged due to a bug in their software, you can bet the issue will be resolved in days, not weeks. Making developers responsible for service reliability means they are required to think about failure during software development: they'll have to design in graceful failure modes and expect their underlying infrastructure to be unreliable. If their little widget takes down the entire site, you can be sure they're going to be up until it's corrected.
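To make the "design in graceful failure modes" point concrete, here is a minimal Python sketch of the idea (all names here are mine, purely illustrative): when the backing service fails, serve the last known-good value instead of taking the whole page down.

```python
def fetch_with_fallback(fetch, cache, key):
    """Try the live backend first; on failure, degrade gracefully by
    serving the last known-good value from a local cache."""
    try:
        value = fetch(key)
        cache[key] = value  # refresh the cache on success
        return value
    except Exception:
        # Assume the infrastructure is unreliable: fall back to stale
        # data instead of failing the whole request.
        return cache.get(key, "temporarily unavailable")


def broken_fetch(key):
    # Stand-in for a backend that is currently down.
    raise ConnectionError("backend down")


cache = {"widget": "cached widget data"}
print(fetch_with_fallback(broken_fetch, cache, "widget"))  # cached widget data
```

The widget degrades instead of failing: the caller always gets an answer, and paging only happens when there is genuinely nothing left to serve.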

The bottom line is that software and hardware come together to provide a service, and one isn’t more important than the other. Major wins in reliability can be realized if you align software design, automation, and hardware design.

At Pythian, we offer this as a managed service. Our team has the experience, background, and skills to help get you to this promised land of highly reliable, scalable systems. We are building a team of Site Reliability Engineers to help companies benefit from, and transition to, this new paradigm in systems administration, where service reliability is everyone's problem.

Categories: DBA Blogs

Service Reliability: How Systems Administration is Evolving – Part One

Wed, 2014-06-18 07:52

The world of systems administration is changing, and it's affecting everyone involved. Today's blog post is the first of two in a mini-series dedicated to Service Reliability: How Systems Administration is Evolving.

The days of formulas that tell us "you need X system admins for every Y physical servers and every Z VMs" are coming to a close. Even the world of IT management is changing.

Why? Because as scale increases, it's simply impossible to continue at the pace we were at. Google saw this in the mid-2000s and began the next evolution of systems administration. They recognized that there was no way they could scale up the way things had been managed for decades. In fact, Netflix came to a similar conclusion.

Interestingly, what Google did has nothing to do with technology, but rather with the philosophy of systems administration. They started a new group, originally called Production Engineering, which was later renamed Site Reliability Engineering (also known as Service Reliability Engineering, or SRE). At its core, SRE changes the fundamental thinking of IT management. It recognizes site reliability as everyone's responsibility. Some might say that's obvious, but in the past it wasn't.

[Image: "Worked fine in dev" meme]

The old way is broken

Most companies have two very separate and distinct groups: Operations and Development. Historically, these two groups are highly siloed and, in some cases, do not get along very well. Why? It comes down to philosophy, really.

Operations folks are driven to ensure systems are up, secure, and reliable. Developers, on the other hand, are driven to create cool new features and applications. Here lies one of the biggest problems.

Years back, I worked as an Operations Director and had a counterpart on the development side who was the Software Engineering Director. We had just finished releasing a major update for one of our platforms, and very quickly saw we had major issues. Our primary application servers (25+ physical boxes) were becoming unstable after about 12 hours of production load (I won't go into why this happened; that's a story for another day). We identified this quickly, so the Ops team began rebooting these boxes in a rolling fashion. The boxes had some specialized hardware in them, and stopping, starting, then testing each one took about 15-30 minutes. We had a team of about five people, and it was not a 24/7 group. Clearly this caused significant pain for our Operations staff. We determined that part of the problem was a memory leak. Due to the nature of the release, rolling back simply was not an option.

The initial response I received was that we would just have to deal with it for now, as there were a few other pressing issues they wanted to resolve first. After many sleepless nights and lost weekends, we finally got an update so that the systems only needed to be rebooted once a day, seven days a week. It stayed this way for months.

Crazy, right?

But why? It was because the software team, and the management we both reported to, were far more interested in hitting deadlines for features and new functionality – not in how much sleep, or how many days off, our Ops employees were getting. I was told on more than one occasion that high availability and recovery were Ops problems, not Development problems.

The core of this problem is simple. Development felt that service reliability was 100% an Operations problem. Our new release takes twice as much RAM? Add more RAM to 100 servers! Our new application requires 20 new servers? Sure, with some work it could be cut down to 2-3, but just get the 20 servers. That's easy!

Without naming names, has anyone else faced this issue? Comment below. Stay tuned for part two, where I’ll be discussing the birth of SRE, how it’s allowed systems administration to evolve, and how to achieve it.

Categories: DBA Blogs

Room for Improvement – Using DBMS_REDEFINITION with Spatial Data

Tue, 2014-06-17 07:53
Resizing a Column in a Table With a Spatial Column

Recently a client asked for help with using DBMS_REDEFINITION to modify a column in a table.

As the intent was to reduce the precision of a numeric column, simply modifying the column would not work, as there was already data in the column:

13:51:44 - jkstill@js01 SQL> /
alter table t modify(id number(6,2))
ERROR at line 1:
ORA-01440: column to be modified must be empty to decrease precision or scale

The client had made a few attempts to use DBMS_REDEFINITION, but ran into several issues with each attempt.

In this case, the owner of the destination table was different from the owner of the source table.
Per My Oracle Support, this situation requires several grants that are not normally necessary for using DBMS_REDEFINITION.
However, I have seen similar problems even when the source and destination owners are the same.

The client's database runs on Solaris.
These tests were performed on the same database version, but on Linux 5 rather than Solaris.

Most of the issues involve permissions that must be granted to the owner of the destination object.
This is true even if the job is being run as SYSDBA.
The errors seen were due to the table in question having a spatial data column with a corresponding spatial index.

17:27:49 – sys@js01 SQL> desc spdata
 Name                                      Null?    Type
 ----------------------------------------- -------- --------------------------------
 ID                                        NOT NULL NUMBER(28)
 RADIUS                                    NOT NULL NUMBER(16,8)
 LOCATION                                  NOT NULL MDSYS.SDO_GEOMETRY


The information in this article is presented somewhat backwards from the way it is usually done.
First, I will show a working example of using DBMS_REDEFINITION to redefine the column.
Following that, some of the problems will be highlighted, and then some technical references shown.

Doing so makes this article a little more user-friendly, I think.
If you are facing a similar issue, the parts you need the most are right up front.

A Working Example

Following are the steps to create the test data, set up DBMS_REDEFINITION, and complete the process.

Create the Test Data

The first step is to create a test user. The permissions granted to this user are quite important, as will be demonstrated later on.

-- redefdest_user.sql

create user redefdest identified by redefdest;

grant resource, connect, create session, unlimited tablespace to redefdest;

grant alter session to redefdest;

grant execute on dbms_redefinition to redefdest;

grant create any sequence  to redefdest;
grant create any table  to redefdest;
grant alter any table  to redefdest;
grant drop any table  to redefdest;
grant lock any table  to redefdest;
grant select any table to redefdest;
grant create any index to redefdest;
grant create any trigger  to redefdest;

Once this process is completed, remember to revoke any privileges that are not needed on a permanent basis.

Now let’s create a simple table with some spatial data:

-- tab_create.sql

create table redefdest.spdata (
   id number(28,0) not null,
   radius number(16,8) not null,
   location mdsys.sdo_geometry not null
);
The next step is to set up the geometry metadata. This must be run as the owner of the SPDATA table.

-- insert_geo_data.sql

delete from user_sdo_geom_metadata where  table_name = 'SPDATA' and column_name = 'LOCATION';

insert into user_sdo_geom_metadata
(table_name, column_name, diminfo, srid)
values ('SPDATA', 'LOCATION',
   sdo_dim_array (
      sdo_dim_element ('Longitude', -180, 180, 0.005),
      sdo_dim_element ('Latitude', -90, 90, 0.005)
   ),
   8307 -- SRID - Spatial Reference IDentifier - see mdsys.cs_srs
);


select * from user_sdo_geom_metadata;

Now we can insert some test data into the table.

-- insert.sql

insert into spdata(id, radius, location)
select
   id,
   dbms_random.value(1,100), -- radius values are arbitrary test data
   sdo_geometry(2001, 8307,
      sdo_point_type (floor(dbms_random.value(-180,180)), floor(dbms_random.value(-90,90)), null),
      null, null
   )
from (
   select level id
   from dual
   connect by level <= 100
) data;


Now create indexes on the ID column and the spatial data column.

-- idx_create.sql

create index redefdest.spdata_id_idx on redefdest.spdata (id);

create index redefdest.spdata_location_idx on redefdest.spdata (location)
   indextype is mdsys.spatial_index  parameters ('SDO_DML_BATCH_SIZE=2000');

The goal of this exercise is to change the scale and precision of the RADIUS column.

That cannot be done directly via ALTER TABLE, as any attempt to do so will cause ORA-01440.

The first task is to create the interim table. This table is nearly identical to the source table; the RADIUS column in the interim table has a smaller scale and precision than in the source table.

The columns in the interim table SPDATA_INTERIM are all created as nullable.
The reason for this will be explained later on.

-- create_interim_table.sql

create table redefdest.spdata_interim (
   id number(28,0),
   radius number(12,4),
   location mdsys.sdo_geometry
);
Now the geo data for the interim table must be created. This is not the same script as seen previously: though the script name is the same, it is stored in a different directory, and the only difference from the previous script is the table name used, SPDATA_INTERIM rather than SPDATA.
As before, this SQL must be run as the owner of the table in question, SPDATA_INTERIM in this case.

-- insert_geo_data.sql

insert into user_sdo_geom_metadata
(table_name, column_name, diminfo, srid)
values ('SPDATA_INTERIM', 'LOCATION',
   sdo_dim_array (
      sdo_dim_element ('Longitude', -180, 180, 0.005),
      sdo_dim_element ('Latitude', -90, 90, 0.005)
   ),
   8307 -- SRID - see mdsys.cs_srs
);


Now that the interim table has been created and the geo data inserted, the redefinition process can begin:

-- redefine.sql

  1  declare
  2  v_col_map varchar(2048) := 'ID ,RADIUS ,LOCATION';
  3  begin
  4     dbms_redefinition.start_redef_table (
  5              uname          => 'REDEFDEST'
  6             ,orig_table     => 'SPDATA'
  7             ,int_table      => 'SPDATA_INTERIM'
  8             ,col_mapping    => v_col_map
  9             ,options_flag   => dbms_redefinition.cons_use_rowid
 10             ,orderby_cols   => null
 11             ,part_name      => null
 12  );
 13* end;
17:34:51 - jkstill@js01 SQL> /

PL/SQL procedure successfully completed.

Once the redefinition process has completed, the dependent objects can be created.

-- copy_dependent_objects.sql

Wrote file afiedt.buf

  1  declare
  2     v_number_of_errors number := 0;
  3  begin
  4     dbms_redefinition.copy_table_dependents(
  5             uname             => 'REDEFDEST'
  6             ,orig_table       => 'SPDATA'
  7             ,int_table        => 'SPDATA_INTERIM'
  8             ,copy_indexes     =>  dbms_redefinition.cons_orig_params
  9             ,copy_triggers    =>  true
 10             ,copy_constraints =>  true
 11             ,copy_privileges  =>  true
 12             ,ignore_errors    => false
 13             ,num_errors       => v_number_of_errors
 14             ,copy_statistics  => true
 15             ,copy_mvlog       => true
 16     );
 17     dbms_output.put_line('Number of Errors' || v_number_of_errors);
 18* END;
17:35:58 - jkstill@js01 SQL> /

PL/SQL procedure successfully completed.

And now finish the redefinition process, check the SPDATA.RADIUS column to ensure it is now NUMBER(12,4), and drop the interim table (now the original table).

-- finish_redef.sql

  1  begin
  2    dbms_redefinition.finish_redef_table (
  3      uname          => 'REDEFDEST'
  4     ,orig_table     => 'SPDATA'
  5     ,int_table      => 'SPDATA_INTERIM'
  6  );
  7* end;
17:35:59 - jkstill@js01 SQL> /

17:36:43 - jkstill@js01 SQL> alter table redefdest.spdata modify (id not null);

Table altered.

17:44:05 - jkstill@js01 SQL>  alter table redefdest.spdata modify (radius not null);
alter table redefdest.spdata modify (radius not null)
ERROR at line 1:
ORA-01442: column to be modified to NOT NULL is already NOT NULL

17:44:05 - jkstill@js01 SQL> alter table redefdest.spdata modify (location not null);

Table altered.


16:01:40 - redefdest@js02 SQL> desc spdata
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                        NOT NULL NUMBER(28)
 RADIUS                                             NUMBER(12,4)
 LOCATION                                  NOT NULL MDSYS.SDO_GEOMETRY

16:01:40 - redefdest@js02 SQL> select count(*) from spdata where radius is not null;


With all of the required permissions in place, everything works as expected.

Getting to that point, however, required reading a few Oracle support notes and some experimentation.

There are several ways this process can fail if one of the requirements is not met.

Alter/Create/Drop Any Table

If the owner of the tables is missing any of the ALTER/CREATE/DROP ANY TABLE privileges, the process will fail when copying dependent objects.
This is due to a bug in Oracle.

In this test the CREATE ANY TABLE privilege has been revoked, and the entire test re-run:

18:04:25 - sys@js01 SQL> revoke create any table from redefdest;

Revoke succeeded.

## Copy Dependents ##

ERROR at line 1:
ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-13249: internal error in Spatial index: [mdidxrbd]
ORA-13249: Error in Spatial index: index build failed
ORA-13249: Error in R-tree: [mdrcrtscrt]
ORA-13231: failed to create index table [MDRT_190DB$] during R-tree creation
ORA-29400: data cartridge error
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1155
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1885
ORA-06512: at line 4

Likewise, if CREATE ANY SEQUENCE is revoked, the process will fail.
(CREATE ANY TABLE had already been re-granted.)

18:12:23 - sys@js01 SQL> revoke create any sequence from redefdest;

Revoke succeeded.

## Copy Dependents ##

ERROR at line 1:
ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-13249: internal error in Spatial index: [mdidxrbd]
ORA-13249: Error in Spatial index: index build failed
ORA-13249: Error in R-tree: [mdrcrtscrt]
ORA-13231: failed to create index table [MDRS_190F9$] during R-tree creation
ORA-29400: data cartridge error
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1155
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1885
ORA-06512: at line 4

So each of these permissions is required for this process to succeed, even when running as SYSDBA.

Permissions Required

The test objects created for this article are quite simple.
As such, there are other potential errors that could occur if the destination account does not have all of the privileges required for successful use of DBMS_REDEFINITION on a table with spatial data.
Before granting new privileges to an account, be sure to audit the current privileges.
That way you will know which privileges can be safely revoked when the table modifications are complete.

Constraint Issues

There are some bugs associated with DBMS_REDEFINITION.

The COPY_DEPENDENT_OBJECTS subprogram seemingly does not handle some constraints very well.

The original table SPDATA was created with all columns set to NOT NULL.

The interim table SPDATA_INTERIM was created with all columns set to NULL.

If the interim table is created with one or more columns as NOT NULL, the following error occurs:

## Copy Dependents ##
ERROR at line 1:
ORA-01442: column to be modified to NOT NULL is already NOT NULL
ORA-06512: at "SYS.DBMS_REDEFINITION", line 984
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1899
ORA-06512: at line 4

Part of the job of the procedure is to create check constraints as found on the original table.

That part of the process doesn’t seem to work quite correctly.

When the process has completed, the columns appear as NULLable to the sqlplus DESC command, but that is not the case.

18:21:15 - jkstill@js01 SQL> desc spdata
 Name								   Null?    Type
 ----------------------------------------------------------------- -------- --------------------------------------------
 ID                                                                  NUMBER(28)
 RADIUS                                                              NUMBER(12,4)
 LOCATION                                                            MDSYS.SDO_GEOMETRY

18:21:17 - jkstill@js01 SQL> insert into spdata values(null,null,null);
insert into spdata values(null,null,null)
ERROR at line 1:
ORA-01400: cannot insert NULL into ("REDEFDEST"."SPDATA"."ID")

18:21:35 - jkstill@js01 SQL> @check_cons

------------------------------ ------------------------------ - ------------------------------ --------
SPDATA			       SYS_C0018231		      C "ID" IS NOT NULL	       ENABLED
SPDATA			       SYS_C0018232		      C "RADIUS" IS NOT NULL	       ENABLED
SPDATA			       SYS_C0018233		      F "LOCATION" IS NOT NULL	       ENABLED

3 rows selected.

The output of the DESC command and the contents of DBA_CONSTRAINTS do not agree.

While it would be interesting to dig into the lower-level reasons why this is happening, for now it is just an inconvenience while there are more pressing matters to work on.


Following are the Oracle Notes considered when troubleshooting this issue.

Registering constraints

The DBMS_REDEFINITION.REGISTER_DEPENDENT_OBJECT procedure could be used to better deal with constraints, at least according to one Oracle support note. I have not yet tried this.

In regard to desc table not showing constraints

This bug affects Oracle on Solaris, so it seems likely to be an issue for the client, and may well affect other platforms as well.

Direct privs granted to table owner

How to Re-Organize a Table Online (Doc ID 177407.1)
ORA-13233 and ORA-01031 When Creating a Spatial Index in Another Schema (Doc ID 251593.1)

Test Code:

Categories: DBA Blogs

Instant REST API For Any Database

Mon, 2014-06-16 07:48

Not so long ago, I was playing with ElasticSearch, which has the interesting characteristic of having a REST API as its primary interface. Sure, it's a little more stilted and awkward than any native interface but, on the other hand, it's a nice universal type of API. Any language that can make an HTTP request can talk to it and, hey, if worse comes to worst, even 'curl' will do. It would be kinda cool if other databases had such a web service.

And then I began to think…

Don’t we have DBIx::Class::Schema::Loader, which can connect to a database and auto-generate its DBIx::Class schema?

package MyDB;

use parent 'DBIx::Class::Schema::Loader'; 


# later on

my $schema = MyDB->connect( 'dbi:SQLite:foo.db' ); # boom, we have our schema

And once we have a DBIx::Class representation of a schema, can’t we introspect it and pretty much get everything there is to know about it?

use Data::Printer;

# get all the table names
my @tables = $schema->sources;

# and all the columns of all the tables
for my $table ( $schema->sources ) {
    say "Table $table";
    p $schema->source($table)->columns_info;
}
That is, that’s if we want to do it manually, considering that there’s already SQL::Translator that can do most of the job for us.

use SQL::Translator;

print SQL::Translator->new(
    parser      => 'SQL::Translator::Parser::DBIx::Class',
    parser_args => {
        dbic_schema => $schema,
    },
    producer    => 'JSON',
)->translate;
Of course, since we are talking web service, we will want to pass everything back and forth using JSON, including database entries. Well, that’s hardly a problem if we use DBIx::Class::Helper::Row::ToJSON.

So it seems we have the database side covered. For the web framework? You'll probably not be surprised to see me go with Dancer. Not only can we leverage the serializers and plugins like Dancer::Plugin::DBIC, but setting up routes is ridiculously easy.

get '/_tables' => sub {
    return [ schema->sources ];
};
Even niftier: remember that Dancer routes are defined at runtime, so we can introspect that schema as much as we want and come up with any route we can dream of.

my @primary_key = schema->source($table)->primary_columns;
my $row_url = join '/', undef, $table, ( '*' ) x @primary_key;

# GET /<table>/<pk>
get $row_url => sub {
    my @ids = splat;
    return $schema->resultset($table)->find({
        zip @primary_key, @ids
    });
};

# GET /<table>
get "/$table" => sub {
    my @things = $schema->resultset($table)->search({ params() })->all;
    return \@things;
};

# create new entry
post "/$table" => sub {
    $schema->resultset($table)->create({ params() });
};
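The wildcard-URL trick above (one placeholder segment per primary-key column) ports easily to other languages. Here is an illustrative Python equivalent of the $row_url construction; the function name is mine, not part of waack:

```python
def row_url(table, primary_keys):
    # Perl: join '/', undef, $table, ('*') x @primary_key
    # The leading "" gives the leading slash; one "*" per key column.
    return "/".join([""] + [table] + ["*"] * len(primary_keys))


print(row_url("Tag", ["id"]))                            # /Tag/*
print(row_url("ImageRelations", ["subject", "object"]))  # /ImageRelations/*/*
```

A composite primary key simply yields more wildcard segments, so the same route template works for any table the schema introspection discovers.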

Added bonus: the way Dancer's params() conglomerates parameters defined in the query string and in the serialized body of the request plays in our favor: simple queries can be passed directly via the URL, and more complicated ones can be defined as JSON structures.
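That merge behaviour can be modeled in a few lines. This is an illustrative Python sketch, not Dancer's actual implementation, and the precedence shown (body wins over query string) is my assumption:

```python
def merged_params(query_params, body_params):
    """Combine query-string parameters with deserialized-body parameters
    into one dict. Body values win on conflict (assumed precedence)."""
    merged = dict(query_params)
    merged.update(body_params or {})
    return merged


# a simple query, straight from the URL...
print(merged_params({"id": "1"}, None))
# ...or a richer one, carried as a JSON body
print(merged_params({}, {"name": {"LIKE": "%bulbo%"}}))
```

Either way, the route handler sees a single bag of parameters and never needs to care how they arrived.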

So, you put all of this together, and you obtain waack. All it needs is a DSN pointing to the right database (and credentials, if needed). To illustrate, let's try it with my Digikam SQLite database.

$ waack dbi:SQLite:digikam4.db
>> Dancer 1.3124 server 28914 listening on
>> Dancer::Plugin::DBIC (0.2100)
== Entering the development dance floor ...

And now, let’s fire up App::Presto as our REST client.

$ presto http://enkidu:3000

http://enkidu:3000> type application/json

First, we can retrieve all the table names.

http://enkidu:3000> GET /_tables

We can also get the whole schema.

http://enkidu:3000> GET /_schema
{
   "translator" : {
      "producer_args" : {},
      "show_warnings" : 0,
      "add_drop_table" : 0,
      "parser_args" : {
         "dbic_schema" : null
      },
      "filename" : null,
      "no_comments" : 0,
      "version" : "0.11018",
      "parser_type" : "SQL::Translator::Parser::DBIx::Class",
      "trace" : 0,
      "producer_type" : "SQL::Translator::Producer::JSON"
   },
   "schema" : {
      "tables" : {
         "ImageRelations" : {
            "options" : [],
            "indices" : [],
            "order" : "12",
            "name" : "ImageRelations",
            "constraints" : [
               {
                  "type" : "UNIQUE",
                  "deferrable" : 1,
                  "name" : "subject_object_type_unique",
                  "on_delete" : "",
                  "reference_fields" : [],
                  "fields" : [ ... ],
                  "match_type" : "",
                  "reference_table" : "",
                  "options" : [],
                  "expression" : "",
                  "on_update" : ""

Too much? We can get the columns of a single table.

http://enkidu:3000> GET /Tag/_schema
{
   "iconkde" : {
      "is_nullable" : 1,
      "data_type" : "text",
      "is_serializable" : 1
   },
   "name" : {
      "is_serializable" : 1,
      "data_type" : "text",
      "is_nullable" : 0
   },
   "id" : {
      "is_nullable" : 0,
      "data_type" : "integer",
      "is_auto_increment" : 1,
      "is_serializable" : 1
   },
   "icon" : {
      "is_nullable" : 1,
      "data_type" : "integer",
      "is_serializable" : 1
   },
   "pid" : {
      "is_serializable" : 1,
      "is_nullable" : 1,
      "data_type" : "integer"
   }
}

Query that table, with a simple condition…

http://enkidu:3000> GET /Tag id=1
[
   {
      "name" : "orchid",
      "icon" : null,
      "id" : 1,
      "pid" : 0,
      "iconkde" : null
   }
]

… or with something a little more oomphie.

$ curl -XGET -H Content-Type:application/json --data '{"name":{"LIKE":"%bulbo%"}}' http://enkidu:3000/Tag
[
   {
      "pid" : 1,
      "name" : "Bulbophyllum 'Melting Point'",
      "icon" : null,
      "id" : 32,
      "iconkde" : "/home/yanick/Pictures/My Plants/IMG_0461.JPG"
   },
   {
      "id" : 56,
      "iconkde" : "tag",
      "icon" : null,
      "pid" : 39,
      "name" : "Bulbophyllum ebergardetii"
   },
   {
      "name" : "bulbophyllum",
      "pid" : 564,
      "iconkde" : null,
      "id" : 565,
      "icon" : 0
   }
]

Btw: I cheated for that last one. Presto doesn't send a body with GET requests, and Dancer doesn't deserialize GET bodies either. Patches will be written tonight.

Anyway, back to the show. We can also select specific rows by primary key.

http://enkidu:3000> GET /Tag/1
{
   "id" : 1,
   "iconkde" : null,
   "pid" : 0,
   "icon" : null,
   "name" : "orchid"
}

Create new rows.

http://enkidu:3000> POST /Tag '{"name":"nepenthes","pid":0}'
{
   "pid" : 0,
   "name" : "nepenthes",
   "iconkde" : null,
   "icon" : null,
   "id" : 569
}

And do updates.

http://enkidu:3000> PUT /Tag/569 '{"icon":"img.png"}'
{
   "icon" : "img.png",
   "iconkde" : null,
   "pid" : 0,
   "name" : "nepenthes",
   "id" : 569
}

Not too shabby, is it? Especially considering that, if you look at the source of waack, you'll see that it barely clocks in at over 100 lines of code. Take a minute and let that sink in.

One hundred lines of code. For a universal database REST web service.

If that’s not standing on the shoulders of giants, then I don’t know what is.

Categories: DBA Blogs

Internet Scale Design: Part Two

Fri, 2014-06-13 08:01

In my previous blog post, I emphasized that internet scale design can be implemented by any type of company, whether it's a small, bootstrapped startup or a rapidly growing, well-funded tier 2. But if it's suitable for that many companies, why isn't everyone moving into the cloud? In my opinion, there are two reasons.

First, the model of utility computing doesn't work for all business models. It is most effective where demand changes, where there are peaks and valleys, in larger-scale systems. It also works well as a way to get your startup or project off the ground with little-to-no capital investment. In the story I began in my previous blog post, the seasonality of the client's environment made them a perfect candidate.

The second is more of a people problem. In many companies, IT leadership, SysAdmins, Developers, DBAs, and everyone else involved in service management have been working with whatever technology stack the company has used for years. It's important to remember that most SysAdmins see their primary job as keeping things up and running, so we typically prefer working with things we know over things we don't.

If a C-level executive or VP returns from a conference about the cloud and issues a mandate to "move everything to the cloud!" to remain "competitive," the SysAdmins will likely fail. Why? Not because they're not smart enough, but because they simply don't know enough about it.

While it would be ideal for the COO to say, “I want us to look into moving our platform into AWS, so I’m going to send you to get Amazon certified,” it rarely happens. Usually it sounds more like, “You’re smart, you’ll figure it out. Oh sure, you already have a full-time job keeping the lights on, but just squeeze it in when you can. We’ll need to see a POC by the end of the quarter.”

I don't need to tell you how this ends: it's obvious. It will fail almost every time.

One of the amazing benefits of the Pythian model is that our teams are exposed to a wide variety of systems. We have built auto-scaling systems in AWS, OpenStack systems, and VMware systems, and we support legacy physical systems as well. Our teams are not relegated to whichever tech stack they happened to be stuck with for the last five years.

The bottom line here is that it doesn't matter what kind of company you're at – whether it's a small retailer, midsize tier 3, or larger tier 2 – if you're willing to sign on for the concept of site reliability engineering and commit to it, together we can accomplish some amazing things, all for a price you can afford.

Categories: DBA Blogs

Log Buffer #375, A Carnival of the Vanities for DBAs

Fri, 2014-06-13 07:57

What's better: watching the Football World Cup or reading Log Buffer? Yes, right, Log Buffer – but please also take out some time to watch the matches, as this tournament comes only once every four years. No? OK, as you say. So read along then.


Oracle:

Alan Hargreaves reasons as to why you should patch NTP.

This post examines the results of a recent database protection survey conducted by Database Trends and Applications (DBTA) Magazine.

Keep jobs active with screen command.

ORE Getting Connected: ore.connect and other commands.

End-to-End ODI12c ETL on Oracle Big Data Appliance Pt.4 : Transforming Data using Python & Hive Streaming

SQL Server:

Enabling and Configuring Reporting and Logging for Maintenance Plans in SQL Server 2012

The articles collected here will help you understand the theories and methodologies behind every stage of the database delivery pipeline, starting when database changes are checked in, and ending when they’re deployed to production.

Stairway to Database Source Control Level 2: Getting a Database into Source Control .

What does it take to become a database administrator, and what kinds of traits should you look for when hiring a DBA? Those traits can be summarized in two categories: technical and personal.

Display all errors thrown within a catch block by a restore statement in a stored procedure using extended events.


MySQL:

RHEL7 & the transition from MySQL to MariaDB: A first look.

FairWarning Privacy Monitoring Solutions Rely on MySQL to Secure Patient Data

MariaDB 5.5.38 Overview and Highlights

Recruiters Looking for MySQL DBAs and MySQL Developers

MariaDB Galera Cluster 10.0.11 now available

On-disk/block-level encryption for MariaDB

Categories: DBA Blogs

Internet Scale Design: Part One

Thu, 2014-06-12 07:52

Whether your company is a small, bootstrapped startup or a rapidly growing, well-funded tier 2 or 3 business, everyone dreams of having ‘Internet scale’ systems which are highly automated, anti-fragile, self-healing, and inexpensive to maintain.

The problem, however, is that those types of complex systems are only within the reach of well-funded companies who can afford to hire elite technical resources and expensive, leading edge technology systems, right?


Recently, I was working with one of my smaller retail clients. Their business is highly seasonal, doing the vast majority of their business during two separate two-month periods each year. During these periods, the demand on their system is extremely high, and downtime would be catastrophic because of the amount of their annual revenue generated during those times.

They run on a LAMP stack (Linux, Apache, MySQL, PHP), hosted on a mix of shared VMs and colocated equipment. Their hosting costs are fixed year-round, even though for most of the year they don’t need two-thirds of that capacity.

They were beginning their annual budget review when we brought up what the next steps were for them. About a year ago, I began introducing the idea of Amazon Web Services (AWS) cloud, taking more of a site reliability engineering (SRE) approach to their business, so we scheduled a conference call with their Chief Financial Officer, a few VPs, and one of my SREs.

“We would like to move you 100% into the Amazon Cloud,” I said. “In doing so, we will automate all of your systems using image and configuration management, and set up detailed monitoring, graphing, and trending systems. We will work with your developers to redesign your applications to take advantage of the platform and its flexibility. The end result will be a system that is considerably more anti-fragile, runs in multiple regions, offers significantly faster recovery in the event of major failures, and can scale down to very little and back up at a moment’s notice.” We assured them that the systems could be self-healing and would require very little management.

The Sr. VP of Product Management’s response surprised me.

“Bill, this sounds awesome. But we’re a pretty small company, and we couldn’t possibly afford that sort of system.” I laughed a bit and responded, “Actually, not only can you afford it, but it will save you tens of thousands of dollars.”

I explained the benefits of auto-scaling and the cost savings it would bring to their business model. We discussed how those cost savings would fund the expenses of having Pythian build and manage the new service.

His response? “This is amazing – why isn’t everyone doing this?”

The answer is twofold. I’ll be sharing my insights in part two, so stay tuned…

Categories: DBA Blogs

Availability Group – Delay with Listener Connectivity After Failover

Wed, 2014-06-11 07:45

The beauty of working for multiple clients from different industries is that you get exposed to a myriad of environment setups and configurations. Every company has its own standards for network and server configuration, as well as different hardware vendors. Each introduces its own kinks and excitement to your everyday work, half of which you’d likely never encounter working in-house on the same hardware.

This past week we encountered a rare and interesting issue with an Availability Group. The issue was two-fold. First, it was initially not failing over automatically on one node, though that was not as exciting as the second part: once it was able to fail over correctly, our client experienced delays in the availability of the listener name outside its own subnet after failover, automatic or otherwise. The listener was reachable within its own subnet, but took more than thirty minutes to become reachable outside of it, even though the failover itself completed smoothly and without error.

The first part was fairly straightforward. Checking the cluster and event logs, we found the automatic failover throwing the error below when trying to fail over to one of the nodes.

Cluster network name resource 'Listener_DNS_NAME' failed registration of one or more associated DNS name(s) for the following reason:
DNS operation refused.

Ensure that the network adapters associated with dependent IP address resources are configured with at least one accessible DNS server.

The error is as it says: the computer object does not have the appropriate permissions in the domain to register the DNS name resource for the listener. For the cluster to perform this operation smoothly, “Authenticated Users” should have Read/Write All permissions on the computer objects for the cluster, its nodes, and the listener DNS name. To grant this, log in to the Active Directory server and:

  1. Open Active Directory Users and Computers.
  2. On the View menu, select Advanced Features.
  3. Right-click the object  and then click Properties.
  4. On the Security tab, click Advanced to view all of the permission entries that exist for the object.
  5. Verify that Authenticated Users is in the list and has the Read and Write All permissions. Add any missing permissions, then save the changes.

After doing that and testing the failover again, we encountered a different, Kerberos-related error, shown below.

The Kerberos client received a KRB_AP_ERR_MODIFIED error from the server ComputerName$. The target name used was HTTP/ This indicates that the target server failed to decrypt the ticket provided by the client. This can occur when the target server principal name (SPN) is registered on an account other than the account the target service is using. Ensure that the target SPN is only registered on the account used by the server. This error can also happen if the target service account password is different than what is configured on the Kerberos Key Distribution Center for that target service. Ensure that the service on the server and the KDC are both configured to use the same password. If the server name is not fully qualified, and the target domain (Domain.COM) is different from the client domain (Doamin.COM), check if there are identically named server accounts in these two domains, or use the fully-qualified name to identify the server.

Ah, the often-overlooked SPN. Setting the SPN should be part of your installation process. To keep the story short so we can get to the exciting part, you can refer here for detailed instructions on how to configure SPNs for SQL Server. Aside from registering the SPN for each of the nodes as specified in the MSDN link, you’ll also need to register an SPN for the listener (assuming SQL Server is listening on the default port, 1433):

setspn -A MSSQLSvc/<ListenerFQDN>:1433 DOMAIN\SQLServiceAccount

This enables Kerberos for client connections to the Availability Group listener and addresses the errors above. After configuring the SPNs for the servers, automatic failover ran smoothly. Or so we thought.
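To illustrate the MSSQLSvc SPN naming convention, here is a small Python sketch that emits one setspn command per host. The host names and service account below are made up for the example:

```python
def setspn_commands(service_account: str, hosts: list[str], port: int = 1433) -> list[str]:
    """Build one 'setspn -A' command per host (each node plus the listener),
    following the MSSQLSvc/<fqdn>:<port> naming convention."""
    return [f"setspn -A MSSQLSvc/{host}:{port} {service_account}" for host in hosts]

# Hypothetical environment: two AG replicas plus the listener name.
for cmd in setspn_commands(r"CONTOSO\svc-sql",
                           ["node1.contoso.com",
                            "node2.contoso.com",
                            "aglistener.contoso.com"]):
    print(cmd)
```

Running this prints the three commands you would hand to a domain admin; only the listener entry is the one most installs forget.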

The client came back to us saying it was taking some time for the application to connect to the listener name. Checking the cluster logs and SQL Server’s end, everything appeared to be in order: no errors were being thrown and the server was accessible. Now it gets interesting. A ping test from within the database subnet succeeded, but a ping test from outside of it timed out, though connections did eventually succeed after a substantial amount of time. After a few more tests, the behavior proved to be the same on both nodes: it took more than thirty minutes for the name to become reachable outside the database subnet. After involving the network admin, we found that a MAC address conflict was happening. That was our “Aha!” moment. Windows Server 2003 and later issue gratuitous ARP (GARP) requests during failover, but some switches/devices do not forward gratuitous ARP by default. As a result, devices on the far side of the switch do not associate the correct MAC address with the name, and connections fail. The situation usually corrects itself once the router detects the failures, broadcasts, and picks up the correct value, which is why the listener becomes accessible after some time. This KB details the issue. To address it, the switch configuration must be changed; check with your hardware vendor for specifics.
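To make the GARP mechanism concrete, here is a purely illustrative Python sketch of the 28-byte ARP payload a node announces after taking over an address. The defining trait of a gratuitous ARP is that the sender and target IP addresses are both the node’s own (new) address, which prompts every host on the segment to refresh its ARP cache entry:

```python
import socket
import struct

def gratuitous_arp(mac: bytes, ip: str) -> bytes:
    """Build the 28-byte payload of a gratuitous ARP request for IPv4
    over Ethernet (RFC 826 layout)."""
    ip_bytes = socket.inet_aton(ip)
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,            # hardware type: Ethernet
        0x0800,       # protocol type: IPv4
        6, 4,         # hardware / protocol address lengths
        1,            # opcode: request
        mac,          # sender MAC: the node now owning the address
        ip_bytes,     # sender IP: the node's own address
        b"\x00" * 6,  # target MAC: unknown / ignored
        ip_bytes,     # target IP: the same address (what makes it gratuitous)
    )

pkt = gratuitous_arp(b"\x00\x11\x22\x33\x44\x55", "10.0.0.25")
# Sender IP (bytes 14-17) equals target IP (bytes 24-27).
```

If a switch drops these frames instead of forwarding them, hosts beyond it keep stale MAC entries for the listener’s IP until their caches time out, which matches the half-hour delay we saw.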

Now, that could have been the end of our problems, but after enabling the switches to forward GARP, we found that the server itself was not sending GARP requests. This is a server configuration issue and requires a registry change. Open the registry on the server and locate the key below:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

There, check whether an ArpRetryCount value exists; if it does, make sure it is not set to 0. Valid values are 0 through 3 (the default is 3), and 0 disables gratuitous ARP entirely. After changing this and restarting the servers, everything worked perfectly.
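For reference, the ArpRetryCount change can be captured in a .reg file. This is a sketch; verify the value against your own environment and vendor guidance before importing it:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; 0 disables gratuitous ARP; 1-3 sets how many gratuitous ARPs are sent (default 3)
"ArpRetryCount"=dword:00000003
```

A restart of the server is still required for the setting to take effect.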

The last two issues are a bit rare, something I wouldn’t have encountered if the client weren’t using that particular hardware and that particular standard configuration.

Categories: DBA Blogs