
Pythian Group

Official Pythian Blog - Love Your Data

Changing Failgroup of ASM Disks in Exadata

Mon, 2014-07-21 08:12

There was a discrepancy in the failgroups of a couple of ASM disks in Exadata. In Exadata, the failgroup name normally corresponds to the cell name, but a couple of disks had different failgroup names. The following plan rectifies the issue online, without any downtime:


1) Check disks and their failgroup:

col name format a27
col path format a45

SQL> select path,failgroup,mount_status,mode_status,header_status,state from v$asm_disk order by failgroup, path;

o/100.100.00.000/DBFSDG_CD_09_mycellnet0    mycellNET0             CACHED  ONLINE  MEMBER      NORMAL
o/100.100.00.000/DATA_CD_08_mycellnet0      mycell_NET0             CACHED  ONLINE  MEMBER      NORMAL

2) Drop Disks:

ALTER DISKGROUP DATA DROP DISK  DATA_CD_08_mycellnet0 REBALANCE POWER 32;

3) Wait for rebalancing to finish:

select * from gv$asm_operation;

4) Add the disks to the correct failgroups

ALTER DISKGROUP DATA ADD failgroup mycellNET0 DISK 'o/100.100.00.000/DATA_CD_08_mycellnet0' rebalance power 32;

5) Wait for the rebalance to complete:

select * from v$asm_operation;

6) Verify the incorrect failgroup has gone

select name,path,failgroup from v$asm_disk where failgroup in ('mycell_NET0') order by name;

select path,failgroup,mount_status,mode_status,header_status,state from v$asm_disk order by failgroup, path;

Categories: DBA Blogs

Small Files on MapR-FS

Mon, 2014-07-21 08:11

One of the well-known best practices for HDFS is to store data in few large files, rather than a large number of small ones. There are a few problems related to using many small files but the ultimate HDFS killer is that the memory consumption on the name node is proportional to the number of files stored in the cluster and it doesn’t scale well when that number increases rapidly.
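To put the scaling concern in rough numbers, here is a back-of-the-envelope sketch (the per-object byte figure is a widely quoted rule of thumb for HDFS name node sizing, not an exact constant):

```python
# Back-of-the-envelope illustration of name node memory growth. The
# ~150 bytes per namespace object (file or block) figure is a commonly
# quoted rule of thumb, not an exact constant.
BYTES_PER_OBJECT = 150

def namenode_heap_bytes(num_files, blocks_per_file=1):
    """Approximate name node heap for num_files files."""
    objects = num_files * (1 + blocks_per_file)   # one file entry + blocks
    return objects * BYTES_PER_OBJECT

# The same 1 TB data set as 4,000,000 small files vs. 4,096 large ones:
small = namenode_heap_bytes(4_000_000)
large = namenode_heap_bytes(4_096)
print("%.0f MB vs %.2f MB" % (small / 1024**2, large / 1024**2))
```

The heap estimate grows linearly with the file count, which is why the same terabyte of data can be either trivial or a serious memory problem for the name node.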

MapR has its own implementation of the Hadoop filesystem (called MapR-FS) and one of its claims to fame is to scale and work well with small files. In practice, though, there are a few things you should do to ensure that the performance of your map-reduce jobs does not degrade when they are dealing with too many small files, and I’d like to cover some of those.

The problem

I stumbled upon this when investigating the performance of a job in production that was taking several hours to run on a 40-node cluster. The cluster had spare capacity but the job was progressing very slowly and using only 3 of the 40 available nodes.

When I looked into the data being processed by the active mappers, I noticed that the vast majority of the splits being read were in blocks replicated to the same 3 cluster nodes. There was a significant data distribution skew towards those 3 nodes, and since map-reduce tasks prefer to execute on nodes where the data is local, the rest of the cluster sat idle while those 3 nodes were IO bound and processing heavily.

MapR-FS architecture

Unlike HDFS, MapR-FS doesn’t have name nodes. The file metadata is distributed across the data nodes instead. This is the key to getting rid of the name node memory limitation of HDFS, and it lets MapR-FS handle many more files, small or large, than an HDFS cluster.

Files in MapR-FS have, by default, blocks of 256MB. Blocks are organised in logical structures called “containers”. When a new block is created it is automatically assigned to one existing container within the volume that contains that file. The container determines the replication factor (3 by default) and the nodes where the replicas will be physically stored. Containers are bound to a MapR volume and cannot span multiple volumes.

There’s also a special container in MapR-FS called a “name container”, which is where the volume namespace and file chunk locations are stored. Besides the metadata, the name container always stores the first 64KB of the file’s data.

Also, there’s only a single name container per MapR-FS volume. So the metadata for all the files in a volume, along with each file’s first 64KB of data, is stored in the same name container. The larger the number of files in a volume, the more data this container will be replicating across the same 3 cluster nodes (by default).

So, if your data set is comprised of a very large number of small files (with sizes around 64KB or less) and is all in the same volume, most of the data will be stored on the same 3 cluster nodes, regardless of the cluster size. Even if you had a very large cluster, whenever you ran a map-reduce job to process those files, the job’s tasks would be pretty much allocated on only 3 nodes of the cluster due to data locality. Those 3 nodes would be under heavy load while the rest of the cluster would sit idle.
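The effect of the 64KB-in-the-name-container rule can be sketched numerically (my own illustration; it ignores metadata overhead):

```python
# Sketch: estimate what share of a data set's bytes lands in a MapR-FS
# volume's name container, which stores each file's first 64 KB.
# Illustrative only; metadata overhead is ignored.
FIRST_CHUNK = 64 * 1024

def name_container_share(file_sizes):
    """Fraction of total bytes that the name container would hold."""
    in_name = sum(min(size, FIRST_CHUNK) for size in file_sizes)
    return in_name / sum(file_sizes)

# 100,000 files of 32 KB: every byte is below the 64 KB mark.
print(name_container_share([32 * 1024] * 100_000))    # 1.0
# 1,000 files of 256 MB: only a sliver of each file.
print(name_container_share([256 * 1024**2] * 1_000))  # ~0.00024
```

With small files the name container (and its 3 replica nodes) effectively holds the whole data set; with large files it holds a negligible fraction.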

Real impact

To give you an idea of the dimension of this problem, the first time I noticed it in production was due to a Hive query that was causing high load on only 3 nodes of a 40-node cluster. The job took 5 hours to complete. When I looked into the problem, I found that the table used by the Hive query had tens of thousands of very small files, many of them smaller than 64KB, due to the way the data was being ingested.

We coalesced the table to combine all those small files into a much smaller number of bigger ones. The job ran again after that, without any changes, and completed in just 15 minutes!! To be completely fair, we also changed the table’s file format from SequenceFile to RCFile at the same time we coalesced the data, which certainly brought some additional performance improvements. But, from the 3-node contention I saw during the first job run, I’m fairly convinced that the main issue in this case was the data distribution skew due to the large amount of small files.

Best practices

This kind of problem is mitigated when large files are used, since only a small fraction of the data (everything below the 64KB mark) will be stored in the name container, with the rest distributed across other containers and, therefore, other nodes. We’ll also have a smaller number of files (for a similar data volume), which reduces the problem even more.

If your data is ingested in a way that creates many small files, plan to coalesce those files into larger ones on a regular basis. One good tool for that is Edward Capriolo’s File Crusher. This is also (and especially) applicable to HDFS.

Best practice #1: Keep your data stored in large files. Pay special attention to incremental ingestion pipelines, which may create many small files, and coalesce them on a regular basis.
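As a toy illustration of what coalescing does, here is a local-filesystem sketch (real pipelines would use File Crusher or a map-reduce job; all names here are made up):

```python
# Toy local-filesystem sketch of "coalescing": concatenate many small
# files into a few larger ones. A real pipeline would use a tool like
# File Crusher or a map-reduce job; everything here is illustrative.
import os
import tempfile

def coalesce(src_files, out_dir, target_size=256 * 1024**2):
    """Pack src_files into output files of roughly target_size bytes."""
    os.makedirs(out_dir, exist_ok=True)
    outputs, batch, batch_bytes = [], [], 0
    for path in src_files:
        batch.append(path)
        batch_bytes += os.path.getsize(path)
        if batch_bytes >= target_size:      # batch is big enough: write it
            outputs.append(write_batch(batch, out_dir, len(outputs)))
            batch, batch_bytes = [], 0
    if batch:                               # leftover partial batch
        outputs.append(write_batch(batch, out_dir, len(outputs)))
    return outputs

def write_batch(batch, out_dir, idx):
    out_path = os.path.join(out_dir, "part-%05d" % idx)
    with open(out_path, "wb") as out:
        for path in batch:
            with open(path, "rb") as src:
                out.write(src.read())
    return out_path

# Demo: ten 300-byte files packed into ~1 KB outputs (4 + 4 + 2 files).
tmp = tempfile.mkdtemp()
srcs = []
for i in range(10):
    p = os.path.join(tmp, "small-%d" % i)
    with open(p, "wb") as f:
        f.write(b"x" * 300)
    srcs.append(p)
parts = coalesce(srcs, os.path.join(tmp, "out"), target_size=1024)
print(len(parts))
```

The same idea scales up: fewer, larger files mean fewer namespace entries and less data pinned to the name container's replica nodes.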

A quick and dirty workaround for the 3-node contention issue explained above would be to increase the replication factor for the name container. This would allow more nodes to run map-reduce tasks on that data. However, it would also use a lot more disk space just to achieve the additional data locality across a larger number of nodes. This is NOT an approach I would recommend to solve this particular problem.

Instead, the proper way to solve this in MapR-FS is to split your data across different volumes. Especially if you’re dealing with a large number of small files that cannot be coalesced, splitting them across multiple volumes will keep the number of files per volume (and per name container) under control, and it will also spread the small files’ data evenly across the cluster, since each volume will have its own name container, replicated across a different set of nodes.

The volumes may or may not follow your data lifecycle, with monthly, weekly or even daily volumes, depending on the amount of data being ingested and files being created.

Best practice #2: Use MapR-FS volumes to plan your data distribution and keep the number of files per volume under control.

References:
  1. MapR Architecture Guide

Cloudera Challenge 2014

Mon, 2014-07-21 08:09

Yesterday, Cloudera released the score reports for their Data Science Challenge 2014 and I was really ecstatic when I received mine with a “PASS” score! This was a real challenge for me and I had to put a LOT of effort into it, but it paid off in the end!

Note: I won’t bother you in this blog post with the technical details of my submission. This is just an account of how I managed to accomplish it. If you want the technical details, you can look here.

Once upon a time… I was a DBA

I first learned about the challenge last year, when Cloudera ran it for the first time. I was intrigued, but after reading more about it I realised I didn’t have what would be required to complete the task successfully.

At the time I was already delving into the Hadoop world, even though I was still happily working as an Oracle DBA at Pythian. I had studied the basics and the not-so-basics of Hadoop, and the associated fauna and had just passed my first Hadoop certifications (CCDH and CCAH). However, there was (and is) still so much to learn! I knew that to take the challenge I would have to invest a lot more time into my studies.

“Data Science” was still a fuzzy buzzword for me. It still is, but at the time, I had no idea what was behind it. I remember reading this blog post about how to become a data scientist. A quick look at the map in that post turned me off: apart from the “Fundamentals” track, I barely had any idea what the rest of the map was about! There was a lot of work to do to get there.

There’s no free lunch

But as I started reading more about Data Science, I began to realise how exciting it was and how interesting the problems it could help tackle were. By then I had already put my DBA career on hold and joined the Big Data team. I felt a huge gap between my expertise as a DBA and my skills as a Big Data engineer, so I put a lot of effort into studying the cool things I wanted to know more about.

The online courses at Coursera, edX, Stanford and the like were a huge help, and soon I was wading through course after course, sometimes several at once: Scala, R, Python, more Scala, data analysis, machine learning, and more machine learning, etc… It was not easy, and the learning curve was steep. The more I read and studied, the more I realised how much more there was to learn. And there still is…

The Medicare challenge

But when Cloudera announced the 2014 Challenge, early this year, I read the disclaimer and realised that this time I could understand it! Even though I had just scratched the surface of what Data Science is meant to encompass, I actually had tools to attempt tackling the challenge.

“Studies shall not stop!!!”, I soon found, as I had a lot more to learn to first pass the written exam (DS-200) and then tackle the problem proposed by the challenge: to detect fraudulent claims in the US Medicare system. It was a large undertaking but I took it one step at a time, and eventually managed to complete a coherent and comprehensive abstract to submit to Cloudera, which, as I gladly found yesterday, was good enough to give me a passing score and the “CCP: Data Scientist” certification from Cloudera!

I’m a (Big Data) Engineer

What’s next now? I have only one answer: keep studying. There’s so much cool stuff to learn. From statistics (yes, statistics!) to machine learning, there’s still a lot I want to know about, and that keeps driving me forward. I’m not turning into a Data Scientist, at least not for a while. I am an Engineer at heart; I like to fix and break things at work, and Data Science is one more of those tools I want to have to make my job more interesting. But I want to know more about it and learn how to use it properly, at least to avoid my Data Scientist friends cringing away every time I tell them I’m going to run an online logistic regression!


SQL Server 2014 Delayed Durability from an Application Perspective

Mon, 2014-07-21 08:06

The idea of this blog post is to describe what the delayed durability feature is in SQL Server 2014 and to describe a use case from an application development perspective.

With every new SQL Server release we get a bunch of new features, and delayed durability of transactions really caught my attention. Most relational database engines handle transactions with the write-ahead logging method (http://en.wikipedia.org/wiki/Write-ahead_logging). Basically, when a transaction comes into the database, in order to commit successfully it must first be hardened in the transaction log, and only later are the modified pages flushed from memory to the data file, always in that synchronous order. Since the transaction log is pretty much a record of each transaction, recovery can replay the log in case data pages were never written to the data file. In summary, this is a data protection method for handling transactions; MSDN calls this a transaction with FULL DURABILITY.

So what is Delayed Transaction Durability?

To accomplish delayed durability, log writes happen asynchronously from in-memory buffers to disk. Log records are kept in memory until either the buffer is full or a flush takes place. This means that instead of hardening the log at commit time, the records just wait in memory, and control of the transaction is returned to the requesting application sooner. A transaction that initially only hits memory and avoids waiting on the disk heads will certainly complete faster as well.
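The trade-off can be illustrated with a toy log writer (a sketch of the general buffering idea only; it assumes nothing about SQL Server's actual internals):

```python
# Toy model of full vs. delayed durability; this assumes nothing about
# SQL Server internals, it just illustrates the trade-off. A "durable"
# writer fsyncs every record before returning; a "delayed" writer
# buffers records and only hits the disk when the buffer fills or an
# explicit flush (think sp_flush_log) is requested.
import os
import tempfile

class LogWriter:
    def __init__(self, path, delayed=False, buffer_records=100):
        self.f = open(path, "ab")
        self.delayed = delayed
        self.buffer = []
        self.limit = buffer_records

    def write(self, record):
        if self.delayed:
            self.buffer.append(record)      # control returns immediately
            if len(self.buffer) >= self.limit:
                self.flush()                # buffer full: write for real
        else:
            self.f.write(record)
            self.f.flush()
            os.fsync(self.f.fileno())       # wait for the disk every time

    def flush(self):
        self.f.write(b"".join(self.buffer))
        self.buffer = []                    # a crash before this point
        self.f.flush()                      # would lose buffered records
        os.fsync(self.f.fileno())

    def close(self):
        self.flush()
        self.f.close()

# Demo: with delayed durability, nothing reaches the file until a flush.
path = os.path.join(tempfile.mkdtemp(), "log")
w = LogWriter(path, delayed=True, buffer_records=1000)
for _ in range(10):
    w.write(b"event\n")
buffered = os.path.getsize(path)   # still 0: records live in memory
w.flush()
flushed = os.path.getsize(path)    # now 60 bytes are on disk
w.close()
print(buffered, flushed)
```

The caller gets control back faster in delayed mode precisely because the durability work is deferred, which is also where the data-loss risk comes from.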

But when is the data really stored in disk?

SQL Server decides this depending on how busy/full the memory is, and then executes asynchronous writes to finally store the information on disk. You can always force this to happen with the stored procedure sp_flush_log.

OK, but there is a risk, right?

Correct. Since the original data protection method is basically skipped, in the event of a system disruption, such as a SQL Server failover or simply an unexpected shutdown, some data can be lost in the so-called limbo somewhere between the application pool and the network cable.

Why would I want to use this?

Microsoft recommends using this feature only if you can tolerate some data loss, if you are experiencing a bottleneck or performance issue related to log writes, or if your workload has a high contention rate (processes waiting for locks to be released).

How do I Implement it?

To use delayed transactions, you enable this as a database property. You can use the FORCED option, which will try to handle all transactions as delayed durable, or the ALLOWED option, which lets you use delayed durable transactions that you then request explicitly in your T-SQL (this is called atomic block level control). See a sample adapted from MSDN below:


CREATE PROCEDURE …
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC WITH
(
    DELAYED_DURABILITY = ON,
    TRANSACTION ISOLATION LEVEL = SNAPSHOT,
    LANGUAGE = N'English'
)
    -- procedure body goes here
END

For more syntax and details I invite you to check the so full of wisdom MSDN Library.

Enough of the background information, and let’s take this puppy for a ride, shall we?

Consider the following scenario: you manage a huge application, somewhere between an ERP and a finance module. The company has developed this application from scratch, and each year more and more features are added. The company decides it wants to standardize procedures and have more control over the events of the application. They realize they do not have enough audit trails: if someone deletes data, or inserts a new deal or customer information, management needs a track record of almost anything that happens. They have some level of logging, but it is implemented differently depending on each developer’s taste and mood.

So, Mr. MS Architect decides they will implement the Enterprise Library logging block and handle both exceptions and custom logging with this tool. After adding all this logging, the system begins to misbehave, and the usual slow is now officially really slow. Mr. Consultant then comes in and suggests that the logging data be moved to a separate database, and that this database use delayed durability. By doing so, transactions related to logging events will have less contention and will return control to the application faster; some level of data loss can be tolerated, which makes the decision even better.

Let’s build a proof of concept and test it.

You can find a sample project attached: WebFinalNew

You need to have Enterprise Library installed in Visual Studio. For this sample I am using Visual Studio 2010.

You need to create 2 databases, DelayedDB and NormalDB (of course, we need to use SQL Server 2014).

 


Use the attached script LoggingDatabase (which is part of the Enterprise Library scripts); it will create all the objects needed for the application logging block.

 

In the DelayedDB properties, set Delayed Durability to FORCED; this will make all transactions delayed durable (please note that some transactions will never be delayed durable, such as system transactions, cross-database transactions, and operations involving FileTable, Change Tracking and Change Data Capture).


You need to create a Windows web project; it should have a web.config. If not, you can manually add a configuration file.


 

Make sure you add all the application block references (Logging Block).


Now right-click the web.config or app.config file and edit your Enterprise Library configuration.


 

In the Database Settings block, add 2 new connections (one for NormalDB and the other for DelayedDB), making sure to specify each connection in the form of a connection string.


 

In the Logging block, create a new category called DelayedLogging; this will point to the database with delayed durability enabled.


Then add 2 database trace listeners. Configure the General category to point to "Database Trace Listener" and the DelayedLogging category to point to "Database Trace Listener 2". Configure each listener to point to the corresponding database (one to each database previously configured in the Database Settings block).


 

Save all changes, go back to the application, and configure a simple design layout.


 

Add a code-behind to the button on the screen, and build code that iterates and sends X number of commands to each database, tracking in a variable the time it takes to send each transaction and regain control of the application. Check the attached project for more details, but use logWriter.Write, passing as a parameter the category configured to connect to DelayedDB (DelayedLogging), and the general category (default, no parameter) to connect to NormalDB. See a sample of how a logging transaction is fired below:

 


logWriter.Write("This is a delayed transaction","DelayedLogging");
logWriter.Write("This is a transaction");

This code will call the logging block and execute a transaction against each database, the "normal" one and the delayed durable one. It will also track the milliseconds it takes to return control to the application; additionally, I will have Performance Monitor and query statistics from the database engine running to see the difference in behavior.

 

Quick Test

Batch insert of 1,000 rows: the normal database took, on average, 1 millisecond more per transaction to return control to the application.

 

What information do we get from sys.dm_io_virtual_file_stats?

Database    io_stall_read_ms  num_of_writes  num_of_bytes_written  io_stall_write_ms  io_stall  size_on_disk_bytes
DelayedDB   47                5126           13843456              4960               5007      1048576
Normal      87                5394           14492160              2661               2748      1048576

We can see that the same amount of data was sent to both databases (last column, size_on_disk_bytes). The interesting observation is the stalls: in the delayed durable database, the write stall is higher. This means that although the transaction executes "faster", what that really means is that it returns control to the application faster; the time it takes to actually store the data on disk can be higher, since it is done in async mode.

 

Let’s see a quick graphic of the performance impact

Delayed Durability


With delayed durability, the average disk queue length is higher, since writes wait to fill the buffer and are then executed in bursts. A peak appears right after the transaction completes, when pending writes are executed (the moment I issued sp_flush_log).

 

Full Durability


 

With full durability, the average disk queue length is lower: since writes are executed sequentially as they happen, fewer pending transactions accumulate in memory.

 

Conclusion

The delayed durability feature is definitely a great addition to your DBA tool belt. It needs to be used with all the risks involved taken into consideration, but if properly tested and implemented, it can definitely improve the performance and architecture of certain applications. It is important to understand that this is not a turbo button (like some people treat the NOLOCK hint); it should be used for certain types of transactions and tables. Will this change your design methods and make you plan for a separate delayed durable database? Or plan to implement certain modules with delayed durable transactions? This will surely have an interesting impact on software design and architecture.


Log Buffer #380, A Carnival of the Vanities for DBAs

Fri, 2014-07-18 07:48

There is a plethora of ideas sprouting in the technology arena on a daily basis, and bloggers are catching up with them pretty nicely. This Log Buffer Edition skims some of them.

Oracle:

Building Dynamic Branded Digital Experiences with Oracle WebCenter

ORA-19909: datafile 1 belongs to an orphan incarnation

Opatchauto for OEM Management Servers

Get more out of Product Information Management with PIM Training Resources

ADF Mobile 12c (MAF) Support for Master-Detail Data Control

SQL Server:

Convert Rows into Columns

Developing a Custom SSIS Source Component

Stairway to SQL Server Security Level 3: Principals and Securables

Automate Registering and Maintaining Servers in SQL Server Management Studio (SSMS)

Implementation of partition switching within SSIS

MySQL:

High Availability with mysqlnd_ms on Percona XtraDB Cluster

Oracle Critical Patch Update for MySQL

MySQL Slave Scaling and more

ClouSE 1.0 is generally available

Install Apache2, PHP5 And MySQL Support On CentOS 7 (LAMP)


Auditing Files in Linux

Tue, 2014-07-15 08:25

The stat command in Linux can be used to display a file's or a file system's status.

I came across an issue in RHEL4 where a file's 'Change time' was far ahead of its 'Modification time', without a change in uid, gid or mode.

# stat /etc/php.ini
File: `/etc/php.ini'
Size: 45809 Blocks: 96 IO Block: 4096 regular file
Device: 6801h/26625d Inode: 704615 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2014-06-25 13:22:15.000000000 -0400
Modify: 2012-10-01 13:21:41.000000000 -0400
Change: 2014-06-01 20:06:35.000000000 -0400 

To explain why this can be considered unusual, I will start by explaining the time values associated with a file:

  • Access (atime) – Time the file was last accessed. This involves syscalls like open(); for example, running the cat command on the file updates it.
  • Modify (mtime) – Time the file's content was last modified. For example, if a file is edited and some content is added, this value changes.
  • Change (ctime) – Updated whenever any of the file's other inode attributes changes: mode, uid, gid, size, or modification time.

So a content change updates the size and mtime, and the ctime along with them. If a file's ctime changes without a corresponding change in mtime, mode, uid, or gid, the behaviour can be considered unexpected.
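Given these semantics, a quick way to hunt for files showing this anomaly is to compare ctime and mtime across a tree (a hypothetical helper in the spirit of this investigation, not a standard tool):

```python
# Hypothetical helper, not part of any standard tool: walk a tree and
# flag files whose ctime is well ahead of their mtime, i.e. the inode
# changed after the last content modification.
import os
import tempfile
import time

def ctime_ahead_of_mtime(root, min_gap_seconds=60):
    suspects = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue                    # vanished or unreadable
            gap = st.st_ctime - st.st_mtime
            if gap > min_gap_seconds:
                suspects.append((path, gap))
    return suspects

# Demo: push a file's mtime an hour into the past; the utime call
# itself sets ctime to "now", so ctime ends up far ahead of mtime.
d = tempfile.mkdtemp()
p = os.path.join(d, "php.ini")
open(p, "w").close()
os.utime(p, (time.time() - 3600,) * 2)
suspects = ctime_ahead_of_mtime(d)
print(suspects)
```

Running something like this across /etc would surface files with the same stat signature as the php.ini above.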

On checking the stat upstream (coreutils) source, I came across a known issue: running chmod on a file without changing the file permissions can still alter the inode and cause the same behaviour. It is documented in the TODO of the coreutils upstream source.

Modify chmod so that it does not change an inode's st_ctime
when the selected operation would have no other effect.
First suggested by Hans Ecke  in

http://thread.gmane.org/gmane.comp.gnu.coreutils.bugs/2920

Discussed more recently on http://bugs.debian.org/497514.

This behaviour is not fixed in upstream.

Now we can assume that a process or user ran a chmod command which did not actually change the attributes of php.ini. This would change the ctime and no other attributes.

I can reproduce the same behaviour in my Fedora system as well.

For example,

# stat test
File: ‘test’
Size: 0             Blocks: 0          IO Block: 4096   regular empty file
Device: 803h/2051d    Inode: 397606      Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2014-07-14 16:26:10.996128678 +0530
Modify: 2014-07-14 16:26:10.996128678 +0530
Change: 2014-07-14 16:26:10.996128678 +0530
Birth: -
# chmod 644 test
# stat test
File: ‘test’
Size: 0             Blocks: 0          IO Block: 4096   regular empty file
Device: 803h/2051d    Inode: 397606      Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2014-07-14 16:26:10.996128678 +0530
Modify: 2014-07-14 16:26:10.996128678 +0530
Change: 2014-07-14 16:26:41.444377623 +0530 
Birth: -

But this is just an assumption. For getting a conclusive answer on what is causing this behaviour in this specific system, we would need to find what process is causing this.

auditd in Linux can be used to watch a file and capture audit records for it in /var/log/audit/.

To watch the file, I edited /etc/audit.rules and added the following:

-w /etc/php.ini

Then restarted auditd,

# service auditd start
Starting auditd:                                           [  OK  ]
# chkconfig auditd on

Running a cat command on the php.ini file produces log entries like the following:

type=SYSCALL msg=audit(1404006436.500:12): arch=40000003 syscall=5 success=yes exit=3 a0=bff88c10 a1=8000 a2=0 a3=8000 items=1 pid=19905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0
egid=0 sgid=0 fsgid=0 comm="cat" exe="/bin/cat"
type=FS_WATCH msg=audit(1404006436.500:12): watch_inode=704615 watch="php.ini" filterkey= perm=0 perm_mask=4
type=FS_INODE msg=audit(1404006436.500:12): inode=704615 inode_uid=0 inode_gid=0 inode_dev=68:01 inode_rdev=00:00
type=CWD msg=audit(1404006436.500:12):  cwd="/root"
type=PATH msg=audit(1404006436.500:12): name="/etc/php.ini" flags=101 inode=704615 dev=68:01 mode=0100644 ouid=0 ogid=0 rdev=00:00

The ausearch command is available for searching through the audit logs. The following command displays the audit entries since July 6th related to the /etc/php.ini file:

# ausearch -ts 7/6/2014 -f /etc/php.ini | less

When I noticed the ctime change again, I ran ausearch and saw multiple events on the file. Most of the accesses were via syscall 5, which is the open system call.

The following entries seem to point to the culprit. You can see that the system call is 271:

type=SYSCALL msg=audit(1404691594.175:37405): arch=40000003 syscall=271 success=yes exit=0 a0=bff
09b00 a1=bff07b00 a2=7beff4 a3=bff0a1a0 items=1 pid=9830 auid=4294967295 uid=0 gid=0 euid=0 suid=
0 fsuid=0 egid=0 sgid=0 fsgid=0 comm="bpbkar" exe="/usr/openv/netbackup/bin/bpbkar"
type=FS_WATCH msg=audit(1404691594.175:37405): watch_inode=704615 watch="php.ini" filterkey= perm
=0 perm_mask=2
type=FS_INODE msg=audit(1404691594.175:37405): inode=704615 inode_uid=0 inode_gid=0 inode_dev=68:
01 inode_rdev=00:00
type=FS_WATCH msg=audit(1404691594.175:37405): watch_inode=704615 watch="php.ini" filterkey= perm
=0 perm_mask=2
type=FS_INODE msg=audit(1404691594.175:37405): inode=704615 inode_uid=0 inode_gid=0 inode_dev=68:
01 inode_rdev=00:00
type=CWD msg=audit(1404691594.175:37405):  cwd="/etc"
type=PATH msg=audit(1404691594.175:37405): name="/etc/php.ini" flags=1 inode=704615 dev=68:01 mod
e=0100644 ouid=0 ogid=0 rdev=00:00

Using ausearch, you can also search by system call. You can see that there is only one record with system call number 271. Another advantage of ausearch is that it converts the timestamps to human-readable form.

# ausearch -ts 7/6/2014 -sc 271 -f /etc/php.ini 

You can see the time at the start of each block of search output.

----
time->Sun Jul  6 20:06:34 2014
type=PATH msg=audit(1404691594.175:37405): name="/etc/php.ini" flags=1 inode=704615 dev=68:01 mod
e=0100644 ouid=0 ogid=0 rdev=00:00
type=CWD msg=audit(1404691594.175:37405):  cwd="/etc"
type=FS_INODE msg=audit(1404691594.175:37405): inode=704615 inode_uid=0 inode_gid=0 inode_dev=68:
01 inode_rdev=00:00
type=FS_WATCH msg=audit(1404691594.175:37405): watch_inode=704615 watch="php.ini" filterkey= perm
=0 perm_mask=2
type=FS_INODE msg=audit(1404691594.175:37405): inode=704615 inode_uid=0 inode_gid=0 inode_dev=68:
01 inode_rdev=00:00
type=FS_WATCH msg=audit(1404691594.175:37405): watch_inode=704615 watch="php.ini" filterkey= perm
=0 perm_mask=2
type=SYSCALL msg=audit(1404691594.175:37405): arch=40000003 syscall=271 success=yes exit=0 a0=bff
09b00 a1=bff07b00 a2=7beff4 a3=bff0a1a0 items=1 pid=9830 auid=4294967295 uid=0 gid=0 euid=0 suid=
0 fsuid=0 egid=0 sgid=0 fsgid=0 comm="bpbkar" exe="/usr/openv/netbackup/bin/bpbkar"

The timestamps match.

# stat /etc/php.ini
File: `/etc/php.ini'
Size: 45809         Blocks: 96         IO Block: 4096   regular file
Device: 6801h/26625d    Inode: 704615      Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2014-07-07 01:06:47.000000000 -0400
Modify: 2012-10-01 13:21:41.000000000 -0400
Change: 2014-07-06 20:06:34.000000000 -0400

From the RHEL4 kernel source code, we can see that syscall 271 is utimes:

# cat ./include/asm-i386/unistd.h |grep 271
#define __NR_utimes        271

utimes is a syscall that can change a file's last access and modification times (it supersedes the older utime call, and was itself later superseded by utimensat).

The NetBackup process bpbkar is issuing a utimes syscall on the file, possibly setting the mtime to the value it already has, which still updates the ctime and results in the change we observed.
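The suspected pattern is easy to reproduce (my reconstruction, not NetBackup's actual code): re-applying a file's existing timestamps leaves mtime visibly unchanged but still advances ctime.

```python
# Reconstruction of the suspected bpbkar pattern (a sketch, not
# NetBackup code): re-apply a file's existing atime/mtime with a
# utimes-family call. mtime is "changed" to the value it already had,
# yet ctime advances, which is exactly the stat signature on php.ini.
# Assumes a Linux filesystem with fine-grained timestamps (e.g. ext4).
import os
import tempfile
import time

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

st = os.stat(path)
time.sleep(1.1)                                      # let the clock tick
os.utime(path, ns=(st.st_atime_ns, st.st_mtime_ns))  # same values again

st2 = os.stat(path)
assert st2.st_mtime_ns == st.st_mtime_ns             # mtime: no visible change
assert st2.st_ctime_ns > st.st_ctime_ns              # ctime: moved forward
os.unlink(path)
print("mtime unchanged, ctime advanced")
```

This matches the php.ini observation: the backup tool touches only the timestamps, yet the inode's change time moves forward.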

This example shows us the power of the Linux Auditing System. Auditing is a kernel feature which provides an interface for daemons like auditd to capture events related to system and user space processes and log them.


Lock Timeout Error While Creating Database

Mon, 2014-07-14 08:07

Recently I worked on an issue where a third-party application was failing during installation. Below is the error returned by the application:

Lock request time out period exceeded. (Microsoft SQL Server, Error: 1222)

The application was failing while trying to create a database. It seemed to have a default timeout setting of about 60 seconds, after which it failed because the command had not yet returned any results.

I tried creating a test database directly from SQL Server Management Studio and noticed that it was taking a long time as well. When I checked sys.sysprocesses, I found that the database-creation session was experiencing IO-related waits.

Some of the reasons why creating a database might take a long time are:

  • IO bottleneck on the drive where we are creating the database files
  • Large size of Model database
  • Instant File Initialization is not enabled

I verified the size of the model database files and found that the model data file was 5 GB and the log file 1 GB. I reduced the data file to 1 GB and the log file to 512 MB, and then I was able to create the test database quickly.

We then restarted the installation of the application, and it completed successfully.

Categories: DBA Blogs

Script to Get the Create Date of an Object from Multiple Databases

Mon, 2014-07-14 08:04

As a DBA, it is common to get a request to run scripts against production databases. In some environments there are multiple databases with the same schema on the same instance, each serving a different customer, and the script needs to be run against all of them. I have seen an environment with 50+ such databases.

When running a large script against many databases, it is easy to miss one or more of them through simple oversight, so a request often follows to verify that the script was executed everywhere. One way to verify this is to pick an object (stored procedure, table, view, function) created as part of the script, fetch its create date from every database, and confirm it matches the date and time the script was run. The challenge is to get the create date of a specific object from all databases at once with little effort.

Below is code that fetches the create date of the specified object (stored procedure, table, view, or function) from all user databases on an instance. Set the object name in @ObjName on the 7th line of the code, run it, and verify from the output that the create date matches the time the script was run.

-- Script Originally Written By: Keerthi Deep | http://www.SQLServerF1.com/

Set NOCOUNT ON
Declare @ObjName nvarchar(300)
declare @dbn nvarchar(200)

Set @ObjName = 'Object_Name' -- Specify the name of the Stored Procedure/ Table/View/Function

create table #DatabaseList(dbname nvarchar(2000)) 

Insert into #DatabaseList select name from sys.sysdatabases
where name not in ('master', 'msdb', 'model','tempdb')
order by name asc 

--select * from #DatabaseList
Create table #Output_table (DB nvarchar(200), crdate datetime, ObjectName nvarchar(200))
declare c1 cursor for select dbname from #DatabaseList
open c1
Fetch next from c1 into @dbn
WHILE @@FETCH_STATUS = 0
BEGIN

declare @Query nvarchar(2048)
Set @Query = 'select ''' + @dbn + ''' as DBName, crdate, [name] from ' + @dbn + '.sys.sysobjects where name = ''' + @ObjName + ''''
--print @Query
Insert into #Output_table Exec sp_executesql @Query

FETCH NEXT FROM c1 into @dbn
END
CLOSE c1
DEALLOCATE c1

select * from #Output_table
Drop table #Output_table
Drop table #DatabaseList
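The @Query string assembled inside the cursor loop (with its doubled single quotes) can be sanity-checked without a server handy. A small Python mirror of the same concatenation, purely illustrative ("SalesDB" and "usp_LoadCustomers" are made-up names):

```python
def build_query(dbname: str, objname: str) -> str:
    """Mirror the T-SQL @Query concatenation: each doubled ''
    in the T-SQL literal becomes a single quote in the result."""
    return (f"select '{dbname}' as DBName, crdate, [name] "
            f"from {dbname}.sys.sysobjects where name = '{objname}'")

q = build_query("SalesDB", "usp_LoadCustomers")
print(q)
```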

Limitations:
This works only if the object was created by the script's CREATE statement; if ALTER was used, crdate is not updated. (The sys.objects catalog view exposes a modify_date column that does reflect ALTER.)

Any suggestions are welcome.

Categories: DBA Blogs

A Ringleader Proxy for Sporadically-Used Web Applications

Fri, 2014-07-11 08:46

As you might already know, I come up with my fair share of toy web applications.

Once created, I typically throw them on my server for a few weeks but, as the resources of good ol’ Gilgamesh are limited, they eventually have to be turned off to make room for the next wave of shiny new toys. Which is a darn shame, as some of them can be useful from time to time. Sure, running all webapps all the time would be murder for the machine, but there should be a way to only fire up the application when it’s needed.

Of course there’s already a way of doing just that. You might have heard of it: it’s called CGI. And while it’s perfectly possible to run PSGI applications under CGI, it’s also… not quite perfect. The principal problem is that there is no persistence at all between requests (of course, with the help of mod_perl there could be, but that would defeat the purpose), so it’s not exactly snappy. Although, to be fair, it’d probably still be fast enough for most small applications. But still, it feels clunky. Plus, I’m just plain afraid that if I revert to using CGI, Sawyer will burst out of the wall like a vengeful Kool-Aid Man and throttle the life out of me. He probably wouldn’t, but I prefer not to take any chances.

So I don’t want single executions, and I don’t want perpetual running. What I’d really want is something in between: the applications disabled by default but, when a request comes along, awakened and run for as long as there is traffic. Only once the traffic has abated for a reasonable amount of time should the application be turned off once more.

The good news is that it seems that Apache’s mod_fastcgi can fire dynamic applications upon first request. If that’s the case, then the waking-up part of the job comes for free, and the shutting down is merely a question of periodically monitoring the logs and killing processes when inactivity is detected.

The bad news is that I only heard that after I was already halfway done shaving that yak my own way. So instead of cruelly dropping the poor creature right there and then, abandoning it with a punk-like half-shave, I decided to go all the way and see how a Perl alternative would look.

It’s all about the proxy

My first instinct was to go with Dancer (natch). But a quick survey of the tools available revealed something even more finely tuned to the task at hand: HTTP::Proxy. That module does exactly what it says on the tin: it proxies http requests, and allows you to fiddle with the requests and responses as they fly back and forth.

Since I own my domain, all my applications run on their own sub-domain name. With that setting, it’s quite easy to have all my sub-domains point to the port running that proxy and have the waking-up-if-required and dispatch to the real application done as the request comes in.


use HTTP::Proxy;
use HTTP::Proxy::HeaderFilter::simple;

my $proxy = HTTP::Proxy->new( port => 3000 );

my $wait_time = 5;
my $shutdown_delay = 10;

my %services = (
    'foo.babyl.ca' => $foo_config,
    'bar.babyl.ca' => $bar_config,

);

$proxy->push_filter( request => 
    HTTP::Proxy::HeaderFilter::simple->new( sub {

            my( $self, $headers, $request ) = @_;

            my $uri = $request->uri;
            my $host = $uri->host;

            my $service = $services{ $host } or die;

            $uri->host( 'localhost' );
            $uri->port( $service->port );

            unless ( $service->is_running ) {
                $service->start;
                sleep 1;
            }

            # store the latest access time
            $service->store_access_time(time);
    }),
);

$proxy->start;

With this, we already have the core of our application, and only need a few more pieces, and details to iron out.

Enter Sandman

An important one is how to detect if an application is running, and when it goes inactive. For that I went with a simple mechanism: using CHI to provide a persistent, central place to keep information about my applications. As soon as an application comes up, I store the time of the current request in its cache, and each time a new request comes in, I update the cache with the new time. That way, the existence of the cache entry tells me if the application is running, and deciding whether the application should go dormant is just a question of checking whether the last access time is old enough.


use CHI;

# not a good cache driver for the real system
# but for testing it'll do
my $chi = CHI->new(
    driver => 'File',
    root_dir => 'cache',
);

...;

# when checking if the host is running
unless ( $chi->get($host) ) {
    $service->start;
    sleep 1;
}

...;

# and storing the access time becomes
$chi->set( $host => time );

# to check periodically, we fork a sub-process 
# and we simply endlessly sleep, check, then sleep
# some more

sub start_sandman {
    return if fork;

    while( sleep $shutdown_delay ) {
        check_activity_for( $_ ) for keys %services;
    }
}

sub check_activity_for {
    my $s = shift;

    my $time = $chi->get($s);

    # no cache? assume not running
    return if !$time or time - $time <= $shutdown_delay;

    $services{$s}->stop;

    $chi->remove($s);
}
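The sandman bookkeeping is small enough to capture language-independently. Here is the same last-access logic sketched in Python, with a plain dict standing in for the CHI cache (the function names are mine, not from the post):

```python
import time

SHUTDOWN_DELAY = 10   # seconds of quiet before a service is stopped
last_access = {}      # host -> epoch seconds (stand-in for the CHI cache)

def note_request(host, now=None):
    """Called on every proxied request: refresh the host's access time."""
    last_access[host] = time.time() if now is None else now

def idle_hosts(now=None):
    """Hosts whose last request is older than the shutdown delay,
    i.e. candidates for the sandman to stop."""
    now = time.time() if now is None else now
    return [h for h, t in last_access.items() if now - t > SHUTDOWN_DELAY]

# simulated timeline: foo last seen at t=100, bar at t=105
note_request("foo.babyl.ca", now=100)
note_request("bar.babyl.ca", now=105)
print(idle_hosts(now=112))   # ['foo.babyl.ca']: quiet for 12s > 10s
```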

Minding the applications

The final remaining big piece of the puzzle is how to manage the launching and shutting down of the applications. We could do it in a variety of ways, beginning by using plain system calls. Instead, I decided to leverage the service manager Ubic. With the help of Ubic::Service::Plack, setting a PSGI application is as straightforward as one could wish for:


use Ubic::Service::Plack;

Ubic::Service::Plack->new({
    server => "FCGI",
    server_args => { listen => "/tmp/foo_app.sock",
                     nproc  => 5 },
    app      => "/home/web/apps/foo/bin/app.pl",
    port     => 4444,
});

Once the service is defined, it can be started/stopped from the CLI. And, which is more interesting for us, straight from Perl-land:


use Ubic;

my %services = (
    # sub-domain      # ubic service name
    'foo.babyl.ca' => 'webapp.foo',
    'bar.babyl.ca' => 'webapp.bar',
);

$_ = Ubic->service($_) for values %services;

# and then to start a service
$services{'foo.babyl.ca'}->start;

# or to stop it
$services{'foo.babyl.ca'}->stop;

# other goodies can be gleaned too, like the port...
$services{'foo.babyl.ca'}->port;

Now all together

And that’s all we need to get our ringleader going. Putting it all together, and tidying it up a little bit, we get:


use 5.20.0;

use experimental 'postderef';

use HTTP::Proxy;
use HTTP::Proxy::HeaderFilter::simple;

use Ubic;

use CHI;

my $proxy = HTTP::Proxy->new( port => 3000 );

my $wait_time      = 5;
my $shutdown_delay = 10;

my $ubic_directory = '/Users/champoux/ubic';

my %services = (
    'foo.babyl.ca' => 'webapp.foo',
);

$_ = Ubic->service($_) for values %services;

# not a good cache driver for the real system
# but for testing it'll do
my $chi = CHI->new(
    driver => 'File',
    root_dir => 'cache',
);


$proxy->push_filter( request => HTTP::Proxy::HeaderFilter::simple->new(sub{
            my( $self, $headers, $request ) = @_;
            my $uri = $request->uri;
            my $host = $uri->host;

            my $service = $services{ $host } or die;

            $uri->host( 'localhost' );
            $uri->port( $service->port );

            unless ( $chi->get($host) ) {
                $service->start;
                sleep 1;
            }

            # always store the latest access time
            $chi->set( $host => time );
    }),
);

start_sandman();

$proxy->start;

sub start_sandman {
    return if fork;

    while( sleep $shutdown_delay ) {
        check_activity_for( $_ ) for keys %services;
    }
}

sub check_activity_for {
    my $service = shift;

    my $time = $chi->get($service);

    # no cache? assume not running
    return if !$time or time - $time <= $shutdown_delay;

    $services{$service}->stop;

    $chi->remove($service);
}

It’s not yet completed. The configuration should go in a YAML file, we should have some more safeguards in case the cache and the real state of the application fall out of sync, and the script itself should be managed by Ubic too to make everything Circle-of-Life-perfect. Buuuuut as it is, I’d say it’s already a decent start.

Categories: DBA Blogs

Log Buffer #379, A Carnival of the Vanities for DBAs

Fri, 2014-07-11 07:34

With summer in the Northern hemisphere and winter in the Southern, bloggers are solving key problems while sitting beside the bonfire or enjoying a barbecue. This Log Buffer Edition shares a bit of both.


Oracle:

3 Key Problems To Solve If You Want A Big Data Management System

OpenWorld Update: Content Catalog NOW LIVE!

Interested in Showcasing your Solutions around Oracle Technologies at Oracle OpenWorld?

GoldenGate and Oracle Data Integrator – A Perfect Match in 12c… Part 4: Start Journalizing!

What You Need to Know about OBIEE 11.1.1.7

SQL Server:

Interoperability between Microsoft and SOA Suite 12c

This article describes a way to speed up various file operations performed by SQL Server.

The Mindset of the Enterprise DBA: Creating and Applying Standards to Our Work

Stairway to T-SQL: Beyond The Basics Level 8: Coding Shortcuts using += and -= Operators

Microsoft Azure Diagnostics Part 2: Basic Configuration of Azure Cloud Service Diagnostics

MySQL:

MySQL Enterprise Monitor 2.3.18 has been released

Harnessing the power of master/slave clusters to operate data-driven businesses on MySQL

NoSQL Now! Conference – coming to San Jose, CA this August!

Manually Switch Slaves to new Masters in mySQL 5.6 (XTRADB 5.6)

How to Configure ClusterControl to run on nginx

Categories: DBA Blogs

Comparing CPU Throughput of Azure and AWS EC2

Thu, 2014-07-10 08:11

After observing CPU core sharing with Amazon Web Services EC2, I thought it would be interesting to see if Microsoft Azure platform exhibits the same behavior.

Signing up for Azure’s 30-day trial gives $200 in credit to use over the next 30-day period: more than enough for this kind of testing. I created a new virtual machine using the “quick create” option with Oracle Linux, choosing a 4-core “A3” standard instance.

I must say I like the machine naming under the built-in “cloudapp.net” DNS that Azure uses: no mucking around with IP addresses. VM provisioning definitely takes longer than on AWS, though no more than a few minutes. And speaking of IP addresses, the assigned addresses start with 191.236., a range allocated to Microsoft’s Brazilian subsidiary through the Latin American LACNIC registry due to the exhaustion of North American IPv4 addresses.

Checking out the CPU specs as reported to the OS:

[azureuser@marc-cpu ~]$ egrep '(processor|model name|cpu MHz|physical id|siblings|core id|cpu cores)' /proc/cpuinfo
processor       : 0
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.990
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
processor       : 1
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.990
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 4
processor       : 2
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.990
physical id     : 0
siblings        : 4
core id         : 2
cpu cores       : 4
processor       : 3
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.990
physical id     : 0
siblings        : 4
core id         : 3
cpu cores       : 4

2.2GHz rather than 2.6GHz, but otherwise the same family and architecture as the E5-2670 under AWS. Identified as a single-socket, 4-core processor, without hyperthreads at all.
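That hyperthreading call can be made mechanically from /proc/cpuinfo: if `siblings` equals `cpu cores` for every physical package, no hardware threads are exposed. A small Python sketch of that check (assuming the usual field order, where `physical id` precedes `siblings`):

```python
def hyperthreading_exposed(cpuinfo_text: str) -> bool:
    """True if any physical package reports more siblings than cores."""
    siblings, cores = {}, {}
    phys = None
    for line in cpuinfo_text.splitlines():
        if ":" not in line:
            continue
        key, _, val = (p.strip() for p in line.partition(":"))
        if key == "physical id":
            phys = val
        elif key == "siblings":
            siblings[phys] = int(val)
        elif key == "cpu cores":
            cores[phys] = int(val)
    return any(siblings[p] > cores[p] for p in siblings)

# abbreviated version of the output above: 4 siblings over 4 cores
sample = """\
processor : 0
physical id : 0
siblings : 4
cpu cores : 4
"""
print(hyperthreading_exposed(sample))  # False: no hyperthreads exposed
```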

Running the tests
[azureuser@marc-cpu ~]$ taskset -pc 0 $$
pid 1588's current affinity list: 0-3
pid 1588's new affinity list: 0
[azureuser@marc-cpu ~]$ dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null
2170552320 bytes (2.2 GB) copied, 36.9319 s, 58.8 MB/s
[azureuser@marc-cpu ~]$ for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
2170552320 bytes (2.2 GB) copied, 72.8379 s, 29.8 MB/s
2170552320 bytes (2.2 GB) copied, 73.6173 s, 29.5 MB/s

Pretty low; that’s half the throughput we saw on AWS, albeit with a slower clock speed here.

[azureuser@marc-cpu ~]$ taskset -pc 0,1 $$
pid 1588's current affinity list: 0
pid 1588's new affinity list: 0,1
[azureuser@marc-cpu ~]$ for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
[azureuser@marc-cpu ~]$ 2170552320 bytes (2.2 GB) copied, 36.4285 s, 59.6 MB/s
2170552320 bytes (2.2 GB) copied, 36.7957 s, 59.0 MB/s

[azureuser@marc-cpu ~]$ taskset -pc 0,2 $$
pid 1588's current affinity list: 0,1
pid 1588's new affinity list: 0,2
[azureuser@marc-cpu ~]$ for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
[azureuser@marc-cpu ~]$ 2170552320 bytes (2.2 GB) copied, 36.3998 s, 59.6 MB/s
2170552320 bytes (2.2 GB) copied, 36.776 s, 59.0 MB/s

Pretty consistent results, so no core sharing, but running considerably slower than we saw with AWS.
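The test being run here boils down to a simple heuristic: pin two concurrent workers and compare their per-stream throughput against the single-stream baseline. If each stream keeps roughly the baseline rate, the CPUs are independent physical cores; if each gets about half, they share one. A sketch of that decision rule, fed the MB/s figures from the runs above (the 15% tolerance is my choice, not from the post):

```python
def shares_core(single_mbs: float, pair_mbs: list, tolerance: float = 0.15) -> bool:
    """Heuristic: two pinned streams on distinct physical cores should each
    sustain ~the single-stream rate; on a shared core each gets ~half."""
    avg = sum(pair_mbs) / len(pair_mbs)
    return avg < single_mbs * (1 - tolerance)

# both streams pinned to cpu 0: 58.8 MB/s alone, 29.8/29.5 MB/s together
print(shares_core(58.8, [29.8, 29.5]))   # True: contention on one core
# cpus 0 and 2: throughput holds, so separate physical cores
print(shares_core(58.8, [59.6, 59.0]))   # False
```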

Kicking off 20 runs in a row:

[azureuser@marc-cpu ~]$ taskset -pc 0-3 $$
pid 1588's current affinity list: 0,2
pid 1588's new affinity list: 0-3
[azureuser@marc-cpu ~]$ for run in {1..20}; do
>  for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2>> output | gzip -c > /dev/null & done
> wait
> done
...
[azureuser@marc-cpu ~]$ cat output | awk '/copied/ {print $8}' | sort | uniq -c
      1 59.1
      4 59.2
      1 59.3
      2 59.4
      2 59.5
      8 59.6
     12 59.7
      7 59.8
      3 59.9

We get very consistent results, between 59.1 and 59.9 MB/s.

Results from “top” while running:

cat > ~/.toprc <<-EOF
RCfile for "top with windows"           # shameless braggin'
Id:a, Mode_altscr=0, Mode_irixps=1, Delay_time=3.000, Curwin=0
Def     fieldscur=AEHIOQTWKNMbcdfgjplrsuvyzX
        winflags=25913, sortindx=10, maxtasks=2
        summclr=1, msgsclr=1, headclr=3, taskclr=1
Job     fieldscur=ABcefgjlrstuvyzMKNHIWOPQDX
        winflags=62777, sortindx=0, maxtasks=0
        summclr=6, msgsclr=6, headclr=7, taskclr=6
Mem     fieldscur=ANOPQRSTUVbcdefgjlmyzWHIKX
        winflags=62777, sortindx=13, maxtasks=0
        summclr=5, msgsclr=5, headclr=4, taskclr=5
Usr     fieldscur=ABDECGfhijlopqrstuvyzMKNWX
        winflags=62777, sortindx=4, maxtasks=0
        summclr=3, msgsclr=3, headclr=2, taskclr=3
EOF
[azureuser@marc-cpu ~]$  top -b -n20 -U azureuser
...
top - 14:38:41 up 2 min,  2 users,  load average: 2.27, 0.78, 0.28
Tasks: 205 total,   3 running, 202 sleeping,   0 stopped,   0 zombie
Cpu0  : 95.4%us,  4.6%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  : 94.4%us,  5.1%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.5%si,  0.0%st
Cpu3  :  0.0%us,  0.3%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1606 azureuse  20   0  4292  800  400 R 97.0  0.0   0:03.49 gzip
 1604 azureuse  20   0  4292  796  400 R 96.7  0.0   0:03.50 gzip

top - 14:38:44 up 2 min,  2 users,  load average: 2.25, 0.80, 0.29
Tasks: 205 total,   3 running, 202 sleeping,   0 stopped,   0 zombie
Cpu0  : 94.4%us,  5.1%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.5%si,  0.0%st
Cpu1  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  : 72.3%us,  3.9%sy,  0.0%ni, 23.4%id,  0.0%wa,  0.0%hi,  0.4%si,  0.0%st
Cpu3  : 12.0%us,  0.7%sy,  0.0%ni, 85.6%id,  1.4%wa,  0.0%hi,  0.4%si,  0.0%st

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1604 azureuse  20   0  4292  796  400 R 96.8  0.0   0:06.42 gzip
 1606 azureuse  20   0  4292  800  400 R 96.4  0.0   0:06.40 gzip

top - 14:38:47 up 2 min,  2 users,  load average: 2.25, 0.80, 0.29
Tasks: 205 total,   3 running, 202 sleeping,   0 stopped,   0 zombie
Cpu0  : 94.9%us,  5.1%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  9.7%us,  0.3%sy,  0.0%ni, 89.7%id,  0.0%wa,  0.0%hi,  0.3%si,  0.0%st
Cpu2  : 51.8%us,  2.8%sy,  0.0%ni, 45.4%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  : 17.9%us,  1.4%sy,  0.0%ni, 80.6%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1604 azureuse  20   0  4292  796  400 R 96.5  0.0   0:09.34 gzip
 1606 azureuse  20   0  4292  800  400 R 95.5  0.0   0:09.29 gzip

It’s using full CPUs and all from gzip, so no large system overhead here. Also, “%st”, time reported “stolen” by the hypervisor, is zero. We’re simply getting half the throughput of AWS.

Basic instances

In addition to standard instances, Microsoft makes available basic instances, which claim to offer “similar machine configurations as the Standard tier of instances offered today (Extra Small [A0] to Extra Large [A4]). These instances will cost up to 27% less than the corresponding instances in use today (which will now be called “Standard”) and do not include load balancing or auto-scaling, which are included in Standard” (http://azure.microsoft.com/blog/2014/03/31/microsoft-azure-innovation-quality-and-price/)

Let's have a look at throughput here by creating a basic A3 instance, “marc-cpu-basic”, that otherwise exactly matches the marc-cpu instance created earlier.

[azureuser@marc-cpu-basic ~]$ egrep '(processor|model name|cpu MHz|physical id|siblings|core id|cpu cores)' /proc/cpuinfo
processor       : 0
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.993
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
processor       : 1
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.993
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 4
processor       : 2
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.993
physical id     : 0
siblings        : 4
core id         : 2
cpu cores       : 4
processor       : 3
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.993
physical id     : 0
siblings        : 4
core id         : 3
cpu cores       : 4

CPU specs are identical to marc-cpu. Running the same tests:

[azureuser@marc-cpu-basic ~]$ taskset -pc 0 $$
pid 1566's current affinity list: 0-3
pid 1566's new affinity list: 0
[azureuser@marc-cpu-basic ~]$  dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null
2170552320 bytes (2.2 GB) copied, 54.6678 s, 39.7 MB/s
for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
[azureuser@marc-cpu-basic ~]$ for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
2170552320 bytes (2.2 GB) copied, 107.73 s, 20.1 MB/s
2170552320 bytes (2.2 GB) copied, 107.846 s, 20.1 MB/s

Now that’s very slow: even with CPU specs identical to marc-cpu’s, marc-cpu-basic comes in with about 33% less throughput.

Doing 20 runs in a row:

[azureuser@marc-cpu-basic ~]$ taskset -pc 0-3 $$
pid 1566's current affinity list: 0
pid 1566's new affinity list: 0-3
[azureuser@marc-cpu-basic ~]$ for run in {1..20}; do
> for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2>> output | gzip -c > /dev/null & done
> wait
> done
...
[azureuser@marc-cpu-basic ~]$ cat output | awk '/copied/ {print $8}' | sort | uniq -c
      4 40.4
     15 40.5
     14 40.6
      7 40.7

Very consistent results, but consistently slow. They do show that cores aren’t being shared, but throughput is lower than even a shared core under AWS.

Wrapping up

Under this simple gzip test, we are testing CPU integer performance. The Azure standard instance got half the throughput of the equivalent AWS instance, in spite of a clock speed only 15% slower. But the throughput was consistent: no drops when running on adjacent cores. The basic instance was a further 33% slower than a standard instance, in spite of having the same CPU configuration.

Under Azure, we simply aren’t getting a full physical core’s worth of throughput. Perhaps the hypervisor is capping throughput, and capping even lower for basic instances? Or maybe the actual CPU is different than the E5-2660 reported? For integer CPU-bound workloads like our gzip test, we would need to purchase at least twice as much capacity under Azure than AWS, making Azure considerably more expensive as a platform.
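The relative figures quoted in this summary follow directly from the measured numbers; as a back-of-envelope check (no new data, just the medians from the 20-run sweeps and the stated clock speeds):

```python
standard = 59.7   # MB/s, median of the 20 standard-A3 runs
basic    = 40.6   # MB/s, median of the 20 basic-A3 runs
clock_azure, clock_aws = 2.2, 2.6   # GHz, E5-2660 vs E5-2670

basic_penalty = 1 - basic / standard        # basic vs standard throughput gap
clock_gap     = 1 - clock_azure / clock_aws # clock-speed difference alone

print(f"basic vs standard: {basic_penalty:.0%} lower throughput")  # 32%
print(f"clock speed gap:   {clock_gap:.0%}")                       # 15%
```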

Categories: DBA Blogs