
Pythian Group

Official Pythian Blog - Love Your Data

Thanks Oracle for R12.AD.C.DELTA.6

Wed, 2015-04-29 06:18

When reading through the release notes of the latest Oracle E-Business Suite R12.2 AD.C.Delta.6 patch in note 1983782.1, I wondered what they meant by “Simplification and enhancement of adop console messages”. I realized what I had been missing after I applied the AD.C.Delta.6 patch: the format of the console messages changed drastically. To be honest, the old messages printed by the adop command reminded me of a program where somebody forgot to turn off the debug feature; they were simply not easy to read. AD.C.Delta.6 brings a fresh layout to the console messages that is more readable and easier to follow. You can see for yourself in the snippets below:

### AD.C.Delta.5 ###

$ adop phase=apply patches=19197270 hotpatch=yes

Enter the APPS password:
Enter the SYSTEM password:
Enter the WLSADMIN password:

 Please wait. Validating credentials...


RUN file system context file: /u01/install/VISION/fs2/inst/apps/VISION_ebs/appl/admin/VISION_ebs.xml

PATCH file system context file: /u01/install/VISION/fs1/inst/apps/VISION_ebs/appl/admin/VISION_ebs.xml
Execute SYSTEM command : df /u01/install/VISION/fs1

************* Start of  session *************
 version: 12.2.0
 started at: Fri Apr 24 2015 13:47:58

APPL_TOP is set to /u01/install/VISION/fs2/EBSapps/appl
[START 2015/04/24 13:48:04] Check if services are down
  [STATEMENT]  Application services are down.
[END   2015/04/24 13:48:09] Check if services are down
[EVENT]     [START 2015/04/24 13:48:09] Checking the DB parameter value
[EVENT]     [END   2015/04/24 13:48:11] Checking the DB parameter value
  Using ADOP Session ID from currently incomplete patching cycle
  [START 2015/04/24 13:48:23] adzdoptl.pl run
    ADOP Session ID: 12
    Phase: apply
    Log file: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/adop_20150424_134739.log
    [START 2015/04/24 13:48:30] apply phase
        Calling: adpatch  workers=4   options=hotpatch     console=no interactive=no  defaultsfile=/u01/install/VISION/fs2/EBSapps/appl/admin/VISION/adalldefaults.txt patchtop=/u01/install/VISION/fs_ne/EBSapps/patch/19197270 driver=u19197270.drv logfile=u19197270.log
        ADPATCH Log directory: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_134739/VISION_ebs/19197270/log
        [EVENT]     [START 2015/04/24 13:59:45] Running finalize since in hotpatch mode
        [EVENT]     [END   2015/04/24 14:00:10] Running finalize since in hotpatch mode
          Calling: adpatch options=hotpatch,nocompiledb interactive=no console=no workers=4 restart=no abandon=yes defaultsfile=/u01/install/VISION/fs2/EBSapps/appl/admin/VISION/adalldefaults.txt patchtop=/u01/install/VISION/fs2/EBSapps/appl/ad/12.0.0/patch/115/driver logfile=cutover.log driver=ucutover.drv
          ADPATCH Log directory: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_134739/VISION_ebs/log
        [EVENT]     [START 2015/04/24 14:01:32] Running cutover since in hotpatch mode
        [EVENT]     [END   2015/04/24 14:01:33] Running cutover since in hotpatch mode
      [END   2015/04/24 14:01:36] apply phase
      [START 2015/04/24 14:01:36] Generating Post Apply Reports
        [EVENT]     [START 2015/04/24 14:01:38] Generating AD_ZD_LOGS Report
          [EVENT]     Report: /u01/install/VISION/fs2/EBSapps/appl/ad/12.0.0/sql/ADZDSHOWLOG.sql

          [EVENT]     Output: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_134739/VISION_ebs/adzdshowlog.out

        [EVENT]     [END   2015/04/24 14:01:42] Generating AD_ZD_LOGS Report
      [END   2015/04/24 14:01:42] Generating Post Apply Reports
    [END   2015/04/24 14:01:46] adzdoptl.pl run
    adop phase=apply - Completed Successfully

    Log file: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/adop_20150424_134739.log

adop exiting with status = 0 (Success)
### AD.C.Delta.6 ###

$ adop phase=apply patches=19330775 hotpatch=yes

Enter the APPS password:
Enter the SYSTEM password:
Enter the WLSADMIN password:

Validating credentials...

Initializing...
    Run Edition context  : /u01/install/VISION/fs2/inst/apps/VISION_ebs/appl/admin/VISION_ebs.xml
    Patch edition context: /u01/install/VISION/fs1/inst/apps/VISION_ebs/appl/admin/VISION_ebs.xml
Reading driver file (up to 50000000 bytes).
    Patch file system freespace: 181.66 GB

Validating system setup...
    Node registry is valid.
    Application services are down.
    [WARNING]   ETCC: The following database fixes are not applied in node ebs
                  14046443
                  14255128
                  16299727
                  16359751
                  17250794
                  17401353
                  18260550
                  18282562
                  18331812
                  18331850
                  18440047
                  18689530
                  18730542
                  18828868
                  19393542
                  19472320
                  19487147
                  19791273
                  19896336
                  19949371
                  20294666
                Refer to My Oracle Support Knowledge Document 1594274.1 for instructions.

Checking for pending adop sessions...
    Continuing with the existing session [Session id: 12]...

===========================================================================
ADOP (C.Delta.6)
Session ID: 12
Node: ebs
Phase: apply
Log: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/adop_20150424_140643.log
===========================================================================

Applying patch 19330775 with adpatch utility...
    Log: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_140643/VISION_ebs/19330775/log/u19330775.log

Running finalize actions for the patches applied...
    Log: @ADZDSHOWLOG.sql "2015/04/24 14:15:09"

Running cutover actions for the patches applied...
    Spawning adpatch parallel workers to process CUTOVER DDLs in parallel
    Log: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_140643/VISION_ebs/log/cutover.log
    Performing database cutover in QUICK mode

Generating post apply reports...

Generating log report...
    Output: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_140643/VISION_ebs/adzdshowlog.out

adop phase=apply - Completed Successfully


adop exiting with status = 0 (Success)

So what are you waiting for, fellow Apps DBAs? Go ahead and apply the new AD Delta update to your R12.2 EBS instances. I am really eager to try out the other new AD.C.Delta.6 features, especially “Online Patching support for single file system on development or test systems”.

Categories: DBA Blogs

Weblogic patch rollback issues in Oracle EBS R12.2

Tue, 2015-04-28 06:36

When you try to rollback a weblogic patch in Oracle EBS R12.2, you might run into issues similar to below:

$ ./bsu.sh -remove -patchlist=YIJF -prod_dir=/u01/install/VISION/fs2/FMW_Home/wlserver_10.3 -verbose
Checking for conflicts..
No conflict(s) detected

Starting removal of Patch ID: YIJF
Restoring /d1/V1223B5B/fs1/FMW_Home/modules/com.bea.core.datasource6.binding_1.10.0.0.jar
from /u01/install/VISION/fs2/FMW_Home/patch_wls1036/backup/backup.jar
Result: Failure
Failure condition follows:
An error occured while removing patches – run with logging to obtain detailed information
java.io.FileNotFoundException: /d1/V1223B5B/fs1/FMW_Home/modules/com.bea.core.datasource6.binding_1.10.0.0.jar (No such file or directory)
Result: Failure
Failure condition follows:
An error occured while removing patches – run with logging to obtain detailed information
java.io.FileNotFoundException: /d1/V1223B5B/fs1/FMW_Home/modules/com.bea.core.datasource6.binding_1.10.0.0.jar (No such file or directory)

If you look at the errors, bsu is trying to restore the jar file to a location that does not exist on the current file system. This error was captured in a VISION instance created from an Oracle VM Template, so the file path appears to belong to the Oracle internal system where the VM template was built. In this particular case the issue does little harm to the instance.

Now consider this hypothetical situation: you are rolling back a WebLogic patch on the patch file system (fs2) that was previously applied to fs1 in a production Oracle E-Business Suite R12.2 environment. The patch history on the patch file system (fs2) will still contain file paths from fs1, so when you roll back the patch it will restore files to the RUN file system (fs1). This can cause havoc and a big outage for online users.

As of now, the bsu utility doesn’t seem to handle this kind of situation. So, all Oracle EBS Apps DBAs out there, make sure to check the file below for correct paths before you try to roll back any WebLogic patch:

$ vi $FMW_HOME/patch_wls1036/registry/patch-backup.xml
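
Before rolling back, it is worth listing every path recorded in that registry file and flagging anything that falls outside your current Middleware home. A minimal sketch, assuming the backup paths appear as quoted attribute values in the XML (adjust the pattern to whatever your patch-backup.xml actually contains):

$ # Show recorded backup/restore paths that do not live under the current FMW_HOME
$ grep -o '"/[^"]*"' $FMW_HOME/patch_wls1036/registry/patch-backup.xml | tr -d '"' | grep -v "^$FMW_HOME" | sort -u

If that command prints anything, those files would be restored to locations outside your current file system, which is exactly the situation described above.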

A permanent solution to this problem would be to update the bsu patch utility so that it never restores or touches files outside the WebLogic home. Hopefully Oracle Development notices this issue before it affects a customer’s EBS production instance.

Categories: DBA Blogs

Log Buffer #420: A Carnival of the Vanities for DBAs

Mon, 2015-04-27 06:05

This Log Buffer Edition brings a few very insightful blog posts from the arena of Oracle, SQL Server and MySQL.

Oracle:

  • How to Increase Performance With Business Events in Fusion Applications
  • Advanced Oracle Troubleshooting Guide – Part 12: control file parallel reads causing enq: SQ – contention waits?
  • Changing REVERSE Transformations in Oracle Data Miner
  • Refresh Multiple Materialized Views in One Go: No Data Found
  • Error deploying Oracle Composite with wrong encoding wsdl

SQL Server:

  • How to Recover a SQL Server Login Password
  • Understanding Cross-Database Transactions in SQL Server
  • Adding Slicers to a Reporting Services Report
  • Continue a Foreach loop after an error in a SQL Server Integration Services package
  • Automating Image-Based Deployment of SQL Server on Azure IaaS VMs – Preparing OS Image

MySQL:

  • What Should I Monitor, and How Should I Do It?
  • How MySQL will act if there is no space left on hard disk? To answer this question, let’s test it.
  • Configuring PAM Authentication and User Mapping with MariaDB
  • MySQL Enterprise Database Firewall — Control and Monitor SQL Statement Executions
  • MariaDB : Bug when add index on Partition table
Categories: DBA Blogs

Pillars of PowerShell: Profiling

Fri, 2015-04-24 06:53
Introduction

This is the fourth blog post continuing the series on the Pillars of PowerShell. The previous post in the series are:

  1. Interacting
  2. Commanding
  3. Debugging
Profiles

This is something I mentioned in the second post, and it can be a great way to keep up with those one-liners you use most often in your work. A profile in PowerShell is like using startup scripts in an Active Directory environment: you can “pre-run” things on a domain computer at startup or when a user logs into the machine. In a PowerShell profile you can “pre-load” information, modules, custom functions, or any command you want to execute in the PowerShell console. There is a separate profile for the console and another for the PowerShell ISE. Your profile is basically a PowerShell script saved into a specific location under your Documents folder, and the path to this profile is kept in a built-in variable called $PROFILE.

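Echoing the variable shows the path to the script; on a typical Windows system it returns something like this (the user folder will, of course, differ):

$PROFILE
C:\Users\melton_admin\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1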

I am using a Windows Azure VM that I just built, so I have not created any profiles on this machine. The path is kept in this variable, but that does not mean the file actually exists. We will need to create it, and the easiest way to do that is with the New-Item cmdlet, which can create files or folders. You can execute this one-liner to generate the PowerShell script in the path shown above:

New-Item $PROFILE -ItemType File -Force

Now, from here you can use another cmdlet, Invoke-Item, to open the file in the default editor set to open any “.ps1” file on your machine. This might be Notepad, or you can set it to be the PowerShell ISE as well. Just execute the cmdlet followed by the $PROFILE variable (e.g. Invoke-Item $PROFILE).

One of the things I picked up on when I started using my profile more often was that you can actually format your console prompt. More specifically, I like to shorten the “PS C:\Users\melton_admin” portion; if you start working in directories that are 3 or 4 folders deep, this can take up a good portion of your prompt. I came across a function whose original poster I truthfully cannot find, so my apologies for not attributing it.

function prompt
{
    if ($host.UI.RawUI.CursorPosition.Y -eq 0) {
        "<$pwd> `n`r" + "PS[" + $host.UI.RawUI.CursorPosition.Y + "]> "
    }
    else {
        "PS[" + $host.UI.RawUI.CursorPosition.Y + "]> "
    }
}

Any function you save in your profile can be called at any time in the PowerShell console once the profile is loaded. However, if I want an action to take effect while the profile loads, I simply call the function at the end of the profile script. I add these two lines and make sure they are always the last two lines of my profile; anything else I add goes between the function above and these two lines:

prompt;
clear;

I use the clear command (just like using cls at the DOS prompt) to get rid of any output that a function or command in my profile may cause; it just starts me out on a fresh, clean slate.

If you want to test your profile script you can force it to load into your current session by running .$profile (that is, type “period $profile” and hit Enter). Take note that since I use the clear command in my profile, if any cmdlet or one-liner I add outputs an error you will not see it. When I run into issues like this I simply comment the offending line out of my profile. You can put comments into your script using the pound sign (#); putting it before a command causes that line to be ignored and not run.
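
Putting the pieces from this post together, a minimal profile might look something like the sketch below (the module name and the helper function are placeholders, not recommendations):

# Sample profile sketch
Import-Module SqlServer            # placeholder: pre-load whatever modules you use daily

function Get-Greeting {            # placeholder custom function
    Write-Output "Profile loaded for $env:USERNAME"
}

# ...the prompt function shown earlier goes here...

# Always the last two lines
prompt;
clear;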

Set-ExecutionPolicy

PowerShell is locked down by default from a security standpoint, so in certain operating system environments, when you try to run your profile script above, you may get an error telling you that the execution of scripts is disabled on this system.


This means pretty much what it says: execution of scripts is disabled. To enable it you need to use the Set-ExecutionPolicy cmdlet with a few parameters. You can find the documentation for this by looking at “about_Execution_Policies” in PowerShell or by following the link included in the error message. The documentation explains the various options and policies you can set. The command below will allow you to execute scripts in your console and let it load your profile script:

Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy RemoteSigned
Summary

In this post I pointed out the following cmdlets and concepts:

  • New-Item
  • Invoke-Item
  • Commenting in your script
  • Set-ExecutionPolicy

These are fairly basic areas of PowerShell, and putting each one into your favorite search engine should lead you to a plentiful list of reading material. This post by no means covers everything you can do with profiles; it is simply meant to get you started, and hopefully excited about what can be done.

Categories: DBA Blogs

Keeping Cassandra Alive

Mon, 2015-04-20 12:28

Troubleshooting Cassandra under pressure

This is the second blog post in the series, and it is a bit more technical than the first one. I will explain some things that can be done to keep a cluster/server running when you are having problems in that cluster.

There were a lot of changes in Cassandra over the last 4 years (from 0.8 to 2.1), so I will refrain from discussing troubleshooting problems that affect some specific versions. Also, this is the kind of troubleshooting you need when you can’t “add a node”.

Why can’t I just add a node? Well, if you aren’t on vnodes and you didn’t pre-calculate the token ranges, adding a node is a big effort. Other constraints may also apply, like budget or delivery time for hardware (if you are on bare metal), plus rack capacity, power constraints, etc.

Now you may say:

“Ok, we can’t add a node! What should we do?! We have a storm coming!”

So, I did navigate through that storm, and it’s not an easy task, but it’s doable! First thing: you have to know what you have, that is critical! You also need to know where you can take more damage.

Let’s assume you have the following situation, and what I recommend for it:

  • Heavy Write Cluster, Low Read

Now let’s define “storm”: a storm is not when Cassandra fails, it’s an unanticipated load increase or a disaster. What happens is that you have more load than your planned capacity (either because of node failures or because of a sudden traffic increase). This will increase your resource usage to a point where your machines will start to die.

Let’s understand what can cause a Cassandra process to die, and probably the machine as well (if you OOM and you didn’t configure swap… I warned you!), for the scenario described above.

  1. More data to the commitlog = more I/O pressure (disregard this if you have the commitlog on a different HDD)
  2. Data is written to memtables = Memory is used
  3. Memtables reach thresholds faster, get flushed to disk = I/O pressure
  4. Compaction starts faster and frequently = I/O pressure, CPU pressure
  5. Too much I/O: compaction can’t compact fast enough and the memtables aren’t flushing fast enough = memory not being released.
  6. Too much memory usage, JVM triggers GC more frequently = CPU pressure
  7. JVM can’t release memory = OOM
  8. OOM = PUM! Node dies (if you are “lucky” kernel will kill Cassandra)

And I didn’t even go through the hints that would be stored as nodes become unresponsive and sent out once they come back online.

So now we know where our pain points are. Let’s understand them and see what we can do about it:

  • Commitlog – Let’s just assume you have this on separate HDD, and don’t do anything about it (after all it’s your safeguard).
  • Memtables – We can control how often they are flushed. It is a possible tweak point. Although it requires a Cassandra restart for the changes to produce an effect.
  • Compaction – This we can control via nodetool; we can even disable it entirely in later versions.
  • JVM GC – We can change settings, but difficult to tweak and a restart is needed.
  • Swap – We can play a bit here if we do have a swap partition.
  • Dirty_ratio – How often the data is actually written to the HDD. This can put your data at risk, but also can help.
  • Replication Factor – this can be changed on the fly, will help by having less pressure on the nodes.

So, what to do? Where to start? It depends on the scenario, case by case. I would probably let my read performance suffer to keep the writes coming in. The easiest way to allow that is to drop reads to CL = ONE, which sometimes looks like the fast and easy option. But if your writes are not using QUORUM and/or you have read_repair enabled, you will actually spread more writes (with RF > 1). I would pick compaction as my first target; you can always try to get it back up to pace later (re-enable it, increase compaction throughput). Another option is to increase dirty_ratio and risk losing data (trusting the commitlogs plus RF > 1 helps avoid actual loss), which gives your HDDs more breathing room until the cluster recovers.
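
To illustrate the compaction and dirty_ratio levers mentioned above, here is a minimal sketch. The nodetool subcommands exist in recent Cassandra versions, but the specific values (a keyspace named my_ks, 16 MB/s of throughput, a dirty_ratio of 80) are assumptions you would tune for your own cluster:

$ # Temporarily stop automatic compaction on a keyspace to relieve I/O and CPU pressure
$ nodetool disableautocompaction my_ks

$ # Once the storm passes, re-enable it and cap the throughput while the cluster catches up
$ nodetool enableautocompaction my_ks
$ nodetool setcompactionthroughput 16

$ # Let the kernel hold more dirty pages before forcing writeback (more data at risk in memory)
$ sudo sysctl -w vm.dirty_ratio=80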

But every case is its own case. I will talk about my own case, problems, and resolutions this Wednesday at the Datastax Day in London! Feel free to join me!

 

Categories: DBA Blogs

Pillars of PowerShell: Debugging

Mon, 2015-04-20 12:09
Introduction

This is the third blog post continuing the series on the Pillars of PowerShell. The first two posts are:

  1. Interacting
  2. Commanding

We are going to take a bit of a jump and dig into a particular topic that I think is better to go over up front instead of later. In this post I want to go over a few ways you can debug scripts or just issues in PowerShell. This is a topic that can get very advanced and make for a very long blog post, so instead of trying to put all of that in one post, I will share a few links at the end that point you to some of the deeper material on debugging.

Pillar 3: Debugging

When it comes to writing scripts or developing T-SQL procedures, you will generally see folks use print statements to either check where the processing is at in the script or to output some “additional” information. PowerShell is no different; it offers cmdlets that let you send output to various destinations and even use it to make decisions. One of the main ones I like to use when I write scripts is Write-Verbose. You may see some folks use Write-Host in their scripts, and all I can say to that is, “be kind to puppies”. The basic gist is that Write-Host outputs plain text and will always output text unless you comment it out or remove it from your script. With Write-Verbose you can have that information output only when a parameter switch is used, rightly called -Verbose. This switch is included in most built-in cmdlets provided by Microsoft. If you want to support it in your own script or function you simply need to include this at the top:

[CmdletBinding()]
Param()

So in the example below you can see that both functions will never output the Write-Verbose cmdlet when they are called:

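A minimal sketch of the two functions (only the names Test-NoVerbose and Test-WithVerbose come from the example; the bodies and messages are assumptions):

function Test-NoVerbose
{
    # No [CmdletBinding()], so the -Verbose common parameter is not wired up here
    Write-Verbose "You will not see this message"
}

function Test-WithVerbose
{
    [CmdletBinding()]
    Param()
    # The -Verbose switch now controls whether this message is shown
    Write-Verbose "You will only see this message when -Verbose is used"
}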

The difference you will see is that Test-NoVerbose does not do anything when you include the verbose switch, where Test-WithVerbose will:

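A hypothetical session illustrating that behavior:

Test-NoVerbose -Verbose
Test-WithVerbose -Verbose
VERBOSE: You will only see this message when -Verbose is used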

So in cases where other people may be using your scripts, this feature helps keep the output clean unless you need it for debugging. I tend to use it most often when I am working on long scripts where I initially want to know what is going on as they run. If I ever have to come back to a script for debugging I can just use the switch, versus the normal execution which doesn’t need all that output.

Errors

They are going to happen, it is inevitable in your scripting journey that at some point you are going to have an error. You cannot always prepare for every error but you can help in collecting as much information about an error to help in debugging. Just like you would handle errors in T-SQL using a TRY/CATCH block, you can do the same in PowerShell.
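
As a minimal sketch of a PowerShell try/catch (the file path here is made up purely for illustration):

try {
    # -ErrorAction Stop turns a non-terminating error into one the catch block can see
    Get-Content 'C:\Temp\missing_file.txt' -ErrorAction Stop
}
catch {
    # $_ holds the current error record inside the catch block
    Write-Verbose "Caught: $($_.Exception.Message)" -Verbose
}
finally {
    Write-Verbose "This block runs whether or not an error occurred." -Verbose
}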

PowerShell offers a variable that is available in every session you open or run, called $Error. [The dollar sign in PowerShell denotes a variable.] This variable holds records of the errors that have occurred in your session, including those raised by your scripts. There are other errors or exceptions that can be thrown by .NET objects, which work a bit differently in how you capture them; I will refer you to Allen White’s post on Handling Errors in PowerShell for a good example.

Summary

Debugging is one of those topics that can fill a 3-day course, so one blog post is obviously not going to cover all the information you might need. I came across a good blog post by the PowerShell Team on Advanced Debugging in PowerShell that is worth a read for anyone wanting to get involved with PowerShell scripting.

Categories: DBA Blogs

Log Buffer #419: A Carnival of the Vanities for DBAs

Fri, 2015-04-17 10:51

This Log Buffer Edition covers Oracle, MySQL, and SQL Server blog posts from around the world.

Oracle:

  • Why the Internet of Things should matter to you
  • Modifying Sales Behavior Using Oracle SPM – Written by Tyrice Johnson
  • SQLcl: Run a Query Over and Over, Refresh the Screen
  • Data Integration Tips: ODI 12.1.3 – Convert to Flow
  • JRE 1.8.0_45 Certified with Oracle E-Business Suite

SQL Server:

  • What’s this, a conditional WHERE clause that doesn’t use dynamic SQL?
  • The job of a DBA requires a fusion of skill and knowledge. To acquire this requires a craftsman mindset. Craftsmen find that the better they get at the work, the more enjoyable the work gets, and the more successful they become.
  • Using SQL to perform cluster analysis to gain insight into data with unknown groups
  • There are times when you don’t want to return a complete set of records. When you have this kind of requirement to only select the TOP X number of items, Transact SQL (TSQL) has the TOP clause to meet your needs.
  • Spatial Data in SQL Server has special indexing because it has to perform specialised functions.

MySQL:

  • Profiling MySQL queries from Performance Schema
  • How to Easily Identify Tables With Temporal Types in Old Format!
  • The Perfect Server – CentOS 7.1 with Apache2, Postfix, Dovecot, Pure-FTPD, BIND and ISPConfig 3
  • Database Security – How to fully SSL-encrypt MySQL Galera Cluster and ClusterControl
  • MDX: retrieving the entire hierarchy path with Ancestors()

Categories: DBA Blogs

Ever Wondered How Pythian is Kind of Like a Fire Truck?

Tue, 2015-04-14 06:10

I have.

Coming from the world of selling fire trucks, I’m used to selling necessary solutions to customers in need. The stakes are high. If the truck doesn’t perform, best case scenario it’s a false alarm. Worst case scenario, someone, or many people, die.

Let me tell you a bit about fire trucks.

A lot of people think that a fire truck is a fire truck. That there is some factory where fire trucks are made, carbon copies of one another, varying only in what they carry – water, a pump, a ladder. That’s not the case. Every truck is custom engineered, designed, and manufactured from scratch. Things can go wrong. In a world where response time is everything, you don’t want something to go wrong. Not with the fire truck. Not when everything else is going wrong. Not when someone is trapped in their vehicle. Not when a house is burning down.

For the past five years I have been selling disaster management systems. There has been a clear, immediate, pressing need from my customers. I loved the urgency, I fed off that energy, helping people in charge of saving lives come up with solutions that help them do just that. When first walking into Pythian, I didn’t understand the importance of data, I didn’t comprehend the stakes. But they are present and the analogy can be made.

Pythian’s services are like a fire truck.

Data is like your house, your car, your life. When your business is dependent on your data and your data fails, your business fails. Data failures are serious. Downtime causes huge revenue losses as well as loss of trust and reputation. Identity theft, loss of security, these disasters are pressing threats in our digitized society.

Pythian’s FIT-ACER program is like your Fire Marshall.

We don’t just prepare for disasters, we help prevent them. Modeled after the Mayo Clinic’s patient checklist, Pythian’s FIT-ACER human reliability check acknowledges that no matter how intelligent our DBAs are (http://www.pythian.com/experts/) they can still make mistakes:

FIT-ACER: Pythian Human Reliability Checklist

  • F: Focus (SLOW DOWN! Are you ready?)
  • A: Assess the command (SPEND TIME HERE!)
  • I: Identify server/DB name, time, authorization
  • C: Check the server / database name again
  • T: Type the command (do not hit enter yet)
  • E: Execute the command
  • R: Review and document the results

We don’t just hire the best to do the best work; we hire the best, make sure they’re at their best, check their best, and apply their best. Every time we interact with your data, we do so at a high level to improve your system and to prevent disaster. And we answer our phones when disaster hits.

The average response time for a fire crew in Ontario is 6 minutes. The average response time for Pythian is under 4.

Take it from someone who knows disaster,

Pythian’s the best fire truck around.

Categories: DBA Blogs

Community dinner @ Pedro’s

Mon, 2015-04-13 08:36

Folks, as usual Pythian is organizing the community dinner. After many years of food, (responsible) drinking and photos, this event has become an important moment for all of us to get to know each other better, discuss and have fun.

This year is also the 20th year for MySQL so … YEAAAH let us celebrate, with more food, fun and responsible drinking.

If you have not done it yet … register yourself here: https://www.eventbrite.com/e/the-pythian-mysql-community-pay-your-own-way-dinner-tickets-15692805604

Info about the event:

When: Tuesday April 14, 2015 – 7:00 PM at Pedro’s (You are welcome to show up later, too!)
Where: Pedro’s Restaurant and Cantina – 3935 Freedom Circle, Santa Clara, CA 95054

 

I know, I know … we are the kind of people who decide where to go at the last minute, and every year we do the same. But if you could register, that would help us organize it better … and c’mon, the dinner is on Tuesday … so we are almost there!!!

 

Anyhow, hope to see all of you there, all of you!

Some reference: Menu Eventbrite Pedro

Categories: DBA Blogs

Technology for the Non-Technical

Mon, 2015-04-13 08:33

I am potentially one of the least technical people in my generation. I’m 30 and I am afraid of my cellphone, my laptop, Netflix, the microwave…. Okay, afraid is maybe a strong word, but baffled by them at the very least.

In high school, while my classmates wrote most of their papers and assignments on the computer, I insisted on writing everything out by hand and only typed it out afterwards if absolutely required. It wasn’t that I had issues with typing – my mom who worked as an administrator for many years made sure that I learned to type from a very young age and I type quickly with a reasonable amount of accuracy. I just felt that writing by hand kept me more “connected” to the words I penned. Simply, my name is Sarah and I am a Luddite.

After high school I studied journalism for a couple of years and then entered the workforce in a number of different jobs, such as sales and marketing, and it became necessary for me to “engage” with technology a little more heavily. Typing articles and assignments slowly became second nature, but grocery lists, thank-you notes and birthday cards all continued to be written by hand.

For the last few years I’ve been working for technology and IT organizations, and for the last 14 months I’ve been working with Pythian, a leading IT services provider specializing in data infrastructure management. That was a big leap for me. Not only was I required to use technology constantly in my day-to-day (smartphone, CRM system, soft phone, multiple email interfaces ACK!), but I also needed to do a lot more than dip my toes into some fairly intense technical knowledge to gain an understanding of our client base and what solutions would be most appropriate for the people I speak to every day. These people are Chief Information Officers, Chief Technology Officers and Vice Presidents of Information Technology for companies that are incredibly data-dependent. The quality and security of their data management directly affects their revenue, and it’s critical that it is handled with a great amount of expertise and attention to detail. Kind of intimidating.

I have spent the last year wrapping myself in terms like NoSQL, non-relational database, Hadoop, MongoDB, SQL Server and Oracle. Do I have a perfect understanding of the benefits and drawbacks of each of these yet? No. What I do have is a great network of technical geniuses who work with me and who have spent their careers becoming experts in their respective technologies. I know who the best resources are and how to connect with them to get the best answers and solutions. I’m very lucky to work at a company that is incredibly transparent: questions are always welcomed and answered. I sit sandwiched between the offices of the Chief Revenue Officer and the CEO and Founder of our organization, and while both are incredibly busy people, they are also happy to answer questions and share their insights and energy with anyone here.

All of our technical resources are just an instant message away and can often answer my questions in a few concise lines. So, while I am still monstrously uncomfortable with tasks like defragging (sounds like organized Fraggle removal to me) my computer or resetting my smartphone when it acts up, I am coming along slowly, in baby steps – an IT late-bloomer you could say – and it’s all much less painful than I ever feared it would be.

Categories: DBA Blogs

My thoughts on the Resilience of Cassandra

Mon, 2015-04-13 06:32

This blog is part 1 of a 2-part series, and it will be different from my previous blogs, as it is more about some decisions you can make with Cassandra regarding the resilience of your system. I will talk about this topic in depth at the upcoming Datastax Days in London (https://cassandradaylondon2015.sched.org/); this is more of an introduction!

TL;DR: Cassandra is tough!

Cassandra presents itself as a system that “delivers continuous availability, linear scalability, and operational simplicity across many commodity servers with no single point of failure, along with a powerful data model designed for maximum flexibility and fast response times.” (http://docs.datastax.com/en/cassandra/2.0/cassandra/gettingStartedCassandraIntro.html). In a production system, having a failure-tolerant persistence layer is a big thing, even more so when you can (easily) make it resilient to the failure of entire locations through geographic replication.

As in any production system, you need to plan for failure. Should we blindly trust Cassandra’s resilience and forget about a plan because “Cassandra can handle it”? Reading the documentation, some may think that by having several data centers and a high enough replication factor we are covered. In part this is true: Cassandra will handle servers going down, even a full DC (or several!) going down. But you should always prepare for chaos! Failure will increase pressure on your remaining servers, latency will increase, and so on. And when things come back up, will it all just work? Getting all the data back in sync, are you ready for that? Did you forget about gc_grace_seconds? There are lots of variables and small details that can be forgotten if you don’t plan ahead, and in the middle of a problem it will not help to have forgotten them!

My experience tells me that you must take Cassandra failures seriously and plan for them! Having a plan B is never a bad thing, and even a plan C. Also, make sure those plans work! So for this short introduction I will leave a couple of recommendations:

  • Test your system against Cassandra delivering a bad service (timeouts, high latency, etc).
  • Set a “bare minimum” for your system to work (how low can we go on consistency, for example).
  • Test not only your system going down, but also prepare for it coming back up!
  • Keep calm! Cassandra will help you!

Overall, Cassandra is a tough and robust system. I’ve had major problems with network, storage, Cassandra itself, etc., and in the end Cassandra not only survived, it gave me no downtime. Every problem I had increased my knowledge and awareness of what I could expect. This led to planning for major problems (which did happen), and that planning, combined with the natural resilience of Cassandra, got me through those events without downtime.

Feel free to comment/discuss in the comment section below! The juicy details will be left for London!

Categories: DBA Blogs

Licensing Oracle in a public cloud: the CPU calculation impact

Fri, 2015-04-10 09:18

First of all a disclaimer: I don’t work for Oracle nor do I speak for them. I believe this information to be correct, but for licensing questions, Oracle themselves have the final word.

With that out of the way, followers of this blog may have seen some of the results from my testing of actual CPU capacity with public clouds like Amazon Web Services, Microsoft Azure, and Google Compute Engine. In each of these cases, a CPU “core” was actually measured to be equivalent to an x86 HyperThread, or half a physical core. So when provisioning public cloud resources, it’s important to include twice as many CPU cores as the equivalent physical hardware. The low price and elasticity of public cloud infrastructure can however offset this differential, and still result in a cost savings over physical hardware.

One place this difference in CPU core calculation can have a significant impact, however, is software licensing. In this post I’ll look at Oracle database licensing in particular.

Oracle databases can be licensed using many metrics, including unlimited use agreements, embedded licenses, evaluation/developer licenses, partner licenses, and many more. But for those without a special agreement in place with Oracle, there are two ways to license products: Named User Plus (NUP) and processor licenses. NUP licenses are per-seat licenses which have a fixed cost per physical user or non-user device. The definition of a user is very broad, however. Quoting the Oracle Software Investment Guide:

Named User Plus includes both humans and non-human operated devices. All human users and non-human operated devices that are accessing the program must be licensed. A non-human operated device can be many things, such as a temperature-monitoring device. It is important to note that if the device is operated by a person, then this person must be licensed. As described in illustration #1, the 400 employees who are operating the 30 forklifts must be licensed because the forklift is not a “non-human operated device”.

So, if the application has any connection outside the organization (batch data feeds and public web users would be examples), it’s very difficult to fit the qualifications to count as NUP licenses.

Now, this leaves per-processor licenses, which use the processor cores that can potentially run the database software as the licensing metric. When running in a public cloud, however, there is an immediate issue: your Oracle instance could presumably run on any of the thousands of servers owned by the cloud provider, so unique physical processors are virtually impossible to count. Fortunately, Oracle has provided a way to properly license Oracle software in public cloud environments: Licensing Oracle Software in the Cloud Computing Environment. It sets out a few requirements, including:

  • Amazon EC2, Amazon S3, and Microsoft Azure are covered under the policy.
  • There are limits to the counting of sockets and the number of cores per instance for Standard Edition and Standard Edition One.

Most important, though, is the phrase stating that customers are required to count each virtual core as equivalent to a physical core. Knowing that each “virtual core” is actually half a physical core, this can shift the economics of running Oracle databases in a public cloud significantly.

Here’s an example of a general-purpose AWS configuration and a close equivalent on physical hardware. I’m excluding costs of external storage and datacenter costs (power, bandwidth, etc) from the comparison.

  • m3.2xlarge
  • 8 virtual / 4 physical CPU cores (from an E5-2670 processor at 2.6GHz)
  • 30GB RAM
  • 2x80GB local SSD storage
  • 3-year term

Total: $2989 upfront

A physical-hardware equivalent:

  • A single quad-core E5-2623 v3 processor at 3GHz
  • 32GB RAM
  • Oracle standard edition one
  • 2x120GB local SSD
  • 3-year 24×7 4hr on-site service

I priced this out at dell.com and came out with a total of $3761.

Now let’s add in an Oracle license. From the Oracle Price List, a socket of Standard Edition One costs $5800, with an additional $1276/year for support. Due to the counting of CPU cores, our AWS hardware requires two sockets of licensing. So instead of saving $772, we end up paying $9628 more.
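
Working through the numbers over the three-year term (all figures taken from the quotes and list prices above):

    Extra SE One socket needed on AWS (8 virtual cores count as 2 sockets): 1 x $5,800 = $5,800
    Support for that extra socket over 3 years: 3 x $1,276 = $3,828
    Additional licensing cost: $5,800 + $3,828 = $9,628
    Hardware saving on AWS: $3,761 - $2,989 = $772
    Net swing versus physical hardware: from saving $772 to spending $8,856 more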

 Standard Edition One

If we were to use Oracle Enterprise edition (excluding any options or discounts), that becomes an extra $157,700. Not small change anymore.

 Enterprise Edition

So before you make the jump to put your Oracle databases on a public cloud, check your CPU core counts to avoid unexpected licensing surprises.

Categories: DBA Blogs

Log Buffer #418: A Carnival of the Vanities for DBAs

Fri, 2015-04-10 08:20

This Log Buffer edition has collected some of the valuable blog posts from different databases like Oracle, SQL Server and MySQL.

Oracle:

  • Accessing HDFS files on a local File system using mountable HDFS – FUSE
  • enq: TM – contention
  • The Four A’s of Data Management
  • ODI, Big Data SQL and Oracle NoSQL
  • Using the RIDC Client to Interface with Oracle Webcenter Content

SQL Server:

  • SQL Server 2014 has introduced a rebuilt Cardinality Estimator (CE) with new algorithms
  • Creating a multi-option parameter report for SQL Server Reporting Services
  • Re-factoring a database object can often cause unexpected behavior in the code that accesses that object
  • What is Database Continuous Integration?
  • Deleting Historical Data from a Large Highly Concurrent SQL Server Database Table

MySQL:

For years it was very easy to defend InnoDB’s advantage over competition. Covering index reads were saving I/O operations and CPU everywhere, table space and I/O management allowed focusing on database and not on file systems or virtual memory behaviors, and for the past few years InnoDB compression was the way to have highly efficient OLTP.

InnoDB locks and deadlocks with or without index for different isolation level.

pquery binaries with statically included client libs now available!

MySQL Group Replication – mysql-5.7.6-labs-group-replication.

MySQL 5.7 aims to be the most secure MySQL Server release ever, and that means some significant changes in SSL/TLS.

Categories: DBA Blogs

Log Buffer #417: A Carnival of the Vanities for DBAs

Fri, 2015-04-10 08:06

This Log Buffer travels wide and deep to scour through the Internet to bring some of the most valuable and value-adding blog posts from Oracle, SQL Server and MySQL.

Oracle:

What is SQLcl? SQLcl is a new command line interface, like SQL*Plus, coming along with SQL Developer 4.1 Early Adopter. It’s a lightweight tool (only 11MB) developed by the SQL Developer team, which is fully compatible with Windows and Unix/Linux. Also, you don’t need to install it, so it’s totally portable.

Find Users with DBA Roles.

Virtual Compute Appliance 2.0.2 Released.

In case you are not familiar with WLST (the WebLogic Scripting Tool), it is a powerful scripting runtime for administering WebLogic domains.

The following article gives some useful hints-and-tips Richard used recently in helping people customizing tables and lists-of-values using Page Composer.

SQL Server:

With the idea of a generic Dacpac defined by international standard, comes the potential for a Visual Studio developer. This uses SSDT to create a generic database model to a SQL-92 compliant standard that can then be deployed to any one of the major RDBMSs.

Using the APPLY operator to reduce repetition and make queries DRYer.

Imagine a situation where you use the SQL Server RAND() T-SQL function as a column in a SELECT statement, and the same value is returned for every row. In this tip, Dallas Snider explains how you can get differing random values on each row.

This article describes two ways to shred Unicode Japanese characters from xls files into a SQL Server table using SSIS.

Arshad Ali demonstrates how you can use the command line interface to tune SQL queries and how you can use SQL Server Profiler to capture the workload for tuning with Database Engine Tuning Advisor.

MySQL:

Postgres performance since 7.4.

The Ubuntu 12.04.3 LTS release only provides MySQL 5.1 and MySQL 5.5 using the default Ubuntu package manager.

As part of a MySQL 5.5 to MySQL 5.6 upgrade across several Ubuntu servers of varying distros an audit highlighted a trivial but interesting versioning identification error in Ubuntu’s packaging of MySQL.

MySQL 5.6 will now automatically recreate the InnoDB redo log files during a MySQL restart if the size (or number) of these logs changes, i.e. a change to innodb_log_file_size.

Mermaids have the same probability of fixing your permission problems, but people continue believing in the FLUSH PRIVILEGES myth.

Categories: DBA Blogs

Disable Lock Escalation in SQL Server

Fri, 2015-04-10 07:55

If a lot of rows or pages are locked, SQL Server escalates to a table-level lock to save resources. Each individual lock takes approximately 100 bytes, so if you have many locks it takes a lot of resources to manage them. (There is a great blog post about lock escalation if you want more information: http://blogs.msdn.com/b/sqlserverstorageengine/archive/2006/05/17/lock-escalation.aspx)

 

Until SQL Server 2008, there was no way to change the lock escalation for a single table. You could deactivate the escalation for the server by using the Trace Flags:

  • 1211 - Disables lock escalation completely. Locks are allowed to use up to 60% of the allocated memory; if that threshold is reached and more locking is needed, you will get an out-of-memory error.
  • 1224 - Disables lock escalation until the memory threshold of 40% of allocated memory is reached; after that, lock escalation is enabled.

 

But in most cases that was not a good choice, and it caused a lot of performance problems. In SQL Server 2008 and above there is a new table option (set via ALTER TABLE) that can be used to change the default lock escalation. This helps if you have a table where you want to disable escalation, or if the table is partitioned.

 

On a partitioned table, activating the AUTO option can improve concurrency by escalating locks to the partition level instead of the table level.

ALTER TABLE – table option:

SET ( LOCK_ESCALATION = { AUTO | TABLE | DISABLE } )

  • AUTO (should be considered if you have a partitioned table)
      • If the table is partitioned, locks are escalated to the partition level
      • If the table is not partitioned, locks are escalated to the table level
  • TABLE
      • The default behavior
      • Locks are escalated to the table level
  • DISABLE
      • Lock escalation to the table level is deactivated in most cases
      • In some cases where it is necessary, escalation to the table level is still allowed
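
A minimal T-SQL sketch of the option in action (the table names dbo.Orders and dbo.OrderHistory are made up for illustration):

-- Prevent escalation to a table-level lock in most cases
ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = DISABLE);

-- On a partitioned table, escalate to the partition level instead of the whole table
ALTER TABLE dbo.OrderHistory SET (LOCK_ESCALATION = AUTO);

-- Check the current setting
SELECT name, lock_escalation_desc FROM sys.tables WHERE name IN ('Orders', 'OrderHistory');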

This is a cool feature that many developers are not aware of.

Thanks for Reading!

Categories: DBA Blogs