alt.oracle
Move That Datafile!
Moving datafiles has always been a pain. There are several steps, it’s fairly easy to make a mistake, and it requires the datafile to be offline. The steps also differ depending on whether the database is in ARCHIVELOG mode or not. In ARCHIVELOG mode, the steps are…
1) Take the tablespace containing the datafile offline
2) Copy/rename the datafile at the OS layer
3) Use ALTER TABLESPACE…RENAME DATAFILE to rename the datafile so that the controlfile will be aware of it
4) Backup the database for recovery purposes (recommended)
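Sketched as SQL, the old ARCHIVELOG-mode method looks something like this (tablespace name and file paths are illustrative):

```sql
-- 1) Take the tablespace containing the datafile offline
ALTER TABLESPACE test OFFLINE;

-- 2) Copy/rename the datafile at the OS layer, e.g. from a host shell:
--    $ mv /oracle/base/oradata/TEST1/datafile/test01.dbf \
--         /oracle/base/oradata/TEST1/datafile/newtest01.dbf

-- 3) Rename the datafile so the controlfile is aware of the new location
ALTER TABLESPACE test RENAME DATAFILE
  '/oracle/base/oradata/TEST1/datafile/test01.dbf'
  TO '/oracle/base/oradata/TEST1/datafile/newtest01.dbf';

-- 4) Bring the tablespace back online, then take a fresh backup
ALTER TABLESPACE test ONLINE;
```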
If the database is in NOARCHIVELOG mode, you have to shut down the DB, put it in the MOUNT state, etc, etc. That’s certainly not that hard to do, but you get the feeling that there should be a better way. Now in Oracle 12c, there is – using the ALTER DATABASE MOVE DATAFILE command. With this command, you can move a datafile, while it’s online, in one simple step. Let’s set this up.
SQL> create tablespace test datafile '/oracle/base/oradata/TEST1/datafile/test01.dbf' size 10m;
Tablespace created.
SQL> create table altdotoracle.tab1 (col1 number) tablespace test;
Table created.
SQL> insert into altdotoracle.tab1 values (1);
1 row created.
SQL> commit;
Commit complete.
Let’s go the extra mile and lock the table in that datafile in another session.
SQL> lock table altdotoracle.tab1 in exclusive mode;
Table(s) Locked.
Now let’s use the command.
SQL> alter database move datafile '/oracle/base/oradata/TEST1/datafile/test01.dbf'
2 to '/oracle/base/oradata/TEST1/datafile/newtest01.dbf';
Database altered.
That’s all there is to it. Datafile moved/renamed in one step while a table it contained was locked.
SQL> select file_name from dba_data_files where file_name like '%newtest%';
FILE_NAME
--------------------------------------------------------------------------------
/oracle/base/oradata/TEST1/datafile/newtest01.dbf
Categories: DBA Blogs
Oracle 12c - New SYS-level Administration Privileges
For a while now, Oracle has been moving its security model toward a more differentiated set of privileges than just SYSDBA and SYSOPER. We saw this in 11g with the SYSASM privilege. This is in response to the growing number of DBA shops that delineate permissions at a more granular level, even among DBAs. Rather than saying, “Here’s my DBA team, everyone has SYSDBA,” more and more IT shops are defining DBA job roles at a lower level. If an application DBA never has the job responsibility to shut down or back up a database, then maybe they don’t need SYSDBA. However, the lines aren’t always that clear, so Oracle has been adding new levels of admin privileges. In 12c, they’ve added several.
The SYSBACKUP privilege allows a user to connect to the target database and run RMAN commands, without requiring SYSDBA. Here’s what it looks like.
/home/oracle:test1:rman
Recovery Manager: Release 12.1.0.1.0 - Production on Tue Aug 13 14:57:41 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
RMAN> connect target "altdotoracle@test1 as sysbackup"
target database Password:
connected to target database: TEST1 (DBID=1787845942)
RMAN> select user from dual;
using target database control file instead of recovery catalog
USER
------------------------------
SYSBACKUP
RMAN> backup tablespace users;
Starting backup at 13-AUG-13
allocated channel: ORA_DISK_1
…
Finished backup at 13-AUG-13
RMAN>
In truth, SYSBACKUP has a lot more rights than just those needed to do something like a hot backup, including startup and shutdown, creating tables and tablespaces and executing a number of supplied packages. So from that perspective, I’m not exactly sure they hit the mark on this one. Nevertheless, it does differentiate SYSBACKUP from the god-like SYSDBA admin privilege to some degree. There are also the new admin privileges SYSDG, for administering Data Guard, and SYSKM to do Transparent Data Encryption (TDE) administration. Consequently, there are new columns in v$pwfile_users to reflect these new privs.
SQL> desc v$pwfile_users
Name Null? Type
--------------------------- -------- ---------------
USERNAME VARCHAR2(30)
SYSDBA VARCHAR2(5)
SYSOPER VARCHAR2(5)
SYSASM VARCHAR2(5)
SYSBACKUP VARCHAR2(5)
SYSDG VARCHAR2(5)
SYSKM VARCHAR2(5)
CON_ID NUMBER
If we examine the view itself, we see this in action.
SQL> select * from v$pwfile_users;
USERNAME SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM CON_ID
----------------- ----- ----- ----- ----- ----- ----- ----------
SYS TRUE TRUE FALSE FALSE FALSE FALSE 0
SYSDG FALSE FALSE FALSE FALSE TRUE FALSE 0
SYSBACKUP FALSE FALSE FALSE TRUE FALSE FALSE 0
SYSKM FALSE FALSE FALSE FALSE FALSE TRUE 0
ALTDOTORACLE FALSE FALSE FALSE TRUE FALSE FALSE 0
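Rows like those come from ordinary grants. A hedged sketch of what might have produced them (the grantee names besides ALTDOTORACLE are invented for illustration):

```sql
-- Each grant records the user in the password file,
-- which is what v$pwfile_users reflects
GRANT SYSBACKUP TO altdotoracle;  -- RMAN backup/recovery duties
GRANT SYSDG     TO dg_admin;      -- Data Guard administration
GRANT SYSKM     TO tde_admin;     -- TDE key management
```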
Tightening Security with SELECT ANY DICTIONARY in Oracle 12c
Developers and users sometimes request the SELECT ANY DICTIONARY system privilege to enable them to view various data dictionary tables. This may be fine for querying DBA_TABLES, etc, but the Oracle data dictionary contains a LOT of information. Some of the views/tables are compromising from a security standpoint. One of Oracle’s legendary gaffes in version 9i displayed cleartext passwords for database links in SYS.LINK$. Yikes. In version 12c, Oracle has locked down this type of access even further. Here’s an example from version 11g.
SQL> create user altdotoracle identified by altdotoracle;
User created.
SQL> grant create session, select any dictionary to altdotoracle;
Grant succeeded.
SQL> conn altdotoracle/altdotoracle
SQL> select name, password from sys.user$ where password is not null;
NAME PASSWORD
------------------------------ ------------------------------
SYS AAJ125C9483Q017
SYSTEM W45825DFFFD37SE
OUTLN WW24Z1N6A8ED2E1
ALTDOTORACLE 73NH15SG3Q2364W
Armed with these password hashes, password cracking tools can be used to compare these values to common dictionary passwords. When a match is found on a non-complex password, your password is cracked. (Don’t wear yourself out trying to crack the hashes above; I obfuscated them.) In 12c, SELECT ANY DICTIONARY no longer allows access to security-sensitive dictionary objects like USER$, ENC$ and DEFAULT_PWD$. Let’s try it.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> create user altdotoracle identified by altdotoracle;
User created.
SQL> grant create session, select any dictionary to altdotoracle;
Grant succeeded.
SQL> conn altdotoracle/altdotoracle
Connected.
SQL> select name, password from sys.user$ where password is not null;
select name, password from sys.user$ where password is not null
*
ERROR at line 1:
ORA-01031: insufficient privileges
SQL> select * from sys.enc$;
select * from sys.enc$
*
ERROR at line 1:
ORA-01031: insufficient privileges
SQL> select * from sys.default_pwd$;
select * from sys.default_pwd$
*
ERROR at line 1:
ORA-01031: insufficient privileges
Data Pump 12c – Pumping Data with the LOGTIME Parameter
Since its release, Oracle Data Pump has been a worthy successor to the traditional exp/imp tools. However, one area lacking with Data Pump has been something as simple as the ability to identify how long each step of a Data Pump job actually takes. The log will show start time at the top of the log and end time at the bottom, but the time of execution for each step is a mystery. Oracle 12c solves this problem with the LOGTIME parameter, which adds a timestamp to the execution of each step of the Data Pump job. Here’s what it looks like without the parameter.
/home/oracle:test1:expdp altdotoracle/altdotoracle \
> directory=data_pump_dir dumpfile=expdp.dmp \
> tables=employee
Export: Release 12.1.0.1.0 - Production on Tue Aug 13 09:32:38 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "ALTDOTORACLE"."SYS_EXPORT_TABLE_01": altdotoracle/******** directory=data_pump_dir dumpfile=expdp.dmp tables=employee
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "ALTDOTORACLE"."EMPLOYEE" 10.93 KB 16 rows
Master table "ALTDOTORACLE"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ALTDOTORACLE.SYS_EXPORT_TABLE_01 is:
/oracle/base/admin/test1/dpdump/expdp.dmp
Job "ALTDOTORACLE"."SYS_EXPORT_TABLE_01" successfully completed at Tue Aug 13 09:32:51 2013 elapsed 0 00:00:11
With the LOGTIME parameter, each step is prefixed with a timestamp, indicating the start time for each event that is processed.
/home/oracle:test1:expdp altdotoracle/altdotoracle \
> directory=data_pump_dir dumpfile=expdp.dmp \
> tables=employee LOGTIME=ALL
Export: Release 12.1.0.1.0 - Production on Tue Aug 13 09:34:54 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
13-AUG-13 09:34:56.757: Starting "ALTDOTORACLE"."SYS_EXPORT_TABLE_01": altdotoracle/******** directory=data_pump_dir dumpfile=expdp.dmp tables=employee LOGTIME=ALL
13-AUG-13 09:34:57.019: Estimate in progress using BLOCKS method...
13-AUG-13 09:34:57.364: Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
13-AUG-13 09:34:57.396: Total estimation using BLOCKS method: 64 KB
13-AUG-13 09:34:57.742: Processing object type TABLE_EXPORT/TABLE/TABLE
13-AUG-13 09:34:57.894: Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
13-AUG-13 09:34:57.964: Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
13-AUG-13 09:35:04.853: . . exported "ALTDOTORACLE"."EMPLOYEE" 10.93 KB 16 rows
13-AUG-13 09:35:05.123: Master table "ALTDOTORACLE"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
13-AUG-13 09:35:05.127: ******************************************************************************
13-AUG-13 09:35:05.128: Dump file set for ALTDOTORACLE.SYS_EXPORT_TABLE_01 is:
13-AUG-13 09:35:05.131: /oracle/base/admin/test1/dpdump/expdp.dmp
13-AUG-13 09:35:05.134: Job "ALTDOTORACLE"."SYS_EXPORT_TABLE_01" successfully completed at Tue Aug 13 09:35:05 2013 elapsed 0 00:00:09
The parameter works similarly with Data Pump Import. Note that, although it is documented, the LOGTIME parameter is not listed by the expdp help=y or impdp help=y commands.
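For completeness, the import side looks like this, reusing the dump file from the export above (a sketch; LOGTIME also accepts NONE, STATUS and LOGFILE to control which output gets timestamped):

```shell
impdp altdotoracle/altdotoracle \
  directory=data_pump_dir dumpfile=expdp.dmp \
  tables=employee LOGTIME=ALL
```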
RMAN 12c – More SQL without ‘sql’
Up until version 12c, the RMAN command line parser was capable of parsing certain DML and DDL statements. To do this, we use the “sql” command from the RMAN command line, like this.
11gHost> rman
Recovery Manager: Release 11.2.0.2.0 - Production on Wed Aug 7 15:29:19 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
RMAN> connect target
connected to target database: TEST2 (DBID=1111111111)
RMAN> sql 'create table altdotoracle (col1 number)';
sql statement: create table altdotoracle (col1 number)
RMAN> sql 'insert into altdotoracle values (1)';
sql statement: insert into altdotoracle values (1)
However, one operation we could not do was SELECT. The RMAN parser lacked the facility to return arrays of data to the screen. This shouldn’t be surprising, considering that RMAN is intended to be used to run backup and restore operations.
RMAN> sql 'select * from altdotoracle';
sql statement: select * from altdotoracle
The statement does not error, but no data is returned. Now, in version 12c, we get two new benefits in this area. First, we no longer need to use the ‘sql’ command. For example…
12cHost> rman
Recovery Manager: Release 12.1.0.1.0 - Production on Wed Aug 7 15:35:27 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
RMAN> connect target
connected to target database: TEST1 (DBID=1212121212)
RMAN> create table altdotoracle (col1 number);
Statement processed
RMAN> insert into altdotoracle values (1);
Statement processed
Additionally, the RMAN parser in 12c allows us to run SELECT commands, as well as DESCRIBE commands.
RMAN> select * from altdotoracle;
COL1
----------
1
RMAN> describe altdotoracle
Name Null? Type
----------------------------------------- -------- ----------------------------
COL1 NUMBER
The ‘sql’ RMAN command is still supported for backward compatibility. A simple thing, but useful!
Oracle 12c - Out With the Grid, In With the Cloud
So Oracle database 12c is out now. The cloud (and hence the “c”) is hanging over us. At least we don’t have to hear about “the grid” anymore. New versions of Oracle are funny things – as DBAs, we’re kinda excited about the possibilities, but then reality sets in and we realize that our company isn’t going to upgrade for a long time. That’s not necessarily a bad thing – some of us have experienced the pain of the “release 1’s” that have come out in recent years. There are generally early adopters and then everyone else. Over the course of my time as a DBA, I’ve swung from wanting to be on “the bleeding edge” to saying “we’re not upgrading until the terminal release!” Reality is somewhere in the middle – every release probably has something useful to us.

Inevitably, what I see is that new versions come out and DBAs don’t get familiar with the new features. Let’s be honest – so many features seem oriented towards selling something that we miss the good ones. But understanding a new feature set can be extremely useful. I’m not promising anything, but 12c could have that one new thing that could make your life so much easier that you eventually can’t live without it. Who could really deny the importance of stuff like ASM and the AWR that came with version 10g? Of course, reading through the actual new features documentation can be about as pleasant as giving yourself a root canal.

So, since I’ve combed through some of the docs, I wanted to spare you the pain and give you some of the good stuff. Over the next few posts, we’ll take a look at some new features in 12c – I think they could be useful, so maybe you will too. All of the meat and none of the fat!
New book
Just a quick announcement that my second book is available from Packt Publishing. OCA Oracle Database 11g: Database Administration I: A Real-World Certification Guide (again with the long title) is designed to be a different kind of certification guide. Generally, it seems to me that publishers of Oracle certification guides assume that the only people who want to become certified are those with a certain level of experience, like a working DBA with several years on the job. So these guides make a lot of assumptions about the reader, and they end up being more a collection of facts for the test than a cohesive learning experience. My book targets a different kind of reader. I've observed in the last several years that many people from non-database backgrounds are setting out to get their OCA or OCP certifications. These folks don't necessarily bring a lot of knowledge or experience to the attempt, just a strong drive to learn. My two books are designed to start from the beginning, then take the reader through all of the subjects needed for the certification test. They're designed to be read straight through, completing the examples along the way. In a sense, I'm attempting to recreate the experience of one of my Oracle classes in book form.
You'll find the book at these fine sellers of books.
Packt Publishing
Amazon
Barnes and Noble
Open World 2012 Presentation
Here's an audio link to my presentation at Oracle Open World this year, How Enterprise Manager 12c Brokers Peace Between DBAs and Compliance Officers. This is just my portion; I also spoke with Dave Wolf from Oracle Corp. Dave covered a lot of the features and framework of Enterprise Manager 12c. My portion, as a DBA for the Department of Defense, was about how we're using 12c to manage and monitor security. At the DoD, we have a detailed list of security requirements for all of our systems, including Oracle databases. It can be tough to maintain this established security posture – databases change, users are added, etc. It's what I call compliance drift. We're using OEM 12c to manage our security posture and monitor when compliance drift occurs. Sorry it's not video, so you don't get to see my pretty face. But the audio is decent quality, so give it a listen if you're interested.
Just an FYI. I'll be speaking at
Just an FYI. I'll be speaking at Oracle Open World again this year. If you're there, my session is as follows.
Session ID: CON9591
Title: How Oracle Enterprise Manager 12c Brokers Peace Between DBAs and Compliance Officers
Abstract
This session discusses how to use Oracle Enterprise Manager 12c to completely automate the process of applying database patches while reporting on compliance against your own and Oracle’s security and implementation best practices. Specifically, you’ll learn how to quickly identify databases missing critical patches, how to automate patch applications, and how to maintain compliance on an ongoing basis.
Speakers
- Steve Ries CSC Database Architect
- David Wolf Oracle Principal Product Manager
Type: Conference Session
Length: 60 minutes
Primary Track: DATABASE:Database Manageability
You Don’t Know SQL
Or maybe you do. However, I’ve talked to a lot of DBAs (pretty much the target audience for this blog) and you might be surprised how often the SQL skills of your average DBAs dwindle over time. In today’s role-specific market, developers do developer stuff while DBAs do database stuff. Somewhere in between falls SQL – the red-headed stepchild of the programming world. Ask a DBA and they’ll probably say SQL is a legitimate fourth generation language. Tell a Java programmer that and they’ll laugh themselves into a seizure. It’s strange that DBAs become less familiar with SQL over time, since it’s probably the first thing you learned when you were an Oracle newbie. Maybe you learned about pmon and archivelog mode first, but more likely you struggled with how to use a SELECT statement to form a join between two tables. I know that’s how I started.
So that leads me into my excuse for not posting, lo, these many months. It’s because I wrote a book. A book about SQL. The fine folks at Packt Publishing approached me at the end of 2010 and asked me to write the first in a series of books to help folks earn an Oracle Certification. I’ve been teaching students that stuff for eight years, so it seemed like a good fit. This book, aptly named “OCA Oracle Database 11g: SQL Fundamentals I: A Real World Certification Guide” was published a few weeks ago, and may also hold the record for the longest title in history for an Oracle book.
This is the link to the book on Packt's site and this is the link on Amazon
Here is the lovely cover.
I'd take my "bridge to nowhere" picture over some weird animal cover any day. I'm talking to you, O'Reilly Publishing...
I’ll write more about the book soon, but my point here is about the subject of the book – SQL. If you’re a DBA, you might be able to do a nifty join between v$process and v$session to find the database username and OS process id of a user, but could you write it in Oracle’s new join syntax? Do you know how a correlated subquery works? Ever try a conditional insert with an INSERT...WHEN statement? No? Then buy my book and all these mysteries will be revealed!
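For a taste, here are hedged sketches of the three features just mentioned (emp is the classic demo table; high_earners and other_earners are invented for illustration):

```sql
-- ANSI join syntax: database username and OS process id of connected users
SELECT s.username, p.spid
  FROM v$session s
  JOIN v$process p ON s.paddr = p.addr;

-- Correlated subquery: employees paid above their department's average
SELECT e.ename, e.sal
  FROM emp e
 WHERE e.sal > (SELECT AVG(i.sal)
                  FROM emp i
                 WHERE i.deptno = e.deptno);

-- Conditional multitable insert: route rows by a WHEN condition
INSERT FIRST
  WHEN sal > 10000 THEN
    INTO high_earners (empno, sal) VALUES (empno, sal)
  ELSE
    INTO other_earners (empno, sal) VALUES (empno, sal)
SELECT empno, sal FROM emp;
```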
Seriously, though, even if you’re not interested in being certified and your daily job description doesn’t include any correlated subqueries, it never hurts to be reminded that SQL is actually *why* we have relational databases in the first place. An Oracle DBA should always try to understand as much about Oracle as he or she can. So don't go to rust – bust out those mad SQL skillz!
Worst Blogger Ever...
Yes. I know. I'm the worst blogger ever. That last post says... <choke>... May. But I have an excuse (sort of). Busy does not describe my past six months. Some of you are familiar with the reason, but for those of you who aren't, I'll post about it very soon.
Purple Sweaters and Oracle 11g
If you're geeky enough like me, you get a little excited whenever Oracle puts out a new version. Most of us have wondered at one time or another what it would be like to be able to beta-test the new Oracle version before it comes out. You may read the pre-release articles about the new features, if nothing else to keep ahead of the technology curve. Like geeky me, you may download the new version on the same day it comes out – maybe to play with it a little or just to see how it's different. But as intrigued as I get when new versions come out, I'm generally a little scared too. What the hell did they do now, I wonder. Grabbing the newest version of Oracle when it comes out is a little like getting a Christmas present from that family member who always buys you something you didn't ask for. You're glad to get a gift, but you pray to God that it's not another purple sweater. And Oracle's version history is littered with plenty of purple sweaters.
Sometimes I think back longingly to the days of Oracle 8i. Despite being the first version with that silly single-letter suffix thing (Oracle 8i – the "i" is for INTERNET DATABASE!), it was streamlined, compact, and just worked well. Then 9i came out - with its 700+ new features. Instead of fitting on a single CD, the 9i install now needed three CDs, which either meant you were dealing with a lot of CD swapping or pushing three times the amount of data over the network just to do an install. And that would've been fine if the new stuff was good stuff. Unfortunately, all that extra cruft was stuff you almost certainly didn't need. Did you really need to install Oracle's application server with every one of your database installs? Did you really need an HTTP server? Oh, and that wasn't even the best part. With all that extra crap came... wait for it... SECURITY HOLES! Oracle 9i was the version where Oracle started to get creamed in the tech press about its glaring security risks. Which means if you installed Oracle 9i using the click next... click next... click next... method, you might as well leave the doors to your company unlocked.
To Oracle's credit, they listened. I remember going to several pre-release seminars before Oracle 10g came out. Oracle made a big deal about how they put it together. In a revolutionary move, Oracle actually asked DBAs what they liked and didn't like about the Oracle database. DBAs said it was too big, took too long to install and had too much junk. Oracle responded. Version 10g had plenty of new features, but a lot of them were actually useful. And in a move that must be a first in the history of software, 10g was actually smaller than 9i – going back to one CD instead of three. Security was tighter. It installed quickly. All in all, a really forward-thinking move on Oracle's part, if you could ignore that dumb "g is for grid" thing.
Well, like I said, whoever thought up the approach to 10g obviously got fired, because now we have 11g. Before I go too far, yes, I know 11g has some good new features, although a quick list of the useful ones doesn't exactly spring to mind. But, in a total reversal of the slim and trim approach of 10g, version 11g has now become an even bigger, more unwieldy behemoth than 9i. A shining example of software crafted by suits instead of engineers. With 11g, you now get to drag 2GBs worth of crap from server to server in a vain attempt to do a simple database install. In fairness, you can separate out the "database" directory after you download the entire mess, but still... that leaves about 1.5GB of purple sweaters.
Every software company deals with bloat - how do you sell the next version? I get that. And Oracle has bought half the planet and needs to integrate those acquisitions across the board. Yep – I got it. But I also know that the RDBMS is Oracle’s flagship product. The company that produced 11g is the same company that was smart enough to ask DBAs what they should put in 10g. 10g was an incredibly successful version for Oracle – why screw with that?
I mentioned last time that, as great as Automatic Storage Management (ASM) is, Oracle had managed to screw it up in 11g. Here’s why. After telling you last time that ASM was so good that it should be used in single-instance systems as well as RAC, Oracle has gone and screwed me over. In 11gR2, ASM is now bundled with the “grid infrastructure” – the set of components used to run Real Application Clusters. Does that mean that you can’t use ASM with a single-instance database? Nope, but it makes it incredibly inconvenient. If you wanted to standardize on ASM across your database environments, you’d have to install the entire grid infrastructure on every one of your servers. If you manage 5 databases, it’s not too big a deal. If you manage 500, it's a much bigger deal. So c'mon Oracle – when you make good tools, make it easy for us to use them. This is incredibly discouraging.
On an unrelated positive note, I'm pleased to note that alt.oracle has been picked up by the Oracle News Aggregator at http://orana.info, which is just about the biggest Oracle blog aggregator in the universe. So thanks to Eddie Awad and the fine folks at OraNA.
ASM – It's not just for RAC anymore
I'm super critical of Oracle when they screw stuff up or try to push technology in a direction that's bad for DBAs. You'll be hearing some rants about it in upcoming posts. But I also think that Oracle is a company that is actually good for the direction that technology is heading, unlike some companies whose names begin with "Micro" and end with "soft". Yes, they're a vast, stone-hearted corporation that would sell their grandmothers to raise their stock price. So is every other technology company – get used to it. But when they do something right, I'll be fair and sing their praises. Once every version or so, Oracle does something that really changes the game for DBAs. In version 8 it was RMAN. In 9i it was locally managed tablespaces. In 10g, it's definitely ASM - Automatic Storage Management. Yeah, I know this is kinda old news - ASM has been out for a good long while. What surprises me, though, is how many DBAs think that ASM is only useful for RAC architectures. "I don't run RAC, why would I need ASM?"
When ASM came out, it both intrigued and terrified me. The claim that it could produce I/O performance almost on par with raw devices without all the grief that comes with using them was exciting. But the idea of putting your production data on a completely new way of structuring files was pretty scary. I trust filesystems like UFS and ext2/3 (maybe even NTFS a little, but don't quote me) because they've stood the test of time. If there's one thing a DBA shouldn't screw around with, it's the way that the bits that represent your company's data are written to disk. I'm skeptical of any new way to store Oracle data on disk, since I'm the loser that has to recover the data if everything goes south. So I entered into my new relationship with ASM the way you should – with a whole lot of testing.
I originally moved to ASM out of sheer necessity. I was running RAC and using a woeful product called OCFS – Oracle Clustered Filesystem – to store the data. Performance was bad, weird crashes happened when there was heavy I/O contention; it wasn't pretty. Nice try, Oracle. It's cool that it was an open source project, but eventually it became clear that Oracle was pushing toward ASM as their clustered filesystem of choice. To make a long story short, we tested the crap out of it and ASM came through with flying colors. Performance was outstanding and the servers used a lot less CPU, since ASM bypasses that pesky little filesystem cache thing. In the end, we moved our single instance databases to ASM as well and saw similar results. It's true that, since you give Oracle control of how reads and writes are done, ASM is a very effective global filesystem for RAC. But the real strength of ASM is in the fact that it's a filesystem built specifically for Oracle databases. You don't use it to store all your stolen mp3 files (unless you're storing them as blobs in the database, wink), you use it for Oracle datafiles. You give Oracle control of some raw partitions and let it go. And it does a good job. Once you go ASM, you never go back.
I'm not going to do a sell job on the features of ASM, since I don't work for the sales department at Oracle. Really, the positives for ASM boil down to three key features.
1) It bypasses the filesystem cache, thus going through fewer layers in the read/write process. This increases performance in essentially the same way that raw devices do.
2) It works constantly to eliminate hot spots in your Oracle data. This is something that your typical filesystem doesn't do, since it takes an intimate knowledge of how the particular application (in this case Oracle) is going to use the blocks on disk. Typical filesystems are designed to work equally well with all sorts of applications, while ASM is specialized for Oracle.
3) It works (with Oracle) as a global filesystem. In clustered systems, your filesystem is crucial. It has to be "globally aware" that two processes from different machines might try to modify the same block of data at the same time. That means that global filesystems need a "traffic cop" layer of abstraction that prevents data integrity violations. Normally this layer would impact performance to a certain degree. But ASM gives control to Oracle, which has a streamlined set of rules about which process can access a certain block, and prevents this performance loss.
So consider using ASM. Even if you don't run RAC, benefits #1 and #2 make it worth your while. Our DBA team has been using it religiously on both RAC and non-RAC systems for years without any problems.
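To give a flavor of the setup, here's a minimal sketch of carving out a disk group and letting Oracle place datafiles in it (disk paths, disk group and tablespace names are all hypothetical):

```sql
-- From the ASM instance: build a disk group from raw devices,
-- with ASM mirroring each extent across the two disks
CREATE DISKGROUP data NORMAL REDUNDANCY
  DISK '/dev/raw/raw1', '/dev/raw/raw2';

-- From the database instance: point a tablespace at the disk group
-- and let ASM choose the file name and placement
CREATE TABLESPACE app_data DATAFILE '+DATA' SIZE 100M;
```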
Of course, we're talking about Oracle here, so leave it to them to take the wonderful thing that is ASM and screw it up. Next time I'll tell you how they did just that in version 11g.
Surrender (a little) to the Dark Side
When I was a freshman in college, I, like many, was bouncing back and forth on what my major should be. I was leaning heavily toward electrical engineering, but my long standing love of computers had me seriously considering Comp Sci as well. I decided to take a couple of introductory Comp Sci classes to see if I liked them. So I tried taking the Introduction to Programming course and lab during my first semester. While I imagine that today they use some cool and zippy language, back then they used Fortran, a programming language that only a mother language could love. The class was fine, but throughout the course, I began to have visions of myself growing old sitting in front of a room-sized mainframe typing in endless subroutines using indecipherable languages. As the old joke goes, "a computer without COBOL and Fortran is like a piece of chocolate cake without ketchup and mustard." That's a bit of an exaggeration, but let's face it, back then, being a computer professional was a lot different than it is today. So I chose a different path, but wound up in computers anyway. Fancy that.
Even though I chose to turn away from the "Dark Side" of development and became a DBA (i.e. Jedi Master), I've always regretted it a little bit. Why? Because programming is fun. But let's make a distinction here between programming and software development. Programming is cool, creative and useful. Software development is an everlasting grind of hateful user requirements, rigid coding standards and endless revisions because your functional analyst wants the company logo moved three pixels to the left of where it is on the company website.
True story here. During my incredibly short stint as a sort-of web developer, I was assigned to revise the page on a company's website that had the pictures and biographies of the CEO and all his lackeys. The page was fine, but then I got a request that came from the CEO - his picture on the page needed to be bigger than everyone else's. Why? Well, he's the CEO, that's why – he's better than everyone else. So I did it and moved the page elements around to allow for the bigger picture. Soon after, I started getting requests to put in bigger pictures of the lackeys, as well. Why? Well, they're important too! So I did that. Then the CEO was pissed so he ordered an even BIGGER picture and a longer, more flowery bio. Then the lackeys... well you get the idea. It was the Cold War all over again. So I'm making a distinction here between writing actual programs that do something as opposed to a dog and pony show for a bunch of suits.
The IT world is so specialized these days that we DBAs don't get to sling code on a regular basis, unless it's maybe PL/SQL or some shell scripts. A lot of DBAs are missing out on the fun. Maybe you've gotten the chance to debug some Perl or Python. That stuff is good too, but there's a whole world of cool, useful tools that have yet to be coded, because YOU haven't coded them.
We talked last time about GUIs and the bias against them. My main problem with GUIs is that they can only do what they're programmed to do. But what if you could make your own GUI that would do whatever you wanted? Well, "I could never do a GUI" you say. "There's all the drawing objects at the right pixel coordinates," etc, etc. Nope. I haven't had to do stuff like that since the days of my Commodore 64. Modern software is mostly based on libraries of code that some other poor shmuck has already done. You don't really need to "draw" a window – you just find out what the command is to say, "Hey – put a window on the screen." The libraries for windows, dialog boxes, dropdowns, etc, have probably already been written for your language of choice. If they haven't, well, you're probably writing in Fortran. Shame on you.
I'm not saying it's easy, but it's also not as hard as you think. A few years back, I stumbled on some example code on a website that let you make simple GUIs in Tcl/Tk. Tcl is a language, by the way. Tk is a set of extensions that lets you make pretty GUI-type stuff. I typed the commands into my Linux console and, voila – pretty windows and clicky boxes. A light clicked on somewhere in my head and I figured out the general idea of how this worked. All you're really doing is making function calls. We DBAs know how to do this. If you do a SELECT AVG(SALARY) FROM EMP, you're just passing in the values from the SALARY column of the EMP table and the AVG function spits out the results. Using GUI libraries in some languages isn't that much more complicated. It's all pushing and pulling the data you want in and out of these functions/subroutines.
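To make that concrete, here's what the "it's all function calls" idea looks like in Python's tkinter, which is actually a wrapper around that very same Tk toolkit. This is just my own illustrative sketch – the names and layout are mine, not from any particular tutorial:

```python
def average(values):
    # Same shape as SELECT AVG(SALARY) FROM EMP: feed data in, get a result out.
    return sum(values) / len(values)

def build_gui():
    # "Hey - put a window on the screen." Tk does all the drawing;
    # we never touch a pixel coordinate, we just call functions.
    import tkinter as tk

    root = tk.Tk()
    root.title("AVG calculator")
    entry = tk.Entry(root)
    entry.pack()
    result = tk.Label(root, text="enter numbers, comma-separated")
    result.pack()

    def on_click():
        nums = [float(v) for v in entry.get().split(",")]
        result.config(text="average: %.2f" % average(nums))

    tk.Button(root, text="Average", command=on_click).pack()
    return root

# On a desktop machine, uncomment this line to see the window:
# build_gui().mainloop()
```

Notice that the "GUI part" is maybe a dozen calls into a library somebody else wrote. The actual work – the `average` function – is the same push-data-in, pull-result-out pattern a DBA uses every day.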
A while back I wrote a program in Perl that works as a GUI interface to Data Pump. Not a CGI that runs from a webpage (although that's cool too), but a real, bona fide, run-on-your-desktop GUI. Yes, it took a while – I don't have a degree in Comp Sci and all my experience in coding is self-taught. It's probably moderately useful, but more than anything else it was COOL. It's hard to match the satisfaction of creating your own useful tool, whether it's a script or a GUI, that solves a problem. You're not gonna program the next sequel to Doom (that's a video game), but you can still do cool stuff. So don't sell yourself short – dive in and learn something new. Give in to the Dark Side a little. Yoda won't mind.
Categories: DBA Blogs
GUI or not GUI
One of the longest and loudest controversies in the DBA world is that of the graphical user interface vs command line. Some of the opinions sound like this…
“GUIs are for newbies who don’t know what they’re doing.”
“Why should I learn all the commands – there’s already a tool to do that.”
“GUIs are too slow.”
“Learning the command line takes too long.”
“I don’t need to learn a bunch of commands that I’ll never use – I just want to get my job done.”
My own feelings about this go back to my early days as a DBA. I had this supervisor who was an absolute wizard when it came to Enterprise Manager. Now, we’re talking the early OEM that came with Oracle version 8.0, here. Ancient stuff. If it could be done with OEM, this guy could “git ‘er done”. One day, though, some kind of devastating emergency happened. As a newbie, I wasn’t always trusted to handle the big issues, so I went to the supervisor and told him the situation.
“Hey boss, we need to do so-and-so.”
“Oh,” says Boss, “I don’t know how to do that with Enterprise Manager.”
“Um,” I says, “I don’t think you *can* do that with Enterprise Manager.”
“Oh,” says Boss, “Then what do we do?”
I remember the look of defeat on his face. He was a nice guy, he wanted to help, he was responsible to help, but since Oracle hadn’t written that particular ability into his GUI tool, he had no idea as to how to do it. It made an impression on me. I decided then and there - that wasn’t going to be me. I made a commitment that lasted for years – I will not use GUI tools. No matter how much longer it takes me to do the job, with looking up commands and all, I will abstain from the evil of the GUI. And so I did.
As a result, I learned the command line. I REALLY learned the command line. SQL*Plus was my home. Not only did I learn a ton of data dictionary views by heart, over time, I sort of developed a “feel” for syntax even if I didn’t know it. I could kinda intuit what might be in a certain v$ view or I could guess what the columns of a particular dba_* view should be. It was and is incredibly useful and I don’t regret it. I wrote and saved my little scripts to do things. But, over time, I started to look down on my peers who used GUI tools, inwardly thinking they really couldn’t hack it from the command line. You obviously don’t say something like that, but you joke about it, etc, just to let them know. It probably didn’t help matters that in the ultimate GUI vs command line deathmatch, Windows vs Linux, I was (and am) squarely on the Linux side.
What started to change me was, ironically, Enterprise Manager. Although I didn’t use it, I’d kept up with OEM, watching it get, for the most part, better and better. But when 10g was released, it was like OEM had a bar mitzvah, sweet sixteen and a coming-out party all in one. Re-christened as Grid/Database Control, you could do dang near EVERYTHING with OEM now. OEM was finally a comprehensive tool. It was so comprehensive, that it started to shake my “GUIs are for losers” mentality. I thought, I could really do some damage with this OEM thing (in a good way). I started to think in terms of what would be more efficient, OEM or command line, for different situations. Command line was still winning in my mind, but not by as much as before.
The thing that finally “brought balance to the force” for me was a quote I read by a well-known Oracle consultant/author/blogger guy. If I said his name, you’d probably recognize it. I read something of his where he was consulting for a client and said this, almost verbatim, “I knew their DBAs were incompetent because they were using Enterprise Manager.” Whoa. Now it’s true that I didn’t want to be like my old boss, unable to do anything without a GUI, but I sure didn’t want to be like this arrogant bastard either. Besides that, I had seen enough of Grid/Database Control to know that his reasoning was crap.
In the end, the command line versus GUI war boils down to a few principles for me. A good DBA needs to be efficient. If you’re more efficient using a GUI than command line, then go for it. If, on the other hand, the only reason you use a GUI is that you’re just too lazy to learn the commands, then you get what you deserve. I’m still heavily command line oriented, but, in truth, I know there are instances where it would just be faster to use a GUI tool. Take, for instance, performance tuning. Everybody has their own way of doing it, but Grid/Database Control really does a good job of pulling a lot of different metrics together. It would take a lot of scripts to pull that much information into one place. It’s not for everyone, but it shouldn’t just be written off without a second thought. And when you decide which one's "faster", you have to take into consideration the amount of time it took for you to come up with that whiz-bang script of yours.
And if you're still a hardcore command liner, I'll try to change your mind next time. What if you could make your own GUI? Hmm?
Categories: DBA Blogs
Everybody needs a spare database
I've gotten a little preachy in this blog lately, so I thought this time I'd give you something useful. Have you ever wished you had a quicky little set of database tables so you could do some generally wacky stuff that would likely get you fired if you did it on your production database? I thought so. In the past, the only way to do something like this was to build another database somewhere. Of course, where to put it? Some of us weirdos have machines at home where we build databases, do virtual machines or stuff like that. Guilty. But not everyone wants to tie up their home machine with the multi-gigabyte behemoth that Oracle 11g has become. Well, have I got a deal for you.
Oracle provides a nifty little free service to show off their Oracle Application Express product (APEX), which I'm not sure has been as popular as they'd like it to be. You can register at their site and get your own little workspace that will allow you to play around with Oracle a little.
Here's how it works.
- Go to http://apex.oracle.com and click the link to "Sign Up"
- Click through the "next" buttons, giving Oracle your name and email address. Give them a real one since they'll send the verification link to it.
- Provide a name for your workspace and a schema name for your database objects
- Next you have to give a reason for requesting an account. Now, I don't know if anyone actually reads these or not, but you'd probably be better off if you didn't put something like "That dork from alt.oracle said it would be cool." Try "Evaluation purposes" instead.
- Next, you type in your little verification thing with the goofy letters and click "Submit Request"
- After a bit, you'll hopefully get an email back saying "click this link to verify, etc".
- Lastly, you'll get another email with your login.
Then you can login and poke around. Truthfully, you can do a lot of stuff on your new personal Apex. I'm not super familiar with it yet, but it looks like you can...
- Create your own tables, indexes, constraints, sequences, etc
- Run SQL statements and scripts
- Build PL/SQL objects
- Build your own webby-type applications with the GUI "Application Builder"
I'm not sure yet if you can build web apps that you and others could access from a browser without going through the whole Apex frontend, but if so, that would be uber-cool. One word of warning, however. FOR THE LOVE OF ALL THAT IS HOLY, DON'T PUT ANY REAL DATA IN THIS CRAZY THING! I have no idea as to how secure it is – it's only for evaluation purposes, so DON'T DO IT.
You can't do a lot of administration-type stuff with your own personal Apex. If you're looking to mess with parameter files and flash recovery areas, it's time to bust out a virtual machine. But it is nice to have a place where you could try some SQL stuff without fear of a pink-slip visit from HR. So go get your account and do some crazy, webby SQL stuff. And, finally, FOR THE LOVE OF ALL THAT IS HOLY, DON'T PUT ANY REAL DATA IN THIS CRAZY THING!
Categories: DBA Blogs
Oooohhh... shiny!
I went to last year's Oracle Open World. I'd always wanted to go, but having been a consultant for so many years, those opportunities don't always come your way. In my experience, companies will spring for their own employees to go to Open World, but "no way" to that lousy, overpaid consultant who probably won't even be here next week. That leaves it up to the consulting company, whose take on things is usually, "If you don't already know everything they're talking about at Open World, then why did we hire you? Get back to work!" But since I work for a good consulting company, they offered me the chance to go.
Open World is a blast. If you're a geeky Oracle person like me, it's a complete nerd-o-gasm. First of all, Oracle's always announcing the "next big thing" – this year, it was the Oracle Linux kernel (perhaps the subject of a future post) and the latest in Exadata. Then you have your session speakers, most of whom are pretty good. The technology booths full of people trying to sell you stuff are always cool. Of course, best of all is all the free swag you get. I came home with more techie junk than you can shake a datafile at. Let me tell you, it takes some mad ninja skilz to nab 11 t-shirts from Open World and get them home. I had to throw away all my underwear just to get them to fit in my luggage (don't ask me how the flight home was...).
Of course, the real focus of any Open World is the same as that of a lot of the software industry – better, faster, stronger, more. Newer and shinier. What you have isn't what you need. I can't fault them for this – they need to keep selling stuff to compete and to stay in business, and innovation is a huge part of what we do. Progress is good. But sometimes a DBA needs to distinguish between something that represents progress and something that represents a big ol' pile of shiny.
I talked last time about how being a good DBA means having a healthy dose of skepticism. That has to apply to "new feature-itis" too. Part of what I do involves evaluating new technologies. Not only do I have to evaluate the tech to verify that it does what it says it does, I need to assess that its benefits are worth the time, risks and cost of adopting it. As an evaluator, there's an implied trust with my employers that if I recommend a shiny, new feature, it's because it will benefit their interests – not necessarily mine. I haven't seen it often, but I can remember working with more than one DBA who didn't share my take on this. I've come to expect non-technical people to fall into the whole "Look! Shiny!" thing when it comes to new tech. But some technical folks in positions of authority see new tech as a way to 1) pad their resume ("why yes I've worked with feature X, I helped bring it into my last company"), or 2) make them indispensable, since they adopted it and are the only ones who understand it. When I was a newbie DBA, I knew a senior DBA who did just that - repeatedly. Everybody could see it, but nobody was in a position to do anything about it. Then, he left and the rest of us were put in the position of having to support this big, expensive, shiny nightmare.
Flying along the bleeding edge can be a bumpy ride. Resist the urge to pad your resume at the expense of your employer. Otherwise, your big ol' pile of shiny might become a big ol' pile of something else.
Categories: DBA Blogs
Magical Snapshotty-things
I spent some time with a storage vendor recently. Vendors kill me. No matter what you say, they still cannot for the life of them understand why you are not using their product. And if you are, they are mystified by the fact that you're not using every last bell and whistle. In this case, the vendor was questioning why we weren't using their magical-snapshotty backup solution. Basically the way their backup feature works (similar to most snapshotty type of features) is that when a backup occurs, only the deltas are written out. Then, pointers/vectors (vectors sounds cooler) act as reference points to the delta blocks. If a recovery has to occur, the product is smart enough to follow the pointers and put the blocks back together. The upshot of stuff like this is that recoveries are blazingly fast and the amount of data written is extremely small.
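If you want a feel for the mechanics, here's a toy copy-on-write sketch in Python. The class and method names are entirely my own invention – the real product does this down at the storage-array level, not in application code:

```python
# Toy copy-on-write snapshot: a "backup" writes nothing up front; it only
# accumulates deltas (old copies of blocks that change afterwards), plus
# implicit pointers back to unchanged base blocks.

class Volume:
    def __init__(self, blocks):
        self.blocks = list(blocks)   # the live data
        self.snapshots = []          # each snapshot is a dict of deltas

    def snapshot(self):
        # Taking a snapshot is instant - no data is copied yet.
        self.snapshots.append({})
        return len(self.snapshots) - 1

    def write(self, idx, data):
        # Before overwriting a block, preserve the old version in any
        # snapshot that hasn't already captured it.
        for snap in self.snapshots:
            snap.setdefault(idx, self.blocks[idx])
        self.blocks[idx] = data

    def read_snapshot(self, snap_id, idx):
        # Recovery: follow the pointer to a delta if one exists; otherwise
        # the block never changed and still lives in the base volume.
        return self.snapshots[snap_id].get(idx, self.blocks[idx])

vol = Volume(["A", "B", "C"])
s = vol.snapshot()
vol.write(1, "B2")   # only this one delta block gets stored
restored = [vol.read_snapshot(s, i) for i in range(3)]
```

The catch, and the point of this post: notice that `read_snapshot` still reaches into the live base volume for unchanged blocks. There is no second, physically independent copy of the data anywhere.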
Very attractive - too good to be true right? Maybe a little - which takes me to my conversation with the vendor and my point about the inability of vendors to see past their product.
Me: So, your solution doesn't actually copy the data anywhere, except for the deltas?
Them: Yes, that makes it extremely fast and it uses tiny amounts of space.
Me: Okay, but that means there's not a complete copy of the data on a physically separate part of the SAN?
Them: Yes, and it's extremely fast by the way.
Me: Um, yeah. So what if something radical happens? What if you lose more disks in the RAID group than you have parity disks?
Them: --Laughs--. That never happens.
Me: Really? I've seen RAID5 groups where two disks failed simultaneously.
Them: No way. Really?
Me: Yep. I've seen it happen three different times.
Them: --dumbfounded look. crickets chirping--
Me: So, you're willing to sign a form that guarantees that your storage system will never have a failure of that nature?
Them: --exasperated look-- Well, we can't really do that.
Me: Hmm. That's a shame.
In the end, they were probably frustrated with me, and I didn't intend to make them mad, but sometimes a DBA has to call BS on things. There's nothing wrong with their product. It's a very good product and we may end up making use of it in some way. The problem is that they're proceeding from a false assumption: namely, that unlikely failures are impossible failures. They're not.
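Some back-of-the-envelope arithmetic shows why "that never happens" doesn't hold up. Every number below is an assumption I picked purely for illustration – plug in your own failure rates and fleet sizes:

```python
# Odds of a second disk failing while a RAID-5 group is rebuilding.
# All inputs are illustrative assumptions, not vendor specs.

annual_failure_rate = 0.02   # 2% chance a given disk fails in a year (assumed)
rebuild_hours = 24           # window where the group runs unprotected (assumed)
surviving_disks = 7          # 8-disk RAID-5 group, minus the disk that failed

hourly_rate = annual_failure_rate / (365 * 24)

# Probability that at least one of the surviving disks fails
# during the rebuild window:
p_second_failure = 1 - (1 - hourly_rate) ** (rebuild_hours * surviving_disks)

# Any one rebuild is a long shot. But across a fleet of 500 such RAID
# groups, each rebuilding (say) twice a year:
rebuilds_per_year = 500 * 2
p_somewhere = 1 - (1 - p_second_failure) ** rebuilds_per_year
```

With these made-up-but-plausible numbers, a single rebuild has well under a 0.1% chance of hitting a second failure – but across the fleet and the year, the chance that it happens *somewhere* climbs to roughly one in three. Unlikely is not impossible, which is exactly the argument the vendor kept waving away.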
In my last post, I mentioned that I would talk about the second common problem I see in the DBA world with backups, and that is "shortcuts" – ways to make things better, faster, stronger that ultimately leave you with a noose around your neck. The skeptic in me says, if it sounds too good to be true, it probably is – or at least there are probably some strings attached. If these guys were selling a magical-performance-tuney thing, it would be different. But as a DBA, you need to understand that there is no area where your fanny is on the line more than the recoverability of the data. If you lose data and can't recover - it's gone - and you may be too.
With all apologies to Harry Potter, the trouble with magic is that it isn't real. Database administration isn't an art – it's a hard, cold science. In the end, there aren't many shortcuts to doing your job. If you're going to use a magical backup solution, you have to be dead sure 1) that you know the exact process as to how you're going to magically recover that data and 2) that you've taken every eventuality into consideration.
So in the end, problem #2 is similar to problem #1. Test things and make sure you know what you're doing. If red flags go up, stop and think. I don't want to see you in the unemployment line.
Categories: DBA Blogs
We don't need no steennkking recoveries!
Since this is a new blog, let's start with something basic - backups. Everybody knows you do need those 'steenking backups'. You know it, the rest of your team knows it, even the suits know it (they read it in a Delta Airlines inflight magazine). But there are a couple of problems I see with backups these days. The first lies with the DBA and, sadly, it can get your ass fired.
Yes, you did a nice and proper RMAN backup of your database. You did the right syntax, you reviewed the log, you even did a 'report need backup' command and made sure it came back clean. The real question is: if you lose everything, do you know how to put it back together? In my observation, it's utterly confounding how few DBAs today know how to do a recovery. Because let's face it - doing a backup is a deceptively simple process. You start RMAN and type 'backup database'. You can make it more complicated than that, but it doesn't always have to be. Backup is clean, orderly and comfortable. Recovery is messy, complicated and scary if you don't know what you're doing. Ask yourself these questions.
- You lose a datafile. Do you know how to do a complete recovery without restoring the whole database? Do you know how to do it while the database is online?
- You lose an entire database – control files, redo logs and datafiles. Can you do a complete recovery from that? Try it – it's an enlightening exercise.
- A brilliant developer drops the most important table in your production database. Can you do a point-in-time incomplete recovery to roll the database forward to the point right before the table was dropped?
- You lose an entire database, including the archivelogs. Have you tried the process of pulling the last backup off of tape and then doing a restore?
The list goes on. So how do you learn to do this stuff? You learn by doing it. I tell students in my classes that if you want to know how do recoveries, break stuff and then see if you can fix it. Literally. Build a database that no one else is using. Then, delete the system datafile and try to recover. Delete two redo logs and see how far you can get. Delete the system datafile AND the control files and see what you can do. It's one of the most enlightening experiences a DBA can go through. You'll learn what really makes a database tick. Consider this scenario – your mission critical, never-goes-down, life-blood of the company database experiences a media failure. The suits are surrounding you in your cube, watching your every move, questioning your every decision, telling you how much every moment of downtime is costing them, while sweat pours off your face onto your shaking hands as you try to think of the next command to type. I've seen it happen before.
Several years ago, I worked for a company that had a division that decided they needed their own personal DBA – the "regular" DBA group wasn't giving them enough TLC, I guess. They hired a guy who claimed to have extensive knowledge of Oracle version 9iR2 (way back then, yeah). He started on a Monday. That same day, the server on which his database lived had a drive controller error that wrote corrupt data across the entire RAID array. Time for recovery! Unfortunately, new guy (who was a really nice fellow) didn't have a clue as to what to do, and, worse, he didn't know some basic 9i stuff. He didn't know what an spfile was and he kept trying to "connect internal". Long story short, new guy started on a Monday and was gone by Wednesday.
Spare yourself the agony. Practice, practice, practice. Test, test and test again. We'll talk about the second problem next time. Until then, go break something – and try to fix it.
Categories: DBA Blogs
alt.oracle – the blog for the rest of us
Welcome to alt.oracle - the Oracle blog of Steve Ries. What can you expect to find here? Well, this blog is designed to be different. I've read a lot of people who blog about Oracle and, while they're fine, they tend to fall into two categories. One, Oracle "business types" telling you about SOA, blah, blah, E-business, blah, blah, cloud computing, blah. The second are those that continually write articles on obscure, esoteric technical solutions that you may have need of once in your lifetime if someone held a gun to your head and said... "Okay punk. Write a concrete metaclass in Ruby that will take all the data in your database, convert it to the Japanese 16-bit Shifted Japanese Industrial Standard character set, put it back in sideways and cause all your reverse key indexes to spontaneously combust. And for God's sake use the declarative style!!" That stuff is all well and good, but I'm an Oracle DBA and have been one for a long time. If I were going to read an Oracle blog on a regular basis (which is my hope for this blog), I'd want it to be a little more relevant to my daily life. And a little bit more entertaining.
If you're wondering about me, I've been a DBA for about 13 years. I hold five Oracle certifications and have been an Oracle instructor, training students in the Oracle certification tracks for about 6 years. I currently consult for a large governmental body best known for its guns and large tanks. I've specialized in designing high performance Oracle RAC systems all the way back to when it was called Oracle Parallel Server. But beyond that, I'm a geek. A pure Star Trek watching, video game playing, Linux compiling, unapologetic geek. I'm crazy enough to think that what I do for a living is so cool that if they asked me nicely, I just might do it for free. That being said, there are certainly parts of the job that I don't like. While I've learned to love good manager-types (they're rare and worth their weight in gold), the bad ones make my skin crawl. And sometimes I think that if someone uses another cool buzzword like "ROI" one more time, I'll go into an epileptic fit. So I like to complain about and make some good natured fun of "the suits" as they're often called. Heck - we all like doing that behind their backs. That's why Dilbert is so popular.
However cold and hard my computerized heart may be, I have a soft spot for Oracle newbies - people who are just trying hard to "get it done" while the suits are standing over their shoulders telling them to "make it faster!". So expect some helpful hints and stories along the way. Of course, I take no responsibility if any of my "helpful hints" turn out to break your database. Always test stuff somewhere safe first.
If you're wondering about the name "alt.oracle", it comes from the Usenet newsgroup alt.* hierarchy that goes all the way back to the "Great Renaming" of 1987 (I'm not making that up - look it up). I'm old enough to have actually used the Internet before the World Wide Web came around, back in the days of Gopher and WAIS, and Usenet was one of my favorite browsing spots. The groups in the alt.* hierarchy ranged from the extremely weird to the naughty, naughty, but tended to fall under the idea that they were "alternative to the mainstream". Taking the aforementioned goals into consideration, I thought it an appropriate name for this blog.
So maybe you get the idea by now. Sometimes informative, often opinionated, but always interesting. If you're an everyday, lunch pail, trying-to-get-it-done DBA, I think you'll like it here. If you're a “suit”, it would probably be better if you just left now...
Categories: DBA Blogs