Hampus Linden


Backups, cheaper and faster?

Sun, 2008-03-09 18:21
When did backups become easy and affordable?
I've been sort of slacking off in the backup game for the last year or so; our backups have been managed by other parts of the company, and some have been outsourced to BSPs. I recently specced and deployed a basic Netbackup solution for our office servers at work and was amazed at how extremely simple the whole thing was. The first backup system I worked with was Legato Networker (now owned by EMC), and while it was a pretty great product at the time, it was quite difficult to manage: lots of things needed to be scripted, and setting up a new tape library required quite a lot of tweaking.
Secondly, the budget looked fairly limited at first glance, but when we specced up what we needed and got a couple of quotes in, I was amazed at how much we actually got for our money.
I wanted basic disk staging followed by duplication to tape. The obvious choice for tape was LTO4: good price/performance, market-leading capacity, and a solid standard.
Most suppliers actually quoted the same tape library, just branded by different vendors: HP, Dell, IBM. All were IBM 3573 tape libraries; it's an excellent and affordable library. Dell came in best on price, as usual. We opted for SAS attached. The backup server, my first-choice server these days, is the Dell PowerEdge 2950; I buy it for pretty much 75% of all requirements. Specced out with a pair of quad-core 1.86GHz processors, 4GB RAM, 6x500GB SATA drives (in RAID5) for the disk staging pool and a SAS5 HBA to attach the library.
Software-wise the server is running CentOS 5.1 and Symantec/Veritas Netbackup 6.5 "Linux Starter Edition (LSE)". The LSE is a pretty good deal: you get a server license, a tape library license and 5 client licenses (with bare metal, open file etc.) for a fairly low cost.
Installing and configuring Netbackup was as easy as it gets: the tape library was detected on boot, no sysctl.conf changes needed. I stepped through the Netbackup installation process, added the tape library (less than 5 clicks!), defined a couple of tape label policies, created a 2TB disk staging pool with my new tape group as the target, just add water (ehm, clients actually).

The total cost was under £13,000 (about $22,000) and installation time was less than one day. Included was:

  • Veritas Netbackup 6.5 Linux Starter Edition
  • Netbackup clients 5-pack
  • Netbackup Exchange agent
  • Dell PowerEdge 2950 server
  • Dell PowerVault TL2000 library w/ one LTO4 SAS drive
  • 25x LTO4 tapes
  • 3x LTO cleaning tapes
  • CentOS Linux 5.1 (free)

How to delete an object with a special character in Oracle

Wed, 2008-02-20 11:52
There are some things in Oracle that are possible but really shouldn't be.
One thing I love to hate is the fact that you can create tables with almost any name, just as long as you double-quote it.
For example:
SQL> create table "My Fruit Table" (id number);

Table created.

SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
My Fruit Table                 TABLE
Horrible! And what's even more horrible is that people actually do this.

Now to the problem: a user created a table with a special character in the name. I'm not even sure which character, but I need a way to drop it. PL/SQL to the rescue.
Example:
SQL> set serveroutput on
-- Let's create a dummy table with a bogus character in the name
SQL> declare
  a varchar2(500);
begin
  a := 'create table "abctest'||chr(150)||'" (id number)';
  execute immediate a;
end;
/

PL/SQL procedure successfully completed.

SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
abctest?                       TABLE

SQL> desc "abctest?";
ERROR:
ORA-04043: object "abctest?" does not exist

-- And a bit of PL/SQL to drop it
-- change the abctest% string to something matching your single table.
SQL> declare
  a varchar2(500);
  tname varchar2(50);
begin
  a := 'select table_name from user_tables where table_name like ''abctest%''';
  execute immediate a into tname;
  dbms_output.put_line('Table name is: '||tname);
  execute immediate 'drop table "'||tname||'"';
end;
/

Table name is: abctest?

PL/SQL procedure successfully completed.

SQL> select * from tab;

no rows selected
Handy.
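
By the way, if you're not sure which byte actually ended up in the name, dump() shows the underlying character codes:

SQL> select table_name, dump(table_name) from user_tables where table_name like 'abctest%';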

Buying marketshare

Sun, 2008-02-03 12:55
Ok, I'm way, waay late commenting on the two most recent IT company takeovers.

Sun buys MySQL
Tricky one. I honestly have a hard time seeing where MySQL fits into Sun's portfolio.
Sun is one of Oracle's biggest partners (the biggest after Dell?) and one of the biggest backers of PostgreSQL; they even employ a few of the lead Postgres developers.
Sun has two options with MySQL. They can invest a lot of R&D in MySQL and actually turn it into a really good product. MySQL is without a doubt a product with incredible potential, but today it's pretty much an over-sized way to store CSV index files (a bit harsh perhaps, but hey).
Or they can ride out the profits by selling LAMP-stack boxes and let MySQL sustain itself; I doubt they can make up for the billion dollars they paid for it though.
Oh, and what will Sun make of the (rather evil) closed-source MySQL Enterprise Monitor administration product from MySQL? Sun shipping new closed-source software? Errr!

Some advice for Sun:
1) Add real constraint support. This is a show-stopper for many, and the InnoDB approach isn't good enough. MySQL needs native, solid, from-the-ground-up constraint support, independent of storage engine, version, bla bla. If people don't want the "performance loss" of constraints, they can simply not add the constraints!
2) Sort out proper I/O management: tablespaces, clean partitioning, all that stuff. MySQL has improved a lot on this point over the last 12 months, but more work is needed, especially if they are planning on selling MySQL-powered Thumpers.
3) 100% solid transactional support. MVCC would be fun. I've seen a few too many transactional problems in MySQL. Some sort of logging/undo/redo system is needed.

On top of that I wouldn't mind a clean hot-backup tool (which requires proper transactional support, with something like SCNs) and 3rd-party scripting language support (PgPerl anyone?).

That's about 250,000 R&D hours, go get busy!

Oracle buys BEA
This is most likely very good news. I'm a big fan of the Oracle database (duh) and a fan of the Weblogic application server.
What I'm not a big fan of is Oracle Application Server (oc4j).
I hope Oracle realizes that Weblogic is the superior product, works with Weblogic as the front runner, and ports oc4j features to Weblogic rather than the other way around. I doubt that will be the case though. If not, I hope they don't kill off Weblogic any time soon.

Hmm, that's all for now.

Websphere ND dmgr permission problems

Fri, 2008-01-18 05:23
Ran into a really weird problem with one of our Websphere 6.1 Network Deployment setups yesterday, and as I couldn't find a single page about the problem on Google I thought I'd blog it.
I'm not much of a Websphere admin, but I managed to fix it after a while.

The problem started when a datasource was updated and all of a sudden none of the node agents trusted the deployment manager (dmgr) any more. Syncing the nodes failed, and hence pretty much everything failed to restart / deploy. Running servers were fine though.
We got this message in the logs:
[1/17/08 13:30:52:020 GMT] 00000028 RoleBasedAuth A   SECJ0305I: 
The role-based authorization check failed for admin-authz operation
SSLAdmin:temporarilyDisableCertificateAuthentication:java.lang.Long.
The user UNAUTHENTICATED (unique ID: unauthenticated) was not granted
any of the following required roles: administrator.
My best guess is that the node agent configuration was corrupted in some way.
How do you fix the problem then?
Fairly easy actually.
1. Stop all node agents that seem broken (that would probably be all of them!).
2. Go to the node agent's bin directory on the node (usually something like $WAS_HOME/profiles/<profile_name>/bin/).
3. Manually sync the node with syncNode.sh, pointing it at the SOAP connector port (default is 8879) on the dmgr server. For example:
./syncNode.sh dmgrhost 8879 -username websphere -password webfear
4. Start the node agent and verify that the logs are happy. Kick off a cell sync from dmgr. You should see entries similar to this in the logs:
[17/01/08 16:14:59:872 GMT] 0000002f NodeSyncTask  A   ADMS0003I: 
The configuration synchronization completed successfully.

Solaris Express on a Toshiba Satellite Pro A200

Tue, 2008-01-01 05:25
I bought myself one of those cheap laptops the other month. I needed a small machine for testing, and since laptops are as cheap as (if not cheaper than) desktops these days, I got a laptop.
The machine came with Vista, but I wanted to triple-boot Vista, Ubuntu and Solaris Express Community Edition.

  • Use diskmgmt.msc in Vista to shrink the partition the machine came with; Windows can do this natively, so there is no need for Partition Magic or similar tools. Create at least three new partitions: one for Solaris, one for Linux and one for Linux swap.
  • Secondly, install Solaris: boot off the CD and go through the basic installer. The widescreen resolution worked out of the box (as usual). Do a full install; spending time "fixing" a smaller install is just annoying. Solaris will install its grub boot loader on both the MBR and the superblock (on the Solaris partition). It probably makes sense to leave a large slice unused so it can be used with ZFS after the installation is done.
  • Install Ubuntu. Nothing simpler than that.
  • Edit Ubuntu's grub menu config (/boot/grub/menu.lst) to include Solaris. Simply point it at the Solaris partition (hd0,2 for me) by adding these lines at the end of the file:
    title Solaris
    root (hd0,2)
    chainloader +1
Done!

I had to install the gani NIC driver in Solaris to get the Ethernet card working, and the Open Sound System driver to get sound working.
The Atheros WiFi card is supposed to be supported, but I couldn't get it to work even after adding the PCI device alias to the driver. I'll post an update if I get it working.

Solaris 10 on Dell PowerEdge 1950 and 2950

Thu, 2007-12-20 11:58
I probably get an email every couple of weeks about this one, not sure why.
E-mails regarding something in Solaris not working on Dell PowerEdge 9th-gen servers.

For any device driver related issues with Solaris on x86, the first resource to check if you run into trouble is the Sun HCL, the 'hardware compatibility list'.
Solaris 10 works fine on most Dell boxes, but some need NIC or HBA drivers.

For a Dell 2950, the HCL tells us that Solaris works from release 11/06 upwards; it also tells us to download the ethernet driver here and the MegaRAID SAS driver here.

NB, the Sun HCL does not mention the new 'III' series 9th-gen servers yet. I would guess they are working on it, as Sun and Dell recently partnered to provide better Solaris support. The PERC6 card may need new drivers.

Converting MySQL "on update current_timestamp" to Oracle

Wed, 2007-11-28 15:20
Another short, simple SQL for everyone out there in the process of converting old MySQL schemas to Oracle.

MySQL has a built-in feature to automatically update a column to the current timestamp whenever the row is updated, just by using the default clause. The "on update current_timestamp" feature can be quite handy if you have lazy developers who can't be bothered writing full insert statements. :)

The MySQL create table statement would be something like this:
create table p (
id int,
a varchar(10),
d timestamp DEFAULT CURRENT_TIMESTAMP
ON UPDATE CURRENT_TIMESTAMP,
constraint p_pk primary key (id)
);

Not difficult to do in Oracle either, but we need a trigger to assist.
Example:
SQL> alter session set nls_Date_format='HH24:MI:SS';

Session altered.

SQL> create table p (id number, a varchar2(10), d date default sysdate,
constraint p_pk primary key (id));

Table created.

SQL> insert into p(id,a) values(1,'test');

1 row created.

SQL> host cat p_test.sql
CREATE OR REPLACE TRIGGER p_d_trig
BEFORE UPDATE ON p
FOR EACH ROW
BEGIN
select sysdate into :new.d from dual;
END p_d_trig;
/

SQL> @p_test

Trigger created.

SQL> select * from p;

        ID A          D
---------- ---------- --------
         1 test       21:15:05

SQL> update p set a='foo' where id=1;

1 row updated.

SQL> select * from p;

        ID A          D
---------- ---------- --------
         1 foo        21:16:44

SQL>
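
Side note: the SELECT INTO from dual works, but a plain assignment does the same job without the context switch to SQL; a minimal variation of the same trigger:

CREATE OR REPLACE TRIGGER p_d_trig
BEFORE UPDATE ON p
FOR EACH ROW
BEGIN
  -- assign directly instead of selecting from dual
  :new.d := sysdate;
END p_d_trig;
/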

Azul Systems Java appliance

Sun, 2007-11-04 15:03
So, I've been dragged more and more into managing Java application containers like Weblogic, Websphere and JBoss. These have a tendency to be hugely complex beasts, almost as complex as our favorite database, and performance optimization is sometimes quite difficult; or it's simply too much effort to actually upgrade or replace the server.
Azul offers a quite neat (but a tad pricey) solution to this: they off-load Java computation to a separate appliance. The smallest model has 96 processing cores and 48GB of RAM; the big daddy has a massive 768 cores and 768GB of RAM. The hardware is engineered in-house by Azul and runs custom software (I would guess the OS is semi-based on one of our open-source friends; have a look at the ifconfig output). The application server is still a normal server (Linux/Solaris etc.); the small JVM on the server pretty much acts as a proxy between external resources, such as JDBC sources, and the actual JVM on the appliance.
Their marketing crew calls it a "turn-key solution"; it's not really that easy, but it's pretty straightforward to use.
The appliance itself takes about 15 minutes to install: set up the networking and the compute domain name and you are pretty much done.
The application side is almost as easy. Azul provides a sort of "JDK wrapper": you unpack the wrapper and run a simple shell script to integrate it with an existing JDK; the script asks for the path to the JDK to "mimic". It works with the IBM and Sun JDKs, both 1.4 and 1.5 (I haven't tried JRockit).
Change your appserver init scripts to use the new JAVA_HOME, give it a couple of Azul-specific JVM options and 10 gigs or so of heap, and off you go. One thing to remember is that most garbage collection arguments are now obsolete; Azul uses its own custom "pauseless GC".
The first thing that hit me when starting the first app server was how incredibly slow it was to deploy; deploying EJBs etc. took ages, but, hm, yes, that's an almost single-threaded operation. The application itself felt "ok" in use. Now for the cool part: we really couldn't get the application to run slowly with our benchmarks, at least not counting the fact that it eventually trashed the database server (just some random 8-core thing) behind it. The bottlenecks in the application tiers were all gone! It doesn't matter if 5 users are online or 250; the application performs exactly the same.
The simple conclusion now is "we need a bigger Oracle box!".

Azul provides quite a good web management console for the appliance. The GUI provides functionality to manage compute pools, if you need to throttle CPU and memory usage between applications or servers, and also provides views to monitor applications and utilization.

I guess one could call it a bit of a custom "throw hardware at the problem" solution; the box itself is quite expensive, but for a medium-sized company with loads of J2EE apps it makes sense.

Oracle SQL to return an alphabetical subset

Thu, 2007-11-01 07:15
Yes, I know, I've been waaay bad at blogging lately. I've been busy working on new projects at work, upgrading applications to Java 1.5 containers (Websphere 6.1, Weblogic 9.2 etc.). On the fun side, we've got an Azul Java-acceleration box; that really needs a couple of blog entries!

Anyway, I got asked if there was a way to return a resultset based on the leading character; the use case was to ignore all strings starting with a through d and return e to z.

Fairly straightforward, but a good SQL to have around.
I simply grab the first character of the varchar and compare its ASCII value to the ASCII value of D.
SQL> select * from t;

A
----------
Atest
Btest
Etest
Htest
Wtest
Dtest
dtest
SQL> with data as (
2 select ascii(upper(substr(a,1,1))) a_val,a from t
3 )
4 select * from data where a_val not between ascii('A') and ascii('D') order by a
5 /

     A_VAL A
---------- ----------
        69 Etest
        72 Htest
        87 Wtest
SQL>
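
Side note: on 10g and later the same filter can be written with a regular expression, which some will find easier to read:

select * from t where not regexp_like(a, '^[A-Da-d]');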

Oracle 11g: Archive log alert log output

Tue, 2007-08-21 02:42
Starting with Oracle 11g, archive log activity is no longer written to the alert log by default. For good and bad, in my opinion: it's nice to get rid of the log entries on smaller databases, but it can be quite useful to see the details on larger systems with multiple log destinations.

The log level is simply a database parameter and can easily be changed; look at this page to figure out what level you want. I figured 79 would be a good starting point (64+8+4+2+1). Databases running Data Guard should probably use 207 (add 128) to include FAL service details.

How to set the trace level:
[oracle@vm-rac1 ~]$ rsqlplus "/ as sysdba"

SQL*Plus: Release 11.1.0.6.0 - Production on Sun Aug 19 02:35:23 2007

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show parameter log_archive_trace

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_trace                    integer     0
SQL> alter system set log_archive_trace=79;

System altered.

SQL> alter system switch logfile;

System altered.

SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@vm-rac1 ~]$ talert # my shell alias to tail the current alert log
Current log# 1 seq# 7 mem# 0: /u01/app/oracle/oradata/test11g/redo01.log
Sun Aug 19 02:03:14 2007
ARC3: Evaluating archive log 3 thread 1 sequence 6
ARC3: Beginning to archive thread 1 sequence 6 (627362-628594) (test11g)
ARC3: Creating local archive destination LOG_ARCHIVE_DEST_10:
'/u01/app/oracle/flash_recovery_area/TEST11G/archivelog/2007_08_19/o1_mf_1_6_%u_.arc' (thread 1 sequence 6)
(test11g)
ARC3: Creating local archive destination LOG_ARCHIVE_DEST_1:
'/u01/app/oracle/11.1.0/db_1/dbs/arch1_6_630935902.dbf' (thread 1 sequence 6)
(test11g)
ARC3: Closing local archive destination LOG_ARCHIVE_DEST_10:
'/u01/app/oracle/flash_recovery_area/TEST11G/archivelog/2007_08_19/o1_mf_1_6_3dh5pld1_.arc'
(test11g)
Committing creation of archivelog
'/u01/app/oracle/flash_recovery_area/TEST11G/archivelog/2007_08_19/o1_mf_1_6_3dh5pld1_.arc'
ARC3: Closing local archive destination LOG_ARCHIVE_DEST_1:
'/u01/app/oracle/11.1.0/db_1/dbs/arch1_6_630935902.dbf'
(test11g)
Committing creation of archivelog '/u01/app/oracle/11.1.0/db_1/dbs/arch1_6_630935902.dbf'
ARC3: Completed archiving thread 1 sequence 6 (627362-628594) (test11g)
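
By the way, to see which archive destinations are actually in play before turning the trace level up, v$archive_dest has the details:

select dest_id, destination, status from v$archive_dest where status = 'VALID';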

Read details about the log trace level here and general alert log related stuff here.

Oracle 11g: basic rundown

Sun, 2007-08-19 15:09
Ok, so 11g has been out for some time, but I've been too busy to really dig into it until now. 11g is probably the first release that doesn't really introduce many new features; no, before you send me angry e-mails, hear me out :)
Take a look at the new features list. What do most of these features do? Easy: they just make the most out of the fairly old but totally outstanding REDO and UNDO management in Oracle. Why not use what we already have? Ok, some things are totally new features, like the improved partitioning and the new data types for the medical and life-sciences industries.

Partitioning and compression
The coolest things in 11g are probably the VLDB features for partitioning and compression, which can really reduce storage costs (at the same time as storage costs are plummeting with the competition from iSCSI). It is now possible to create "automatic" partitions: instead of adding a partition every month with DDL when doing a bulk load, Oracle can automatically partition the table per predefined rules. Pretty cool, and it saves a lot of time.
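
This is the new interval partitioning; a minimal sketch (table and column names are made up):

-- one partition per month, created automatically as rows arrive
create table sales (
  sale_date date,
  amount number
)
partition by range (sale_date)
interval (numtoyminterval(1,'MONTH'))
(partition p_first values less than (date '2008-01-01'));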

Direct NFS
Another pretty cool feature, at least for me as I'm using a lot of Netapp NFS storage, is the Direct NFS client in Oracle (on Linux). Switch to the Direct NFS lib and Oracle will do NFS operations outside the Linux kernel, which removes the NFS caching layer, the kernel block device mapping and so on. It gives quite a massive performance increase on some workloads. This is what I'm playing with most at the moment.
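
Once the Direct NFS library is linked in, it's easy to verify it is actually being used; the view below is empty when dNFS is inactive:

-- which NFS servers the Direct NFS client is talking to
select svrname, dirname from v$dnfs_servers;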

Data Guard
Lots of new cool stuff for Data Guard. Most significant is probably the fact that you can have an open standby, available for real-time queries from reporting applications and other read-only operations. In fact, you can even open a Data Guard instance in read-write mode, mess about with it, and later just flash it back to an SCN where the db was in sync with the Data Guard master db and roll forward to the current SCN using the normal FAL services. How sweet is that for DR testing and schema-change testing? DR testing during live operations; that's a couple of late nights saved.
Other new Data Guard features include redo shipping compression and the usual improvements to the DG broker, sqlplus manageability etc.
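
The read-write-then-roll-forward dance is wrapped in a pair of commands; a minimal sketch, run on the standby:

-- open the standby read-write for testing
alter database convert to snapshot standby;
-- ...run the DR or schema-change tests...
-- flash back and resume applying redo
alter database convert to physical standby;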


Workload replay
Another thing that is really useful for deployment testing and change control is the database replay feature: one can "record" all transactions over a period of time and replay them after a change has happened, to ensure performance and functionality have not degraded.
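
The capture side is driven from PL/SQL; a minimal sketch, assuming a directory object called CAPTURE_DIR already exists:

-- start recording the workload to CAPTURE_DIR
begin
  dbms_workload_capture.start_capture(
    name => 'pre_change_capture',
    dir => 'CAPTURE_DIR');
end;
/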


Improved ASM features
You can now bring a replaced drive back online, and new writes will go directly to the drive while Oracle syncs data from a redundant mirror in the background. This reduces resync time quite a fair bit.
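
The fast resync behavior hinges on telling ASM how long an offline disk may stay away before it gets dropped; a sketch (diskgroup and disk names are made up):

-- allow an offline disk 3.6 hours to come back before ASM drops it
alter diskgroup data set attribute 'disk_repair_time' = '3.6h';
-- once the drive is replaced, bring it online and resync from the mirror
alter diskgroup data online disk data_0001;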

PL/SQL
A couple of improvements to PL/SQL:
  • Use of sequences in declarations and assignments (see the sketch below).
  • Real compound triggers.
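
The sequence change means no more SELECT my_seq.nextval FROM dual just to grab a value; a minimal sketch (my_seq is assumed to exist):

declare
  -- 11g: sequences can be referenced directly in PL/SQL expressions
  v_id number := my_seq.nextval;
begin
  dbms_output.put_line('Got id: '||v_id);
end;
/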

New data types
Not really my thing but probably useful for a lot of people.
Spatial and multimedia support for medical imaging, life-sciences and other fairly odd things. :) Docs here.

A couple of minor things have changed as well: the alert log lives in a new place (with its buddies in the new diag system), and the logging levels for the alert log can now be adjusted (I'll have a blog entry about that in a couple of days).

Lots of fun things to play with and to blog about.

Ah, almost forgot. Larry pulled a version hack on us again. As some of you may know, there was never an Oracle version 1 release; the first version was called 2, since Larry Ellison didn't think anyone would buy version 1. The first (public) 11g release is 11.1.0.6.

Oracle Database 11g available for download

Sat, 2007-08-11 15:40
Well, it's about time: Oracle has finally made 11g available for download. Only 32-bit Linux so far though, and I have a feeling we'll have to wait a while for most other platforms (possibly with a 64-bit Linux download soon).

Download it here.

Lots of new cool stuff to blog about, I'm away on holiday for a week but my recently upgraded lab machines at home are sitting there waiting. Fun times when I get home.

Limit user sessions in Oracle

Mon, 2007-07-30 15:55
Developers sometimes enjoy configuring their JDBC datasources to their own liking, testing a performance issue by throwing a gazillion threads at the database "just in case we need them".
Well, in my opinion, if you need what is obviously too many threads, then there is a coding issue.
So to make people think twice before flooding a service with connections, I've started limiting the number of sessions they are allowed to have.
Easy as always to do in Oracle. First we allow resource limits by setting resource_limit to true, then we simply create a profile and assign it to the user. The profile can set a limit on how many sessions a user is allowed to have.

A little demonstration:
oracle@htpc:~$ rsqlplus "/ as sysdba"

SQL*Plus: Release 10.2.0.1.0 - Production on Mon Jul 30 21:56:12 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter resource_limit

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
resource_limit                       boolean     FALSE
SQL> alter system set RESOURCE_LIMIT=true scope=both;

System altered.

SQL> create profile sesslimit limit sessions_per_user 2;

Profile created.

SQL> create user fish identified by fish123
2 default tablespace data profile sesslimit;

User created.

SQL> grant create session to fish;

Grant succeeded.

SQL> connect fish/fish123
Connected.
SQL> disconnect
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
-- I'll start two sessions in another terminal.
SQL> connect fish/fish123
ERROR:
ORA-02391: exceeded simultaneous SESSIONS_PER_USER limit


SQL>
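
For the record, an existing account can be pointed at the profile with a one-liner:

SQL> alter user fish profile sesslimit;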

Using SERVICE_NAMES in Oracle

Sun, 2007-07-08 12:42
The use of "SERVICE_NAMES" in Oracle is quite an old and probably well-known feature, but perhaps not everyone is familiar with it yet.
I got asked today about a recovery scenario where the administrator had a failed instance (broken data files, no logs, no backups, just a nightly exp); a new database had been created with 'dbca', but with a new name, to test importing the exp file.
All worked fine, but there was a problem with the database name. The application had the service name set in a number of config files, and there were also a number of ETL scripts with service names hardcoded. The thinking at the time was to delete the old instance, remove all traces of it (oratab etc.) and then create it *again* with the same name.
Now hold on here: we have tested the imp in a new database, all is fine, and all we want to do is allow connections using the old database instance name?
That's pretty much a one-liner, not a new database.
We can simply add the new name we want to "listen on" to the SERVICE_NAMES parameter.
Easy peasy.

Ok, here is what we should do. It's quite a long example for something simple, but hey, I just want to make it clear.
oracle@htpc:admin$ lsnrctl status
-- What's the current db_name and service_names?
LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 08-JUL-2007 18:31:22

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date 08-JUL-2007 18:23:39
Uptime 0 days 0 hr. 7 min. 43 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/oracle/10g/network/admin/listener.ora
Listener Log File /u01/oracle/10g/network/log/listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=htpc)(PORT=1521)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "peggy" has 2 instance(s).
Instance "peggy", status UNKNOWN, has 1 handler(s) for this service...
Instance "peggy", status READY, has 1 handler(s) for this service...
Service "peggyXDB" has 1 instance(s).
Instance "peggy", status READY, has 1 handler(s) for this service...
Service "peggy_XPT" has 1 instance(s).
Instance "peggy", status READY, has 1 handler(s) for this service...
The command completed successfully

oracle@htpc:admin$ rsqlplus hlinden/hlinden as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jul 8 18:31:24 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter db_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      peggy
SQL> show parameter service_names

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      peggy
SQL>
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

-- What can we connect to?
oracle@htpc:admin$ rsqlplus hlinden/hlinden@//htpc/peggy

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jul 8 18:31:53 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL>
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
oracle@htpc:admin$ rsqlplus hlinden/hlinden@//htpc/dog

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jul 8 18:31:58 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor


Enter user-name:

-- Ouch, that's the one we want!

oracle@htpc:admin$ rsqlplus hlinden/hlinden as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jul 8 18:32:01 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

-- Here is the 'one-liner'
SQL> alter system set service_names='peggy,dog' scope=spfile;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 599785472 bytes
Fixed Size 2022600 bytes
Variable Size 167772984 bytes
Database Buffers 423624704 bytes
Redo Buffers 6365184 bytes
Database mounted.
Database opened.
SQL>
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

-- Let's see what changed. What can we connect to now?
oracle@htpc:admin$ lsnrctl status

LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 08-JUL-2007 18:33:57

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date 08-JUL-2007 18:23:39
Uptime 0 days 0 hr. 10 min. 18 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/oracle/10g/network/admin/listener.ora
Listener Log File /u01/oracle/10g/network/log/listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=htpc)(PORT=1521)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "dog" has 1 instance(s).
Instance "peggy", status READY, has 1 handler(s) for this service...
Service "peggy" has 2 instance(s).
Instance "peggy", status UNKNOWN, has 1 handler(s) for this service...
Instance "peggy", status READY, has 1 handler(s) for this service...
Service "peggyXDB" has 1 instance(s).
Instance "peggy", status READY, has 1 handler(s) for this service...
Service "peggy_XPT" has 1 instance(s).
Instance "peggy", status READY, has 1 handler(s) for this service...
The command completed successfully

oracle@htpc:admin$ rsqlplus hlinden/hlinden@//htpc/dog

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jul 8 18:34:18 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
-- It works, but where are we?

SQL> select sys_context('userenv','SERVICE_NAME') from dual;

SYS_CONTEXT('USERENV','SERVICE_NAME')
------------------------------------------------------------------------------------------------------------------------
dog

SQL> select sys_context('userenv','DB_NAME') from dual;

SYS_CONTEXT('USERENV','DB_NAME')
------------------------------------------------------------------------------------------------------------------------
peggy
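
Worth noting: service_names is a dynamic parameter, so the bounce above can actually be avoided; something like this should do it:

SQL> alter system set service_names='peggy,dog' scope=both;
SQL> alter system register;

The register call asks PMON to register the new service with the listener right away.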

The new top 500 supercomputer list is out!

Sun, 2007-07-01 16:48
I finally had some time to look at the latest Top500 supercomputer list, released last week at ISC '07.

Lots of changes compared to the old list, with two quite interesting new things: first, Intel multi-core chips; second, blades, loads and loads of blades.
A significant number of entries on the list use the Intel Xeon 5160 3.0GHz dual-core CPU, and there are also a number of quad-core systems.
Another interesting point is that the crown in terms of number of systems on the list has passed from IBM (with 192 systems) to HP (with 201 systems).
The HP Cluster Platform BL460c holds quite a few spots on the list. I can't say I've heard much about HP's HPC offering so far; they're not exactly beating the drum on that one (compared to the IBM guy who calls me every 2 months).

SGI holds a few new slots on the list as well, which is extra fun to see considering the hard year SGI has had. One Altix system even made it to number 10 on the list (with Columbia at number 13).

Top 500

Dell teaming with Oracle Enterprise Linux?

Thu, 2007-06-28 03:32
Just noticed that Dell changed the "Red Hat Enterprise Linux" label on their support website to just "Enterprise Linux".
Is this the first step toward Dell embracing Oracle Enterprise Linux?

Sun 2002 flashback

Sun, 2007-06-24 14:54
We've been doing some shuffling around of hardware at work and I've recommissioned some old Sun hardware for a new QA environment.
Oh the memories I have of old Sun kit :)
The most exciting thing I've installed is a Sun v480 connected to a T3 brick. That combo was pretty much the industry-standard Oracle solution when I had my first job in "real" systems administration, and it was probably one of the first real mid-range installs I did (not counting ageing 420R/450s).
To be honest, it's a match made in heaven. A quad-CPU v480 with a T3 brick (or two) kicked out some real performance numbers in its day. The v480 is probably one of the nicest boxes to work with (considering age and all): the boards are easy to fit, the RSC is awesome, it's simply rock solid. Running Solaris 8, of course (we still have *lots* of clients on Solaris 8, with Solaris 10 gaining ground).


The T3 array is pretty cool, but it brings back the frustrations everyone had with early low-end SANs: the limit of two RAID volumes with very limited zoning and slicing. But hey, it's pretty old and it's still pretty fast.



Now if only I could figure out how to get rid of that darn Sun E250 we still have running Sybase 12.5.4; it just works too well so far.

Reclaiming LOB space in Oracle

Thu, 2007-06-14 06:19
Reclaiming space in Oracle can sometimes be a bit of a "problem"; not really a problem, it just works in a funny way. It's quite a common question I get, and users are usually happy with a manual alter table table_name shrink space compact;, but what do we do for LOBs?
We need a manual reclaim for the LOB column.

A little demo:
(spinner1)oracle@spinner[~/lob_test]$ rsqlplus hlinden/hlinden

SQL*Plus: Release 10.2.0.2.0 - Production on Thu Jun 14 12:19:02 2007

Copyright (c) 1982, 2005, Oracle. All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

-- Create a table and sequence to play with
HLINDEN@spinner1> create table lob_test (id number, data blob);

Table created.

HLINDEN@spinner1> create sequence lob_test_seq;

Sequence created.

-- Load 50 rows with 1.5Mb blobs (see code below)
HLINDEN@spinner1> @lobload

PL/SQL procedure successfully completed.

-- Find out what our lob segment is called
HLINDEN@spinner1> select object_name,object_type from user_objects where
2 created>sysdate-interval '5' minute and object_type='LOB';

OBJECT_NAME                    OBJECT_TYPE
------------------------------ -------------------
SYS_LOB0000199575C00002$$      LOB

-- Display the current size of the lob segment
HLINDEN@spinner1> select round(sum(bytes)/1024/1024) Mb from user_segments where
segment_name='SYS_LOB0000199575C00002$$';

MB
----------
75

-- Ok, let's delete those blobs and see what the size is after
HLINDEN@spinner1> delete from lob_test;

50 rows deleted.

HLINDEN@spinner1> select round(sum(bytes)/1024/1024) Mb from user_segments where
segment_name='SYS_LOB0000199575C00002$$';

MB
----------
75

-- Still 75Mb. Hm, perhaps the space is reused if we insert more data?

HLINDEN@spinner1> @lobload

PL/SQL procedure successfully completed.

HLINDEN@spinner1> select round(sum(bytes)/1024/1024) Mb from user_segments where
segment_name='SYS_LOB0000199575C00002$$';

MB
----------
150

-- Nope, not recycled. We need to issue a shrink command to free up the
-- space immediately

HLINDEN@spinner1> delete from lob_test;

50 rows deleted.

HLINDEN@spinner1> alter table lob_test modify lob (data) (shrink space);

Table altered.

HLINDEN@spinner1> select round(sum(bytes)/1024/1024) Mb from user_segments where
segment_name='SYS_LOB0000199575C00002$$';

MB
----------
0
-- All gone!
My simple blob loading code:
DECLARE
  src_file BFILE := bfilename('TMP', 'data.dat');
  dst_file BLOB;
  lgh_file BINARY_INTEGER;
  cur_id   NUMBER(10);
BEGIN
  FOR i IN 1..50
  LOOP
    INSERT INTO lob_test(id,data) VALUES(lob_test_seq.nextval,empty_blob())
    RETURNING id INTO cur_id;
    -- lock record
    SELECT data INTO dst_file FROM lob_test WHERE id=cur_id FOR UPDATE;

    dbms_lob.fileopen(src_file, dbms_lob.file_readonly);
    lgh_file := dbms_lob.getlength(src_file);
    dbms_lob.loadfromfile(dst_file, src_file, lgh_file);
    dbms_lob.fileclose(src_file);
  END LOOP;
END;
/
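
Side note: user_lobs maps columns to their LOB segments directly, which is a slightly cleaner way to find the segment name than the user_objects query above:

select column_name, segment_name from user_lobs
where table_name = 'LOB_TEST';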

Oracle 11g - one month to go

Tue, 2007-06-12 17:05
Yep, exciting times. July 11th, lots of new cool stuff.

Read about a couple of new things in Insider.

If you're lucky enough to be in New York, sign up for the launch event!

New blade enclosure from Sun

Sun, 2007-06-10 11:34
Ever since Andy Bechtolsheim returned to Sun, pretty much everything in the Sun x64 server line has met all expectations and then some. Sure, there has been the slight ethernet chipset problem, but generally the AMD kit has really kicked ass.
Though some people were surprised to see the first (new) blade offering from Sun, the 8000 series: a 4-socket blade offering when most of the market had moved away from larger blade enclosures and focused on 2-socket systems. The 8000 still made good sense for larger customers like financial institutions and telcos; server consolidation on a large scale.

Now the other day Sun released a second (third actually, the 8000 comes in two models) blade offering: the 6000 series. It's a fairly basic 10U enclosure with 10 blade slots and the classic Ethernet modules. Not very exciting at first glance, I have to admit; it sounds like the PowerEdge 1955 box, but without the same high density (Dell can do 10 blades in 7U).
The cool thing about the 6000 enclosure is that it is not a blade enclosure in the classic sense; it's not "stripped-down computers in a box", it's actually 10 high-capacity servers. The enclosure offers the same expandability as normal 1U servers.
All blade modules are dual-socket and can take up to 16 DIMMs, four 2.5" SAS drives and even two PCI-e slots. That's right, normal PCI-e slots; no more proprietary on-board FC-AL or Infiniband modules. Just slot in a couple of standard c-PCIe cards and off you go. No more opening up the blade to install modules.
Another new feature is an on-board hardware RAID controller (for good and bad, indeed).
The whole enclosure is based around industry standards and open solutions.

In addition to all this cool stuff, the real winner here is the selection of blades.
Sun offers no less than three blade types:
  • the t6300 UltraSPARC T1 blade
  • the x6220 AMD blade
  • the x6250 Intel Xeon blade
The x6250 is the first Intel Xeon offering (including quad-core procs) from Sun, with many more to come.

Watch Andy talk about the new box here.
