Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Home -> Community -> Usenet -> c.d.o.server -> A couple of tablespace questions for the experts

A couple of tablespace questions for the experts

From: Paul Dixon <root_at_127.0.0.1>
Date: Fri, 19 Sep 2003 12:42:08 +0000 (UTC)
Message-ID: <bketj0$9p3$1@visp.bt.co.uk>


I am planning the design of a new database to host an existing decision support system, and am trying to decide:

  1. Whether to use Automatic Segment Space Management for tablespaces, or to use uniform extent sizes (as in Howard Rogers' tablespace fragmentation document) and manually manage the freelists / freelist groups.
  2. Whether there are any significant implications from using a small number of large datafiles for a tablespace instead of a larger number of smaller
    (i.e. 2 GByte) datafiles.
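For anyone weighing in, a minimal sketch of the two alternatives in question 1 as they would look in 9.2 DDL (device paths, tablespace names, and sizes below are illustrative only, not from my actual layout):

```sql
-- Option A: locally managed, uniform extents, Automatic Segment Space
-- Management (ASSM). Oracle manages space via bitmaps; FREELISTS /
-- FREELIST GROUPS in STORAGE clauses are ignored for segments here.
CREATE TABLESPACE data_assm
  DATAFILE '/dev/vg01/rdata_assm_01' SIZE 2000M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 4M
  SEGMENT SPACE MANAGEMENT AUTO;

-- Option B: same uniform extents, but manual segment space management,
-- with freelists / freelist groups set per segment. In RAC, FREELIST
-- GROUPS is conventionally set to the number of instances.
CREATE TABLESPACE data_manual
  DATAFILE '/dev/vg01/rdata_man_01' SIZE 2000M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 4M
  SEGMENT SPACE MANAGEMENT MANUAL;

CREATE TABLE batch_load_target (
  id     NUMBER,
  loaded DATE
)
TABLESPACE data_manual
STORAGE (FREELISTS 4 FREELIST GROUPS 2);
```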

Background.

Currently the application is running on Oracle 8.1.7.4 / HPUX 11i (64 bit). Hardware is dual processor, 1.5 GB RAM and 60 GB disk space (RAID 0).

The new database will be a two node 9iRAC Cluster Oracle 9.2.0.4 / HPUX 11i
(64 bit). Each node will have 4 processors and 4 GB RAM. Disk cluster is 30
x 18 GB disks (configured as two RAID 1 + 0 stripesets). Datafiles will be on RAW devices (HPUX doesn't support a cluster file system).

The database workload consists of daily batch loading and processing of base data from other systems (large inserts and updates) and read-only user access via a PL/SQL web application (lots of full table / partition scans). Current hardware is performing OK but won't be able to handle the anticipated workload generated by larger batch data loads and more users. The 9iRAC configuration is intended to improve both availability and scalability of the application.

I have so far researched several sources for answers to my questions, including the Oracle 9iRAC concepts guide, the platform-specific installation guides, the Oracle online manuals at tahiti, the Oracle newsgroup archives on Google, and the Ask Tom website (and both of Tom Kyte's excellent books).

I found numerous references to Automatic Segment Space Management and its "potential" benefits, especially in a RAC environment, but also some concerns about its impact on the performance of full tablescans. Can anyone point me to a source for some hard numbers on this, or suggest how I might go about working it out for myself? My application currently doesn't have multiple processes inserting into or updating the same table, so freelists aren't an issue at the moment. This may change if I need to redesign it to make better use of the new hardware, but I don't think there will ever be more than a couple of concurrent processes trying to update/insert into the same table.
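In case it helps frame an answer: the kind of measurement I had in mind for working it out myself would be something like tracing identical full scans against a test table in an ASSM tablespace versus a manually managed one, then comparing the tkprof output. A rough sketch (table names here are hypothetical; both tables would be loaded with the same data first):

```sql
-- Trace the current session, force full scans of two identically
-- loaded test tables, one per space-management mode.
ALTER SESSION SET SQL_TRACE = TRUE;

SELECT /*+ FULL(t) */ COUNT(*) FROM test_assm t;
SELECT /*+ FULL(t) */ COUNT(*) FROM test_manual t;

ALTER SESSION SET SQL_TRACE = FALSE;

-- Then run tkprof on the trace file from user_dump_dest and compare
-- consistent gets, physical reads and elapsed time for the two scans.
```

I realise the difference may only show up at realistic table sizes, so any guidance on how big the test tables need to be before the comparison is meaningful would also be welcome.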

My development and test platforms are currently both 32 bit, so 2 GB datafiles are the largest I can have. The current production environment is 64 bit, but with existing tablespace sizes (10 GB max) there hasn't been a real need for datafiles larger than 2 GB. I haven't seen anything so far to suggest that having larger datafiles would make a difference either way in a system of under 300 GB, but then I could easily have missed something, and I don't currently have any first-hand experience of 9iRAC or RAW devices.
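One practical angle I did spot, for what it's worth, is simply file count: at ~300 GB, sticking with 2 GB files means on the order of 150 datafiles, which has to fit under the DB_FILES parameter and the control file's MAXDATAFILES. Something like the following is what I had in mind for checking headroom (the value 300 is an illustrative guess, not a recommendation):

```sql
-- How many datafiles does the database have now?
SELECT COUNT(*) FROM v$datafile;

-- Raise the soft limit if needed; DB_FILES is not dynamic, so this
-- takes effect at the next restart (assumes an spfile is in use).
ALTER SYSTEM SET db_files = 300 SCOPE = SPFILE;
```

Whether there is any per-file overhead (checkpointing of file headers, backup management, etc.) significant enough to tip the decision is exactly the sort of thing I'd like to hear about from anyone who has run both layouts.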

Any thoughts, suggestions or pointers to other resources gratefully appreciated.

Paul Dixon

Received on Fri Sep 19 2003 - 07:42:08 CDT

