Re: design question, use of partitioned tables?
"Manuela Mueller" <mueller_m_at_fiz-chemie.de> wrote in message news:bgqr98$rlcs6$1_at_uni-berlin.de...
>
Ah, that's better. Try and get smaller partitions, definitely. Make sure you create the db datafiles first, in freshly made partitions, before those partitions see too much activity. Check whether the block size used internally by XFS is the same as your database block size; the "man" pages should tell you how to verify it.
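For the database side of that comparison, a quick check (assuming you can query v$parameter; the XFS side should come out of something like xfs_info on the mount point):

  SQL> SELECT value FROM v$parameter WHERE name = 'db_block_size';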
Another thing: how are you getting these "files to load" into sqlldr? Do you catenate them and then load as a single file? Can you merge them into larger files?
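If it helps, sqlldr will take more than one input file in a single run. Roughly like this in the control file (table, column and file names are made up, adjust to your own):

  LOAD DATA
  INFILE 'chunk01.dat'
  INFILE 'chunk02.dat'
  INFILE 'chunk03.dat'
  APPEND
  INTO TABLE my_big_table
  FIELDS TERMINATED BY ','
  (col1, col2, col3)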
You may be able to run, say, 4 parallel loads into the same table by splitting the input files across 4 independently run scripts. Before you do that, make sure you set the table up with INITRANS 4 and FREELISTS 4 (a bit of overkill, but it won't hurt).
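Something along these lines on the target table before you kick off the parallel scripts (table name is just a placeholder; this assumes a version where FREELISTS can be altered, otherwise set it at create time):

  ALTER TABLE my_big_table INITRANS 4 STORAGE (FREELISTS 4);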
Table partitioning, with sqlldr loading each partition, is of course another option, but you need an easy way of separating the input data files by partitioning key, and that is not always possible.
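Just to make the idea concrete, a rough sketch with invented table, column and partition names:

  CREATE TABLE docs (
    doc_id    NUMBER,
    load_date DATE,
    doc_text  VARCHAR2(4000)
  )
  PARTITION BY RANGE (load_date) (
    PARTITION p_2003h1 VALUES LESS THAN (TO_DATE('2003-07-01','YYYY-MM-DD')),
    PARTITION p_2003h2 VALUES LESS THAN (TO_DATE('2004-01-01','YYYY-MM-DD'))
  );

Each control file would then say "INTO TABLE docs PARTITION (p_2003h1)" (or whichever partition that script owns), so the parallel loads never step on each other.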
Another thing to try, given that you use sqlldr, would be direct path loads instead of conventional path. Read about them in the Utilities manual and see if they are applicable to your data.
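The command line itself is the easy part; a minimal example, assuming a control file like the one above and with the username and file names as placeholders:

  sqlldr userid=loader/password control=load_docs.ctl log=load_docs.log direct=true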
> The Database Server is another node which has XFS partitions with 272 GB
> in size. Partitions reside on a SAN. During setup of the DB Server I
> created fresh partitions with XFS and datafiles each 32 GB in size.
Much better. I'm a bit surprised you can get db datafiles of 32 GB on Linux with a 32-bit version of Oracle and not have any problems. Still, if it's working, no problem. But much larger file system partitions might run into performance problems with Unix/Linux.
> Redo log files (each 100 MB) are also located on XFS partitions; log
> switches amount to 1 - 2 per day in 'normal' usage (not Oracle Text index
> creation).
Yeah, but how many switches in a load situation? It doesn't hurt (other than the disk overhead) to make them larger.
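If you want hard numbers, something like this against v$log_history counts switches per hour over the last day (just a sketch, adjust the time window to cover a load run):

  SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
         COUNT(*) AS switches
  FROM   v$log_history
  WHERE  first_time > SYSDATE - 1
  GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
  ORDER  BY 1;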
Another thing: do a complete check that constraints, primary keys and the like are disabled while the big data load is running. Triggers as well. All it takes is one left active and bang: there goes your performance. Big data loads like these need a plain vanilla table with nothing fancy on it. It's easy to overlook some long-forgotten detail, so do another check.
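A quick way to double-check, assuming the table is in your own schema (use dba_constraints/dba_triggers plus an owner filter otherwise; the table name here is just a placeholder):

  SELECT constraint_name, constraint_type, status
  FROM   user_constraints
  WHERE  table_name = 'MY_BIG_TABLE';

  SELECT trigger_name, status
  FROM   user_triggers
  WHERE  table_name = 'MY_BIG_TABLE';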
How long does it take to create the indexes and such after the data is in? Does the total time you mentioned include index creation? If so, how long is the table load alone?
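If you want to split those numbers out, timing the index build on its own in sqlplus is trivial (index and column names are made up; NOLOGGING during the build is worth considering if you can simply re-create the index after a failure):

  SQL> SET TIMING ON
  SQL> CREATE INDEX docs_id_ix ON docs (doc_id) NOLOGGING;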
--
Cheers
Nuno Souto
wizofoz2k_at_yahoo.com.au.nospam

Received on Wed Aug 06 2003 - 09:52:51 CDT