Re: design question, use of partitioned tables?
Manuela Mueller <mueller_m_at_fiz-chemie.de> wrote in message news:<bgr86c$rmr97$1_at_uni-berlin.de>...
> xfs block size is 4k (default value). I should change this to 8k in the
> next setup.
That's not too bad. It would be a lot worse if the xfs block size were *larger* than your db block size. All it means is that the fs will do two I/O requests for each single one from the db. Not optimal, but you can live with it.
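If you want to double-check the db side of that comparison, the block size is visible in v$parameter (needs a DBA-type account):

```sql
-- Confirm the db block size to compare against the fs block size
SELECT value FROM v$parameter WHERE name = 'db_block_size';
```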
> I'm currently running 4 independent sqlldr sessions but initrans,
> freelists are on the default value 1. Thanks for the hint, I must change
> these values.
It should get you less contention for the last block of a table (the "insert new row" point, also known as the HWM).
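Just as a sketch of the kind of change I mean (table name made up, use your own; one freelist per concurrent sqlldr session is the usual rule of thumb):

```sql
-- More freelists = less contention on the block at the HWM
-- where concurrent sessions insert new rows
ALTER TABLE docs STORAGE (FREELISTS 4);

-- More initial transaction slots per block; note this only
-- affects blocks formatted after the change
ALTER TABLE docs INITRANS 4;
```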
> Each session has 3.5 million rows to load, location of the
> lob file is dynamically specified in control file (filler) and the *.dat
> file contains the absolute path to the lobfile. Size of each *.dat file
> is about 860M.
Where is the lob data to be loaded, in your nfs partition? Can you post your CREATE TABLE statement, your sqlldr control file and some details of the Text indexing you do? It might help trigger a "flash" in anyone listening here.
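For reference, the sort of control file layout I'd expect for this is something along these lines (all names here are made up, not yours):

```sql
-- Sketch of a sqlldr control file using a FILLER field to hold
-- the lobfile path read from the *.dat file
LOAD DATA
INFILE 'docs1.dat'
APPEND INTO TABLE docs
FIELDS TERMINATED BY ','
( doc_id      INTEGER EXTERNAL,
  lob_path    FILLER CHAR(255),
  doc_content LOBFILE(lob_path) TERMINATED BY EOF
)
```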
> application). Direct load exits when I try to load many records with a
> null lob. After a few initial tests I returned to slower conventional
> path :( .
Bugger!
> >
> 32 GB is the max size for a datafile on Linux. We haven't encountered any
> problems with the size yet.
Thanks for the info. SuSE, wasn't it?
> Good question, I never performed a load at this magnitude. When I load
> about 3 Million records it takes about 1 h.
That's not too bad, considering you're loading lobs as well. The rest of the time is spent creating the Text indexes?
Partitioning looks very promising for scaling. Do you have a date column you could partition on? I'm thinking one partition per 7-day period: drop the oldest and create new ones as needed.
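Something like this, purely as a sketch (column, partition names and dates all made up):

```sql
-- Range-partitioned by date, one partition per week
CREATE TABLE docs (
  doc_id      NUMBER,
  load_date   DATE,
  doc_content CLOB
)
PARTITION BY RANGE (load_date) (
  PARTITION p_wk1 VALUES LESS THAN (TO_DATE('2003-08-11','YYYY-MM-DD')),
  PARTITION p_wk2 VALUES LESS THAN (TO_DATE('2003-08-18','YYYY-MM-DD'))
);

-- Rolling the window: drop the oldest week, add the next one
ALTER TABLE docs DROP PARTITION p_wk1;
ALTER TABLE docs ADD PARTITION p_wk3
  VALUES LESS THAN (TO_DATE('2003-08-25','YYYY-MM-DD'));
```

Dropping a partition is near-instant compared with deleting that many rows, which is the attraction for a rolling load like yours.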
Still, it's very strange that you get such low CPU activity even with concurrent loads. I'd say the main problem is the nfs data source. Do a test by putting some data on a local disk, then do a trial load from there and another of the same volume from nfs. Compare CPU and I/O activity across the system for both. That should tell you for sure whether the problem is the db's handling of lobs or the nfs.
A visit to Metalink for any outstanding bugs (features?) on lobs in your version of the software may pay off really well.
Cheers
Nuno Souto
wizofoz2k_at_yahoo.com.au.nospam
Received on Wed Aug 06 2003 - 19:27:08 CDT