I just wanted to post a correction for Steve Adams' website URL:
http://www.ixora.com.au/
Good luck,
-- jeff
+--------------------------------------------------------------------+
| Jeffrey M. Hunter | jhunter_at_idevelopment.info |
| Sr. Database Administrator | JeffreyH_at_CoManage.net |
| CoManage | OFFICE: (412) 318-6007 |
| 12330 Perry Highway        | WEB   : www.idevelopment.info      |
| Wexford, PA 15090 | WEB : www.comanage.net |
+--------------------------------------------------------------------+
"Richard Foote" <richard.foote_at_bigpond.com> wrote in message
news:EkYk9.41128$g9.118026_at_newsfeeds.bigpond.com...
> Hi Christian,
>
> May I suggest checking out Steve Adam's website http://www.ixora.com .
>
> In the tips section, he discusses the most appropriate block size
> depending on your environment.
>
> Cheers
>
> Richard
> "Christian Svensson" <chse30_at_hotmail.com> wrote in message
> news:ccc2a7eb.0209270426.7ac82043_at_posting.google.com...
> > Greetings all,
> >
> > In our staging database (~200 GB) for our data warehouse we have a
> > great deal of large, I/O-intensive batch transactions where we do a
> > lot of selects/inserts/deletes/updates of 200,000 - 400,000 rows per
> > transaction in our ETL process. Later, when all the
> > validation/scrubbing is done, a Cognos application selects approx.
> > 90 million rows into its own files, from which our users do the actual gets.
> >
> > Currently we use a block size of 8K but are thinking of choosing a
> > higher block size.
> >
> > Environment:
> > - Sun Solaris
> > - Raid 0+1
> > - 8.1.7
> >
> > Does anyone have any ideas, or can anyone recommend any good
> > papers/articles/URLs on this topic?
> >
> > Thanks a lot
> >
> > Cheers !
> >
> > /Christian Svensson
>
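For a rough sense of what a larger block size buys on full scans like the
batches described above, here is a back-of-the-envelope sketch. The 100-byte
average row size is a hypothetical figure (not from the thread), and block
overhead such as PCTFREE and the block header is ignored, so treat the
numbers as illustrative only:

```python
import math

def blocks_for_rows(rows, avg_row_bytes, block_size):
    """Estimate how many blocks a full scan of `rows` rows touches.

    Ignores PCTFREE and block-header overhead; assumes rows pack
    contiguously, so this is a lower bound on real block counts.
    """
    rows_per_block = block_size // avg_row_bytes
    return math.ceil(rows / rows_per_block)

# One 400,000-row ETL batch at a hypothetical 100 bytes/row:
for bs in (8192, 16384):
    print(f"{bs:5d}-byte blocks -> {blocks_for_rows(400_000, 100, bs)} blocks")
```

Doubling the block size roughly halves the block count for sequential
scans, which is why larger blocks are often suggested for warehouse-style
workloads; whether that helps in practice still depends on the buffer
cache, OS I/O size, and row layout, which is what Steve Adams' tips cover.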
Received on Sun Sep 29 2002 - 00:24:39 CDT