sandy,
saw your thread re the size of the archive log directory and had
another possible angle for you to look at.
not knowing your architecture or tape archiving method, and making a
very large generalisation, looking at the amount of data that you are
trying to put to tape, 11gb in 4hrs (roughly 0.8 MB/s sustained) seems
fairly slow for modern streaming devices, ie dlt, dds3 units etc.
maybe get your sysadmin to look at the method you're using to put the
files to tape and see whether changing the block size could be of any
benefit.
as an eg, we were taking approx 13hrs to put 80gb+ to a dlt7000 via
the following command (no blocking factor given, so tar's default was
used):

tar cvpf /dev/rst13 -I <file containing list of files>

utilising iostat and playing with the block size being used to put the
data to tape, we found that

tar cvpbf 80 /dev/rst13 -I <file containing list of files>

(a blocking factor of 80, ie 80 x 512-byte blocks = 40kb per write)
was completing in under 4hrs.
what we did was use iostat to review the kilobytes read/written per
second on the various devices until we found a 'best fit' blocking
factor.
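a rough sketch of that sweep (not our exact script -- /dev/rst13 is the
device from the commands above, and note gnu tar spells the include-list
option -T where solaris tar uses -I). it writes to a plain file here so
it runs without a tape drive; point ARCHIVE at your tape device for real
runs:

```shell
#!/bin/sh
# sketch: time tar with several blocking factors to find a best fit.
# ARCHIVE and FILELIST are placeholders; for a real run set ARCHIVE to
# your tape device (eg /dev/rst13).
ARCHIVE=${ARCHIVE:-/tmp/bench.tar}
FILELIST=${FILELIST:-/tmp/bench.list}

# demo file list if none supplied (remove for real use)
[ -s "$FILELIST" ] || ls /etc/*.conf > "$FILELIST" 2>/dev/null \
    || echo /etc/hosts > "$FILELIST"

for b in 20 40 80 126; do
    # blocking factor is in 512-byte units, so b=80 means 40kb writes
    echo "=== blocking factor $b ($((b * 512)) bytes per write) ==="
    start=$(date +%s)
    tar cpbf "$b" "$ARCHIVE" -T "$FILELIST"
    echo "    took $(( $(date +%s) - start ))s"
done
# run 'iostat 5' in another window while this loops and watch the
# kilobytes/sec written to the device to pick the best fit.
```

on a real tape drive you'd use a file list big enough to keep the drive
streaming for a few minutes per pass, otherwise the timings won't mean
much.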
another major thing we found was to ensure that the tape device you're
using has the most up-to-date driver, to ensure maximum throughput.
also, i don't know the size of your archive log files, but within that
4hrs you could also look at setting up background jobs at very low
priority to compress the archive logs, and then only put those that
have been compressed to tape.
this again adds to the time it takes to recover (the logs have to be
uncompressed before they can be applied), but it's another possible
option to ensure that your archive window is not exceeded.
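a minimal sketch of that idea, assuming gnu find/gzip and a made-up
archive destination path (substitute your own log_archive_dest and log
name format; the 5-minute age check is just to avoid touching a log
that is still being written):

```shell
#!/bin/sh
# compress archived redo logs at low priority so only *.gz files get
# picked up by the tape job. ARCH_DIR and '*.arc' are placeholders.
ARCH_DIR=${ARCH_DIR:-/tmp/arch_demo}

# demo setup: fake an hour-old archived log (remove for real use)
mkdir -p "$ARCH_DIR"
touch -d '1 hour ago' "$ARCH_DIR/demo_1_100.arc"

# only touch logs older than 5 minutes, in case one is mid-write;
# nice -n 19 keeps the compression from starving the database
find "$ARCH_DIR" -name '*.arc' -mmin +5 | while read -r f; do
    nice -n 19 gzip "$f"
done
```

the tape job then picks up '*.arc.gz' only, so it never grabs a log
that is half-compressed.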
if you would like any help, feel free to ask any q's. the work we did
took approx 2-3 hrs to evaluate the best block size fit, but when you
look at the reduction from 13hrs to 4hrs to complete backups it seemed
very worthwhile.
sorry if this is completely off beam,
karl
- "Koivu, Lisa" <lkoivu_at_qode.com> wrote:
> Sandy,
>
> Add as much space as you can get!!!
>
> If you are shooting for 4 hour intervals, I would ask for something
> like
> <your # of arclogs per hour at peak> * <size of arclogs> =
> hourly_bytes
>
> 4 * <hourly_bytes> = initial size of arclog destination
>
> Assuming it takes 1 hour to run your arclog cleanups: add another
> hourly_bytes to your total. That way you can hold 4 hours worth of
> arclogs
> plus another hour's worth while the cleanup script is running. Of
> course
> this is very pessimistic.
>
> Does that make sense? You may also choose to size it down if you
> have
> definite information about your peak processing times (only lasts an
> hour,
> etc.)
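
[working lisa's rule above through with hypothetical numbers -- 12
logs/hour at 100 MB each, which are not figures from this thread:]

```shell
#!/bin/sh
# worked example of the sizing rule quoted above, made-up peak numbers
logs_per_hour=12        # archived logs per hour at peak (hypothetical)
log_mb=100              # size of each archived log in MB (hypothetical)

hourly_mb=$((logs_per_hour * log_mb))   # 1200 MB generated per peak hour
dest_mb=$((4 * hourly_mb + hourly_mb))  # 4hr window + 1hr cleanup slack

echo "initial arclog destination: ${dest_mb} MB"
```

[with these numbers that comes to 6000 MB, well under the 11G sandy
mentions -- which suggests her peak rate is a good deal higher than
this.]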
>
> I feel your pain....HTH
> Lisa
>
>
> -----Original Message-----
> Sent: Monday, July 31, 2000 2:33 PM
> To: Multiple recipients of list ORACLE-L
>
>
> We've recently had a lot of problems with our archive log directory
> filling
> and the database freezing. Now that I've mastered how to fix this
> quickly,
> I'm wondering why I have to fix it so often. Is there some sort of
> optimal/minimum size for an archive log directory? We are running a 1
> terabyte db with varying amounts of activity, and the archive log
> directory
> can hold up to 11G. The archived logs are being written to tape every
> 4 hrs,
> but this often isn't fast enough--sometimes the directory fills in an
> hour.
> Should I just increase the frequency that the logs are written to
> tape? Add
> space? Any advice would be appreciated.
>
> Thanks,
> Sandy
>
>
>
> --
> Author: Sandy Ocheltree
> INET: ora_dba_at_hotmail.com
>
> Fat City Network Services -- (858) 538-5051 FAX: (858) 538-5051
> San Diego, California -- Public Internet access / Mailing
> Lists
>
Received on Mon Jul 31 2000 - 18:43:31 CDT