Theoretical Archive Question [message #59985] Wed, 07 January 2004 23:27
Keir Hawker
Messages: 8
Registered: September 2003
Junior Member
Hi All,

I have a question that has been bothering me regarding archiving. I've looked around but can't come up with an answer. I'm currently running Oracle 8.1.7 on a Windows 2000 server.

Reading around, a lot of people concur that the redo log files should be archived roughly every half an hour (or more often, depending on how much data you can afford to lose).

My database has different loads at different times (as I'm sure most of yours do). So instead of keeping my log files at a set size of 50MB and tuning them so they are written to, on average, every half an hour, why don't I have some large redo files (e.g. 200MB) and a job that switches the log file every half an hour?
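
Something like this is what I have in mind (an untested sketch; note the job owner would need ALTER SYSTEM granted directly, since role privileges don't apply inside jobs):

    -- Submit a background job that forces a log switch every half hour.
    VARIABLE jobno NUMBER
    BEGIN
       DBMS_JOB.SUBMIT(
          job       => :jobno,
          what      => 'BEGIN EXECUTE IMMEDIATE ''ALTER SYSTEM SWITCH LOGFILE''; END;',
          next_date => SYSDATE,
          interval  => 'SYSDATE + 30/1440');  -- re-run every 30 minutes
       COMMIT;
    END;
    /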

This doesn't seem like a bad idea to me; however, I've never heard of anyone doing it. Is there anything wrong with doing this?

Thank you very much.

Keir Hawker
DBA.

P.S. I would also like to thank Thiru for the help that he has dished out to us; you've been a great help over the last couple of months!
Re: Theoretical Archive Question [message #59992 is a reply to message #59985] Thu, 08 January 2004 06:40
croca
Messages: 50
Registered: January 2004
Member
Hi Keir.
We have over 600 Oracle databases in my company, and most of them archive every minute...
The redo size is 10MB, and it's fine.
But regarding your question, I would not use redo files that big! Suppose you lose just ONE file: you would lose 200MB of information and transactions!
If you have smaller files, you will most probably lose less information if one file gets damaged.
Besides, if you have to recover, you may need to recover just a few transactions (depending on the type of failure), and you will apply fewer archived redo files more quickly. I mean, if your failure demands that you recover just a few transactions:
1) with 200MB redo files, you will have to apply that whole file, which consumes more time just to recover a few transactions;
2) with, for example, 5MB redo files, you will most probably have to apply only a few files, and thus it will be faster.

I've been working as a DBA for 12 years and have never implemented archived redo logs that big; some databases I have run with 20MB redo...
And I don't worry about archiving every half hour...
We do it every minute and there is not much performance impact... it depends on the server as well.
Well, best of luck.
Re: Theoretical Archive Question [message #59994 is a reply to message #59992] Thu, 08 January 2004 06:54
Keir Hawker
Messages: 8
Registered: September 2003
Junior Member
Hi Croca,

Thanks very much. Holy moly, 600 databases? That should keep you nice and busy :-).

Thanks very much for your input. I was planning on decreasing the size of my log files anyway (from 50MB to 20MB) so that they fire off every 20 minutes or so (on average).
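
Since the online logs can't be resized in place, my rough plan (untested; group numbers and file paths are made up) is to add the new smaller groups and then drop the old ones:

    -- Add new 20MB groups (paths and group numbers are examples only).
    ALTER DATABASE ADD LOGFILE GROUP 4 ('D:\ORADATA\REDO04.LOG') SIZE 20M;
    ALTER DATABASE ADD LOGFILE GROUP 5 ('D:\ORADATA\REDO05.LOG') SIZE 20M;
    ALTER DATABASE ADD LOGFILE GROUP 6 ('D:\ORADATA\REDO06.LOG') SIZE 20M;

    -- Switch until the old groups are neither CURRENT nor ACTIVE
    -- (check STATUS in V$LOG), then drop them; the OS files remain behind.
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER DATABASE DROP LOGFILE GROUP 1;
    ALTER DATABASE DROP LOGFILE GROUP 2;
    ALTER DATABASE DROP LOGFILE GROUP 3;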

Cheers

Keir
Re: Theoretical Archive Question [message #59995 is a reply to message #59992] Thu, 08 January 2004 08:42
Thiru
Messages: 1089
Registered: May 2002
Senior Member
Hi there,
I have used redo log files of sizes from 8MB to 1GB, depending on the requirements. A full interval checkpoint and archive every minute doesn't look too good!

Mirror your redo log files across different disks (Oracle multiplexing when asynchronous I/O is available, or else hardware mirroring) and you won't have to worry too much about losing all your members at the same time.
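
For example (hypothetical paths; each member of a group sits on a different disk):

    -- Add a second member, on a different disk, to each redo log group.
    ALTER DATABASE ADD LOGFILE MEMBER 'E:\ORADATA\REDO01B.LOG' TO GROUP 1;
    ALTER DATABASE ADD LOGFILE MEMBER 'E:\ORADATA\REDO02B.LOG' TO GROUP 2;
    ALTER DATABASE ADD LOGFILE MEMBER 'E:\ORADATA\REDO03B.LOG' TO GROUP 3;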

Quote :
---------
"Besides, if you have to recover, you may need to recover just a few transactions (depending on the type of failure) and you will apply fewer archived redo files quicker. I mean, if your failure demands you to recover just a few transactions:
1) in case you have 200mb redo files
you will have to apply that file, that would
insume more time, just to recover a few transactions
2) in case you have for example 5MB redo file
most probably you will have to apply a few files, thus, it will be faster"
---------
is not true, because Oracle will only read as many redo entries from the archive log file as it needs, not the whole file (agreed, you have to restore the whole file, but it shouldn't make that much difference when it's just a copy).

Frequent checkpointing is a performance killer; whether it's observed or not, thanks to abundant resources, is a different thing.
I still like log switching every 15-30 minutes...

-Thiru
Re: Theoretical Archive Question [message #59996 is a reply to message #59985] Thu, 08 January 2004 09:18
Thiru
Messages: 1089
Registered: May 2002
Senior Member
Hi,
What you are thinking of (i.e. making them bigger and switching every 30 minutes through a job) is certainly possible. But during high-transaction times, you will be disabling the job so as not to cause too many checkpoints, which would slow down your transactions.
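
For instance, you could mark the switch job broken for the busy window and re-enable it afterwards (the job number 42 below is just an example; use whatever DBMS_JOB.SUBMIT returned):

    -- Suspend the log-switch job during peak hours.
    EXECUTE DBMS_JOB.BROKEN(42, TRUE);
    -- Re-enable it once the busy period is over.
    EXECUTE DBMS_JOB.BROKEN(42, FALSE, SYSDATE);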

You will need to monitor your alert.log for the current frequency of log switches and for any 'checkpoint not complete' or 'Thread 1 cannot allocate new log' messages, and eventually come up with a size that strikes a balance between performance and the amount of data loss tolerated.

But also remember that incremental checkpoints can be configured (using FAST_START_MTTR_TARGET or LOG_CHECKPOINT_INTERVAL, etc.) that can make the datafiles consistent from the recovery perspective even before the next log switch happens. Although the incremental checkpoints are not updated in the datafile headers, they do get recorded in the controlfile (which should be mirrored anyway), so during instance recovery SMON only needs to apply the redo entries from the online redo log files since the last incremental checkpoint, not necessarily from the full interval checkpoint caused by the log switch. This keeps the argument in favour of big log files.
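
Besides the alert.log, a quick way to eyeball the switch rate is the controlfile history in V$LOG_HISTORY (a rough sketch):

    -- Count log switches per hour from the controlfile history.
    SELECT TRUNC(first_time, 'HH24') AS switch_hour,
           COUNT(*)                  AS switches
      FROM v$log_history
     GROUP BY TRUNC(first_time, 'HH24')
     ORDER BY 1;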

When it comes to completely losing your archived redo log file: yes, you lose more data if you have a 'big' log file. But there are clients who can afford to lose some amount of data and care a lot about performance. And there are companies that cannot afford to lose a single transaction (and they should be using standby databases anyway). So it all depends on the requirements.

The following extract from the Performance Tuning Guide still holds good:

"The size of the redo log files can influence performance, because the behavior of the database writer and archiver processes depend on the redo log sizes. Generally, larger redo log files provide better performance. Small log files can increase checkpoint activity and reduce performance. Because the recommendation on I/O distribution for high performance is to use separate disks for the redo log files, there is no reason not to make them large. A potential problem with large redo log files is that these are a single point of failure if redo log mirroring is not in effect.

It is not possible to provide a specific size recommendation for redo log files, but redo log files in the range of a hundred megabytes to a few gigabytes are considered reasonable. Size your online redo log files according to the amount of redo your system generates. A rough guide is to switch logs at most once every twenty minutes."

HTH
Thiru
Re: Theoretical Archive Question [message #59999 is a reply to message #59996] Thu, 08 January 2004 19:28
sachin kumar gupta
Messages: 157
Registered: March 2003
Senior Member
Please clear my doubt: does a checkpoint triggered by FAST_START_MTTR_TARGET, LOG_CHECKPOINT_INTERVAL, or LOG_CHECKPOINT_TIMEOUT cause a log switch or not?

TIA,
Sachin
Re: Theoretical Archive Question [message #60000 is a reply to message #59999] Thu, 08 January 2004 19:43
Thiru
Messages: 1089
Registered: May 2002
Senior Member
Log switches cause checkpoints, not the other way around.
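
You can watch the counters yourself (a rough sketch; V$SYSSTAT resets at instance startup while V$LOG_HISTORY comes from the controlfile, so the two are only roughly comparable):

    -- Checkpoints since instance startup:
    SELECT name, value
      FROM v$sysstat
     WHERE name LIKE 'background checkpoints%';

    -- Log switches recorded in the controlfile history:
    SELECT COUNT(*) AS switches FROM v$log_history;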

-Thiru
Re: Theoretical Archive Question [message #60001 is a reply to message #60000] Thu, 08 January 2004 19:53
sachin kumar gupta
Messages: 157
Registered: March 2003
Senior Member
I still have a doubt. A log switch causes a checkpoint, but does a checkpoint (say, one triggered by FAST_START_MTTR_TARGET etc.) cause a log switch?

TIA,
Sachin
Re: Theoretical Archive Question [message #60002 is a reply to message #60001] Thu, 08 January 2004 20:09
Thiru
Messages: 1089
Registered: May 2002
Senior Member
Nope (i.e. not the other way around!)..