Re: Archive Log Size
Date: Tue, 9 Mar 2021 13:14:25 -0800
Message-ID: <d2a4619a-8eda-1f8a-9eda-f62f6c45adee_at_gmail.com>
Is it possible that, when archived redo log files are small, their
modification time is 15 mins (i.e. 900 seconds) after the previous redo
log file was created/archived?
I might be missing the point of your original question, but if you're
asking about archived redo log files being smaller than the online redo
log files, then ARCHIVE_LAG_TARGET > 0 on a frequently-idle instance
would explain that?
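
If it helps to confirm that, a rough check (just a sketch against
v$archived_log, assuming dest_id 1 is the local destination) is to pair
each archived log's size with the gap since the previous log of the same
thread; small logs arriving roughly 900 seconds apart would point at
ARCHIVE_LAG_TARGET:

show parameter archive_lag_target

select thread#, sequence#,
       round(blocks * block_size / 1024 / 1024) size_mb,
       round((first_time
              - lag(first_time) over (partition by thread# order by sequence#)
             ) * 86400) gap_sec
from   v$archived_log
where  dest_id = 1
and    first_time > sysdate - 1
order by thread#, sequence#;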
On 3/9/2021 11:39 AM, Henry Poras wrote:
> archive_lag_target is 900
>
> On Tue, Mar 9, 2021 at 2:33 PM Tim Gorman <tim.evdbt_at_gmail.com> wrote:
>
> What is the value of ARCHIVE_LAG_TARGET?
>
>
> On 3/9/2021 11:29 AM, Henry Poras wrote:
>> Basically that is it. Redo logs are 256M, archive logs are ~40M.
>> There was a similar issue/thread in this list ~6 weeks ago, but
>> the solution to that system doesn't fit with this case. Hope that
>> helps.
>>
>> Henry
>>
>>
>> On Tue, Mar 9, 2021 at 2:23 PM Chris Taylor
>> <christopherdtaylor1994_at_gmail.com> wrote:
>>
>> So what is the issue here exactly? I've read through this
>> thread a few times without clearly understanding the "problem".
>>
>> It seems like you're wondering about why the archivelogs are
>> smaller than the redo logs maybe? But I'm not sure if that is
>> the issue or not.
>>
>> Chris
>>
>>
>> On Tue, Mar 9, 2021 at 1:51 PM Henry Poras
>> <henry.poras_at_gmail.com> wrote:
>>
>> I wonder if there is any way to see if the problem is
>> local or from the standby without temporarily disabling
>> the standby. It would be nice to be able to eliminate one
>> of those two causes.
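>>
>> One non-invasive thing that might help (just a sketch, and it won't
>> prove the cause on its own): watch the Data Guard message stream on
>> the primary around a few switches; errors or unusual activity against
>> the remote destination would at least implicate the standby side.
>>
>> select timestamp, dest_id, facility, severity, message
>> from   v$dataguard_status
>> where  timestamp > sysdate - 1
>> order by timestamp;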
>>
>> Henry
>>
>> On Tue, Mar 9, 2021 at 12:54 PM Henry Poras
>> <henry.poras_at_gmail.com> wrote:
>>
>> I liked that idea. Unfortunately,
>>
>> *physical standby*:sys_at_ohcopp2> select distinct bytes from v$log;
>>
>> BYTES
>> ----------
>> 268435456
>>
>> 1 row selected.
>>
>> *physical standby*:sys_at_ohcopp2> select distinct bytes from v$standby_log;
>>
>> BYTES
>> ----------
>> 268435456
>>
>> 1 row selected.
>>
>> *primary*:sys_at_ohcops1> select distinct bytes from v$log;
>>
>> BYTES
>> ----------
>> 268435456
>>
>> 1 row selected.
>>
>>
>> On Tue, Mar 9, 2021 at 12:02 PM Jonathan Lewis
>> <jlewisoracle_at_gmail.com> wrote:
>>
>>
>> That explains the LGWR/ARCH oddity.
>>
>> Next, let me display my ignorance: is it possible
>> to configure your standby with standby log files
>> that are SMALLER than the primary log files - I
>> think it used to be necessary to have them
>> matching in size, but maybe that's changed since
>> the good old days (of 10g). If a standby can
>> survive with standby log files smaller than
>> primary log files, then maybe your 40-45MB is
>> being dictated by the size of your standby log files.
>>
>> Regards
>> Jonathan Lewis
>>
>>
>>
>> On Tue, 9 Mar 2021 at 16:35, Henry Poras
>> <henry.poras_at_gmail.com> wrote:
>>
>> Current parameter settings (same on all
>> nodes) are:
>>
>> log_archive_dest_1
>> LOCATION=USE_DB_RECOVERY_FILE_DEST
>> VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
>> DB_UNIQUE_NAME=xxxx
>>
>> log_archive_dest_2 SERVICE=xxxx LGWR ASYNC
>> VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
>> DB_UNIQUE_NAME=xxxx
>>
>> log_archive_dest_state_1 enable
>>
>> log_archive_dest_state_2 ENABLE
>>
>> log_archive_max_processes 4
>>
>> log_archive_duplex_dest
>>
>> log_archive_format ARC%S_%R.%T
>>
>>
>>
>> So log_archive_dest_2 has the deprecated LGWR
>> attribute. Yet it's dest_1, the FRA location, that is
>> being archived by ARCH. Huh?
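>>
>> One way to see which process Oracle has actually assigned to each
>> destination, rather than inferring it from the parameter text (a
>> sketch against the standard v$archive_dest columns):
>>
>> select dest_id, status, archiver, transmit_mode, destination
>> from   v$archive_dest
>> where  dest_id in (1, 2);
>>
>> If dest_2 shows ARCHIVER=LGWR there, then the LGWR rows in
>> v$archived_log are presumably just the records of the copies shipped
>> by redo transport, which would make the ARCH/LGWR pairing per
>> sequence less surprising.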
>>
>> Henry
>>
>>
>> On Tue, Mar 9, 2021 at 8:45 AM Henry Poras
>> <henry.poras_at_gmail.com> wrote:
>>
>> Jonathan,
>> I have to admit I was also not expecting
>> both LGWR and ARCH and only came across
>> that because you suggested looking for
>> creator. I didn't put this standby
>> together, so I'll do a bit of digging to
>> see how it is constructed. One question
>> though (and the reason I didn't head down
>> this path earlier). If
>> archive_lag_time=900 on all instances,
>> what is the hypothesis where the standby
>> impacts the archive log size?
>>
>> Thanks again.
>> Henry
>>
>>
>> On Tue, Mar 9, 2021 at 4:43 AM Jonathan
>> Lewis <jlewisoracle_at_gmail.com> wrote:
>>
>> Interesting that of the two archived
>> copies one is created by ARCH and the
>> other by LGWR.
>> What do your log_archive_dest_1 and
>> log_archive_dest_2 look like?
>> For completeness I guess it's also
>> worth checking
>>
>> log_archive_dest_state_1 and
>> log_archive_dest_state_2
>> log_archive_max_processes
>>
>> log_archive_duplex_dest
>> log_archive_format
>> log_archive_local_first
>>
>> What sort of standby setup do you have?
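>>
>> For what it's worth, all of those can be pulled in one pass across
>> the RAC instances (a sketch using gv$parameter):
>>
>> select inst_id, name, value
>> from   gv$parameter
>> where  name like 'log_archive%'
>> or     name = 'archive_lag_target'
>> order by name, inst_id;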
>>
>> Regards
>> Jonathan Lewis
>>
>>
>>
>>
>>
>> On Mon, 8 Mar 2021 at 19:39, Henry
>> Poras <henry.poras_at_gmail.com> wrote:
>>
>> Hi Jonathan,
>> Thanks for responding so quickly.
>>
>> As expected, we have 2 public
>> threads and no private threads
>> (RAC). The sizes of log_buffer,
>> Redo Size, Fixed Size, public
>> thread size as documented above
>> are identical for all three nodes.
>>
>> Looking at v$instance, all nodes
>> have been up for ~10 days (not
>> identical startup_time, but
>> within a day) with status of OPEN.
>>
>> I can't find any clues in v$archived_log:
>>
>> primary:sys_at_ohcops1> l
>> 1 select * from (
>> 2 select dest_id, thread#, sequence#, creator, first_time from v$archived_log
>> 3 order by first_time desc, thread# asc, dest_id asc
>> 4 )
>> 5* where rownum <=20
>> primary:sys_at_ohcops1> /
>>
>>    DEST_ID    THREAD#  SEQUENCE# CREATOR FIRST_TIME
>> ---------- ---------- ---------- ------- -------------------
>>          1          2    1056302 ARCH    08-03-2021 20:34:11
>>          2          2    1056302 LGWR    08-03-2021 20:34:11
>>          1          1    1126141 ARCH    08-03-2021 20:34:06
>>          2          1    1126141 LGWR    08-03-2021 20:34:06
>>          1          3    1164895 ARCH    08-03-2021 20:33:47
>>          2          3    1164895 LGWR    08-03-2021 20:33:47
>>          1          3    1164894 ARCH    08-03-2021 20:32:26
>>          2          3    1164894 LGWR    08-03-2021 20:32:26
>>          1          2    1056301 ARCH    08-03-2021 20:32:20
>>          2          2    1056301 LGWR    08-03-2021 20:32:20
>>          1          1    1126140 ARCH    08-03-2021 20:32:15
>>          2          1    1126140 LGWR    08-03-2021 20:32:15
>>          1          3    1164893 ARCH    08-03-2021 20:30:50
>>          2          3    1164893 LGWR    08-03-2021 20:30:50
>>          1          2    1056300 ARCH    08-03-2021 20:30:29
>>          2          2    1056300 LGWR    08-03-2021 20:30:29
>>          1          1    1126139 ARCH    08-03-2021 20:30:09
>>          2          1    1126139 LGWR    08-03-2021 20:30:09
>>          1          3    1164892 ARCH    08-03-2021 20:29:32
>>          2          3    1164892 LGWR    08-03-2021 20:29:32
>>
>> 20 rows selected.
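>>
>> (If it's useful, the same query can be extended with the size of each
>> copy so that size, creator and destination line up in one listing; a
>> sketch reusing the standard v$archived_log columns:)
>>
>> select dest_id, thread#, sequence#, creator,
>>        round(blocks * block_size / 1024 / 1024) size_mb,
>>        first_time
>> from   v$archived_log
>> where  first_time > sysdate - 1/24
>> order by first_time desc, thread#, dest_id;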
>>
>> I'll keep looking.
>>
>> Henry
>>
>>
>>
>>
>> On Sat, Mar 6, 2021 at 3:41 AM
>> Jonathan Lewis
>> <jlewisoracle_at_gmail.com> wrote:
>>
>>
>> RAC doesn't use private redo
>> - so with 32 CPUs we'd expect
>> two public threads with
>> (granule - fixed -
>> overheads)/2 as the log
>> buffer size.
>> Is this happening on all
>> three instances (redo threads)?
>> If you report the
>> "first_time" from
>> v$archived_log can you see a
>> pattern to the timing of the
>> switch?
>> Is there any clue in the
>> "creator" from v$archived_log?
>>
>>
>>
>> One possible explanation for
>> this (assuming you've
>> discounted all the usual
>> suspects): Are all three
>> instances active when this
>> happens?
>> I haven't checked recently
>> but when an instance is down
>> the active instances may be
>> "kicking" it (KK lock) on a
>> regular basis to do a log
>> file switch so that all
>> instances have archived log
>> files with SCNs that are
>> reasonably close to each
>> other. Maybe there's a side
>> effect (or timing issue, or
>> bug) related to this that
>> means the kicking is
>> happening too frequently and
>> you're not getting through a
>> lot of redo before it happens.
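>>
>> For the "are all three instances active" part, a quick check (a
>> sketch via gv$instance) would be:
>>
>> select inst_id, instance_name, status,
>>        to_char(startup_time, 'DD-MM-YYYY HH24:MI:SS') started
>> from   gv$instance
>> order by inst_id;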
>>
>>
>> Regards
>> Jonathan Lewis
>>
>>
>>
>> On Fri, 5 Mar 2021 at 20:39,
>> Henry Poras
>> <henry.poras_at_gmail.com> wrote:
>>
>> I was just rereading
>> this thread and ran
>> into a slightly
>> different case I am
>> just starting to dig
>> into. My environment
>> is 11.2.0.4 RAC (3
>> nodes).
>>
>> The rough breakdown:
>> Granule size: 256M
>> > select * from v$sga;
>>
>> NAME                 VALUE
>> -------------------- ----------
>> Fixed Size           2269072
>> Variable Size        3.9460E+10
>> Database Buffers     8.8584E+10
>> Redo Buffers         227807232
>>
>> cpu_count: 32
>> log_buffer: 220798976
>> public threads: 2 public (no private), 105M each
>>
>> primary:sys_at_ohcops1> l
>> 1 select
>> 2 indx,
>> 3 total_bufs_kcrfa,
>> 4 strand_size_kcrfa,
>> 5 index_kcrf_pvt_strand,
>> 6 space_kcrf_pvt_strand
>> 7 from
>> 8* x$kcrfstrand
>> primary:sys_at_ohcops1> /
>>
>> INDX TOTAL_BUFS_KCRFA STRAND_SIZE_KCRFA INDEX_KCRF_PVT_STRAND SPACE_KCRF_PVT_STRAND
>> ---- ---------------- ----------------- --------------------- ---------------------
>>    0           215624         110399488                     0                     0
>>    1           215624         110399488                     0                     0
>>
>> redo logs: 256M
>> archive logs: 40-43M
>>
>> So if each public thread
>> takes 105M, I have ~46M
>> left over. Filling one
>> thread and switching, as
>> discussed in Jonathan's
>> article, should give me
>> archive logs of ~105 M.
>> Instead, I am getting an
>> archive log of ~ the rump
>> size in the redo
>> (256-105-105=46M).
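>>
>> Using the actual byte values, that leftover works out to roughly
>> 45MB, which is at least in the same ballpark as the 40-43M archive
>> logs (a quick sanity check):
>>
>> select 268435456                                            redo_log_bytes,
>>        2 * 110399488                                        public_strand_bytes,
>>        round((268435456 - 2 * 110399488) / 1024 / 1024, 1)  leftover_mb
>> from   dual;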
>>
>> I checked the obvious
>> (not a manual log switch,
>> not archive_lag_target).
>> I'll chime in if I find
>> something interesting.
>> Also wondering if there are any
>> obvious things I'm missing
>> off the top.
>>
>> Henry
>>
>
--
http://www.freelists.org/webpage/oracle-l

Received on Tue Mar 09 2021 - 22:14:25 CET