Re: Very high 'log file sync' wait time with no 'log file parallel write' wait time
EscVector wrote:
> joel garry wrote:
> > EscVector wrote:
> > > joel garry wrote:
> > > > DA Morgan wrote:
> > > > > joel garry wrote:
> > > > >
> > > > > > Well, I don't know about the OP, but on one system, if I increased undo
> > > > > > to where a script tells me it "should be," I would have to quadruple
> > > > > > the tablespace from 10G to 40G. Now, that system
> > > > > > has two days of online full backups available, so that means an additional
> > > > > > 90G.
> > > > > >
> > > > > > jg
> > > > > > --
> > > > > > @home.com is bogus.
> > > > > > http://geeksaresexy.blogspot.com/2006/01/freeze-your-hard-drive-to-recover-data.html
> > > > >
> > > > > Only if your backup is brain-dead. Aren't you using RMAN?
> > > >
> > > > I'm using 9i RMAN. What about it? It takes my 10G undo file
> > > > (which just now is using under 700M) and two other 2G files
> > > > (which are using about 650M) and puts them into one 11G piece. I
> > > > get about 70% compression when I compress the pieces to an
> > > > off-SAN device.
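
(Aside: anyone who wants the same allocated-vs-used comparison on
their own system can get it from the standard dictionary views --
the tablespace name below is made up, substitute your own:

  -- space allocated to the undo tablespace
  SELECT SUM(bytes)/1024/1024 AS mb_allocated
    FROM dba_data_files
   WHERE tablespace_name = 'UNDOTBS1';

  -- undo extents the database still has to keep around
  SELECT SUM(bytes)/1024/1024 AS mb_in_use
    FROM dba_undo_extents
   WHERE status IN ('ACTIVE', 'UNEXPIRED');
)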
> > > >
> > > > >
> > > > > And just in case someone interprets that answer as meaning you
> > > > > should do what the v$ tells you to ... are there any 1555s or
> > > > > other issues?
> > > >
> > > > If I don't kill off leftover sessions nightly, yes. Undo
> > > > retention is 10 hours.
> > > >
> > > > jg
> > > > --
> > > > @home.com is bogus.
> > > > "...that's not how class-action litigation is supposed to work."
> > > > http://www.signonsandiego.com/uniontrib/20061205/news_1b5lerach.html
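
(The nightly cleanup amounts to finding long-idle sessions and killing
them; a rough sketch -- the 10-hour threshold matches undo_retention,
but the predicate is made up for illustration, not what actually runs:

  SELECT sid, serial#, username, logon_time
    FROM v$session
   WHERE username IS NOT NULL
     AND last_call_et > 36000;  -- idle > 10 hours, in seconds

  -- then, per offender: ALTER SYSTEM KILL SESSION '<sid>,<serial#>';
)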
> > >
> > > Your point is taken regarding "cheap", but I have found that the
> > > DBA often buys into problems that they shouldn't, such as having
> > > to work around backup-related disk shortages. RMAN is a great
> > > tool, but even it is limited when the database grows to sufficient
> > > size. "Cheap" in this instance means debugging the high commit
> > > rate and related log sync waits vs. increasing undo, getting the
> > > data loaded, and then possibly resizing undo after it completes. I
> > > suggest it would be cheaper, or optimal, to simply increase undo
> > > in this situation. A 9-million-row insert (not knowing the actual
> > > row size) at 200 bytes per row comes out to under 2GB. So, if
> > > there are indexes, that would bump up undo, but still, let's
> > > guesstimate at 5GB more undo for this insert. Is it worth the time
> > > in this situation? Your situation/undo mileage may vary.
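
Your arithmetic checks out, by the way: 9,000,000 rows x 200 bytes =
1.8GB, hence "under 2GB" before any index undo. And the brute-force
route really is a one-liner -- file name and size below are invented:

  ALTER DATABASE DATAFILE '/u01/oradata/orcl/undotbs01.dbf'
    RESIZE 15000M;

  -- afterwards, peak undo blocks used in any 10-minute interval
  -- since startup, to decide whether to shrink back:
  SELECT MAX(undoblks) FROM v$undostat;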
> >
> > My situation/undo is that, for your way of doing it, undo would
> > need to be larger than the data in the database. commit=y makes
> > much more sense; you don't have to worry so much about a slight
> > increase in data after testing, or some weirdness in segment
> > alignment, blowing the actual live load. Of course, that's a
> > judgement call on my part; I don't want to give up my weekends and
> > New Year's Eve to babysit this stuff. Set it up in cron and forget
> > about it until normal work hours.
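
(For concreteness: the scheduled load reduces to a single imp call
along these lines -- file names, password placeholder and buffer size
are all invented for the example:

  imp userid=system/<pw> file=full.dmp full=y commit=y buffer=10485760 log=imp.log

commit=y commits after every buffer-full array insert rather than once
per table, which is what caps the undo demand.)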
> >
> > I agree with your point about buying into problems people
> > shouldn't. As a contractor, I don't even get buy-in that a DBA is
> > necessary, or that computers are cheaper than people, or that
> > salespeople aren't necessarily the best system integration experts.
> > Truly a strange result, considering the people making the decisions
> > are budget, IS management and cost accounting types; one would
> > think they would understand the limits of _their_ tools.
> >
> > jg
> > --
> > @home.com is bogus.
> > http://www.rocketracingleague.com/
>
> I would look to minimize undo when loading. 10GB in my book is still
> small.

So isn't that what commit=Y does? (Actually, in the ETL I'm currently
working on, I have a choice between culling data with multiple exp's
using QUERY, or doing massive deletes after imp. Guess which has less
undo and other impacts?)
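
(The QUERY route is least painful through a parameter file, since on
the command line every quote and comparison operator needs shell
escaping -- the table and predicate here are made up:

  $ cat exp_cull.par
  userid=scott/<pw>
  tables=(emp)
  query="where hiredate >= sysdate - 730"
  file=emp_recent.dmp

  $ exp parfile=exp_cull.par
)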

> The focus here was to suggest a solution to the issue that does
> not involve skilled analysis, but rather a simple solution that
> involves disk space. I'd suggest giving management Goldratt's The Goal.
Well, when you are talking hundreds of tables, some of which are
enigmas wrapped in wheels within wheels, some skill is necessary. You
can rest assured I am minimizing complexity.

Management has its own fads; the only impact I can have is to show
better results than my competition. I make suggestions, but I'm not
going to tell them how to run their business. I'm not in the business
of competing with the USC or UC graduate schools of management.
Especially when my work derives from the fact that this company is
more successful than other similar companies, so it is going around
buying them up and someone has to deal with the db implications of
that. It's just one more demonstration of the fact that arbitrary
tactical decisions in the IS department don't mean much in the overall
business decision process; strategic application success is more
important than database tuning details. The fact that the
db-independent application depends on the scalability of Oracle can be
our little secret.
jg
--
@home.com is bogus.
"Intimate friendly cooperation between the management and the men"
http://melbecon.unimelb.edu.au/het/taylor/sciman.htm
Yeah, right! Gonzo stacks 137 blocks per hour, so we'll dock your pay
if you don't!