Re: Asynchronous Commit in Oracle Database 10g R2
"VC" <boston103_at_hotmail.com> wrote:
> <xhoster_at_gmail.com> wrote in message
> news:20050902202109.103$g4_at_newsreader.com...
> > "VC" <boston103_at_hotmail.com> wrote:
> >
> > <snip asynchronous commits>
> >
> >> Could you please comment on how this optimization does not create a
> >> possibility for data loss ?
> >
> > Who says it doesn't? Of course there is the possibility of data loss.
> > But
> > then again, if your A/D converter suffers a buffer overrun because the
> > app spent too much time waiting for a log sync, that is also a
> > possibility for data loss. Life's a bitch, then you die.
>
There are two aspects I was talking about: one is the option of explicitly choosing "commit write nowait", and the other is the PL/SQL optimization that happens automatically. The above pertained to the intentional-choice aspect.
You have a piece of real-time equipment whose output you want to put into the database. If you don't read the data off of this equipment fast enough, its buffer fills up and it starts dropping data. So if you read the data and put it into the database with synchronous commits, you can lose data because you are too slow; if you do asynchronous commits, you can (very rarely) lose data because of a crash. This is just an example, of course. There are other situations which present similar trade-offs.
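Just to make that first aspect concrete, here is roughly what the explicit form looks like in 10gR2 (the sensor_readings table and its columns are invented for the example):

    insert into sensor_readings (ts, val) values (systimestamp, 42);

    -- Default behaviour ("commit write wait"): the call does not return
    -- until the redo for the transaction is physically on disk.
    -- The asynchronous form below returns immediately instead; the redo
    -- is written a moment later, so a crash in that window can lose a
    -- transaction the application already believes is committed.
    commit write nowait;

Which one you choose is exactly the trade-off above: nowait keeps the reader fast enough to keep up with the equipment, at the cost of a small crash window.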
> >> (An application thinks that a transaction has
> >> committed while in fact it did not, and a crash occurs ...)
Below, I've switched to the second aspect, the PL/SQL optimization that happens whether you ask for it or not when you issue commits within some PL/SQL loops. Sorry I wasn't clearer about that.
> >
> > Since the thing which is doing the commit in the tight loop is a PLSQL
> > block running on the same database server, then the application's
> > "thought process" doesn't survive the crash any better than its commits
> > did. Therefore, the application can't "think" that it committed the
> > transaction--either it can reconstruct exactly what it needs to by
> > looking at what did get truly committed, or it simply has no idea what
> > happened prior to the lights going out.
>
I think the issue only arises when you have commits inside the PL/SQL loop (because those are the only commits automatically turned into nowait ones). Whether there are additionally *other* commits outside the loop is immaterial, as far as I can see.
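The pattern I mean is roughly this (table invented again); the code just says "commit", but inside a loop like this the PL/SQL engine, as I understand it, doesn't wait for each log sync:

    begin
      for i in 1 .. 10000 loop
        insert into sensor_readings (ts, val) values (systimestamp, i);
        -- a plain commit, nothing asked for explicitly; this is the one
        -- the PL/SQL engine effectively treats as a nowait commit, so a
        -- crash can roll back rows the loop already "committed"
        commit;
      end loop;
    end;
    /

As far as I know the wait is only deferred, not skipped: when the block returns to the client the session does wait for the redo, which is why I agree the application can't "think" anything across a crash that the database doesn't also know.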
> Is it the optimization we are talking about ?
Yes, I believe so.
>
>
Well, I wouldn't argue that. Surely someone has to commit somewhere, and I see no reason to think it is somehow OK for Java or Perl or C++ to issue the commit but not for PL/SQL to do so.
> I do not understand your using the word
> 'asynchronously' in the last sentence, though.
By "asynchronously" I simply meant the type of commit that doesn't wait. Maybe I misused the term.
Xho
--
-------------------- http://NewsReader.Com/ --------------------
Usenet Newsgroup Service                        $9.95/Month 30GB

Received on Fri Sep 02 2005 - 21:16:30 CDT