RE: OLTP compression slow update
Date: Thu, 27 Dec 2012 22:03:18 -0500
Message-ID: <000c01cde4a7$e4214770$ac63d650$_at_rsiz.com>
Only the presumption that rows are clustered in blocks primarily by time of creation in OLTP databases makes this helpful: the recent blocks, which are not yet compressed, serve the primarily single-block operations, while most of the data (all but the recent blocks) is compressed, which helps on reports covering a longer time scale that might include some multi-block reads.
-----Original Message-----
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Kevin Closson
Sent: Thursday, December 27, 2012 7:09 PM
To: mwf_at_rsiz.com; oratune_at_yahoo.com; martin.a.berger_at_gmail.com; 'Michael Dinh'
Cc: 'Raj Jamadagni'; 'oracle-l'; kevin.jernigan_at_oracle.com
Subject: Re: OLTP compression slow update
>>For the broad base of reasonable cases, the elapsed time cost of reading
>>fewer physical blocks is likely to save overall elapsed time unless your
>>system is pretty badly CPU bound already.
...since the OP is OLTP, I'm confused why packed-payload would be seen as a value prop. A single block read (db file sequential read) is a single block read regardless of how much data is packed in it. Am I missing something?
For the broad base of reasonable cases, the elapsed time cost of reading fewer physical blocks is likely to save overall elapsed time unless your system is pretty badly CPU bound already.
Using a rolling aging window as described below is an interesting way to have your cake and eat it too.
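The block-count difference itself is easy to check from the dictionary; a minimal sketch, assuming two copies of the same data in hypothetical tables ORDERS_PLAIN and ORDERS_OLTP:

    -- Fewer blocks in the compressed copy means fewer multi-block
    -- reads for a scan covering the same rows.
    SELECT segment_name, blocks, ROUND(bytes/1024/1024) AS mb
    FROM   user_segments
    WHERE  segment_name IN ('ORDERS_PLAIN', 'ORDERS_OLTP');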
-----Original Message-----
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org]
On Behalf Of David Fitzjarrell
Sent: Thursday, December 27, 2012 4:41 PM
To: martin.a.berger_at_gmail.com; Michael Dinh
Cc: Raj Jamadagni; oracle-l; kevin.jernigan_at_oracle.com
Subject: Re: OLTP compression slow update
OLTP compression operates on 'full' blocks: a block must reach PCTUSED before compression is enacted. Updating blocks compressed for OLTP requires uncompressing the data, performing the updates, then compressing the blocks again once, as I previously mentioned, PCTUSED has been reached. All 'full' blocks are compressed (if they aren't already) in a sort of 'batch' operation, so OLTP compression results in a mixture of compressed and uncompressed blocks after DML, and it operates on any DML (inserts, updates, deletes), not just direct path inserts. As such, updates to OLTP-compressed data can be resource intensive; I have noted that updating OLTP-compressed data can take almost twice as long as updating uncompressed data (a fact already revealed in this thread).
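A minimal sketch of setting this up and checking the mixed state described above (hypothetical table name; 11g COMPRESS FOR OLTP syntax):

    -- Create an OLTP-compressed table (11g syntax; renamed
    -- ROW STORE COMPRESS ADVANCED in 12c).
    CREATE TABLE orders_oltp (
      order_id NUMBER,
      status   VARCHAR2(10),
      note     VARCHAR2(200)
    ) COMPRESS FOR OLTP;

    -- The dictionary shows only the table-level setting; individual
    -- blocks still flip between compressed and uncompressed as they fill.
    SELECT table_name, compression, compress_for
    FROM   user_tables
    WHERE  table_name = 'ORDERS_OLTP';

    -- Per-row check (11.2+, if memory serves): 1 = no compression,
    -- 2 = OLTP compression.
    SELECT DBMS_COMPRESSION.GET_COMPRESSION_TYPE(USER, 'ORDERS_OLTP', rowid) AS ctype
    FROM   orders_oltp
    WHERE  ROWNUM <= 10;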
David Fitzjarrell
From: Martin Berger <martin.a.berger_at_gmail.com>
To: Michael Dinh <mdinh235_at_gmail.com>
Cc: Raj Jamadagni <rjamya_at_gmail.com>; oracle-l <oracle-l_at_freelists.org>; kevin.jernigan_at_oracle.com
Sent: Thursday, December 27, 2012 1:49 PM
Subject: Re: OLTP compression slow update
Maybe we can sort the situation out a little bit.
For me the main question is still "Why is execution X (compressed) slower by a factor of 1.79 than execution Y (uncompressed)?"
Is this still correct?
I know of two methods Oracle provides to answer this question:
"system/session statistics" and "wait events". Does anyone know of other methods to get information about what's going on?
Maybe Kevin or some others can give us some insight into which statistics or events we can expect and measure to identify whether the process is 'in compression A' or 'doing compression task B'?
If no such information is available, I can only suggest collecting/sampling stack traces.
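For the statistics route, a minimal sketch: snapshot before and after the UPDATE and diff the values (the 'HSC OLTP%' statistic names are from 11.2 and may differ by version):

    -- Run before and after the test UPDATE and compare the deltas.
    SELECT sn.name, st.value
    FROM   v$mystat st
    JOIN   v$statname sn ON sn.statistic# = st.statistic#
    WHERE  sn.name LIKE 'HSC OLTP%'
    OR     sn.name IN ('CPU used by this session', 'db block changes');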
Jared's and Raj's suggestions are totally valid, with the primary target of 'avoiding the slow path', but I'd like to 'understand the slow path' first and then decide how/why to act.
just my .02 €
 Martin
On Thu, Dec 27, 2012 at 7:50 PM, Michael Dinh <mdinh235_at_gmail.com> wrote:
> Thanks all for the response.
> Still testing and will share results.
>
> -Michael.
>
> On Thu, Dec 27, 2012 at 4:52 AM, rjamya <rjamya_at_gmail.com> wrote:
>
> > I saw this first in early 10g versions of plain vanilla compression
> > (not fancy OLTP etc.). We were analyzing http/smtp logs collected
> > company-wide. Initially we had daily partitions. Since it started
> > getting large quickly (we had open web at that time), I introduced
> > partition compression, and it worked like magic. When updates (due to
> > occasional re-loads, corrections, ranking etc.) to compressed
> > partitions took too long, we redesigned our strategy: we kept 7 days'
> > worth of partitions uncompressed, and everything older was compressed.
> > That worked out well in the end. I guess the update issue on
> > compressed data still needs work then :)
> > Raj
>
>
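For reference, the rolling window Raj describes above needs only plain DDL; a sketch with hypothetical table/partition names (basic compression, as in his 10g case):

    -- Compress a partition once it ages out of the 7-day hot window.
    -- MOVE rewrites the segment, so local index partitions go
    -- UNUSABLE and need a rebuild.
    ALTER TABLE weblog MOVE PARTITION p20121220 COMPRESS;
    ALTER INDEX weblog_ix REBUILD PARTITION p20121220;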
-- http://www.freelists.org/webpage/oracle-l
Received on Fri Dec 28 2012 - 04:03:18 CET