Re: Effective compression ratio with minimal performance overhead
Date: Mon, 23 May 2022 15:03:55 +0100
Message-ID: <CACj1VR7U0LWE1H+MXOTjog4dzN7G2k_-co9X74AEy-YDva=4-w_at_mail.gmail.com>
On Mon, 23 May 2022 at 14:34, Lok P <loknath.73_at_gmail.com> wrote:
> Thank you all. The DML (mainly INSERT) that I tested on the HCC-compressed
> (COMPRESS FOR QUERY LOW) table was performing at the same speed as the
> non-compressed table, because it was not compressing the newly inserted
> data; the HCC setting only compressed the existing data in the table.
> Advanced compression is the one that seems to really compress the
> future/incoming DML data, but it gives a significant (100%) performance
> overhead. So I was thinking whether it is okay to just apply the HCC
> compression to the existing data and let the incoming data remain
> uncompressed; but, as Mohamed highlighted, if the read queries won't be
> doing smart scan after applying HCC to the existing table, that is going
> to be an issue.
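>
> (As a sketch of that idea, with made-up names for a range-partitioned
> table SALES and an old partition P2021:
>
>   -- compress the historical partition only, via a move
>   ALTER TABLE sales MOVE PARTITION p2021
>     COMPRESS FOR QUERY LOW UPDATE INDEXES;
>
>   -- keep the table-level default uncompressed, so new partitions and
>   -- conventionally inserted rows stay uncompressed
>   ALTER TABLE sales MODIFY DEFAULT ATTRIBUTES NOCOMPRESS;
> )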
>
> _at_Mohamed, yes, I was testing the performance of UPDATE and INSERT queries
> on top of the compressed table. There are partitioned tables, but the
> SELECT queries don't partition-prune, so many times they get scanned in
> full and/or accessed through global indexes.
>
> My understanding was that, considering this is an OLTP database and we do
> conventional DML most of the time, Advanced compression is best suited as
> per the doc, even though it gives us the lowest compression ratio. However,
> when I tested the DML performance overhead on the Advanced-compression
> table, the execution time doubled, which is a little concerning. I have not
> tested the performance of a full scan on the compressed table, but index
> access seems to perform at the same speed as on the non-compressed table.
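>
> (A sketch of that kind of conventional-DML comparison, with made-up table
> names; one copy of the table created with ROW STORE COMPRESS ADVANCED and
> one left uncompressed:
>
>   SET TIMING ON
>   INSERT INTO orders_nocomp  SELECT * FROM orders_stage;
>   INSERT INTO orders_advcomp SELECT * FROM orders_stage;
>   COMMIT;
> )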
>
> Regarding the HCC compression, the doc says 'compress for query low' is the
> one that provides the smallest compression ratio, i.e. the smallest storage
> saving, but is the best compromise between space reduction and DML/SELECT
> query performance in an OLTP environment. But as you said, if it won't
> perform smart scan after 'compress for query low', that is concerning and I
> will have to test it out.
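>
> (One way to test it, as a sketch: run the full scan in a session and then
> check that session's smart-scan statistics, e.g.
>
>   SELECT sn.name, ms.value
>   FROM   v$mystat ms
>   JOIN   v$statname sn ON sn.statistic# = ms.statistic#
>   WHERE  sn.name IN
>          ('cell physical IO bytes eligible for predicate offload',
>           'cell physical IO interconnect bytes returned by smart scan');
> )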
>
> I ran the SELECT query which you posted and it gives "COMP_FOR_QUERY_LOW"
> as output for the compressed table, which is expected.
>
> We do not have any tables holding CLOB columns, though. I also tried
> running the index fragmentation script from Jonathan's blog; it looks like
> it will give us back ~200-300 GB of storage space, but I am looking for
> more space reduction.
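>
> (The reclaim itself presumably being, for each index the script flags,
> something along the lines of the following, with placeholder names:
>
>   ALTER INDEX index_owner.index_name REBUILD ONLINE;
>   -- or ALTER INDEX ... COALESCE / SHRINK SPACE, depending on the index
> )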
>
> Regards
>
> Lok
>
> On Mon, 23 May 2022, 6:12 pm Tim Gorman, <tim.evdbt_at_gmail.com> wrote:
>
>> Mladen,
>>
>> That is unlikely, as HCC compression (and TDE, for that matter) is
>> performed immediately prior to the direct-path writes, so that completed
>> (compressed and/or encrypted) blocks are written to the underlying
>> storage layer.
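>>
>> (As a sketch, with made-up names, a direct-path load of that kind:
>>
>>   INSERT /*+ APPEND */ INTO sales_hcc
>>   SELECT * FROM sales_stage;
>>   COMMIT;
>> )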
>>
>> It would be interesting to see if Exadata somehow changes that?
>>
>> Thanks!
>>
>> -Tim
>>
>>
>>
>> On 5/23/2022 5:05 AM, Mladen Gogala wrote:
>>
>> On 5/23/22 07:43, Jonathan Lewis wrote:
>>
>>
>> _at_Mladen Gogala <gogala.mladen_at_gmail.com>
>>
>> When did that change? I haven't been watching closely, but the compression
>> used to be done at the compute node, and HCC decompression at the storage
>> cells, through software, except for the cases where the storage cell sent
>> the CU to the compute node either to balance the CPU usage or because the
>> extract from a smart scan was too large for the message buffer.
>>
>> Regards
>> Jonathan Lewis
>>
>> Hi Jonathan,
>>
>> I think that I've read in some white paper that HCC is done on storage
>> nodes, transparently to the compute nodes, as of X7. I'll look for the
>> paper and let you know.
>>
>> Regards
>>
>> --
>> Mladen Gogala
>> Database Consultant
>> Tel: (347) 321-1217
>> https://dbwhisperer.wordpress.com
>>
>>
>>
--
http://www.freelists.org/webpage/oracle-l

Received on Mon May 23 2022 - 16:03:55 CEST