Re: Effective compression ratio with minimal performance overhead

From: Mohamed Houri <mohamed.houri_at_gmail.com>
Date: Mon, 23 May 2022 09:38:29 +0200
Message-ID: <CAJu8R6juCJGc9JGmvgVHe5PybPiyJ0Xq+q91yttDQ9r-p9qUXA_at_mail.gmail.com>



Pap,

*With HCC compression (i.e. COMPRESS FOR QUERY LOW), a table of size ~500GB is reduced to ~31GB. But the performance of a batch INSERT query (using a batch size of 100) remains the same as with an uncompressed table. And the same UPDATE was failing with a deadlock error; the cause is yet to be found.*

If I understood you correctly, you have a table compressed in HCC mode on which you are making updates?

If that's what you're doing, then you should know that it is bad for the performance of selects against this table, simply because an update of an HCC-compressed table converts the updated rows to compression type 64 (OLTP/block compression), creating row chaining and thereby preventing any smart scan on this table.
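
As a quick sanity check for the chaining itself, and assuming you have created the CHAINED_ROWS table with the standard $ORACLE_HOME/rdbms/admin/utlchain.sql script (T2 is a placeholder table name):

-- populate CHAINED_ROWS with the chained/migrated rows of T2
ANALYZE TABLE t2 LIST CHAINED ROWS INTO chained_rows;

SELECT COUNT(*)
FROM   chained_rows
WHERE  table_name = 'T2';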

You can check the compression type of the rows of your table via the following select (adapt the table name and the number of rows):

SELECT
    DECODE(dbms_compression.get_compression_type(user, 'T2', ROWID)
          ,1,  'NOCOMPRESS'
          ,2,  'COMP_FOR_OLTP'
          ,4,  'COMP_FOR_QUERY_HIGH'
          ,8,  'COMP_FOR_QUERY_LOW'
          ,16, 'COMP_FOR_ARCHIVE_HIGH'
          ,32, 'COMP_FOR_ARCHIVE_LOW'
          ,64, 'COMP_BLOCK'
          ,'OTHER') comp_type
   ,COUNT(*)
FROM
    t2
WHERE
    ROWNUM < 100
GROUP BY
    DECODE(dbms_compression.get_compression_type(user, 'T2', ROWID)
          ,1,  'NOCOMPRESS'
          ,2,  'COMP_FOR_OLTP'
          ,4,  'COMP_FOR_QUERY_HIGH'
          ,8,  'COMP_FOR_QUERY_LOW'
          ,16, 'COMP_FOR_ARCHIVE_HIGH'
          ,32, 'COMP_FOR_ARCHIVE_LOW'
          ,64, 'COMP_BLOCK'
          ,'OTHER');
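
If the check shows that many rows have already fallen back to COMP_BLOCK, one way to get them back into HCC format is to rewrite the table. A minimal sketch, assuming the COMPRESS FOR QUERY LOW syntax of your version (note that a MOVE leaves the table's indexes UNUSABLE, so they have to be rebuilt; T2_PK is a placeholder name):

ALTER TABLE t2 MOVE COMPRESS FOR QUERY LOW;

-- the MOVE invalidates the indexes on T2
ALTER INDEX t2_pk REBUILD;
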
Regarding index compression, have you run the Jonathan Lewis script (for B-tree indexes) to find out whether you have oversized indexes that you could rebuild to save disk space?

     https://jonathanlewis.wordpress.com/index-sizing/
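
His script estimates what a B-tree index size ought to be. If you only want a quick first look at how much dead space a suspect index is carrying, INDEX_STATS gives a rough idea. A minimal sketch (T2_IDX is a placeholder, and beware that ANALYZE ... VALIDATE STRUCTURE locks the underlying table while it runs):

ANALYZE INDEX t2_idx VALIDATE STRUCTURE;

SELECT lf_rows
     , del_lf_rows
     , btree_space
     , used_space
     , pct_used
FROM   index_stats;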

Are you using LOBs as well? A couple of days ago I moved the LOBs out of a couple of tables; thanks to that, I was able to reclaim 2.2TB.
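
For illustration only, a minimal sketch of such a move (table and column names are placeholders, and SECUREFILE compression requires the Advanced Compression option):

ALTER TABLE t MOVE LOB (doc_col) STORE AS SECUREFILE (COMPRESS HIGH);

-- this MOVE also invalidates the indexes on T, so rebuild them afterwards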

Best regards
Mohamed Houri

On Mon, 23 May 2022 at 05:58, Lok P <loknath.73_at_gmail.com> wrote:

> Thank you. Actually, I have read in many blogs (e.g. in the doc below)
> that there is "minimal performance overhead" with Advanced Compression,
> so this 100% degradation surprised me, and I wanted to check whether this
> is expected behaviour or whether I am measuring it wrongly. And at the
> same time, why is the HCC compression (COMPRESS FOR QUERY LOW) showing
> zero performance overhead?
>
>
> https://www.oracle.com/technetwork/database/options/compression/advanced-compression-wp-12c-1896128.pdf
>
> On Mon, May 23, 2022 at 1:53 AM Mladen Gogala <gogala.mladen_at_gmail.com>
> wrote:
>
>> On 5/22/22 15:31, Lok P wrote:
>>
>> With Advanced Compression, a table of size ~500GB is reduced to ~100GB.
>> But the execution time of a batch INSERT query (using a batch size of 100)
>> increased from ~6 minutes to ~13 minutes for 10 million rows. And the run
>> time of a multi-threaded UPDATE using a pipelined function increased from
>> ~11 minutes to ~29 minutes; it was updating 16 million rows.
>>
>> Well, there is no such thing as a free lunch. And there is no such thing
>> as cheap gasoline.
>>
>> --
>> Mladen Gogala
>> Database Consultant
>> Tel: (347) 321-1217
>> https://dbwhisperer.wordpress.com
>>
>> -- http://www.freelists.org/webpage/oracle-l
>
>

-- 

Houri Mohamed

Oracle DBA-Developer-Performance & Tuning

Visit my Blog: http://www.hourim.wordpress.com/

LinkedIn Profile: http://fr.linkedin.com/pub/mohamed-houri/11/329/857/

Twitter: https://twitter.com/MohamedHouri

--
http://www.freelists.org/webpage/oracle-l