Oracle-L: RE: Table compression
Here is a test:
--
drop table test_delete_compress;
--
create table test_delete_compress noparallel compress as
select * from some_table where rownum < 100001;
--
drop table test_delete_nocompress;
--
create table test_delete_nocompress noparallel nocompress as
select * from test_delete_compress;
--
SQL> select segment_name, bytes
2 from dba_segments
3 where segment_name in ('TEST_DELETE_COMPRESS','TEST_DELETE_NOCOMPRESS');
SEGMENT_NAME BYTES
---------------------------------------- ----------
TEST_DELETE_COMPRESS                        7340032
TEST_DELETE_NOCOMPRESS                     17825792
SQL> select count(*) from test_delete_compress;

  COUNT(*)
----------
    100000

SQL> select count(*) from test_delete_nocompress;

  COUNT(*)
----------
    100000
SQL> alter session set sql_trace = true;
Session altered.
SQL> delete from test_delete_compress;
100000 rows deleted.
SQL> delete from test_delete_nocompress;
100000 rows deleted.
Here is the trace file:
delete from test_delete_compress
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.01          0          1          0           0
Execute      1      3.92       4.03          0        403     102388      100000
Fetch        0      0.00       0.00          0          0          0           0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 3.92 4.05 0 404 102388 100000
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 26
Rows Row Source Operation
------- ---------------------------------------------------
      0  DELETE  (cr=403 r=0 w=0 time=4037999 us)
 100000   TABLE ACCESS FULL TEST_DELETE_COMPRESS (cr=403 r=0 w=0 time=154747 us)
delete from test_delete_nocompress
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          1          0           0
Execute      1      3.57       4.46          0       1090     105496      100000
Fetch        0      0.00       0.00          0          0          0           0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 3.57 4.47 0 1091 105496 100000
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 26
Rows Row Source Operation
------- ---------------------------------------------------
      0  DELETE  (cr=1090 r=0 w=0 time=4468838 us)
 100000   TABLE ACCESS FULL TEST_DELETE_NOCOMPRESS (cr=1090 r=0 w=0 time=154613 us)
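For what it's worth, the two tkprof totals above can be compared directly. This quick sketch (an illustration added in editing, not part of the original post; the inputs are copied from the trace totals) shows the compressed table was full-scanned with roughly 2.7x fewer consistent gets while CPU time stayed about the same:

```python
# Totals copied from the two tkprof summaries above:
# query = consistent gets, cpu = CPU seconds for the whole DELETE.
compress = {"query": 404, "current": 102388, "cpu": 3.92}
nocompress = {"query": 1091, "current": 105496, "cpu": 3.57}

# Fewer blocks hold the same 100,000 rows, so the full scan reads fewer buffers.
gets_ratio = nocompress["query"] / compress["query"]

# CPU cost of the delete is nearly identical in this run.
cpu_ratio = compress["cpu"] / nocompress["cpu"]

print(f"consistent-get ratio (nocompress : compress) = {gets_ratio:.1f}")
print(f"cpu-time ratio (compress : nocompress) = {cpu_ratio:.2f}")
```

In other words, in this particular test the compressed delete read far fewer blocks and burned only slightly more CPU, which is what Waleed's trace is arguing.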
Oracle 9.2.0.4, Solaris 8, Sun Fire 6800
Regards,
Waleed
-----Original Message-----
From: Tim Gorman [mailto:tim_at_sagelogix.com]
Sent: Monday, March 22, 2004 11:34 PM
To: oracle-l_at_freelists.org
Subject: Re: Table compression
Waleed,
What I think generally has little to do with what I post, unless I specifically say as much. Excerpts from a test case...
SQL> delete from test_compress;
10000 rows deleted.
Elapsed: 00:00:12.20
Statistics
         16  recursive calls
      20699  db block gets
         96  consistent gets
          0  physical reads
    4921204  redo size
        788  bytes sent via SQL*Net to client
        798  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      10000  rows processed
SQL> delete from test_nocompress;
10000 rows deleted.
Elapsed: 00:00:01.55
Statistics
         16  recursive calls
      10367  db block gets
         45  consistent gets
          0  physical reads
    2539244  redo size
        791  bytes sent via SQL*Net to client
        800  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      10000  rows processed
This is an 8:1 ratio in timing, gathered on a Sun E450 running Solaris 8 over Sun JBOD disk. I get similar (8:1) timing results on my little ol' Mac OS X laptop too. I got 6:1 results on a Sun E15000 running Solaris 9 over EMC, but I don't have those test results handy, so I'll just say so for now... :-)
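Tim's 8:1 figure can be checked against the numbers he posted. A small sketch (added in editing; all inputs are the figures from the two autotrace runs above):

```python
# Figures from the two autotrace runs above.
elapsed_compress = 12.20     # seconds to delete 10000 rows, compressed table
elapsed_nocompress = 1.55    # seconds, non-compressed table

redo_compress = 4_921_204    # redo bytes generated, compressed
redo_nocompress = 2_539_244  # redo bytes generated, non-compressed

# Elapsed ratio comes out near the 8:1 Tim cites; redo is roughly doubled.
print(f"elapsed ratio: {elapsed_compress / elapsed_nocompress:.1f}:1")
print(f"redo ratio: {redo_compress / redo_nocompress:.2f}:1")
```

Note the db block gets are also roughly doubled (20699 vs 10367), which lines up with the extra redo: the compressed delete is doing about twice the block work per row in this test.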
Hope this helps...
-Tim
on 3/22/04 8:47 AM, Khedr, Waleed at Waleed.Khedr_at_FMR.COM wrote:
> Don't know why you think deletion in compressed tables will be that slow.
>
> I expect deletion speed not to be affected by the compression.
> Also conventional inserts should not be affected since they will not be
> compressed.
>
> On the other hand I know that updates are really slow and usually the update
> ends up with tons of chained rows that make FTS a nightmare. When I need to
> update, I take these steps: uncompress, update, compress.
>
> Regards,
>
> Waleed
>
> -----Original Message-----
> From: Tim Gorman [mailto:tim_at_sagelogix.com]
> Sent: Monday, March 22, 2004 9:03 AM
> To: oracle-l_at_freelists.org
> Subject: Re: Table compression
>
> LeRoy,
>
> Expect performance penalties of 2x on times for "bulk" inserts, 3x-4x
> penalty on "conventional" inserts and updates, and up to 6x-8x penalty on
> deletes. This is an option that shares a lot of design characteristics with
> bitmap indexes, in that you love them for the queries but hate them during
> DML.
>
> As with bitmap indexes, the benefits of compression come out only during
> certain query operations. Queries using full table scans are much, MUCH
> faster (due simply to fewer I/Os) and queries using indexed scans are a wash
> (no difference).
>
> Excerpts from some test timings on a small subset of a fact table copied
> into (unindexed and nonparallel) compressed and non-compressed copies:
>
> SQL> select sum(bytes)/1048576 alloc_mb,
>   2         sum(blocks) alloc_blocks,
>   3         count(distinct partition_name) nbr_partitions,
>   4         count(*) nbr_exts
>   5  from user_extents
>   6  where segment_name = 'COMPRESS_TEST';
>
>   ALLOC_MB ALLOC_BLOCKS NBR_PARTITIONS   NBR_EXTS
> ---------- ------------ -------------- ----------
>    194.875        24944             51        538
>
> SQL> select count(*) from compress_test;
>
>   COUNT(*)
> ----------
>    6443786
>
> 1 row selected.
>
> Elapsed: 00:00:07.40
>
> SQL> select sum(bytes)/1048576 alloc_mb,
>   2         sum(blocks) alloc_blocks,
>   3         count(distinct partition_name) nbr_partitions,
>   4         count(*) nbr_exts
>   5  from user_extents
>   6  where segment_name = 'NONCOMPRESS_TEST';
>
>   ALLOC_MB ALLOC_BLOCKS NBR_PARTITIONS   NBR_EXTS
> ---------- ------------ -------------- ----------
>   792.0625       101384             51       1110
>
> SQL> select count(*) from noncompress_test;
>
>   COUNT(*)
> ----------
>    6443786
>
> 1 row selected.
>
> Elapsed: 00:00:56.48
>
> In this case, a 4:1 compression ratio on space but a 7:1 compression ratio
> on FULL table scan performance. A little less I/O goes a long way, and look
> ma! no caching!
>
> The reason that there is little to no performance penalty on queries is
> because Oracle's method of compression is not similar to the algorithms
> used by "zip" or "compress" or "zcat", etc. Rather, Oracle adds a layer of
> indirection to a list of distinct values within the block, so that rows
> become a set of pointers to this list (called a "symbol table") within the
> block. Little-to-no additional cycles to "uncompress". This style of
> "compression" pays off when there is low cardinality and lots of repeated
> data values, such as the case with fact tables in a dimensional data model.
>
> So this is an incredibly useful option for data warehouses but you have to
> use bulk loads (a.k.a. inserts via direct-path, append, etc) which, in most
> circumstances, means using partitioning and various forms of the EXCHANGE
> PARTITION load technique...
>
> Hope this helps...
>
> -Tim
>
> on 3/19/04 1:58 PM, LeRoy Kemnitz at lkemnitz_at_uwsa.edu wrote:
>
>> I read that the select runs just as fast if not faster when compared to
>> uncompressed tables.
>>
>> Goulet, Dick wrote:
>>
>>> Yeah, but what's the penalty during reads???
>>> Dick Goulet
>>> Senior Oracle DBA
>>> Oracle Certified 8i DBA
>>>
>>> -----Original Message-----
>>> From: Khedr, Waleed [mailto:Waleed.Khedr_at_FMR.COM]
>>> Sent: Friday, March 19, 2004 3:48 PM
>>> To: 'oracle-l_at_freelists.org'
>>> Subject: RE: Table compression
>>>
>>> I use it, works great, 60% savings.
>>>
>>> Once the table/partition gets flagged "compress", any direct load will be
>>> compressed.
>>>
>>> You will get ora-600 if trying to do a parallel direct load.
>>>
>>> Also you can't add a column to a compressed table.
>>>
>>> Waleed
>>>
>>> -----Original Message-----
>>> From: LeRoy Kemnitz [mailto:lkemnitz_at_uwsa.edu]
>>> Sent: Friday, March 19, 2004 3:41 PM
>>> To: Oracle List
>>> Subject: Table compression
>>>
>>> I am looking into doing some table compression on my warehouse database
>>> to free up some space on the os. I am running 9.2.0.4 on Unix 5.1. The
>>> compression is about 2.5:1 on my tables. The documentation says the
>>> bulk insert time will be doubled but the single inserts, updates, and
>>> deletes are going to be a wash. Does anyone use compression? Are there
>>> any problems you notice in the use of it? I have also read that the
>>> table will need to be re-compressed after the bulk inserts. Any
>>> alternative ideas about getting this done?
>>>
>>> Thanks in advance,
>>>
>>> LeRoy
>>>
>>> ----------------------------------------------------------------
>>> Please see the official ORACLE-L FAQ: http://www.orafaq.com
>>> ----------------------------------------------------------------
>>> To unsubscribe send email to: oracle-l-request_at_freelists.org
>>> put 'unsubscribe' in the subject line.
>>> --
>>> Archives are at http://www.freelists.org/archives/oracle-l/
>>> FAQ is at http://www.freelists.org/help/fom-serve/cache/1.html
>>> -----------------------------------------------------------------
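Tim's "symbol table" description earlier in the thread can be illustrated with a toy model. This is an editor's sketch of the general idea only, not Oracle's actual on-disk format: each block stores its distinct values once, and rows become short lists of indexes into that table, so "decompression" is just an index lookup.

```python
# Toy model of per-block symbol-table compression: distinct values are stored
# once per block, and each row holds small indexes into that list.
def compress_block(rows):
    symtab, index, encoded = [], {}, []
    for row in rows:
        enc = []
        for val in row:
            if val not in index:       # first sighting: add to the symbol table
                index[val] = len(symtab)
                symtab.append(val)
            enc.append(index[val])     # the row stores only the pointer
        encoded.append(enc)
    return symtab, encoded

def decompress_block(symtab, encoded):
    # "Uncompressing" is a plain index lookup, no decoding pass,
    # hence little-to-no extra CPU on reads.
    return [[symtab[i] for i in row] for row in encoded]

# Low-cardinality, fact-table-like rows compress well in this scheme.
rows = [["EAST", 2004, "WIDGET"], ["EAST", 2004, "GADGET"], ["WEST", 2004, "WIDGET"]]
symtab, encoded = compress_block(rows)
assert decompress_block(symtab, encoded) == rows
print(len(symtab), "distinct values for", sum(len(r) for r in rows), "stored fields")
```

The payoff grows with repetition: the more duplicated values per block, the smaller the encoded rows relative to the symbol table, which is why this works best on dimensional fact tables.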