Re: Pros/cons of using as many table partitions as possible
Date: Sun, 20 Apr 2008 01:15:41 +1000
Message-ID: <fud2ah$g8h$1@registered.motzarella.org>
Paulukas wrote, on my timestamp of 19/04/2008 7:30 AM:
> Oracle 9i had a limit of 64K-1 partitions per table. 10g increased
> that to over a million (1024K-1). Is it practical to approach that
> limit (>500,000 partitions)? What are the pitfalls?
>
> I'm very aware of partition-elimination and partition-wise joins.
> Assume all queries will appropriately specify the partition-keys.
>
> Here are some of the requirements and goals:
> a) I'm trying to manage a table that will have about 50 billion rows
> and about 60 bytes/row.
> b) Each row will be partitioned on a year-field for LIST-partitions
> and a surrogate-key (integer) for HASH-subpartitions.
> c) I'd like to use composite list/hash partitioning to support:
> - Easy removal of old data (dropping partitions based on year)
> - Minimal size of subpartitions due to large hash count.
> d) With small-enough subpartition size, I won't need any indexes,
> since a full scan of an individual subpartition should be very
> reasonable. This will save an enormous amount of space (no indexes).
> e) Small subpartition size allows easier replication via partition-
> swapping.
> f) Subpartitions for earlier years can also be compressed and their
> tablespaces can be made readonly.
>
> What concerns me:
> a) Query parsing.
> b) Huge amount of system info (objects, segments, extents,
> DBA_TAB_PARTITIONS, etc.).
> c) Other things I do not know about!! :-)
>
> I welcome other suggestions for managing such a large table, but
> please try to address the original question about using a very high
> number of partitions.
>
Interesting. Is this to keep RFID data
by any chance?
Jonathan mentioned here a few years ago
that he'd used thousands of partitions
in a table without any major problems.
That was wa-a-a-a-y back in the 9i days.
With 10g and 11g you'd have much better luck,
at a guess.
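Just to make sure we're talking about the same
scheme, here is roughly what I understand you to
be describing. Table and column names (rfid_data,
year_key, id) are my own invention, and note that
list/hash composite partitioning only turned up
in 11g — on 10g you'd use a range partition per
year instead:

```sql
-- Sketch only: names are made up, subpartition count illustrative.
CREATE TABLE rfid_data (
  year_key NUMBER(4)    NOT NULL,   -- LIST partition key
  id       NUMBER       NOT NULL,   -- surrogate key, HASH subpartition key
  payload  VARCHAR2(52)
)
PARTITION BY LIST (year_key)
SUBPARTITION BY HASH (id) SUBPARTITIONS 64
(
  PARTITION p2006 VALUES (2006) COMPRESS,  -- old year: compressed
  PARTITION p2007 VALUES (2007),
  PARTITION p2008 VALUES (2008)
);

-- Dropping an old year is then a one-liner:
ALTER TABLE rfid_data DROP PARTITION p2006;
```

Scale SUBPARTITIONS up towards your target count;
64 per year is just to keep the example short.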
Let's say that at 50 billion rows, partitioning
into 100 thousand subpartitions brings the
average down to 500,000 rows each, or about 30MB
at the 60 bytes/row you quote. That is quite
manageable for a full subpartition scan instead
of indexing.
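The arithmetic, if you want to play with the
numbers yourself (the partition count and row
size are just the figures quoted above):

```sql
-- 50 billion rows over 100,000 subpartitions at 60 bytes/row
SELECT 50e9 / 100000            AS rows_per_subpart,  -- 500,000
       50e9 / 100000 * 60 / 1e6 AS mb_per_subpart     -- ~30 MB
FROM dual;
```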
Interesting concept, and quite relevant in these
days of tables with tens of billions of rows:
traditional notions of indexing are not quite
applicable anymore, are they? Good lateral
thinking.