
Hemant K Chitale

I am an Oracle Database Specialist in Singapore.
get an rss feed of this blog at http://hemantoracledba.blogspot.com/feeds/posts/default?alt=rss
follow me on twitter : @HemantKChitale

TRUNCATEing a Table makes an UNUSABLE Index VALID again

Sun, 2016-05-22 09:54
Here's something I learned from Jonathan Lewis some time ago.

If you set an Index to be UNUSABLE but later issue a TRUNCATE TABLE, the Index becomes VALID again --- i.e. the Index gets updated with rows subsequently inserted.

SQL> connect hemant/hemant
Connected.
SQL> drop table target_data purge;

Table dropped.

SQL> create table target_data as select * from source_data where 1=2;

Table created.

SQL> create index target_data_ndx_1
2 on target_data(owner, object_type, object_name);

Index created.

SQL> insert /*+ APPEND */ into target_data
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> col segment_name format a30
SQL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name like 'TARGET_DATA%'
4 order by 1;

SEGMENT_NAME SEGMENT_TYPE BYTES/1048576
------------------------------ ------------------ -------------
TARGET_DATA TABLE 49
TARGET_DATA_NDX_1 INDEX 19

SQL>
SQL> col index_name format a30
SQL> select index_name, status
2 from user_indexes
3 where table_name = 'TARGET_DATA';

INDEX_NAME STATUS
------------------------------ --------
TARGET_DATA_NDX_1 VALID

SQL>


So, I have a VALID Index on my Table.

I now make it UNUSABLE and add rows to it.

SQL> alter index target_Data_ndx_1 unusable;

Index altered.

SQL> select status
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_1';

STATUS
--------
UNUSABLE

SQL> insert /*+ APPEND */ into target_data
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> select index_name, status
2 from user_indexes
3 where table_name = 'TARGET_DATA';

INDEX_NAME STATUS
------------------------------ --------
TARGET_DATA_NDX_1 UNUSABLE

SQL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name like 'TARGET_DATA%'
4 order by 1;

SEGMENT_NAME SEGMENT_TYPE BYTES/1048576
------------------------------ ------------------ -------------
TARGET_DATA TABLE 104

SQL>


Oracle actually drops the Index segment (which is why it no longer appears in USER_SEGMENTS) when the Index is set to UNUSABLE, although the Index definition is still present.  The Index doesn't "grow" because the Segment doesn't exist.

Let me TRUNCATE the table.

SQL> truncate table target_data;

Table truncated.

SQL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name like 'TARGET_DATA%'
4 order by 1;

SEGMENT_NAME SEGMENT_TYPE BYTES/1048576
------------------------------ ------------------ -------------
TARGET_DATA TABLE .0625
TARGET_DATA_NDX_1 INDEX .0625

SQL> select index_name, status
2 from user_indexes
3 where table_name = 'TARGET_DATA';

INDEX_NAME STATUS
------------------------------ --------
TARGET_DATA_NDX_1 VALID

SQL>


Immediately after the TRUNCATE TABLE, the Index Segment is re-instantiated and the Index becomes VALID again, so subsequent inserts will maintain the Index.  My last explicit command against the Index was ALTER INDEX ... UNUSABLE, yet that is no longer its current state !

SQL> insert /*+ APPEND */ into target_data
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name like 'TARGET_DATA%'
4 order by 1;

SEGMENT_NAME SEGMENT_TYPE BYTES/1048576
------------------------------ ------------------ -------------
TARGET_DATA TABLE 49
TARGET_DATA_NDX_1 INDEX 19

SQL>


So, repopulating the Table has expanded the Index again.
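A common bulk-load pattern relies on this behaviour *not* happening: mark the index UNUSABLE, load, then rebuild.  A sketch of that pattern, reusing the table and index names from the demo above; note that a TRUNCATE TABLE issued anywhere in the middle would silently re-validate the index, so the load would then maintain it:

```sql
-- Sketch only : the typical disable-load-rebuild sequence
ALTER INDEX target_data_ndx_1 UNUSABLE;

-- Bulk load; with the index UNUSABLE (and skip_unusable_indexes=TRUE,
-- the default) this load does not maintain the index.
-- A TRUNCATE TABLE at this point would re-validate the index !
INSERT /*+ APPEND */ INTO target_data SELECT * FROM source_data;
COMMIT;

ALTER INDEX target_data_ndx_1 REBUILD;
```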


Categories: DBA Blogs

Partition Storage -- 8 : Manually Sizing Partitions

Wed, 2016-05-18 09:44
As demonstrated in the first post in this series, Table Partitions default to starting with an Initial 8MB Extent.  The fourth post demonstrated how to resize a Partition that has already been created and populated.

Here is a method to pre-size the Initial Extent. I start with 3 partitions of different sizes.

SQL> create table my_part_tbl_init_sized (id_column number(6), data_column varchar2(100))
2 partition by range (id_column)
3 (partition p_100 values less than (101) segment creation immediate
4 storage (initial 64K next 64K) tablespace hemant,
5 partition p_200 values less than (201) segment creation immediate
6 storage (initial 1M next 1M) tablespace hemant,
7 partition p_max values less than (maxvalue) segment creation immediate
8 storage (initial 8M next 1M) tablespace hemant)
9 /

Table created.

SQL>
SQL> select partition_name, initial_extent, next_extent, tablespace_name
2 from user_tab_partitions
3 where table_name = upper('my_part_tbl_init_sized')
4 order by partition_position
5 /

PARTITION_NA INITIAL_EXTENT NEXT_EXTENT TABLESPACE_NAME
------------ -------------- ----------- ------------------------------
P_100 65536 65536 HEMANT
P_200 1048576 1048576 HEMANT
P_MAX 8388608 1048576 HEMANT

SQL>
SQL> select partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name = upper('my_part_tbl_init_sized')
4 and segment_type = 'TABLE PARTITION'
5 order by 1
6 /

PARTITION_NA BYTES/1024 EXTENTS
------------ ---------- ----------
P_100 64 1
P_200 1024 1
P_MAX 8192 1

SQL>


However, the caveat about Partition Extent Sizes when SPLIT is used as shown in the third post still applies.

SQL> alter table my_part_tbl_init_sized
2 split partition p_max
3 at (301)
4 into (partition p_300, partition p_max)
5 /

Table altered.

SQL>
SQL> select partition_name, initial_extent, next_extent, tablespace_name
2 from user_tab_partitions
3 where table_name = upper('my_part_tbl_init_sized')
4 order by partition_position
5 /

PARTITION_NA INITIAL_EXTENT NEXT_EXTENT TABLESPACE_NAME
------------ -------------- ----------- ------------------------------
P_100 65536 65536 HEMANT
P_200 1048576 1048576 HEMANT
P_300 8388608 1048576 HEMANT
P_MAX 8388608 1048576 HEMANT

SQL>
SQL> select partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name = upper('my_part_tbl_init_sized')
4 and segment_type = 'TABLE PARTITION'
5 order by 1
6 /

PARTITION_NA BYTES/1024 EXTENTS
------------ ---------- ----------
P_100 64 1
P_200 1024 1
P_300 8192 1
P_MAX 8192 1

SQL>


So, SPLITting P_MAX to create P_300 created P_300 with an 8MB extent because the sizing is inherited from P_MAX.  How do I change the Partition Extent sizes on SPLIT ?

SQL> alter table my_part_tbl_init_sized
2 split partition p_max
3 at (501)
4 into (partition p_500 storage (initial 64K next 64K) tablespace hemant, partition p_max)
5 /

Table altered.

SQL>
SQL> alter table my_part_tbl_init_sized
2 split partition p_500
3 at (401)
4 into (partition p_400, partition p_500)
5 /

Table altered.

SQL>
SQL> select partition_name, initial_extent, next_extent, tablespace_name
2 from user_tab_partitions
3 where table_name = upper('my_part_tbl_init_sized')
4 order by partition_position
5 /

PARTITION_NA INITIAL_EXTENT NEXT_EXTENT TABLESPACE_NAME
------------ -------------- ----------- ------------------------------
P_100 65536 65536 HEMANT
P_200 1048576 1048576 HEMANT
P_300 8388608 1048576 HEMANT
P_400 65536 65536 HEMANT
P_500 65536 65536 HEMANT
P_MAX 8388608 1048576 HEMANT

6 rows selected.

SQL> select partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name = upper('my_part_tbl_init_sized')
4 and segment_type = 'TABLE PARTITION'
5 order by 1
6 /

PARTITION_NA BYTES/1024 EXTENTS
------------ ---------- ----------
P_100 64 1
P_200 1024 1
P_300 8192 1
P_400 64 1
P_500 64 1
P_MAX 8192 1

6 rows selected.

SQL>


This time, when I split P_MAX I deliberately specified a size for P_500.  Then, when I split P_500 further, the new P_400 inherited the sizing from P_500.
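If the post-SPLIT sizing matters, the inheritance can presumably be pre-empted by specifying STORAGE for every new partition in the SPLIT, not just one.  An untested sketch along the lines of the commands above:

```sql
-- Sketch only : give both partitions created by the SPLIT explicit sizing,
-- so that neither inherits the 8MB INITIAL from P_MAX
ALTER TABLE my_part_tbl_init_sized
  SPLIT PARTITION p_max AT (601)
  INTO (PARTITION p_600 STORAGE (INITIAL 64K NEXT 64K) TABLESPACE hemant,
        PARTITION p_max STORAGE (INITIAL 64K NEXT 64K) TABLESPACE hemant)
/
```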

Categories: DBA Blogs

Compression -- 7 : Updating after BASIC Compression

Mon, 2016-05-09 08:57
In the first blog post on compression, I had shown BASIC Compression and also the effect of executing an UPDATE on a table with BASIC compression.

To expand on the theme of UPDATEs of BASIC compression blocks ....

SQL> select count(*) from source_data;

COUNT(*)
----------
367156

SQL> create table target_comp row store compress basic as select * from source_data where 1=2;

Table created.

SQL> select pct_free from user_tables where table_name = 'TARGET_COMP';

PCT_FREE
----------
0

SQL> insert /*+ APPEND */ into target_comp
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ APPEND */ into target_comp
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ APPEND */ into target_comp
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> analyze table target_comp compute statistics;

Table analyzed.

SQL> select chain_cnt, blocks from user_tables where table_name = 'TARGET_COMP';

CHAIN_CNT BLOCKS
---------- ----------
0 4452

SQL>
SQL> exec dbms_stats.gather_table_stats('','TARGET_COMP');

PL/SQL procedure successfully completed.

SQL> select num_rows, chain_cnt, blocks from user_tables where table_name = 'TARGET_COMP';

NUM_ROWS CHAIN_CNT BLOCKS
---------- ---------- ----------
1101468 0 4452

SQL>


So we have a table with 1.1million rows and no Row Chaining.

What happens if we update about 20% of the rows ?

SQL> begin
2 for rec in (select rowid from target_comp where rownum < 220001)
3 loop
4 update target_comp set owner=owner where rowid=rec.rowid;
5 commit;
6 end loop;
7 end;
8 /

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL> analyze table target_comp compute statistics;

Table analyzed.

SQL> select chain_cnt, blocks from user_tables where table_name = 'TARGET_COMP';

CHAIN_CNT BLOCKS
---------- ----------
202189 7882

SQL> exec dbms_stats.gather_table_stats('','TARGET_COMP');

PL/SQL procedure successfully completed.

SQL> select num_rows, chain_cnt, blocks from user_tables where table_name = 'TARGET_COMP';

NUM_ROWS CHAIN_CNT BLOCKS
---------- ---------- ----------
1101468 202189 7882

SQL>


I have updated 220,000 rows without actually increasing the notional length of each row (I set OWNER=OWNER).  Yet, the CHAIN_CNT is now 202K and the table's HighWaterMark has expanded from 4,452 blocks to 7,882 blocks.  A significant increase !
(Your mileage may vary in your tests !)

It was Jonathan Lewis who suggested getting the Chain Count (or LIST CHAINED ROWS) to understand the impact of UPDATEs on a table with BASIC compression.
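For those who want to see the chained rows themselves rather than just CHAIN_CNT, the classic approach is ANALYZE ... LIST CHAINED ROWS.  A sketch, assuming the CHAINED_ROWS table has first been created by running the utlchain.sql script supplied with the database:

```sql
-- Populate CHAINED_ROWS with the ROWIDs of chained / migrated rows
ANALYZE TABLE target_comp LIST CHAINED ROWS INTO chained_rows;

-- Count (or inspect) the chained rows
SELECT COUNT(*)
  FROM chained_rows
 WHERE table_name = 'TARGET_COMP';
```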


Categories: DBA Blogs

Compression -- 6b : Advanced Index Compression (revisited)

Thu, 2016-05-05 08:09
Following up on my earlier post on 12.1.0.2 Advanced Index Compression, one of my readers asked what would be the difference if I reversed the order of columns in the chosen index.

My defined index was on (OWNER, OBJECT_TYPE, OBJECT_NAME) --- defined as being from the column with the fewest distinct values to the most.  This ordering is best compressible with Index Key Compression (also known as Prefix Compression).  If I reverse the order, Index Key Compression for the two leading columns wouldn't deliver the same level of compression.  The question is whether Advanced Index Compression can intelligently handle the reversal.

SQL> create index target_data_ndx_3_comp on
2 target_data(object_name, object_type, owner) compress 2;

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_3_COMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_3_COMP'
4 /

LEAF_BLOCKS
-----------
3091

SQL>


Surprisingly, this index with Prefix 2 on (OBJECT_NAME, OBJECT_TYPE) is, at 3,091 leaf blocks, smaller than the previous  index with Prefix 2 on (OWNER, OBJECT_TYPE) at 5,508 leaf blocks.

Continuing with Prefix 3

SQL> drop index target_data_ndx_3_comp;

Index dropped.

SQL> create index target_data_ndx_3_comp on
2 target_data(object_name, object_type, owner) compress 3;

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_3_COMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_3_COMP'
4 /

LEAF_BLOCKS
-----------
2277

SQL>


At 2,277 leaf blocks it is, as expected, the same size as the earlier index with Prefix 3 on (OWNER, OBJECT_TYPE, OBJECT_NAME).  Since the entire index key is specified as the Prefix, both indexes are the same size.

Going on to Advanced Index Compression

SQL> drop index target_data_ndx_3_comp;

Index dropped.

SQL> create index target_data_ndx_4_advcomp on
2 target_data(object_name, object_type, owner)
3 compress advanced low
4 /

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_4_ADVCOMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_4_ADVCOMP'
4 /

LEAF_BLOCKS
-----------
2277

SQL>


This is, again, as expected.  Advanced Index Compression results in the same size irrespective of the ordering of the columns.

The advantage of Advanced Index Compression over Key or Prefix Compression is that the DBA does not need to determine the Prefix for compression, and so does not have to spend time analyzing the data and comparing the number of distinct values for each of the columns in the composite index.
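The analysis being saved is essentially a cardinality comparison.  A sketch of the kind of query a DBA might run against the demo table before choosing a Prefix:

```sql
-- Fewer distinct values in the leading column(s) => better Prefix Compression
SELECT COUNT(DISTINCT owner)                        AS owners,
       COUNT(DISTINCT object_type)                  AS object_types,
       COUNT(DISTINCT owner || '~' || object_type)  AS owner_type_pairs,
       COUNT(DISTINCT object_name)                  AS object_names
  FROM target_data;
```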

Categories: DBA Blogs

Compression -- 6 : Advanced Index Compression

Tue, 2016-05-03 08:23
Earlier, I had covered Index (Key) Compression which is included in the Enterprise Edition.

In Key Compression, the DBA must specify the Prefix (i.e. number of leading columns in a composite index) that must be used as the compression key.

12.1.0.2 Advanced Index Compression does not require the DBA to manually identify the prefix key length.  Advanced Index Compression auto(magically) identifies the optimal prefix columns.
(Note : Advanced Index Compression requires the Advanced Compression licence Option and is available only in 12.1.0.2 and higher)

SQL> create table target_data as select * from source_data where 1=2;

Table created.

SQL> insert /*+ APPEND */ into target_data select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ APPEND */ into target_data select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ APPEND */ into target_data select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL>


Creating a Key Compression Index by specifying the Prefix size (the leading 2 columns) :

SQL> create index target_data_ndx_1_comp on
2 target_data (owner, object_type, object_name) compress 2;

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_1_COMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_1_COMP'
4 /

LEAF_BLOCKS
-----------
5508

SQL>


Note how I specified "2" as the Prefix size as I want to compress on repeated values of (OWNER, OBJECT_TYPE).

Using Advanced Index Compression by specifying "COMPRESS ADVANCED LOW" and letting Oracle decide on the compression strategy in each leaf block :

SQL> drop index target_data_ndx_1_comp;

Index dropped.

SQL> create index target_data_ndx_2_advcomp on
2 target_data (owner, object_type, object_name)
3 compress advanced low;

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_2_ADVCOMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_2_ADVCOMP'
4 /

LEAF_BLOCKS
-----------
2277

SQL>


Wow, that's significantly smaller.  What's more, I did not have to spend time analyzing the data and the index definition to identify the "correct" Prefix size.

Of course, it is also possible to specify the entire composite key as the Prefix, although that is not what I would have done in earlier versions, since identifying the right Prefix size requires analyzing the data.

SQL> create index target_data_ndx_1_comp on
2 target_data (owner, object_type, object_name) compress 3
3 /

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_1_COMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_1_COMP'
4 /

LEAF_BLOCKS
-----------
2277

SQL>


So, the Advanced option allows me to let Oracle automatically decide the appropriate mechanism to compress the Index keys.

UPDATE : Also see the subsequent test with a reversal of the columns in the composite index.

Just for comparison, here is a regular index :

SQL> drop index target_data_ndx_2_advcomp;

Index dropped.

SQL> create index target_data_ndx_3_nocomp on
2 target_data (owner, object_type, object_name)
3 /

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_3_NOCOMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_3_NOCOMP'
4 /

LEAF_BLOCKS
-----------
7289

SQL>


That is a much larger regular index !

Categories: DBA Blogs

FBDA -- 7 : Maintaining Partitioned Source Table

Mon, 2016-05-02 01:46
Taking up the TEST_FBDA_PARTITIONED table,  let's look at a couple of Partition Maintenance operations.

SQL> select partition_name, high_value, num_rows
2 from user_tab_partitions
3 where table_name = 'TEST_FBDA_PARTITIONED'
4 order by partition_position
5 /

PARTITION_NAME HIGH_VALUE NUM_ROWS
---------------- ------------------------- ----------
P_100 101 100
P_200 201 100
P_300 301 100
P_400 401 100
P_MAX MAXVALUE 301

SQL>


Let's try a TRUNCATE PARTITION

SQL> alter table test_fbda_partitioned truncate partition p_100;

Table truncated.

SQL>


So, that's supported.

Let's try a SPLIT PARTITION

SQL> alter table test_fbda_partitioned       
2 split partition p_max at (501)
3 into (partition p_500, partition p_max)
4 /
alter table test_fbda_partitioned
*
ERROR at line 1:
ORA-55610: Invalid DDL statement on history-tracked table


SQL>


So, a SPLIT PARTITION fails.  We need to DISASSOCIATE the Flashback Archive.

SQL> execute dbms_flashback_archive.disassociate_fba('HEMANT','TEST_FBDA_PARTITIONED');

PL/SQL procedure successfully completed.

SQL> select table_name, flashback_archive_name, archive_table_name, status
2 from user_flashback_archive_tables
3 where table_name = 'TEST_FBDA_PARTITIONED'
4 /

TABLE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
ARCHIVE_TABLE_NAME STATUS
----------------------------------------------------- -------------
TEST_FBDA_PARTITIONED
FBDA
SYS_FBA_HIST_93342 DISASSOCIATED


SQL>
SQL> alter table test_fbda_partitioned
2 split partition p_max at (501)
3 into (partition p_500, partition p_max)
4 /

Table altered.

SQL> execute dbms_flashback_archive.reassociate_fba('HEMANT','TEST_FBDA_PARTITIONED');

PL/SQL procedure successfully completed.

SQL>
SQL> select table_name, flashback_archive_name, archive_table_name, status
2 from user_flashback_archive_tables
3 where table_name = 'TEST_FBDA_PARTITIONED'
4 /

TABLE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
ARCHIVE_TABLE_NAME STATUS
----------------------------------------------------- -------------
TEST_FBDA_PARTITIONED
FBDA
SYS_FBA_HIST_93342 ENABLED


SQL>


While a Table is disassociated from its Flashback Archive, DDL that would not normally be permitted may be executed, but it should be done under strict control to ensure that there is no data divergence.

Categories: DBA Blogs

Partition Storage -- 7 : Revisiting HWM - 2 (again)

Mon, 2016-05-02 01:19
Revisiting the previous test case, but with a larger AVG_ROW_LEN

SQL> create table part_table_large
(id_column number(6), data_col_1 varchar2(100), data_col_2 varchar2(100))
partition by range (id_column)
(partition p_100 values less than (101),
partition p_200 values less than (201),
partition p_300 values less than (301),
partition p_400 values less than (401),
partition p_max values less than (maxvalue))
/

Table created.

SQL>
SQL> insert into part_table_large values
(51,rpad('String',75,'X'), rpad('Another',60,'Y'))
SQL> /

1 row created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 100000
loop
insert into part_table_large
values (25, rpad('String',75,'X'), rpad('Another',60,'Y'));
commit;
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_large
values (45, rpad('String',75,'X'), rpad('Another',60,'Y'));
commit;
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_large
values (55, rpad('String',75,'X'), rpad('Another',60,'Y'));
commit;
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_LARGE',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_LARGE'
and partition_name = 'P_100'
/
AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
140 1100003 22349

SQL>
SQL>
SQL> alter table part_table_large move partition p_100 ;

Table altered.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_LARGE',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_LARGE'
and partition_name = 'P_100'
/
AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
140 1100003 22626

SQL>
SQL>
SQL> select extent_id, blocks
from dba_extents
where segment_name = 'PART_TABLE_LARGE'
and segment_type = 'TABLE PARTITION'
and partition_name = 'P_100'
and owner = 'HEMANT'
order by 1
/
EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024
3 1024
4 1024
5 1024
6 1024
7 1024
8 1024
9 1024
10 1024
11 512
12 1024
13 1024
14 1024
15 1024
16 1024
17 1024
18 1024
19 1024
20 1024
21 1024
22 1024

23 rows selected.

SQL>


Aha ! Unlike the previous case (where, with an AVG_ROW_LEN of 11, a MOVE reduced the HWM from 3,022 to 2,484 blocks), with a larger row size the HWM has actually grown from 22,349 to 22,626 blocks.

So, space consumption is a factor of both the AVG_ROW_LEN and the manner in which the rows are  inserted / relocated.

SQL> l
1 select avg_row_len*num_rows*1.2/8192 Expected_Blocks, Blocks
2 from user_tab_partitions
3 where table_name = 'PART_TABLE_LARGE'
4* and partition_name = 'P_100'
SQL> /

EXPECTED_BLOCKS BLOCKS
--------------- ----------
22558.6553 22626

SQL>

Also, see how the "Expected Blocks" count seems more accurate than earlier.
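The estimator above is just raw row data plus a 20% allowance, so it should track reality better when per-row block overheads are small relative to the row itself.  Both cases side by side (values are approximate):

```sql
-- 8KB block size assumed; the 1.2 factor allows ~20% overhead
SELECT 11  * 1100003 * 1.2 / 8192 AS est_small_rows,  -- ~1772  (actual: 2,484 after MOVE)
       140 * 1100003 * 1.2 / 8192 AS est_large_rows   -- ~22559 (actual: 22,626)
  FROM dual;
```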



Categories: DBA Blogs

Partition Storage -- 6 : Revisiting Partition HWM

Fri, 2016-04-29 09:42
After the curious finding in my previous blog post, where a Partition's HighWaterMark was noticeably higher than that for a non-Partitioned Table but then shrank on a MOVE operation, I am retrying the same rows with a different pattern of INSERT statements.

However, I am still sticking to a single session doing the INSERTs (as I don't want ASSM spreading the incoming rows to different non-contiguous blocks).

This is in 12.1.0.2 :
SQL> connect hemant/hemant
Connected.
SQL> create table part_table_3(id_column number(6), data_column varchar2(100))
2 partition by range (id_column)
3 (partition p_100 values less than (101),
4 partition p_200 values less than (201),
5 partition p_300 values less than (301),
6 partition p_400 values less than (401),
7 partition p_max values less than (maxvalue))
8 /

Table created.

SQL> insert into part_table_3 values (51,'Fifty One');

1 row created.

SQL>
SQL> commit;

Commit complete.

SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 100000
loop
insert into part_table_3 values (25, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (55, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (45, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 3022

SQL>
SQL> alter table part_table_3 move partition p_100 ;

Table altered.

SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 2484

SQL>
SQL> select extent_id, blocks
from dba_extents
where segment_name = 'PART_TABLE_3'
and segment_type = 'TABLE PARTITION'
and partition_name = 'P_100'
and owner = 'HEMANT'
order by 1
/

EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024

SQL>


So, a Row-By-Row Insert still resulted in the HWM being 3,022 blocks, shrinking to 2,484 blocks after a MOVE.



Let's try the same data-set in 11.2.0.4
SQL> connect hemant/hemant
Connected.
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE 11.2.0.4.0 Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

SQL>
SQL> create table part_table_3(id_column number(6), data_column varchar2(100))
partition by range (id_column)
(partition p_100 values less than (101),
partition p_200 values less than (201),
partition p_300 values less than (301),
partition p_400 values less than (401),
partition p_max values less than (maxvalue))
/

Table created.

SQL> show parameter deferr

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation boolean TRUE
SQL>
SQL> insert into part_table_3 values (51,'Fifty One');

1 row created.

SQL> commit;

Commit complete.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 100000
loop
insert into part_table_3 values (25, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (55, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (45, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 3022

SQL>
SQL> alter table part_table_3 move partition p_100 ;

Table altered.

SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 2484

SQL>
SQL> select extent_id, blocks
from dba_extents
where segment_name = 'PART_TABLE_3'
and segment_type = 'TABLE PARTITION'
and partition_name = 'P_100'
and owner = 'HEMANT'
order by 1
/

EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024

SQL>


So, 11.2.0.4 and 12.1.0.2 display the same behaviour for the Partition HWM.  A HWM of 3,022 blocks shrinking to 2,484 blocks.

The next test would be with a larger AVG_ROW_LEN.


Categories: DBA Blogs

Partition Storage -- 5 : Partitioned Table versus Non-Partitioned Table ? (in 12.1)

Mon, 2016-04-25 09:13
Reviewing my second blog post in this series, I found it strange that Partition P_100 (populated by Serial Inserts of 1 row, 100,000 rows, 500,000 rows and 500,000 rows) had such a High Water Mark.

For 1.1million rows of an Average Row Length of 11, the High Water Mark was 3,022 blocks.

In the fourth blog post, a simple ALTER TABLE MOVE PARTITION had brought the High Water Mark down to 2,484 blocks !

This needs further investigation.

Let's compare a single Partition of a Partitioned Table with a Non-Partitioned Table for exactly the same data and same pattern of INSERT statements.

Starting with a new Partitioned Table.

SQL> l
1 create table new_part_tbl (id_column number(6), data_column varchar2(100))
2 partition by range (id_column)
3 (partition p_100 values less than (101),
4 partition p_200 values less than (201),
5 partition p_300 values less than (301),
6 partition p_400 values less than (401),
7* partition p_max values less than (maxvalue))
SQL> /

Table created.

SQL>
SQL> insert into new_part_tbl values (51,'Fifty One');

1 row created.

SQL>
SQL> insert into new_part_tbl
2 select 25, 'New Row'
3 from dual
4 connect by level < 100001
5 /

100000 rows created.

SQL> insert into new_part_tbl
2 select 45, 'New Row'
3 from dual
4 connect by level < 500001
5 /

500000 rows created.

SQL> /

500000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','NEW_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'NEW_PART_TBL'
4 and partition_name = 'P_100'
5 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100001 3022

SQL>
SQL> REM Let's MOVE the Partition
SQL> alter table new_part_tbl move partition P_100;

Table altered.

SQL> exec dbms_stats.gather_table_stats('','NEW_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_row_len, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'NEW_PART_TBL'
4 and partition_name = 'P_100'
5 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100001 2484

SQL>
SQL> l
1 select extent_id, blocks
2 from dba_extents
3 where segment_name = 'NEW_PART_TBL'
4 and segment_type = 'TABLE PARTITION'
5 and partition_name = 'P_100'
6 and owner = 'HEMANT'
7* order by 1
SQL> /

EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024

SQL>


As expected (see the first blog post), the Extents are still 8MB each.  But the High Water Mark has "magically" shrunk from 3,022 blocks to 2,484 blocks.

Let's create a Non-Partitioned Table with the same columns and rows.

SQL> create table non_part_tbl (id_column number(6), data_column varchar2(100));

Table created.

SQL> insert into non_part_tbl values (51,'Fifty One');

1 row created.

SQL> insert into non_part_tbl
2 select 25, 'New Row'
3 from dual
4 connect by level < 100001
5 /

100000 rows created.

SQL> insert into non_part_tbl
2 select 45, 'New Row'
3 from dual
4 connect by level < 500001
5 /

500000 rows created.

SQL> /

500000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','NON_PART_TBL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
2 from user_tables
3 where table_name = 'NON_PART_TBL'
4 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100001 2512

SQL>
SQL> REM Let's MOVE the Table
SQL> alter table non_part_tbl move;

Table altered.

SQL> select avg_row_len, num_rows, blocks
2 from user_tables
3 where table_name = 'NON_PART_TBL'
4 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100001 2512

SQL>
SQL> l
1 select extent_id, blocks
2 from dba_extents
3 where segment_name = 'NON_PART_TBL'
4 and segment_type = 'TABLE'
5 and owner = 'HEMANT'
6* order by 1
SQL> /

EXTENT_ID BLOCKS
---------- ----------
0 8
1 8
2 8
3 8
4 8
5 8
6 8
7 8
8 8
9 8
10 8
11 8
12 8
13 8
14 8
15 8
16 128
17 128
18 128
19 128
20 128
21 128
22 128
23 128
24 128
25 128
26 128
27 128
28 128
29 128
30 128
31 128
32 128
33 128
34 128

35 rows selected.

SQL>


The Non-Partitioned Table had a High Water Mark of 2,512 blocks.  This did not change with a MOVE.  The allocation of Extents is also as expected with AutoAllocate.

Why, then, does the Partition behave differently ?  Its High Water Mark started at 3,022 blocks and shrank to 2,484 blocks after a MOVE.

Is the Average Row Length or the actual data a factor ?  (Note : I am *not* using Table Compression).
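One way to take this further would be to read the space below the High Water Mark directly with DBMS_SPACE, instead of relying on gathered statistics.  A sketch (the owner and object names assume the objects in this post; substitute your own) :

declare
  l_total_blocks  number;
  l_total_bytes   number;
  l_unused_blocks number;
  l_unused_bytes  number;
  l_file_id       number;
  l_block_id      number;
  l_last_block    number;
begin
  dbms_space.unused_space (
      segment_owner             => 'HEMANT',
      segment_name              => 'NEW_PART_TBL',
      segment_type              => 'TABLE PARTITION',
      total_blocks              => l_total_blocks,
      total_bytes               => l_total_bytes,
      unused_blocks             => l_unused_blocks,
      unused_bytes              => l_unused_bytes,
      last_used_extent_file_id  => l_file_id,
      last_used_extent_block_id => l_block_id,
      last_used_block           => l_last_block,
      partition_name            => 'P_100');
  dbms_output.put_line('Blocks below HWM : '||(l_total_blocks - l_unused_blocks));
end;
/

This reports allocated blocks and blocks above the High Water Mark for the named Partition, independently of DBMS_STATS.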

To be explored further with a larger row size ...........

Possibly, to be explored with a different pattern of INSERT statements  ......

Possibly to be compared in 11.2 as well. ......
.
.
.

Categories: DBA Blogs

Partition Storage -- 4 : Resizing Partitions

Sun, 2016-04-24 09:38
Building on Posts 2 (Adding Rows) and 3 (Adding Partitions) where we saw Table Partitions using 8MB Extents ..... is there a way to "resize" Partitions to smaller Extents (and, maybe, less space consumed) without using Compression ?

Let's explore.

Beginning with Partitions P_100 and P_200 ....

SQL> select segment_name, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 and partition_name in ('P_100','P_200')
5 order by 1,2
6 /

SEGMENT_NAME PARTITION_NA BYTES/1024 EXTENTS
------------------------------ ------------ ---------- ----------
MY_PART_TBL P_100 24576 3
MY_PART_TBL P_200 32768 4
MY_PART_TBL_NDX P_100 28672 43
MY_PART_TBL_NDX P_200 33792 48

SQL>
SQL> alter table my_part_tbl move partition p_100 storage (initial 64K next 64K);

Table altered.

SQL> alter index my_part_tbl_ndx rebuild partition p_100 storage (initial 64K next 64K)
2 /

Index altered.

SQL> alter table my_part_tbl move partition p_200 storage (initial 64K next 64K);

Table altered.

SQL> alter index my_part_tbl_ndx rebuild partition p_200 storage (initial 64K next 64K)
2 /

Index altered.

SQL>
SQL> select segment_name, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 and partition_name in ('P_100','P_200')
5 order by 1,2
6 /

SEGMENT_NAME PARTITION_NA BYTES/1024 EXTENTS
------------------------------ ------------ ---------- ----------
MY_PART_TBL P_100 20480 35
MY_PART_TBL P_200 21504 36
MY_PART_TBL_NDX P_100 18432 33
MY_PART_TBL_NDX P_200 19456 34

SQL>
SQL> select partition_name, blocks, num_rows
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 and partition_name in ('P_100','P_200')
5 order by 1
6 /

PARTITION_NA BLOCKS NUM_ROWS
------------ ---------- ----------
P_100 3022 1100001
P_200 3668 1100001

SQL> exec dbms_stats.gather_table_stats('','MY_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select partition_name, blocks, num_rows
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 and partition_name in ('P_100','P_200')
5 order by 1
6 /

PARTITION_NA BLOCKS NUM_ROWS
------------ ---------- ----------
P_100 2482 1100001
P_200 2639 1100001

SQL>
SQL>
SQL> l
1 select partition_name, blocks, count(*)
2 from dba_extents
3 where owner = 'HEMANT'
4 and segment_name = 'MY_PART_TBL'
5 and segment_type = 'TABLE PARTITION'
6 and partition_name in ('P_100','P_200')
7 group by partition_name, blocks
8* order by 1,2
SQL> /

PARTITION_NA BLOCKS COUNT(*)
------------ ---------- ----------
P_100 8 16
P_100 128 19
P_200 8 16
P_200 128 20

SQL>


Partition P_100 has shrunk from 3 extents of 8MB adding up to 24,576KB to 35 extents adding up to 20,480KB. The High Water Mark has shrunk from 3,022 blocks to 2,482 blocks (remember : P_100 was populated with a Serial Insert).  Partition P_200, which had been populated with a Parallel (DoP=4) Insert, has also shrunk from 32,768KB to 21,504KB and its High Water Mark from 3,668 blocks to 2,639 blocks.  The Extents are a combination of 64KB (the first 16, adding up to 1MB) and 1MB sizes.
Even the Index Partitions seem to have shrunk.

So, a MOVE/REBUILD (the REBUILD of the Index Partitions was required because I did the Partition MOVE without UPDATE INDEXES) can be used to shrink the Partitions, with newer, smaller Extents allocated.

But what about the case of SPLIT Partition, where Partitions SPLIT from an 8MB Partition resulted in two 8MB Partitions, even when empty ?

Here's a workaround.  Before SPLITting the P_MAX Partition, I resize it.

SQL> alter table my_part_tbl move partition p_max storage (initial 64K next 64K);

Table altered.

SQL> alter index my_part_tbl_ndx rebuild partition p_max storage (initial 64K next 64K);

Index altered.

SQL> alter table my_part_tbl
2 split partition p_max
3 at (1001)
4 into (partition p_1000, partition p_max)
5 /

Table altered.

SQL> alter table my_part_tbl
2 split partition p_1000
3 at (901)
4 into (partition p_900, partition p_1000)
5 /

Table altered.

SQL> alter table my_part_tbl
2 split partition p_900
3 at (801)
4 into (partition p_800, partition p_900)
5 /

Table altered.

SQL>
SQL> l
1 select segment_name, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4* order by 1,2
SQL>
SQL> /

SEGMENT_NAME PARTITION_NA BYTES/1024 EXTENTS
------------------------------ ------------ ---------- ----------
MY_PART_TBL P_100 20480 35
MY_PART_TBL P_200 21504 36
MY_PART_TBL P_300 8192 1
MY_PART_TBL P_400 8192 1
MY_PART_TBL P_600 8192 1
MY_PART_TBL P_680 8192 1
MY_PART_TBL P_700 8192 1
MY_PART_TBL P_800 64 1
MY_PART_TBL P_900 64 1
MY_PART_TBL P_1000 64 1
MY_PART_TBL P_MAX 64 1
MY_PART_TBL_NDX P_100 18432 33
MY_PART_TBL_NDX P_200 19456 34
MY_PART_TBL_NDX P_300 64 1
MY_PART_TBL_NDX P_400 64 1
MY_PART_TBL_NDX P_600 64 1
MY_PART_TBL_NDX P_680 64 1
MY_PART_TBL_NDX P_700 64 1
MY_PART_TBL_NDX P_800 64 1
MY_PART_TBL_NDX P_900 64 1
MY_PART_TBL_NDX P_1000 64 1
MY_PART_TBL_NDX P_MAX 64 1

22 rows selected.

SQL>


(Note : I have manually relocated Partition P_1000 in the listing).
Partitions P_600, P_680 and P_700 had been created by SPLIT PARTITION commands in the previous post, beginning with the segment-created P_MAX Partition.  However, after rebuilding P_MAX with 64KB Extents, the subsequently SPLITted Partitions (P_800 to P_1000) also start at 64KB.

Note : I am not advising that all Partitions have to be 64KB.  (Observe how AutoAllocate allocated 1MB Extents to P_100 and P_200 after the first 1MB of space usage, i.e. after the first 16 64KB Extents.)
.
.
.


Categories: DBA Blogs

Partition Storage -- 3 : Adding new Range Partitions with SPLIT

Sat, 2016-04-23 09:04
Building on the Partitioned Table in the previous two blog posts...

We know that the Table is a Range Partitioned Table.  With a MAXVALUE Partition, the only way to add new Partitions is to use the SPLIT PARTITION command.

First, let's review the Table, Partitions and Segments.

SQL> select table_name, num_rows
2 from user_tables
3 where table_name = 'MY_PART_TBL'
4 /

TABLE_NAME NUM_ROWS
---------------- ----------
MY_PART_TBL 2200004

SQL> select partition_name, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by 1
5 /

PARTITION_NA NUM_ROWS BLOCKS
------------ ---------- ----------
P_100 1100001 3022
P_200 1100001 3668
P_300 1 1006
P_400 1 1006
P_MAX 0 0

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1

8 rows selected.

SQL>


So, the Table has 5 Partitions, P_100 to P_MAX, but only 4 have segments, created once one or more rows were populated.  P_MAX has no segment created for either the Table Partition or the Index Partition.
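Since 11.2, the SEGMENT_CREATED column in USER_TAB_PARTITIONS reports this directly, without joining to USER_SEGMENTS.  A quick check (a sketch; output not reproduced here) :

select partition_name, segment_created
from user_tab_partitions
where table_name = 'MY_PART_TBL'
order by partition_position
/

A segmentless Partition shows SEGMENT_CREATED = 'NO'.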

What happens if we SPLIT P_MAX (an empty, segmentless Partition) to create a new Partition ?

SQL> alter table my_part_tbl
2 split partition p_max
3 at (501)
4 into (partition p_500, partition p_max)
5 /

Table altered.

SQL>
SQL> exec dbms_stats.gather_table_stats('','MY_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select partition_name, high_value, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by partition_position
5 /

PARTITION_NA HIGH_VALUE NUM_ROWS BLOCKS
------------ ---------------- ---------- ----------
P_100 101 1100001 3022
P_200 201 1100001 3668
P_300 301 1 1006
P_400 401 1 1006
P_500 501 0 0
P_MAX MAXVALUE 0 0

6 rows selected.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1

8 rows selected.

SQL>


So, the process of creating Partition P_500 did not create a segment for it, because P_MAX, which it was SPLIT from, was segmentless.  What happens if I split a Partition with 1 or more rows ?

SQL> insert into my_part_tbl
2 select 550, 'Five Hundred Fifty'
3 from dual
4 /

1 row created.

SQL> commit;
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

10 rows selected.

SQL>
SQL> alter table my_part_tbl
2 split partition p_max
3 at (601)
4 into (partition p_600, partition p_max)
5 /

Table altered.

SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

12 rows selected.

SQL>


So, the row for ID_COLUMN=550 created the segment for Partition P_MAX. Subsequently, SPLITting this Partition into P_600 and P_MAX resulted in two Partitions of 8MB each.
The row for ID_COLUMN=550 is now in the P_600 Partition and P_MAX is again the empty Partition.  Yet, even P_MAX now takes an 8MB Extent, unlike earlier.

Let's try a SPLIT such that, say, P_700 is created empty but P_MAX inherits the row.

SQL> insert into my_part_tbl
2 select 900, 'Nine Hundred'
3 from dual
4 /

1 row created.

SQL> commit;

Commit complete.

SQL> alter table my_part_tbl
2 split partition p_max
3 at (701)
4 into (partition p_700, partition p_max)
5 /

Table altered.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_700 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1
MY_PART_TBL_NDX INDEX PARTITION P_700 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

14 rows selected.

SQL> select count(*) from my_part_tbl partition (P_700);

COUNT(*)
----------
0

SQL>


Again, both Partitions (P_700 and P_MAX) have a segment of 8MB.
This means that, once a Segment for a Partition is created, any SPLIT of that Partition results in two Segments inheriting the same 8MB Extent Size, irrespective of the fact that one of the two may be empty.

SQL> alter table my_part_tbl
2 split partition p_700
3 at (681)
4 into (partition p_680, partition p_700)
5 /

Table altered.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_680 8192 1
MY_PART_TBL TABLE PARTITION P_700 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1
MY_PART_TBL_NDX INDEX PARTITION P_680 64 1
MY_PART_TBL_NDX INDEX PARTITION P_700 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

16 rows selected.

SQL>


That is confirmation that SPLITting a Partition that has a segment results in two segmented Partitions, even if both are empty.

Going back to Partition P_500 (which is present but segmentless), what happens if we split it ?

SQL> alter table my_part_tbl
2 split partition p_500
3 at (451)
4 into (partition p_450, partition p_500)
5 /

Table altered.

SQL>
SQL> select partition_name, high_value
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by partition_position
5 /

PARTITION_NA HIGH_VALUE
------------ ----------------
P_100 101
P_200 201
P_300 301
P_400 401
P_450 451
P_500 501
P_600 601
P_680 681
P_700 701
P_MAX MAXVALUE

10 rows selected.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_680 8192 1
MY_PART_TBL TABLE PARTITION P_700 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1
MY_PART_TBL_NDX INDEX PARTITION P_680 64 1
MY_PART_TBL_NDX INDEX PARTITION P_700 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

16 rows selected.

SQL>


Splitting the segmentless Partition P_500 into P_450 and P_500 did *not* result in new Segments.

This has implications for your SPLIT Partition strategy.  If you need to do a recursive split to create, say, 90 1-day Partitions and you start with a Partition that has a segment (even if empty), you get 90 new segments as well.  Thus, the Table would suddenly "grow" by 720MB without a single row having been inserted on the day you create these 90 Partitions.  You may get some questions from IT Operations / Support about the sudden "growth" in 1 day.
On the other hand, starting with a segmentless Partition, you get 90 new segmentless Partitions.  Their segments will be created only as they are populated.
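If you must pre-create many Partitions from a segmented P_MAX, one workaround is the MOVE/REBUILD technique shown in the "Resizing Partitions" post in this series : resize P_MAX to small Extents *before* running the recursive SPLITs, so each new Partition inherits 64KB (not 8MB) Extents.  A sketch (the 64K STORAGE values are illustrative) :

alter table my_part_tbl move partition p_max storage (initial 64K next 64K);
alter index my_part_tbl_ndx rebuild partition p_max storage (initial 64K next 64K);
-- subsequent SPLITs of P_MAX now create 64KB segments
alter table my_part_tbl split partition p_max at (1001) into (partition p_1000, partition p_max);

Remember that the REBUILD is needed because the MOVE is done without UPDATE INDEXES.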
.
.

.
Categories: DBA Blogs

Partition Storage -- 2 : New Rows Inserted in 12.1 Partitioned Table

Wed, 2016-04-20 09:41
Following up from the previous post,  given the current sizes of the Partitions (The underlying Tablespace is AUTOALLOCATE)  :

SQL> col segment_name format a30
SQL> col partition_name format a12
SQL> l
1 select segment_name, partition_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4* order by 1,2
SQL> /

SEGMENT_NAME PARTITION_NA SEGMENT_TYPE BYTES/1024
------------------------------ ------------ ------------------ ----------
MY_PART_TBL P_100 TABLE PARTITION 8192
MY_PART_TBL P_200 TABLE PARTITION 8192
MY_PART_TBL P_300 TABLE PARTITION 8192
MY_PART_TBL P_400 TABLE PARTITION 8192
MY_PART_TBL_NDX P_100 INDEX PARTITION 64
MY_PART_TBL_NDX P_200 INDEX PARTITION 64
MY_PART_TBL_NDX P_300 INDEX PARTITION 64
MY_PART_TBL_NDX P_400 INDEX PARTITION 64

8 rows selected.

SQL>


I shall now insert rows so that a Partition has to grow beyond the first extent.

SQL> insert into my_part_tbl                      
2 select 25, 'New Row'
3 from dual
4 connect by level < 100001;

100000 rows created.

SQL> insert into my_part_tbl
2 select 45, 'New Row'
3 from dual
4 connect by level < 500001;

500000 rows created.

SQL> /

500000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> select segment_name, partition_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2;

SEGMENT_NAME PARTITION_NA SEGMENT_TYPE BYTES/1024
------------------------------ ------------ ------------------ ----------
MY_PART_TBL P_100 TABLE PARTITION 24576
MY_PART_TBL P_200 TABLE PARTITION 8192
MY_PART_TBL P_300 TABLE PARTITION 8192
MY_PART_TBL P_400 TABLE PARTITION 8192
MY_PART_TBL_NDX P_100 INDEX PARTITION 28672
MY_PART_TBL_NDX P_200 INDEX PARTITION 64
MY_PART_TBL_NDX P_300 INDEX PARTITION 64
MY_PART_TBL_NDX P_400 INDEX PARTITION 64

8 rows selected.

SQL>


So, now, the first Partition P_100 has grown to 24MB (3 Extents of 8MB). Let's check the Extent Sizes.

SQL> select extent_id, blocks, blocks*8192/1024 Size_KB
2 from user_extents
3 where segment_name = 'MY_PART_TBL'
4 and segment_type = 'TABLE PARTITION'
5 and partition_name = 'P_100'
6 order by 1;

EXTENT_ID BLOCKS SIZE_KB
---------- ---------- ----------
0 1024 8192
1 1024 8192
2 1024 8192

SQL>
SQL> l
1 select extent_id, blocks, blocks*8192/1024 Size_KB
2 from user_extents
3 where segment_name = 'MY_PART_TBL_NDX'
4 and segment_type = 'INDEX PARTITION'
5 and partition_name = 'P_100'
6* order by 1
SQL> /

EXTENT_ID BLOCKS SIZE_KB
---------- ---------- ----------
0 8 64
1 8 64
2 8 64
3 8 64
4 8 64
5 8 64
6 8 64
7 8 64
8 8 64
9 8 64
10 8 64
11 8 64
12 8 64
13 8 64
14 8 64
15 8 64
16 128 1024
17 128 1024
18 128 1024
19 128 1024
20 128 1024
21 128 1024
22 128 1024
23 128 1024
24 128 1024
25 128 1024
26 128 1024
27 128 1024
28 128 1024
29 128 1024
30 128 1024
31 128 1024
32 128 1024
33 128 1024
34 128 1024
35 128 1024
36 128 1024
37 128 1024
38 128 1024
39 128 1024
40 128 1024
41 128 1024
42 128 1024

43 rows selected.

SQL>


So, the Table Partition grew as 8MB Extents but the Index Partition used the normal AutoAllocate routine of 64KB Extents for the first 1MB (16 Extents) followed by 1MB Extents.
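Rather than listing all 43 Index Extents individually, the same information can be summarised by grouping on Extent size — a sketch along the lines of the other queries in this series :

select blocks, blocks*8192/1024 Size_KB, count(*)
from user_extents
where segment_name = 'MY_PART_TBL_NDX'
and segment_type = 'INDEX PARTITION'
and partition_name = 'P_100'
group by blocks
order by 1
/

(The 8192 assumes an 8KB block size, as used throughout these posts.)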

What might have been the expected size of the Table Partition ?   ** see the UPDATE at the end of this post.

SQL> exec dbms_stats.gather_table_stats('','MY_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select partition_name, trunc(1.2*avg_row_len*num_rows/1024) Expected_KB
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by 1
5 /

PARTITION_NA EXPECTED_KB
------------ -----------
P_100 14179
P_200 0
P_300 0
P_400 0
P_MAX 0

SQL>


I use a 20% factor to account for PCTFREE and block/row overheads.  **  see the UPDATE at the end of  this post.

Next, what if I use Parallel Insert ?

SQL> alter table my_part_tbl parallel 4;

Table altered.

SQL> alter session enable parallel dml ;

Session altered.

SQL> insert /*+ PARALLEL (MANUAL) */
2 into my_part_tbl
3 select 125, 'New Row'
4 from dual
5 connect by level < 100001;

100000 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ PARALLEL (MANUAL) */
2 into my_part_tbl
3 select 125, 'New Row'
4 from dual
5 connect by level < 500001;

500000 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ PARALLEL (MANUAL) */
2 into my_part_tbl
3 select 125, 'New Row'
4 from dual
5 connect by level < 500001;

500000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> l
1 select segment_name, partition_name, segment_type, extents, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4* order by 1,2
SQL> /

SEGMENT_NAME PARTITION_NA SEGMENT_TYPE EXTENTS BYTES/1024
------------------------- ------------ ------------------ ---------- ----------
MY_PART_TBL P_100 TABLE PARTITION 3 24576
MY_PART_TBL P_200 TABLE PARTITION 4 32768
MY_PART_TBL P_300 TABLE PARTITION 1 8192
MY_PART_TBL P_400 TABLE PARTITION 1 8192
MY_PART_TBL_NDX P_100 INDEX PARTITION 43 28672
MY_PART_TBL_NDX P_200 INDEX PARTITION 48 33792
MY_PART_TBL_NDX P_300 INDEX PARTITION 1 64
MY_PART_TBL_NDX P_400 INDEX PARTITION 1 64

8 rows selected.

SQL>


Note the size of P_200.  The Parallel Insert actually allocated more Extents, because each PX server process created at least 1 Extent for the rows it inserted.  (Note : for a smaller row-count it is not necessary that every PX server inserts rows; some may not have.)  So, Parallel Insert can result in more allocated space.

** UPDATE :  What about the "expected size" computation based on AVG_ROW_LEN x NUM_ROWS ?

SQL> select partition_name, avg_row_len, num_rows, blocks, trunc(1.2*avg_row_len*num_rows/8192) Expected_Blocks
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by 1;

PARTITION_NA AVG_ROW_LEN NUM_ROWS BLOCKS EXPECTED_BLOCKS
------------ ----------- ---------- ---------- ---------------
P_100 11 1100001 3022 1772
P_200 12 1100001 3668 1933
P_300 26 1 1006 0
P_400 28 1 1006 0
P_MAX 0 0 0 0

SQL>


This shows how any "formula" based on AVG_ROW_LEN and NUM_ROWS can be inaccurate in forecasting the expected size (blocks used).  Also note that P_100 was populated with a Serial Insert while P_200 was populated with a Parallel (Degree=4) Insert.
.
.
.



Categories: DBA Blogs

Partition Storage -- 1 : Default Partition Sizes in 12c

Tue, 2016-04-19 09:17
11.2.0.2 introduced a change whereby the default Initial Extent of a Table Partition became 8MB.  However, this did not apply to Index Partitions, which could still start with 64KB Extents in an AutoAllocate Tablespace.

12cR1 now introduces a parameter to enable large Initial Extent for Index Partitions as well.

SQL> connect / as sysdba                          
Connected.
SQL> select banner from v$version where banner like 'Oracle Database%';

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

SQL> show parameter deferred

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation boolean TRUE
SQL>
SQL> connect hemant/hemant
Connected.
SQL> create table my_part_tbl (id_column number(6), data_column varchar2(100))
2 partition by range(id_column)
3 (partition p_100 values less than (101),
4 partition p_200 values less than (201),
5 partition p_300 values less than (301),
6 partition p_400 values less than (401),
7 partition p_max values less than (maxvalue))
8 /

Table created.

SQL> create index my_part_tbl_ndx on my_part_tbl(id_column) local;

Index created.

SQL>
SQL> select segment_name, partition_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2;

no rows selected

SQL> 
SQL> insert into my_part_tbl values (51,'Fifty One');

1 row created.

SQL> insert into my_part_tbl values (151,'One Hundred Fifty One');

1 row created.

SQL> insert into my_part_tbl values (251, 'Two Hundred Fifty One');

1 row created.

SQL> select segment_name, partition_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2;

SEGMENT_NAME PARTITION_NA SEGMENT_TYPE BYTES/1024
------------------------------ ------------ ------------------ ----------
MY_PART_TBL P_100 TABLE PARTITION 8192
MY_PART_TBL P_200 TABLE PARTITION 8192
MY_PART_TBL P_300 TABLE PARTITION 8192
MY_PART_TBL_NDX P_100 INDEX PARTITION 64
MY_PART_TBL_NDX P_200 INDEX PARTITION 64
MY_PART_TBL_NDX P_300 INDEX PARTITION 64

6 rows selected.

SQL>


I can enable large Index Partition Extents with a parameter (which can be set with ALTER SESSION) :


SQL> alter session set "_index_partition_large_extents"=TRUE;

Session altered.

SQL> insert into my_part_tbl values (351,'Three Hundred Fifty One');

1 row created.

SQL> select segment_name, partition_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2;

SEGMENT_NAME PARTITION_NA SEGMENT_TYPE BYTES/1024
------------------------------ ------------ ------------------ ----------
MY_PART_TBL P_100 TABLE PARTITION 8192
MY_PART_TBL P_200 TABLE PARTITION 8192
MY_PART_TBL P_300 TABLE PARTITION 8192
MY_PART_TBL P_400 TABLE PARTITION 8192
MY_PART_TBL_NDX P_100 INDEX PARTITION 64
MY_PART_TBL_NDX P_200 INDEX PARTITION 64
MY_PART_TBL_NDX P_300 INDEX PARTITION 64
MY_PART_TBL_NDX P_400 INDEX PARTITION 8192

8 rows selected.

SQL>


However, I can rebuild the Index Partition back down to 64KB Extents as well :

SQL> alter index my_part_tbl_ndx rebuild partition p_400 storage (initial 64K);

Index altered.

SQL> select segment_name, partition_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2;

SEGMENT_NAME PARTITION_NA SEGMENT_TYPE BYTES/1024
------------------------------ ------------ ------------------ ----------
MY_PART_TBL P_100 TABLE PARTITION 8192
MY_PART_TBL P_200 TABLE PARTITION 8192
MY_PART_TBL P_300 TABLE PARTITION 8192
MY_PART_TBL P_400 TABLE PARTITION 8192
MY_PART_TBL_NDX P_100 INDEX PARTITION 64
MY_PART_TBL_NDX P_200 INDEX PARTITION 64
MY_PART_TBL_NDX P_300 INDEX PARTITION 64
MY_PART_TBL_NDX P_400 INDEX PARTITION 64

8 rows selected.

SQL>


In the next post, we'll see more Extents for the Partitions.
.
.
.
Categories: DBA Blogs

Online Relocation of Database File : ASM to FileSystem and FileSystem to ASM

Sun, 2016-04-17 09:44
There have been a few published examples of the online datafile relocation feature in 12c.  The examples I've seen are on FileSystem.

Here I show online relocation to/from ASM and FileSystem.

SQL> connect system/oracle
Connected.
SQL> create tablespace test_relocate;

Tablespace created.

SQL> create table test_relocate_tbl
2 tablespace test_relocate
3 as select * from dba_objects;

Table created.

SQL> select tablespace_name, bytes/1024
2 from user_segments
3 where segment_name = 'TEST_RELOCATE_TBL';

TABLESPACE_NAME BYTES/1024
------------------------------ ----------
TEST_RELOCATE 13312

SQL> select file_name, bytes/1024
2 from dba_data_files
3 where tablespace_name = 'TEST_RELOCATE';

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1024
----------
+DATA/NONCDB/DATAFILE/test_relocate.260.909444793
102400


SQL>
SQL> alter database move datafile
2 '+DATA/NONCDB/DATAFILE/test_relocate.260.909444793'
3 to '/oradata/NONCDB/test_relocate_01.dbf';

Database altered.

SQL> !ls -l /oradata/NONCDB
total 102408
-rw-r----- 1 oracle asmdba 104865792 Apr 17 23:39 test_relocate_01.dbf

SQL> 
SQL> alter database move datafile  
2 '/oradata/NONCDB/test_relocate_01.dbf'
3 to '+DATA';

Database altered.

SQL> select file_name, bytes/1024
2 from dba_data_files
3 where tablespace_name = 'TEST_RELOCATE';

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1024
----------
+DATA/NONCDB/DATAFILE/test_relocate.260.909445261
102400


SQL>
SQL> !ls -l /oradata/NONCDB
total 0

SQL>


Note that I was courageous enough to not use the KEEP keyword (which is optional !).
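For completeness : with KEEP, the MOVE retains the original file as a copy instead of deleting it.  A sketch (not executed here) :

alter database move datafile
  '/oradata/NONCDB/test_relocate_01.dbf'
  to '+DATA' keep;

Without KEEP (as above), the source file is removed once the move completes.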
.
.
.

Categories: DBA Blogs

FBDA -- 6 : Some Bug Notes

Sun, 2016-04-10 09:27
Some MoS documents on FBDA Bugs

1.  Bug 16454223  :  Wrong Results  (more rows than expected)

2.  Bug 16898135  :  FBDA does not split partitions  (resulting in rows not being purged)

3.  Bug 18294320  :   ORA-01555 (ORA-2475) on SMON_SCN_TIME

4.  Bug 22456983  :   Limit on SMON_SCN_TIME affecting FBDA

5.  Document 2039070.1 :  Known Issues with Flashback Data Archive
.
.
.




Categories: DBA Blogs

Recent Blog Series on Compression

Sun, 2016-04-10 09:16
These have been my recent posts on Compression.

1.  1.  BASIC Table Compression  (Feb-16)

2.  1b.  More on BASIC Table Compression  (Feb-16)

3.  2.  Compressed Table Partitions  (Mar-16)

4.  3.  Index (Key) Compression  (Mar-16)

5.  4.  RMAN (BASIC) Compression  (Mar-16)

6.  5.  OLTP Compression  (Mar-16)

7.  6.  Advanced Index Compression  (May-16)

8.  6b.  Advanced Index Compression (revisited)  (May-16)

9.  7.  Updating After BASIC Compression  (May-16)
.
.

.
Categories: DBA Blogs

FBDA -- 5 : Testing AutoPurging

Sun, 2016-04-10 09:06
Tracking data changes after one row (ID_COLUMN=2000) was added on 06-Apr

SQL> select systimestamp from dual;

SYSTIMESTAMP
---------------------------------------------------------------------------
06-APR-16 10.53.20.328132 PM +08:00

SQL> select scn_to_timestamp(startscn), scn_to_timestamp(endscn), count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
4 order by 1,2;

SCN_TO_TIMESTAMP(STARTSCN)
---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ENDSCN)
---------------------------------------------------------------------------
COUNT(*)
----------
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM
450

02-APR-16 11.32.55.000000000 PM
03-APR-16 11.45.24.000000000 PM
550

02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM
5

02-APR-16 11.46.11.000000000 PM
03-APR-16 11.45.24.000000000 PM
445

03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM
5

03-APR-16 11.45.24.000000000 PM
04-APR-16 11.05.33.000000000 PM
1000

06-APR-16 10.40.43.000000000 PM
06-APR-16 10.42.54.000000000 PM
1


7 rows selected.

SQL>
SQL> select count(*) from sys_fba_tcrv_93250;

COUNT(*)
----------
1002

SQL>


More changes on 07-Apr


SQL> insert into test_fbda
2 select 3000, to_char(3000), trunc(sysdate)
3 from dual;

1 row created.

SQL> commit;

Commit complete.

SQL> update test_fbda
2 set date_inserted=date_inserted
3 where id_column=3000;

1 row updated.

SQL> delete test_fbda
2 where id_column < 1001 ;

1000 rows deleted.

SQL> commit;

Commit complete.

SQL>
SQL> l
1 select scn_to_timestamp(startscn) starttime, scn_to_timestamp(endscn) endtime, count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
4* order by 1,2,3
SQL> /

STARTTIME ENDTIME COUNT(*)
-------------------------------- -------------------------------- ----------
02-APR-16 11.32.55.000000000 PM 02-APR-16 11.46.11.000000000 PM 450
02-APR-16 11.32.55.000000000 PM 03-APR-16 11.45.24.000000000 PM 550
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.41.33.000000000 PM 5
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.45.24.000000000 PM 445
03-APR-16 11.41.33.000000000 PM 03-APR-16 11.45.24.000000000 PM 5
03-APR-16 11.45.24.000000000 PM 04-APR-16 11.05.33.000000000 PM 1000
04-APR-16 11.09.43.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
06-APR-16 10.40.43.000000000 PM 06-APR-16 10.42.54.000000000 PM 1
07-APR-16 10.27.35.000000000 PM 07-APR-16 10.28.03.000000000 PM 1
07-APR-16 10.28.03.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000

10 rows selected.

SQL>
SQL> l
1 select id_column, trunc(date_inserted), count(*)
2 from test_fbda
3 group by id_column, trunc(date_inserted)
4* order by 1
SQL> /

ID_COLUMN TRUNC(DAT COUNT(*)
---------- --------- ----------
2000 06-APR-16 1
3000 07-APR-16 1

SQL>


I see two new 1000-row sets (04-Apr and 07-Apr); I expected only one.

Now that rows with ID_COLUMN less than 1001 have been deleted on 07-Apr, we have to see if and when they get purged from the History table.


On 09-Apr:

SQL> insert into test_fbda
2 select 4000, to_char(4000),trunc(sysdate)
3 from dual;

1 row created.

SQL> commit;

Commit complete.

SQL> update test_fbda
2 set date_inserted=date_inserted
3 where id_column=4000;

1 row updated.

SQL> commit;

Commit complete.

SQL> l
1 select scn_to_timestamp(startscn) starttime, scn_to_timestamp(endscn) endtime, count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
4* order by 1,2,3
SQL> /

STARTTIME ENDTIME COUNT(*)
-------------------------------- -------------------------------- ----------
02-APR-16 11.32.55.000000000 PM 02-APR-16 11.46.11.000000000 PM 450
02-APR-16 11.32.55.000000000 PM 03-APR-16 11.45.24.000000000 PM 550
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.41.33.000000000 PM 5
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.45.24.000000000 PM 445
03-APR-16 11.41.33.000000000 PM 03-APR-16 11.45.24.000000000 PM 5
03-APR-16 11.45.24.000000000 PM 04-APR-16 11.05.33.000000000 PM 1000
04-APR-16 11.09.43.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
06-APR-16 10.40.43.000000000 PM 06-APR-16 10.42.54.000000000 PM 1
07-APR-16 10.27.35.000000000 PM 07-APR-16 10.28.03.000000000 PM 1
07-APR-16 10.28.03.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
09-APR-16 11.10.25.000000000 PM 09-APR-16 11.10.48.000000000 PM 1

11 rows selected.

SQL>
SQL> select * from user_flashback_archive
2 /

OWNER_NAME
------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE# RETENTION_IN_DAYS
------------------ -----------------
CREATE_TIME
---------------------------------------------------------------------------
LAST_PURGE_TIME
---------------------------------------------------------------------------
STATUS
-------
SYSTEM
FBDA
1 3
02-APR-16 11.24.39.000000000 PM
02-APR-16 11.24.39.000000000 PM



SQL>


As of the morning of 10-Apr (after leaving the database instance running overnight) :

SQL> select scn_to_timestamp(startscn) starttime, scn_to_timestamp(endscn) endtime, count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
4 order by 1,2,3
5 /

STARTTIME ENDTIME COUNT(*)
-------------------------------- -------------------------------- ----------
02-APR-16 11.32.55.000000000 PM 02-APR-16 11.46.11.000000000 PM 450
02-APR-16 11.32.55.000000000 PM 03-APR-16 11.45.24.000000000 PM 550
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.41.33.000000000 PM 5
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.45.24.000000000 PM 445
03-APR-16 11.41.33.000000000 PM 03-APR-16 11.45.24.000000000 PM 5
03-APR-16 11.45.24.000000000 PM 04-APR-16 11.05.33.000000000 PM 1000
04-APR-16 11.09.43.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
06-APR-16 10.40.43.000000000 PM 06-APR-16 10.42.54.000000000 PM 1
07-APR-16 10.27.35.000000000 PM 07-APR-16 10.28.03.000000000 PM 1
07-APR-16 10.28.03.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
09-APR-16 11.10.25.000000000 PM 09-APR-16 11.10.48.000000000 PM 1

11 rows selected.

SQL> select systimestamp from dual
2 /

SYSTIMESTAMP
---------------------------------------------------------------------------
10-APR-16 08.51.29.398107 AM +08:00

SQL>
SQL> select * from user_flashback_archive
2 /

OWNER_NAME
------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE# RETENTION_IN_DAYS
------------------ -----------------
CREATE_TIME
---------------------------------------------------------------------------
LAST_PURGE_TIME
---------------------------------------------------------------------------
STATUS
-------
SYSTEM
FBDA
1 3
02-APR-16 11.24.39.000000000 PM
02-APR-16 11.24.39.000000000 PM



SQL>


So auto-purge of the data from the earlier days (02-Apr to 06-Apr) hasn't kicked in yet?  Let's try a manual purge.

SQL> alter flashback archive fbda purge before timestamp (sysdate-4);

Flashback archive altered.

SQL> select * from user_flashback_archive;

OWNER_NAME
------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE# RETENTION_IN_DAYS
------------------ -----------------
CREATE_TIME
---------------------------------------------------------------------------
LAST_PURGE_TIME
---------------------------------------------------------------------------
STATUS
-------
SYSTEM
FBDA
1 3

05-APR-16 11.52.16.000000000 PM



SQL>
SQL> ! sleep 300
SQL> l
1 select scn_to_timestamp(startscn) starttime, scn_to_timestamp(endscn) endtime, count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
4* order by 1,2,3
SQL> /

STARTTIME ENDTIME COUNT(*)
-------------------------------- -------------------------------- ----------
02-APR-16 11.32.55.000000000 PM 02-APR-16 11.46.11.000000000 PM 450
02-APR-16 11.32.55.000000000 PM 03-APR-16 11.45.24.000000000 PM 550
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.41.33.000000000 PM 5
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.45.24.000000000 PM 445
03-APR-16 11.41.33.000000000 PM 03-APR-16 11.45.24.000000000 PM 5
03-APR-16 11.45.24.000000000 PM 04-APR-16 11.05.33.000000000 PM 1000
04-APR-16 11.09.43.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
06-APR-16 10.40.43.000000000 PM 06-APR-16 10.42.54.000000000 PM 1
07-APR-16 10.27.35.000000000 PM 07-APR-16 10.28.03.000000000 PM 1
07-APR-16 10.28.03.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
09-APR-16 11.10.25.000000000 PM 09-APR-16 11.10.48.000000000 PM 1

11 rows selected.

SQL>


Although USER_FLASHBACK_ARCHIVE shows that a purge up to 05-Apr has been done (the 11:52pm timestamp is strange), I still see the older rows in the History table.  The query on the active table does correctly exclude the rows that should no longer be available.


SQL> select scn_to_timestamp(startscn) starttime, scn_to_timestamp(endscn) endtime, count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
4 order by 1,2,3;

STARTTIME ENDTIME COUNT(*)
---------------------------------- ---------------------------------- ----------
02-APR-16 11.32.55.000000000 PM 02-APR-16 11.46.11.000000000 PM 450
02-APR-16 11.32.55.000000000 PM 03-APR-16 11.45.24.000000000 PM 550
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.41.33.000000000 PM 5
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.45.24.000000000 PM 445
03-APR-16 11.41.33.000000000 PM 03-APR-16 11.45.24.000000000 PM 5
03-APR-16 11.45.24.000000000 PM 04-APR-16 11.05.33.000000000 PM 1000
04-APR-16 11.09.43.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
06-APR-16 10.40.43.000000000 PM 06-APR-16 10.42.54.000000000 PM 1
07-APR-16 10.27.35.000000000 PM 07-APR-16 10.28.03.000000000 PM 1
07-APR-16 10.28.03.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
09-APR-16 11.10.25.000000000 PM 09-APR-16 11.10.48.000000000 PM 1

11 rows selected.

SQL> select * from user_flashback_archive;

OWNER_NAME
------------------------------
FLASHBACK_ARCHIVE_NAME
------------------------------------------------------------------------------------------------------------------------------------
FLASHBACK_ARCHIVE# RETENTION_IN_DAYS CREATE_TIME
------------------ ----------------- ---------------------------------------------------------------------------
LAST_PURGE_TIME STATUS
--------------------------------------------------------------------------- -------
SYSTEM
FBDA
1 3
05-APR-16 11.52.16.000000000 PM


SQL> select systimestamp from dual;

SYSTIMESTAMP
---------------------------------------------------------------------------
10-APR-16 10.52.12.361412 PM +08:00

SQL> select count(*) from test_fbda as of timestamp (sysdate-3);

COUNT(*)
----------
2

SQL>
SQL> select partition_position, high_value
2 from user_tab_partitions
3 where table_name = 'SYS_FBA_HIST_93250'
4 order by 1;

PARTITION_POSITION HIGH_VALUE
------------------ --------------------------------------------------------------------------------
1 MAXVALUE

SQL>



Support Document 16898135.1 states that if Oracle isn't maintaining partitions for the History table, it may not be purging data properly.  Even an ALTER FLASHBACK ARCHIVE ... PURGE doesn't purge data (unless PURGE ALL is issued).  I'd seen this behaviour in 11.2.0.4.  The bug is supposed to have been fixed in 12.1.0.2, but my 12.1.0.2 environment shows the same behaviour.  The fact that my test database has very little activity (very few SCNs being incremented) shouldn't matter.  The "Workaround" is, of course, unacceptable.
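The PURGE variants mentioned above can be sketched as follows (archive name from this test; PURGE ALL disregards the retention period entirely, so use it with care) :

```sql
-- Targeted purge by timestamp (did not remove History rows in this test):
ALTER FLASHBACK ARCHIVE fbda PURGE BEFORE TIMESTAMP (SYSDATE - 4);

-- A purge by SCN is also available:
-- ALTER FLASHBACK ARCHIVE fbda PURGE BEFORE SCN 12345678;

-- PURGE ALL removes *all* archived history, ignoring the retention period:
ALTER FLASHBACK ARCHIVE fbda PURGE ALL;
```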
.
.
.
Categories: DBA Blogs

FBDA -- 4 : Partitions and Indexes

Tue, 2016-04-05 09:47
Continuing our investigation of the FBDA architecture.

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant

SQL*Plus: Release 12.1.0.2.0 Production on Tue Apr 5 23:25:10 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Tue Apr 05 2016 23:23:47 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> select table_name, def_tablespace_name, partitioning_type, partition_count, status
2 from user_part_tables
3 order by 1;

TABLE_NAME
--------------------------------------------------------------------------------
DEF_TABLESPACE_NAME PARTITION PARTITION_COUNT STATUS
------------------------------ --------- --------------- --------
SYS_FBA_HIST_93250
FBDA RANGE 1 VALID


SQL>
SQL> set pages600
SQL> set long 10000
SQL> select dbms_metadata.get_ddl('TABLE','SYS_FBA_HIST_93250') from dual;

DBMS_METADATA.GET_DDL('TABLE','SYS_FBA_HIST_93250')
--------------------------------------------------------------------------------

CREATE TABLE "HEMANT"."SYS_FBA_HIST_93250"
( "RID" VARCHAR2(4000),
"STARTSCN" NUMBER,
"ENDSCN" NUMBER,
"XID" RAW(8),
"OPERATION" VARCHAR2(1),
"ID_COLUMN" NUMBER,
"DATA_COLUMN" VARCHAR2(15),
"DATE_INSERTED" DATE,
"D_1729869_NEW_COL_1" VARCHAR2(5)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "FBDA"
PARTITION BY RANGE ("ENDSCN")
(PARTITION "HIGH_PART" VALUES LESS THAN (MAXVALUE) SEGMENT CREATION IMMEDIATE

PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 8388608 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "FBDA" )


SQL>


So, although my active table (TEST_FBDA) is not partitioned, the History table is Range Partitioned on the ENDSCN column, with a single partition!  All the rows in the History table are in the MAXVALUE partition.  I wonder if and when it will ever be split?

What if I create a partitioned table and then define Flashback Archiving on it ?

SQL> create table test_fbda_partitioned (
2 id_column number,
3 data_column varchar2(15),
4 date_inserted date)
5 partition by range (id_column)
6 (partition p_100 values less than (101),
7 partition p_200 values less than (201),
8 partition p_300 values less than (301),
9 partition p_400 values less than (401),
10 partition p_max values less than (MAXVALUE))
11 /

Table created.

SQL> alter table test_fbda_partitioned flashback archive fbda;

Table altered.

SQL> insert into test_fbda_partitioned
2 select rownum, to_char(rownum), trunc(sysdate)
3 from dual connect by level < 701;

700 rows created.

SQL> commit;

Commit complete.

SQL> update test_fbda_partitioned
2 set data_column=data_column;

700 rows updated.

SQL> commit;

Commit complete.

SQL> col subobject_name format a15
SQL> select object_type, subobject_name, object_id
2 from user_objects
3 where object_name = 'TEST_FBDA_PARTITIONED'
4 order by 3,1;

OBJECT_TYPE SUBOBJECT_NAME OBJECT_ID
----------------------- --------------- ----------
TABLE 93342
TABLE PARTITION P_100 93343
TABLE PARTITION P_200 93344
TABLE PARTITION P_300 93345
TABLE PARTITION P_400 93346
TABLE PARTITION P_MAX 93347

6 rows selected.

SQL>
SQL> select table_name
2 from user_tables
3 where table_name like '%93342%'
4 order by 1;

TABLE_NAME
--------------------------------------------------------------------------------
SYS_FBA_DDL_COLMAP_93342
SYS_FBA_HIST_93342
SYS_FBA_TCRV_93342

SQL>
SQL> select dbms_metadata.get_ddl('TABLE','SYS_FBA_HIST_93342') from dual;

DBMS_METADATA.GET_DDL('TABLE','SYS_FBA_HIST_93342')
--------------------------------------------------------------------------------

CREATE TABLE "HEMANT"."SYS_FBA_HIST_93342"
( "RID" VARCHAR2(4000),
"STARTSCN" NUMBER,
"ENDSCN" NUMBER,
"XID" RAW(8),
"OPERATION" VARCHAR2(1),
"ID_COLUMN" NUMBER,
"DATA_COLUMN" VARCHAR2(15),
"DATE_INSERTED" DATE
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "FBDA"
PARTITION BY RANGE ("ENDSCN")
(PARTITION "HIGH_PART" VALUES LESS THAN (MAXVALUE) SEGMENT CREATION IMMEDIATE

PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 8388608 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "FBDA" )


SQL>


So, even though my active table (TEST_FBDA_PARTITIONED) is created as a Range Partitioned Table partitioned on ID_COLUMN, the corresponding History table is Range Partitioned on ENDSCN with a single MAXVALUE partition.
Therefore, Oracle ignores my partitioning definition when creating the History table.  The History table seems to be always Range Partitioned on ENDSCN, starting with a single MAXVALUE partition up-front.  (When will this Partition be split?  I could search MoS for Docs / Bugs, but I'll do that exercise later.)
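One way to keep an eye on whether HIGH_PART ever gets split is to poll the partition list of the History table.  A minimal sketch (the table name is from this test database, and this is an observation query, not a monitoring solution) :

```sql
-- List the History table's partitions and their HIGH_VALUE bounds.
-- A second partition appearing would indicate that Oracle has
-- split HIGH_PART.
SELECT partition_name, partition_position, high_value
FROM   user_tab_partitions
WHERE  table_name = 'SYS_FBA_HIST_93250'
ORDER  BY partition_position;
```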

Now that we know that the History table doesn't use our Partition Key, we must wonder about Partition Pruning when running AS OF queries on the active table that need to access the History Table.  If we can't Partition Prune, can we Index the History table ?

SQL> select table_name                   
2 from user_tables
3 where table_name like 'SYS_FBA_HIST%'
4 /

TABLE_NAME
--------------------------------------------------------------------------------
SYS_FBA_HIST_93250
SYS_FBA_HIST_93342

SQL> create index sys_fba_hist_93250_ndx_1 on sys_fba_hist_93250(id_column) tablespace fbda;

Index created.

SQL> create index sys_fba_hist_93342_ndx_1 on sys_fba_hist_93342(id_column) tablespace fbda;

Index created.

SQL> select dbms_metadata.get_ddl('INDEX','SYS_FBA_HIST_93250_NDX_1') from dual;

DBMS_METADATA.GET_DDL('INDEX','SYS_FBA_HIST_93250_NDX_1')
--------------------------------------------------------------------------------

CREATE INDEX "HEMANT"."SYS_FBA_HIST_93250_NDX_1" ON "HEMANT"."SYS_FBA_HIST_932
50" ("ID_COLUMN")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "FBDA"


SQL> select dbms_metadata.get_ddl('INDEX','SYS_FBA_HIST_93342_NDX_1') from dual;

DBMS_METADATA.GET_DDL('INDEX','SYS_FBA_HIST_93342_NDX_1')
--------------------------------------------------------------------------------

CREATE INDEX "HEMANT"."SYS_FBA_HIST_93342_NDX_1" ON "HEMANT"."SYS_FBA_HIST_933
42" ("ID_COLUMN")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "FBDA"


SQL>


Yes!  Oracle does allow us to build custom indexes on the History tables.  (I believe I saw this somewhere in the documentation or a support note.)

Next Post (possibly) : More DML operations, and whether auto-purging occurs based on the defined RETENTION 3 DAY period.  (Note : The Retention Period is defined at the Flashback Archive level, not at the individual table (active/History) level.  If you need tables with different Retention Periods, you must define different Flashback Archives, although they can all be in the same Tablespace.)
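Since retention is a property of the Flashback Archive rather than of the table, separate archives are needed for separate retention periods.  A hedged sketch (the archive and table names here are made up for illustration; both archives share the FBDA tablespace) :

```sql
-- Two archives in the same tablespace, with different retentions:
CREATE FLASHBACK ARCHIVE fbda_short
  TABLESPACE fbda QUOTA 1G RETENTION 3 DAY;

CREATE FLASHBACK ARCHIVE fbda_long
  TABLESPACE fbda QUOTA 5G RETENTION 1 YEAR;

-- Each table is then tracked with the retention of its archive:
ALTER TABLE orders_hot  FLASHBACK ARCHIVE fbda_short;
ALTER TABLE orders_cold FLASHBACK ARCHIVE fbda_long;
```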
.
.
.

Categories: DBA Blogs

FBDA -- 3 : Support for TRUNCATEs

Mon, 2016-04-04 09:13
One of the points in the previous post was how ADD and DROP Column commands are supported in FBDA.

Let's look at support for TRUNCATEs.

(Just to prove that I am not using the Undo tablespace for the AS OF query, I drop the Undo tablespace):

SQL> startup
ORACLE instance started.

Total System Global Area 402653184 bytes
Fixed Size 2924928 bytes
Variable Size 260050560 bytes
Database Buffers 134217728 bytes
Redo Buffers 5459968 bytes
Database mounted.
Database opened.
SQL> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1
SQL> show parameter spfile

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string +DATA/NONCDB/PARAMETERFILE/spf
ile.267.896483727
SQL> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1
SQL> create undo tablespace undotbs2;

Tablespace created.

SQL> alter system set undo_tablespace='UNDOTBS2';

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 402653184 bytes
Fixed Size 2924928 bytes
Variable Size 260050560 bytes
Database Buffers 134217728 bytes
Redo Buffers 5459968 bytes
Database mounted.
Database opened.
SQL> drop tablespace undotbs1 including contents and datafiles;

Tablespace dropped.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 402653184 bytes
Fixed Size 2924928 bytes
Variable Size 260050560 bytes
Database Buffers 134217728 bytes
Redo Buffers 5459968 bytes
Database mounted.
Database opened.
SQL> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS2
SQL>


I now TRUNCATE my test table and then run an AS OF query against it.

SQL> connect hemant/hemant
Connected.
SQL> select count(*) from test_fbda;

COUNT(*)
----------
1000

SQL> truncate table test_fbda;

Table truncated.

SQL> select count(*) from test_fbda;

COUNT(*)
----------
0

SQL> select count(*) from test_fbda as of timestamp (sysdate-1);

COUNT(*)
----------
1000

SQL>


The AS OF query was satisfied from the FBDA History table (not from Undo).

However, because of the ADD and DROP column test in yesterday's post, the current table structure doesn't match the structure AS OF yesterday.  So, I work around the difference by naming the columns.

SQL> insert into test_fbda select * from test_fbda as of timestamp (sysdate-1);
insert into test_fbda select * from test_fbda as of timestamp (sysdate-1)
*
ERROR at line 1:
ORA-00913: too many values


SQL>
SQL> desc test_fbda
Name Null? Type
----------------------------------------- -------- ----------------------------
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE

SQL> insert into test_fbda
2 select id_column, data_column, date_inserted
3 from test_fbda as of timestamp (sysdate-1);

1000 rows created.

SQL> commit;

Commit complete.

SQL>


Thus, when my column list matches the current table structure, I can re-insert the data as of the earlier point in time.

Note how I did NOT use the SYS_FBA_HIST_93250 table as I'd shown in my previous posts.  The History table is not meant to be accessed directly; I demonstrated queries against it only to show the underlying architecture of FBDA.
.
.
.



Categories: DBA Blogs