Archiving Old Data [message #74643] Tue, 25 January 2005 12:07
Sean
Messages: 22
Registered: July 2000
Junior Member
Hi All,

I need to archive the data which is more than 12 months old.

No. of transactional tables: 100
No. of auditing tables (populated by triggers): 211

OS: Sun Solaris 5.8
Oracle version: 9iR2 with RAC

I need to create a new archive instance and move data which is more than 12 months old
from production to this instance.

** The time window for this process has not yet been defined by the client
** Frequency will be weekly, with data loads ranging between 1-7 GB
** Only a few tables are range partitioned, and the client does not want to change the existing table structures
** A few tables contain CLOBs
** A few tables contain more than 3 million rows

This is what I have planned:

1.

Create a new tablespace with few large size datafiles
i.e.
create tablespace trans_temp
datafile 'c:varapporadataarchdata01.dbf' size 500M,
         'd:varapporadataarchdata02.dbf' size 500M,
         'e:varapporadataarchdata03.dbf' size 500M
extent management local autoallocate
segment space management auto;

2.

Create tables using CTAS with date criteria
i.e.
create table arch_tab1
tablespace trans_temp
parallel
nologging
as
select * from tab1
where z_date < sysdate - 365;

3.

Check if the tablespace is self contained for transporting
i.e.

EXECUTE dbms_tts.transport_set_check('trans_temp', TRUE);

SELECT * FROM TRANSPORT_SET_VIOLATIONS;

4.

After ensuring the tablespace I want to transport is self-contained,
make it read-only.
i.e.

ALTER TABLESPACE trans_temp READ ONLY;

5.

Invoke the Export utility and specify which tablespaces are in the transportable set:

EXP TRANSPORT_TABLESPACE=y TABLESPACES=(trans_temp)
TRIGGERS=n CONSTRAINTS=n GRANTS=n FILE=expdat.dmp

6.

Transport both the datafiles and the export file of the tablespace to a location accessible to
the archive database.

i.e.

cp 'c:varapporadataarchdata01.dbf' 'c1:varapporadataarchdata01.dbf'
cp 'd:varapporadataarchdata02.dbf' 'd1:varapporadataarchdata02.dbf'
cp 'e:varapporadataarchdata03.dbf' 'e1:varapporadataarchdata03.dbf'
cp 'c:varappexp.dat' 'c1:varappexp.dat'

7. Disable all the constraints on the tables/schema
in the archive instance
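
One way to do this is to generate the DISABLE statements from the data dictionary. This is only a sketch: the schema name OWNER1 is assumed (taken from step 8), and foreign keys are disabled before primary/unique keys so no dependency errors occur:

```sql
-- Sketch: generate DISABLE statements for all constraints owned by OWNER1
-- (schema name assumed). Spool the output and run it as a script.
-- FK constraints ('R') are listed first so they are disabled before
-- the PK/UK constraints they depend on.
SELECT 'ALTER TABLE ' || owner || '.' || table_name ||
       ' DISABLE CONSTRAINT ' || constraint_name || ';'
FROM   dba_constraints
WHERE  owner = 'OWNER1'
AND    constraint_type IN ('R', 'P', 'U')
ORDER  BY DECODE(constraint_type, 'R', 1, 2);
```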

8.

Plug the tablespace into the archive instance

i.e.

IMP TRANSPORT_TABLESPACE=y FILE=expdat.dmp
DATAFILES=('c1:varapporadataarchdata01.dbf',
'd1:varapporadataarchdata02.dbf',...)
TABLESPACES=(trans_temp) TTS_OWNERS=(owner1)
FROMUSER=(owner1) TOUSER=(owner1a)

9.

Insert into the respective archive tables from the tables contained in the transportable tablespace
i.e.

insert /*+ append */ into tab1
select * from arch_tab1;
commit;
insert /*+ append */ into tab2
select * from arch_tab2;
commit;

.............

insert /*+ append */ into tabN
select * from arch_tabN;
commit;
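
Each of these loads could also be wrapped in a small PL/SQL block that verifies the row count before committing. A sketch, using tab1/arch_tab1 from above (the error number and message are arbitrary):

```sql
-- Sketch: load one table and verify the number of rows inserted
-- matches the staging table before committing.
DECLARE
  v_staged   NUMBER;
  v_inserted NUMBER;
BEGIN
  SELECT COUNT(*) INTO v_staged FROM arch_tab1;

  INSERT /*+ APPEND */ INTO tab1
  SELECT * FROM arch_tab1;

  v_inserted := SQL%ROWCOUNT;
  IF v_inserted <> v_staged THEN
    RAISE_APPLICATION_ERROR(-20001,
      'Row count mismatch on tab1: staged=' || v_staged ||
      ' inserted=' || v_inserted);
  END IF;
  COMMIT;
END;
/
```

Note that after a direct-path (APPEND) insert the table cannot be queried in the same transaction, so the staging count is taken before the insert.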

10.

Enable all the constraints on the tables/schema
in the archive instance
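
As with step 7, the ENABLE statements can be generated from the data dictionary. A sketch, again assuming schema OWNER1, with PK/UK constraints enabled before the foreign keys that reference them:

```sql
-- Sketch: generate ENABLE statements for all constraints owned by OWNER1
-- (schema name assumed). PK/UK constraints are listed first so they exist
-- before the FK constraints that reference them are enabled.
SELECT 'ALTER TABLE ' || owner || '.' || table_name ||
       ' ENABLE CONSTRAINT ' || constraint_name || ';'
FROM   dba_constraints
WHERE  owner = 'OWNER1'
AND    constraint_type IN ('R', 'P', 'U')
ORDER  BY DECODE(constraint_type, 'R', 2, 1);
```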

11.

Drop the transportable tablespace, including its contents and datafiles

i.e.

DROP TABLESPACE trans_temp INCLUDING CONTENTS AND DATAFILES;

Kindly advise if my approach is wrong, or whether I can achieve the same result by an
easier and faster method.