Adding column to the table which have the huge data [message #665559]
Mon, 11 September 2017 03:03
Revathi.orafaq
Messages: 21  Registered: September 2017
Junior Member
Hi All,
We have a requirement to alter a table that contains a large amount of data: nearly 25 million rows.
It is a range-partitioned table on a date column, with 60 partitions.
We want to add several extra columns to this table. Some of the columns have a default value and some do not.
Since I have many columns to add, it will obviously take some time. What is the most efficient way of doing this?
Do I need to disable all the indexes, or gather table statistics?
I know that disabling indexes helps when we have bulk insertions into the table, and gathering statistics helps when we select data with indexed columns in the WHERE clause.
But in this case, which of the following will help?
A PARALLEL hint?
Adding multiple columns in a single ALTER?
Table redefinition?
Or is there another, better solution?
Please help me resolve this.
Details:
Table size: 25 million rows
Number of partitions: 60
Partition type: range partition on an entry-date column
Oracle version: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
Thanks,
Revathi.T
Re: Adding column to the table which have the huge data [message #665565 is a reply to message #665559]
Mon, 11 September 2017 05:22
John Watson
Messages: 8974  Registered: January 2010  Location: Global Village
Senior Member
Welcome to the forum. Please read our OraFAQ Forum Guide and How to use [code] tags to make your code easier to read.
You have thrown a lot of questions into one topic, which makes it difficult to answer. I'll deal with just the first one:
Quote: We want to add some extra columns to the same table. Some of the columns have a default value and some do not. But I have more columns to add. Obviously it will take more time. What is the efficient way of doing this?
Adding a column, with or without a default value, is a DDL operation and takes no time. The fact that your table has 25m rows is not relevant.
Before considering your other questions, I would ask why you want to partition such a small table. You will probably find that partitioning it into 60 partitions will tend to suppress index usage and you'll get a lot of scans. Probably direct. Is this really what you want?
By the way, I wish you wouldn't say "record" when you mean "row".
Re: Adding column to the table which have the huge data [message #665576 is a reply to message #665575]
Mon, 11 September 2017 06:57
Solomon Yakobson
Messages: 3305  Registered: January 2010  Location: Connecticut, USA
Senior Member
John, the OP mentioned that some of the columns to be added have default values, which means Oracle will issue an update of existing rows to set the new column to the default value, unless the column is defined as NOT NULL and the OP is on a newer version (I don't recall in which version Oracle made that enhancement).
SY.
Re: Adding column to the table which have the huge data [message #665582 is a reply to message #665581]
Mon, 11 September 2017 07:59
Solomon Yakobson
Messages: 3305  Registered: January 2010  Location: Connecticut, USA
Senior Member
Oops, I got mixed up again. As I originally stated: "unless column is defined as not null and OP is on newer version (I don't recall in what version Oracle made that enhancement)". So to clarify: at some point (I don't recall the version) Oracle made an enhancement so that the default value is not written to existing rows, but ONLY IF THE COLUMN IS NOT NULL:
SQL> select banner from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE 11.2.0.4.0 Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production
SQL> create table tbl as select level col1 from dual connect by level <= 1000000;
Table created.
SQL> set timing on
SQL> alter table tbl add col2 varchar2(10) default '0123456789' not null;
Table altered.
Elapsed: 00:00:00.41
SQL> alter table tbl add col3 varchar2(10) default '0123456789';
Table altered.
Elapsed: 00:00:57.83
SQL>
In 12c Oracle made a further enhancement, and the default value is not written to existing rows regardless of NULL/NOT NULL:
SQL> select banner from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE 12.1.0.2.0 Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production
SQL> create table tbl as select level col1 from dual connect by level <= 1000000;
Table created.
SQL> set timing on
SQL> alter table tbl add col2 varchar2(10) default '0123456789' not null;
Table altered.
Elapsed: 00:00:00.13
SQL> alter table tbl add col3 varchar2(10) default '0123456789';
Table altered.
Elapsed: 00:00:00.05
SQL>
So unless the columns with default values the OP is adding are NOT NULL, the ALTER will take time; but 25 million rows shouldn't be too bad.
SY.
Re: Adding column to the table which have the huge data [message #665595 is a reply to message #665594]
Tue, 12 September 2017 03:16
Roachcoach
Messages: 1576  Registered: May 2010  Location: UK
Senior Member
cookiemonster wrote on Tue, 12 September 2017 08:52: And 26 million is not huge. There are systems out there where that amount is relatively small.
Oh yes.
1* select * from (select num_rows from dba_tables where owner = 'APPOWNER' order by 1 desc nulls last) where rownum <=5
09:15:18 SQL> /
NUM_ROWS
----------
9485554850
7494313980
5960284480
5958877320
4367538830
Re: Adding column to the table which have the huge data [message #665602 is a reply to message #665601]
Tue, 12 September 2017 05:11
cookiemonster
Messages: 13966  Registered: September 2008  Location: Rainy Manchester
Senior Member
Parallel doesn't actually open extra sessions, and I don't know what effect it'll have on an ALTER TABLE. It may be able to divide up the updates due to the defaults, but you would have to test it.
What I'm suggesting is that you start a SQL*Plus session and issue one ALTER TABLE. Then, while that's still running, start a new SQL*Plus session and issue the ALTER for the second table, and so on. Parallel won't let you do that, and it seems the main source of elapsed time is that you have 25 tables to alter.
Re: Adding column to the table which have the huge data [message #665604 is a reply to message #665597]
Tue, 12 September 2017 05:54
John Watson
Messages: 8974  Registered: January 2010  Location: Global Village
Senior Member
Have you actually read any of the answers you have received? To repeat, for I think the fourth time: if you add your column with a default as NOT NULL, it will take no time at all:
orclx>
orclx> set timing on
orclx> select count(*) from t1;
COUNT(*)
----------
1166704
Elapsed: 00:00:00.08
orclx> alter table t1 add (newcol varchar2(10) default 'idiot' not null);
Table altered.
Elapsed: 00:00:00.01
orclx>
Furthermore, you are still saying "record" when you mean "row". Please stop doing that, it is very irritating.
Re: Adding column to the table which have the huge data [message #665607 is a reply to message #665592]
Tue, 12 September 2017 06:56
Solomon Yakobson
Messages: 3305  Registered: January 2010  Location: Connecticut, USA
Senior Member
You are adding new columns, right? They are obviously not part of any index, so what does this have to do with indexes? Having partitions can make adding columns faster, since populating the new columns with the default value in existing rows can be done in parallel. And again, 25 million is not a large number of rows.
SY.
Re: Adding column to the table which have the huge data [message #665662 is a reply to message #665658]
Thu, 14 September 2017 07:28
Bill B
Messages: 1971  Registered: December 2004
Senior Member
It's actually very easy to do what you want, and the new feature was made reliable in version 11.2.0.4.0, so you are all set.
ALTER TABLE MY_TABLE ADD (TEST_COL NUMBER(30) DEFAULT ON NULL 36 NOT NULL);
The secret is the "ON NULL" option. This adds a new NOT NULL column with a default to the table, but stores the default value in the data dictionary rather than in each row. So when you select from the table, the new column returns (in the example above) 36 even though the value is not actually stored in the row. The following rules apply:
1) If you update the column to NULL, the default value is stored in the column.
2) If you insert a new row and don't specify the column, the default value is stored in the row.
3) If you specify the column but supply NULL for the value, the default value is stored.
4) If you specify any other value, that value is stored in the row.
The beauty of this ALTER is that it takes less than a second even on gigantic tables.
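A minimal sketch of those four rules, using hypothetical table and column names (note that DEFAULT ON NULL is 12c syntax, so this assumes a 12c database):

CREATE TABLE my_table (id NUMBER PRIMARY KEY);
INSERT INTO my_table VALUES (1);

-- Metadata-only change: the existing row is not physically updated.
ALTER TABLE my_table ADD (test_col NUMBER(30) DEFAULT ON NULL 36 NOT NULL);

-- Rules 2 and 3: omitting the column, or supplying NULL, both store 36.
INSERT INTO my_table (id) VALUES (2);
INSERT INTO my_table (id, test_col) VALUES (3, NULL);

-- Rule 4: an explicit value is stored as given.
INSERT INTO my_table (id, test_col) VALUES (4, 99);

SELECT id, test_col FROM my_table ORDER BY id;
-- ids 1 through 3 show 36; id 4 shows 99.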
[Updated on: Thu, 14 September 2017 07:29]
Re: Adding column to the table which have the huge data [message #665665 is a reply to message #665664]
Thu, 14 September 2017 08:40
Bill B
Messages: 1971  Registered: December 2004
Senior Member
Sorry,
I think I gave you the 12c syntax. Try the following:
ALTER TABLE MY_TABLE1 ADD (TEST_COL NUMBER(30) DEFAULT 36 NOT NULL);
In 11g this is supposed to be very fast when NOT NULL is specified, because Oracle just stores the default value in the data dictionary. However, it only does that if NOT NULL is specified for the column.
Re: Adding column to the table which have the huge data [message #665688 is a reply to message #665677]
Fri, 15 September 2017 05:19
Solomon Yakobson
Messages: 3305  Registered: January 2010  Location: Connecticut, USA
Senior Member
Revathi.orafaq wrote on Fri, 15 September 2017 02:57:
That's why, after adding the DEFAULT value, we revert back to NULL.
ALTER TABLE MY_TABLE1 ADD (TEST_COL NUMBER(30) DEFAULT 36 NOT NULL);
ALTER TABLE MY_TABLE1 MODIFY TEST_COL DEFAULT NULL;
Do you realize that you just changed the DEFAULT value from 36 to NULL, while you intended to make the column nullable:
SQL> alter table tbl ADD (TEST_COL NUMBER(30) DEFAULT 36 NOT NULL);
Table altered.
SQL> ALTER TABLE TBL MODIFY TEST_COL DEFAULT NULL;
Table altered.
SQL> insert into tbl(col1,col2) values(-1,'X');
insert into tbl(col1,col2) values(-1,'X')
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("SCOTT"."TBL"."TEST_COL")
SQL>
And as soon as you do make it nullable, Oracle (unless you are on 12c) will again update each row with the default value of 36:
SQL> ALTER TABLE TBL MODIFY TEST_COL DEFAULT 36;
Table altered.
SQL> set timing on
SQL> ALTER TABLE TBL MODIFY TEST_COL NULL;
Table altered.
Elapsed: 00:01:50.86
SQL>
SY.
Re: Adding column to the table which have the huge data [message #665689 is a reply to message #665688]
Fri, 15 September 2017 05:35
Bill B
Messages: 1971  Registered: December 2004
Senior Member
Unless there is an absolute requirement to store a NULL value, set the default to a value that indicates "unset". For example, if the legal values are positive integers, set the default to -1. You get a very fast column addition, with the ability to find the unset values by checking for -1.
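A minimal sketch of that sentinel approach, with hypothetical table and column names (on 11.2 the NOT NULL clause is what makes this a fast, dictionary-only change, as demonstrated earlier in the thread):

-- -1 acts as a sentinel meaning "not yet set"; legal values are positive.
ALTER TABLE my_table ADD (status_code NUMBER(10) DEFAULT -1 NOT NULL);

-- Rows that were never given a real value are easy to find later:
SELECT COUNT(*) FROM my_table WHERE status_code = -1;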