Does Increasing Field Length Decrease Perf? [message #65914]
Mon, 31 January 2005 09:53
Steve F
Messages: 4 Registered: January 2005
Junior Member
If I do an ALTER TABLE and increase a field's length, does that mean that a portion of the data held in that field will now be written to a 'new' separate area of the disk, thus degrading performance when querying on that field?
We have a pretty tenuous performance situation here at work, so I need to be careful about any changes to our db.
Thank you, -Steve
Re: Does Increasing Field Length Decrease Perf? [message #109308 is a reply to message #65915]
Wed, 23 February 2005 12:47
Mack Sundar
Messages: 20 Registered: July 2002
Junior Member
When the size of a particular column value within a row is increased, the database will most likely store that row in a different logical data block (assuming there is no more free space left in the original block). This could be in a different physical disk location; Oracle calls this row migration, or row chaining when the row no longer fits in any single block.
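For context, the change the original poster describes is by itself only a data-dictionary update; it is the later updates that store longer values which can overflow a block. A minimal sketch, with hypothetical table and column names:

```sql
-- Widening a VARCHAR2 column changes only the data dictionary;
-- no existing row data moves at this point. It is subsequent
-- UPDATEs storing longer values that can overflow a block and
-- cause rows to be migrated elsewhere.
ALTER TABLE emp MODIFY (ename VARCHAR2(100));
```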
A couple of ways to resolve this:
1. CTAS: CREATE TABLE <new_table_name> AS (SELECT * FROM <old_table>). Read up on CTAS.
2. Export/Import the table
3. ALTER TABLE ... MOVE (rebuilds the table into fresh blocks)
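Option 1 above might be sketched like this (table names are hypothetical; note that indexes, constraints, grants, and triggers are not carried over by CTAS and must be recreated by hand):

```sql
-- Rebuild the table into fresh, contiguous blocks via CTAS.
CREATE TABLE emp_new AS SELECT * FROM emp;

-- After verifying the row counts match, swap the tables:
DROP TABLE emp;
RENAME emp_new TO emp;

-- Recreate indexes, constraints, grants, and triggers here.
```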
Re: Does Increasing Field Length Decrease Perf? [message #109902 is a reply to message #109308]
Tue, 01 March 2005 15:38
andrew again
Messages: 2577 Registered: March 2000
Senior Member
Simply analyze your tables (in a non-production environment) and check the chained-row count. If updates are causing a high (more than a few percent) number of chained rows, you may need to increase PCTFREE for the table.
analyze table abc compute statistics;  -- or: estimate statistics
select table_name, pct_free, pct_used, num_rows, chain_cnt, last_analyzed
from user_tables where table_name = 'ABC';
As far as I know, DBMS_STATS doesn't gather chained-row counts, but give it a try; ANALYZE TABLE is an oldish command.
begin
  dbms_stats.gather_table_stats('SCOTT', 'ABC');
end;
/
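If you want to see exactly which rows are chained or migrated, not just how many, ANALYZE can list them into a holding table. This assumes the CHAINED_ROWS table already exists; Oracle ships a script, utlchain.sql, that creates it:

```sql
-- List the chained/migrated rows of table ABC into CHAINED_ROWS
-- (the CHAINED_ROWS table is normally created by utlchain.sql).
ANALYZE TABLE abc LIST CHAINED ROWS INTO chained_rows;

SELECT owner_name, table_name, head_rowid
FROM chained_rows
WHERE table_name = 'ABC';
```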