
How to best reset a column to null on very large tables.

From: peter <p_msantos_at_yahoo.com>
Date: 18 Mar 2005 06:37:18 -0800
Message-ID: <1111156638.299464.317860@f14g2000cwb.googlegroups.com>


looking for some feedback...

Problem: Each of our customers has a demographic table in our database. Each table contains a userid followed by as many as 100 demographic fields, and depending on the customer the table can range from 1 million records to 30 million. We allow our customers, via a web interface, to "reset" any column or combination of demographic columns. Basically we do "update TABLE_NAME set column_name=null where column_name is not null", and we typically add a rownum clause to do the update a piece at a time and limit the large amount of undo generated.

In many cases a customer is updating only a very small percentage of the non-null values in a very large table (e.g. 10K records out of 20 million), but in other cases it could be 15 million records out of 20 million.

I'm looking for a less resource-intensive way of doing this. These are not one-time batch jobs, but they don't happen that often either: typically a customer will reset a few fields 4-5 times per week. This environment is neither OLTP nor DSS; it's what I've seen termed
"operational data". The system is about 70% read and 30% write.

Restriction: I can't use NOLOGGING operations because of my standby database.

Here are the current options I'm considering:

  1. Since the customers just want the columns nulled out, I was thinking: why not just drop/recreate the column? The overhead here is still significant. (See the first sketch after this list.)
  2. I could do something similar to the above, marking the column unused and then using the CHECKPOINT option of the command to control undo generation (similar to using rownum): SQL> ALTER TABLE <TABLE_NAME> DROP UNUSED COLUMNS CHECKPOINT 1000
  3. Use a regular update statement with a dedicated, larger rollback segment, and do it as fast as possible using parallelization.
  4. I've thought of recreating the tables with CTAS in parallel and then rebuilding the indexes (also sketched below), but I just don't know how effective that will be for tables that are so wide (many demographic fields), and it seems a lot of work when only 10K, 20K or 50K records need to have a column nulled out in tables that have millions of records.
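
To make options 1 and 2 concrete, roughly this (same made-up names, and assuming the column is a VARCHAR2(30)):

  -- Option 1: drop and re-add the column. The re-added column ends up
  -- last in the column list and any index on it is lost.
  SQL> ALTER TABLE cust_demo DROP COLUMN age_band;
  SQL> ALTER TABLE cust_demo ADD (age_band VARCHAR2(30));

  -- Option 2: mark the column unused (a quick dictionary-only change),
  -- then drop it with periodic checkpoints to limit undo.
  SQL> ALTER TABLE cust_demo SET UNUSED (age_band);
  SQL> ALTER TABLE cust_demo DROP UNUSED COLUMNS CHECKPOINT 1000;

One caveat on option 2: if the drop is interrupted after a checkpoint, the table stays in a partially-dropped state until you finish it with ALTER TABLE cust_demo DROP COLUMNS CONTINUE.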
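And a rough sketch of the CTAS route in option 4 (again hypothetical names; the CAST gives the rebuilt column a type so it isn't zero-length, and everything stays LOGGING for the standby):

  SQL> CREATE TABLE cust_demo_new
       PARALLEL (DEGREE 4)
       AS
       SELECT userid,
              CAST(NULL AS VARCHAR2(30)) AS age_band,
              demo2, demo3   -- ...and the rest of the columns as-is
         FROM cust_demo;
  -- then re-create indexes, constraints and grants, and swap names:
  SQL> DROP TABLE cust_demo;
  SQL> RENAME cust_demo_new TO cust_demo;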

Let me mention that we are currently on 8.1.6/Solaris 8 and will soon be upgrading to 10g/Solaris 9.

Any feedback would be greatly appreciated. Thanks,
peter

Received on Fri Mar 18 2005 - 08:37:18 CST
