Oracle FAQ | Your Portal to the Oracle Knowledge Grid
c.d.o.server -> Best way for updating EVERY row in a large database?
I've got an application, run once every several months, which resets
several columns to zero in a table that contains more than 3,000,000 rows. To
take advantage of a variable commit frequency, I allow the frequency to
be specified as a parameter when I start my program.
What I am doing is opening a cursor which selects the key column of the entire table I need to update, fetching the rows one at a time, then issuing a separate UPDATE statement for each row I fetch from the cursor. I am using the precompiler option which allows me to keep cursors open after doing a commit, so periodically committing updates (my default is every 200) works just fine.
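The shape of that loop can be sketched as follows. This is only an illustration: it uses Python's sqlite3 in place of Oracle and Pro*C, and the table and column names (`stats`, `id`, `hits`) are invented, not from the original program.

```python
import sqlite3

# Invented stand-in for the real 3,000,000-row Oracle table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stats (id INTEGER PRIMARY KEY, hits INTEGER NOT NULL)")
conn.executemany("INSERT INTO stats VALUES (?, ?)",
                 [(i, i * 3) for i in range(1, 1001)])
conn.commit()

COMMIT_EVERY = 200  # the variable commit-frequency parameter

# Select the keys, then issue one UPDATE per fetched row,
# committing every COMMIT_EVERY rows -- the row-at-a-time approach.
keys = [row[0] for row in conn.execute("SELECT id FROM stats")]
for n, key in enumerate(keys, start=1):
    conn.execute("UPDATE stats SET hits = 0 WHERE id = ?", (key,))
    if n % COMMIT_EVERY == 0:
        conn.commit()
conn.commit()  # pick up the final partial batch
```

With one round trip and one UPDATE per row, the per-statement overhead is paid 3,000,000 times, which is where the hours go.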
This program is taking over 9 hours to run, and I am wondering if there is something I can do about that. I've looked into what I saw as the only viable alternative, which is to use a WHERE CURRENT OF positioned update instead of issuing a separate UPDATE call for each fetched row. The problem with this approach is that, according to the Oracle documentation, you cannot fetch from the cursor after you do a commit (you will get ORA-01002, "fetch out of sequence"). I simply cannot update 3,000,000 rows without doing periodic commits.
I feel like I would lose any performance advantage I'd gained by having to reopen the cursor and reposition it. I considered using ROWID and keeping track of which rows I have already processed (using the commit-frequency parameter as the number of rows to process before each commit), but it seems to me that continually reopening that cursor might make the program perform even worse.
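The keep-track-of-where-I-was idea can be sketched as a restartable keyed batch: remember the highest key already reset, update the next batch with one set-based statement, and commit per batch. Again this uses Python's sqlite3 and invented names (`stats`, `id`, `hits`) purely for illustration; in Oracle the bookkeeping column would be ROWID or the key column.

```python
import sqlite3

# Invented stand-in for the real table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stats (id INTEGER PRIMARY KEY, hits INTEGER NOT NULL)")
conn.executemany("INSERT INTO stats VALUES (?, ?)",
                 [(i, i * 3) for i in range(1, 1001)])
conn.commit()

BATCH = 200  # rows per transaction, standing in for the commit frequency

last_id = 0  # highest key already reset; persist this to make the job restartable
while True:
    # Find the upper key bound of the next batch of BATCH rows.
    top = conn.execute(
        "SELECT MAX(id) FROM (SELECT id FROM stats WHERE id > ? "
        "ORDER BY id LIMIT ?)", (last_id, BATCH)).fetchone()[0]
    if top is None:
        break  # nothing left to do
    # One set-based UPDATE per batch instead of one UPDATE per row.
    conn.execute("UPDATE stats SET hits = 0 WHERE id > ? AND id <= ?",
                 (last_id, top))
    conn.commit()
    last_id = top
```

Because each iteration starts a fresh range scan from `last_id`, there is no long-held cursor to invalidate at commit time, and a crashed run can resume from the saved `last_id`.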
Has anyone been faced with this situation and found a good way to handle it?
--
Regards,
Jim
Received on Thu Apr 23 1998 - 08:22:13 CDT