taking too much time to update [message #349755]
Tue, 23 September 2008 00:52
tannad
Messages: 43 Registered: January 2008 Location: mumbai
Hi.
I am updating a table that contains 26 GB of data.
My query is:
UPDATE invoice_data
SET supplierid = RTRIM(supplierid)
But it takes too much time; it has been running for more than a day now.
I checked the locks and it shows the following output:
LOCK TYPE     MODE
TRANSACTION   Exclusive
DML           Row-X (SX)
How can I tune the query and improve the performance?
Please help.
Re: taking too much time to update [message #349788 is a reply to message #349755]
Tue, 23 September 2008 02:37
JRowbottom
Messages: 5933 Registered: June 2006 Location: Sunny North Yorkshire, ho...
You can't tune the query itself - it has to read every row of the table, and that's it.
You could fire off a series of DBMS_JOB processes (20, say), each of which would process a range of id values.
Assuming that you've got the datafiles for this table spread over several disks, that should give you some improvement in speed, but it will slow down everyone else using the system.
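A sketch of that approach (the id column name, key bounds, chunk size, and job count are all assumptions - adjust them to the real table):

```sql
-- Sketch only: splits the update into 20 hypothetical id ranges,
-- one background DBMS_JOB per range. Assumes a numeric key "id"
-- running from 1 to 20,000,000 on invoice_data.
DECLARE
  l_job    BINARY_INTEGER;
  l_chunk  NUMBER := 1000000;   -- rows per job (assumed)
BEGIN
  FOR i IN 0 .. 19 LOOP
    DBMS_JOB.SUBMIT(
      job  => l_job,
      what => 'UPDATE invoice_data ' ||
              'SET supplierid = RTRIM(supplierid) ' ||
              'WHERE id BETWEEN ' || (i * l_chunk + 1) ||
              ' AND ' || ((i + 1) * l_chunk) || '; COMMIT;'
    );
  END LOOP;
  COMMIT;   -- the submitted jobs only start running after the commit
END;
/
```

Each job holds row locks only on its own range, but as noted, twenty concurrent updates will compete with everyone else for I/O.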
Have you had a look at v$session_longops to see how much longer the update has to run?
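Something along these lines shows the progress (standard v$session_longops columns; filter further by SID or username if the instance is busy):

```sql
-- Rough progress check for a long-running operation.
-- TIME_REMAINING is Oracle's own estimate, in seconds.
SELECT sid, message,
       sofar, totalwork,
       ROUND(sofar / totalwork * 100, 1) AS pct_done,
       time_remaining
FROM   v$session_longops
WHERE  totalwork > 0
AND    sofar < totalwork;   -- still running
```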
Re: taking too much time to update [message #350045 is a reply to message #350038]
Tue, 23 September 2008 22:08
rleishman
Messages: 3728 Registered: October 2005 Location: Melbourne, Australia
If you are updating EVERY row, then it will probably be 10 to 100 times faster to just rebuild the table using CREATE TABLE new AS SELECT ... FROM old
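For this particular update, the rebuild might look like the following sketch - the RTRIM is applied while copying, so no separate UPDATE is needed (the extra column names are hypothetical stand-ins for the table's real columns):

```sql
-- Rebuild instead of update: the trimmed value is written directly.
-- "other_col1, other_col2" stand in for invoice_data's real columns.
CREATE TABLE invoice_data_new
NOLOGGING
AS
SELECT RTRIM(supplierid) AS supplierid,
       other_col1,
       other_col2
FROM   invoice_data;
```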
Ross Leishman
Re: taking too much time to update [message #350716 is a reply to message #349755]
Fri, 26 September 2008 00:50
Kevin Meade
Messages: 2103 Registered: December 1999 Location: Connecticut USA
Ross is correct. The problem with large updates like this is that they take a long time to do, period. All that disk movement is hard on any database.
If you are updating more than 10% of the rows in the table, then usually the best way to get this done in a reasonable time frame is to create a new table. Forget about all the PARALLEL crap. If you feel you must have parallel work happening, you can add it to this create statement with the PARALLEL clause, but you had better have the CPU available and the storage specifics set up right.
Use something like this:
create table newtable
nologging
as
select ...
from oldtable
/
create the indexes (use parallel)
set up the proper grants
you might need to set LOGGING back on for the new table and its indexes - I don't remember
rename the old table
rename the new table
[do a backup that night]
drop the old table
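The index/rename/grant steps above might be sketched in SQL like this (index name, indexed column, grantee, and parallel degree are all hypothetical):

```sql
-- Build the index in parallel without redo, then swap by renaming.
CREATE INDEX newtable_i1 ON newtable (supplierid)
  NOLOGGING PARALLEL 4;

GRANT SELECT, INSERT, UPDATE, DELETE ON newtable TO app_user;

-- Put the segments back in LOGGING mode before the swap.
ALTER TABLE newtable LOGGING;
ALTER INDEX newtable_i1 LOGGING;

RENAME oldtable TO oldtable_bak;
RENAME newtable TO oldtable;
-- [take the backup], then:
DROP TABLE oldtable_bak;
```

Note that NOLOGGING work is not recoverable from the redo stream, which is why the backup belongs in the sequence.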
You can easily do a million row inserts per minute this way. You will see your job go from days to hours.
You will eventually use this create-table method. You just need to suffer enough first before you accept it.
Good luck, Kevin