how to reduce the response time of the sql query [message #438332]
Fri, 08 January 2010 02:15
dhanamukesh
Messages: 51 Registered: January 2009
Member
The following query's cost is 229, its CPU cost is 6820445, and its response time is around 6 seconds.
UPDATE NOTIFICATION_MSG MSG
   SET LAST_UPDATE_USERID = 'NOTIFICATION_SERVICE',
       LAST_UPDATE_DATE   = SYSDATE,
       BATCH_PRCS_ID      = :1,                          -- '9999'
       MSG_SND_TRY_CNT    = MSG_SND_TRY_CNT + 1
 WHERE NOTF_MSG_STAT_CODE = :2                           -- 'SENT'
   AND (BATCH_PRCS_ID IS NULL OR BATCH_PRCS_ID = :3)
   AND MSG_SND_TRY_CNT < :5                              -- 2
   AND CREATE_DATE <= (SYSDATE - INTERVAL '5' MINUTE)
   AND EXISTS (SELECT NOTF_MSG_ID
                 FROM NOTIFICATION_MSG_PARM PARMS
                WHERE PARMS.NOTF_MSG_ID = MSG.NOTF_MSG_ID)
   AND ROWNUM <= :4;                                     -- 100000
Because of the high cost, I tuned the query as follows:
BEGIN
   FOR c1 IN (SELECT NOTF_MSG_ID
                FROM NOTIFICATION_MSG_PARM PARMS)
   LOOP
      UPDATE NOTIFICATION_MSG MSG
         SET LAST_UPDATE_USERID = 'NOTIFICATION_SERVICE',
             LAST_UPDATE_DATE   = SYSDATE,
             BATCH_PRCS_ID      = :1,                    -- '9999'
             MSG_SND_TRY_CNT    = MSG_SND_TRY_CNT + 1
       WHERE NOTF_MSG_STAT_CODE = :2                     -- 'SENT'
         AND CREATE_DATE <= (SYSDATE - INTERVAL '5' MINUTE)
         AND (BATCH_PRCS_ID IS NULL OR BATCH_PRCS_ID = :3)
         AND MSG_SND_TRY_CNT < :5                        -- 2
         AND MSG.NOTF_MSG_ID = c1.NOTF_MSG_ID
         AND ROWNUM <= :4;                               -- 100000
   END LOOP;
END;
The cost and CPU cost came down to 1 and 2290 respectively, but the response time got very high.
I also tried using a global temporary table to store the subquery results and driving the main query from it, but the response time is still very high.
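For reference, the global temporary table attempt looked roughly like this (a sketch only; the table name NOTF_MSG_IDS_GTT is just an illustration, not the real name):

CREATE GLOBAL TEMPORARY TABLE NOTF_MSG_IDS_GTT (
    NOTF_MSG_ID NUMBER
) ON COMMIT DELETE ROWS;

-- stage the ids from the child table, then drive the update from the staging table
INSERT INTO NOTF_MSG_IDS_GTT (NOTF_MSG_ID)
SELECT DISTINCT NOTF_MSG_ID FROM NOTIFICATION_MSG_PARM;

UPDATE NOTIFICATION_MSG MSG
   SET LAST_UPDATE_USERID = 'NOTIFICATION_SERVICE',
       LAST_UPDATE_DATE   = SYSDATE,
       BATCH_PRCS_ID      = :1,
       MSG_SND_TRY_CNT    = MSG_SND_TRY_CNT + 1
 WHERE NOTF_MSG_STAT_CODE = :2
   AND (BATCH_PRCS_ID IS NULL OR BATCH_PRCS_ID = :3)
   AND MSG_SND_TRY_CNT < :5
   AND CREATE_DATE <= (SYSDATE - INTERVAL '5' MINUTE)
   AND MSG.NOTF_MSG_ID IN (SELECT NOTF_MSG_ID FROM NOTF_MSG_IDS_GTT)
   AND ROWNUM <= :4;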
I have attached both the statistics and the explain plan to this post.
Please guide me to tune this query.
Thanks,
Dhanalakshmi.
Re: how to reduce the response time of the sql query [message #438350 is a reply to message #438332]
Fri, 08 January 2010 04:31
cookiemonster
Messages: 13958 Registered: September 2008 Location: Rainy Manchester
Senior Member
First off, when posting code can you please use code tags - see the OraFAQ forum guide if you're not sure how.
Those updates aren't equivalent. The first updates up to 100000 rows in total. The second updates up to 100000 rows per matching row in NOTIFICATION_MSG_PARM.
So the second update is probably updating a lot more data than the first.
How many rows are in each of the two tables?
Which query is the explain plan for? And next time put explain plans in the message (use code tags).
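For future reference, you can generate a plan that's easy to paste into a post along these lines (a minimal sketch, assuming you're running it from SQL*Plus or a similar client):

EXPLAIN PLAN FOR
UPDATE NOTIFICATION_MSG MSG
   SET MSG_SND_TRY_CNT = MSG_SND_TRY_CNT + 1     -- the full statement you're tuning goes here
 WHERE NOTF_MSG_STAT_CODE = :2;

-- display the plan that was just generated
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);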
And why are you limiting this by rownum?
Re: how to reduce the response time of the sql query [message #438356 is a reply to message #438350]
Fri, 08 January 2010 05:33
dhanamukesh
Messages: 51 Registered: January 2009
Member
The total number of records in the table notification_msg is 2455128, and in the table notification_msg_parm it is 17703990.
The plan is for the original query, i.e. the first query.
And the rownum limit is not fixed; it is a user-input variable. It may vary from 100 up to a maximum of 10000 or more.
Thanks,
Dhanalakshmi.
Re: how to reduce the response time of the sql query [message #438365 is a reply to message #438332]
Fri, 08 January 2010 06:12
cookiemonster
Messages: 13958 Registered: September 2008 Location: Rainy Manchester
Senior Member
Well, looping over a table that contains approximately 18 million rows is always going to take a long time.
What table and columns are the indexes IX2_NOTIFICATION_MSG and IX1_NOTIFICATION_MSG_PARM on?
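(If you don't have the DDL to hand, they can be read from the data dictionary - a minimal sketch, assuming the indexes belong to your own schema:)

SELECT INDEX_NAME, TABLE_NAME, COLUMN_NAME, COLUMN_POSITION
  FROM USER_IND_COLUMNS
 WHERE INDEX_NAME IN ('IX2_NOTIFICATION_MSG', 'IX1_NOTIFICATION_MSG_PARM')
 ORDER BY INDEX_NAME, COLUMN_POSITION;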
As for rownum, why is it there at all?
You're not doing any ordering, so which rows get updated out of the possible ones that match the rest of the where clause is effectively random. I would have thought you'd want to update all rows that match the rest of the where clause.
Rownum is generally only used for top-n queries - where you want to get the first (or last) n rows according to some criteria, but that requires an order by.
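Something along these lines (purely an illustration of the top-n pattern, not a suggestion for your update):

-- order in an inline view first, then keep the first :n rows of the ordered result
SELECT *
  FROM (SELECT NOTF_MSG_ID, CREATE_DATE
          FROM NOTIFICATION_MSG
         ORDER BY CREATE_DATE)
 WHERE ROWNUM <= :n;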
Re: how to reduce the response time of the sql query [message #438639 is a reply to message #438631]
Mon, 11 January 2010 06:00
dhanamukesh
Messages: 51 Registered: January 2009
Member
After creating this index, the statistics went up considerably.
Original query's statistics:
Statistics
----------------------------------------------------------
18 recursive calls
12 db block gets
89 consistent gets
0 physical reads
0 redo size
2243 bytes sent via SQL*Net to client
446 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
26 rows processed
After creating the index:
Statistics
----------------------------------------------------------
3118 recursive calls
24 db block gets
2754 consistent gets
208 physical reads
0 redo size
2186 bytes sent via SQL*Net to client
446 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
38 sorts (memory)
0 sorts (disk)
26 rows processed
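(These figures are the sort of statistics SQL*Plus autotrace reports; a minimal way to capture them, assuming SQL*Plus and the PLUSTRACE role, is something like:

SET AUTOTRACE TRACEONLY STATISTICS
-- run the statement being tested with a literal in place of the bind, note the statistics, then undo the change
UPDATE NOTIFICATION_MSG MSG
   SET MSG_SND_TRY_CNT = MSG_SND_TRY_CNT + 1
 WHERE NOTF_MSG_STAT_CODE = 'SENT';
ROLLBACK;
SET AUTOTRACE OFF
)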
Please look into this.
Thanks,
Dhanalakshmi.
Re: how to reduce the response time of the sql query [message #438652 is a reply to message #438639]
Mon, 11 January 2010 07:31
JRowbottom
Messages: 5933 Registered: June 2006 Location: Sunny North Yorkshire, ho...
Senior Member
Something doesn't add up here - in the attachment to your original post, you've got the stats as this:
Statistics
----------------------------------------------------------
1488 recursive calls
22 db block gets
2370 consistent gets
156 physical reads
0 redo size
3483 bytes sent via SQL*Net to client
457 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
20 sorts (memory)
0 sorts (disk)
40 rows processed
Even allowing for the possibility that those are the stats for the explain plan display, I'm still doubtful that a query that updates 100,000 rows will manage to do so with 89 consistent gets.