Oracle FAQ | Your Portal to the Oracle Knowledge Grid
Re: Tuning for High Volume of Updates of Temporary info into a Small Table?
Vivek wrote:
Our Banking Product has an Application Login Table into which session connect information is INSERTED on user login.
Approximately 10,000 concurrent users log in, so the table contains about 10,000 rows.
Information created by one session may be used by another database session. On user logout, the corresponding session record is DELETED.
An application load of 300 UPDATEs per second hits the table's rows at random (NOT every row), depending on each application user's transaction.
NOTE - each UPDATE touches a single row, located via the unique index.
What options or ideas can be considered to make the updates faster and less CPU-consuming?
Some ideas checked already:
Global Temporary Table (GTT) can NOT be used
Benchmarking with asynchronous COMMIT gave NO benefit
Truncate Table periodically to reduce HWM
Vivek
Have you considered retaining the rows at user logout (just mark them as deleted/logged out)? That would reduce space management overheads, completely avoid index entry addition/deletion, and as a result would probably cut down on contention for index and data blocks. Of course, it also means all your relevant application SQL has to filter out the rows that would currently be deleted. On login, update the row back to active. Occasionally you'd insert a new row for a new user - that could be done as the user account is authorised/activated, rather than on the user's first login; the user's row is removed only when the account is de-activated.
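A minimal sketch of this approach, assuming hypothetical table and column names (APP_LOGIN_SESSION, STATUS etc. are illustrations, not from the original application):

```sql
-- Hypothetical table: one pre-created row per authorised user.
-- A STATUS column replaces the INSERT-on-login / DELETE-on-logout cycle.

-- On login: re-activate the user's existing row
UPDATE app_login_session
   SET status       = 'ACTIVE',
       login_time   = SYSDATE,
       session_info = :session_info
 WHERE user_id = :user_id;

-- On logout: mark the row instead of deleting it
UPDATE app_login_session
   SET status = 'INACTIVE'
 WHERE user_id = :user_id;

-- Application SQL must now filter out logged-out rows
SELECT session_info
  FROM app_login_session
 WHERE user_id = :user_id
   AND status  = 'ACTIVE';
```

Because the row is never deleted and re-inserted, the unique index on the user id stays static - that is where the index-maintenance saving comes from.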
Later, you could also consider the pros and cons of:
Using an index-organised table or a single-table hash cluster to reduce the LIOs needed to reach the row
Reducing the number of rows in each block (increase PCTFREE) if there is any block contention or row chaining
Investigating any other kind of waiting/contention (e.g. ITL waits, which can become significant when many small rows fit in each block)
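The three ideas above might be combined in DDL along these lines (table name, columns, and the PCTFREE/INITRANS values are illustrative assumptions, to be validated against the actual workload):

```sql
CREATE TABLE app_login_session (
  user_id       NUMBER        NOT NULL,
  status        VARCHAR2(10)  NOT NULL,
  login_time    DATE,
  session_info  VARCHAR2(200),
  CONSTRAINT app_login_session_pk PRIMARY KEY (user_id)
)
ORGANIZATION INDEX   -- IOT: the PK lookup lands on the row itself, saving a LIO
PCTFREE  40          -- fewer rows per block => less block contention
INITRANS 16;         -- pre-allocate ITL slots for concurrent updaters
```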
10,000 rows and 300 updates/sec means each record is touched, on average, about every 33 seconds (10,000 / 300). Two touches per session (insert/delete)? Or are there other updates during a session? In a similar example, we spent a long time fiddling with storage parameters, IOTs etc. - but as is often the case, reducing the need for such frequent DML on a small table was much more effective (and easier).
As a completely different approach, you could use Oracle's built-in session tracking: DBMS_APPLICATION_INFO lets you decorate V$SESSION (or GV$SESSION if you're using RAC) with whatever additional details you need. Niall Litchfield referenced Gerry Miller's presentation at http://www.millerandbowman.com/cpapers.html . As you can see from the paper, the overheads are very low.
Limitations are:
You must be able to encode the information you need in MODULE, ACTION and CLIENT_INFO (48, 32 and 64 bytes respectively on 9iR2).
If you are using a session pool, it may be difficult/impossible to map (serially shared) Oracle sessions to application sessions
Your application may already be using DBMS_APPLICATION_INFO as an instrumentation aid
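For illustration, the DBMS_APPLICATION_INFO route looks something like this (the module, action, and client-info strings are made-up examples):

```sql
-- In the application session, at login / start of work:
BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(
    module_name => 'BANKING',
    action_name => 'LOGIN');
  DBMS_APPLICATION_INFO.SET_CLIENT_INFO('user=SMITH branch=042');
END;
/

-- From a monitoring session, the details appear in V$SESSION:
SELECT sid, module, action, client_info
  FROM v$session
 WHERE module = 'BANKING';
```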
HTH
Regards Nigel
--
http://www.freelists.org/webpage/oracle-l

Received on Fri Feb 23 2007 - 07:43:16 CST