how to tune 100s of transactions over single table [message #567159]
Mon, 24 September 2012 13:29
kamilp
Messages: 10 Registered: August 2012
Junior Member
Hi,
I am trying to find a way to tune and optimize server performance in the following situation. There are hundreds of sessions inserting records into one table. The sessions are communication threads in a Java application; each thread receives messages that are to be stored in the table. Each message must be committed, and only then is an ACK sent to the remote client. Two problems arise, of course: a lot of ITL contention on the table, and lots of very small transactions. I can adjust the Java application, but I can't do much about the design. I was thinking about some kind of "caching" - if the messages were stored in memory and bulk-inserted into the database by a single thread, the performance would be much higher. However, there would be a possible loss of data - a message could be lost from the memory cache after the client has already received the ACK.
So I wonder if you have any advice on this. I would appreciate any help.
Regards.
Re: how to tune 100s of transactions over single table [message #567167 is a reply to message #567159]
Mon, 24 September 2012 15:45
kamilp
Messages: 10 Registered: August 2012
Junior Member
> Oracle version? with 4 decimals.
11.2.0.1, but I think this issue is the same in any 10g or 11g version
> Does table have Primary Key (PK)?
yes
> does PK contain SEQUENCE?
no, the PK is a NUMBER(38) generated by the Java application logic
The table is:
ID          NOT NULL  NUMBER(38)
PHEADER               VARCHAR2(64)
MSG                   LONG RAW
TSTMP                 NUMBER(38)
METADATAID            NUMBER(38)
MSG contains short texts, approximately 100-200 bytes long on average.
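For reference, a minimal DDL sketch of the table as described above (the table name MESSAGES and the constraint name are placeholders I made up; storage clauses are omitted):

CREATE TABLE messages (
    id         NUMBER(38) NOT NULL,
    pheader    VARCHAR2(64),
    msg        LONG RAW,          -- short texts, ~100-200 bytes on average
    tstmp      NUMBER(38),
    metadataid NUMBER(38),
    CONSTRAINT messages_pk PRIMARY KEY (id)
);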
[Updated on: Mon, 24 September 2012 15:46]
Re: how to tune 100s of transactions over single table [message #567214 is a reply to message #567213]
Tue, 25 September 2012 05:53
rleishman
Messages: 3728 Registered: October 2005 Location: Melbourne, Australia
Senior Member
Best way to reduce contention on hot blocks is to get different sessions to write to different blocks.
An index on a current date or an incrementing number will have all new rows trying to write to the same physical index leaf. You can reduce this by converting the index to a REVERSE KEY index. Note that you can only scan a reverse key index with = and IN, so it is no good for date range queries. Each transaction that writes a different index value will write to a different block, so you fix your hot block problem, but you may create a cache-miss problem.
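A minimal sketch of what that conversion could look like, assuming the hot index is the PK index on ID (the index names here are placeholders, not taken from the original post):

-- Rebuild an existing hot index as a reverse key index
ALTER INDEX messages_pk REBUILD REVERSE;

-- Or create a non-PK index as reverse key from the start
CREATE INDEX messages_tstmp_ix ON messages (tstmp) REVERSE;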
It's a bit more difficult with a table. In a full table, new rows want to go at the end. Some alternatives that come to mind:
- Hash partition the table on the PK. Different PK values will write to different partitions, and therefore different blocks. Keep your partition count a power of 2. If you have hundreds of concurrent transactions, 128 hash partitions might be suitable (a sketch follows this list).
- Create the table in an Index Cluster. Make the Cluster Index the PK column with a REVERSE KEY. I have never tried this, and would only consider it after you had rejected the 1st idea.
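A sketch of the hash partitioning idea, reusing the column names from the earlier post (the table and constraint names are placeholders); on an existing table this would normally be done with DBMS_REDEFINITION or by recreating and reloading the table:

-- Inserts for different ID values land in different partitions,
-- and therefore in different blocks; the PK index is local so its
-- blocks are spread out as well.
CREATE TABLE messages_hash (
    id         NUMBER(38) NOT NULL,
    pheader    VARCHAR2(64),
    msg        LONG RAW,
    tstmp      NUMBER(38),
    metadataid NUMBER(38),
    CONSTRAINT messages_hash_pk PRIMARY KEY (id) USING INDEX LOCAL
)
PARTITION BY HASH (id)
PARTITIONS 128;   -- keep the partition count a power of 2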
Ross Leishman
Re: how to tune 100s of transactions over single table [message #567506 is a reply to message #567245]
Tue, 02 October 2012 17:24
alan.kendall@nfl.com
Messages: 163 Registered: June 2012 Location: Culver City, California
Senior Member
Kamil,
Here are a few ideas that I have implemented to reduce "enq: TX - row lock contention" on inserts.
1) Delete as many indexes as possible from the table you are inserting into.
2) Disable all triggers on this table.
3) Disable foreign keys from this table to other tables.
4) Do the inserts on one RAC node to reduce contention.
5) Look at your event waits to make sure you are not waiting on something else like "latch free" or "log buffer".
6) In 11g, turn the result cache off on the table you are inserting into: "alter table scott.emp result_cache (mode manual);"
7) Consider inserting into a table without any indexes, then on one node insert the rows 10 at a time, with a commit, into the main table with a cursor. Construct your delete from the smaller table based on the row already being committed in the main table, to avoid loss of data (see the PL/SQL sketch at the end of this post).
8) Consider making your apps servers each collect 20 inserts: the first apps server inserts them with a commit at 5 seconds after the minute, the next apps server collects its 20 inserts and commits at 10 seconds after the minute, and so on. If any apps server goes over 20 inserts, it wakes up and inserts with a commit.
9) Consider making the table a hash partition.
10) Cache the table and its indexes in the keep buffer pool.
11) Look at what is hogging memory, because these other objects will push the table's blocks out of the buffer cache (LRU) and cause inserts to lock more.
SELECT o.object_type, i.instance_name db,
       COUNT(*)*8192/1024/1024 meg_in_memory,
       o.owner||'.'||o.OBJECT_NAME Object_in_Memory
FROM   DBA_OBJECTS o, gV$BH bh, gv$instance i
WHERE  o.DATA_OBJECT_ID = bh.OBJD
AND    bh.status <> 'free'
AND    bh.inst_id = i.inst_id
GROUP BY o.owner||'.'||o.OBJECT_NAME, o.object_type, i.instance_name
HAVING COUNT(*) > 1024*100
ORDER BY COUNT(*);
OBJECT_TYPE DB       MEG_IN_MEMORY OBJECT_IN_MEMORY
----------- -------- ------------- -----------------------------------------
TABLE       NWEBP2             854 ALFC.NFL_ABSTRACT_CONTENT
TABLE       NWEBP3             856 ALFC.NFL_ABSTRACT_CONTENT
TABLE       NWEBP4             858 ALFC.NFL_ABSTRACT_CONTENT
TABLE       NWEBP4            1416 CONTENT.NFL_DOCBASE_PROPERTIES1_S
TABLE       NWEBP2            1435 CONTENT.NFL_DOCBASE_PROPERTIES1_S
TABLE       NWEBP1            1533 ALFC.NFL_ABSTRACT_CONTENT
TABLE       NWEBP1            1536 CONTENT.NFL_DOCBASE_PROPERTIES1_S
TABLE       NWEBP1            2813 PROFILE.REGISTRATIONS
TABLE       NWEBP4            2813 PROFILE.REGISTRATIONS
TABLE       NWEBP3            3184 PROFILE.REGISTRATIONS
TABLE       NWEBP2            3632 PROFILE.REGISTRATIONS
Many times you can purge a table that is hogging memory, or index it better, to relieve the impact on your more important tables.
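A rough PL/SQL sketch of idea 7 above, assuming a no-index staging table MESSAGES_STAGE and a main table MESSAGES (all object names are placeholders). A cursor loop is used rather than INSERT ... SELECT because MSG is a LONG RAW column; the block would be run periodically by a single session:

DECLARE
    CURSOR c_stage IS
        SELECT id, pheader, msg, tstmp, metadataid
        FROM   messages_stage;
    v_count PLS_INTEGER := 0;
BEGIN
    FOR r IN c_stage LOOP
        INSERT INTO messages (id, pheader, msg, tstmp, metadataid)
        VALUES (r.id, r.pheader, r.msg, r.tstmp, r.metadataid);

        -- The staging row is deleted in the same transaction as the insert,
        -- so it only disappears once the row is committed in the main table.
        DELETE FROM messages_stage WHERE id = r.id;

        v_count := v_count + 1;
        IF MOD(v_count, 10) = 0 THEN
            COMMIT;   -- commit every 10 rows, as suggested above
        END IF;
    END LOOP;
    COMMIT;
END;
/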