open connection memory overhead [message #132083] Wed, 10 August 2005 14:40
mjschwenger
What is the easiest and most reliable way to measure per-connection memory usage?
I have a new application version that now opens 2 connections per process instead of only 1. The first connection does the regular job - no changes there. The second connection pulls one data field (a number) and updates it immediately, along the lines of:
set value_was_selected = value_was_selected + 20001
It does that for 15 different rows in the same table, then stays idle until the next turn, about 10-15 minutes later.
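Spelled out as a complete statement, each turn on the second connection looks roughly like this (the table name and key column are placeholders I made up; only value_was_selected is real):

    UPDATE some_table                                       -- placeholder table name
       SET value_was_selected = value_was_selected + 20001
     WHERE row_key = :k;                                    -- placeholder key column; repeated for 15 rows
    COMMIT;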
There's a huge debate here over the possible overhead of keeping that connection open even when it isn't doing anything.
Is there going to be performance degradation due to multiple open connections? I usually have about 300; now there will be 600... If I have the app close the connections when it doesn't need them, is that better for performance than leaving them open and idle? Then again, there'll be the time penalty of establishing a new connection to Oracle...
How can I monitor this situation to show whether the load stays the same or gets worse?
Thanks a lot, mj
Re: open connection memory overhead [message #132093 is a reply to message #132083] Wed, 10 August 2005 16:21
Art Metzer
This topic was just recently discussed at AskTom ("Best Practices for App Server Keep Alive").
Re: open connection memory overhead [message #132504 is a reply to message #132083] Sat, 13 August 2005 10:01
skempins
The AskTom topic seemed to deal with a different situation. I'm guessing the second connection actually does some work beyond making sure the database is still there. We have an application that does the same thing (one connection does the bulk of the work; the other gets new keys and commits without affecting the first connection's transaction).

My two cents:

For dedicated server connections, there is an o/s process and a chunk of memory dedicated to each connection. As you raise the PROCESSES parameter to support more dedicated connections, you need more shared memory. At some point you will hit the limit on how big a shared memory segment your o/s and RAM allow. You should be able to show fairly easily how many dedicated connections the current hardware can support.
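To put numbers on that, a quick way to see what each session currently holds is to query the standard session memory statistics (a sketch; 'session pga memory' and 'session uga memory' are the stock v$statname entries):

    SELECT s.sid,
           s.username,
           n.name,
           ROUND(st.value / 1024) AS kb       -- current allocation per session
      FROM v$session s, v$sesstat st, v$statname n
     WHERE st.sid = s.sid
       AND st.statistic# = n.statistic#
       AND n.name IN ('session pga memory', 'session uga memory')
     ORDER BY st.value DESC;

Run it against the old single-connection build and again against the new two-connection build and compare the totals; that shows directly what the extra 300 sessions cost in memory.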

The next step is to introduce shared server connections (aka MTS). There is almost no limit to the number of connections shared server can support, but they will be a little slower. If you can configure the connection strings per application, use a dedicated connection for the first one and a shared connection for the second, less-used one. If you cannot configure the connections individually, you may have to use shared server connections for everything.
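For example, the split could look something like this in tnsnames.ora (host, port, and service name below are made up; SERVER=DEDICATED/SHARED is the Oracle Net parameter that selects the connection type):

    APP_MAIN =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = orcl)(SERVER = DEDICATED)))

    APP_AUX =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = orcl)(SERVER = SHARED)))

You can confirm what each session ended up with by selecting the SERVER column from v$session: dedicated sessions show DEDICATED, while shared server sessions show NONE when idle and SHARED while a shared server process is actively working on them.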

Hope this helps.

[Updated on: Sat, 13 August 2005 10:04]

