Help - Client memory footprint growing, poss resource leak?
I am working on a system whose Oracle database has recently been moved from a 'clustered' instance to a 'normal' (non-clustered) one on a different host and port.
One of the client processes has suddenly started growing in memory use (rate: 80MB in 17 hours!). This process has been working fine for over a year; it has not been recompiled, but its configs have been updated to point at the new host/port for the database.
My first thought is that the new (non-clustered) Oracle db may behave differently with regard to connections, and that this is causing the client process to leak resources (by exercising areas of the client code that had not been exercised before).
I can envisage a scenario like this: perhaps the non-clustered Oracle db closes connections frequently, whereas the previous clustered db let the client keep using a single connection for a long time. If the client is now forced to keep creating new connections, and it has a bug in its connection management (e.g. failing to free memory associated with a connection that the db closed), that would be a plausible explanation.
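To make that concrete, here is a rough sketch of the kind of bug I am imagining, in plain OCI. The OCI calls themselves are real, but the reconnect() helper, the globals and the connect details are purely my illustration, not our actual code:

/* Sketch of a reconnect path that leaks, assuming plain OCI.
 * Build with something like: cc leak.c -lclntsh */
#include <oci.h>
#include <string.h>

static OCIEnv    *envhp;
static OCIError  *errhp;
static OCISvcCtx *svchp;          /* the "current" connection */

static int db_connect(const char *user, const char *pass, const char *db)
{
    return OCILogon(envhp, errhp, &svchp,
                    (const OraText *)user, (ub4)strlen(user),
                    (const OraText *)pass, (ub4)strlen(pass),
                    (const OraText *)db,   (ub4)strlen(db)) == OCI_SUCCESS;
}

/* Called when an ORA-03113/03114 style error says the server dropped us.
 * THE BUG: the old service context (and the session/server handles behind
 * it) is simply abandoned, so every forced reconnect leaks a little
 * client-side memory. */
static int reconnect(const char *user, const char *pass, const char *db)
{
    /* correct code would first release the old connection:
     *     OCILogoff(svchp, errhp); */
    svchp = NULL;
    return db_connect(user, pass, db);
}

int main(void)
{
    OCIEnvCreate(&envhp, OCI_DEFAULT, NULL, NULL, NULL, NULL, 0, NULL);
    OCIHandleAlloc(envhp, (void **)&errhp, OCI_HTYPE_ERROR, 0, NULL);

    db_connect("scott", "tiger", "mydb");
    for (int i = 0; i < 1000; i++)   /* pretend the server keeps closing us */
        reconnect("scott", "tiger", "mydb");  /* each call abandons one connection's handles */
    return 0;
}

If the server really is dropping connections more often now, a bug of this shape would produce exactly the slow, steady growth we are seeing.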
So I'm thinking: how do I ask Oracle to let client connections live longer? Any helpful ideas, folks?
Also, I'd like to know roughly how big an Oracle connection is in the C client library (OCI). Also of interest would be the size of the RogueWave DBTools connection object (I guess I risk being bashed for being off topic here, sigh). That would help me judge whether leaked connection objects could plausibly account for 80MB of growth.
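Failing a documented figure, I was planning to estimate the per-connection footprint empirically with a small harness along these lines. This is only a sketch: it assumes plain OCI and Linux's /proc/self/statm for the RSS reading, and the connect details are made up:

/* Rough estimate of client-side memory per OCI connection: read RSS,
 * open N connections without closing them, read RSS again, divide. */
#include <oci.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static long rss_kb(void)
{
    long size = 0, resident = 0;
    FILE *f = fopen("/proc/self/statm", "r");
    if (f) {
        if (fscanf(f, "%ld %ld", &size, &resident) != 2)
            resident = 0;
        fclose(f);
    }
    return resident * (sysconf(_SC_PAGESIZE) / 1024);
}

int main(void)
{
    enum { N = 50 };
    OCIEnv    *envhp;
    OCIError  *errhp;
    OCISvcCtx *svchp[N];

    OCIEnvCreate(&envhp, OCI_DEFAULT, NULL, NULL, NULL, NULL, 0, NULL);
    OCIHandleAlloc(envhp, (void **)&errhp, OCI_HTYPE_ERROR, 0, NULL);

    long before = rss_kb();
    for (int i = 0; i < N; i++)
        OCILogon(envhp, errhp, &svchp[i],
                 (const OraText *)"scott", 5,
                 (const OraText *)"tiger", 5,
                 (const OraText *)"mydb",  4);
    long after = rss_kb();

    printf("approx %ld KB per connection\n", (after - before) / N);

    for (int i = 0; i < N; i++)   /* clean up properly this time */
        OCILogoff(svchp[i], errhp);
    return 0;
}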
Well, thanks for reading; any useful pointers appreciated.
Thanks, Fazl
Received on Thu Oct 07 2004 - 03:49:19 CDT