Re: Memory management in Oracle 11g and limit of open file descriptors

From: Andre van Winssen <dreveewee_at_gmail.com>
Date: Fri, 13 Nov 2009 09:39:47 +0100
Message-ID: <9b46ac490911130039y4c240c1evaacc3c64cc034784_at_mail.gmail.com>



Hi Michael,

we had "too many files open" problems with oracle 11gR1 RAC before. I found three main factors contributing to having too many open file descriptors by the oracle processes.

  • using the ilo package for code instrumentation in a packaged procedure that was called many times per second during deliveries caused many open file descriptors on the oraus.msb file
  • Bug 7225720: ASM DOES NOT CLOSE OPEN DESCRIPTORS EVEN AFTER APPLYING THE PATCH 4693355, although this was only a suspicion of Oracle Support
  • when audit_trail is set to xml, Oracle writes all audit records to XML-format OS files. I noticed thousands of these files appearing in audit_file_dest
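
Checks of this kind help to show where the descriptors actually go; the oracle
OS user, the ORCL SID and the audit directory path below are only placeholders,
adjust them for your environment:

  # count open descriptors per file name for all processes of the oracle user
  lsof -u oracle | awk '{print $NF}' | sort | uniq -c | sort -rn | head -20

  # descriptors held by one background process (assumes a single pmon match
  # for the placeholder SID ORCL)
  lsof -p $(pgrep -f ora_pmon_ORCL) | wc -l

  # number of XML audit files piling up in audit_file_dest
  ls /u01/app/oracle/admin/ORCL/adump | wc -l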

We moved to 11gR2 and stopped using the ilo package (replacing it with something simpler but still fit for purpose), and the "too many open files" issue went away.

We had the "too many open files" issue both with and without memory_target set to 12GB; it was not the main contributor.

Regards,
Andre

2009/11/12 Michael Elkin <melkin4u_at_gmail.com>

> Dear list members,
>
> Recently I encountered a problem with system resources on my Linux (64-bit)
> machine running Oracle 11g. I started 2 instances on the same server with 1GB
> of MEMORY_TARGET each.
> We ran a stress test that opened 200-300 application connections to the
> database and suddenly got many "too many open files" errors.
>
> As you know, Oracle 11g uses a different mechanism for memory management,
> implementing shared memory on the so-called tmpfs file system mounted at
> /dev/shm. In previous versions the shared segment could easily be observed
> with the ipcs -m command, but in Oracle 11g the situation is different: many
> small files of 4MB (for MEMORY_TARGET under 1GB; 16MB for MEMORY_TARGET
> above 1GB) are created under /dev/shm.
> So I tried to understand how all this is related to the "too many open files"
> error.
> I examined the lsof output and found that ~55000 files were open.
> After that I did a small experiment:
> 1. shutdown the database; /dev/shm is empty
> 2. startup the database:
> found 250 files (makes sense: 1GB of memory_target / 4MB ≈ 250 files);
> lsof shows 5000 open files without a single external session, which means
> that all the files were opened by Oracle background processes like pmon,
> dbwr, lgwr ... I checked this one more time and indeed found that by default
> my instance starts with 20 background processes.
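
A quick way to reproduce those counts on a running instance; "oracle" is the
OS user the instance runs as and ORCL is a placeholder SID:

  # number of shared memory granule files Oracle created under /dev/shm
  ls /dev/shm | wc -l

  # descriptors pointing at /dev/shm held by one background process,
  # e.g. the log writer (assumes a single lgwr match for SID ORCL)
  lsof -p $(pgrep -f ora_lgwr_ORCL) | grep -c /dev/shm

  # total descriptors pointing at /dev/shm across all oracle-owned processes
  lsof -u oracle | grep -c /dev/shm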
>
> The most interesting point is that each Oracle process opens every file under
> /dev/shm that has been created by Oracle: 20 processes * 250 files = 5000 -
> this is how I got 5000 open files without even opening a single connection
> to Oracle.
> 3. After that I ran a simple shell script that opens 200 sessions without
> closing them. The number of open files immediately jumped to 55000.
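
For anyone who wants to repeat that last step, the script can be as simple as
the sketch below; the scott/tiger@ORCL connect string is made up, and the
sessions are left hanging on purpose, so kill them afterwards:

  #!/bin/bash
  # open N sqlplus sessions and keep them idle without disconnecting
  N=200
  for i in $(seq 1 $N); do
    # sqlplus connects first and then blocks reading commands from the pipe,
    # which keeps the session (and its server process) alive for an hour
    ( sleep 3600 | sqlplus -S scott/tiger@ORCL >/dev/null 2>&1 ) &
  done
  wait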
>
> From my first tests I can see that the number of files opened by Oracle is
> directly related to the MEMORY_TARGET size of each instance multiplied by the
> number of Oracle processes. Taking into account that every OS has its own
> limit on the maximum number of open files, the new memory management approach
> looks a little problematic to me.
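
That relationship also lets you estimate up front how many descriptors an
instance will need. A back-of-the-envelope check, using the numbers from the
test above:

  # estimated /dev/shm descriptors =
  #   (MEMORY_TARGET / granule size) * (background + foreground processes)
  MEMORY_TARGET_MB=1024
  GRANULE_MB=4
  PROCESSES=220          # 20 background + 200 dedicated server sessions
  echo $(( MEMORY_TARGET_MB / GRANULE_MB * PROCESSES ))
  # prints 56320, in the same ballpark as the ~55000 reported by lsof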
>
> I would like to know if someone has experienced the same and how you tried to
> solve this problem.
> Is there any option to increase the basic file size under /dev/shm, for
> example?
>
> Thank you.
> --
> Best Regards
> Michael Elkin
>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Fri Nov 13 2009 - 02:39:47 CST
