> Can anyone share experiences with running out of open files on Linux?
> I am using a 2.4.26 kernel, and the system wide open file limit is
> rather large. Do I need to set anything other than this? The default
> limit of 1024 is in effect for both cyrus and root.
Off the top of my head, there are four main areas where you run into trouble (only three relevant to Linux):

1. total open fds in the system
2. per-process fd limits (ulimit - typically 1024)
3. the select() call (typically 1024)
4. old stdio implementations (256)

The first two you probably know about, although you may not know about the third and fourth. The select() function usually has a limit of 1024 file descriptors - this is because it uses an implementation-defined bitmap to record interest in, and readiness of, each file descriptor. The FD_SETSIZE constant (defined in <sys/types.h>) tells you the size of the bitmap, and descriptors numbered FD_SETSIZE or higher can't be watched with select() at all.

The fourth will bite you on what I'll rudely call "legacy" unix systems, eg Solaris. I haven't checked versions after Solaris 8, but the fd field in the stdio structure was traditionally an unsigned char value (limiting it to 256 descriptors), and in the bad old days, apps would mess around inside this structure. Presumably because they have customers with grungy old apps, Sun has retained this historical anachronism.

-- 
Andrew McNamara, Senior Developer, Object Craft
http://www.object-craft.com.au/

---
Cyrus Home Page: http://asg.web.cmu.edu/cyrus
Cyrus Wiki/FAQ: http://cyruswiki.andrew.cmu.edu
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
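P.S. For anyone who wants to poke at limits 2 and 3 from a running process, here's a rough sketch using Python's stdlib resource module (the same RLIMIT_NOFILE that ulimit -n manipulates). FD_SETSIZE itself is a C compile-time constant and isn't exposed to Python, so it's only noted in a comment; the poll() remark is the usual workaround, not anything from the original post.

```python
import resource

# Area 2: the per-process descriptor limit. The soft value is what
# `ulimit -n` reports; the hard value is the ceiling a non-privileged
# process may raise its soft limit to.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft fd limit:", soft, "hard fd limit:", hard)

# A process can raise its own soft limit up to the hard limit without
# any special privilege - daemons that need many fds often do this
# at startup rather than relying on the shell's ulimit.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
print("soft fd limit now:", resource.getrlimit(resource.RLIMIT_NOFILE)[0])

# Area 3 is separate: even with a raised rlimit, select() can only
# watch descriptors below the C constant FD_SETSIZE (usually 1024).
# poll() takes an array of descriptors instead of a bitmap, so it has
# no such cap; that's the usual escape hatch.
```

Note that raising the rlimit does nothing for the select() bitmap - a descriptor numbered 2000 is perfectly openable after setrlimit(), but still can't be passed to select().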