This has been a big help.  This and the lsof command will put me well on
the way to solving the problem.

Thanks again for your help.

I do have one more question.  Is it possible for lsof to indicate more
open files than /proc/sys/fs/file-max says is possible?

david
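
A rough way to compare the two numbers directly, assuming lsof is on the
PATH.  Note that lsof prints one row per open-file *entry* per process
(memory-mapped files, cwd/txt entries, and descriptors duplicated across
forked processes all included), so its line count can legitimately exceed
file-max even when the kernel's real file-handle table has not:

```shell
# System-wide limit on kernel file handles
cat /proc/sys/fs/file-max
# lsof's count of open-file entries -- usually much larger, since it
# includes mmaps, cwd/txt entries, and fds shared across forks
if command -v lsof >/dev/null 2>&1; then
    lsof 2>/dev/null | wc -l
fi
```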

On Wed, 19 Feb 2003, Sites, Brad wrote:

> Jan wrote:
> > dbrett wrote:
> >> I have a RH 6.2 server, which seems to be unable to keep up with the
> >> load it is under.  I have to keep rebooting it about every other
> >> day.  One of the first clues is an error saying there are too many
> >> files open and it can't do another operation.
> >> 
> >> How do I find out how many files are open and by what programs?  Is
> >> it possible to increase the number of files which can be open?
> >> 
> > lsof may be a good place to start - it lists all open files; it is a
> > LONG list! Perhaps you should run it at intervals (and save the
> > output) to see if there is a single program that runs amok.
> > 
> > /jan
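
Jan's run-it-at-intervals idea can be sketched roughly like this (the
snapshot path and the awk/sort pipeline are just one way to do it; run it
from cron or a loop with sleep, then compare snapshots over time):

```shell
# Snapshot lsof output and summarize open-file entries per command
# name, so a program that is leaking files surfaces at the top.
stamp=$(date +%Y%m%d-%H%M%S)
if command -v lsof >/dev/null 2>&1; then
    lsof 2>/dev/null > /tmp/lsof.$stamp
    # skip the header line, count rows per COMMAND, biggest first
    awk 'NR > 1 {print $1}' /tmp/lsof.$stamp | sort | uniq -c | sort -rn | head
fi
```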
> 
> You may be running out of file descriptors.  Open TCP sockets and things
> like Apache and database servers are prone to opening a large number of
> file descriptors.  The default number of file descriptors available is
> 4096.  This probably needs to be raised in your scenario.  The theoretical
> limit is somewhere around a million file descriptors, but a much lower
> number is more reasonable.  Try doubling the default and seeing if that
> takes care of things.  If not, double that number again and see how it
> works.  Here is the command to do this on the fly:
> Here is the command to do this on the fly:
> 
> echo 8192 > /proc/sys/fs/file-max
> 
> To make this happen each time at boot, edit your /etc/sysctl.conf file and
> add the following line:
> 
> fs.file-max = 8192
> 
> 
> 
> Brad Sites
> 



-- 
redhat-list mailing list
unsubscribe mailto:[EMAIL PROTECTED]?subject=unsubscribe
https://listman.redhat.com/mailman/listinfo/redhat-list
