I have a dual-processor Xeon server, running Red Hat Linux 9, that I was
trying to stress test. The system has a SCSI-based RAID 5 disk array.
Response times were great for what I was testing until I got 468 instances
of the application under test running. It is a web-based application that
uses a CGI to parse a web form and pass the data to my process over a
socket connection. The application under test runs continually, waiting for
the CGI to hand it data. It uses that data to generate a dynamic HTML page,
which is sent to the user's browser, where the user fills in the form and
submits it to the CGI again. On a second dual-processor Xeon server I run a
browser replacement, one instance per process under test; it auto-fills the
forms and submits them at a controlled pace. The browser system was just
coasting, doing its thing.

When I replaced the CGI with a Perl script running under mod_perl, I was
able to run about 700 instances of the application under test before the
performance problem reappeared.

It seems like some kind of system resource or system table is getting full,
and when I add one more instance of my application a wall gets hit and the
new process goes into a wait state or something like that.
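In case it helps, here is a sketch of the kind of limits I suspect; these
are the standard /proc paths and shell builtins on a Linux system, but
which one (if any) is actually the culprit is just a guess on my part:

```shell
# Candidate limits that could produce a hard wall at a fixed process
# count (standard Linux /proc locations and shell builtins):

cat /proc/sys/fs/file-nr          # system-wide file handles: used, free, max
cat /proc/sys/fs/file-max         # ceiling on system-wide file handles
cat /proc/sys/kernel/threads-max  # system-wide process/thread ceiling
ulimit -n                         # per-process open file descriptor limit
ulimit -u                         # per-user process limit
```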

I do not think it is CPU-related: just before the process that "broke the
camel's back" is launched, each processor is only about 50% busy. Memory
does not seem to be an issue either; CPU and memory usage remain reasonable
after the troubled process starts.

What I was wondering is what system monitoring tools exist that would let
me view various system tables in real time, so I can figure out which
resource needs to be increased.
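The best idea I have come up with so far is just polling a few /proc
counters while the test ramps up; a rough sketch (the counters shown are
my guesses at what to watch, and the script name is made up):

```shell
# Snapshot a few system tables; wrap in watch(1) for a live view,
# e.g.:  watch -n 1 sh snapshot.sh   (snapshot.sh is a hypothetical name)

echo "file handles (used free max): $(cat /proc/sys/fs/file-nr)"
echo "processes: $(ps ax | wc -l)"
# rough TCP socket count (includes one header line)
echo "tcp sockets: $(wc -l < /proc/net/tcp)"
```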

Any words of wisdom on how to figure out what barrier I might be hitting?

Thanks for any ideas!
-- 
Jim Dickenson
mailto:[EMAIL PROTECTED]


-- 
redhat-list mailing list
unsubscribe mailto:[EMAIL PROTECTED]
https://www.redhat.com/mailman/listinfo/redhat-list