However, in my experience it is unusual for a too-low limit on the number
of open files to result in a segmentation fault, especially in a well
written program like Apache HTTPD. A well written program will normally
check whether open (or any syscall which returns a file descriptor) failed
and handle the error instead of crashing.
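Roughly the kind of check I mean - a minimal sketch of my own, not Apache's
actual code:

    /* Minimal sketch: check the result of open() and report EMFILE/ENFILE
       (out of file descriptors) instead of blindly using the return value. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>

    int open_or_report(const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd == -1) {
            if (errno == EMFILE || errno == ENFILE)
                fprintf(stderr, "out of file descriptors opening %s: %s\n",
                        path, strerror(errno));
            else
                fprintf(stderr, "open(%s) failed: %s\n", path, strerror(errno));
            return -1;   /* caller must handle the failure, not use -1 as a fd */
        }
        return fd;
    }

A program that ignores the -1 return and keeps using the "descriptor" anyway
is the sort of thing that turns a hit resource limit into a segfault.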
I have set my siege concurrency level a bit lower (20 users) and that seems
to have resolved the segfault issue. It's strange that I hadn't read
anywhere else that a lack of resources could cause that, but there it is. I
guess that running Debian 8, Apache 2.4.10, php-fpm and MariaDB was just a
bit much for the vps.
On Fri, Aug 21, 2015 at 6:14 PM, Daryl King wrote:
> Thanks Ryan. Strangely when running "ulimit -n" it returns 65536 in an ssh
> session, but 1024 in webmin? Which one would be correct?
>
Limits set by the ulimit command (and the setrlimit syscall) are correct if
they are high enough to allow a process to open all the file descriptors it
needs.
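The number that matters is the one in effect for the process doing the
opening (apache2, php-fpm, siege), not what some other shell reports. A
small sketch of my own showing how a process can check its own limit,
nothing Apache-specific:

    /* Sketch: print the open-file limit as this process sees it. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("RLIMIT_NOFILE: soft=%llu hard=%llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
        return 0;
    }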
Thanks Ryan. Strangely when running "ulimit -n" it returns 65536 in an ssh
session, but 1024 in webmin? Which one would be correct?
On Sat, Aug 22, 2015 at 12:52 AM, R T wrote:
>
> Hi Daryl,
>
> Typically when I see a core dump when running siege, it is a resource
> issue. Out of memory, and/or I've reached the ulimit on my machine and
> need to set it higher.
Hi Daryl,
Typically when I see a core dump when running siege, it is a resource
issue. Out of memory, and/or I've reached the ulimit on my machine and need
to set it higher. The limit is typically 1024 (displayed via ulimit -n),
and can be changed via ulimit -n with a larger value. This change isn't
persistent - the setting reverts when the shell session ends.
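For what it's worth, ulimit -n is just the shell front end for the
setrlimit() syscall, so the change has the same scope: it applies to the
calling process and whatever it spawns, nothing more. A rough sketch of
doing the same thing from code (illustration only):

    /* Sketch: raise this process's soft open-file limit up to its hard
       limit - roughly what "ulimit -n" does for a shell session. */
    #include <stdio.h>
    #include <sys/resource.h>

    int raise_nofile_limit(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return -1;
        }
        rl.rlim_cur = rl.rlim_max;  /* soft limit may not exceed the hard limit */
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("setrlimit");
            return -1;
        }
        return 0;
    }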
I am running Apache 2.4.10 with mpm_event on a Debian 8 vps. When I run
Siege on my setup it runs well, except for a Segmentation Fault at the very
end [child pid exit signal Segmentation fault (11)]. I have run GDB on
a core dump of the segfault and it returned this:
[Using host libthread_db lib