I have a command-line PHP script that opens a nonblocking socket to
a server.  Inside the message loop, where it polls the socket to
determine whether there is any data to read or write, I've been running
into a problem where the script takes as much CPU time as it can get
(80%, 90%, sometimes 99%)...  Normally, calling socket_select() with a
timeout should let it poll the socket without burning all the CPU time,
but that isn't working here.  I've tried timeouts of 15ms, 100ms, 200ms,
500ms, and 1 second, and it still uses 100% CPU.  Even adding sleep()
and usleep() calls doesn't relieve the load the script causes.  A
simplified version of the loop is below.

For now, I've worked around the problem by not putting the socket in
nonblocking mode (so it blocks on reads), but the reason I need
nonblocking mode is that I'm adding functionality that requires the
script to respond to events *between* reads from the socket.  Can
anyone explain why socket_select() and the other timeout calls would
still leave it at 100% CPU?  If it makes a difference, the script runs
on Gentoo Linux and includes a pcntl_fork() call to fork into the
background (though that shouldn't make a difference).
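Here's roughly what the loop looks like (a simplified sketch, not my
exact code; the host, port, and variable names are placeholders):

    <?php
    $sock = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
    socket_connect($sock, 'example.com', 1234);
    socket_set_nonblock($sock);

    while (true) {
        // socket_select() modifies these arrays in place, so they
        // have to be rebuilt on every iteration.
        $read   = array($sock);
        $write  = null;
        $except = null;

        // 200ms timeout: 0 seconds plus 200000 microseconds.
        $n = socket_select($read, $write, $except, 0, 200000);

        if ($n === false) {
            break;  // select error
        }
        if ($n > 0) {
            $data = socket_read($sock, 4096);
            // ... handle $data ...
        }
        // ... respond to other events here ...
    }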

Is it possible that socket_select() ignores the timeout values if the
sockets are nonblocking?  Or could I use socket_select() on a blocking
socket to simulate the effect of a nonblocking one, so that I would
only call socket_read() if socket_select() indicates there is data
waiting?
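That is, something along these lines (an untested sketch, reusing
$sock from the loop above, but left in blocking mode):

    // Only call socket_read() when socket_select() says data is
    // waiting, so the blocking read should return immediately.
    $read   = array($sock);
    $write  = null;
    $except = null;

    if (socket_select($read, $write, $except, 0, 500000) > 0) {
        $data = socket_read($sock, 4096);
        // ... handle $data ...
    }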

Thanks
Joe
