> You were right. Thank you very much Łukasz.
> This sheds a completely different light.
> It also explains why the first 9k requests passed so quickly and the
> last 1k took longer.
> I did the same with node.js and the express framework, raw node and
> node+nginx.
> The results blow me away.
> requests/second:
> ~3k for node/express (~200 more without nginx)
> ~7.2k raw PHP (<?php echo "Hello World"; ?>)
> ~20 Symfony 2.3.4 (not a hello-world app, but no DB queries) - yes, 20, not 20k
>
> The same test on Ubuntu with Apache 2.2.22 and php-fpm (no failed
> requests, same machine, a while back):
> ~14.4k raw php
> ~580 Symfony 2.2.1 hello-world app
> ~6.1k PHP with Phalcon hello world
> ~6.7k Spring on Tomcat 7.0.39 vanilla hello-world app
>
> This is weird. How can there be such a big difference?
>
>

I am not sure I fully understand the benchmark you made, but three things
can definitely help with nginx + uWSGI + PHP under high concurrency:

- use one nginx worker per CPU (too many nginx workers lead to really
poor performance)
- add --thunder-lock to uWSGI (this avoids the thundering herd problem)
- start without cheaper mode, as its default behavior is very different
from the Apache one (it is tuned by default for psgi/wsgi/rack apps)
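As a rough illustration of these three points, a minimal uWSGI ini sketch might look like the following (the socket address, worker count, and plugin name are assumptions for the example; on the nginx side the matching setting is the `worker_processes` directive in nginx.conf):

```ini
; minimal uWSGI sketch - values here are illustrative, not a recommendation
[uwsgi]
plugin = php
socket = 127.0.0.1:3031
master = true
processes = 4        ; fixed worker count: no cheaper/cheaper2 scaling
thunder-lock = true  ; serialize accept() to avoid the thundering herd
```

With `worker_processes auto;` (or an explicit count equal to the number of cores) in nginx.conf, each nginx worker gets its own CPU instead of competing for them.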

Regarding threads, more than once I have heard of people trying them
with PHP (IIRC the Gentoo or the Arch package applies a patch to allow
threads in PHP + uWSGI), but personally I never managed to get them to work.

-- 
Roberto De Ioris
http://unbit.it
_______________________________________________
uWSGI mailing list
[email protected]
http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
