Re: bug in spdy - 499 response code on long running requests

2013-10-11 Thread justin
Just upgraded to nginx 1.5.6 and still seeing this behavior where long-running requests are called twice with SPDY enabled. As soon as I disable SPDY, it goes away. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240278,243634#msg-243634

An official yum repo for 1.5x

2013-10-11 Thread justin
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/6/$basearch/
gpgcheck=0
enabled=1

This is the 1.4.x branch. Is it possible to get an official 1.5.x repo? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243633,243633#msg-243633

Re: 98: Address already in use

2013-10-11 Thread Peleke
I have the same problem: even after I stop nginx with /etc/init.d/nginx stop, worker processes are still running on Debian Wheezy:

 PID PPID %CPU    VSZ WCHAN  COMMAND
2709    1  0.0 127476 ep_pol nginx: worker process
2710    1  0.0 127716 ep_pol nginx: worker process
2711    1  0.0 127476 ep_pol nginx: worker

Re: log timeouts to access log

2013-10-11 Thread Richard Kearsley
On 11/10/13 12:25, Maxim Dounin wrote:
> Closest to what you ask about I can think of is the $request_completion variable. Though it marks not only timeouts but whether a request was completely served or not.
> http://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_completion
thank
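A minimal sketch of the variable Maxim points to: $request_completion evaluates to "OK" when a request finished normally and to an empty string otherwise, which covers client timeouts but also aborted transfers. The format name and log path below are illustrative assumptions, not from the thread.

```nginx
# Illustrative log_format using $request_completion; the "timing" name
# and log path are assumptions, not from the thread.
log_format timing '$remote_addr - $remote_user [$time_local] '
                  '"$request" $status $bytes_sent '
                  '"$http_referer" "$http_user_agent" '
                  'completed=$request_completion';

access_log /var/log/nginx/access.log timing;
```

An empty completed= field then flags requests that were not fully served, which includes the timeout case asked about, but not only that case.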

Re: Each website with its own fastcgi_cache_path

2013-10-11 Thread ddutra
Maxim, thanks for your time. It really works. Thanks a lot! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243622,243627#msg-243627
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Each website with its own fastcgi_cache_path

2013-10-11 Thread Maxim Dounin
Hello! On Fri, Oct 11, 2013 at 07:35:56AM -0400, ddutra wrote:
> I would like to know if it is possible to have multiple fastcgi_cache_path / keys_zone.
Yes, it is possible. And that's actually why the fastcgi_cache directive needs a zone name as a parameter. -- Maxim Dounin http://nginx.org
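A minimal sketch of what Maxim describes, with one cache zone per website so each site's cache can be purged independently; all paths, zone names, and host names here are hypothetical.

```nginx
# One fastcgi_cache_path (and keys_zone) per website -- names and paths
# are illustrative assumptions, not from the thread.
fastcgi_cache_path /var/cache/nginx/site1 levels=1:2 keys_zone=site1cache:10m inactive=60m;
fastcgi_cache_path /var/cache/nginx/site2 levels=1:2 keys_zone=site2cache:10m inactive=60m;

server {
    server_name site1.example.com;
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_cache site1cache;   # this vhost only touches its own zone
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 10m;
    }
}
```

With separate cache directories, clearing /var/cache/nginx/site1 evicts only that site's entries, which is the per-site purge the original question asked about.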

Re: Quick performance deterioration when No. of clients increases

2013-10-11 Thread Aidan Scheller
Hi Nick, there was a discussion recently that showed a marked performance difference between lower and higher gzip compression levels. I'll echo the concerns over CPU power and would also suggest trying gzip_comp_level 1. Thanks, Aidan On Fri, Oct 11, 2013 at 3:07 AM, wrote:
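The suggestion above as a config sketch; only gzip_comp_level 1 comes from the thread, the other directives are illustrative context.

```nginx
gzip on;
gzip_comp_level 1;   # fastest; higher levels (up to 9) trade CPU for smaller output
gzip_types text/plain text/css application/json application/javascript;
```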

Each website with its own fastcgi_cache_path

2013-10-11 Thread ddutra
Hello guys, I would like to know if it is possible to have multiple fastcgi_cache_path / keys_zone. If I host multiple websites and all share the same keys_zone, it becomes a problem if I have to purge the cache. I cannot purge it for a single website, only for all of them. This is more out of c

Re: log timeouts to access log

2013-10-11 Thread Maxim Dounin
Hello! On Fri, Oct 11, 2013 at 12:11:10PM +0100, Richard Kearsley wrote:
> Hi
> I would like to log an indication of whether a request ended because of a client timeout - in the access.log
> e.g.
> log_format normal '$remote_addr - $remote_user [$time_local] ' '"$request" $status $bytes

Re: Quick performance deterioration when No. of clients increases

2013-10-11 Thread Dennis Jacobfeuerborn
On 11.10.2013 10:18, Steve Holdoway wrote:
> The ultimate bottleneck in any setup like this is usually raw cpu power. A single virtual core doesn't look like it'll hack it. You've got 35 php processes serving 250 users, and I think it's just spread a bit thin. Apart from adding cores, there are 2

log timeouts to access log

2013-10-11 Thread Richard Kearsley
Hi, I would like to log an indication of whether a request ended because of a client timeout - in the access.log. e.g.

log_format normal '$remote_addr - $remote_user [$time_local] '
                  '"$request" $status $bytes_sent '
                  '"$http_referer" "$http_user_agent" "$client_send_timed_out"';

where $client_

Re: Quick performance deterioration when No. of clients increases

2013-10-11 Thread Steve Holdoway
The ultimate bottleneck in any setup like this is usually raw cpu power. A single virtual core doesn't look like it'll hack it. You've got 35 php processes serving 250 users, and I think it's just spread a bit thin. Apart from adding cores, there are 2 things I'd suggest looking at - are yo
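If the PHP worker pool itself is the constraint, the knobs live in the php-fpm pool configuration rather than in nginx. A hedged sketch of where the 35-process figure comes from; every value below is a placeholder to show the settings, not a recommendation from the thread.

```ini
; php-fpm pool sizing -- values are illustrative placeholders,
; not tuned numbers from this discussion.
pm = dynamic
pm.max_children = 35      ; hard cap on concurrent PHP workers
pm.start_servers = 8
pm.min_spare_servers = 4
pm.max_spare_servers = 12
pm.max_requests = 500     ; recycle workers to contain memory leaks
```

On a single-core VM, raising pm.max_children beyond what the CPU can actually schedule tends to add contention rather than throughput, which matches the "spread a bit thin" observation above.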