Hello!
On Fri, Oct 4, 2013 at 7:48 AM, laltin wrote:
> My backend logs show response times generally less than 2ms, and with 4
> instances the backends can handle 2000 reqs/sec.
> But when I look at the nginx logs I see response times around 200ms, and I
> think nginx is the main problem in my situation.
As promised, here are my stats on VMware with 4 vCPUs, running
siege -c50 -b -t240s -i 'http://127.0.0.1/test.html'
with gzip off and pagespeed off.
Transactions: 898633 hits
Availability: 100.00 %
Elapsed time: 239.55 secs
Data transferred: 39087.92 MB
Response
Hello!
On Fri, Oct 04, 2013 at 12:52:28PM -0400, ddutra wrote:
> Maxim,
> Thank you again.
>
> About my tests, FYI I had httpauth turned off for my tests.
>
> I think you nailed the problem.
>
> This is some new information for me.
>
> So for production I have a standard website which is php
Maxim,
Thank you again.
About my tests: FYI, I had httpauth turned off for them.
I think you nailed the problem.
This is some new information for me.
So for production I have a standard website which is PHP, cached by the
fastcgi cache. All static assets are served by nginx, so gzip_static
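A rough sketch of that kind of setup, assuming a PHP-FPM backend on 127.0.0.1:9000 and a cache zone called PHPCACHE (the backend address, zone name, paths and sizes are illustrative, not taken from this thread), might look like this:

http {
    # Cache zone for responses coming back from the PHP backend.
    fastcgi_cache_path /var/cache/nginx/php levels=1:2
                       keys_zone=PHPCACHE:64m inactive=60m max_size=1g;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";

    server {
        listen 80;
        root /var/www/site;

        # Static assets are served directly by nginx; gzip_static serves a
        # pre-compressed .css.gz/.js.gz file when the client accepts gzip
        # (requires nginx built with the gzip_static module).
        location ~* \.(css|js)$ {
            gzip_static on;
            expires max;
        }

        # PHP requests go to FastCGI, and the responses are cached.
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_cache PHPCACHE;
            fastcgi_cache_valid 200 301 10m;
        }
    }
}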
Hello!
On Fri, Oct 04, 2013 at 09:43:41AM -0400, DevNginx wrote:
> I would also like to add a vote for FCGI multiplexing.
>
> There is no obligation for backends, since non-implementing backends can
> indicate FCGI_CANT_MPX_CONN in response to a FCGI_GET_VALUES request by
> nginx. The other pos
Hello!
On Fri, Oct 04, 2013 at 11:44:41AM -0400, IggyDolby wrote:
> Thanks for your reply. Would you suggest using some other open source proxy,
> like Squid or Varnish, as a forward proxy?
Squid is known to work.
Varnish, as far as I can tell, isn't a forward proxy either, much
like nginx
Thanks for your reply. Would you suggest using some other open source proxy,
like Squid or Varnish, as a forward proxy?
Hello!
On Fri, Oct 04, 2013 at 11:17:09AM -0400, IggyDolby wrote:
> Hi, I'm an nginx newbie and I need to configure it as an externally
> available forward proxy for manually configured iPads running an iOS app to
> test a soon to go Live website.
Just a disclaimer:
Please note that nginx is re
Hi, I'm an nginx newbie and I need to configure it as an externally
available forward proxy for manually configured iPads running an iOS app, to
test a soon-to-go-live website.
You cannot change the hosts file on an iPad, but we can ask our external
testers to configure the proxy settings manually.
The o
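For what it's worth, the workaround usually passed around for plain-HTTP testing is to proxy to whatever Host header the client sent. This is only a sketch (the listen port and resolver address are made up), it cannot handle HTTPS because nginx does not implement CONNECT, and it is not a supported forward-proxy configuration:

server {
    # Port the iPads would be pointed at in their manual proxy settings.
    listen 8888;

    # nginx has to resolve the requested hostnames itself in this mode.
    resolver 8.8.8.8;

    location / {
        # Forward the request to whatever host the client asked for.
        proxy_pass http://$http_host$request_uri;
        proxy_set_header Host $http_host;
    }
}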
Hello!
On Fri, Oct 04, 2013 at 05:33:57PM +0300, Anatoli Marinov wrote:
[...]
> From dev mail list Maxim advised me to backup $upstream_http_location in
> other variable and I did it but the result was the same - 500 internal
> server error. The config after the fix is:
[...]
> locatio
Hello!
On Fri, Oct 04, 2013 at 09:43:05AM -0400, ddutra wrote:
> Hello Maxim,
> Thanks again for your considerations and help.
>
> My first siege tests against the ec2 m1.small production server was done
> using a Dell T410 with 4CPUS x 2.4 (Xeon E5620). It was after your
> considerations about
My backend logs show response times generally less than 2ms, and with 4
instances the backends can handle 2000 reqs/sec.
But when I look at the nginx logs I see response times around 200ms, and I
think nginx is the main problem in my situation.
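One way to narrow down where those 200ms are spent is to log nginx's total time next to the time it spent waiting on the backend; a minimal sketch (the format name and log path are placeholders):

http {
    # $request_time covers the whole request as nginx sees it, client included;
    # $upstream_response_time covers only the wait for the backend.
    log_format timing '$remote_addr "$request" status=$status '
                      'req=$request_time upstream=$upstream_response_time';
    access_log /var/log/nginx/timing.log timing;
}

If upstream stays around 2ms while req is around 200ms, the time is going to the client side of the connection (or to queuing in front of the backend) rather than to the backend itself.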
Hello,
Is there an easy way to configure an nginx upstream to follow 302 responses
instead of sending them to the browser?
I tried with this config:
http {
    proxy_intercept_errors on;
    proxy_cache_path /home/toli/nginx/run/cache keys_zone=zone_c1:256m
                     inactive=5d max_size=30g;
    upstream up_servers {
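The approach usually suggested for this is to intercept the redirect with error_page and re-proxy to the saved Location header. A rough sketch along those lines (the resolver address and location paths are assumptions, and this is not a drop-in fix for the 500 mentioned above):

location / {
    proxy_pass http://up_servers;
    proxy_cache zone_c1;
    proxy_intercept_errors on;
    # 3xx answers from the upstream are handed to the named location below.
    error_page 301 302 307 = @follow_redirect;
}

location @follow_redirect {
    # Save the upstream's Location header before contacting the next server.
    set $saved_location $upstream_http_location;
    resolver 8.8.8.8;
    proxy_pass $saved_location;
}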
Well, I just looked at the results again and it seems my throughput (MB per
second) is not very far from yours.
My bad.
So the results are not that bad, right? What do you think?
Best regards.
I would also like to add a vote for FCGI multiplexing.
There is no obligation on backends, since a non-implementing backend can
indicate FCGI_CANT_MPX_CONN in response to an FCGI_GET_VALUES request from
nginx. The other poster has already mentioned FCGI_ABORT_REQUEST and
dropping response packets fr
Hello Maxim,
Thanks again for your considerations and help.
My first siege tests against the EC2 m1.small production server were done
using a Dell T410 with 4 CPUs x 2.4GHz (Xeon E5620). It was after your
considerations about 127.0.0.1 that I ran siege from the same server that
is running nginx (pro
Hello!
On Thu, Oct 03, 2013 at 03:00:51PM -0400, ddutra wrote:
> Maxim Dounin Wrote:
> ---
>
> > The 15 requests per second for a static file looks utterly slow,
> > and first of all you may want to find out what's a limiting factor
> > in th
I tried to remove it already.
From: Richard Kearsley
To: nginx@nginx.org
Sent: Friday, October 4, 2013 6:20 PM
Subject: Re: ngx_cache_purge not found
On 04/10/13 12:04, Indo Php wrote:
> allow 127.0.0.1;
> deny all;
the
On 04/10/13 12:04, Indo Php wrote:
> allow 127.0.0.1;
> deny all;
The URL will only work if requested from the server itself...
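If the purge ever needs to be triggered from another machine, the usual adjustment is simply to allow that machine (or an internal subnet) in addition to localhost; the 10.0.0.0/8 range below is just an example, and the purge key itself is a separate question (see the sketch after the original configuration below):

location ~ /purge(/.*) {
    allow 127.0.0.1;
    allow 10.0.0.0/8;   # example: internal hosts also allowed to send purge requests
    deny all;
    proxy_cache_purge one backend$request_uri;
}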
Hi there,
I tried to use ngx_cache_purge with the configuration below:

location ~ /purge(/.*) {
    allow 127.0.0.1;
    deny all;
    proxy_cache_purge one backend$request_uri;
}

location ~* ^.+\.(css|js)$ {
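As an aside, the key given to proxy_cache_purge has to match the proxy_cache_key of the entries being cached; with a location like /purge(/.*), the module's documented pattern builds the key from the captured $1 rather than from $request_uri (which would still contain the /purge prefix). A sketch under that assumption, reusing the zone name "one" and the "backend" key prefix from the configuration above; note also that an "unknown directive" error for proxy_cache_purge means the module is not compiled into the running nginx at all:

proxy_cache_path /var/cache/nginx/proxy keys_zone=one:64m;

location / {
    proxy_pass http://backend;
    proxy_cache one;
    proxy_cache_key backend$uri$is_args$args;
}

location ~ /purge(/.*) {
    allow 127.0.0.1;
    deny all;
    # $1 is the part after /purge, so this key matches the cached entry.
    proxy_cache_purge one backend$1$is_args$args;
}

With the allow/deny above, the purge request would then be issued from the server itself, e.g. against http://127.0.0.1/purge/some/path.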