Hello!
On Mon, Sep 30, 2013 at 7:25 AM, laltin wrote:
> But looking at the tornado logs I expect around 2000 reqs/sec. Assuming that
> each request is handled in 2 ms, one instance can handle 500 reqs/sec, and
> with 4 instances it should be 2000 reqs/sec. But it is stuck at 1200 reqs/sec;
> I wonder why it is stuck at that point?
Hello!
On Mon, Sep 30, 2013 at 09:16:14AM -0400, revirii wrote:
> Hi,
>
> thanks for your answer :-)
>
> > It's an implementation detail. As of now, two identical
> > upstream{} blocks will map the same ip address to the same peer's
> > number. But it's not something guaranteed.
>
> ok, this is the behaviour when the upstreams are identical, i.e. they have
> the same backend
Hi,
I'm using Nginx 1.2.9 behind Amazon's ELB. We recently moved our servers
behind the ELB, and while testing keepalives between the server and the
ELB we are seeing request times double from around 25 ms to 50 ms. I tried
disabling postpone_output since I thought it was a buffering issue, but
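For context, a minimal sketch of the directives involved in this report (the values are illustrative, not the poster's actual configuration):

```nginx
http {
    # Keep client-side (ELB-facing) connections open for reuse.
    keepalive_timeout  65s;
    keepalive_requests 100;

    # postpone_output delays sending a response until this many bytes
    # are buffered; setting it to 0 disables the postponement.
    postpone_output 0;
}
```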
On Mon, Sep 30, 2013 at 03:42:50PM +0200, Thijs Koerselman wrote:
> Hi there,
> From the add_header docs I understand that it works at location, http and
> server context. But when I use add_header at the server level I don't see
> the headers being added to the response.
> Am I missing something?
But looking at the tornado logs I expect around 2000 reqs/sec. Assuming that
each request is handled in 2 ms, one instance can handle 500 reqs/sec, and
with 4 instances it should be 2000 reqs/sec. But it is stuck at 1200 reqs/sec;
I wonder why it is stuck at that point?
Does increasing the number of instances c
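A back-of-envelope check of the numbers in this thread. This assumes each Tornado instance handles one request at a time, so per-instance throughput is simply 1 / service_time; the "implied" figure below is a hypothetical reading of the 1200 reqs/sec plateau, not something stated in the thread.

```python
# Expected throughput from the stated 2 ms service time.
service_time_s = 0.002   # 2 ms per request, as stated in the thread
instances = 4

per_instance = 1.0 / service_time_s           # 500 req/s per instance
expected_total = per_instance * instances     # 2000 req/s for 4 instances

# The observed plateau was 1200 req/s; if all 4 instances really are
# saturated, that implies a higher effective per-request time:
observed_total = 1200.0
implied_ms = instances / observed_total * 1000.0   # ~3.33 ms per request

print(expected_total, round(implied_ms, 2))
```

In other words, a plateau at 1200 reqs/sec is consistent with each request effectively costing about 3.3 ms rather than 2 ms, which is why measuring the real per-request latency under load is the first thing to check.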
From the add_header docs I understand that it works at location, http and
server context. But when I use add_header at the server level I don't see
the headers being added to the response.
For example my server config starts with:
server {
listen 9088;
server_name loca
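The usual explanation for "missing" server-level headers is add_header's inheritance rule: a location inherits the server-level add_header directives only if it defines none of its own. A minimal sketch (the server_name and header names here are illustrative, not the poster's config):

```nginx
server {
    listen 9088;
    server_name example.local;   # illustrative name

    # Server-level header: inherited by a location only if that
    # location has no add_header directives of its own.
    add_header X-Server-Level "1";

    location / {
        # Defining any add_header here discards the inherited set,
        # so the server-level header must be repeated to keep it.
        add_header X-Location-Level "1";
        add_header X-Server-Level  "1";
    }
}
```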
Hi,
thanks for your answer :-)
> It's an implementation detail. As of now, two identical
> upstream{} blocks will map the same ip address to the same peer's
> number. But it's not something guaranteed.
ok, this is the behaviour when the upstreams are identical, i.e. they have
the same backend
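A minimal sketch of the situation under discussion, assuming ip_hash balancing (the upstream names and addresses are illustrative). With identical server lists, a given client IP currently hashes to the same peer number in both blocks, though as noted above this is an implementation detail and not guaranteed:

```nginx
# Two upstream{} blocks with identical server lists.
upstream pool_a {
    ip_hash;
    server 10.0.0.1;
    server 10.0.0.2;
}

upstream pool_b {
    ip_hash;
    server 10.0.0.1;
    server 10.0.0.2;
}
# With the current implementation, a given client IP selects the same
# peer number (e.g. the second server) in both pool_a and pool_b.
```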
Hello!
On Mon, Sep 30, 2013 at 08:11:13AM -0400, revirii wrote:
> hmm... no one? Is this unknown or a secret? At least i wasn't able to find
> any detailed documentation about this.
It's an implementation detail. As of now, two identical
upstream{} blocks will map the same ip address to the same peer's
number. But it's not something guaranteed.
Hello!
On Mon, Sep 30, 2013 at 12:18:41AM -0700, shaun wrote:
> I am very confused. There is an extra "?" added to the end of the
> rewrite. I have no idea why; I look at the logs, and it magically
> appears.
Could you please point out where the "?" that confuses you is?
Please note that in
hmm... no one? Is this unknown or a secret? At least I wasn't able to find
any detailed documentation about this.
Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,243162,243259#msg-243259
___
nginx mailing list
nginx@nginx.org
http://mailman.ng
My question is: why do response times increase when the concurrency level
is increased?
What are your worker_processes and worker_connections values?
P.S. Also, to be sure: has the OS open files limit been increased from the
typical default of 1024?
rr
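For reference, a sketch of the settings being asked about (all values illustrative):

```nginx
worker_processes  4;               # typically one per CPU core

events {
    worker_connections  10240;     # per-worker connection cap
}

# Raise the per-worker open-file limit past the common OS default of
# 1024; each connection consumes a file descriptor, so this needs to
# be at least as large as worker_connections.
worker_rlimit_nofile  20480;
```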
I am having a problem.
I am very confused. There is an extra "?" added to the end of the
rewrite. I have no idea why; I look at the logs, and it magically appears.
I want to reload the login page, but nothing happens, any ideas?
location /login/ {
if ($args) {
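The configuration snippet above is cut off, but for context, a minimal sketch of the trailing-"?" behaviour in rewrite (the paths are illustrative, not the poster's actual config): nginx appends the original query string to the replacement unless the replacement itself ends in "?", which suppresses the arguments.

```nginx
location /login/ {
    # If the request has arguments, redirect back to the bare URI.
    # The trailing "?" on the replacement is what stops nginx from
    # re-appending the original query string; without it, the client
    # would be redirected to /login/?args again, in a loop.
    if ($args) {
        rewrite ^ /login/? redirect;
    }
}
```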
Hello folks!
I am happy to announce that the new mainline version of ngx_openresty,
1.4.2.9, is now released:
http://openresty.org/#Download
Special thanks go to all the contributors for making this happen!
Below is the complete change log for this release, as compared to the
last (stable)