This is difficult to assess without specific details of your environment, but
assuming that your HTTP headers are not really that large, a 414 could be a
sign that you have a loop somewhere, forwarding requests between your servers
(and growing the request line on each hop) until you hit 414.
Again, this is assuming that you have a
Is IP_BIND_ADDRESS_NO_PORT the best solution for the OP's case? Unlike the
blog post with two backends, the OP's case has one backend server. If any
of the hash slots exceeds the 65K port limit, there's no chance to
recover: despite having enough port capacity overall, the client will receive
an error if the client
Does anybody have any history/rationale on why keepalive_requests
uses a default of 100 requests in nginx? The same default is also used in
Apache, but it seems very small by today's standards.
http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests
Regards,
Tolga
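For reference, keepalive_requests can be raised in the http, server, or location context; a sketch (the 1000 value below is only an illustration, not a recommendation):

```nginx
http {
    # Allow more requests per keep-alive connection than the
    # old default of 100; tune to your traffic pattern.
    keepalive_requests 1000;

    # 75s is the documented default keep-alive timeout.
    keepalive_timeout  75s;
}
```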
> [..]
>
>
> [1]
> https://kernelnewbies.org/Linux_4.2#head-8ccffc90738ffcb0c20caa96bae6799694b8ba3a
> [2] https://git.kernel.org/torvalds/c/90c337da1524863838658078ec34241f45d8394d
>
>
>> On 08 Mar 2017, at 01:10, Tolga Ceylan wrote:
>>
How about using
split_clients "${remote_addr}AAA" $proxy_ip {
    10%  192.168.1.10;
    10%  192.168.1.11;
    ...
    *    192.168.1.19;
}

proxy_bind $proxy_ip;
where $proxy_ip is
> On Tue, Oct 25, 2016 at 04:28:22PM -0700, Frank Liu wrote:
>
> Hi there,
>
>> If I configure one "upstream" with 2 servers and use the default round
>> robin, will the traffic be balanced based on the upstream or on the virtual
>> servers? E.g., if I configure 2 virtual host "server" blocks, both
>>
FYI, a possibly related issue on the nginx-devel mailing list:
https://forum.nginx.org/read.php?29,264637,264637#msg-264637
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
This looks normal. Definition of 'send_timeout' is in:
http://nginx.org/en/docs/http/ngx_http_core_module.html#send_timeout
"Sets a timeout for transmitting a response to the client. The timeout
is set only between two successive write operations, not for the
transmission of the whole response. If the client does not receive
anything within this time, the connection is closed."
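To illustrate, the directive takes a single timeout value and is valid in the http, server, and location contexts (the 20s below is only an example; the documented default is 60s):

```nginx
server {
    listen 80;

    # Timeout between two successive writes to the client,
    # not for the whole response transfer.
    send_timeout 20s;
}
```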
Maybe "proxy_buffering on" interacts badly with websockets?
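If buffering is the suspect, a common websocket proxy block looks like the sketch below (the backend name and location path are assumptions; the Upgrade/Connection headers are what nginx needs to switch protocols):

```nginx
location /ws/ {
    proxy_pass http://backend;

    # Websockets require HTTP/1.1 and the Upgrade handshake
    # to be forwarded to the backend.
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    # Rule out buffering as a factor for long-lived connections.
    proxy_buffering off;
}
```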
On Sun, Oct 11, 2015 at 2:42 PM, javdev wrote:
> Hello guys, this is my first question here.
>
> I've been working with nginx for almost 2 years, but in the last few days
> I have run into an error that is very complicated to solve.
>
> I'm working on Amaz