The config is rather simple, as follows. My test version is 1.7.2, a bit
old; I can't upgrade to the latest one in our production for now. Anyway, I
think it should work in 1.7.2, because the documentation says proxy_protocol
was introduced in 1.5.12.
http {
log_format combined '$proxy_protocol_ad
Hi Maxim,
We came across a case where kill -USR1 doesn't cause nginx to reopen
access.log. We need to run nginx with "daemon off" and "master_process
off". Is that a known issue, and is there any workaround?
On Tue, May 20, 2014 at 4:33 AM, Maxim Dounin wrote:
> Hello!
>
> On Mon, May 19, 2
Hello!
On Tue, Sep 26, 2017 at 05:24:26PM +0200, Grzegorz Kulewski wrote:
> W dniu 26.09.2017 15:20, Maxim Dounin pisze:
> > Hello!
> >
> > On Tue, Sep 26, 2017 at 03:48:57AM +0200, Grzegorz Kulewski
> > wrote:
> >
> >> Is resolver in nginx still needed for OCSP stapling?
> >
> > Yes.
> >
>
On Wednesday 27 September 2017 06:57:00 garyc wrote:
[..]
> We can live with a 2 stage process for now, may look to move to http 2 for
> other reasons in the future, we can address this properly then.
>
[..]
Well, Chrome behavior with HTTP/2 isn't better.
Here's a workaround that we had to add i
Thanks, understood.
Got a bit excited when I realized our client wasn't sending the 'Expect:
100-continue' header in our POST, but as you have pointed out, even with
this header Chrome and Firefox do not stop sending the body.
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,276512,27657
Hi,
> Certainly things will be different if
> requests are not equal, though this is what least_conn is expected
> to address (and again, it does so better than just testing two
> choices).
Awesome, I hope to address this issue in my research. My suspicion is
that round-robin and random will c
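For what it's worth, the intuition about unequal requests can be checked with a small simulation. This is a hypothetical sketch, not nginx code; the server count, the exponential request-cost distribution, and the strategy names are all assumptions made for illustration:

```python
import random

def simulate(strategy, n_servers=10, n_requests=10000, seed=0):
    """Dispatch requests with unequal costs to servers using the given
    strategy and return the maximum total load on any one server."""
    rng = random.Random(seed)
    load = [0.0] * n_servers
    rr = 0
    for _ in range(n_requests):
        cost = rng.expovariate(1.0)  # unequal request costs
        if strategy == "round_robin":
            i = rr % n_servers
            rr += 1
        elif strategy == "random":
            i = rng.randrange(n_servers)
        elif strategy == "two_choices":
            # pick the less loaded of two randomly sampled servers
            a, b = rng.randrange(n_servers), rng.randrange(n_servers)
            i = a if load[a] <= load[b] else b
        else:  # "least_conn": always pick the least-loaded server
            i = min(range(n_servers), key=load.__getitem__)
        load[i] += cost
    return max(load)
```

In runs of this sketch, the peak load typically orders as least_conn <= two_choices <= random, matching the point that least_conn addresses unequal requests more directly than sampling two choices does.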
Hello!
On Wed, Sep 27, 2017 at 06:57:00AM -0400, garyc wrote:
> Looks like support for 100 Continue:
>
> https://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.2.3
>
> may have covered our scenario, I found an old ticket on this
>
> https://trac.nginx.org/nginx/ticket/493#no1
Support for
Looks like support for 100 Continue:
https://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.2.3
may have covered our scenario, I found an old ticket on this
https://trac.nginx.org/nginx/ticket/493#no1
I guess there is no intention to support this in a future release?
We can live with a 2
I had a look around for the expected behavior during large HTTP/1.x POST
requests; it looks like the standards suggest that browsers should monitor
for 413 responses and terminate the body transmission, but they don't:
https://stackoverflow.com/questions/18367824/how-to-cancel-http-upload-from-data-eve
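To make the two-stage flow concrete, here is a minimal, self-contained Python sketch (plain sockets, not nginx) of a client that honors Expect: 100-continue: it sends only the headers first, uploads the body if the server answers 100, and aborts on anything else such as a 413. The 1 KB limit, paths, and all names are illustrative assumptions:

```python
import socket
import threading

MAX_BODY = 1024  # hypothetical server-side body limit

def serve_one(listener):
    """Accept one connection; reply 413 if Content-Length exceeds the
    limit, otherwise reply 100 Continue, read the body, and reply 200."""
    conn, _ = listener.accept()
    with conn:
        head = b""
        while b"\r\n\r\n" not in head:
            head += conn.recv(4096)
        length = 0
        for line in head.split(b"\r\n"):
            if line.lower().startswith(b"content-length:"):
                length = int(line.split(b":")[1])
        if length > MAX_BODY:
            conn.sendall(b"HTTP/1.1 413 Payload Too Large\r\n"
                         b"Content-Length: 0\r\n\r\n")
            return
        conn.sendall(b"HTTP/1.1 100 Continue\r\n\r\n")
        body = b""
        while len(body) < length:
            body += conn.recv(4096)
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")

def post_with_expect(port, body):
    """Send headers with Expect: 100-continue; transmit the body only if
    the server answers 100, abort otherwise. Returns the status line."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(b"POST /upload HTTP/1.1\r\nHost: localhost\r\n"
                  b"Content-Length: %d\r\nExpect: 100-continue\r\n\r\n"
                  % len(body))
        status = s.recv(4096).split(b"\r\n")[0]
        if b"100" not in status:
            return status.decode()   # body was never sent
        s.sendall(body)
        return s.recv(4096).split(b"\r\n")[0].decode()
```

With a compliant client like this, an oversized POST costs only the header round trip; the thread's observation is that real browsers send the body regardless, which is exactly what makes the server-side 413 ineffective as an early cutoff.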