On Thu, Jan 25, 2018 at 8:22 PM, Roman Arutyunyan wrote:
> Hi Jeffrey,
On Thu, Jan 25, 2018 at 05:41:50PM +0800, Jeffrey 'jf' Lim wrote:
This is more of a curiosity thing, I guess, than anything else, but...
how do you trigger a "proxy_next_upstream invalid_header" when
testing?
I've tried basically sending random text from an upstream ('nc -l')...
but nginx holds on to the connection and ends up trigg
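One likely explanation for the hanging connection: the fake upstream has to close the connection after sending the malformed bytes, otherwise nginx keeps reading until a timeout fires first. A hedged sketch (the port and `nc` flags are illustrative and vary between netcat variants):

```shell
# Serve exactly one malformed response and close immediately, so nginx
# sees a complete-but-invalid header block instead of a stalled read.
# GNU netcat: -q 0 closes after stdin EOF; BSD netcat uses -N instead.
printf 'this is not a valid HTTP status line\r\n\r\n' | nc -l -p 8081 -q 0
```

With proxy_pass pointing at 127.0.0.1:8081, nginx should then classify the response as invalid_header (and, with proxy_next_upstream invalid_header, move on to the next server).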
Thanks again for the detailed reply. Yeah, it would have been good to have
this feature in the nginx upstream module.
It's an important feature; will try out your suggestions and will share.
Thanks a lot for sharing your inputs!
Cheers,
Kaustubh
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,27244
On Tue, Feb 21, 2017 at 04:45:09AM -0500, kaustubh wrote:
Hi there,
> Thanks Francis! I was able to test that above works.
Good stuff -- at least we can see that things are fundamentally correct.
> But problem is when we have proxy buffering off and when we try to send
> large file say 1gb, it
proxy_pass http://local;
proxy_next_upstream error timeout invalid_header http_502 http_503
http_504;
proxy_http_version 1.1;
proxy_request_buffering off;
}
}
server {
    listen 8008;
    access_log /var/log/nginx/503.log combined;
    return 503;
}
server {
    listen 8009;
On Wed, Feb 15, 2017 at 10:47:56PM +0530, Kaustubh Deorukhkar wrote:
Hi there,
> For some reason this is not working. Can someone suggest if am missing
> something?
It seems to work fine for me as-is for GET and PUT. And not for POST.
> proxy_next_upstream error timeout inval
Thanks for the reply. But I checked the upstreams, and the second instance is
working fine but does not receive the retry request.
I did a small setup where one upstream instance responds early with 503 and
the other instance processes requests,
and I observe that the request never comes to the working upstream server on
ea
upstream myservice {
    server localhost:8081;
    server localhost:8082;
}
server {
    ...
    location / {
        proxy_pass http://myservice;
        proxy_next_upstream error timeout invalid_header http_502 http_503
                            http_504;
    }
}
So what I want is: if any upstream server gives the above errors, it should
try the next
Hello!
On Wed, Feb 15, 2017 at 01:27:53PM +0530, Kaustubh Deorukhkar wrote:
> We are using nginx as a reverse proxy and have a set of upstream servers
> configured,
> with upstream next enabled for a few error conditions to try the next
> upstream server.
> For some reason this is not working. Can someone
onses through while nginx returns its
own 503 response instead of server 503 html responses.
That doesn't seem to be possible with the existing proxy options.
On Wed, Oct 19, 2016 at 8:37 PM, Piotr Sikora
wrote:
Hey Marques,
> "proxy_next_upstream error" has exemptions for 402 and 403. Should it not
> have exemptions for 429 "Too many requests" as well?
>
> I want proxied servers' 503 and 429 responses with "Retry-After" to be
> delivered to the client as the server responded. The 429s i
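For the passthrough side at least, a minimal sketch (the upstream name `backend` is illustrative, not from the thread): statuses that are not listed in `proxy_next_upstream` are simply relayed to the client, so omitting `http_503` (and, in nginx 1.11.13+, `http_429`) lets the server's own 503/429 responses and their `Retry-After` headers through:

```nginx
location / {
    proxy_pass http://backend;
    # Retry only on connection-level failures; upstream 429/503 responses,
    # including their Retry-After headers, pass through to the client.
    proxy_next_upstream error timeout;
}
```

The selective behaviour asked for above (retry some 503s but deliver the ones carrying Retry-After) does not appear to be expressible with these directives alone.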
I figured out what it was. I had an error_page directive in another location
block in the same server.conf that was apparently overriding the
proxy_next_upstream. I commented it out and now the upstream throwing the
404 is being skipped. I'm just going to remove 404 from the error_page
dire
Show your full upstream and proxy config.
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,268529,268537#msg-268537
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
We have a backend server throwing a 404 error, so I added the directive
proxy_next_upstream error timeout http_404; but that seems to have no
effect. Nginx is still performing round robin connections to the working
backend server and the backend server throwing a 404. Is there another
directive I
Does the proxy_next_upstream "timeout" apply to both connect timeout and
read timeout?
Is it possible to configure proxy_next_upstream to use the connect timeout
only, not the read timeout? In case a connection is made and the request is
sent, I don't want to retry the next upstream eve
FYI, possible related issue on nginx-dev mail list:
https://forum.nginx.org/read.php?29,264637,264637#msg-264637
Is this question too stupid, or does nobody have an answer to it? ;)
I just think POSTs are not idempotent and should never be repeated, for a
technical reason. Is there any chance to configure nginx in a way to try the
next upstream only if the first one really failed when using POST requests?
Timed out
I want nginx to prevent trying the next upstream if the request is a POST
request and the request just timed out. POSTs should only be repeated on
error. I tried this config to implement it:
if ($request_method = POST) {
    proxy_next_upstream error;
}
But this fails with:
nginx: [emerg
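`proxy_next_upstream` is only valid in http, server, and location context, not inside `if`, which is what the [emerg] is about. One commonly suggested workaround is to bounce POSTs to a named location with its own retry policy via an internal redirect; a hedged sketch (the upstream name `backend` and the 418 marker code are illustrative, not from the thread):

```nginx
location / {
    error_page 418 = @post;        # internal redirect, never sent to clients
    if ($request_method = POST) {
        return 418;
    }
    proxy_pass http://backend;
    proxy_next_upstream error timeout;
}

location @post {
    proxy_pass http://backend;
    proxy_next_upstream error;     # POSTs: retry only on hard errors
}
```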
server 10.0.0.24:8080;
server 10.0.0.25:8080;
server 10.0.0.26:8080;
}
server {
    listen 90 default_server;
    location = / {
        proxy_pass http://backend_2;
        proxy_next_upstream error timeout http_404;
    }
    location / {
        proxy_pass http://backend;
        proxy_next_upstream
OK. Thank you very much. I will do an experiment to find this out.
2014/1/16 itpp2012
renenglish Wrote:
---
> Sorry I can't get it.
>
> If host A has added the counter and failed to respond, the request
> would be failed over to host B with a successful response, so the
> counter would be added twice. Wouldn't it?
Then a condit
Sorry I can't get it.
If host A has added the counter and failed to respond, the request would be
failed over to host B with a successful response, so the counter would be
added twice. Wouldn't it?
On Jan 14, 2014, at 5:48 PM, itpp2012 wrote:
> Unless the request is getting queued while there is a short
Is there any update about this feature?
Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,238124,246431#msg-246431
Unless the request is getting queued while there is a short wait for host A
to get online AND fail-over is also happening, it's not likely to be added
twice.
Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,245979,246388#msg-246388
Can anyone help?
2014/1/3 任勇全
Hi all:
I am wondering, if I set:
proxy_next_upstream error timeout;
For example, if the requested service is a counter and I issue the request
using the interface http://example.com/incr . The request fails on my first
host A, then it is passed to the second host B; is the counter likely
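Worth noting for readers finding this thread later: since nginx 1.9.13, non-idempotent requests (POST, LOCK, PATCH) are not passed to the next server on error or timeout unless that is explicitly enabled, which addresses exactly this double-increment worry. A sketch (the upstream name `counters` is illustrative):

```nginx
location /incr {
    proxy_pass http://counters;
    # Without "non_idempotent", a failed POST is NOT retried on the next
    # server (nginx 1.9.13+); add it only if double counting is acceptable.
    proxy_next_upstream error timeout non_idempotent;
}
```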
Hello!
On Tue, Sep 24, 2013 at 3:39 PM, pigmej wrote:
> Yeah, I meant rewrite obviously... I would still prefer to not have even
> rewrite if it's possible.
>
It's not worth saving at all. If you take an on-CPU Flame Graph for
your loaded Nginx worker processes, you'll never even see it on the
y send the questions to openresty group too.
Thanks for your replies.
-Original Message-
From: "Yichun Zhang (agentzh)"
Sender: nginx-bounces@nginx.org
Date: Tue, 24 Sep 2013 13:28:06
To:
Reply-To: nginx@nginx.org
Subject: Re: ngx_lua + proxy_next_upstream
Hello!
On Tue, Sep
Hello!
On Tue, Sep 24, 2013 at 2:35 AM, Jedrzej Nowak wrote:
>
> The question is how can I do NOT redirect ?
Well, "rewrite ... break" is not a redirect. It is just an internal
URI rewrite. That's all.
> I tried with @test instead of
> /test but no success. Is there any other way to do that ?
>
subrequest to @blah
}
Is something like that recommended, or how should it be done?
Regards,
Jędrzej Nowak
On Fri, Sep 20, 2013 at 2:34 AM, Yichun Zhang (agentzh)
wrote:
> Hello!
>
> On Wed, Sep 18, 2013 at 6:09 AM, Jedrzej Nowak wrote:
> > The question is how can I achieve pr
Hello!
On Wed, Sep 18, 2013 at 6:09 AM, Jedrzej Nowak wrote:
> The question is how can I achieve proxy_next_upstream.
> Preferably I would like to return to lua with an error reason.
> If the only way is to return several servers in the upstream from lua, how
> to do so?
>
If you wan
Hello,
I have configured:
1. ngx_lua as rewrite_by_lua_file
2. lua returns the upstream
3. nginx connects to the upstream
This works perfectly.
The question is how can I achieve proxy_next_upstream.
Preferably I would like to return to lua with an error reason.
If the only way is to return several servers
Thanks for the quick reply Maxim. That looks interesting, though in
particular proxy_next_upstream and proxy_read_timeout don't report to be
valid in that context. I'll give it a try, perhaps it's just an error in
the docs.
On Fri, Jul 5, 2013 at 9:02 AM, Maxim Dounin wrote:
Hello!
On Fri, Jul 05, 2013 at 08:32:38AM -0400, Branden Visser wrote:
> Hi all,
>
> I was wondering if there is a way to have different proxy_* rules depending
> on the HTTP method? My use case is that I want to be a little more
> conservative about what requests I retry for POST requests, as t
Hi all,
I was wondering if there is a way to have different proxy_* rules depending
on the HTTP method? My use case is that I want to be a little more
conservative about what requests I retry for POST requests, as they have an
undesired impact if tried twice after a "false" timeout.
e.g., for GET
I agree. The directive name and format are always the difficult parts. I
thought we could add a new parameter to the proxy_next_upstream directive.
The individual directive is OK for me.
Limiting the total retry time is great. It could eliminate some very long
timeout responses.
2013/4/6 Maxim
after several times, all of the servers will be marked down for a while and
all of the requests will be replied to with 502.
We also need the retry mechanism and don't want to disable it. I think if
there were a configurable number of tries with the directive
proxy_next_upstream, it would be very nice.
2013/4/5 Maxim Dounin
> Hello!
>
> On Fri, Apr 05, 2013 at 04:38:36AM -
and asking another
> node doesn't help.
No, as of now only switching off proxy_next_upstream completely is
available.
On the other hand, with proxy_next_upstream switched off you may
still configure retries to additional (or other) upstream servers
via the error_page directive, using a configuratio
Is it possible to limit the number of upstreams asked? I have four upstreams
defined and it makes no sense to ask all of them. If two of them time out or
error, there is possibly something wrong with the request and asking another
node doesn't help.
Posted at Nginx Forum:
http://forum.nginx.org/rea
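For reference, later nginx releases added exactly this: `proxy_next_upstream_tries` and `proxy_next_upstream_timeout` (both 1.7.5+) bound how many servers are tried and for how long. A sketch (the upstream name `backend` is illustrative):

```nginx
location / {
    proxy_pass http://backend;
    proxy_next_upstream error timeout;
    proxy_next_upstream_tries 2;     # ask at most two upstreams
    proxy_next_upstream_timeout 5s;  # give up retrying after 5 seconds
}
```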
Good idea, but we have to keep in mind it should depend on the location
context. THX
Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,236982,237064#msg-237064
cpu requests.
>
> So my idea is to use proxy_next_upstream for simple requests as usual, and
> it works perfectly.
> And for heavy requests, based on URL, I want to pass them through to the BE
> with the lowest CPU load by specifying a small proxy_connect_timeout and
> using proxy_next_upstream timeou
location ^~ /simple_requests {
    proxy_read_timeout 2s;
    proxy_send_timeout 2s;
    proxy_connect_timeout 10ms;
    proxy_next_upstream error timeout invalid_header http_500 http_502
                        http_503 http_504;
    proxy_pass http://super_upstream;
}
location ^~ /very_heavy_requests {
    send_timeout 60s;
    proxy_read_timeout
Hello
In our setup we have NGINX as a front-end and several back-ends.
The problem is our load profile: we have a lot of simple and fast HTTP
requests, and very few requests that are very heavy in terms of time and BE
CPU.
So my idea is to use proxy_next_upstream for simple requests as usual, and it