Good idea, but we have to keep in mind that it should depend on the location context.
Thanks.
Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,236982,237064#msg-237064
For better understanding, here is my config snippet:
    upstream super_upstream {
        keepalive 128;
        server be1 max_fails=45 fail_timeout=3s;
        server be2 max_fails=45 fail_timeout=3s;
        server be3 max_fails=45 fail_timeout=3s;
    }

    server {
        server_name pytn.ru;

        location ^~
Hello,

In our setup we have NGINX as a front-end and several back-ends.
The problem is our load profile: we have a lot of simple, fast HTTP
requests, and very few but very heavy requests in terms of time and
back-end CPU. So my idea is to use proxy_next_upstream for the simple
requests as usual, and it works
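The per-location split discussed above could look roughly like the sketch below. The location paths, the /heavy/ prefix, and the timeout values are my own illustration, not taken from the original config; the idea is simply that fast requests keep the default retry behaviour while heavy requests disable proxy_next_upstream so a slow request is not replayed on every back-end in turn:

```nginx
upstream super_upstream {
    keepalive 128;
    server be1 max_fails=45 fail_timeout=3s;
    server be2 max_fails=45 fail_timeout=3s;
    server be3 max_fails=45 fail_timeout=3s;
}

server {
    server_name pytn.ru;

    # Fast, simple requests: retry on the next back-end
    # on error or timeout (the default behaviour).
    location / {
        proxy_pass http://super_upstream;
        proxy_next_upstream error timeout;
    }

    # Heavy, CPU-intensive requests: never pass them to the
    # next back-end, and allow a longer read timeout instead.
    # (proxy_next_upstream off requires nginx 1.9.13+.)
    location /heavy/ {
        proxy_pass http://super_upstream;
        proxy_next_upstream off;
        proxy_read_timeout 300s;
    }
}
```

Note that a timed-out heavy request still counts toward max_fails for that server unless retries are disabled this way, which is why the location-level distinction matters.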