It sounds like you want to control which upstream servers are available
based on a set of criteria.
This setup might do what you want, though perhaps not implemented in
exactly the way you have in mind:
https://medium.com/@sigil66/dynamic-nginx-upstreams-from-consul-via-lua-nginx-module-2bebc935989b
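For context, the core mechanism that article builds on is the balancer
phase of lua-nginx-module. A minimal sketch, assuming OpenResty (or nginx
built with lua-nginx-module plus lua-resty-core); the peer address is
hard-coded here purely as a stand-in for whatever you would fetch from
Consul:

    upstream dynamic_backend {
        server 0.0.0.1;   # placeholder; the real peer is chosen in Lua below

        balancer_by_lua_block {
            local balancer = require "ngx.balancer"
            -- In the article this address comes from Consul; it is
            -- hard-coded here only to keep the sketch self-contained.
            local ok, err = balancer.set_current_peer("10.0.0.5", 8080)
            if not ok then
                ngx.log(ngx.ERR, "failed to set upstream peer: ", err)
                return ngx.exit(500)
            end
        }
    }

    server {
        listen 80;
        location / {
            proxy_pass http://dynamic_backend;
        }
    }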
Igor,
Not sure if this will help, but I gather several metrics from the front end
to determine how long back-end responses take.
Here's a snippet from my log format that might help:
"upstream_server":"$upstream_addr", "req_total_time":"$request_time",
"req_upstream_time":"$upstream_re
Is there a practical maximum size limit for config files?
What is possible - 100k, 1MB, 10MB?
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
The only thing I have ever seen hold an old worker process open after a
restart (in my case a config reload) is long-lived websocket connections.
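If lingering connections (websockets or otherwise) turn out to be the
cause, one knob worth knowing about is worker_shutdown_timeout, available
since nginx 1.11.11. A minimal sketch, with an arbitrary 30-second value:

    # main context of nginx.conf: during a graceful shutdown or reload,
    # old workers close any remaining connections after 30 seconds
    # instead of waiting for clients to go away on their own.
    worker_shutdown_timeout 30s;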
> On Oct 5, 2016, at 5:59 AM, Sharan J wrote:
>
> Hi,
>
> While reloading nginx, sometimes old worker processes are not exiting
I'm trying to reduce the number of location blocks I need for an
application by routing on a query parameter instead.
My URL looks like this: https://www.example.com/api/v1/proxy/?tid=
If I do this in my location block:
location ~* /api/v1/proxy {
proxy_pass http://origin.
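For what it's worth, the usual way to branch on a query parameter without
multiplying location blocks is a map on $arg_tid. A sketch only; the
upstream names, addresses, and tid values below are invented:

    # http{} context: choose an upstream group from the "tid" query argument.
    map $arg_tid $proxy_target {
        default  origin_default;    # hypothetical fallback group
        1001     origin_tenant_a;   # hypothetical group for ?tid=1001
        1002     origin_tenant_b;   # hypothetical group for ?tid=1002
    }

    upstream origin_default  { server 10.0.0.10:8080; }
    upstream origin_tenant_a { server 10.0.0.11:8080; }
    upstream origin_tenant_b { server 10.0.0.12:8080; }

    # server{} context: one location handles every tid.
    location /api/v1/proxy/ {
        proxy_pass http://$proxy_target;
    }

Because proxy_pass uses a variable, nginx resolves the target at request
time; as long as the value matches a defined upstream group, no resolver
directive is needed.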