I've tried Chef, Puppet, and Ansible at three different shops. I wanted to like
Chef and Puppet because they are Ruby-based (which I like), but they seemed
clunky, ugly, and heavyweight. Ansible seemed to solve the easy problems. When
I had a startup I just used Capistrano for deployments, with erb
Maxim Dounin Wrote:
---
> Hello!
> ...
> Note well that this configuration implies that every request to
> "/out/..." will generate a subrequest to "/auth". As such, you
> can safely move the "limit_req zone=auth ..." limit to "location
> /out".
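Concretely, I read that suggestion as putting both limits on the main request, something like the following (the zone names, rates, and backend names are illustrative, not taken from the thread):

```nginx
location /out/ {
    # Since every request to /out/ triggers exactly one subrequest to
    # /auth, limiting the auth zone here on the main request is
    # equivalent to limiting the subrequest itself.
    limit_req zone=content burst=20 nodelay;
    limit_req zone=auth burst=10;
    auth_request /auth;
    proxy_pass http://content_backend;
}
```

Multiple limit_req directives in one location are allowed; a request is rejected if any of the referenced zones is over its rate.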
Hello!
On Tue, Jul 17, 2018 at 01:08:13PM -0400, jarstewa wrote:
> Hi, I currently have an nginx configuration that uses the limit_req
> directive to throttle upstream content requests. Now I'm trying to add
> similar rate limiting for auth requests, but I haven't been able to get the
> auth throttle to kick in during testing (whereas the content throttle works
> as expected).
I prefer simple setups that start with the question, "What is the least I can
do to manage this new thing?"
I've worked with all the tools mentioned below (Chef, Puppet, Ansible, and
many more) but scaled everything back to keep it really simple this
time. I have 6 Nginx servers (pa
Last year I gave a talk at nginx.conf describing some success we have had using
Octopus Deploy as a CD tool for nginx configs. The particular Octopus features
that make this good are:
* Octopus gives us a good variable replacement / template system so that I can
define a template along with var
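As an illustration of the variable-replacement idea: Octopus substitutes `#{...}` placeholders at deploy time, so an nginx template might look like the sketch below. The variable names here are invented, not from the talk:

```nginx
# nginx.conf.template -- placeholders filled in by Octopus per environment
server {
    listen #{ListenPort};
    server_name #{SiteHostName};

    location / {
        proxy_pass http://#{BackendHost}:#{BackendPort};
    }
}
```

The same template can then be deployed to dev, staging, and production with only the variable set changing.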
Hi, I currently have an nginx configuration that uses the limit_req
directive to throttle upstream content requests. Now I'm trying to add
similar rate limiting for auth requests, but I haven't been able to get the
auth throttle to kick in during testing (whereas the content throttle works
as expected).
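For context, a minimal sketch of the kind of setup being described. Zone names, rates, and backends are hypothetical; the likely reason the auth limit never fires is that limit_req is not evaluated for auth_request subrequests, only for main requests:

```nginx
# Inside the http{} block: one shared-memory zone per limit.
limit_req_zone $binary_remote_addr zone=content:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=auth:10m rate=5r/s;

server {
    listen 80;

    location /out/ {
        auth_request /auth;              # subrequest to the auth endpoint
        limit_req zone=content burst=20; # content throttle (works as described)
        proxy_pass http://content_backend;
    }

    location = /auth {
        internal;
        limit_req zone=auth burst=10;    # auth throttle that never kicks in
        proxy_pass http://auth_backend;
    }
}
```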
Hello!
On Tue, Jul 17, 2018 at 10:13:35AM -0400, bwmetc...@gmail.com wrote:
> Cool. Thanks. Slightly related... given that the proxy_timeout is 10m by
> default, can ephemeral ports on the backend be shared by different clients
> making requests to nginx?
No. The port is bound to a session wi
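The behavior in question comes from the stream module's defaults. A sketch with made-up addresses, showing where proxy_timeout would be shortened so that idle UDP sessions, and the ephemeral ports bound to them, are released sooner:

```nginx
stream {
    upstream dns_backends {
        server 192.0.2.10:53;
        server 192.0.2.11:53;
    }

    server {
        listen 53 udp;
        proxy_pass dns_backends;
        proxy_timeout 30s;   # default is 10m; shorter means ports are freed sooner
        proxy_responses 1;   # end the session after one reply per datagram
    }
}
```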
Cool. Thanks. Slightly related... given that the proxy_timeout is 10m by
default, can ephemeral ports on the backend be shared by different clients
making requests to nginx?
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,280540,280552#msg-280552
Hello!
On Mon, Jul 16, 2018 at 08:27:07PM -0400, bwmetc...@gmail.com wrote:
> A couple of questions regarding UDP load balancing. If a UDP listener is
> configured to expect a response from its upstream nodes, is it possible to
> have another IP outside of the pool of upstream nodes send a respo
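A minimal sketch of the kind of UDP listener being asked about, with illustrative addresses. Whether a reply from an address outside the upstream pool would be accepted is exactly the open question in the post, so nothing here should be read as answering it:

```nginx
stream {
    upstream udp_pool {
        server 192.0.2.10:5000;
        server 192.0.2.11:5000;
    }

    server {
        listen 5000 udp;
        proxy_pass udp_pool;
        # How many datagrams nginx expects back from the chosen
        # upstream before it considers the session complete.
        proxy_responses 2;
    }
}
```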