Re: worker_connections are not enough, reusing connections with idle workers

2022-06-07 Thread Roger Fischer
limits are for the user that nginx runs as (only the master process runs as root). Roger > On Jun 7, 2022, at 2:29 PM, Sergey A. Osokin wrote: > > On Tue, Jun 07, 2022 at 01:18:36PM -0700, Roger Fischer wrote: >> >> We are simulating 1000 clients. Some get cache h
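
A minimal sketch of raising the per-worker open-file limit so that worker_connections is not capped by the ulimit of the user nginx runs as; the numbers are assumptions, not recommendations:

    worker_processes auto;
    # raise the per-worker open-file limit so worker_connections is not
    # silently capped by the OS limit of the nginx user (values illustrative)
    worker_rlimit_nofile 20480;

    events {
        worker_connections 10240;
    }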

Re: worker_connections are not enough, reusing connections with idle workers

2022-06-07 Thread Roger Fischer
there anything we can configure to more evenly distribute the connections? Thanks… Roger > On Jun 3, 2022, at 8:40 PM, Sergey A. Osokin wrote: > > Hi Roger, > > hope you're doing well. > > On Fri, Jun 03, 2022 at 05:38:07PM -0700, Roger Fischer wrote: >> Hello,
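
If uneven accept distribution is the concern, one commonly suggested knob is SO_REUSEPORT, which gives every worker its own listening socket so the kernel spreads new connections across workers; a sketch, with the listen address assumed:

    server {
        listen 443 ssl reuseport;   # per-worker listening sockets (nginx 1.9.1+)
        # rest of the server block unchanged
    }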

worker_connections are not enough, reusing connections with idle workers

2022-06-03 Thread Roger Fischer
Hello, my understanding is that worker_connections applies to each worker (e.g. when set to 1024, 10 worker processes could handle up to 10240 connections). But we are seeing that 1024 worker_connections are not enough, with connections being reused in one worker while other workers are idle. Is there somet
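
For reference, a sketch of the per-worker arithmetic described above (values assumed):

    worker_processes 10;

    events {
        # the limit applies per worker: 10 workers x 1024 = 10240 connections total,
        # and a proxied request consumes at least two (client side + upstream side)
        worker_connections 1024;
    }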

Preferred method to reopen the log with logrotate

2022-06-01 Thread Roger Fischer
Hello, there seem to be two methods to tell nginx to re-open the log file after the file was rotated (we use logrotate). 1) nginx -s reopen 2) kill -USR1 Which is the preferred method, and why? I am asking because we have seen nginx -s reopen failing because of a transient issue with the con
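
A typical logrotate stanza using the USR1 signal directly; paths and rotation policy are assumptions:

    /var/log/nginx/*.log {
        daily
        rotate 14
        compress
        delaycompress
        postrotate
            # ask the nginx master process to reopen its log files after rotation
            [ -f /var/run/nginx.pid ] && kill -USR1 "$(cat /var/run/nginx.pid)"
        endscript
    }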

corrupted cache file: proxy_cache_valid ignored

2022-02-07 Thread Roger Fischer
Hello, we have observed a case where it seems that the proxy_cache_valid directive is ignored. nginx version: 1.19.9 Config: proxy_cache_valid 200 206 30d; Scenario: * A cache file was corrupted (a file system issue). A part of the section that contains the headers had been overwritten with
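
For context, a minimal cache configuration around the directive quoted above; the cache path, zone name, and upstream name are assumptions:

    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=media:100m
                     max_size=500g inactive=30d use_temp_path=off;

    location / {
        proxy_pass http://upstream_pool;
        proxy_cache media;
        # freshness is normally taken from the header section of the cache file,
        # which is why a corrupted header can defeat this setting
        proxy_cache_valid 200 206 30d;
    }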

large number of cache files

2020-09-10 Thread Roger Fischer
Hello, from a practical perspective, what would be considered an unreasonably large number of cache files (unique cache keys) in a single nginx server? 1M, 10M, 100M? With a large cache, would there be any significant benefit in using multiple caches (multiple key_zones) in a single nginx serv
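
As a rough sizing sketch: the nginx docs put key storage at about 8,000 keys per 1 MB of keys_zone, so tens of millions of keys translate into gigabytes of shared memory. All values below are assumptions:

    # ~10M keys at ~8k keys per MB of keys_zone => roughly 1.25 GB of shared memory
    proxy_cache_path /data/cache levels=1:2 keys_zone=big:1280m
                     max_size=4000g inactive=60d;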

Re: proxy_cache_path 'inactive' vs http cache-control / expires headers?

2020-04-03 Thread Roger Fischer
Thanks, Maxim, for correcting my misunderstanding. With what frequency is the cache manager run? Roger > On Apr 3, 2020, at 9:26 AM, Maxim Dounin wrote: > > Hello! > > On Fri, Apr 03, 2020 at 08:33:43AM -0700, Roger Fischer wrote: > >>> You can just set the ina
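
The cache manager's pace is tunable through proxy_cache_path parameters (available since nginx 1.11.5); the values below are the documented defaults, shown only as a sketch with an assumed path and zone:

    # manager_files:     max items deleted per iteration (default 100)
    # manager_threshold: max duration of one iteration   (default 200ms)
    # manager_sleep:     pause between iterations        (default 50ms)
    proxy_cache_path /data/cache keys_zone=one:10m
                     manager_files=100 manager_threshold=200ms manager_sleep=50ms;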

Re: proxy_cache_path 'inactive' vs http cache-control / expires headers?

2020-04-03 Thread Roger Fischer
> You can just set the inactive time longer than your possible maximum expire > time for the objects then the cache manager won't purge the cache files even > if the object is still valid but not accessed. That may only have a small impact. As far as I understand: NGINX will remove an item only
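
A sketch of the pairing described in the quoted advice, with names and times assumed:

    # keep 'inactive' at least as long as the longest validity, so the cache
    # manager does not evict entries that are still fresh but rarely accessed
    proxy_cache_path /data/cache keys_zone=one:10m inactive=30d;

    proxy_cache one;
    proxy_cache_valid 200 206 30d;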

filter to modify upstream response before it is cached

2020-01-10 Thread Roger Fischer
Hello, is there a hook into the nginx processing to modify the response body (and headers) before they are cached when using proxy_pass? I am aware of the body filters (http://nginx.org/en/docs/dev/development_guide.html#http_body_filters

Multiple server_name directives in same server block?

2019-12-20 Thread Roger Fischer
Hello, is it possible to have multiple server_name directives in the same server block? I.e. is the following possible? server { listen 1.2.3.4:443 ssl; server_name *.site1.org *.site2.org; server_name ~^app1.*\.site3\.org$; …. Or do I need to create a second server block? Than
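
For illustration, nginx accepts the server_name directive more than once within a server block and accumulates the names, so the form shown in the question should work as written (sketch; addresses and names are from the question, the rest assumed):

    server {
        listen 1.2.3.4:443 ssl;
        server_name *.site1.org *.site2.org;
        server_name ~^app1.*\.site3\.org$;
        # remaining directives unchanged
    }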

Check if a resource is in the cache

2019-10-29 Thread Roger Fischer
Hello, is there a way to check if a requested resource is in the cache? For example, “if” has the option “-f”, which could be used to check if a static file is present. Is there something similar for a cached resource? Thanks… Roger
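
Stock nginx has no direct analogue of "if (-f ...)" for cache entries, but the cache status of each request can be exposed for inspection (sketch; the header name is an arbitrary choice):

    # $upstream_cache_status reports HIT, MISS, EXPIRED, BYPASS, STALE, etc.
    add_header X-Cache-Status $upstream_cache_status always;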

proxy_store: is it used much?

2019-10-28 Thread Roger Fischer
Hello, proxy_store seems to be a much simpler alternative to “cache” pseudo-static resources. But there is very little discussion of it on the Internet or nginx forum (compared to proxy_cache). Is there anything non-obvious that speaks against the use of proxy_store? Thanks… Roger
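
For comparison, a proxy_store sketch along the lines of the documented usage pattern; paths and the backend name are assumptions:

    location /media/ {
        root /data/www;
        error_page 404 = @fetch;           # serve the stored copy if present, else fetch
    }

    location @fetch {
        internal;
        proxy_pass http://backend;
        proxy_store on;
        proxy_store_access user:rw group:rw all:r;
        root /data/www;                    # store fetched files under the same root
    }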

Add header to response before response is cached

2019-10-07 Thread Roger Fischer
Hello, is there a way in an NGINX HTTP Proxy to add a header to the response before it is cached? I would like to capture some information from the request and add it to the cached response, so that all clients getting the cached response receive that info. Thanks… Roger
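
add_header only decorates the response on its way to the client, so it is not written into the cache file; what the cache stores are the headers received from upstream. One workaround, assuming the upstream can be changed to echo a request header back, is to inject the information on the upstream request so it returns in the cached response (sketch; the header name is invented):

    # hypothetical: the upstream copies X-Request-Info into its response headers,
    # which are then stored in the cache and replayed on every HIT
    proxy_set_header X-Request-Info $remote_addr;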

NGINX send 206 but wget retries

2019-08-02 Thread Roger Fischer
Hello, I am making a byte range request to NGINX using wget. NGINX responds with status code 206 (partial content), but instead of downloading the content, wget retries. Request: wget -S 'http://cache.example.com/video5.ts' --header="Range: bytes=0-1023" Output from wget: --2019-08-02 15:36:
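
As a cross-check independent of wget's retry logic, the same range request can be issued with curl to confirm the 206 and the returned byte count (URL taken from the question):

    curl -sS -D - -o /dev/null \
         -H "Range: bytes=0-1023" \
         "http://cache.example.com/video5.ts"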

proxy_ignore_client_abort with cache

2019-06-19 Thread Roger Fischer
Hello, I am using NGINX (1.17.0) as a reverse proxy with cache. I want the cache to be updated even when the client closes the connection before the response is delivered to the client. Will setting proxy_ignore_client_abort to on do this? Details: The client makes an HTTP range request on a l
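
A minimal sketch of the directive in question; with it enabled, nginx keeps reading the upstream response after the client goes away, so the cache entry can still be completed (backend and zone names assumed):

    location / {
        proxy_pass http://backend;
        proxy_cache media;
        proxy_ignore_client_abort on;   # don't abort the upstream request on client close
    }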

Re: Slow OPTIONS requests with reverse proxy cache

2019-01-23 Thread Roger Fischer
ote: > > Hello! > > On Wed, Jan 23, 2019 at 09:03:33AM -0800, Roger Fischer wrote: > >> I am using the community version of NGINX, where these variables >> are not available (I was quite disappointed by that). > > Again: these variables are available in all known varia

Re: Slow OPTIONS requests with reverse proxy cache

2019-01-23 Thread Roger Fischer
I am using the community version of NGINX, where these variables are not available (I was quite disappointed by that). Roger > On Jan 23, 2019, at 5:23 AM, Maxim Dounin wrote: > > Hello! > > On Tue, Jan 22, 2019 at 04:48:23PM -0800, Roger Fischer wrote: > >>

Slow OPTIONS requests with reverse proxy cache

2019-01-22 Thread Roger Fischer
Hello, I have noticed that the response to OPTIONS requests via a reverse proxy cache are quite slow. The browser reports 600 ms or more of idle time until NGINX provides the response. I am using the community edition of NGINX, so I don’t have any timing for the upstream request. As I understa
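
The upstream timing variables are part of open source nginx and can be logged to see where the 600 ms goes (sketch; log format name and path are assumptions):

    log_format timing '$remote_addr "$request" $status '
                      'rt=$request_time uct=$upstream_connect_time '
                      'uht=$upstream_header_time urt=$upstream_response_time '
                      'cs=$upstream_cache_status';

    access_log /var/log/nginx/timing.log timing;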

Re: Two cache files with same key

2019-01-17 Thread Roger Fischer
ad04a9740_corrected.mpd Roger > On Jan 17, 2019, at 12:12 PM, Lucas Rolff wrote: > > What key do you see and what's your cache key configured as? > > On 17/01/2019, 20.39, "nginx on behalf of Roger Fischer" > wrote: > >Hello, > >I am using

Two cache files with same key

2019-01-17 Thread Roger Fischer
Hello, I am using the nginx cache. I observed a MISS when I expected a HIT. When I look at the cache, I see the same page (file) twice in the cache, the original that I expected to HIT on, and the new one from the MISS (that should have been a HIT). The key, as viewed in the cache file, is exa
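
For reference, the default cache key written out explicitly (sketch); two entries are created whenever any component of the key differs, even if the visible path looks identical:

    # default is $scheme$proxy_host$request_uri; note $request_uri includes the query string
    proxy_cache_key $scheme$proxy_host$request_uri;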

logical or in location directive with regex (multiple location using same block)

2018-11-23 Thread Roger Fischer
Hello, how do I best handle multiple locations that use the same code block? I do not want to repeat the { … } part. Is there a way to do this with logical or between regular expressions? The first variation is that sometimes there are query parameters, and sometimes there are not. The second
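
A sketch of folding several paths into one regex location via alternation (path names invented); note that query strings are not part of location matching, so variants with and without parameters land in the same location:

    location ~ ^/(app1|app2)/ {
        # shared handling for both paths; "?foo=bar" never reaches the matcher
        proxy_pass http://backend;
    }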

Re: Securing the HTTPS private key

2018-11-15 Thread Roger Fischer
nginx. > > But still a memory dump of the app would get you the private key. > > > > > On Fri, 16 Nov 2018 at 00:03, Maxim Dounin <mdou...@mdounin.ru> wrote: > Hello! > > On Wed, Nov 14, 2018 at 12:17:57PM -0800, Roger Fischer wrote: > >

Listen on transient address

2018-11-15 Thread Roger Fischer
Hello, I have an NGINX instance that listens on a tunnel (and some other interfaces). When NGINX was restarted while the tunnel was down (tun device and address did not exist), NGINX failed to start. [emerg] 1344#1344: bind() to 38.88.78.19:443 failed (99: Cannot assign requested address) Rel
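
One commonly used way to let nginx bind an address that does not exist yet is the kernel's non-local bind switch (assuming the address is IPv4 and this relaxation is acceptable in your environment):

    # allow bind() to succeed even while the tunnel address is absent
    sysctl -w net.ipv4.ip_nonlocal_bind=1
    # or persistently, in /etc/sysctl.conf:
    #   net.ipv4.ip_nonlocal_bind = 1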

Securing the HTTPS private key

2018-11-14 Thread Roger Fischer
Hello, does NGINX support any mechanisms to securely access the private key of server certificates? Specifically, could NGINX make a request to a key store, rather than reading from a local file? Are there any best practices for keeping private keys secure? I understand the basics. The key fi
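
Stock nginx reads the key from a file; what it does offer out of the box is a separate passphrase file for encrypted keys (paths assumed):

    ssl_certificate      /etc/nginx/tls/example.crt;
    ssl_certificate_key  /etc/nginx/tls/example.key;   # key file readable by root only
    ssl_password_file    /etc/nginx/tls/example.pass;  # passphrases, one per line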

Add Header to cached response?

2018-09-10 Thread Roger Fischer
Hello, is there a way to add a header to the cached response? I have used ngx_http_headers_module’s add_header directive, which adds the header to the response at the time the response is generated. What I would like to do is to add a response header at the time when the upstream request is ma

Re: Ignore Certificate Errors

2018-09-10 Thread Roger Fischer
is in widespread use. Roger > On Aug 30, 2018, at 11:13 AM, Maxim Dounin wrote: > > Hello! > > On Thu, Aug 30, 2018 at 09:09:44AM -0700, Roger Fischer wrote: > >> Hello, >> >> is there a way to make NGINX more forgiving on TLS certificate errors? Or >

Ignore Certificate Errors

2018-08-30 Thread Roger Fischer
Hello, is there a way to make NGINX more forgiving on TLS certificate errors? Or would that have to be done in OpenSSL instead? When I use openssl s_client, I get the following errors from the upstream server: 140226185430680:error:0407006A:rsa routines:RSA_padding_check_PKCS1_type_1:block ty

Actions after cache miss is detected

2018-08-10 Thread Roger Fischer
Hello, Is there a way to perform an action after a cache miss is detected but before the request is forwarded to the upstream server? Specifically, on a cache miss I want to: Return a response instead of forwarding the request to the upstream server. Trigger a handler (module or script) that exe