proxy_cache_background_update ignores regular expression match when updating
Hello,

I'm running into an issue where a proxied location with a regular expression match does not correctly update the cache when using proxy_cache_background_update. The update request to the backend seems to be missing the captured parameters from the regex. I've created a small test case that demonstrates this in nginx 1.15.7. Hopefully I'm not missing anything; I checked the docs and didn't seem to find anything that would explain this behavior.

nginx version: nginx/1.15.7
built by gcc 6.3.0 20170516 (Debian 6.3.0-18+deb9u1)
built with OpenSSL 1.1.0f 25 May 2017 (running with OpenSSL 1.1.0j 20 Nov 2018)
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fdebug-prefix-map=/data/builder/debuild/nginx-1.15.7/debian/debuild-base/nginx-1.15.7=. -specs=/usr/share/dpkg/no-pie-compile.specs -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-specs=/usr/share/dpkg/no-pie-link.specs -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'

Configuration:

proxy_cache_path /tmp keys_zone=test:1m max_size=1g inactive=1h use_temp_path=off;

server {
    listen 127.0.0.1:8010;
    root /tmp/nginx;
}

server {
    listen 127.0.0.1:8011;

    location ~ /test/(regular|expression)$ {
        proxy_pass http://127.0.0.1:8010/test/$1;
        proxy_cache test;
        proxy_cache_background_update on;
        proxy_cache_use_stale updating;
        proxy_cache_valid 10s;
    }
}

Initial testing with proxy_cache_background_update off. Log excerpts show requests to both servers.

First request (one to the frontend, one to the backend, as expected):

127.0.0.1 - - [04/Dec/2018:17:42:31 +] "GET /test/regular HTTP/1.0" 200 8 "-" "curl/7.52.1" "-"
127.0.0.1 - - [04/Dec/2018:17:42:31 +] "GET /test/regular HTTP/1.1" 200 8 "-" "curl/7.52.1" "-"

Second request (served from the frontend cache, all good):

127.0.0.1 - - [04/Dec/2018:17:42:35 +] "GET /test/regular HTTP/1.1" 200 8 "-" "curl/7.52.1" "-"

Third request (cache expired, so a new request to the backend, also good):

127.0.0.1 - - [04/Dec/2018:17:43:14 +] "GET /test/regular HTTP/1.0" 200 8 "-" "curl/7.52.1" "-"
127.0.0.1 - - [04/Dec/2018:17:43:14 +] "GET /test/regular HTTP/1.1" 200 8 "-" "curl/7.52.1" "-"

After setting proxy_cache_background_update on, once the content has expired every request triggers a background update with the wrong URL. The stale content is still served in the meantime.
127.0.0.1 - - [04/Dec/2018:17:44:01 +] "GET /test/ HTTP/1.0" 403 153 "-" "curl/7.52.1" "-"
127.0.0.1 - - [04/Dec/2018:17:44:01 +] "GET /test/regular HTTP/1.1" 200 8 "-" "curl/7.52.1" "-"
127.0.0.1 - - [04/Dec/2018:17:44:15 +] "GET /test/ HTTP/1.0" 403 153 "-" "curl/7.52.1" "-"
127.0.0.1 - - [04/Dec/2018:17:44:15 +] "GET /test/regular HTTP/1.1" 200 8 "-" "curl/7.52.1" "-"
127.0.0.1 - - [04/Dec/2018:17:44:17 +] "GET /test/ HTTP/1.0" 403 153 "-" "curl/7.52.1" "-"
127.0.0.1 - - [04/Dec/2018:17:44:17 +] "GET /test/regular HTTP/1.1" 200 8 "-" "curl/7.52.1" "-"
127.0.0.1 - - [04/Dec/2018:17:44:19 +] "GET /test/ HTTP/1.0" 403 153 "-" "curl/7.52.1" "-"
127.0.0.1 - - [04/Dec/2018:17:44:19 +] "GET /test/regular HTTP/1.1" 200 8 "-" "curl/7.52.1" "-"
127.0.0.1 - - [04/Dec/2018:17:44:21 +] "GET /test/ HTTP/1.0" 403 153 "-" "curl/7.52.1" "-"
127.0.0.1 - - [04/Dec/2018:17:44:21 +] "GET /test/regular HTTP/1.1" 200 8 "-" "curl/7.52.1" "-"

Is this a bug or am I misunderstanding how this is supposed to work?

___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
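One workaround that appears to sidestep the problem in this particular test case (an untested sketch, and it only works here because the frontend and backend URIs are identical) is to drop the URI part of proxy_pass, so the original request URI is forwarded unchanged and no regex captures are needed:

    location ~ /test/(regular|expression)$ {
        # no URI on proxy_pass: the client's request URI is passed to the
        # backend as-is, so the background update subrequest does not have
        # to rebuild it from the $1 capture
        proxy_pass http://127.0.0.1:8010;
        proxy_cache test;
        proxy_cache_background_update on;
        proxy_cache_use_stale updating;
        proxy_cache_valid 10s;
    }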
Re: HTTPS Pinning
In the context of a mobile app, pinning usually means checking that the public key of the server matches what the app expects. There is nothing to configure server-side. If you change the private key used by your SSL certificate, your app will break. Renewing an SSL certificate doesn't usually change the private key, but check your renewal process to be sure. I would also suggest shipping several backup public key hashes in the app, so that if you ever need to rotate your private key you can do so without waiting for an app store update. That said, pinning offers little benefit: if your app is already verifying the certificate, the most this protects you from is a root-cert MITM, e.g. from a corporate network SSL interception product, which is quite rare.

___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: How to compile Nginx with zlib-ng
I regularly build with zlib-ng, unfortunately it requires patching the zlib-ng files to enable zlib compatibility mode as nginx doesn't seem to have a way to pass options to configure. Edit "configure" in the zlib-ng directory and change the line compat=0 to compat=1. Then specify --with-zlib=/path/to/zlib-ng in your nginx configure and you should be set. Be aware that the memory requirements of zlib-ng have changed since support for it was added to nginx, so you will see a lot of "gzip filter failed to use preallocated memory" alerts in your log file when using zlib-ng. On Wed, 22 Mar 2023 at 15:50, Sergey A. Osokin wrote: > Hi Lance, > > thanks for your question. > > Since this is more or less related to nginx development or > new features I'd suggest to use nginx-devel mailing list > instead, thank you. > > On Tue, Mar 21, 2023 at 04:06:00PM -0500, Lance Dockins wrote: > > > > Has anyone had success compiling Nginx with zlib-ng instead of > > default Zlib versions? I seem to be able to compile Nginx with > > standard Zlib and various other Zlib libraries (e.g. Intel > > optimized or Cloudflare) but compiling with Zlib-NG always fails. > > NGINX builds well with zlib. In case of new functionality, like > an ability to build with zlib-ng, the source code requires some > patches. > > > I’ve tried passing in various options via with-zlib-opt to try > > to include the —zlib-compat flag for the Zlib NG configure > > directives but no matter what syntax I use, it seems like it > > always fails (whether I have that param or not). Perhaps I’m > > just struggling with the proper use of Zlib NG in an Nginx > > compile context. > > > > If Nginx should compile with Zlib NG, is there any documentation > > on what params to use in the Nginx compile command to get it to > > work? > > Some ideas can be found inside the zlib-ng project on GH, not > sure is that working solution or not, so you can try. > > I'd also recommend to raise a request in https://trac.nginx.org/nginx/ > about this feature request and provide patches for source code > and documentation. > > Thank you. > > -- > Sergey A. Osokin > ___ > nginx mailing list > nginx@nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > ___ nginx mailing list nginx@nginx.org https://mailman.nginx.org/mailman/listinfo/nginx
Re: How to compile Nginx with zlib-ng
Yes, when using the latest zlib-ng on nginx-1.21.6 I received the alerts. Previous versions of zlib-ng have worked great after the 2021 patch. I tried to update it myself as follows, based on advice from zlib-ng GitHub issues; while it reduced the number of alerts logged, it did not completely solve the issue, so it seems the memory requirements may have changed further. While I would appreciate a proper patch making it into nginx, the seemingly frequent upstream changes may make this difficult to maintain.

-ctx->allocated = 8192 + 16 + (1 << (wbits + 2))
+ctx->allocated = 8192 + 288 + 16 + (1 << (wbits + 2))
                + 131072 + (1 << (memlevel + 8));

On Thu, 23 Mar 2023 at 04:16, Lance Dockins wrote:
> > Thank you, Richard. I’ll give that a shot. I already have to do that sort > of patching with a variety of other things in the build that I use so that > particular adjustment isn’t too bad. > > Just for clarity, are you saying that the hash sizes within zlib-ng have > increased since Maxim’s last patch for that to accommodate zlib-ng? That > patch was back in 2021 and is part of Nginx core now. > https://mailman.nginx.org/pipermail/nginx-devel/2021-April/013945.html > > I think it’s coded to use a 128k hash per that patch. If the hash size has > increased again since that patch, that might justify a bug report to Nginx > devel. Since the code in that patch specifically relates to that error, I > thought I’d ask in case you have still been seeing that error with newer > Nginx versions that have come out since that patch was implemented. > > > -- > Lance > > On Mar 22, 2023 at 5:28 PM -0500, Richard Stanway via nginx > , wrote: > > I regularly build with zlib-ng, unfortunately it requires patching the > zlib-ng files to enable zlib compatibility mode as nginx doesn't seem to have > a way to pass options to configure. > > Edit "configure" in the zlib-ng directory and change the line compat=0 to > compat=1. Then specify --with-zlib=/path/to/zlib-ng in your nginx configure > and you should be set. > > Be aware that the memory requirements of zlib-ng have changed since support > for it was added to nginx, so you will see a lot of "gzip filter failed to > use preallocated memory" alerts in your log file when using zlib-ng. > > > On Wed, 22 Mar 2023 at 15:50, Sergey A. Osokin wrote: >> >> Hi Lance, >> >> thanks for your question. >> >> Since this is more or less related to nginx development or >> new features I'd suggest to use nginx-devel mailing list >> instead, thank you. >> >> On Tue, Mar 21, 2023 at 04:06:00PM -0500, Lance Dockins wrote: >> > >> > Has anyone had success compiling Nginx with zlib-ng instead of >> > default Zlib versions? I seem to be able to compile Nginx with >> > standard Zlib and various other Zlib libraries (e.g. Intel >> > optimized or Cloudflare) but compiling with Zlib-NG always fails. >> >> NGINX builds well with zlib. In case of new functionality, like >> an ability to build with zlib-ng, the source code requires some >> patches. >> >> > I’ve tried passing in various options via with-zlib-opt to try >> > to include the —zlib-compat flag for the Zlib NG configure >> > directives but no matter what syntax I use, it seems like it >> > always fails (whether I have that param or not). Perhaps I’m >> > just struggling with the proper use of Zlib NG in an Nginx >> > compile context. >> > >> > If Nginx should compile with Zlib NG, is there any documentation >> > on what params to use in the Nginx compile command to get it to >> > work?
>> >> Some ideas can be found inside the zlib-ng project on GH, not >> sure is that working solution or not, so you can try. >> >> I'd also recommend to raise a request in https://trac.nginx.org/nginx/ >> about this feature request and provide patches for source code >> and documentation. >> >> Thank you. >> >> -- >> Sergey A. Osokin >> ___ >> nginx mailing list >> nginx@nginx.org >> https://mailman.nginx.org/mailman/listinfo/nginx > > ___ > nginx mailing list > nginx@nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx ___ nginx mailing list nginx@nginx.org https://mailman.nginx.org/mailman/listinfo/nginx
Re: How to compile Nginx with zlib-ng
Thanks for the patch! I've been running it for about an hour and haven't seen the preallocated memory alert since, so it's looking good here. On Fri, 24 Mar 2023 at 03:07, Maxim Dounin wrote: > > Hello! > > On Thu, Mar 23, 2023 at 09:33:19PM +0100, Richard Stanway via nginx wrote: > > > Yes, when using the latest zlib-ng on nginx-1.21.6 I received the > > alerts. Previous versions of zlib-ng have worked great after the 2021 > > patch. I tried to update it myself as follows based on advice of > > zlib-ng GitHub issues, while it reduced the number of alerts logged it > > did not completely solve the issue so it seems the memory requirements > > may have further changed. While I would appreciate a proper patch > > making it into nginx, the seemingly-frequent upstream changes may make > > this difficult to maintain. > > > > -ctx->allocated = 8192 + 16 + (1 << (wbits + 2)) > > +ctx->allocated = 8192 + 288 + 16 + (1 << (wbits + 2)) > > + 131072 + (1 << (memlevel + 8)); > > It looks like there are at least two changes in zlib-ng since I > looked into it: > > - Window bits are no longer forced to 13 on compression level 1. > > - All allocations use custom alloc_aligned() wrapper, and > therefore all allocations are larger than expected by (64 + > sizeof(void*)). > > Further, due to the wrapper nginx sees all allocations as an > allocation of 1 element of a given size, so it misinterprets > some allocations as the state allocation. > > For example, allocations for a 1k responses are as follows (note > "a:8192" in most of the lines, that is, nginx thinks these are > state allocations): > > 2023/03/24 03:26:10 [debug] 36809#100069: *2 http gzip filter > 2023/03/24 03:26:10 [debug] 36809#100069: *2 malloc: 21DEE5C0:176144 > 2023/03/24 03:26:10 [debug] 36809#100069: *2 gzip alloc: n:1 s:6036 a:8192 > p:21DEE5C0 > 2023/03/24 03:26:10 [debug] 36809#100069: *2 gzip alloc: n:1 s:4180 a:8192 > p:21DF05C0 > 2023/03/24 03:26:10 [debug] 36809#100069: *2 gzip alloc: n:1 s:4164 a:8192 > p:21DF25C0 > 2023/03/24 03:26:10 [debug] 36809#100069: *2 gzip alloc: n:1 s:131140 > a:131140 p:21DF45C0 > 2023/03/24 03:26:10 [debug] 36809#100069: *2 gzip alloc: n:1 s:4164 a:8192 > p:21E14604 > 2023/03/24 03:26:10 [debug] 36809#100069: *2 gzip in: 21C31D84 > > Allocations for 4k response are as follows (and generate an > alert): > > 2023/03/24 03:44:29 [debug] 36863#100652: *2 http gzip filter > 2023/03/24 03:44:29 [debug] 36863#100652: *2 malloc: 21DEE5C0:188432 > 2023/03/24 03:44:29 [debug] 36863#100652: *2 gzip alloc: n:1 s:6036 a:8192 > p:21DEE5C0 > 2023/03/24 03:44:29 [debug] 36863#100652: *2 gzip alloc: n:1 s:16468 a:16468 > p:21DF05C0 > 2023/03/24 03:44:29 [debug] 36863#100652: *2 gzip alloc: n:1 s:16452 a:16452 > p:21DF4614 > 2023/03/24 03:44:29 [debug] 36863#100652: *2 gzip alloc: n:1 s:131140 > a:131140 p:21DF8658 > 2023/03/24 03:44:29 [alert] 36863#100652: *2 gzip filter failed to use > preallocated memory: 16452 of 16180 while sending response to client, client: > 127.0.0.1, server: one, request: "GET /t/4k HTTP/1.1", host: "127.0.0.1:8080" > 2023/03/24 03:44:29 [debug] 36863#100652: *2 malloc: 21DC58C0:16452 > 2023/03/24 03:44:29 [debug] 36863#100652: *2 gzip in: 21C31D98 > > The "+ 288" you are using should be enough to cover additional > memory used for alignment, but it is not enough to account > for misinterpretation when using gzip_comp_level above 1 (so nginx > won't allocate additional memory assuming window bits will be > adjusted to 13). 
> > Please try the following patch, it should help with recent versions: > > # HG changeset patch > # User Maxim Dounin > # Date 1679622670 -10800 > # Fri Mar 24 04:51:10 2023 +0300 > # Node ID 67a0999550c3622e51639acb8bde57d199826f7e > # Parent d1cf09451ae84b930ce66fa6d63ae3f7eeeac5a5 > Gzip: compatibility with recent zlib-ng versions. > > It now uses custom alloc_aligned() wrapper for all allocations, > therefore all allocations are larger than expected by (64 + sizeof(void*)). > Further, they are seen as allocations of 1 element. Relevant calculations > were adjusted to reflect this, and state allocation is now protected > with a flag to avoid misinterpreting other allocations as the zlib > deflate_state allocation. > > Further, it no longer forces window bits to 13 on compression level 1, > so the comment was adjusted to reflect this. > > diff --git a/src/http/modules/ngx_http_gzip_filter_module.c > b/src/http/modules/ngx_http_gzip_filter_module.c > --- a/src/http/modules/ngx_http_gzip_filter_
Re: Cookie security for nginx
This is something you should fix on whatever application is setting the cookie. It probably isn't nginx. On Tue, Oct 10, 2017 at 10:04 AM, Johann Spies wrote: > A security scan on our server showed : > > Vulnerability Detection Method > Details: SSL/TLS: > Missing `secure` Cookie Attribute > OID:1.3.6.1.4.1.25623.1.0.902661 > Version used: > $Revision: 5543 > > This is on Debian 8.9. and nginx 1.6.2-5+deb8u5. > > I am uncertain on how to fix this using standard debian packages. > > Can you help me fixing this please? > > Regards > Johann > > > -- > Johann SpiesTelefoon: 021-808 4699 > Databestuurder / Data manager Faks: 021-883 3691 > > Sentrum vir Navorsing oor Evaluasie, Wetenskap en Tegnologie > Centre for Research on Evaluation, Science and Technology > Universiteit Stellenbosch. > > The integrity and confidentiality of this email is governed by these terms > / Hierdie terme bepaal die integriteit en vertroulikheid van hierdie epos. > http://www.sun.ac.za/emaildisclaimer > ___ > nginx mailing list > nginx@nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
[alert] epoll_ctl(1, 575) failed (17: File exists)
Hello,

I have a location that proxies to a websocket server. Clients connect over HTTPS (HTTP2, wss://). Sometimes clients generate the following alerts in the error log when hitting the websocket location:

2017/10/11 21:03:23 [alert] 34381#34381: *1020125 epoll_ctl(1, 603) failed (17: File exists) while proxying upgraded connection, client: x.158, server: www.example.com, request: "GET /websocketpath HTTP/2.0", upstream: "http:///", host: "www.example.com"
2017/10/11 21:44:15 [alert] 34374#34374: *1274194 epoll_ctl(1, 1131) failed (17: File exists) while proxying upgraded connection, client: x.42, server: www.example.com, request: "GET /websocketpath HTTP/2.0", upstream: "http:///", host: "www.example.com"

Here's the location excerpt:

location /websocketpath {
    proxy_read_timeout 300;
    proxy_next_upstream off;
    proxy_buffering off;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Upgrade $http_upgrade;
    proxy_pass http://;
}

Config is otherwise pretty straightforward (static content, fastcgi backends, no AIO). nginx is from the nginx.org Debian repository.

nginx version: nginx/1.13.6
built by gcc 6.3.0 20170516 (Debian 6.3.0-18)
built with OpenSSL 1.1.0f 25 May 2017
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fdebug-prefix-map=/data/builder/debuild/nginx-1.13.6/debian/debuild-base/nginx-1.13.6=. -specs=/usr/share/dpkg/no-pie-compile.specs -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-specs=/usr/share/dpkg/no-pie-link.specs -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'

This seems to have started after upgrading to Debian 9 (which upgraded the OpenSSL library, allowing ALPN and thus HTTP2 to be usable). Previously the connections were mostly HTTP/1.1 and I didn't notice any such messages. Despite the alerts, the access log shows the clients with a 101 status code.

Any idea if this is something on my end I should start looking at, or is this a possible issue with http2 and websockets?

Thanks,
Rich.

___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
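For reference, the $connection_upgrade variable used in the excerpt above is not built in; it is conventionally defined at http level with a map block along these lines (a standard sketch from the nginx docs, not shown in the original config):

    map $http_upgrade $connection_upgrade {
        # send "Connection: upgrade" to the backend only when the client
        # actually requested a protocol upgrade, otherwise "close"
        default upgrade;
        ''      close;
    }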
Re: [alert] epoll_ctl(1, 575) failed (17: File exists)
On Wed, Oct 11, 2017 at 4:14 PM, Valentin V. Bartenev wrote: > > Websockets cannot work over HTTP/2. > > So it appears, I guess I should have checked that! Upon closer examination, all the 101 responses I was seeing in the access log were from HTTP/1.1 clients, the HTTP 2 requests never even got logged in the access log. I'll see if I can rework my application to avoid using websockets. Thanks. ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: when client->server socket is closed also server->client is closed and request is aborted ?
Look at http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ignore_client_abort or http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_ignore_client_abort etc depending on what you're doing with the request. On Thu, Oct 26, 2017 at 1:07 PM, Torppa Jarkko wrote: > I have an old xmlrpc client that seems to close the client->server side of > the socket immediately after it has sent the request to server, server > seems to close the server->client side of socket in response to this. > > I have been trying to find setting for this, cannot find one. > > Also have been trying do dig into the sources to see where this happens, > but no luck so far. > > > ___ > nginx mailing list > nginx@nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
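For a proxied backend, a minimal sketch of the first option could look like this (the location and upstream names are placeholders):

    location /xmlrpc {
        proxy_pass http://backend;
        # keep processing the upstream response even if the client closes
        # its side of the connection before the response is complete
        proxy_ignore_client_abort on;
    }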
Re: cts-submit
You can use ct-submit; once built, the binary can be copied to and run on any system without any dependencies. https://github.com/grahamedgecombe/ct-submit On Mon, Nov 27, 2017 at 10:21 PM, Ángel wrote: > On 2017-11-26 at 14:17 +0100, A. Schulze wrote: > > Hello, > > > > experiments with nginx-ct ¹) show that I need a tool to submit a > certificate to some public logs. > > cts-submit ²) seems useful. But it require me to install php on every > host :-/ > > > > I know there are also python implementations. but > > is anybody aware of an implementation in *plain posix shell + openssl* ? > > > > Andreas > > Doesn't your CA already submit them to the Certificate Transparency > logs? > > ___ > nginx mailing list > nginx@nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: domain only reachable with https:// in front
Your ISP is blocking port 80, so you cannot get redirected to HTTPS. http://www.dslreports.com/faq/11852 On Tue, Nov 28, 2017 at 6:17 PM, Jeff Dyke wrote: > I think it is unfortunate that certbot does it this way, with an if > statement, which i believe is evaluated in every request. I use something > like the following (with your names): > > server { > listen 80 default_server; > listen [::]:80 default_server; > server_name pstn.host www.pstn.host; > return 301 https://$host$request_uri; > } > > > server { > listen 443 ssl default_server; > ssl_certificate /etc/letsencrypt/live/pstn.host/fullchain.pem; > ssl_certificate_key /etc/letsencrypt/live/pstn.host/privkey.pem; > > reset of config > } > > Not part of your question, but I also use the hooks in webroot mode, > rather than nginx, for certbot, so it's never modifies my configuration, as > the sites-enabled files are managed by a configuration management system > across about 100 domains, some with special requirements. > > HTH, > Jeff > > On Tue, Nov 28, 2017 at 11:40 AM, pstnta > wrote: > >> hi, >> >> thanks for answering, >> >> shouldn't that forward everything to https? so shouldn't it work with just >> pstn.host? instead of https://pstn.host >> >> Posted at Nginx Forum: https://forum.nginx.org/read.p >> hp?2,277546,277548#msg-277548 >> >> ___ >> nginx mailing list >> nginx@nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > ___ > nginx mailing list > nginx@nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: Secure Link Expires - URL Signing
Only the server should be generating the tokens, if the client knows the secret it can do whatever it wants. On Wed, Jan 10, 2018 at 10:32 AM, anish10dec wrote: > Let me explain the complete implementation methodology and problem > statement > > URL to be protected > http://site.media.com/mediafiles/movie.m3u8 > > We are generating token on application/client side to send it along with > request so that content is delivered by server only to authorized apps. > > Token Generation Methodology on App/Client > > expire = Current Epoch Time on App/Client + 600 ( 600 so that URL will be > valid for 10 mins) > uri = mediafiles/movie.m3u8 > secret = secretkey > > On Client , MD5 Function is used to generate token by using three above > defined values > token = MD5 Hash ( secret, uri, expire) > > Client passes generated token along with expiry time with URL > http://site.media.com/mediafiles/movie.m3u8?token={generated > value}&expire={value in variable expire} > > > Token Validation on Server > Token and Expire is captured and passed through secure link module > > location / { > > secure_link $arg_token,$arg_expire; > secure_link_md5 "secretkey$uri$arg_expire"; > > //If token generated here matches with token passed in request , content is > delivered > if ($secure_link = "") {return 405;} // token doesn't match > > if ($secure_link = "0") {return 410;} > //If value in arg_expire time is greater current epoch time of server , > content is delivered . > Since arg_expire has epoch time of device + 600 sec so on server it will be > success. If someone tries to access the content using same URL after 600 > sec > , time on server will be greater than time send in arg_expire and thus > request will be denied. > > > Problem Statement > Someone changes the time on his client device to say some future date and > time. In this case same app will generate the token with above mention > methodolgy on client and send it along with request to server. > Server will generate the token at its end using all the values along with > expire time send in URL request ( note here expire time is generated using > future date on device) > So token will match and 1st check will be successful . > In 2nd check since arg_expire has epoch time of future date + 600 sec which > will be obviously greater than current epcoh time of server and request > will be successfully delivered. > Anyone can use same token and extended epoch time with request for that > period of time for which future date was set on device. > > Hopefully now its explainatory . > Please let know if there is a way to protect the content in this scenario. > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,278063,278088#msg-278088 > > ___ > nginx mailing list > nginx@nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: Files still on disc after inactive time
> [alert] 11371#0: worker process 24870 exited on signal 9 This is almost certainly the cause of your problems - you need to figure out why the nginx worker processes are being killed and resolve that. Signal 9 is SIGKILL, so something outside nginx is terminating the workers rather than nginx crashing on its own; the kernel out-of-memory killer is the usual culprit, and a memory leak in a 3rd party module can trigger it. On Fri, Feb 16, 2018 at 10:39 AM, Andrzej Walas wrote: > For me 40-50 GB cache is ok, because I have multiple files like 2-5GB. > Problem in my mind is this that I have settings: > proxy_cache_path /ephemeral/nginx/cache levels=1:2 > keys_zone=proxy-cache:4000m max_size=40g inactive=1d; > but I have over 40GB on disc and files older than 1 day inactive. > > Can you tell me what happend with downloaded part of files when I have: > [error] 16082#0: *1264804 upstream prematurely closed connection while > reading upstream > [crit] 16082#0: *1264770 pwritev() has written only 49152 of 151552 > while reading upstream > This part of file is still on disc and don't deleted after error? > > On most of my proxy rate is 90% HIT to 9% MISS and 1% ERROR. But couple of > them have stats like 10% HIT, 60% MISS, 30% ERROR. > > Sometimes I have problem with MISS on existing file. In logs I see 1 MISS, > after that 10-20 HIT and after that only multiple MISS on this same file. > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,278589,278613#msg-278613 > > ___ > nginx mailing list > nginx@nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: Why are my CGI scripts not executed like PHP ?
PHP-FPM is only for PHP. You'll want something like fcgiwrap for regular CGI files. See https://www.nginx.com/resources/wiki/start/topics/examples/fcgiwrap/ On Fri, Apr 6, 2018 at 6:02 PM, Ralph Seichter wrote: > Hello list, > > I am fairly new to nginx and now have stumbled across an issue I can't > solve. I have successfully configured nginx on Gentoo Linux to run PHP > applications (e.g. phpBB and phpMyAdmin) with php-fpm. > > As far as I understand, php-fpm should also be able to execute "regular > CGI" in the form of Shell-Scripts or Perl, as long as the files are > executable and use shebang-notation to indicate what interpreter they > want to be run with? > > In my test installation CGI scripts are never executed by php-fpm. File > contents are simply piped to the web browser, and I can't figure out > why. I searched the Net and mailing list archives, but did not find a > solution, so I thought it best to ask here. > > Output of nginx -V, configuration dump and test.cgi are attached. Your > help is appreciated. > > -Ralph > > > nginx version: nginx/1.13.11 > built with OpenSSL 1.0.2n 7 Dec 2017 > TLS SNI support enabled > configure arguments: --prefix=/usr --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error_log --pid-path=/run/nginx.pid > --lock-path=/run/lock/nginx.lock --with-cc-opt=-I/usr/include > --with-ld-opt=-L/usr/lib64 --http-log-path=/var/log/nginx/access_log > --http-client-body-temp-path=/var/lib/nginx/tmp/client > --http-proxy-temp-path=/var/lib/nginx/tmp/proxy > --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi > --http-scgi-temp-path=/var/lib/nginx/tmp/scgi > --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --with-compat > --with-http_v2_module --with-pcre --with-pcre-jit > --with-http_addition_module > --with-http_dav_module --with-http_perl_module --with-http_realip_module > --add-module=external_module/headers-more-nginx-module-0.33 > --add-module=external_module/ngx-fancyindex-0.4.2 > --add-module=external_module/ngx_http_auth_pam_module-1.5.1 > --add-module=external_module/nginx-dav-ext-module-0.1.0 > --add-module=external_module/echo-nginx-module-0.61 > --add-module=external_module/nginx-auth-ldap- > 42d195d7a7575ebab1c369ad3fc5d78dc2c2669c > --add-module=external_module/nginx-module-vts-0.1.15-gentoo > --with-http_ssl_module --without-stream_access_module > --without-stream_geo_module --without-stream_limit_conn_module > --without-stream_map_module --without-stream_return_module > --without-stream_split_clients_module --without-stream_upstream_ > hash_module > --without-stream_upstream_least_conn_module > --without-stream_upstream_zone_module --without-mail_pop3_module > --with-mail > --with-mail_ssl_module --user=nginx --group=nginx > > # configuration file /etc/nginx/nginx.conf: > > user nginx nginx; > worker_processes 1; > > error_log /var/log/nginx/error_log info; > > events { > worker_connections 1024; > use epoll; > } > > http { > include /etc/nginx/mime.types; > default_type application/octet-stream; > > log_format main > '$remote_addr - $remote_user [$time_local] ' > '"$request" $status $bytes_sent ' > '"$http_referer" "$http_user_agent" ' > '"$gzip_ratio"'; > > client_header_timeout 10m; > client_body_timeout 10m; > send_timeout 10m; > > connection_pool_size 256; > client_header_buffer_size 1k; > large_client_header_buffers 4 2k; > request_pool_size 4k; > > gzip off; > > output_buffers 1 32k; > postpone_output 1460; > > sendfile on; > tcp_nopush on; > tcp_nodelay on; > > keepalive_timeout 75 20; > > ignore_invalid_headers on; > > 
index index.html; > > server { > listen *:8080 default_server; > access_log /var/log/nginx/access_log main; > error_log /var/log/nginx/error_log info; > > server_name _; > root /var/www/localhost/htdocs; > > # Alternative: temp redirect to HTTPS > #return 302 https://$host$request_uri; > } > > include local/*.conf; > } > > # configuration file /etc/nginx/local/20-test.conf: > > server { > listen *:8443 ssl default_server; > server_name test.mydomain.tld; > access_log /var/log/nginx/ssl_access_log main; > error_log /var/log/nginx/ssl_error_log debug; > > ssl on; > ssl_certificate /etc/ssl/mydomain/cert.pem; > ssl_certificate_key /etc/ssl/mydomain/key.pem; > > root /var/www/localhost/test; > index test.cgi; > > location ~ \.cgi$ { > # Test for non-existent scripts or throw a 404 error > try_files $uri =404; > > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME $request_filename; > fastcgi_pass unix:/run/php7-fpm.sock; > } > } > > # configuration file /etc/nginx/mime.types: > > types { > text/htmlhtml htm shtml; > text/css css; > text/xml xml; > image/gifgif; > image/jpeg jpeg jpg; > application
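To illustrate the fcgiwrap suggestion above, a minimal location sketch (the socket path is an assumption; adjust it to wherever your distribution's fcgiwrap instance listens):

    location ~ \.cgi$ {
        # test for non-existent scripts or throw a 404 error
        try_files $uri =404;

        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $request_filename;
        # hand CGI scripts to fcgiwrap instead of php-fpm
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
    }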
Re: Nginx throttling issue?
Even though it shouldn't be reaching your limits, limit_req does delay in 1 second increments which sounds like it could be responsible for this. You should see error log entries if this happens (severity warning). Have you tried without the limit_req option? You can also use the nodelay option to avoid the delaying behavior. http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req On Thu, Apr 5, 2018 at 6:45 AM, Peter Booth wrote: > John, > > I think that you need to understand what is happening on your host > throughout the duration of the test. Specifically, what is happening with > the tcp connections. If you run netstat and grep for tcp and do this in a > loop every say five seconds then you’ll see how many connections peak get > created. > If the thing you are testing exists in production then you are lucky. You > can do the same in production and see what it is that you need to replicate. > > You didn’t mention whether you had persistent connections (http keep > alive) configured. This is key to maximizing scalability. You did say that > you were using SSL. If it were me I’d use a load generator that more > closely resembles the behavior of real users on a website. Wrk2, Tsung, > httperf, Gatling are examples of some that do. Using jmeter with zero think > time is a very common anti pattern that doesn’t behave anything like real > users. I think of it as the lazy performance tester pattern. > > Imagine a real web server under heavy load from human beings. You will see > thousands of concurrent connections but fewer concurrent requests in > flight. With the jmeter zero think time model then you are either creating > new connections or reusing them - so either you have a shitload of > connections and your nginx process starts running out of file handles or > you are jamming requests down a single connection- neither of which > resemble reality. > > If you are committed to using jmeter for some reason then use more > instances with real thinktimes. Each instance’s connection wil have a > different source port > > Sent from my iPhone > > > On Apr 4, 2018, at 5:20 PM, John Melom wrote: > > > > Hi Maxim, > > > > I've looked at the nstat data and found the following values for > counters: > > > >> nstat -az | grep -I listen > > TcpExtListenOverflows 0 0.0 > > TcpExtListenDrops 0 0.0 > > TcpExtTCPFastOpenListenOverflow 0 0.0 > > > > > > nstat -az | grep -i retra > > TcpRetransSegs 12157 0.0 > > TcpExtTCPLostRetransmit 0 0.0 > > TcpExtTCPFastRetrans2700.0 > > TcpExtTCPForwardRetrans 11 0.0 > > TcpExtTCPSlowStartRetrans 0 0.0 > > TcpExtTCPRetransFail0 0.0 > > TcpExtTCPSynRetrans 25 0.0 > > > > Assuming the above "Listen" counters provide data about the overflow > issue you mention, then there are no overflows on my system. While > retransmissions are happening, it doesn't seem they are related to listen > queue overflows. > > > > > > Am I looking at the correct data items? Is my interpretation of the > data correct? If so, do you have any other ideas I could investigate? > > > > Thanks, > > > > John > > > > -Original Message- > > From: nginx [mailto:nginx-boun...@nginx.org] On Behalf Of John Melom > > Sent: Tuesday, March 27, 2018 8:52 AM > > To: nginx@nginx.org > > Subject: RE: Nginx throttling issue? > > > > Maxim, > > > > Thank you for your reply. I will look to see if "netstat -s" detects > any listen queue overflows. 
> > > > John > > > > > > -Original Message- > > From: nginx [mailto:nginx-boun...@nginx.org] On Behalf Of Maxim Dounin > > Sent: Tuesday, March 27, 2018 6:55 AM > > To: nginx@nginx.org > > Subject: Re: Nginx throttling issue? > > > > Hello! > > > >> On Mon, Mar 26, 2018 at 08:21:27PM +, John Melom wrote: > >> > >> I am load testing our system using Jmeter as a load generator. > >> We execute a script consisting of an https request executing in a > >> loop. The loop does not contain a think time, since at this point I > >> am not trying to emulate a “real user”. I want to get a quick look at > >> our system capacity. Load on our system is increased by increasing > >> the number of Jmeter threads executing our script. Each Jmeter thread > >> references different data. > >> > >> Our system is in AWS with an ELB fronting Nginx, which serves as a > >> reverse proxy for our Docker Swarm application cluster. > >> > >> At moderate loads, a subset of our https requests start experiencing > >> to a 1 second delay in addition to their normal response time. The > >> delay is not due to resource contention. > >> System utilizations remain low. The response times cluster around 4 > >> values: 0 millilseconds, 50 milliseconds, 1 second, and 1.050 > >> seconds. Right now, I am most interested in un
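To illustrate the limit_req suggestion at the top of this reply, a sketch of the nodelay form (the zone name, rate and burst values are placeholders to tune for your traffic):

    limit_req_zone $binary_remote_addr zone=perip:10m rate=50r/s;

    server {
        listen 80;

        location / {
            # burst absorbs short spikes; nodelay serves them immediately
            # instead of delaying them to conform to the configured rate
            limit_req zone=perip burst=100 nodelay;
        }
    }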
Re: Connection refused
You should check your upstream logs to see why it is closing connections or crashing. On Tue, May 15, 2018 at 6:22 PM Ricky Gutierrez wrote: > Any help? > > El lun., 14 may. 2018 20:02, Ricky Gutierrez > escribió: > >> hello list, I have a reverse proxy with nginx front end and I have the >> backend with nginx some applications in php7 with mariadb, reviewing >> the log I see a lot of errors like this: >> >> 2018/05/09 17:44:58 [error] 14633#14633: *1761 connect() failed (111: >> Connection refused) while connecting to upstream, client: >> 186.77.203.203, server: web.mydomain.com, request: "GET >> /imagenes/slide7.jpg HTTP/2.0", upstream: >> "http://192.168.11.7:80/imagenes/slide7.jpg";, host: >> "www.mydomain.com", referrer: "https://www.mydomain.com/"; >> >> 2018/05/09 17:45:09 [error] 14633#14633: *1761 connect() failed (111: >> Connection refused) while connecting to upstream, client: >> 186.77.203.203, server: web.mydomain.com, request: "GET >> /imagenes/slide8.jpg HTTP/2.0", upstream: >> "http://192.168.11.7:80/imagenes/slide8.jpg";, host: >> "www.mydomain.com", referrer: "https://www.mydomain.com/"; >> >> 2018/05/09 17:45:12 [error] 14633#14633: *1761 upstream prematurely >> closed connection while reading response header from upstream, client: >> 186.77.203.203, server: web.mydomain.com, request: "GET >> /imagenes/slide6.jpg HTTP/2.0", upstream: >> "http://192.168.11.7:80/imagenes/slide6.jpg";, host: >> "www.mydomain.com", referrer: "https://www.mydomain.com/"; >> >> I made a change according to this link on github, but I can not remove >> the error >> >> https://github.com/owncloud/client/issues/5706 >> >> my config : >> >> proxy_http_version 1.1; >> proxy_set_header Connection ""; >> proxy_set_header X-Real-IP $remote_addr; >> proxy_set_header Host $host; >> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >> proxy_connect_timeout 900s; >> proxy_send_timeout 900s; >> proxy_read_timeout 900s; >> proxy_buffer_size 64k; >> proxy_buffers 16 32k; >> proxy_busy_buffers_size 64k; >> proxy_redirect off; >> proxy_request_buffering off; >> proxy_buffering off; >> proxy_pass http://backend1; >> >> regardss >> >> -- >> rickygm >> >> http://gnuforever.homelinux.com >> > ___ > nginx mailing list > nginx@nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: rewrite question
This is almost certainly not Google, as Google obeys robots.txt. The & to &amp; conversion is another sign of a poor-quality crawler. Check the RDNS and you will find it's probably some IP faking the Google UA; I suggest blocking at the network level. On Fri, Jun 8, 2018 at 1:57 AM shiz wrote: > Hi, > > Recently, Google has started spidering my website and in addition to normal > pages, appended "&" to all urls, even the pages excluded by robots.txt > > e.g. page.php?page=aaa -> page.php?page=aaa& > > Any idea how to redirect/rewrite this? > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,280093,280093#msg-280093 > > ___ > nginx mailing list > nginx@nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: rewrite question
That IP resolves to rate-limited-proxy-72-14-199-18.google.com - this is not the Google search crawler, hence why it ignores your robots.txt. No one seems to know for sure what the rate-limited-proxy IPs are used for. They could represent random Chrome users using the Google data saving feature, hence the varying user-agents you will see. Either way, they are probably best not blocked, as they could represent many end user IPs. Maybe there is an X-Forwarded-For header you could look at. The Google search crawler will resolve to an IP like crawl-66-249-64-213.googlebot.com. On Mon, Jun 11, 2018 at 5:05 PM Francis Daly wrote: > On Thu, Jun 07, 2018 at 07:57:43PM -0400, shiz wrote: > > Hi there, > > > Recently, Google has started spidering my website and in addition to > normal > > pages, appended "&" to all urls, even the pages excluded by robots.txt > > > > e.g. page.php?page=aaa -> page.php?page=aaa& > > > > Any idea how to redirect/rewrite this? > > Untested, but: > > if ($args ~ "&$") { return 400; } > > should handle all requests that end in the four characters you report. > > You may prefer a different response code. > > Good luck with it, > > f > -- > Francis Dalyfran...@daoine.org > ___ > nginx mailing list > nginx@nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: nginx 1.20.0 coverity errors
This is like reading a book, not understanding some words and then complaining to the author to fix their spelling. Please don't rely on SAST analysis without understanding the code. I would expect the vast majority of these are false positives - provide evidence that these are real bugs if you want them to be taken seriously. On Thu, 7 Dec 2023 at 02:35, BILL wrote: > Hi, > > We have a coverity testing on nginx 1.20.0 and we got some errors. > Have any plan to resolve these errors? > > > Checker Number > ARRAY_VS_SINGLETON 3 > BAD_FREE 3 > BUFFER_SIZE 1 > CHECKED_RETURN 10 > COPY_PASTE_ERROR 1 > DC.WEAK_CRYPTO 18 > DEADCODE 8 > FORWARD_NULL 49 > MISSING_RESTORE 1 > NO_EFFECT 8 > NULL_RETURNS 8 > OVERRUN 12 > PW.INCLUDE_RECURSION 8 > RESOURCE_LEAK 5 > REVERSE_INULL 5 > SIGN_EXTENSION 1 > SIZEOF_MISMATCH 8 > STACK_USE 1 > STRING_NULL 1 > TAINTED_SCALAR 1 > TOCTOU 12 > UNINIT 10 > UNREACHABLE 63 > UNUSED_VALUE 4 > USE_AFTER_FREE 1 > Total 242 > ___ > nginx mailing list > nginx@nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > ___ nginx mailing list nginx@nginx.org https://mailman.nginx.org/mailman/listinfo/nginx
Re: WordPress Website not rendered properly via nginx reverse proxy
You could consider adding a CSP header to cause clients to automatically fetch those resources over HTTPS: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/upgrade-insecure-requests On Wed, 16 Oct 2024 at 00:06, Nikolaos Milas via nginx wrote: > On 16/10/2024 12:19 π.μ., Nikolaos Milas via nginx wrote: > > > ... > > I tried that but no, removing the trailing slash did not change anything. > > ... > > I found that the problem is that, as the proxied page is rendered over > SSL, browsers are auto-blocking parts of the page as non-secure. > > This is due, I guess, to the fact that multiple page items are probably > hardcoded as http rather than as https links or as absolute rather than > as relative paths (images etc). > > I'll have to ask the developer to check the app throughout. > > Sorry for the fuss. > > All the best, > Nick > > ___ > nginx mailing list > nginx@nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > ___ nginx mailing list nginx@nginx.org https://mailman.nginx.org/mailman/listinfo/nginx
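A minimal sketch of what that could look like on the nginx server terminating HTTPS (the server_name, upstream name and the "always" flag are assumptions to adapt to your setup):

    server {
        listen 443 ssl;
        server_name example.com;

        location / {
            proxy_pass http://wordpress_backend;
            # ask browsers to upgrade any http:// subresources on the page
            # to https:// before fetching them
            add_header Content-Security-Policy "upgrade-insecure-requests" always;
        }
    }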