Possible to modify response headers from a proxied request before the response is written to the cache? (modified headers should be written to disk)

2019-03-11 Thread Manuel
Hello,

nginx writes the response from a proxy to disk, e.g.:
[...]
Server: nginx
Date: Mon, 11 Mar 2019 23:23:28 GMT
Content-Type: image/png
Content-Length: 45360
Connection: close
Expect-CT: max-age=0, report-uri="https://openstreetmap.report-uri.com/r/d/ct/reportOnly"
ETag: "314b65190a8968893c6c400f29b13369"
Cache-Control: max-age=126195
Expires: Wed, 13 Mar 2019 10:26:43 GMT
Access-Control-Allow-Origin: *
X-Cache: MISS from trogdor.openstreetmap.org
X-Cache-Lookup: HIT from trogdor.openstreetmap.org:3128
Via: 1.1 trogdor.openstreetmap.org:3128 (squid/2.7.STABLE9)
Set-Cookie: qos_token=031042; Max-Age=3600; Domain=openstreetmap.org; Path=/
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
[...]

Is it possible to modify the Cache-Control and Expires headers before the
response is written to disk?

The config:

  location /tiles/ {
    proxy_http_version 1.1;
    proxy_ignore_headers "Cache-Control" "Expires" "Set-Cookie";
    proxy_cache_valid any 30d;

    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X_FORWARDED_PROTO https;
    proxy_set_header Host $proxy_host;
    proxy_ssl_server_name on;
    proxy_ssl_name $proxy_host;

    proxy_ssl_certificate /etc/nginx/cert.pem;
    proxy_ssl_certificate_key /etc/nginx/key.pem;

    expires 30d;
    proxy_cache_lock on;
    proxy_cache_valid 200 302 30d;
    proxy_cache_valid 404 1m;
    proxy_cache_key "$request_uri";
    proxy_redirect off;
    proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
    add_header X-Cache-Status $upstream_cache_status;

    # add_header Cache-Control public;
    add_header Last-Modified "";
    add_header ETag "";

    proxy_cache tiles;
    proxy_pass https://openstreetmap_backend/;
  }

The problem is: the cached tiles on disk do not have "Cache-Control:
max-age=2592000" but "Cache-Control: max-age=126195", regardless of setting
proxy_ignore_headers "Cache-Control".
I assumed that setting proxy_ignore_headers "Cache-Control"; together with
"expires 30d;" would remove the upstream header from the response and write
"Cache-Control" and "Expires" headers matching the 30d instead.


Or do I have to do this:

browser ->
nginx, which caches and, if necessary, requests a new tile via ->
nginx, which sets expires 30d; and calls the tile server

Kind regards,
Manuel

Re: Possible to modify response headers from a proxied request before the response is written to the cache? (modified headers should be written to disk)

2019-03-12 Thread Manuel
Hi Maxim,

thanks for taking the time to answer my question.

> From practical point of view, however, these should be enough to
> return correct responses to clients.  What is stored in the cache
> file is irrelevant.


Well, the Expires header is in the cached file, and that was the problem.
The expiry was not 30d but roughly 1.x days,
so the cache would request upstream too early, because upstream
returned Cache-Control: max-age=126195.

I want to cache the upstream resource for 30d,
regardless of the cache headers returned by upstream.

My solution now is a two-step approach:
Step one: check the cache; if the resource is expired
or not cached, nginx calls itself to get the resource.
Step two: call upstream and override the expiry
headers with 30d, then return the response to the cache.
The cache is now happy with a 30d expiry header :-)
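
In config terms, a minimal sketch of that two-step setup (the internal port
8081 and the cache zone name are assumptions for illustration, not my exact
config):

  # step one: the caching layer; on a miss it asks the second layer below
  location /tiles/ {
    proxy_cache        tiles;
    proxy_cache_key    "$request_uri";
    proxy_cache_valid  200 302 30d;
    proxy_pass         http://127.0.0.1:8081;
  }

  # step two: a second server block on the same host; it talks to the real
  # upstream, hides its caching headers and emits a 30d expiry instead
  server {
    listen 127.0.0.1:8081;

    location /tiles/ {
      proxy_hide_header  Cache-Control;
      proxy_hide_header  Expires;
      expires            30d;
      proxy_pass         https://openstreetmap_backend/;
    }
  }

That way the response reaching the caching layer (and its cache file) already
carries "Cache-Control: max-age=2592000".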

Kind regards,
Manuel


Re: Difference between Mainline and Stable Nginx version

2020-09-25 Thread Manuel
Kaushal,

If you look at the image
https://www.nginx.com/wp-content/uploads/2014/04/branch.png
I personally would only use the mainline version. If a fix quietly addresses
a security vulnerability and is not considered a major bug fix, it won't get
into stable.

Best,
Manuel


Am Do., 24. Sept. 2020 um 16:47 Uhr schrieb Maxim Konovalov :

> Hello,
>
> On 24.09.2020 17:16, Kaushal Shriyan wrote:
> > Hi,
> >
> > I am running CentOS Linux release 7.8.2003 (Core) and referring
> > to https://nginx.org/en/linux_packages.html#RHEL-CentOS. Is there any
> > difference between the stable and mainline versions? Should we use
> > stable or mainline for a production environment?
> >
> [...]
>
> We published a blog post on this topic a while ago:
>
> https://www.nginx.com/blog/nginx-1-6-1-7-released/
>
> --
> Maxim Konovalov

Re: Help beating cloudflare

2023-02-03 Thread Manuel
Hi,

do you forward all headers the browser sends to the server?
The Chrome version you are sending is very old.
You need to pretend that you are the browser.
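
For example, something along these lines in the proxy config might help (just
a sketch; "the_newspaper_backend" is a placeholder, and this alone will not
defeat TLS or HTTP/2 fingerprinting):

  location / {
    proxy_pass             https://the_newspaper_backend;
    proxy_ssl_server_name  on;

    # nginx forwards most browser headers by default; make sure nothing in
    # your config overrides them, and if you hard-code a User-Agent, use a
    # current browser string rather than an old Chrome one
    proxy_set_header User-Agent       $http_user_agent;
    proxy_set_header Accept           $http_accept;
    proxy_set_header Accept-Language  $http_accept_language;
    proxy_set_header Referer          $http_referer;
    proxy_set_header Cookie           $http_cookie;
  }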

Kind regards,
Manuel




> Am 03.02.2023 um 08:54 schrieb Lukas Tribus :
> 
> 
> On Friday, 3 February 2023, Saint Michael  wrote:
>> I have a reverse proxy, but the newspaper that I am proxying is
>> protected by Cloudflare, and they block me immediately, even if I use a
>> different IP. So somehow they know how to identify my reverse proxy.
>> How is my request different from a regular browser's? What is giving me up?
>> Can somebody give an example of the rules needed so my proxy passes
>> for a regular person?
> 
> OS fingerprinting, TLS fingerprinting, H2/H3 feature fingerprinting.
> 
> You will not beat Cloudflare anti bot functionality with a few nginx 
> settings... this is a rabbit hole.
> 
> 
> 
> 


Re: Help beating cloudflare

2023-02-03 Thread Manuel
How cool is that.

Now I am curious: what was the solution? :-)

> Am 03.02.2023 um 19:46 schrieb Saint Michael :
> 
> 
> Yes
> 2 years ago nginx was very popular.
> 
> Federico
> 
>> On Fri, Feb 3, 2023, 12:53 PM Payam Chychi  wrote:
>> Nice job! Though I think what you mean is that you found the answer by 
>> searching, chatgpt or otherwise :)
>> 
>> Keep in mind, chatgpt is trained on 2y old data :)
>> 
>>> On Fri, Feb 3, 2023 at 9:14 AM Saint Michael  wrote:
>>> I won already.
>>> Thanks to chatgpt.
>>> I asked the question and it gave the answer.
>>> 
>>>> On Fri, Feb 3, 2023, 11:22 AM Payam Chychi  wrote:
>>>> Lol, that is the point…. Proxies do stuff to the connection that makes it 
>>>> easy to detect if you know what you are looking for :)
>>>> 
>>>> As it stands… you are not going to win this one.
>>>> 
>>>> 
>>>>> On Fri, Feb 3, 2023 at 3:57 AM Saint Michael  wrote:
>>>>> I am sure that it can be done. I am just passing everything to them.
>>>>> 


Re: njs-0.8.1

2023-09-12 Thread Manuel
Hello,

thank you for all the work on njs.
We will probably use it in one of our next projects.

Regarding the example

> let body = async reply.text();

should it be const body = await reply.text(); ?

Kind regards,
Manuel


> Am 13.09.2023 um 01:10 schrieb Dmitry Volyntsev :
> 
> Hello,
> 
> I'm glad to announce a new release of NGINX JavaScript module (njs).
> 
> Notable new features:
> - Periodic code execution:
> js_periodic directive specifies a content handler to run at a regular interval.
> The handler receives a session object as its first argument; it also has
> access to global objects such as ngx.
> 
> : example.conf:
> :  location @periodics {
> :# to be run at 1 minute intervals in worker process 0
> :js_periodic main.handler interval=60s;
> :
> :# to be run at 1 minute intervals in all worker processes
> :js_periodic main.handler interval=60s worker_affinity=all;
> :
> :# to be run at 1 minute intervals in worker processes 1 and 3
> :js_periodic main.handler interval=60s worker_affinity=0101;
> :
> :resolver 10.0.0.1;
> :js_fetch_trusted_certificate /path/to/ISRG_Root_X1.pem;
> :  }
> :
> : example.js:
> :  async function handler(s) {
> :let reply = async ngx.fetch('https://nginx.org/en/docs/njs/');
> :let body = async reply.text();
> :
> :ngx.log(ngx.INFO, body);
> :  }
> 
> Learn more about njs:
> 
> - Overview and introduction:
> https://nginx.org/en/docs/njs/
> - NGINX JavaScript in Your Web Server Configuration:
> https://youtu.be/Jc_L6UffFOs
> - Extending NGINX with Custom Code:
> https://youtu.be/0CVhq4AUU7M
> - Using node modules with njs:
> https://nginx.org/en/docs/njs/node_modules.html
> - Writing njs code using TypeScript definition files:
> https://nginx.org/en/docs/njs/typescript.html
> 
> Feel free to try it and give us feedback on:
> 
> - Github:
> https://github.com/nginx/njs/issues
> - Mailing list:
> https://mailman.nginx.org/mailman/listinfo/nginx-devel
> 
> Additional examples and howtos can be found here:
> 
> - Github:
> https://github.com/nginx/njs-examples
> 
> Changes with njs 0.8.1   12 Sep 2023
> 
>   nginx modules:
> 
>   *) Feature: introduced js_periodic directive.
>   The directive specifies a JS handler to run at regular intervals.
> 
>   *) Feature: implemented items() method for a shared dictionary.
>  The method returns all the non-expired key-value pairs.
> 
>   *) Bugfix: fixed size() and keys() methods of a shared dictionary.
> 
>   *) Bugfix: fixed erroneous exception in r.internalRedirect()
>  introduced in 0.8.0.
> 
>   Core:
> 
>   *) Bugfix: fixed incorrect order of keys in
>  Object.getOwnPropertyNames().


Re: Debugging Nginx Memory Spikes on Production Servers

2023-09-20 Thread Manuel
Hello,

You could look into dmesg; the out-of-memory kill should have left a trace of
the process there.

You could also attach gdb to a running nginx worker, or start nginx under gdb,
and inspect it when the spike happens.

You could also log all requests and then, after the server has crashed, replay
them to be confident that the crash is reproducible.
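
For the replay idea, a minimal sketch of a dedicated request log (the log path
and format name are just examples):

  # inside the http {} block: record enough of every request to replay it later
  log_format replay '$request_method $scheme://$host$request_uri "$http_user_agent"';
  access_log /var/log/nginx/replay.log replay;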

What does ChatGPT say? 😅

Do you run the latest nginx version?

Any obscure modules / extensions?

Kind regards,
Manuel


> Am 20.09.2023 um 18:56 schrieb Lance Dockins :
> 
> 
> Are there any best practices or processes for debugging sudden memory spikes 
> in Nginx on production servers?  We have a few very high-traffic servers that 
> are encountering events where the Nginx process memory suddenly spikes from 
> around 300mb to 12gb of memory before being shut down by an out-of-memory 
> termination script.  We don't have Nginx compiled with debug mode and even if 
> we did, I'm not sure that we could enable that without overly taxing the 
> server due to the constant high traffic load that the server is under.  Since 
> it's a server with public websites on it, I don't know that we could filter 
> the debug log to a single IP either.
> 
> Access, error, and info logs all seem to be pretty normal.  Internal 
> monitoring of the Nginx process doesn't suggest that there are major 
> connection spikes either.  Theoretically, it is possible that there is just a 
> very large sudden burst of traffic coming in that is hitting our rate limits 
> very hard and bumping the memory that Nginx is using until the OOM 
> termination process closes Nginx (which would prevent Nginx from logging the 
> traffic).  We just don't have a good way to see where the memory in Nginx is 
> being allocated when these sorts of spikes occur and are looking for any good 
> insight into how to go about debugging that sort of thing on a production 
> server.
> 
> Any insights into how to go about troubleshooting it?
> 
> -- 
> Lance Dockins
> 


Re: announcing freenginx.org

2024-02-14 Thread Manuel
Good Evening Maxim,

thank you for the work.

I am speechless. My personal opinion:
@F5 get an advisor for open source
and maybe read something about enshittification m(

TT

Will follow freenginx then.
Thx.


> Am 14.02.2024 um 18:59 schrieb Maxim Dounin :
> 
> Hello!
> 
> As you probably know, F5 closed Moscow office in 2022, and I no
> longer work for F5 since then.  Still, we’ve reached an agreement
> that I will maintain my role in nginx development as a volunteer.
> And for almost two years I was working on improving nginx and
> making it better for everyone, for free.
> 
> Unfortunately, some new non-technical management at F5 recently
> decided that they know better how to run open source projects.  In
> particular, they decided to interfere with security policy nginx
> uses for years, ignoring both the policy and developers’ position.
> 
> That’s quite understandable: they own the project, and can do
> anything with it, including doing marketing-motivated actions,
> ignoring developers position and community.  Still, this
> contradicts our agreement.  And, more importantly, I no longer able
> to control which changes are made in nginx within F5, and no longer
> see nginx as a free and open source project developed and
> maintained for the public good.
> 
> As such, starting from today, I will no longer participate in nginx
> development as run by F5.  Instead, I’m starting an alternative
> project, which is going to be run by developers, and not corporate
> entities:
> 
> http://freenginx.org/
> 
> The goal is to keep nginx development free from arbitrary corporate
> actions.  Help and contributions are welcome.  Hope it will be
> beneficial for everyone.
> 
> 
> --
> Maxim Dounin
> http://freenginx.org/


Re: Basic protection for different IPs

2015-10-08 Thread Manuel Thoenes
Hi Ian,

simply combine your basic auth with
==
auth_basic "";
auth_basic_user_file ;

satisfy any;
allow 127.0.0.1;
allow ::1;
deny all;
==
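
In a full location block that could look like this (the realm string and the
htpasswd path are placeholders, not the exact values to use):

  location /protected/ {
    satisfy any;

    # either the client connects from localhost ...
    allow 127.0.0.1;
    allow ::1;
    deny  all;

    # ... or it has to present valid basic-auth credentials
    auth_basic            "Restricted";
    auth_basic_user_file  /etc/nginx/.htpasswd;
  }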

Regards,
Manu

NGINX: Reverse Proxy (SSL) with non-ssl backend

2014-05-26 Thread Nelson Manuel Marques

Hi,

I currently run a small system which consists of an Apache HTTP backend with
PHP on port 8080 (no SSL, localhost only), a Varnish HTTP accelerator on port
9000 (localhost) and an NGINX reverse proxy (SSL).

I am facing a small issue with this setup: when I select checkboxes and the
like and hit submit (e.g. an application setup form), nothing happens… The
boxes get unticked and I remain on the same screen. If I bind Apache or
Varnish to all interfaces and hit their ports directly, everything works. I
believe this might be an issue with my nginx setup.

My nginx configuration (vhost, nginx.conf is the default):



server {
    listen        80;
    server_name   foobar.local;
    return 301    https://foobar.local/$request_uri;
}

server {
    listen        443 ssl;
    server_name   foobar.local;

    # virtual host error and access logs in /var/log/nginx
    access_log    /var/log/nginx/foobar.local-access.log;
    error_log     /var/log/nginx/foobar.local.vm-error.log;

    # gzip compression configuration
    gzip              on;
    gzip_comp_level   7;
    gzip_min_length   1000;
    gzip_proxied      any;

    # SSL configuration; generated cert
    keepalive_timeout     60;
    ssl_protocols         SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers           ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
    ssl_certificate       /etc/nginx/certs/self-ssl.crt;
    ssl_certificate_key   /etc/nginx/certs/self-ssl.key;
    ssl_session_cache     shared:SSL:5m;
    ssl_session_timeout   5m;
    ssl_prefer_server_ciphers  on;

    client_max_body_size 2M;

    location / {
        proxy_pass http://127.0.0.1:8080/;
        add_header Front-End-Https   on;
        proxy_next_upstream   error timeout invalid_header http_500 http_502 http_503 http_504;
        #proxy_set_header     Accept-Encoding   "";
        proxy_set_header      Host              $http_host;
        proxy_set_header      X-Real-IP         $remote_addr;
        proxy_set_header      X-Forwarded-For   $proxy_add_x_forwarded_for;
        allow all;
        proxy_ignore_client_abort on;
        proxy_redirect off;
    }
}



NGINX built and analyzed

2013-03-17 Thread Antonio Manuel Muñiz Martín
Hi guys,

We've taken the liberty of configuring a build of NGINX on a daily
basis. So, every day (if there are changes in the source code) a new
build will run.
After each build, a Sonar analysis will be performed.

This is the build link: http://live.clinkerhq.com/jenkins/job/nginx-build
And this is the analysis results link:
http://live.clinkerhq.com/sonar/dashboard/index/3656

So, NGINX has an eye on its code now :-)

Cheers,
Antonio.

-- 
Antonio Manuel Muñiz Martín
Software Developer at klicap - ingeniería del puzle

work phone + 34 954 894 322
www.klicap.es | blog.klicap.es
