Rewrite -- failure

2020-04-14 Thread Paul
New to this list (lurked for a couple of weeks), so hope you'll bear 
with me. I'm trying to get a charity's volunteers set up to work from home.


Using nginx 1.14.0 (latest on Ubuntu 14.04LTS -- all up to date; #nginx 
-V below) as a front end for a number of servers using Apache 2.4.


My problem is that I need to split serv1.example.com to two physical 
servers (both fully functional on LAN). The first (192.168.aaa.bbb) 
serving static https works fine. But I cannot "rewrite" (redirect, 
re-proxy?) to the second server (192.168.xxx.yyy, Perl cgi) where the 
request comes in as https://serv1.example.com/foo and I need to get rid 
of "foo"


	"rewrite ^(.*serv1\.example\.com\/)foo\/(.*) $1$2 permanent;" (tried 
permanent, break, last and no flags)


is valid as a PCRE regex, but logs give me a 404 trying to find "foo" 
which has nothing to do with the cgi root:


[14/Apr/2020:16:14:19 -0400] "GET /foo HTTP/1.1" 404 2471

What I am trying for is "GET / HTTP/1.1" 200

Here's my server config.  Any all assistance would be greatly 
appreciated -- many thanks and stay well -- Paul



server {

listen 443 ssl;
# [4 lines managed by Certbot, working perfectly]

server_name serv1.example.com;

access_log /var/log/nginx/access.log;
error_log  /var/log/nginx/mysite-error_log;

proxy_buffering off;

location / {  # static server, html, works perfectly,
proxy_pass http://192.168.aaa.bbb;
proxy_set_header Host $host;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   }

location /foo {   # big db server, perfect on LAN, PERL, cgi
# rewrite ^/foo(.*) /$1 break;   #tried permanent, break, last 
and no flags
# rewrite ^/foo/(.*)$ /$1 last;   #tried permanent, break, last 
and no flags
rewrite ^(.*serv1\.example\.com\/)foo\/(.*) $1$2 permanent; 
#tried permanent, break, last and no flags

proxy_pass http://192.168.xxx.yyy:8084;
proxy_set_header Host $host;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   }

}

server {
if ($host = serv1.example.com) {
return 301 https://$host$request_uri;
} # managed by Certbot

# automatically sets to https if someone comes in on http
listen 80;
listen 8084;
server_name serv1.example.com;
rewrite ^   https://$host$request_uri? permanent;
}
_

nginx -V
nginx version: nginx/1.14.0 (Ubuntu)
built with OpenSSL 1.1.1  11 Sep 2018
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 
-fdebug-prefix-map=/build/nginx-GkiujU/nginx-1.14.0=. 
-fstack-protector-strong -Wformat -Werror=format-security -fPIC 
-Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions 
-Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx 
--conf-path=/etc/nginx/nginx.conf 
--http-log-path=/var/log/nginx/access.log 
--error-log-path=/var/log/nginx/error.log 
--lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid 
--modules-path=/usr/lib/nginx/modules 
--http-client-body-temp-path=/var/lib/nginx/body 
--http-fastcgi-temp-path=/var/lib/nginx/fastcgi 
--http-proxy-temp-path=/var/lib/nginx/proxy 
--http-scgi-temp-path=/var/lib/nginx/scgi 
--http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit 
--with-http_ssl_module --with-http_stub_status_module 
--with-http_realip_module --with-http_auth_request_module 
--with-http_v2_module --with-http_dav_module --with-http_slice_module 
--with-threads --with-http_addition_module 
--with-http_geoip_module=dynamic --with-http_gunzip_module 
--with-http_gzip_static_module --with-http_image_filter_module=dynamic 
--with-http_sub_module --with-http_xslt_module=dynamic 
--with-stream=dynamic --with-stream_ssl_module --with-mail=dynamic 
--with-mail_ssl_module


  \\\||//
   (@ @)
ooO_(_)_Ooo__
|__|_|_|_|_|_|_|_|
|___||_|_|_|_|_|_||
|_|_| mailto:p...@stormy.ca _|||
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


SSL and port number [was: Rewrite -- failure]

2020-04-21 Thread Paul
Thanks for your input. I have spent quite some time on this, and have 
failed on "rewrite".


It all works using a different port number but *without* SSL -- the 
moment I add the Certbot back in (see config below) I get "Error code: 
SSL_ERROR_RX_RECORD_TOO_LONG".


Also, same server, on default port 80, works perfectly as https, but if 
I add :80 to the requested URL, I get the same "Error code: 
SSL_ERROR_RX_RECORD_TOO_LONG"...


All suggestions warmly welcomed, thanks. ...and stay well - Paul.

server {

listen 8084;
#listen 443 ssl;

#ssl_certificate 
/etc/letsencrypt/live/serv1.example.com/fullchain.pem; # managed by Certbot
#ssl_certificate_key 
/etc/letsencrypt/live/serv1.example.com/privkey.pem; # managed by Certbot

#include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
#ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

server_name my_app;

access_log /var/log/nginx/access.log;
error_log  /var/log/nginx/ships-error_log;

proxy_buffering off;

location / {
proxy_pass http://192.168.xxx.yyy:8084;
proxy_set_header Host $host;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   }

}

#server {
#if ($host = serv1.example.com) {
#return 301 https://$host$request_uri;
#} # managed by Certbot

# automatically sets to https if someone comes in on http
#listen 8084;
#listen 443 ssl;
#server_name serv1.example.com;
#rewrite ^   https://$host$request_uri? permanent;
#}





On 2020-04-14 6:39 p.m., Francis Daly wrote:

On Tue, Apr 14, 2020 at 04:38:51PM -0400, Paul wrote:

Hi there,


My problem is that I need to split serv1.example.com to two physical servers
(both fully functional on LAN). The first (192.168.aaa.bbb) serving static
https works fine. But I cannot "rewrite" (redirect, re-proxy?) to the second
server (192.168.xxx.yyy, Perl cgi) where the request comes in as
https://serv1.example.com/foo and I need to get rid of "foo"


http://nginx.org/r/proxy_pass -- proxy_pass can (probably) do what
you want, without rewrites. The documentation phrase to look for is
"specified with a URI".


"rewrite ^(.*serv1\.example\.com\/)foo\/(.*) $1$2 permanent;" (tried
permanent, break, last and no flags)


"rewrite" (http://nginx.org/r/rewrite) works on the "/foo" part, not the
"https://"; or the "serv1.example.com" parts of the request, which is why
that won't match your requests.


 location /foo {   # big db server, perfect on LAN, PERL, cgi
 # rewrite ^/foo(.*) /$1 break;   #tried permanent, break, last and
no flags


That one looks to me to be most likely to work; but you probably need
to be very clear about what you mean when you think "it doesn't work".

In general - show the request, show the response, and describe the response
that you want instead.


 # rewrite ^/foo/(.*)$ /$1 last;   #tried permanent, break, last and
no flags
 rewrite ^(.*serv1\.example\.com\/)foo\/(.*) $1$2 permanent; #tried
permanent, break, last and no flags
 proxy_pass http://192.168.xxx.yyy:8084;
 proxy_set_header Host $host;
 proxy_http_version 1.1;
 proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}


I suggest trying

 location /foo/ {
 proxy_pass http://192.168.xxx.yyy:8084/;
 }

(note the trailing / in both places) and then seeing what else needs to
be added.

Note also that, in any case, if you request /foo/one.cgi which is really
upstream's /one.cgi, and the response body includes a link to /two.png,
then the browser will look for /two.png not /foo/two.png, which will
be sought on the other server. That may or may not be what you want,
depending on how you have set things up.

That is: it is in general non-trivial to reverse-proxy a service at a
different places in the url hierarchy from where the service believes
it is located. Sometimes a different approach is simplest.


server {

# automatically sets to https if someone comes in on http
 listen 80;
 listen 8084;


Hmm. Is this 8084 the same as 192.168.xxx.yyy:8084 above? If so, things
might get a bit confused.

Good luck with it,

f




  \\\||//
   (@ @)
ooO_(_)_Ooo__
|__|_|_|_|_|_|_|_|
|___||_|_|_|_|_|_||
|_|_| mailto:p...@stormy.ca _|||
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: SSL and port number [was: Rewrite -- failure]

2020-04-28 Thread Paul

On 2020-04-22 3:14 a.m., Francis Daly wrote:

On Tue, Apr 21, 2020 at 07:09:41PM -0400, Paul wrote:

Hi there,

I confess I'm not quite certain what you are reporting here -- if you
can say "with *this* config, I make *this* request and I get *this*
response, but I want *that* response instead", it may be clearer.

However, there is one thing that might be a misunderstanding here:

"listen 8000;" means that nginx will listen for http, so you must make
requests to port 8000 using http not https.

"listen 8001 ssl;" means that nginx will listen for https, so you must
make requests to port 8001 using https not http.

You can have both "listen" directives in the same server{}, but you
still must use the correct protocol on each port, or there will be errors.


Hi Francis,

Thanks. I have the two sites "mostly" working now (full config below), 
but could you please expand on your comment ""listen 8001 ssl;" means 
that nginx will listen for https, so you must make requests to port 8001 
using https not http."


My problem is that app/server A (static html) is working perfectly, but 
app/server B works only if the user's browser requests specifically 
"https://... ", but returns a "400 Bad Request // The plain HTTP request 
was sent to HTTPS port // nginx" if the browser requests http (which I 
believe is the default for most browsers if you paste or type just the 
URL into them.)


In other words, the last few lines of the config. work for port 80 
(sends seamlessly the 301, then the content), but not for port 8084 
(sends only the 400.)


Many thanks -- Paul


# Combined file, two servers for myapps.example.com
# myappa "A" for static site /var//myappa on 192.168.aaa.bbb
# myappb "B" for cgi site /usr/share/myappb on 192.168.xxx.yyy

# Server A
server {

listen 443 ssl;

ssl_certificate 
/etc/letsencrypt/live/myapps.example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key 
/etc/letsencrypt/myapps.example.com/privkey.pem; # managed by Certbot

include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

server_name myapps.example.com;

access_log /var/log/nginx/access.log;
error_log  /var/log/nginx/myapp-error_log;

proxy_buffering off;

location / {
proxy_pass http://myappa;
proxy_set_header Host $host;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   }

}

# Server B
server {

listen 8084 ssl;

ssl_certificate 
/etc/letsencrypt/live/myapps.example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key 
/etc/letsencrypt/live/myapps.example.com/privkey.pem; # managed by Certbot

include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

server_name myapps.example.com;

access_log /var/log/nginx/access.log;
error_log  /var/log/nginx/myapp-error_log;

proxy_buffering off;

location / {
proxy_pass http://myappb:8084;
proxy_set_header Host $host;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   }

}

server {
if ($host = myapps.example.com) {
return 301 https://$host$request_uri;
} # managed by Certbot

# automatically sets to https if someone comes in on http
listen 80;
listen 8084;
server_name myapps.example.com;
rewrite ^   https://$host$request_uri? permanent;
}

  \\\||//
   (@ @)
ooO_(_)_Ooo__
|__|_|_|_|_|_|_|_|
|___||_|_|_|_|_|_||
|_|_| mailto:p...@stormy.ca _|||
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: bandwidth limit for specific server

2021-12-24 Thread Paul

On 2021-12-22 8:53 p.m., huiming wrote:

Francis Daly??

 ?0?2extremely appreciative of your feedback.

 ?0?2 ?0?2 ?0?2Any suggestion for third-party module that does some form of 
internal bandwidth limiting?


Disclaimer: this is not a specific nginx reply. And it's not 
third-party. You can profile all uses of bandwidth within a linux 
environment using tc (please see 
<https://man7.org/linux/man-pages/man8/tc.8.html>)


Season's greetings to all on list,
Paul
---
Tired old sys-admin.




thanks
huiming


--?0?2Original?0?2--
*From:* "nginx" ;
*Date:*?0?2Thu, Dec 23, 2021 07:47 AM
*To:*?0?2"nginx";
*Subject:*?0?2Re: bandwidth limit for specific server

On Sat, Dec 18, 2021 at 01:37:11PM +0800, huiming wrote:

Hi there,

 >?0?2?0?2 Is it possible to limit total bandwidth for server?

Using only stock nginx, I believe the answer is "yes, but not in a way
that you would want; so effectively no".

You can limit the number of concurrent (active) connections; you can
limit the rate of requests that nginx will process; and you can limit
the response bandwidth for each request.

By combining those, you can put an upper limit on the response bandwidth;
but I suspect that it is unlikely to be useful for you.



You might be happier looking for a third-party module that does some
form of internal bandwidth limiting; or use something outside of nginx
to limit the bandwidth.

The latter would probably be simpler if your chosen server_name was
the only one this nginx handled; or if the IP address were dedicated to
this server_name -- in those cases, the external thing would not need to
know much (or anything?) about what nginx is doing; it could just handle
"traffic from this process group", or "traffic from this IP address".

 >?0?2?0?2 server {
 >?0?2?0?2?0?2?0?2 listen?0?2?0?2 443 ssl;
 >?0?2?0?2?0?2?0?2 server_name x.x.x.x.x;
 >
 >
 >?0?2?0?2?0?2?0?2 is it possible to limit total bandwidth for this server to for 
example 5M ? not to limit TCP connection bandwidth. need total bandwidth.


It is using the TCP connection bandwidth limit; but if you were to
"limit_rate" to 1m and "limit_req" to 5 r/s, then you would not use
more than 5M (bps) -- but you would probably normally end up using less
than that; because individual requests would not use 5, while multiple
requests would probably lead to lots of small failure responses.

Good luck with it,

f
--
Francis Daly?0?2?0?2?0?2?0?2?0?2?0?2?0?2 fran...@daoine.org
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx




  \\\||//
   (@ @)
ooO_(_)_Ooo__
|__|_|_|_|_|_|_|_|
|___||_|_|_|_|_|_||
|_|_| mailto:p...@stormy.ca _|||
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: django app static files on the different server not get loaded

2022-01-02 Thread Paul

A happy new 2022 to all.

A few thoughts, Sunday afternoon, watching the snow fall...

On 2022-01-02 4:26 p.m., ningja wrote:
[snip]

App1 can load the static files and run correctly from URL
https://test1.com/app1. Test2 has a Django app2 which has static files under
/app/public/static on server test2. I can access it from URL
https://test2.com/app2. Everything works including static files.


What happens if you curl <https://test1.com> and <https://test2.com> 
with no trailing slash and app number?


Assuming that they have unique IP addresses:
	-- you write that your "index.html equivalent page" that you call app1 
for IP test1 serves static content and runs correctly
	-- you also say that for your "index.html equivalent page" that you 
call app2 for IP test2 "Everything works including static files."


The issue is I need to configure nginx1 


Assuming this is test1.com (or do you physically have two separate 
instances of nginx on two separate servers?):


to allow people to access app2 from the public internet. 


[you maybe mentioned it earlier] so the "index.html equivalent page" 
that you call app2 is LAN only?  Conceptually, you suggest that you want 
app1 and app2 available WAN. Why not write a simplistic entry page with 
two s to the two pages? You could possibly also use a 301 or 
meta "refresh" to simplify your users' experience?



The config file I post here is from test1 server.
With this config I can access app2 html pages from the internet (just what I
wanted) but the page did NOT try load the static files from
https://test2.com/app2/ instead it try to load the static from
https://test1.com/app2/.   How can I have the nginx to look app2's static
files under https://test2.com?


I didn't see the "config file I [you] post here is from test1 server", 
but maybe you are asking nginx to do something that could trivially be 
done with symbolic links? They work well, fast, and with suitable 
permissions pose few security risks.


Reminiscing while watching the snow fall, my first computers (Elea 9000, 
IBM 7090) glowed in the dark, were marvelously intriguing until you had 
to read a two foot pile of "fan-fold" to find which 80-column card had a 
glitch.


You're talking Django/Python, I'm remembering machine language, UNIVAC 
and COBOL, FORTRAN -- but the world has not changed much, you're still 
talking to a tube or transistor.


Please don't think that nginx is your initial "I can't get it to work." 
A tad of curiosity, creativity, imagination and (heaven forfend) 
thinking, will always prevail and prove rewarding.


Again, happy new year to all; and with my deepest appreciation of all 
the participants on this list.


Yours aye,
Paul

  \\\||//
   (@ @)
ooO_(_)_Ooo__
|__|_|_|_|_|_|_|_|
|___||_|_|_|_|_|_||
|_|_| mailto:p...@stormy.ca _|||
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Nginx performance data

2022-01-10 Thread Paul

On 2022-01-10 12:47 p.m., James Read wrote:

I've been doing some preliminary experiments with PACKET_MMAP style 
communication. 


With apologies for "snipping", and disclaimer that I am not an nginx 
developer, only a long term user.


So, MMAP has given you "preliminary" analysis of what your kernel can do 
with your hardware. Would you care to share, in a meaningful manner, any 
results that you feel are relevant to any tcp processes - perhaps nginx 
in particular?


I'm able to max out the available bandwidth using this 
technique. 


Available bandwidth? Please define. Is this local, or WAN? Are you on a 
56k dial-up modem? or do you have multiple fail-over, load-balanced 
fibre connectivity?  MMAP to the best of my knowledge, never claimed to 
be able to simulate live (live in the sense 'externally processed IP') 
tcp/http connections, so what "recognized benchmark" did you max out?


Could Nginx be improved in a similar way?

"improved"? From what and to what? Starting point? End-point? Similar to 
what "way"?


You write (below) "a large number of small pages to a large number of 
clients..." Large number? 10 to what exponential?  I've just looked at 
an nginx server that has dealt with ~88.3 GB/sec over the last few 
minutes, and cpu usage across 32 cores is bumbling along at less that 
3%, temperatures barely 3 degrees above ambient, memcached transferring 
nothing to swap.


Either you have badly explained what you are looking for, or, heaven 
forfend, you're trolling.


Paul.
Tired old sys-admin.


James Read



Regards
Alex

 > On Fri, Jan 7, 2022 at 6:33 PM James Read
mailto:jamesread5...@gmail.com>> wrote:
 >
 >
 >
 >     On Fri, Jan 7, 2022 at 11:56 AM Anoop Alias
mailto:anoopalia...@gmail.com>> wrote:
 >
 >         This basically depends on your hardware and network speed etc
 >
 >         Nginx is event-driven and does not fork a
separate process for handling new connections which basically makes
it different from Apache httpd
 >
 >
 >     Just to be clear Nginx is entirely single threaded?
 >
 >     James Read
 >
 >
 >         On Wed, Jan 5, 2022 at 5:48 AM James Read
mailto:jamesread5...@gmail.com>> wrote:
 >
 >             Hi,
 >
 >             I have some questions about Nginx performance. How
many concurrent connections can Nginx handle? What throughput can
Nginx achieve when serving a large number of small pages to a large
number of clients (the maximum number supported)? How does Nginx
achieve its performance? Is the epoll event loop all done in a
single thread or are multiple threads used to split the work of
serving so many different clients?
 >
 >             thanks in advance
 >             James Read


___
nginx mailing list
nginx@nginx.org <mailto:nginx@nginx.org>
http://mailman.nginx.org/mailman/listinfo/nginx
<http://mailman.nginx.org/mailman/listinfo/nginx>


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx




  \\\||//
   (@ @)
ooO_(_)_Ooo__
|__|_|_|_|_|_|_|_|
|___||_|_|_|_|_|_||
|_|_| mailto:p...@stormy.ca _|||
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: OAuth/OpenID

2022-02-15 Thread Paul

On 2022-02-15 3:18 p.m., Michael Powell wrote:


On Tue, Feb 15, 2022 at 10:08 AM Sergey A. Osokin <mailto:o...@freebsd.org.ru>> wrote:


Hi Michael,

hope you're doing well.

On Tue, Feb 15, 2022 at 08:41:08AM -0500, Michael Powell wrote:
 > Hello,
 >
 > Setting up some web sites, etc, looking into alternatives to Amazon
 > Cognito, for instance, for user and/or 'identity' management,
integration
 > with 3P OAuth providers, i.e. Google, Facebook, etc. As I
understand it,
 > nginx provides these features, and more?

Yes, it's possible to setup OIDC flow with NGINX products.  Please note
an Identity Provider (IdP) needs to be configured as well, and that one
is a separate product.

So it is not 'free' or even 'open source'? What is the pricing/cost 
behind that? 


This is probably going well beyond the normal scope of this list, but 
have you looked at open source openLDAP?  It might do what you want, but 
might prove time-consuming at your end to set it up.  Up to you to 
decide how your inhouse costs compare to "not free" outside expertise.


Trying to inquire about that through the NGINX site, but my 
email is not allowed there apparently


Well... gmail in not exactly a business address. Have you tried 
old-fashioned telephone at 1-800-915-9122?


Paul
---
Disclaimer: I have absolutely no monetary affiliation whatsoever with nginx




We are effectively early stage
startup so it is what it is. Is there another way to obtain pricing for 
our cost purposes? Thank you...


Here's the reference implementation of OpenID Connection integration
for NGINX Plus, [1].  It uitilizes some NGINX Plus features, such as
auth_jwt directive, [2] from the ngx_http_auth_jwt_module, [3],
keyval [4]
and keyval_zone [5] directives from ngx_http_keyval_module [6] module,
and NGINX JavaScript module, [7].

References:
[1] https://github.com/nginxinc/nginx-openid-connect
<https://github.com/nginxinc/nginx-openid-connect>
[2]
https://nginx.org/en/docs/http/ngx_http_auth_jwt_module.html#auth_jwt 
<https://nginx.org/en/docs/http/ngx_http_auth_jwt_module.html#auth_jwt>
[3] https://nginx.org/en/docs/http/ngx_http_auth_jwt_module.html
<https://nginx.org/en/docs/http/ngx_http_auth_jwt_module.html>
[4]
https://nginx.org/en/docs/http/ngx_http_keyval_module.html#keyval
<https://nginx.org/en/docs/http/ngx_http_keyval_module.html#keyval>
[5]
https://nginx.org/en/docs/http/ngx_http_keyval_module.html#keyval_zone
<https://nginx.org/en/docs/http/ngx_http_keyval_module.html#keyval_zone>
[6] https://nginx.org/en/docs/http/ngx_http_keyval_module.html
<https://nginx.org/en/docs/http/ngx_http_keyval_module.html>
[7] http://nginx.org/en/docs/njs/ <http://nginx.org/en/docs/njs/>

-- 
Sergey Osokin

___
nginx mailing list -- nginx@nginx.org <mailto:nginx@nginx.org>
To unsubscribe send an email to nginx-le...@nginx.org
<mailto:nginx-le...@nginx.org>


___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org




  \\\||//
   (@ @)
ooO_(_)_Ooo__
|__|_|_|_|_|_|_|_|
|___||_|_|_|_|_|_||
|_|_| mailto:p...@stormy.ca _|||
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org


Re: Nginx rewrite issue

2022-02-20 Thread Paul

On 2022-02-20 8:17 a.m., Dr_tux wrote:

Hello,

I want to write a rewrite like http://url/index.php?target=server1 and
http://url/target=server1 in Nginx and I want to use it in reverse proxy.
This is possible in AWS, but how can I do it in Nginx?

I tried as follows. Not worked.

location = /index.html?target=server1 {
   proxy_pass http://server1;
}

location = /index.html?target=server2 {
   proxy_pass http://server2;
}


What do your error logs show?

Maybe the '=' after 'location' is superfluous?  Please refer to the 
manual at 
https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/>.


And for more info on using wildcards in 'location', please see 
<https://www.nginx.com/blog/regular-expression-tester-nginx/>


nginx documentation is pretty good, so start there.  If you still have 
problems after "read the manual" full details of your server setup and 
errors logged would be helpful for others to try and help you.


HTH -- Paul
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org


Re: NGINX 1.21.x EOL?

2022-10-18 Thread Paul

On 2022-10-18 10:26, Sergey A. Osokin wrote:
[snip]


nginx 1.21.6 is available for download, [1], here's the direct link, [2].


Is it already End Of Life?


Yes, it is, please welcome to 1.23.x series.


Interesting.  All the servers that I run in production are either Ubuntu 
22.04LTS or Debian bullseye (both are latest stable releases) and I find 
that they are still at 1.18.0-6ubuntu14.1 or 1.18.0-6.1+deb11u2


Now I fully recognize that package managers are a tad conservative, and 
that both Debian and Ubuntu try and stay on top of security patches, but 
"end of life" sounds a bit scary ;=}


Best -- Paul
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org


Redirect www to not-www

2023-01-10 Thread Paul

Happy 2023 to all on this list.

Using nginx (1.18.0 on Ubuntu 20.04.5) as proxy to back-end, I have 
three sites (a|b|c.example.com) in a fast, reliable production 
environment. I have DNS records set up for www.a|b|c.example.com.  I 
have CertBot set up for only a|b|c.example.com.


To avoid "doubling" the number of sites-available and security scripts, 
and to avoid the unnecessary "www." I would like to add something like:


server {
  server_name www.a.example.com;
  return 301 $scheme://a.example.com$request_uri;
}

but I have tried this in several places, www.a.example.com works, but 
does not remove the www prefix, and fails any browser's security checks 
(nginx -t is "ok").


Where, in the following config, is the most elegant place to put such a 
"return" line?  Maybe I'm missing something fundamental?



server {
listen 443 ssl;
 [ ... # 4 lines managed by Certbot ... ]
server_name a.example.com;# Note: or b.example.com, or 
c.example.com

 [ ... logging ... ]
proxy_buffering off;
if ($request_method !~ ^(GET|HEAD|POST)$) {
   return 444;
}
location / {
proxy_pass http://192.168.x.y:port;
proxy_set_header Host $host;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   }
}
server {
if ($host = a.example.com) {# Note: or b.example.com, or 
c.example.com

return 301 https://$host$request_uri;
}
listen 80;
server_name a.example.com;  # Note: or b.example.com, or 
c.example.com

rewrite ^   https://$host$request_uri? permanent;
}

Many thanks -- Paul


  \\\||//
   (@ @)
ooO_(_)_Ooo__
|__|_|_|_|_|_|_|_|
|___||_|_|_|_|_|_||
|_|_| mailto:p...@stormy.ca _|||
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: Redirect www to not-www

2023-01-10 Thread Paul

On 2023-01-10 13:43, Francis Daly wrote:


Using nginx (1.18.0 on Ubuntu 20.04.5) as proxy to back-end, I have three
sites (a|b|c.example.com) in a fast, reliable production environment. I have
DNS records set up for www.a|b|c.example.com.  I have CertBot set up for
only a|b|c.example.com.

To avoid "doubling" the number of sites-available and security scripts, and
to avoid the unnecessary "www." I would like to add something like:
/.../

There are 4 families of requests that the client can make:

* http://www.a.example.com
* http://a.example.com
* https://www.a.example.com
* https://a.example.com

It looks like you want each of the first three to be redirected to
the fourth?


Many thanks.  That is totally correct.  Given your comment re "lack of 
certificate" and "validation will fail"  I have now expanded CertBot to 
include the three "www." names. All works fine (as far as I can see 
using Firefox, Opera, Vivaldi clients -- and Edge, had to boot up an old 
laptop!)


BUT... for that one step further and have all server (nginx) responses 
go back to the end-client as:

https://a.example.com
and NOT as:
https://www.a.example.com
^^^
I have written an /etc/nginx/conf.d/redirect.conf as:
server {
  server_name www.a.example.com;
  return 301 $scheme://a.example.com$request_uri;
}

which seems to work, but I would appreciate your opinion - is this the 
best, most elegant, secure way?  Does it need "permanent" somewhere?


I've never used "scheme" before today, but we've got an external 
advisory audit going on, and I'm trying to keep them happy.


Many thanks and best regards,
Paul



It is straightforward to redirect the first two to the fourth --
something like

server {
server_name a.example.com www.a.example.com;
return 301 https://a.example.com$request_uri;
}

should cover both.

(Optionally with "listen 80;", it replaces your similar no-ssl server{}
block.)

But for the third family, the client will first try to validate the
certificate that it is given when it connects to www.a.example.com,
before it will make the http(s) request that you can reply to with
a redirect. And since you do not (appear to) have a certificate for
www.a.example.com, that validation will fail and there is nothing you
can do about it. (Other that get a certificate.)

Cheers,

f


  \\\||//
   (@ @)
ooO_(_)_Ooo__
|__|_|_|_|_|_|_|_|
|___||_|_|_|_|_|_||
|_|_| mailto:p...@stormy.ca _|||
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: SSL Reuse not happening in s3 presigned urls

2023-10-01 Thread Paul

On 2023-09-30 15:09, Vijay Kumar Kamannavar wrote:

I am using nginx reverse proxy for s3 presigned urls.


[Disclaimer: very limited experience with amazonaws, so will assume that 
you comply fully with 
<https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-url.html>, 
if not, maybe ask them?]


[snip]


     # HTTPS server block with SSL certificate and S3 reverse proxy
     server {
         listen 443 ssl;
         ssl_protocols         SSLv3 TLSv1 TLSv1.1 TLSv1.2;


nginx strongly suggested at 
<https://www.nginx.com/blog/nginx-poodle-ssl/> removing SSLv3 nine years 
ago.  SSL Labs will also give you a rock bottom rating when you allow 
TLSv1 and TLSv1.1 (although they might still be vaguely acceptable) and 
the latest security standard TLSv1.3 (rfc8446, 2018) works extremely 
well in nginx with e.g. CertBot certificates.


*Perhaps* if you updated your config. to basic industry standards 
(probably required for compatibility with amazonaws?), then some of your 
handshake caching timeouts and errors would be vastly attenuated or 
disappear.


[snip]

If I run 4K clients using a simulator,I will see 100% CPU in the nginx 
container.I believe if we cache SSL sessions then SSL handshake for 
every request will be avoided hence we may not have high CPU at nginx 
container.


"run 4k clients"?  Over what period of time? Simultaneous, identical 
connection requests? Even if your connectivity, router and firewall can 
handle that, your "16 Core and 32GB" with potential security problems 
could well be brought to its knees.  As a rule of thumb for servers 
(nginx and apache), I have always used 8 GiB memory per core. YMMV.


Paul
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: NGINX has moved to Github!

2024-09-06 Thread Paul

On 2024-09-06 11:11, Roman Arutyunyan wrote:

Hello from NGINX!

Today we're thrilled to announce that the official NGINX Open Source development
repository has moved from Mercurial to GitHub [1][2][3], where we will now start
accepting contributions in the form of Pull Requests. Additionally, starting
today, we will begin accepting bugs reports, feature requests and enhancements
directly through GitHub, under the "Issues" tab. Moreover, we've moved our
community forums to the GitHub "Discussions" area, where you will now be able
to engage in conversation, ask, and answer questions.


Does this mean that this list and/or your nginx.org dies on 31 December 
2024?


I have tried to follow F5 and nginx internal differences for some time. 
Maybe I should have seen the creation of freenginx as "writing on the 
wall."  I, like some or many of my colleagues, will not migrate to 
github (maybe has some good points, but destroys your "named corporate" 
identity.)


What is the position of major (Redhat, Canonical, etc) Linux 
distributions?  Can we rely on continuing reliability?  What "free" 
and/or "paid" licensing agreements are you planning?


Many of us use nginx only as a reverse proxy.  It's fast and efficient. 
I flat out refuse to get into politics, but your announcement is not 
re-assuring.  Do we need (fast? before year end?) a fall-back position?


Thanks -- Paul



Important: to report a security vulnerability, please follow our security
policy [4].

We understand that changes like these may require adjustment, so to give you
more time, we will continue accepting patches and provide community support
via mailing lists until December 31st, 2024.

We believe these changes will serve to centralize, modernize and expand access
to NGINX development and communities. They represent our continued commitment
to open source, as outlined in the blog post [5]. Most of all, we can't wait to
see all of your contributions, discussions and feedback, as we move into this
next chapter for NGINX.

[1] https://github.com/nginx/nginx
[2] https://github.com/nginx/nginx-tests
[3] https://github.com/nginx/nginx.org
[4] https://github.com/nginx/nginx/blob/master/SECURITY.md
[5] 
https://www.f5.com/company/blog/nginx/meetup-recap-nginxs-commitments-to-the-open-source-community


On behalf of the NGINX Team,

Roman Arutyunyan
a...@nginx.com
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: I'm about to embark on creating 12000 vhosts

2019-02-11 Thread Richard Paul
Hi Ben,

Thanks for the quick response. That's great to hear, as we'd only get to find 
this out after putting rather a lot of effort into the process.
We'll be hosting these on cloud instances but since those aren't the fastest 
machines around I'll take the reloading as a word of caution (we're probably 
going to have to make another bit of application functionality which will 
handle this so that we're only reloading when we have domain changes rather 
than on a regular schedule that'd I'd thought would be the simplest method.)

I have a plan for the rate limits, but thank you for mentioning it. SANs would 
reduce the number of vhosts, but I'm not sure about the added complexity of 
managing the vhost templates and the key/cert naming.

Kind regards,
Richard


On Mon, 2019-02-11 at 16:35 +0100, Ben Schmidt wrote:
Hi Richard,

we have experience with around 1/4th the vhosts on a single Server, no Issues 
at all.
Reloading can take up to a minute but the Hardware isn't what I would call 
recent.

The only thing that you'll have to watch out are Letsencrypt rate Limits > 
https://letsencrypt.org/docs/rate-limits/
#
/etc/letsencrypt/renewal $ ls | wc -l
1647
#
We switched to using SAN Certs whenever possible.

Around 8 years ago I managed a 8000 vHosts Webfarm with a apache. No Issues 
ether.

Cheers,
Ben

On Mon, Feb 11, 2019 at 4:16 PM rick_pri 
mailto:nginx-fo...@forum.nginx.org>> wrote:
Our current setup is pretty simple, we have a regex capture to ensure that
the incoming request is a valid ascii domain name and we serve all our
traffic from that.  Great ... for us.

However, our customers, with about 12000 domain names at present have
started to become quite vocal about having HTTPS on their websites, to which
we provide a custom CMS and website package, which means we're about to
create a new Nginx layer in front of our current servers to terminate TLS.
This will require us to set up vhosts for each certificate issued with
server names which match what's in the certificate's SAN.

To keep this simple we're currently thinking about just having each domain,
and www subdomain, on its own certificate (LetsEncrypt) and vhost but that
is going to lead, approximately, to the number of vhosts mentioned in the
subject line.  As such I wanted to put the feelers out to see if anyone else
had tried to work with large numbers of vhosts and any issues which they may
have come across.

Kind regards,

Richard

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,282986,282986#msg-282986

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

___

nginx mailing list



nginx@nginx.org




http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: I'm about to embark on creating 12000 vhosts

2019-02-12 Thread Richard Paul
Hi Robert,

I've not looked in a while but I think that there where some large assumptions 
in openresty that you are running on Linux. I'll have a look again but it might 
not quite be a good fit for us.

Kind regards,
Richard

On Mon, 2019-02-11 at 10:34 -0800, Robert Paprocki wrote:
FWIW, this kind of large installation is why solutions like OpenResty exist 
(providing for dynamic config/cert service/hostname registration without having 
to worry about the time/expense of re-parsing the Nginx config).

On Mon, Feb 11, 2019 at 7:59 AM Richard Paul 
mailto:rich...@primarysite.net>> wrote:
Hi Ben,

Thanks for the quick response. That's great to hear, as we'd only get to find 
this out after putting rather a lot of effort into the process.
We'll be hosting these on cloud instances but since those aren't the fastest 
machines around I'll take the reloading as a word of caution (we're probably 
going to have to make another bit of application functionality which will 
handle this so that we're only reloading when we have domain changes rather 
than on a regular schedule that'd I'd thought would be the simplest method.)

I have a plan for the rate limits, but thank you for mentioning it. SANs would 
reduce the number of vhosts, but I'm not sure about the added complexity of 
managing the vhost templates and the key/cert naming.

Kind regards,
Richard


On Mon, 2019-02-11 at 16:35 +0100, Ben Schmidt wrote:
Hi Richard,

we have experience with around 1/4th the vhosts on a single Server, no Issues 
at all.
Reloading can take up to a minute but the Hardware isn't what I would call 
recent.

The only thing that you'll have to watch out are Letsencrypt rate Limits > 
https://letsencrypt.org/docs/rate-limits/
#
/etc/letsencrypt/renewal $ ls | wc -l
1647
#
We switched to using SAN Certs whenever possible.

Around 8 years ago I managed a 8000 vHosts Webfarm with a apache. No Issues 
ether.

Cheers,
Ben

On Mon, Feb 11, 2019 at 4:16 PM rick_pri 
mailto:nginx-fo...@forum.nginx.org>> wrote:
Our current setup is pretty simple, we have a regex capture to ensure that
the incoming request is a valid ascii domain name and we serve all our
traffic from that.  Great ... for us.

However, our customers, with about 12000 domain names at present have
started to become quite vocal about having HTTPS on their websites, to which
we provide a custom CMS and website package, which means we're about to
create a new Nginx layer in front of our current servers to terminate TLS.
This will require us to set up vhosts for each certificate issued with
server names which match what's in the certificate's SAN.

To keep this simple we're currently thinking about just having each domain,
and www subdomain, on its own certificate (LetsEncrypt) and vhost but that
is going to lead, approximately, to the number of vhosts mentioned in the
subject line.  As such I wanted to put the feelers out to see if anyone else
had tried to work with large numbers of vhosts and any issues which they may
have come across.

Kind regards,

Richard

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,282986,282986#msg-282986

___
nginx mailing list
nginx@nginx.org<mailto:nginx@nginx.org>
http://mailman.nginx.org/mailman/listinfo/nginx

___

nginx mailing list

<mailto:nginx@nginx.org>

nginx@nginx.org

<http://mailman.nginx.org/mailman/listinfo/nginx>

http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org<mailto:nginx@nginx.org>
http://mailman.nginx.org/mailman/listinfo/nginx

___

nginx mailing list

<mailto:nginx@nginx.org>

nginx@nginx.org


<http://mailman.nginx.org/mailman/listinfo/nginx>

http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: I'm about to embark on creating 12000 vhosts

2019-02-12 Thread Richard Paul
Hi Rainer,

We don't control all the DNS, so of our customers prefer to keep control in 
house for that stuff. Also, wildcards don't work for us in this case, they have 
individual vanity domains, sometimes more than one which are not wildcardable 
unless I could get something like *.*.co.uk 😄.

Kind regards,
Richard

On Mon, 2019-02-11 at 19:57 +0100, Rainer Duffner wrote:


Am 11.02.2019 um 16:16 schrieb rick_pri 
mailto:nginx-fo...@forum.nginx.org>>:

However, our customers, with about 12000 domain names at present have


Let’s Encrypt rate limits will likely make these very difficult to obtain and 
also to renew.

If you own the DNS, maybe using Wildcard DNS entries is more practical.

Then, HAProxy allows to just drop all the certificates in a directory and let 
itself figure out the domain-names it has to answer.
At least, that’s what my co-worker told me.

Also, there’s the fabio LB with similar goal-posts.





___

nginx mailing list



nginx@nginx.org




http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: I'm about to embark on creating 12000 vhosts

2019-02-12 Thread Richard Paul
Hi Jeff

That's interesting, how do you manage the progamming to load the right 
certificate for the right domain coming in as the server name? We need to load 
the right certificate for the incoming domain and the 12000 figure is the 
number of unique vanity domains without the www. subdomains.

We're planning to follow the same path as you though, we're essentially putting 
these Nginx TLS terminators (fronted by GCP load balancers) in front of our 
existing Varnish caching and Nginx backend infrastructure which currently only 
listen on port 80.

I couldn't work out what the limits are at LE as it's not clear with regards to 
adding new unique domains limits. I'm going to have to ask in the forums at 
some point so that I can work out what our daily batches are going to be.

Kind regards,
Richard

On Mon, 2019-02-11 at 14:33 -0500, Jeff Dyke wrote:
I use haproxy in a similar way as stated by Rainer, rather than having hundreds 
and hundreds of config files (yes there are other ways), i have 1 for haproxy 
and 2(on multiple machines defined in HAProxy). One for my main domain that 
listens to an "real" server_name and another that listens to `server_name _;`  
All of the nginx servers simply listen on 80 and 81 to handle non H2 clients 
and the application does the correct thing with the domain.  Which is where 
YMMV as all applications differ.

I found this much simpler and easier to maintain over time.  I got around the 
LE limits by a staggered migration, so i was only requesting what was in the 
limit each day, then have a custom script that calls LE (which is also on the 
same machine as HAProxy) when certs are about 10 days out, so the staggering 
stays within the limits.  When i was using custom configuration, i was build 
them via python using a yaml file and nginx would effectively be a jinja2 
template.  But even that became onerous.  When going down the nginx path ensure 
you pay attention to the variables that control domain hash sizes. 
http://nginx.org/en/docs/hash.html

HTH, good luck!
Jeff

On Mon, Feb 11, 2019 at 1:58 PM Rainer Duffner 
mailto:rai...@ultra-secure.de>> wrote:


Am 11.02.2019 um 16:16 schrieb rick_pri 
mailto:nginx-fo...@forum.nginx.org>>:

However, our customers, with about 12000 domain names at present have


Let’s Encrypt rate limits will likely make these very difficult to obtain and 
also to renew.

If you own the DNS, maybe using Wildcard DNS entries is more practical.

Then, HAProxy allows to just drop all the certificates in a directory and let 
itself figure out the domain-names it has to answer.
At least, that’s what my co-worker told me.

Also, there’s the fabio LB with similar goal-posts.




___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

___

nginx mailing list



nginx@nginx.org




http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: I'm about to embark on creating 12000 vhosts

2019-02-12 Thread Richard Paul
Hi Andreas,

Good to hear that this is scaling well for you at this level.

With regards to reload, you mean a reload rather than a restart I take it? 
We'll be load balanced and building these from config and deployment management 
systems so a long reload/restart is not the end of the world as we can build a 
patched box and take out an old unpatched machine.

Kind regards,
Richard

On Mon, 2019-02-11 at 20:53 +0100, A. Schulze wrote:


Am 11.02.19 um 16:16 schrieb rick_pri:

As such I wanted to put the feelers out to see if anyone else

had tried to work with large numbers of vhosts and any issues which they may

have come across.


Hello


we're running nginx (latest) with ~5k domains + 5k



www.domain


without issues. Configuration file is created by configuration management 
system.

Currently nginx only serve https and proxy to a apache@localhost.


Funfact: nging reload that number of vhost + certificates

faster then apache simply handling only plain http :-)


Andreas

___

nginx mailing list



nginx@nginx.org




http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: I'm about to embark on creating 12000 vhosts

2019-02-12 Thread Richard Paul
Hi Peter,

I'm sure that it's great and all, but I've just been to look at the 
https://openresty.org/en/installation.html page for the installation again and 
it's very much not friendly for configuration management unless you're on a 
supported platform with packages available to you. I'm sure that we could put 
together a poudriare server to do the package building from source/ports for 
FreeBSD but if I can avoid that I will for the time being.

Kind regards,
Richard

On Mon, 2019-02-11 at 14:54 -0500, Peter Booth via nginx wrote:
+1 to the openresty suggestion

I’ve found that whenever I want to do something gnarly or perverse with nginx, 
openresty helps me do it in a way that’s maintainable and with any ugliness 
minimized.

It’s like nginx with super-powers!

Sent from my iPhone

On Feb 11, 2019, at 1:34 PM, Robert Paprocki 
mailto:rpapro...@fearnothingproductions.net>>
 wrote:

FWIW, this kind of large installation is why solutions like OpenResty exist 
(providing for dynamic config/cert service/hostname registration without having 
to worry about the time/expense of re-parsing the Nginx config).

On Mon, Feb 11, 2019 at 7:59 AM Richard Paul 
mailto:rich...@primarysite.net>> wrote:
Hi Ben,

Thanks for the quick response. That's great to hear, as we'd only get to find 
this out after putting rather a lot of effort into the process.
We'll be hosting these on cloud instances but since those aren't the fastest 
machines around I'll take the reloading as a word of caution (we're probably 
going to have to make another bit of application functionality which will 
handle this so that we're only reloading when we have domain changes rather 
than on a regular schedule that'd I'd thought would be the simplest method.)

I have a plan for the rate limits, but thank you for mentioning it. SANs would 
reduce the number of vhosts, but I'm not sure about the added complexity of 
managing the vhost templates and the key/cert naming.

Kind regards,
Richard


On Mon, 2019-02-11 at 16:35 +0100, Ben Schmidt wrote:
Hi Richard,

we have experience with around 1/4th the vhosts on a single Server, no Issues 
at all.
Reloading can take up to a minute but the Hardware isn't what I would call 
recent.

The only thing that you'll have to watch out are Letsencrypt rate Limits > 
https://letsencrypt.org/docs/rate-limits/
#
/etc/letsencrypt/renewal $ ls | wc -l
1647
#
We switched to using SAN Certs whenever possible.

Around 8 years ago I managed a 8000 vHosts Webfarm with a apache. No Issues 
ether.

Cheers,
Ben

On Mon, Feb 11, 2019 at 4:16 PM rick_pri 
mailto:nginx-fo...@forum.nginx.org>> wrote:
Our current setup is pretty simple, we have a regex capture to ensure that
the incoming request is a valid ascii domain name and we serve all our
traffic from that.  Great ... for us.

However, our customers, with about 12000 domain names at present have
started to become quite vocal about having HTTPS on their websites, to which
we provide a custom CMS and website package, which means we're about to
create a new Nginx layer in front of our current servers to terminate TLS.
This will require us to set up vhosts for each certificate issued with
server names which match what's in the certificate's SAN.

To keep this simple we're currently thinking about just having each domain,
and www subdomain, on its own certificate (LetsEncrypt) and vhost but that
is going to lead, approximately, to the number of vhosts mentioned in the
subject line.  As such I wanted to put the feelers out to see if anyone else
had tried to work with large numbers of vhosts and any issues which they may
have come across.

Kind regards,

Richard

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,282986,282986#msg-282986

___
nginx mailing list
nginx@nginx.org<mailto:nginx@nginx.org>
http://mailman.nginx.org/mailman/listinfo/nginx

___

nginx mailing list

<mailto:nginx@nginx.org>

nginx@nginx.org

<http://mailman.nginx.org/mailman/listinfo/nginx>

http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org<mailto:nginx@nginx.org>
http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org<mailto:nginx@nginx.org>
http://mailman.nginx.org/mailman/listinfo/nginx

___

nginx mailing list

<mailto:nginx@nginx.org>

nginx@nginx.org


<http://mailman.nginx.org/mailman/listinfo/nginx>

http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: I'm about to embark on creating 12000 vhosts

2019-02-12 Thread Richard Paul
Hi Anoop,

This is great and really valuable information, thank you. .

I'd heard that CloudFlare use a variant of Nginx for providing SSL termination 
which was why I was hopefully that it would be able to manage our use case.

Kind regards,
Richard

On Tue, 2019-02-12 at 07:31 +0530, Anoop Alias wrote:
I maintain an Nginx config generation plugin for a web hosting control panel, 
where people put on such high number of domains on a server normally and things 
I notice are

1. Memory consumption by worker process go up when vhost count go up , so we 
may need to reduce worker count

2. As already mentioned the reload might take a lot of time ,so do nginx -t

3. Even startup will take time as most package maintainers put a nginx -t on 
ExecPre(similar in non-systemd) which take a lot of time on startup

I have read somewhere, Nginx is not good at handling this many vhost defs ,so 
they use a dynamic setup (like the one in OpenResty) at CloudFlare edge servers 
for SSL

On Tue, Feb 12, 2019 at 1:25 AM Peter Booth via nginx 
mailto:nginx@nginx.org>> wrote:
+1 to the openresty suggestion

I’ve found that whenever I want to do something gnarly or perverse with nginx, 
openresty helps me do it in a way that’s maintainable and with any ugliness 
minimized.

It’s like nginx with super-powers!

Sent from my iPhone

On Feb 11, 2019, at 1:34 PM, Robert Paprocki 
mailto:rpapro...@fearnothingproductions.net>>
 wrote:

FWIW, this kind of large installation is why solutions like OpenResty exist 
(providing for dynamic config/cert service/hostname registration without having 
to worry about the time/expense of re-parsing the Nginx config).

On Mon, Feb 11, 2019 at 7:59 AM Richard Paul 
mailto:rich...@primarysite.net>> wrote:
Hi Ben,

Thanks for the quick response. That's great to hear, as we'd only get to find 
this out after putting rather a lot of effort into the process.
We'll be hosting these on cloud instances but since those aren't the fastest 
machines around I'll take the reloading as a word of caution (we're probably 
going to have to make another bit of application functionality which will 
handle this so that we're only reloading when we have domain changes rather 
than on a regular schedule that'd I'd thought would be the simplest method.)

I have a plan for the rate limits, but thank you for mentioning it. SANs would 
reduce the number of vhosts, but I'm not sure about the added complexity of 
managing the vhost templates and the key/cert naming.

Kind regards,
Richard


On Mon, 2019-02-11 at 16:35 +0100, Ben Schmidt wrote:
Hi Richard,

we have experience with around 1/4th the vhosts on a single Server, no Issues 
at all.
Reloading can take up to a minute but the Hardware isn't what I would call 
recent.

The only thing that you'll have to watch out are Letsencrypt rate Limits > 
https://letsencrypt.org/docs/rate-limits/
#
/etc/letsencrypt/renewal $ ls | wc -l
1647
#
We switched to using SAN Certs whenever possible.

Around 8 years ago I managed a 8000 vHosts Webfarm with a apache. No Issues 
ether.

Cheers,
Ben

On Mon, Feb 11, 2019 at 4:16 PM rick_pri 
mailto:nginx-fo...@forum.nginx.org>> wrote:
Our current setup is pretty simple, we have a regex capture to ensure that
the incoming request is a valid ascii domain name and we serve all our
traffic from that.  Great ... for us.

However, our customers, with about 12000 domain names at present have
started to become quite vocal about having HTTPS on their websites, to which
we provide a custom CMS and website package, which means we're about to
create a new Nginx layer in front of our current servers to terminate TLS.
This will require us to set up vhosts for each certificate issued with
server names which match what's in the certificate's SAN.

To keep this simple we're currently thinking about just having each domain,
and www subdomain, on its own certificate (LetsEncrypt) and vhost but that
is going to lead, approximately, to the number of vhosts mentioned in the
subject line.  As such I wanted to put the feelers out to see if anyone else
had tried to work with large numbers of vhosts and any issues which they may
have come across.

Kind regards,

Richard

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,282986,282986#msg-282986

___
nginx mailing list
nginx@nginx.org<mailto:nginx@nginx.org>
http://mailman.nginx.org/mailman/listinfo/nginx

___

nginx mailing list

<mailto:nginx@nginx.org>

nginx@nginx.org

<http://mailman.nginx.org/mailman/listinfo/nginx>

http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org<mailto:nginx@nginx.org>
http://mailman.nginx.org/mailman/listinfo/nginx
__

Re: I'm about to embark on creating 12000 vhosts

2019-02-12 Thread Richard Paul
Hi Lucas,

Well that looks great. I've not looked at HAproxy too much, as I've not used it 
before other than during a switch over just prior to Christmas last year where 
rinetd couldn't cope with the incoming traffic load and we had to cobble 
together a quick HAProxy layer 4 configuration to redirect traffic from AWS to 
GCP.

I'll start digging into this a bit more as this looks like a better solution 
and I can maybe just use LE's webroot plugin without having to generate and 
sync Nginx configuration as well as the certs over to the TLS terminator 
instances.

Kind regards,
Richard

On Tue, 2019-02-12 at 09:32 +, Lucas Rolff wrote:
In haproxy, you simply specify a path where you have all your certificates.

frontend https_frontend
bind *:443 ssl crt /etc/haproxy/certs/default-cert.pem crt 
/etc/haproxy/certs alpn h2,http/1.1

This way, haproxy will read all certs, and when stuff comes in, it uses the 
host header to determine which certificate it should serve.

There was a thread on the haproxy mailing list not long ago, with managing more 
than 100k certificates per haproxy instance, and they’re working on further 
optimizations with those kinds of deployments (if it’s not already done.. 
haven’t checked to be honest).

Best Regards,

From: nginx  on behalf of Richard Paul 

Reply-To: "nginx@nginx.org" 
Date: Tuesday, 12 February 2019 at 10.04
To: "nginx@nginx.org" 
Subject: Re: I'm about to embark on creating 12000 vhosts

Hi Jeff

That's interesting, how do you manage the progamming to load the right 
certificate for the right domain coming in as the server name? We need to load 
the right certificate for the incoming domain and the 12000 figure is the 
number of unique vanity domains without the www. subdomains.

We're planning to follow the same path as you though, we're essentially putting 
these Nginx TLS terminators (fronted by GCP load balancers) in front of our 
existing Varnish caching and Nginx backend infrastructure which currently only 
listen on port 80.

I couldn't work out what the limits are at LE as it's not clear with regards to 
adding new unique domains limits. I'm going to have to ask in the forums at 
some point so that I can work out what our daily batches are going to be.

Kind regards,
Richard

On Mon, 2019-02-11 at 14:33 -0500, Jeff Dyke wrote:
I use haproxy in a similar way as stated by Rainer, rather than having hundreds 
and hundreds of config files (yes there are other ways), i have 1 for haproxy 
and 2(on multiple machines defined in HAProxy). One for my main domain that 
listens to an "real" server_name and another that listens to `server_name _;`  
All of the nginx servers simply listen on 80 and 81 to handle non H2 clients 
and the application does the correct thing with the domain.  Which is where 
YMMV as all applications differ.

I found this much simpler and easier to maintain over time.  I got around the 
LE limits by a staggered migration, so i was only requesting what was in the 
limit each day, then have a custom script that calls LE (which is also on the 
same machine as HAProxy) when certs are about 10 days out, so the staggering 
stays within the limits.  When i was using custom configuration, i was build 
them via python using a yaml file and nginx would effectively be a jinja2 
template.  But even that became onerous.  When going down the nginx path ensure 
you pay attention to the variables that control domain hash sizes. 
http://nginx.org/en/docs/hash.html

HTH, good luck!
Jeff

On Mon, Feb 11, 2019 at 1:58 PM Rainer Duffner 
mailto:rai...@ultra-secure.de>> wrote:



Am 11.02.2019 um 16:16 schrieb rick_pri 
mailto:nginx-fo...@forum.nginx.org>>:

However, our customers, with about 12000 domain names at present have


Let’s Encrypt rate limits will likely make these very difficult to obtain and 
also to renew.

If you own the DNS, maybe using Wildcard DNS entries is more practical.

Then, HAProxy allows to just drop all the certificates in a directory and let 
itself figure out the domain-names it has to answer.
At least, that’s what my co-worker told me.

Also, there’s the fabio LB with similar goal-posts.




___
nginx mailing list
nginx@nginx.org<mailto:nginx@nginx.org>
http://mailman.nginx.org/mailman/listinfo/nginx

___

nginx mailing list
<mailto:nginx@nginx.org>
<mailto:nginx@nginx.org>

nginx@nginx.org



<http://mailman.nginx.org/mailman/listinfo/nginx>
<http://mailman.nginx.org/mailman/listinfo/nginx>

http://mailman.nginx.org/mailman/listinfo/nginx


___

nginx mailing list

<mailto:nginx@nginx.org>

nginx@nginx.org


<http://mailman.nginx.org/mailman/listinfo/nginx>

http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: I'm about to embark on creating 12000 vhosts

2019-02-12 Thread Richard Paul
And having looked at this further we would have to append the key to the end of 
the certificate bundle after it was issued from LE as an extra step in the 
processing so that this could work.

This still seems to be the best way forward, even if it requires an extra step 
in this case.

Kind regards,
Richard

On Tue, 2019-02-12 at 09:56 +, Richard Paul wrote:
Hi Lucas,

Well that looks great. I've not looked at HAproxy too much, as I've not used it 
before other than during a switch over just prior to Christmas last year where 
rinetd couldn't cope with the incoming traffic load and we had to cobble 
together a quick HAProxy layer 4 configuration to redirect traffic from AWS to 
GCP.

I'll start digging into this a bit more as this looks like a better solution 
and I can maybe just use LE's webroot plugin without having to generate and 
sync Nginx configuration as well as the certs over to the TLS terminator 
instances.

Kind regards,
Richard

On Tue, 2019-02-12 at 09:32 +, Lucas Rolff wrote:
In haproxy, you simply specify a path where you have all your certificates.

frontend https_frontend
bind *:443 ssl crt /etc/haproxy/certs/default-cert.pem crt 
/etc/haproxy/certs alpn h2,http/1.1

This way, haproxy will read all certs, and when stuff comes in, it uses the 
host header to determine which certificate it should serve.

There was a thread on the haproxy mailing list not long ago, with managing more 
than 100k certificates per haproxy instance, and they’re working on further 
optimizations with those kinds of deployments (if it’s not already done.. 
haven’t checked to be honest).

Best Regards,

From: nginx  on behalf of Richard Paul 

Reply-To: "nginx@nginx.org" 
Date: Tuesday, 12 February 2019 at 10.04
To: "nginx@nginx.org" 
Subject: Re: I'm about to embark on creating 12000 vhosts

Hi Jeff

That's interesting, how do you manage the progamming to load the right 
certificate for the right domain coming in as the server name? We need to load 
the right certificate for the incoming domain and the 12000 figure is the 
number of unique vanity domains without the www. subdomains.

We're planning to follow the same path as you though, we're essentially putting 
these Nginx TLS terminators (fronted by GCP load balancers) in front of our 
existing Varnish caching and Nginx backend infrastructure which currently only 
listen on port 80.

I couldn't work out what the limits are at LE as it's not clear with regards to 
adding new unique domains limits. I'm going to have to ask in the forums at 
some point so that I can work out what our daily batches are going to be.

Kind regards,
Richard

On Mon, 2019-02-11 at 14:33 -0500, Jeff Dyke wrote:
I use haproxy in a similar way as stated by Rainer, rather than having hundreds 
and hundreds of config files (yes there are other ways), i have 1 for haproxy 
and 2(on multiple machines defined in HAProxy). One for my main domain that 
listens to an "real" server_name and another that listens to `server_name _;`  
All of the nginx servers simply listen on 80 and 81 to handle non H2 clients 
and the application does the correct thing with the domain.  Which is where 
YMMV as all applications differ.

I found this much simpler and easier to maintain over time.  I got around the 
LE limits by a staggered migration, so i was only requesting what was in the 
limit each day, then have a custom script that calls LE (which is also on the 
same machine as HAProxy) when certs are about 10 days out, so the staggering 
stays within the limits.  When i was using custom configuration, i was build 
them via python using a yaml file and nginx would effectively be a jinja2 
template.  But even that became onerous.  When going down the nginx path ensure 
you pay attention to the variables that control domain hash sizes. 
http://nginx.org/en/docs/hash.html

HTH, good luck!
Jeff

On Mon, Feb 11, 2019 at 1:58 PM Rainer Duffner 
mailto:rai...@ultra-secure.de>> wrote:



Am 11.02.2019 um 16:16 schrieb rick_pri 
mailto:nginx-fo...@forum.nginx.org>>:

However, our customers, with about 12000 domain names at present have


Let’s Encrypt rate limits will likely make these very difficult to obtain and 
also to renew.

If you own the DNS, maybe using Wildcard DNS entries is more practical.

Then, HAProxy allows to just drop all the certificates in a directory and let 
itself figure out the domain-names it has to answer.
At least, that’s what my co-worker told me.

Also, there’s the fabio LB with similar goal-posts.




___
nginx mailing list
nginx@nginx.org<mailto:nginx@nginx.org>
http://mailman.nginx.org/mailman/listinfo/nginx

___

nginx mailing list
<mailto:nginx@nginx.org>
<mailto:nginx@nginx.org>

nginx@nginx.org



<http://mailman.nginx.org

Re: I'm about to embark on creating 12000 vhosts

2019-02-13 Thread Richard Paul
Hi Jeff,

This is pretty much what I'm now looking at doing, with Some HA proxy servers 
behind the External Google LB and then their backend being an internal Google 
LB which then balances across our Varnish caching layer and eventually the 
Nginx app servers.

Thank you for sharing your config it'll be a good base for us to start from. We 
moved from AWS for cost/performance reasons but also because Google LBs allow 
for us to have a static public facing IP address. Currently our customers point 
the base of their domain at a server which just redirects requests to the www 
subdomain and the www subdomain is pointed at a friendly CNAME which used to 
then be pointed at an AWS ELB CNAME. It now points at an IP address and we can 
slowly get our customers to update their DNS to point the root of the domain at 
the Google LB IP before this work is ready.

Once again many thanks Jeff and for everyone else for their replies,

Kind regards,
Richard

On Tue, 2019-02-12 at 10:37 -0500, Jeff Dyke wrote:
Hi Richard.  HAProxy defaults to reading all certs in a directory and matching 
hosts names via SNI.  Here is the top of my haproxy config, you can see how i 
redirect LE requests to another server, which solely services up responses to 
acme-challenges:

frontend http
  mode http
  bind 0.0.0.0:80<http://0.0.0.0:80>

  #if this is a LE Request send it to a server on this host for renewals
  acl letsencrypt-request path_beg -i /.well-known/acme-challenge/
  redirect scheme https code 301 unless letsencrypt-request
  use_backend letsencrypt-backend if letsencrypt-request

frontend https
  mode tcp
  bind 0.0.0.0:443<http://0.0.0.0:443> ssl crt /etc/haproxy/certs alpn 
h2,http/1.1 ecdhe secp384r1
  timeout http-request 10s
  log-format "%ci:%cp [%t] %ft %b/%s %Tw/%Tc/%Tt %B %ts \ %ac/%fc/%bc/%sc/%rc 
%sq/%bq SSL_version:%sslv SSL_cypher:%sslc SNI:%[ssl_fc_has_sni]"
  #send all HTTP/2 traffic to a specific backend
  use_backend http2-nodes if { ssl_fc_alpn -i h2 }
  #send HTTP/1.1 and HTTP/1.0 to default, which don't speak HTTP/2
  default_backend http1-nodes

I'm not sure exactly how this would work with GCP, but if you use AWS ELB's 
they will give you certs (you have to prove you own the domain), but you have 
to be able to use an ELB, which could change ips at any time.  Unfortunately 
this didn't work for us b/c a few of our larger customers whitelist ips and not 
domain names.  which is why i have stayed with HAProxy.

Jeff

On Tue, Feb 12, 2019 at 4:04 AM Richard Paul 
mailto:rich...@primarysite.net>> wrote:
Hi Jeff

That's interesting, how do you manage the progamming to load the right 
certificate for the right domain coming in as the server name? We need to load 
the right certificate for the incoming domain and the 12000 figure is the 
number of unique vanity domains without the www. subdomains.

We're planning to follow the same path as you though, we're essentially putting 
these Nginx TLS terminators (fronted by GCP load balancers) in front of our 
existing Varnish caching and Nginx backend infrastructure which currently only 
listen on port 80.

I couldn't work out what the limits are at LE as it's not clear with regards to 
adding new unique domains limits. I'm going to have to ask in the forums at 
some point so that I can work out what our daily batches are going to be.

Kind regards,
Richard

On Mon, 2019-02-11 at 14:33 -0500, Jeff Dyke wrote:
I use haproxy in a similar way as stated by Rainer, rather than having hundreds 
and hundreds of config files (yes there are other ways), i have 1 for haproxy 
and 2(on multiple machines defined in HAProxy). One for my main domain that 
listens to an "real" server_name and another that listens to `server_name _;`  
All of the nginx servers simply listen on 80 and 81 to handle non H2 clients 
and the application does the correct thing with the domain.  Which is where 
YMMV as all applications differ.

I found this much simpler and easier to maintain over time.  I got around the 
LE limits by a staggered migration, so i was only requesting what was in the 
limit each day, then have a custom script that calls LE (which is also on the 
same machine as HAProxy) when certs are about 10 days out, so the staggering 
stays within the limits.  When i was using custom configuration, i was build 
them via python using a yaml file and nginx would effectively be a jinja2 
template.  But even that became onerous.  When going down the nginx path ensure 
you pay attention to the variables that control domain hash sizes. 
http://nginx.org/en/docs/hash.html

HTH, good luck!
Jeff

On Mon, Feb 11, 2019 at 1:58 PM Rainer Duffner 
mailto:rai...@ultra-secure.de>> wrote:


Am 11.02.2019 um 16:16 schrieb rick_pri 
mailto:nginx-fo...@forum.nginx.org>>:

However, our customers, with about 12000 domain names at present have


Let’s Encrypt rate limits will likely make th

Re: Help please

2020-01-28 Thread Richard Paul
It doesn't actually redirect to /wfc/ though, or rather your log lines show a 
404 at /wfc

Also, your log line says /wfc/logon not /wfc/htmlnavigator/logon

GET /wfc
GET /wfc/logon
GET /wfcstatic/applications/wpk/html/scripts/cookie.js?version=8.1.6.2032


On Tue, 2020-01-28 at 14:03 +, Johan Gabriel Medina Capois wrote:

Sure.


The problem is that we have an backend application running in HTML5, when we 
navigate to



http://kronos.mardom.com/wfc/htmlnavigator/logon

 and try to login, it redirect to



http://kronos.mardom.com/wfc/

 and deploy error message "you have no access" , but when navigate from 
localhost no problem.


And the nginx log


"GET /wfc HTTP/1.1" 404 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; 
rv:72.0) Gecko/20100101 Firefox/72.0"

"GET /wfc/logon HTTP/1.1" 200 7496 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; 
x64; rv:72.0) Gecko/20100101 Firefox/72.0"

"GET /wfcstatic/applications/wpk/html/scripts/cookie.js?version=8.1.6.2032 
HTTP/1.1" 200 2534 "



http://kronos.mardom.com/wfc/logon

" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:72.0) Gecko/20100101 
Firefox/72.0"


Configuration is


server {

listen 80;

server_name kronos.mardom.com;


location / { proxy_pass



http://10.228.20.97

;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

proxy_set_header X-Forwarded-Proto $scheme;

}

}


Regards


-Original Message-

From: nginx <



nginx-boun...@nginx.org

> On Behalf Of J.R.

Sent: Tuesday, January 28, 2020 9:34 AM

To:



nginx@nginx.org


Subject: Re: Help please


Can you help us please?


You're going to have to be a *bit* more specific what your problem is...

___

nginx mailing list



nginx@nginx.org




http://mailman.nginx.org/mailman/listinfo/nginx


Johan Medina

Administrador de Sistemas e Infraestructura   [Logo]


Departamento: TECNOLOGIA

Central Tel: 809-539-600 Ext: 8139

Flota: (809) 974-4954

Directo: 809 974-4954

Email:



jmed...@mardom.com


Web:www.mardom.com


[Facebook icon] <



https://www.facebook.com/maritimadelcaribe

> [Instagram icon]  <



https://www.instagram.com/maritimadelcaribe

> [Linkedin icon]  <



https://www.linkedin.com/company/maritima-dominicana-sas/?viewAsMember=true

> [Youtube icon]


[Banner]


Sea amable con el medio ambiente: no imprima este correo a menos que sea 
completamente necesario.

___

nginx mailing list



nginx@nginx.org




http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Help please

2020-01-28 Thread Richard Paul
By the looks of things, if the application is redirecting to /wfc that's not 
working, your application doesn't seem to accept that as a valid. The Squid 
cache is returning a miss and so it is hitting the backend and getting a 404 
from there it seems. /wfc/ with a trailing slash does work however, so this 
looks like an issue with the IIS configuration to me. Also, this is a login 
form, I'd recommend that you get TLS set up on this (Let's Encrypt's certbot is 
free afterall).


On Tue, 2020-01-28 at 14:11 +, Richard Paul wrote:
It doesn't actually redirect to /wfc/ though, or rather your log lines show a 
404 at /wfc

Also, your log line says /wfc/logon not /wfc/htmlnavigator/logon

GET /wfc
GET /wfc/logon
GET /wfcstatic/applications/wpk/html/scripts/cookie.js?version=8.1.6.2032


On Tue, 2020-01-28 at 14:03 +, Johan Gabriel Medina Capois wrote:

Sure.


The problem is that we have an backend application running in HTML5, when we 
navigate to

<http://kronos.mardom.com/wfc/htmlnavigator/logon>

http://kronos.mardom.com/wfc/htmlnavigator/logon

 and try to login, it redirect to

<http://kronos.mardom.com/wfc/>

http://kronos.mardom.com/wfc/

 and deploy error message "you have no access" , but when navigate from 
localhost no problem.


And the nginx log


"GET /wfc HTTP/1.1" 404 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; 
rv:72.0) Gecko/20100101 Firefox/72.0"

"GET /wfc/logon HTTP/1.1" 200 7496 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; 
x64; rv:72.0) Gecko/20100101 Firefox/72.0"

"GET /wfcstatic/applications/wpk/html/scripts/cookie.js?version=8.1.6.2032 
HTTP/1.1" 200 2534 "

<http://kronos.mardom.com/wfc/logon>

http://kronos.mardom.com/wfc/logon

" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:72.0) Gecko/20100101 
Firefox/72.0"


Configuration is


server {

listen 80;

server_name kronos.mardom.com;


location / { proxy_pass

<http://10.228.20.97>

http://10.228.20.97

;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

proxy_set_header X-Forwarded-Proto $scheme;

}

}


Regards


-Original Message-

From: nginx <

<mailto:nginx-boun...@nginx.org>

nginx-boun...@nginx.org

> On Behalf Of J.R.

Sent: Tuesday, January 28, 2020 9:34 AM

To:

<mailto:nginx@nginx.org>

nginx@nginx.org


Subject: Re: Help please


Can you help us please?


You're going to have to be a *bit* more specific what your problem is...

___

nginx mailing list

<mailto:nginx@nginx.org>

nginx@nginx.org

<http://mailman.nginx.org/mailman/listinfo/nginx>

http://mailman.nginx.org/mailman/listinfo/nginx


Johan Medina

Administrador de Sistemas e Infraestructura   [Logo]


Departamento: TECNOLOGIA

Central Tel: 809-539-600 Ext: 8139

Flota: (809) 974-4954

Directo: 809 974-4954

Email:

<mailto:jmed...@mardom.com>

jmed...@mardom.com


Web:www.mardom.com


[Facebook icon] <

<https://www.facebook.com/maritimadelcaribe>

https://www.facebook.com/maritimadelcaribe

> [Instagram icon]  <

<https://www.instagram.com/maritimadelcaribe>

https://www.instagram.com/maritimadelcaribe

> [Linkedin icon]  <

<https://www.linkedin.com/company/maritima-dominicana-sas/?viewAsMember=true>

https://www.linkedin.com/company/maritima-dominicana-sas/?viewAsMember=true

> [Youtube icon]


[Banner]


Sea amable con el medio ambiente: no imprima este correo a menos que sea 
completamente necesario.

___

nginx mailing list

<mailto:nginx@nginx.org>

nginx@nginx.org

<http://mailman.nginx.org/mailman/listinfo/nginx>

http://mailman.nginx.org/mailman/listinfo/nginx


___

nginx mailing list

<mailto:nginx@nginx.org>

nginx@nginx.org


<http://mailman.nginx.org/mailman/listinfo/nginx>

http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Implementation of http2/RST_STREAM in NGINX 1.18.0

2020-04-27 Thread Paul Hecker
Hi,

it seems that macOS still has an issue with the proper handling of RST_STREAM. 
Since NGINX 1.18.0 the proper handling of RST_STREAM is re-enabled in this 
commit: 

https://hg.nginx.org/nginx/rev/2e61e4b6bcd9

I used git bisect to track this down. Our server mainly handles basic-auth 
protected image uploads through a CGI. All the clients are using NSURLSession 
to connect to the CGI. After the 401 reply NGINX is sending the RST_STREAM (as 
the images may be quite large and the upload continues) but the NSURLSession 
and its subcomponents are not re-trying with an authorized requst. Instead they 
are failing with an error.

As I can patch the sources and build my own version as a work-around, I would 
like to send you an heads up. Maybe this is an issue with all browsers on 
macOS/iOS that are using the NSURLSession subsystem.

Also you may consider adding a configuration option for the RST_STREAM 
handling, so that the user/administrator can decide whether most of its 
web-clients properly support the RST_STREAM.

Thanks,
Paul

smime.p7s
Description: S/MIME cryptographic signature
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Where does $remote_addr come from?

2017-02-02 Thread Paul Nickerson
According to NGINX documentation, $remote_addr is an embedded variable in
the ngx_http_core_module module. I have searched all around, including the
source code, trying to figure out exactly how NGINX generates this
variable, but I have been unable to find anything beyond the description
"client address". Currently, my best guess is that it's the source address
field in the incoming TCP/IP packet's IPv4 internet header. Is this
correct? Or, does it come from somewhere else?

Relevant documentation:
http://nginx.org/en/docs/http/ngx_http_core_module.html#variables

Thank you,
-- Paul Nickerson

-- 


*CONFIDENTIALITY NOTICE*

The attached information is PRIVILEGED AND CONFIDENTIAL and is intended 
only for the use of the addressee named above.  If the reader of this 
message is not the intended recipient or the employee or agent responsible 
for delivering the message to the intended recipient, please be aware that 
any dissemination, distribution or duplication of this communication is 
strictly prohibited. If you receive this communication in error, please 
notify us immediately by telephone, delete the message and destroy any 
printed copy of the message. Thank you.
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Where does $remote_addr come from?

2017-02-03 Thread Paul Nickerson
I accidentally turned on digest mode for myself on this mailing list (now
turned off), so this might not be threaded. Sorry.

Francis Dalyfrancis at daoine.org
> Exactly, it is what the source code says:
> v->data = r->connection->addr_text.data;
> and then you can track where that addr_text.data value is set.

I thought it might be coming from addr_text in the code, but my experience
with C is dated and limited. I wasn't able to figure out where
addr_text.data is set.

 ~ Paul Nickerson

-- 


*CONFIDENTIALITY NOTICE*

The attached information is PRIVILEGED AND CONFIDENTIAL and is intended 
only for the use of the addressee named above.  If the reader of this 
message is not the intended recipient or the employee or agent responsible 
for delivering the message to the intended recipient, please be aware that 
any dissemination, distribution or duplication of this communication is 
strictly prohibited. If you receive this communication in error, please 
notify us immediately by telephone, delete the message and destroy any 
printed copy of the message. Thank you.
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Where does $remote_addr come from?

2017-02-03 Thread Paul Nickerson
> Reading that file, the next likely looking line is:
> c->addr_text.len = ngx_sock_ntop(c->sockaddr, c->socklen,
>  c->addr_text.data,
>  ls->addr_text_max_len, 0);

Thank you for the boost. From what you said, it looks like the variable is
constructed from c->sockaddr

src/event/ngx_event_accept.c
line 167
c->sockaddr = ngx_palloc(c->pool, socklen);

I chased that down, and it looks like ngx_palloc only allocates some
memory; it doesn't fill it. Moving on.

line 173
ngx_memcpy(c->sockaddr, &sa, socklen);

It looks like ngx_memcpy is a wrapper around the standard C library
function memcpy. For memcpy(A, B, C), it copies to destination A from
source B, and it does amount C. So now I want to know where &sa comes from.

line 70
s = accept(lc->fd, &sa.sockaddr, &socklen);

Here, &sa.sockaddr is being sent into something. I think &sa.sockaddr
becomes c->sockaddr, so I chase this.

Bash
man 2 accept

accept is a Linux system call: "accept a connection on a socket"
int accept(int sockfd, struct sockaddr *addr, socklen_t *addrlen);

"The argument addr is a pointer to a sockaddr structure.  This
structure is filled in with the address of the peer socket, as known
to the communications layer.  The exact format of the address
returned addr is determined by the socket's address family (see
socket(2) and the respective protocol man pages).  When addr is NULL,
nothing is filled in; in this case, addrlen is not used, and should
also be NULL."

And so, the answer to my question appears to be: $remote_addr is
constructed from "struct sockaddr *addr" of the "accept" Linux system call.
It is the address of the peer socket.

I am going to read through socket(2) and the respective protocol man pages,
but at this point we're outside of NGINX, and so the scope of this mailing
list.
Thank you again for your help.

 ~ Paul Nickerson

-- 


*CONFIDENTIALITY NOTICE*

The attached information is PRIVILEGED AND CONFIDENTIAL and is intended 
only for the use of the addressee named above.  If the reader of this 
message is not the intended recipient or the employee or agent responsible 
for delivering the message to the intended recipient, please be aware that 
any dissemination, distribution or duplication of this communication is 
strictly prohibited. If you receive this communication in error, please 
notify us immediately by telephone, delete the message and destroy any 
printed copy of the message. Thank you.
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Where does $remote_addr come from?

2017-02-06 Thread Paul Nickerson
B.R.
> I am curious: apart from a training prospective at code digging, what was
the goal?
> In other words, where did you expect the IP address to come from, if not
from a system network socket?

We have NGINX AWS EC2 Instances behind AWS EC2 ELBs, as well as Fastly's
CDN and maybe some custom load balancers, but sometimes an IP address that
we log is not readily identifiable. I was also seeing some configurations
in our setup that suggested we may have been using $remote_addr
incorrectly, in log auditing for example.

So before I verified that and chased the odd IP's, I wanted to make sure
that I understood exactly what $remote_addr refers to. I thought that maybe
it was actually derived from the HTTP header, or maybe a module could be
modifying it without being explicitly configured to do so, or maybe it's
possible for a bad actor to spoof it. Now I know that it's independent of
the HTTP header, one native module and probably some third party modules
can modify it, and a bad actor would need to spoof the TCP IPv4 internet
header's source address.

I admit, I probably could have been reasonably confident in our
configuration without needing to determine this. But I was surprised to
find there was no documentation or past forum posts saying whether this
variable came from the TCP/IP or the HTTP headers. After that, my sense of
technical discovery took over and kept me interested in the problem.

 ~ Paul Nickerson

-- 


*CONFIDENTIALITY NOTICE*

The attached information is PRIVILEGED AND CONFIDENTIAL and is intended 
only for the use of the addressee named above.  If the reader of this 
message is not the intended recipient or the employee or agent responsible 
for delivering the message to the intended recipient, please be aware that 
any dissemination, distribution or duplication of this communication is 
strictly prohibited. If you receive this communication in error, please 
notify us immediately by telephone, delete the message and destroy any 
printed copy of the message. Thank you.
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Behavior of realip module with this config

2017-02-09 Thread Paul Nickerson
I've got the config below. I don't have these settings reconfigured
anywhere else. My understanding is that no matter anything else at all
anywhere else, and no matter whether the X-Forwarded-For field in the HTTP
header has one or multiple IP addresses, or isn't even present,
$remote_addr will not be altered.

set_real_ip_from 0.0.0.0/0;
real_ip_header X-Forwarded-For;
real_ip_recursive on;

>From what I read, "real_ip_recursive on" means that $remote_addr can only
be set to an IP address that is not in the range set by set_real_ip_from.
And since that's 0.0.0.0/0, there is no IP that can meet this requirement.

Am I correct in my analysis?

http://nginx.org/en/docs/http/ngx_http_realip_module.html

 ~ Paul Nickerson

-- 


*CONFIDENTIALITY NOTICE*

The attached information is PRIVILEGED AND CONFIDENTIAL and is intended 
only for the use of the addressee named above.  If the reader of this 
message is not the intended recipient or the employee or agent responsible 
for delivering the message to the intended recipient, please be aware that 
any dissemination, distribution or duplication of this communication is 
strictly prohibited. If you receive this communication in error, please 
notify us immediately by telephone, delete the message and destroy any 
printed copy of the message. Thank you.
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Behavior of realip module with this config

2017-02-10 Thread Paul Nickerson
On Fri, Feb 10, 2017 at 7:33 AM, Maxim Dounin wrote:
> And real_ip_recursive switched on means that this happens
> recursively.  As a result, with the configuration in question
> nginx will use the first address in X-Forwarded-For provided, if
> any (assuming all addresses are valid).
> Note that "set_real_ip_from 0.0.0.0/0" makes client's address as
> seen by nginx easily spoofable by any client, and it is generally
> a bad idea to use it in production.

Thank you for the reply, Maxim. "set_real_ip_from 0.0.0.0/0" does indeed
seem like a bad idea in production. Thank you for calling that out.

I am confused by this statement in the documentation:
http://nginx.org/en/docs/http/ngx_http_realip_module.html
"If recursive search is enabled, the original client address that matches
one of the trusted addresses is replaced by the last non-trusted address
sent in the request header field."

The language "last non-trusted address" suggests that NGINX looks for
something in real_ip_header which does not match set_real_ip_from. But
maybe I am interpreting that incorrectly.

If set_real_ip_from were set correctly to the host's content delivery
network, load balancer, and reverse proxy infrastructures, then my
interpretation would make sense, as $remote_addr would then get set to the
client's public IP, even if the client has network address translation and
forward proxy infrastructures which append to X-Forwarded-For. But in your
answer, wouldn't $remote_addr be set to the client's private IP address if
their firewall/gateway adds that private IP address to X-Forwarded-For
while it does the NATing? That doesn't seem very useful.

This is an example situation I'm thinking of (all the IPs are random, and
are the IPs "facing" NGINX):

set_real_ip_from 10.6.1.0/24, 8.47.98.0/24;
real_ip_header X-Forwarded-For;
real_ip_recursive on;

client's computer (192.168.1.79) > client's gateway (178.150.189.138) > my
content delivery network (8.47.98.129) > my load balancer (10.6.1.56) > my
NGINX box

X-Forwarded-For = 192.168.1.79, 178.150.189.138, 8.47.98.129

I think in your answer, $remote_addr would be set to 192.168.1.25, while in
my interpretation, it would be set to 178.150.189.138. And in either case,
$realip_remote_addr is 10.6.1.56.

It would be a strangely configured client gateway / firewall / NAT / proxy
that adds to X-Forwarded-For, but it can happen.

I guess I am still confused.

 ~ Paul Nickerson

-- 


*CONFIDENTIALITY NOTICE*

The attached information is PRIVILEGED AND CONFIDENTIAL and is intended 
only for the use of the addressee named above.  If the reader of this 
message is not the intended recipient or the employee or agent responsible 
for delivering the message to the intended recipient, please be aware that 
any dissemination, distribution or duplication of this communication is 
strictly prohibited. If you receive this communication in error, please 
notify us immediately by telephone, delete the message and destroy any 
printed copy of the message. Thank you.
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Behavior of realip module with this config

2017-02-10 Thread Paul Nickerson
On Fri, Feb 10, 2017 at 11:05 AM, Maxim Dounin wrote:
> Note that my answer ("with the configuration in question nginx
> will use the first address in X-Forwarded-For provided") only
> applies to the particular configuration with "set_real_ip_from
> 0.0.0.0/0", and it is incorrect to assume it can be used as an
> universal answer to all questions.

Ah, OK, I see. Everything is making sense now. I somehow didn't see "with
the configuration in question" in your reply.

So "set_real_ip_from 0.0.0.0/0" brings in a special case, where the
leftmost / first IP address is used. It sounds like that's because it
recursively searches back through the list for an untrusted IP, and if it
doesn't find one, then it keeps whatever was the last one checked, which
would be the leftmost IP.

This is matching what I'm seeing, and I now know how to test out a
different configuration. Thank you for the help, Maxim!

 ~ Paul Nickerson

-- 


*CONFIDENTIALITY NOTICE*

The attached information is PRIVILEGED AND CONFIDENTIAL and is intended 
only for the use of the addressee named above.  If the reader of this 
message is not the intended recipient or the employee or agent responsible 
for delivering the message to the intended recipient, please be aware that 
any dissemination, distribution or duplication of this communication is 
strictly prohibited. If you receive this communication in error, please 
notify us immediately by telephone, delete the message and destroy any 
printed copy of the message. Thank you.
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

MySQL Access w/ Nginx

2017-03-02 Thread Paul Romero

Dear Nginx Community:

Do you think NGinx is a viable and advisable solution for providing MySQL
server access to my application ? The basic requirements and goals of the
application are described below.

Although, NGinx is classified as a Web Server which can act as a
Reverse Proxy or Load Balancer, my application does not need exactly
that kind of functionality in the short term.  The short term need is
to allow mobile platforms to access a single MySQL server. Eventually,
there will be multiple MySQL servers and load balancing and failure
fallback will be issues, and perahs caching. That means the basic 
architecture

is as follows.

| Mobile |  <--> Internet <-->  | NGinx | <-->  | MySQL   | <-->  | MySQL |
| System |   (TCP/IP)   | Backend |   | Server|

Initially NGinx, the MySQL Backend, and MySQL Server will all be on the same
Linux host. My main concern is how the MySQL Backend fits and operates 
within

that architecture. (i.e. I am not sure about the correct terminology for the
MySQL Backend.) I assume, but am not sure, it can interact with the NGinx
without additional components, such as Drupal.

The basic requirement is the ability to perform remote MySQL queries and 
operations
with syntax and semantics which are virtually the same as the 
corresponding manual
operations. However, the remote system does not need to use the same 
syntax and semantics
as the module that performs MySQL operations. Also, smooth interaction 
with LAMP PHP
and MySQL components is a requirement. (i.e. I think Apache is not an 
issue.)
Note that application clients will put a large volume of data into the 
MySQL database

and interaction with a Web Server is not an issue at this point.

The priority is to allow a mobile system such as an Android, and 
eventually an Apple,
to access an MySQL server on a Unix/Linux system securely. However, the 
priority
for the same functionality in a conventional Internet host is almost as 
high.


The essential connection and authentication requirements are as follows.

* SSL encryption/authentication
* MySQL authentication
* No passwords etc. are transmitted in the open.
* Support for multiple concurrent connections from the same or multiple 
systems.
* Each remote MySQL user must perform SSL authentication separately and 
there

  is 1-1 relationship between the SSL and MySQL authentication data.


Best Regards,

Paul R.

--


Paul Romero
---
RCOM Communications Software
EMAIL: pa...@rcom-software.com
PHONE: (510)482-2769




___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: nginx 1.13 binaries

2017-07-27 Thread Paul Smith
As Francis stated, you need to look for the mainline version, not the
stable version. They have different repositories.

http://nginx.org/packages/mainline/centos/6/x86_64/RPMS/


On Thu, Jul 27, 2017 at 12:47 PM, Gee Bunny  wrote:
> There might be talk of binaries being available on the page you linked, but
> if you check the repo link I provided (to the NGINX hosted YUM Repository
> for CentOS 6 x86_64) there's no version 1.13 it goes as high as nginx-1.12.1
> only.
>
> Don't take my word for it, see for yourself:
>
> http://nginx.org/packages/centos/6/x86_64/RPMS/
>
 My guess: they are already there.
>
 See http://nginx.org/en/linux_packages.html
>
>
>
> On Sat, Jul 22, 2017 at 9:45 PM, Gee Bunny  wrote:
>>
>> Are there plans to add nginx 1.13 to any of the yum repositories?
>>
>> Like:
>>
>> http://nginx.org/packages/centos/6/x86_64/RPMS/
>>
>> Some of us would like to run the latest version of NGINX with TLS1.3
>> support, without having to compile from source.
>>
>> Are there any plans or ETA to have nginx 1.13 aded to the repos within a
>> timely fashion? Or is the only option to get TLS1.3 support to compile from
>> source for the forseeable future?
>>
>> Thanks
>>
>> P.S. Awesome work on NGINX- best disruptive web-server tech to hit the
>> scene in 20+ years.
>
>
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


sent_http_HEADER Volatile under Nginx 1.2.4

2013-12-12 Thread Paul Taylor
I’m in the process of making some amends to an environment, where my upstream 
servers are sending a custom header (X-No-Cache), which I need to detect and 
alter caching rules within the configuration.

The custom header is visible within the output, and I can re-output it as 
another header through configuration (i.e. add_header  X-Sent-No-Cache 
$sent_http_x_no_cache; ).

However, as soon as I perform any type of testing of this custom header, it 
disappears.

For example, if I was to perform a map on the custom header, try to set an 
Nginx variable to the value of the header, or test within an IF statement, any 
future call to this header is no longer possible. Additionally any setting or 
testing of the header fails.

Unfortunately I have little control of the upstream, so cannot use an 
alternative method (such as proper Cache-Control headers).

Has anyone experienced similar behaviour, or have any pearls of wisdom?

Thanks in advance,

Paul

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: sent_http_HEADER Volatile under Nginx 1.2.4

2013-12-12 Thread Paul Taylor
Hi Maxim,
Thanks for your response. You’re right! Using the map did work (I thought I’d 
tried that, but must have been tired!).
So, now I have one other challenge, the value of $foo that you define below is 
needed to identify whether to cache the response of not. The only issue is that 
I have a number of other directives that I also need to add into the mix - 
therefore I use the set_by_lua code to nest/combine OR within an if 
statement…code below (I’ve kept the variable name as foo, so it’s clear which 
I’m referring to):
map $upstream_http_x_no_cache $foo {
  ""0;
  default   1;
}
set_by_lua $bypass_cache '
  local no_cache_dirs = tonumber(ngx.var.no_cache_dirs) or 0
  local logged_in = tonumber(ngx.var.logged_in) or 0
  local no_cache_header = tonumber(ngx.var.foo) or 0

  if((no_cache_dirs == 1) or (no_cache_header == 1) or (logged_in == 1)) then
return 1;
  end

  return 0;
';
Now when I make the Lua local variable declaration in order to use it, the 
value of $upstream_http_x_no_cache is reset to 0, even when it was set as 1 
originally. If I comment out the line declaring the local variable within the 
Lua call, it returns to being a value of 1 again.
Am I getting the sequencing of events wrong again? Is there any way that I can 
get the value of $upstream_http_x_no_cache into this Lua block, or would I need 
to do it another way?
Thanks very much for your help so far Maxim.
Paul
__
Hello!

On Thu, Dec 12, 2013 at 07:19:56PM +, Paul Taylor wrote:

> I’m in the process of making some amends to an environment, 
> where my upstream servers are sending a custom header 
> (X-No-Cache), which I need to detect and alter caching rules 
> within the configuration.
> 
> The custom header is visible within the output, and I can 
> re-output it as another header through configuration (i.e. 
> add_header  X-Sent-No-Cache $sent_http_x_no_cache; ).
> 
> However, as soon as I perform any type of testing of this custom 
> header, it disappears.
> 
> For example, if I was to perform a map on the custom header, try 
> to set an Nginx variable to the value of the header, or test 
> within an IF statement, any future call to this header is no 
> longer possible. Additionally any setting or testing of the 
> header fails.

Both "set" and "if" directives you mentioned are executed _before_ 
a request is sent to upstream, and at this point there is no 
X-No-Cache header in the response.  Due to this, using the 
$sent_http_x_no_cache variable in "set" or "if" will result in an 
empty value, and this value will be cached for later use.

It's not clear what you are trying to do so I can't advise any 
further, but certainly using the $sent_http_x_no_cache variable in 
"if" or "set" directives isn't going to work, and this is what 
causes behaviour you see.

Just a map{} should work fine though - as long as you don't try to 
call the map before the X-No-Cache header is actually available.  
E.g., something like this should work fine:

map $sent_http_x_no_cache $foo {
""empty;
default   foo;
}

add_header X-Foo $foo;

It might be also a goo idea to use $upstream_http_x_no_cache 
variable instead, see here:

http://nginx.org/en/docs/http/ngx_http_upstream_module.html#variables

-- 
Maxim Dounin
http://nginx.org/

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: sent_http_HEADER Volatile under Nginx 1.2.4

2013-12-16 Thread Paul Taylor
Yup, again, you’re right! I’ve moved the config around, so that I’m testing for 
any ‘true’ value in the proxy_no_cache & proxy_bypass_cache directives 
(removing the existing set_by_lua block).

However, it’s still not behaving as I’d expect.

In the following scenario (note comments):

map $upstream_http_x_no_cache $no_cache_header {
""0;
default   1;
}

proxy_cache_bypass$no_cache_dirs $logged_in; # $no_cache_header;
proxy_no_cache$no_cache_dirs $logged_in; # $no_cache_header;

X-Cache-Status value is MISS, which is correct. Output of $no_cache_header is 1 
(as set in the map).

However, when adding back in the compare on $no_cache_header:

proxy_cache_bypass$no_cache_dirs $logged_in $no_cache_header;
proxy_no_cache$no_cache_dirs $logged_in $no_cache_header;

X-Cache-Status value is still MISS, which is not correct, as it should be 
BYPASS. Output of $no_cache_header is 0.

Unless I’m missing something, it still looks like touching the variable kills 
it?

Thanks again,

Paul

On 13 Dec 2013, at 16:31, Maxim Dounin  wrote:

> Hello!
> 
> On Thu, Dec 12, 2013 at 11:36:21PM +, Paul Taylor wrote:
> 
>> Hi Maxim,
>> Thanks for your response. You’re right! Using the map did work 
>> (I thought I’d tried that, but must have been tired!).
>> So, now I have one other challenge, the value of $foo that you 
>> define below is needed to identify whether to cache the response 
>> of not. The only issue is that I have a number of other 
>> directives that I also need to add into the mix - therefore I 
>> use the set_by_lua code to nest/combine OR within an if 
>> statement…code below (I’ve kept the variable name as foo, so 
>> it’s clear which I’m referring to):
>> map $upstream_http_x_no_cache $foo {
>>  ""0;
>>  default   1;
>> }
>> set_by_lua $bypass_cache '
>>  local no_cache_dirs = tonumber(ngx.var.no_cache_dirs) or 0
>>  local logged_in = tonumber(ngx.var.logged_in) or 0
>>  local no_cache_header = tonumber(ngx.var.foo) or 0
>> 
>>  if((no_cache_dirs == 1) or (no_cache_header == 1) or 
>>  (logged_in == 1)) then
>>return 1;
>>  end
>> 
>>  return 0;
>> ';
>> Now when I make the Lua local variable declaration in order to 
>> use it, the value of $upstream_http_x_no_cache is reset to 0, 
>> even when it was set as 1 originally. If I comment out the line 
>> declaring the local variable within the Lua call, it returns to 
>> being a value of 1 again.
>> Am I getting the sequencing of events wrong again? Is there any 
>> way that I can get the value of $upstream_http_x_no_cache into 
>> this Lua block, or would I need to do it another way?
> 
> Are you going to use the result in proxy_no_cache?  If yes, you 
> can just use multiple variables there, something like this should 
> work:
> 
>proxy_no_cache $upstream_http_x_no_cache
>   $no_cache_dirs
>   $logged_in;
> 
> See here for details:
> 
> http://nginx.org/r/proxy_no_cache
> 
>> Thanks very much for your help so far Maxim.
>> Paul
>> __
>> Hello!
>> 
>> On Thu, Dec 12, 2013 at 07:19:56PM +, Paul Taylor wrote:
>> 
>>> I’m in the process of making some amends to an environment, 
>>> where my upstream servers are sending a custom header 
>>> (X-No-Cache), which I need to detect and alter caching rules 
>>> within the configuration.
>>> 
>>> The custom header is visible within the output, and I can 
>>> re-output it as another header through configuration (i.e. 
>>> add_header  X-Sent-No-Cache $sent_http_x_no_cache; ).
>>> 
>>> However, as soon as I perform any type of testing of this custom 
>>> header, it disappears.
>>> 
>>> For example, if I was to perform a map on the custom header, try 
>>> to set an Nginx variable to the value of the header, or test 
>>> within an IF statement, any future call to this header is no 
>>> longer possible. Additionally any setting or testing of the 
>>> header fails.
>> 
>> Both "set" and "if" directives you mentioned are executed _before_ 
>> a request is sent to upstream, and at this point there is no 
>> X-No-Cache header in the response.  Due to this, using the 
>> $sent_http_x_no_cache variable in "set" or "if" will result in an 
>> empty value, and this value will be cached for later use.
>> 
>> It's not clear what you are trying to do so I c

Re: sent_http_HEADER Volatile under Nginx 1.2.4

2013-12-16 Thread Paul Taylor
Hi Maxim,

Ok, thanks for the clarification.

So to confirm, we are looking for the value of the sent header from the 
upstream, to identify whether the content should be served from the cache, or 
the upstream. Does this therefore mean that the code that we have below, will 
check for the X-No-Cache header, and if present, will always render the content 
from the upstream (no cache), and that if not present, will enable the result 
to be cacheable? If so, and it is only the reporting of the X-Cache-Status 
value that is rendering a false value, then this will give us what we want?

If not, then what suggestions would you have for caching only on the basis of 
this sent http header being present?

Thanks again…nearly there ;)

Paul

On 16 Dec 2013, at 11:12, Maxim Dounin  wrote:

> Hello!
> 
> On Mon, Dec 16, 2013 at 09:22:25AM +0000, Paul Taylor wrote:
> 
>> Yup, again, you’re right! I’ve moved the config around, so that I’m testing 
>> for any ‘true’ value in the proxy_no_cache & proxy_bypass_cache directives 
>> (removing the existing set_by_lua block).
>> 
>> However, it’s still not behaving as I’d expect.
>> 
>> In the following scenario (note comments):
>> 
>> map $upstream_http_x_no_cache $no_cache_header {
>>""0;
>>default   1;
>> }
>> 
>> proxy_cache_bypass$no_cache_dirs $logged_in; # $no_cache_header;
>> proxy_no_cache $no_cache_dirs $logged_in; # $no_cache_header;
>> 
>> X-Cache-Status value is MISS, which is correct. Output of $no_cache_header 
>> is 1 (as set in the map).
>> 
>> However, when adding back in the compare on $no_cache_header:
>> 
>> proxy_cache_bypass$no_cache_dirs $logged_in $no_cache_header;
>> proxy_no_cache $no_cache_dirs $logged_in $no_cache_header;
>> 
>> X-Cache-Status value is still MISS, which is not correct, as it should be 
>> BYPASS. Output of $no_cache_header is 0.
>> 
>> Unless I’m missing something, it still looks like touching the variable 
>> kills it?
> 
> The proxy_cache_bypass directive is expected to be checked before 
> a request is sent to a backend - it is to control whether a 
> request will be served from a cache or passed to a backend.
> 
> That is, what you see is actually expected behaviour - there are 
> no reasons X-Cache-Status to be BYPASS, and the cached 
> $no_cache_header value to be different from 0.
> 
> -- 
> Maxim Dounin
> http://nginx.org/
> 
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

How can the number of parallel/redundant open streams/temp_files be controlled/limited?

2014-06-24 Thread Paul Schlie
I've noticed that multiple (as great as 8 or more) parallel redundant streams 
and corresponding temp_files are opened reading the same file from a reverse 
proxy backend into nginx, upon even a single request by an up-stream client, if 
not already cached (or stored in a static proxy'ed file) local to nginx.

This seems extremely wasteful of bandwidth between nginx and corresponding 
reverse proxy backends; does anyone know why this is occurring and how to limit 
 this behavior?

(For example, upon receiving a request for example small 250MB mp4 video 
podcast video file, it's not uncommon for 8 parallel streams to be opened, each 
sourcing (and competing for bandwidth) a corresponding temp_file, where the 
upstream client appears to being feed by the most complete stream/temp_file; 
but even upon the compete file being fully transferred to the upstream 
client,the remaining streams remain active until they too have finished their 
transfers, and then closed, and their corresponding temp_files deleted. All 
resulting in 2GB of data being transferred when only 250MB needed be, not to 
mention that the transfer took nearly 8x longer to complete, so unless there 
were concerns about the integrity of the connection, it seems like a huge waste 
of resources?)

Thanks, any insight/assistance would be appreciated.

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: How can the number of parallel/redundant open streams/temp_files be controlled/limited?

2014-06-24 Thread Paul Schlie
Thank you; however it appears to have no effect on reverse proxy_store'd static 
files?

(Which seems odd, if it actually works for cached files; as both are first read 
into temp_files, being the root of the problem.)

Any idea on how to prevent multiple redundant streams and corresponding 
temp_files being created when reading/updating a reverse proxy'd static file 
from the backend?

(Out of curiosity, why would anyone ever want many multiple redundant 
streams/temp_files ever opened by default?)

On Jun 24, 2014, at 6:36 PM, Maxim Dounin  wrote:

> Hello!
> 
> On Tue, Jun 24, 2014 at 02:49:57PM -0400, Paul Schlie wrote:
> 
>> I've noticed that multiple (as great as 8 or more) parallel 
>> redundant streams and corresponding temp_files are opened 
>> reading the same file from a reverse proxy backend into nginx, 
>> upon even a single request by an up-stream client, if not 
>> already cached (or stored in a static proxy'ed file) local to 
>> nginx.
>> 
>> This seems extremely wasteful of bandwidth between nginx and 
>> corresponding reverse proxy backends; does anyone know why this 
>> is occurring and how to limit  this behavior?
> 
> http://nginx.org/r/proxy_cache_lock
> 
> -- 
> Maxim Dounin
> http://nginx.org/
> 
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: How can the number of parallel/redundant open streams/temp_files be controlled/limited?

2014-06-24 Thread Paul Schlie
Again thank you. However ... (below)

On Jun 24, 2014, at 8:30 PM, Maxim Dounin  wrote:

> Hello!
> 
> On Tue, Jun 24, 2014 at 07:51:04PM -0400, Paul Schlie wrote:
> 
>> Thank you; however it appears to have no effect on reverse proxy_store'd 
>> static files?
> 
> Yes, it's part of the cache machinery.  The proxy_store 
> functionality is dumb and just provides a way to store responses 
> received, nothing more.

- There should be no difference between how reverse proxy'd files are accessed 
and first stored into corresponding temp_files (and below).

> 
>> (Which seems odd, if it actually works for cached files; as both 
>> are first read into temp_files, being the root of the problem.)
> 
> See above (and below).
> 
>> Any idea on how to prevent multiple redundant streams and 
>> corresponding temp_files being created when reading/updating a 
>> reverse proxy'd static file from the backend?
> 
> You may try to do so using limit_conn, and may be error_page and 
> limit_req to introduce some delay.  But unlikely it will be a 
> good / maintainable / easy to write solution.

- Please consider implementing by default that no more streams than may become 
necessary if a previously opened stream appears to have died (timed out), as 
otherwise only more bandwidth and thereby delay will most likely result to 
complete the request.  Further as there should be no difference between how 
reverse proxy read-streams and corresponding temp_files are created, regardless 
of whether they may be subsequently stored as either symbolically-named static 
files, or hash-named cache files; this behavior should be common to both.

>> (Out of curiosity, why would anyone ever want many multiple 
>> redundant streams/temp_files ever opened by default?)
> 
> You never know if responses are going to be the same.  The part 
> which knows (or, rather, tries to) is called "cache", and has 
> lots of directives to control it.

- If they're not "the same" then the tcp protocol stack has failed, which is 
nothing to do with ngiinx.
(unless a backend server is frequently dropping connections, it's 
counterproductive to open multiple redundant streams; as doing so by default 
will only likely result in higher-bandwidth and thereby slower response 
completion.)

> -- 
> Maxim Dounin
> http://nginx.org/
> 
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: How can the number of parallel/redundant open streams/temp_files be controlled/limited?

2014-06-24 Thread Paul Schlie
Hi, Upon further testing, it appears the problem exists even with proxy_cache'd 
files with "proxy_cache_lock on".

(Please consider this a serious bug, which I'm surprised hasn't been detected 
before; verified on recently released 1.7.2)

On Jun 24, 2014, at 8:58 PM, Paul Schlie  wrote:

> Again thank you. However ... (below)
> 
> On Jun 24, 2014, at 8:30 PM, Maxim Dounin  wrote:
> 
>> Hello!
>> 
>> On Tue, Jun 24, 2014 at 07:51:04PM -0400, Paul Schlie wrote:
>> 
>>> Thank you; however it appears to have no effect on reverse proxy_store'd 
>>> static files?
>> 
>> Yes, it's part of the cache machinery.  The proxy_store 
>> functionality is dumb and just provides a way to store responses 
>> received, nothing more.
> 
> - There should be no difference between how reverse proxy'd files are 
> accessed and first stored into corresponding temp_files (and below).
> 
>> 
>>> (Which seems odd, if it actually works for cached files; as both 
>>> are first read into temp_files, being the root of the problem.)
>> 
>> See above (and below).
>> 
>>> Any idea on how to prevent multiple redundant streams and 
>>> corresponding temp_files being created when reading/updating a 
>>> reverse proxy'd static file from the backend?
>> 
>> You may try to do so using limit_conn, and may be error_page and 
>> limit_req to introduce some delay.  But unlikely it will be a 
>> good / maintainable / easy to write solution.
> 
> - Please consider implementing by default that no more streams than may 
> become necessary if a previously opened stream appears to have died (timed 
> out), as otherwise only more bandwidth and thereby delay will most likely 
> result to complete the request.  Further as there should be no difference 
> between how reverse proxy read-streams and corresponding temp_files are 
> created, regardless of whether they may be subsequently stored as either 
> symbolically-named static files, or hash-named cache files; this behavior 
> should be common to both.
> 
>>> (Out of curiosity, why would anyone ever want many multiple 
>>> redundant streams/temp_files ever opened by default?)
>> 
>> You never know if responses are going to be the same.  The part 
>> which knows (or, rather, tries to) is called "cache", and has 
>> lots of directives to control it.
> 
> - If they're not "the same" then the tcp protocol stack has failed, which is 
> nothing to do with ngiinx.
> (unless a backend server is frequently dropping connections, it's 
> counterproductive to open multiple redundant streams; as doing so by default 
> will only likely result in higher-bandwidth and thereby slower response 
> completion.)
> 
>> -- 
>> Maxim Dounin
>> http://nginx.org/
>> 
>> ___
>> nginx mailing list
>> nginx@nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
> 
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Nginx Windows High Traffic issues

2014-06-27 Thread Paul Schlie
I don't know if what you're experiencing is related to a problem I'm still 
tracking down, specifically that multiple redundant read-streams and 
corresponding temp_files are being opened to read the same file from a backend 
server for what appears to be a single initial get request by a client for a 
large mp4 file which was not yet been locally reverse proxy cashed by nginx as 
an substantially static file. This appears to end up creating 6-10x more 
traffic and disk activity than is actually required to cache the single file 
(depending on how many redundant read-stream/temp_files are created.  If a 
server is attempting to reverse proxy many such relatively large files, it 
could easily saturate nginx with network/disk traffic until most such files 
requested were eventually locally cached.

On Jun 27, 2014, at 4:30 PM, c0nw0nk  wrote:

> My new soloution did not last very long everything shot up again so the mp4
> function is needed to drop I/O usage but as of what the optimal setting for
> the buffers are realy does baffle me
> 
> Posted at Nginx Forum: 
> http://forum.nginx.org/read.php?2,251186,251265#msg-251265
> 
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: How can the number of parallel/redundant open streams/temp_files be controlled/limited?

2014-06-30 Thread Paul Schlie
Is there any possible solution for this problem?

As although proxy_cache_lock may inhibit the creation of multiple proxy_cache 
files, it has seemingly no effect on the creation of multiple proxy_temp files, 
being the true root of the problem which the description of proxy_cache_lock 
claims to solve (as all proxy_cache files are first proxy_temp files, so unless 
proxy_cache_lock can properly prevent the creation of multiple redundant 
proxy_temp file streams, it can seemingly not have the effect it claims to)?

(Further, as temp_file's are used to commonly source all reverse proxy'd reads, 
regardless of whether they're using a cache hashed naming scheme for 
proxy_cache files, or a symbolic naming scheme for reverse proxy'd static 
files; it would be nice if the fix were applicable to both.)


On Jun 24, 2014, at 10:58 PM, Paul Schlie  wrote:

> Hi, Upon further testing, it appears the problem exists even with 
> proxy_cache'd files with "proxy_cache_lock on".
> 
> (Please consider this a serious bug, which I'm surprised hasn't been detected 
> before; verified on recently released 1.7.2)
> 
> On Jun 24, 2014, at 8:58 PM, Paul Schlie  wrote:
> 
>> Again thank you. However ... (below)
>> 
>> On Jun 24, 2014, at 8:30 PM, Maxim Dounin  wrote:
>> 
>>> Hello!
>>> 
>>> On Tue, Jun 24, 2014 at 07:51:04PM -0400, Paul Schlie wrote:
>>> 
>>>> Thank you; however it appears to have no effect on reverse proxy_store'd 
>>>> static files?
>>> 
>>> Yes, it's part of the cache machinery.  The proxy_store 
>>> functionality is dumb and just provides a way to store responses 
>>> received, nothing more.
>> 
>> - There should be no difference between how reverse proxy'd files are 
>> accessed and first stored into corresponding temp_files (and below).
>> 
>>> 
>>>> (Which seems odd, if it actually works for cached files; as both 
>>>> are first read into temp_files, being the root of the problem.)
>>> 
>>> See above (and below).
>>> 
>>>> Any idea on how to prevent multiple redundant streams and 
>>>> corresponding temp_files being created when reading/updating a 
>>>> reverse proxy'd static file from the backend?
>>> 
>>> You may try to do so using limit_conn, and may be error_page and 
>>> limit_req to introduce some delay.  But unlikely it will be a 
>>> good / maintainable / easy to write solution.
>> 
>> - Please consider implementing by default that no more streams than may 
>> become necessary if a previously opened stream appears to have died (timed 
>> out), as otherwise only more bandwidth and thereby delay will most likely 
>> result to complete the request.  Further as there should be no difference 
>> between how reverse proxy read-streams and corresponding temp_files are 
>> created, regardless of whether they may be subsequently stored as either 
>> symbolically-named static files, or hash-named cache files; this behavior 
>> should be common to both.
>> 
>>>> (Out of curiosity, why would anyone ever want many multiple 
>>>> redundant streams/temp_files ever opened by default?)
>>> 
>>> You never know if responses are going to be the same.  The part 
>>> which knows (or, rather, tries to) is called "cache", and has 
>>> lots of directives to control it.
>> 
>> - If they're not "the same" then the tcp protocol stack has failed, which is 
>> nothing to do with ngiinx.
>> (unless a backend server is frequently dropping connections, it's 
>> counterproductive to open multiple redundant streams; as doing so by default 
>> will only likely result in higher-bandwidth and thereby slower response 
>> completion.)
>> 
>>> -- 
>>> Maxim Dounin
>>> http://nginx.org/
>>> 
>>> ___
>>> nginx mailing list
>>> nginx@nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx
>> 
>> ___
>> nginx mailing list
>> nginx@nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
> 

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: How can the number of parallel/redundant open streams/temp_files be controlled/limited?

2014-06-30 Thread Paul Schlie
(Seemingly, it may be beneficial to simply replace the sequentially numbered 
temp_file scheme with hash-named scheme, where if cached, the file is simply 
retained for some period of time and/or other condition, and which may be 
optionally symbolically aliased using their uri path and thereby respectively 
logically accessed as a local static file, or deleted upon no longer being 
needed and not being cached; and thereby kill multiple birds with one stone 
per-se?)

On Jun 30, 2014, at 8:44 PM, Paul Schlie  wrote:

> Is there any possible solution for this problem?
> 
> As although proxy_cache_lock may inhibit the creation of multiple proxy_cache 
> files, it has seemingly no effect on the creation of multiple proxy_temp 
> files, being the true root of the problem which the description of 
> proxy_cache_lock claims to solve (as all proxy_cache files are first 
> proxy_temp files, so unless proxy_cache_lock can properly prevent the 
> creation of multiple redundant proxy_temp file streams, it can seemingly not 
> have the effect it claims to)?
> 
> (Further, as temp_file's are used to commonly source all reverse proxy'd 
> reads, regardless of whether they're using a cache hashed naming scheme for 
> proxy_cache files, or a symbolic naming scheme for reverse proxy'd static 
> files; it would be nice if the fix were applicable to both.)
> 
> 
> On Jun 24, 2014, at 10:58 PM, Paul Schlie  wrote:
> 
>> Hi, Upon further testing, it appears the problem exists even with 
>> proxy_cache'd files with "proxy_cache_lock on".
>> 
>> (Please consider this a serious bug, which I'm surprised hasn't been 
>> detected before; verified on recently released 1.7.2)
>> 
>> On Jun 24, 2014, at 8:58 PM, Paul Schlie  wrote:
>> 
>>> Again thank you. However ... (below)
>>> 
>>> On Jun 24, 2014, at 8:30 PM, Maxim Dounin  wrote:
>>> 
>>>> Hello!
>>>> 
>>>> On Tue, Jun 24, 2014 at 07:51:04PM -0400, Paul Schlie wrote:
>>>> 
>>>>> Thank you; however it appears to have no effect on reverse proxy_store'd 
>>>>> static files?
>>>> 
>>>> Yes, it's part of the cache machinery.  The proxy_store 
>>>> functionality is dumb and just provides a way to store responses 
>>>> received, nothing more.
>>> 
>>> - There should be no difference between how reverse proxy'd files are 
>>> accessed and first stored into corresponding temp_files (and below).
>>> 
>>>> 
>>>>> (Which seems odd, if it actually works for cached files; as both 
>>>>> are first read into temp_files, being the root of the problem.)
>>>> 
>>>> See above (and below).
>>>> 
>>>>> Any idea on how to prevent multiple redundant streams and 
>>>>> corresponding temp_files being created when reading/updating a 
>>>>> reverse proxy'd static file from the backend?
>>>> 
>>>> You may try to do so using limit_conn, and may be error_page and 
>>>> limit_req to introduce some delay.  But unlikely it will be a 
>>>> good / maintainable / easy to write solution.
>>> 
>>> - Please consider implementing by default that no more streams than may 
>>> become necessary if a previously opened stream appears to have died (timed 
>>> out), as otherwise only more bandwidth and thereby delay will most likely 
>>> result to complete the request.  Further as there should be no difference 
>>> between how reverse proxy read-streams and corresponding temp_files are 
>>> created, regardless of whether they may be subsequently stored as either 
>>> symbolically-named static files, or hash-named cache files; this behavior 
>>> should be common to both.
>>> 
>>>>> (Out of curiosity, why would anyone ever want many multiple 
>>>>> redundant streams/temp_files ever opened by default?)
>>>> 
>>>> You never know if responses are going to be the same.  The part 
>>>> which knows (or, rather, tries to) is called "cache", and has 
>>>> lots of directives to control it.
>>> 
>>> - If they're not "the same" then the tcp protocol stack has failed, which 
>>> is nothing to do with ngiinx.
>>> (unless a backend server is frequently dropping connections, it's 
>>> counterproductive to open multiple redundant streams; as doing so by 
>>> default will only likely result in higher-bandwidth and thereby slower 
>>> response completion.)
>>> 
>>>> -- 
>>>> Maxim Dounin
>>>> http://nginx.org/
>>>> 
>>>> ___
>>>> nginx mailing list
>>>> nginx@nginx.org
>>>> http://mailman.nginx.org/mailman/listinfo/nginx
>>> 
>>> ___
>>> nginx mailing list
>>> nginx@nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx
>> 
> 

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: How can the number of parallel/redundant open streams/temp_files be controlled/limited?

2014-06-30 Thread Paul Schlie
Regarding:

> In http, responses are not guaranteed to be the same.  Each 
> response can be unique, and you can't assume responses have to be 
> identical even if their URLs match.

Yes, but potentially unique does not imply that upon the first valid ok or valid
partial response that it will likely be productive to continue to open further 
such
channels unless no longer responsive, as doing so will most likely be counter
productive, only wasting limited resources by establishing redundant channels;
being seemingly why proxy_cache_lock was introduced, as you initially suggested.

On Jun 30, 2014, at 9:32 PM, Maxim Dounin  wrote:

> Hello!
> 
> On Mon, Jun 30, 2014 at 09:14:06PM -0400, Paul Schlie wrote:
> 
>> (Seemingly, it may be beneficial to simply replace the 
>> sequentially numbered temp_file scheme with hash-named scheme, 
>> where if cached, the file is simply retained for some period of 
>> time and/or other condition, and which may be optionally 
>> symbolically aliased using their uri path and thereby 
>> respectively logically accessed as a local static file, or 
>> deleted upon no longer being needed and not being cached; and 
>> thereby kill multiple birds with one stone per-se?)
> 
> Sorry for not following your discussion with yourself, but it looks 
> you didn't understand what was explained earlier:
> 
> [...]
> 
>>>>>>> (Out of curiosity, why would anyone ever want many multiple 
>>>>>>> redundant streams/temp_files ever opened by default?)
>>>>>> 
>>>>>> You never know if responses are going to be the same.  The part 
>>>>>> which knows (or, rather, tries to) is called "cache", and has 
>>>>>> lots of directives to control it.
>>>>> 
>>>>> - If they're not "the same" then the tcp protocol stack has failed, which 
>>>>> is nothing to do with ngiinx.
> 
> In http, responses are not guaranteed to be the same.  Each 
> response can be unique, and you can't assume responses have to be 
> identical even if their URLs match.
> 
> -- 
> Maxim Dounin
> http://nginx.org/
> 
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: How can the number of parallel/redundant open streams/temp_files be controlled/limited?

2014-07-01 Thread Paul Schlie
As it appears a downstream response is not cached until first completely read 
into a temp_file (which for a large file may require 100's if not 1,000's of MB 
be transferred), there appears to be no "cache node formed" which to "lock" or 
serve "stale" responses from, and thereby until the first "cache node" is 
useably created, proxy_cache_lock has nothing to lock requests to?

The code does not appear to be forming a "cache node" using the designated 
cache_key until the requested downstream element has completed transfer as 
you've noted?

For the scheme to work, a lockable cache_node would need to formed immediately 
upon the first unique cache_key request, and not wait until the transfer of the 
requested item being stored into a temp_file is complete; as otherwise multiple 
redundant active streams between nginx and a backend server may be formed, each 
most likely transferring the same information needlessly; being what 
proxy_cache_lock was seemingly introduced to prevent (but it doesn't)?

On Jul 1, 2014, at 7:01 AM, Maxim Dounin  wrote:

> Hello!
> 
> On Mon, Jun 30, 2014 at 11:10:52PM -0400, Paul Schlie wrote:
> 
>> Regarding:
>> 
>>> In http, responses are not guaranteed to be the same.  Each 
>>> response can be unique, and you can't assume responses have to be 
>>> identical even if their URLs match.
>> 
>> Yes, but potentially unique does not imply that upon the first valid ok or 
>> valid
>> partial response that it will likely be productive to continue to open 
>> further such
>> channels unless no longer responsive, as doing so will most likely be counter
>> productive, only wasting limited resources by establishing redundant 
>> channels;
>> being seemingly why proxy_cache_lock was introduced, as you initially 
>> suggested.
> 
> Again: responses are not guaranteed to be the same, and unless 
> you are using cache (and hence proxy_cache_key and various header 
> checks to ensure responses are at least interchangeable), the only 
> thing you can do is to proxy requests one by one.
> 
> If you are using cache, then there is proxy_cache_key to identify 
> a resource requested, and proxy_cache_lock to prevent multiple 
> parallel requests to populate the same cache node (and 
> "proxy_cache_use_stale updating" to prevent multiple requests when 
> updating a cache node).
> 
> In theory, cache code can be improved (compared to what we 
> currently have) to introduce sending of a response being loaded 
> into a cache to multiple clients.  I.e., stop waiting for a cache 
> lock once we've got the response headers, and stream the response 
> body being load to all clients waited for it.  This should/can 
> help when loading large files into a cache, when waiting with 
> proxy_cache_lock for a complete response isn't cheap.  In 
> practice, introducing such a code isn't cheap either, and it's not 
> about using other names for temporary files.
> 
> -- 
> Maxim Dounin
> http://nginx.org/
> 
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: How can the number of parallel/redundant open streams/temp_files be controlled/limited?

2014-07-01 Thread Paul Schlie
Then how could multiple streams and corresponding temp_files ever be created 
upon successive requests for the same $uri with "proxy_cache_key $uri" and 
"proxy_cache_lock on"; if all subsequent requests are locked to the same 
cache_node created by the first request even prior to its completion?

You've previously noted:

> In theory, cache code can be improved (compared to what we 
> currently have) to introduce sending of a response being loaded 
> into a cache to multiple clients.  I.e., stop waiting for a cache 
> lock once we've got the response headers, and stream the response 
> body being load to all clients waited for it.  This should/can 
> help when loading large files into a cache, when waiting with 
> proxy_cache_lock for a complete response isn't cheap.  In 
> practice, introducing such a code isn't cheap either, and it's not 
> about using other names for temporary files.

Being what I apparently incorrectly understood proxy_cache_lock to actually do.

So if not the above, what does proxy_cache_lock actually do upon receipt of 
subsequent requests for the same $uri?


On Jul 1, 2014, at 9:20 AM, Maxim Dounin  wrote:

> Hello!
> 
> On Tue, Jul 01, 2014 at 08:44:47AM -0400, Paul Schlie wrote:
> 
>> As it appears a downstream response is not cached until first 
>> completely read into a temp_file (which for a large file may 
>> require 100's if not 1,000's of MB be transferred), there 
>> appears to be no "cache node formed" which to "lock" or serve 
>> "stale" responses from, and thereby until the first "cache node" 
>> is useably created, proxy_cache_lock has nothing to lock 
>> requests to?
>> 
>> The code does not appear to be forming a "cache node" using the 
>> designated cache_key until the requested downstream element has 
>> completed transfer as you've noted?
> 
> Your reading of the code is incorrect.
> 
> A node in shared memory is created on a request start, and this is 
> enough for proxy_cache_lock to work.  On the request completion, 
> the temporary file is placed into the cache directory, and the 
> node is updated to reflect that the cache file exists and can be 
> used.
> 
> -- 
> Maxim Dounin
> http://nginx.org/
> 
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: How can the number of parallel/redundant open streams/temp_files be controlled/limited?

2014-07-01 Thread Paul Schlie
Thank you for your patience.

I mistakenly thought the 5 second default value associated with 
proxy_cache_lock_timeout was the maximum delay allowed between successive 
responses from the backend server is satisfaction of the reverse proxy request 
being cached prior to the cache lock being released, not the maximum delay for 
the response to be completely received and cached as it appears to actually be.

Now that I understand, please consider setting the default value much higher, 
or more ideally set in proportion to the size of the item being cached and 
possibly some measure of the activity of the stream; as in most circumstances, 
redundant streams should never be opened, as it will tend to only make matters 
worse.

Thank you.

On Jul 1, 2014, at 12:40 PM, Maxim Dounin  wrote:
> On Tue, Jul 01, 2014 at 10:15:47AM -0400, Paul Schlie wrote:
>> Then how could multiple streams and corresponding temp_files 
>> ever be created upon successive requests for the same $uri with 
>> "proxy_cache_key $uri" and "proxy_cache_lock on"; if all 
>> subsequent requests are locked to the same cache_node created by 
>> the first request even prior to its completion?
> 
> Quoting documentation, http://nginx.org/r/proxy_cache_lock:
> 
> : When enabled, only one request at a time will be allowed to 
> : populate a new cache element identified according to the 
> : proxy_cache_key directive by passing a request to a proxied 
> : server. Other requests of the same cache element will either wait 
> : for a response to appear in the cache or the cache lock for this 
> : element to be released, up to the time set by the 
> : proxy_cache_lock_timeout directive.
> 
> So, there are at least two cases "prior to its completion" which 
> are explicitly documented:
> 
> 1. If the cache lock is released - this happens, e.g., if the 
>   response isn't cacheable according to the response headers.
> 
> 2. If proxy_cache_lock_timeout expires.
> 
> -- 
> Maxim Dounin
> http://nginx.org/

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: How can the number of parallel/redundant open streams/temp_files be controlled/limited?

2014-07-01 Thread Paul Schlie
Lastly, is there any way to try to get proxy_store to work in combination with 
proxy_cache, possibly by enabling the completed temp_file to be saved as a 
proxy_store file within its uri logical path hierarchy, and the cache_file 
descriptor aliased to it, or visa versa?

(As it's often nice to be able to view/access cached files within their natural 
uri hierarchy, being virtually impossible if stored using their corresponding 
hashed names alone; and not lose the benefit of being able to lock multiple 
pending requests to the same cache_node being fetched so as to minimize 
otherwise redundant down-stream requests prior to the file being cached.)


On Jul 1, 2014, at 4:11 PM, Paul Schlie  wrote:

> Thank you for your patience.
> 
> I mistakenly thought the 5 second default value associated with 
> proxy_cache_lock_timeout was the maximum delay allowed between successive 
> responses from the backend server is satisfaction of the reverse proxy 
> request being cached prior to the cache lock being released, not the maximum 
> delay for the response to be completely received and cached as it appears to 
> actually be.
> 
> Now that I understand, please consider setting the default value much higher, 
> or more ideally set in proportion to the size of the item being cached and 
> possibly some measure of the activity of the stream; as in most 
> circumstances, redundant streams should never be opened, as it will tend to 
> only make matters worse.
> 
> Thank you.
> 
> On Jul 1, 2014, at 12:40 PM, Maxim Dounin  wrote:
>> On Tue, Jul 01, 2014 at 10:15:47AM -0400, Paul Schlie wrote:
>>> Then how could multiple streams and corresponding temp_files 
>>> ever be created upon successive requests for the same $uri with 
>>> "proxy_cache_key $uri" and "proxy_cache_lock on"; if all 
>>> subsequent requests are locked to the same cache_node created by 
>>> the first request even prior to its completion?
>> 
>> Quoting documentation, http://nginx.org/r/proxy_cache_lock:
>> 
>> : When enabled, only one request at a time will be allowed to 
>> : populate a new cache element identified according to the 
>> : proxy_cache_key directive by passing a request to a proxied 
>> : server. Other requests of the same cache element will either wait 
>> : for a response to appear in the cache or the cache lock for this 
>> : element to be released, up to the time set by the 
>> : proxy_cache_lock_timeout directive.
>> 
>> So, there are at least two cases "prior to its completion" which 
>> are explicitly documented:
>> 
>> 1. If the cache lock is released - this happens, e.g., if the 
>>  response isn't cacheable according to the response headers.
>> 
>> 2. If proxy_cache_lock_timeout expires.
>> 
>> -- 
>> Maxim Dounin
>> http://nginx.org/
> 
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: static file performance "staircase" pattern

2015-05-09 Thread Paul Smith
On Sat, May 9, 2015 at 11:37 AM, Dennis Jacobfeuerborn
 wrote:
> Hi,
> I'm trying to find out how to effectively deliver pages with lots of
> images on a page. Attached you see opening a static html page that
> contains lots of img tags pointing to static images. Please also note
> that all images are cached in the browser (hence the 304 response) so no
> actual data needs to be downloaded.
> All of this is happening on a CentOS 7 system using nginx 1.6.
>
> The question I have is why is it that the responses get increasingly
> longer? There is nothing else happening on that server and I also tried
> various optimizations like keepalive, multi_accept, epoll,
> open_file_cache, etc. but nothing seems to get rid of that "staircase"
> pattern in the image.
>
> Does anybody have an idea what the cause is for this behavior and how to
> improve it?
>
> Regards,
>   Dennis
>

I am not an expert but I believe that most browsers only make between
4 to 6 simultaneous connections to a domain. So the first round of
requests are sent and the response received and then the second round
go out and are received back and so forth. Doing a search for
something like "max downloads per domain" may bring you better
information.

Paul

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Origin/Edge Questions

2016-05-02 Thread Paul Stewart
Hi folks.

 

Today, we have several NGINX systems deployed .. most of it is for caching
frontend to websites via anycasted instance.  

 

A couple of our systems though involve video - streaming of realtime linear
encrypted video. For that project, I'm looking to build out scale in the
system.  Today, there are just a few NGINX systems providing an edge
function but to scale, we need to deploy a large scale origin/edge scenario.
I keep looking around for reference designs and/or any whitepapers on others
who have done this specifically with realtime encrypted linear video and not
finding much . any ideas?

 

Thanks,

Paul

 

 

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

missing something with auth_jwt_key_request

2024-03-11 Thread Christopher Paul

Hi NGINX-users,

I am running nginx version: nginx/1.25.3 (nginx-plus-r31-p1 on Rocky 9.3 
in a lab, trying to get OIDC authentication working to KeyCloak 23.0.7. 
Attached are the relevant files /etc/nginx.conf and included 
/etc/nginx/conf.d files, most of which are from the nginx-openid-connect 
github repo (https://github.com/nginxinc/nginx-openid-connect).


Keycloak and nginx are running on the same VM.

What am I missing/doing wrong? When I try to hit the server, the 
redirect to Keycloak does not happen. I can tell this for sure by 
running "sudo tcpdump -i lo". There are no packets transmitted to 
localhost:8080. When I "curl -v https://rocky.rexconsulting.net";, 
besides no packets between nginx and keycloak, the output of curl is:


* Connected to rocky.rexconsulting.net (10.5.5.90) port 443
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
*  CAfile: /etc/ssl/cert.pem
*  CApath: none
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Unknown (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 / [blank] / UNDEF
* ALPN: server accepted http/1.1
* Server certificate:
*  subject: CN=rocky.rexconsulting.net
*  start date: Mar  7 23:46:13 2024 GMT
*  expire date: Jun  5 23:46:12 2024 GMT
*  subjectAltName: host "rocky.rexconsulting.net" matched cert's 
"rocky.rexconsulting.net"

*  issuer: C=US; O=Let's Encrypt; CN=R3
*  SSL certificate verify ok.
*   Certificate level 0: Public key type ? (256/128 Bits/secBits), 
signed using sha256WithRSAEncryption
*   Certificate level 1: Public key type ? (2048/112 Bits/secBits), 
signed using sha256WithRSAEncryption

* using HTTP/1.x
> GET / HTTP/1.1
> Host: rocky.rexconsulting.net
> User-Agent: curl/8.6.0
> Accept: */*
>
* old SSL session ID is stale, removing
< HTTP/1.1 401 Unauthorized
< Server: nginx/1.25.3
< Date: Tue, 12 Mar 2024 03:07:32 GMT
< Content-Type: text/html
< Content-Length: 179
< Connection: keep-alive
< WWW-Authenticate: Bearer realm="closed site"
<

401 Authorization Required

401 Authorization Required
nginx/1.25.3


* Connection #0 to host rocky.rexconsulting.net left intact

Many thanks for any insight that might be offered on this.

Chris Paul
user  nginx;
worker_processes  auto;
error_log  /var/log/nginx/error.log debug;
pid/var/run/nginx.pid;
load_module modules/ngx_http_js_module.so;
load_module modules/ngx_stream_js_module.so;
events {
worker_connections  1024;
}
http {
include   /etc/nginx/mime.types;
default_type  application/octet-stream;
log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
  '$status $body_bytes_sent "$http_referer" '
  '"$http_user_agent" "$http_x_forwarded_for"';
access_log  /var/log/nginx/access.log  main;
sendfileon;
keepalive_timeout  65;
include /etc/nginx/conf.d/*.conf;
}
# OpenID Connect configuration
#
# Each map block allows multiple values so that multiple IdPs can be supported,
# the $host variable is used as the default input parameter but can be changed.
#
map $host $oidc_authz_endpoint {
#default 
"http://127.0.0.1:8080/auth/realms/master/protocol/openid-connect/auth";;
#www.example.com "https://my-idp/oauth2/v1/authorize";;
default "http://127.0.0.1:8080/realms/rexlab/protocol/openid-connect/auth";;
}

map $host $oidc_authz_extra_args {
# Extra arguments to include in the request to the IdP's authorization
# endpoint.
# Some IdPs provide extended capabilities controlled by extra arguments,
# for example Keycloak can select an IdP to delegate to via the
# "kc_idp_hint" argument.
# Arguments must be expressed as query string parameters and URL-encoded
# if required.
default "";
#www.example.com "kc_idp_hint=another_provider"
}

map $host $oidc_token_endpoint {
#default 
"http://127.0.0.1:8080/auth/realms/master/protocol/openid-connect/token";;
default 
"http://127.0.0.1:8080/auth/realms/rexlab/protocol/openid-connect/token";;
}

map $host $oidc_jwt_keyfile {
#default 
"http://127.0.0.1:8080/auth/realms/master/protocol/openid-connect/certs";;
default "http://127.0.0.1:8080/realms/rexlab/protocol/openid-connect/certs";;
}

map $host $oidc_client {
default "nginx-plus";
}

map $host $oidc_pkce_enable {
default 0;
}

map $host $oidc_client_secret {
default "UxPA37ZTMv36mTGSZhfSTFCl91YYzwcx";
}

map $host $oidc_scopes {
default "openid+profile+email+offline_acc

RE: missing something with auth_jwt_key_request

2024-03-14 Thread Christopher Paul
> -Original Message-
> From: nginx  On Behalf Of Sergey A. Osokin
 
> please correct me if I'm wrong here, but the question is related to NGINX Plus
> and OIDC implementation.

Hi Sergey,

The question is related to NGINX in general. I tried NGINX FOSS first, then 
Plus. Should this "nginx-openid-connect" work differently for Plus vs the FOSS 
version?
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


custom 502 error for stacked proxies

2019-05-02 Thread Paul B. Henson
So, I've got a need for a reverse proxy where first it tries server A;
if it gets a 404 from server A it should try server B, and then just
return whatever happens with server B.

I've got this config so far:

location /_nginx_/ {
internal;
root /var/www/localhost/nginx;
}

location / {

proxy_intercept_errors on;
error_page 403 /_nginx_/error_403.html;
error_page 404 = @server_b;
error_page 405 /_nginx_/error_405.html;
error_page 500 /_nginx_/error_500.html;
error_page 502 /_nginx_/error_503.html;
error_page 503 /_nginx_/error_503.html;
proxy_pass https://serverA;
proxy_redirect http://$host/ /;
proxy_set_header Host $host;
proxy_http_version 1.1;
proxy_connect_timeout 3m;
proxy_read_timeout 3m;
proxy_buffers 1024 4k;
}

location @server_b {
proxy_intercept_errors off;
proxy_pass https://serverB;
proxy_redirect http://$host/ /;
proxy_set_header Host $host;
proxy_http_version 1.1;
proxy_connect_timeout 3m;
proxy_read_timeout 3m;
proxy_buffers 1024 4k;
}

This seems to work *except* when it fails to connect to server B, in which
case it gives a standard nginx 502 error page rather than a custom page.

I've tried all kinds of things, from setting proxy_intercept_errors on
for the @server_b location and adding error_page configuration like in
the / location, and a bunch of other stuff I can't even remember exactly,
but no matter what I do I always get the stock nginx 502 rather than
the custom error page.

Ideally I'd like to just pass through whatever error comes from B, unless
nginx fails completely to connect to B, in which case I'd like to pass
the local custom error page rather than the default nginx page.

What am I missing?

Thanks much...
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: custom 502 error for stacked proxies

2019-05-03 Thread Paul B. Henson
On Fri, May 03, 2019 at 01:47:40PM +0300, Sergey Kandaurov wrote:

> you may want to try recursive error pages in location / {}
> with error_page 502 in @server_b.

Sweet, that did indeed do the trick. Thank you very much for the
suggestion.

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Using proxy_cache_background_update

2017-02-26 Thread Jean-Paul Hemelaar
Hi all,

I tested the new proxy_cache_background_update function to serve stale
content while fetching an update in the background.

I ran into the following issue:
- PHP application running on www.example.com
- Root document lives on /index.php

As soon as the cache has expired:
- A client requests http://www.example.com/
- Nginx returns the stale response
- In the background Nginx will fetch
http://www.mybackend.com/index.html (index.html
instead of index.php or just /)
- The backend server returns a 404 (which is normal)
- The root document remains in stale state as Nginx is unable to fetch it
properly

As a workaround I included "rewrite ^/index.html$ / break;" to rewrite the
/index.html call to a simple / for the backend server.
This works, but is not ideal.

Is there a better way to tell Nginx to just fetch "/"?

Thanks,

Jean-Paul Hemelaar
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

200ms delay when serving stale content and proxy_cache_background_update enabled

2017-03-15 Thread Jean-Paul Hemelaar
Hi,

I noticed a delay of approx. 200ms when the proxy_cache_background_update
is used and Nginx sends stale content to the client.

Current setup:
- Apache webserver as backend sending a slow response delay.php that simply
waits for 1 second: 
- Nginx in front to cache the response, and send stale content it the cache
needs to be refreshed.
- wget sending a request from another machine

Nginx config-block:
location /delay.php {
 proxy_pass  http://backend;
 proxy_next_upstream error timeout invalid_header;
 proxy_redirect http://$host:8000/ http://$host/;
 proxy_buffering on;
 proxy_connect_timeout 1;
 proxy_read_timeout 30;
 proxy_cache_background_update on;

 proxy_http_version 1.1;
 proxy_set_header Connection "";

 proxy_cacheSTATIC;
 proxy_cache_key"$scheme$host$request_uri";
 proxy_cache_use_stale  error timeout invalid_header updating http_500
http_502 http_503 http_504;
 proxy_set_headerHost$host;
 proxy_set_headerX-Real-IP   $remote_addr;
 proxy_set_headerX-Forwarded-For $proxy_add_x_forwarded_for;
 proxy_set_headerAccept-Encoding  "";

 # Just to test if this caused the issue, but it doesn't change
 tcp_nodelay on;
   }

Wget request: time wget --server-response --output-document=/dev/null "
http://www.example.com/delay.php?teststales=true";
Snippet of wget output: X-Cached: STALE
Output of time command: real 0m0.253s

Wget request: time wget --server-response --output-document=/dev/null "
http://www.example.com/delay.php?teststales=true";
Snippet of wget output: X-Cached: UPDATING
Output of time command: real 0m0.022s

So a cache HIT (not shown) or an UPDATING are fast, sending a STALE
response takes some time.
Tcpdump showed that all HTML content and headers are send immediately after
the request has been received, but the last package will be delayed; that's
why I tested the tcp_nodelay option in the config.

I'm running version 1.11-10 with the patch provided by Maxim:
http://hg.nginx.org/nginx/rev/8b7fd958c59f

Any idea's on this?

Thanks,

Jean-Paul
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: proxy_cache_background_update after cache expiry

2017-04-05 Thread Jean-Paul Hemelaar
Hi,

I have a similar issue:
http://mailman.nginx.org/pipermail/nginx/2017-March/053198.html

I noticed (using tcpdump) that all data except the last package is send
immediately.
Can you verify it that's happening in your case as well?

JP

On Wed, Apr 5, 2017 at 1:32 PM, IgorR  wrote:

> Hello,
>
> I'm trying to configure nginx to use proxy_cache_background_update but it
> seems like after expiry it still waits for the full roundtrip to the
> backend, returning a MISS in X-Cache-Status. What am I MISSing?
>
> I'm using nginx 1.11.12 under ubuntu 14.04 running inside docker, but
> hopefully this is too much detail.
>
> location ~ ^/?(\d+/[^/]+)?/?$
> {
>expires 20s;
>
>proxy_cache app_cache;
>proxy_cache_lock on;
>
>proxy_cache_bypass $http_upgrade;
>
>proxy_pass http://172.17.0.2:5000;
>proxy_http_version 1.1;
>error_log/nginxerror.log debug;
>
>add_header X-Cache-Status $upstream_cache_status;
>
>proxy_cache_use_stale error timeout updating http_500 http_502 http_503
> http_504;
>proxy_cache_background_update on;
>
>break;
> }
>
> NB: This is a duplicate of my so question, I was kindly advised on the
> nginx
> IRC to repost here for better chances, the original question is here:
> http://stackoverflow.com/questions/43223993/nginx-
> proxy-cache-background-update-after-cache-expiry
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,273417,273417#msg-273417
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: 200ms delay when serving stale content and proxy_cache_background_update enabled

2017-05-30 Thread Jean-Paul Hemelaar
I think this solves the issue: http://hg.nginx.org/nginx/rev/9552758a786e

Thanks,

JP

On Wed, Mar 15, 2017 at 11:05 AM, Jean-Paul Hemelaar 
wrote:

> Hi,
>
> I noticed a delay of approx. 200ms when the proxy_cache_background_update
> is used and Nginx sends stale content to the client.
>
> Current setup:
> - Apache webserver as backend sending a slow response delay.php that
> simply waits for 1 second: 
> - Nginx in front to cache the response, and send stale content it the
> cache needs to be refreshed.
> - wget sending a request from another machine
>
> Nginx config-block:
> location /delay.php {
>  proxy_pass  http://backend;
>  proxy_next_upstream error timeout invalid_header;
>  proxy_redirect http://$host:8000/ http://$host/;
>  proxy_buffering on;
>  proxy_connect_timeout 1;
>  proxy_read_timeout 30;
>  proxy_cache_background_update on;
>
>  proxy_http_version 1.1;
>  proxy_set_header Connection "";
>
>  proxy_cacheSTATIC;
>  proxy_cache_key"$scheme$host$request_uri";
>  proxy_cache_use_stale  error timeout invalid_header updating http_500
> http_502 http_503 http_504;
>  proxy_set_headerHost$host;
>  proxy_set_headerX-Real-IP   $remote_addr;
>  proxy_set_headerX-Forwarded-For $proxy_add_x_forwarded_for;
>  proxy_set_headerAccept-Encoding  "";
>
>  # Just to test if this caused the issue, but it doesn't change
>  tcp_nodelay on;
>}
>
> Wget request: time wget --server-response --output-document=/dev/null "
> http://www.example.com/delay.php?teststales=true";
> Snippet of wget output: X-Cached: STALE
> Output of time command: real 0m0.253s
>
> Wget request: time wget --server-response --output-document=/dev/null "
> http://www.example.com/delay.php?teststales=true";
> Snippet of wget output: X-Cached: UPDATING
> Output of time command: real 0m0.022s
>
> So a cache HIT (not shown) or an UPDATING are fast, sending a STALE
> response takes some time.
> Tcpdump showed that all HTML content and headers are send immediately
> after the request has been received, but the last package will be delayed;
> that's why I tested the tcp_nodelay option in the config.
>
> I'm running version 1.11-10 with the patch provided by Maxim:
> http://hg.nginx.org/nginx/rev/8b7fd958c59f
>
> Any idea's on this?
>
> Thanks,
>
> Jean-Paul
>
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Different Naxsi rulesets

2017-11-12 Thread Jean-Paul Hemelaar
Hi!

I'm using Nginx together with Naxsi; so not sure it this is the correct
place for this post, but I'll give it a try.

I want to configure two detection thresholds: a strict detection threshold
for 'far away countries', and a less-strict set
for local countries. I'm using a setup like:

location /strict/ {
 include /usr/local/nginx/naxsi.rules.strict;

 proxy_pass  http://app-server/;
}

location /not_so_strict/ {
 include /usr/local/nginx/naxsi.rules.not_so_strict;

 proxy_pass  http://app-server/;
}

location / {
 # REMOVED BUT THIS WORKS:
 # include /usr/local/nginx/naxsi.rules.not_so_strict;
 set $ruleSet "strict";
 if ( $geoip_country_code ~ (TRUSTED_CC_1|TRUSTED_CC_2TRUSTED_CC_3) ) {
set $ruleSet "not_so_strict";
 }

 rewrite ^(.*)$ /$ruleSet$1 last;
}

location /RequestDenied {
return 403;
}


The naxsi.rules.strict file contains the check rules:
CheckRule "$SQL >= 8" BLOCK;
etc.

For some reason this doesn't work. The syntax is ok, and I can reload
Nginx. However the firewall never triggers. If I uncomment the include in
the location-block / it works perfectly.
Any idea's why this doesn't work, or any better setup to use different
rulesets based on some variables?

Thanks,

JP
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Different Naxsi rulesets

2017-11-12 Thread Jean-Paul Hemelaar
Hi Aziz,

True; this got lost during my copy-anonymize-paste process. The real config
doesn't have this.

Thanks so far,

JP

On Sun, Nov 12, 2017 at 2:34 PM, Aziz Rozyev  wrote:

> at least you’re missing or (|) operator between
>
> > TRUSTED_CC_2  and TRUSTED_CC_3
>
>
>
> br,
> Aziz.
>
>
>
>
>
> > On 12 Nov 2017, at 14:03, Jean-Paul Hemelaar 
> wrote:
> >
> > Hi!
> >
> > I'm using Nginx together with Naxsi; so not sure it this is the correct
> place for this post, but I'll give it a try.
> >
> > I want to configure two detection thresholds: a strict detection
> threshold for 'far away countries', and a less-strict set
> > for local countries. I'm using a setup like:
> >
> > location /strict/ {
> >  include /usr/local/nginx/naxsi.rules.strict;
> >
> >  proxy_pass  http://app-server/;
> > }
> >
> > location /not_so_strict/ {
> >  include /usr/local/nginx/naxsi.rules.not_so_strict;
> >
> >  proxy_pass  http://app-server/;
> > }
> >
> > location / {
> >  # REMOVED BUT THIS WORKS:
> >  # include /usr/local/nginx/naxsi.rules.not_so_strict;
> >  set $ruleSet "strict";
> >  if ( $geoip_country_code ~ (TRUSTED_CC_1|TRUSTED_CC_2TRUSTED_CC_3)
> ) {
> > set $ruleSet "not_so_strict";
> >  }
> >
> >  rewrite ^(.*)$ /$ruleSet$1 last;
> > }
> >
> > location /RequestDenied {
> > return 403;
> > }
> >
> >
> > The naxsi.rules.strict file contains the check rules:
> > CheckRule "$SQL >= 8" BLOCK;
> > etc.
> >
> > For some reason this doesn't work. The syntax is ok, and I can reload
> Nginx. However the firewall never triggers. If I uncomment the include in
> the location-block / it works perfectly.
> > Any idea's why this doesn't work, or any better setup to use different
> rulesets based on some variables?
> >
> > Thanks,
> >
> > JP
> >
> >
> > ___
> > nginx mailing list
> > nginx@nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Different Naxsi rulesets

2017-11-13 Thread Jean-Paul Hemelaar
Hi,

I have updated the config to use 'map' instead of the if-statements. That's
indeed a better way.
The problem however remains:

- Naxsi mainrules are in the http-block
- Config similar to:

map $geoip_country_code $ruleSetCC {
default "strict";
CC1 "relaxed";
CC2 "relaxed";
}

location /strict/ {
   include /usr/local/nginx/naxsi.rules.strict;

   proxy_pass  http://app-server/;
}

location /relaxed/ {
   include /usr/local/nginx/naxsi.rules.relaxed;

   proxy_pass  http://app-server/;
}

location / {
   include /usr/local/nginx/naxsi.rules.default;

   set $ruleSet $ruleSetCC;
   rewrite ^(.*)$ /$ruleSet$1 last;
}


It's always using naxsi.rules.default. If this line is removed it's not
using any rules (pass-all).

Thanks so far!

JP





On Mon, Nov 13, 2017 at 2:14 PM, Aziz Rozyev  wrote:

> At first glance config looks correct, so probably it’s something with naxi
> rulesets.
> Btw, why don’t you use maps?
>
> map $geoip_coutnry_code $strictness {
>   default “strict";
>   CC_1“not-so-strict";
>   CC_2“not-so-strict";
>   # .. more country codes;
> }
>
> # strict and not-so-strict locations
>
> map $strictness $path {
>"strict” "/strict/";
>"not-so-strict”  "/not-so-strict/“;
> }
>
> location / {
>return 302 $path;
># ..
> }
>
>
> br,
> Aziz.
>
>
>
>
>
> > On 12 Nov 2017, at 14:03, Jean-Paul Hemelaar 
> wrote:
> >
> > T THIS WORKS:
> >  # include /usr/local/n
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Different Naxsi rulesets

2017-11-15 Thread Jean-Paul Hemelaar
Hi,

With help from the Naxsi maillist I found that my idea is indeed not
possible.
Naxsi doesn't process subrequests, so that's why it didn't work as I
expected.
It seems to be on the roadmap to change this behavior.

My workaround for now it to move the two rulesets into different server
blocks in Nginx:

Serverblock 1 listening on port 8080 makes the decision to send the request
to the strict or not-strict Naxsi
Serverblock 2 listening on port 8081 applies the strict rules
Serverblock 3 listening on port 8082 applies the less-strict rules

This works!

Thanks for your help,

JP



On Mon, Nov 13, 2017 at 8:30 PM, Aziz Rozyev  wrote:

> hello,
>
> how about logs? does naxisi provide any variables that can be monitored?
>
> so far it seems that your rules in ‘strict|relaxed’ are not triggering,
> the ‘default’
> one will always hit (as expected), as it’s first location ‘/‘ from where
> you route to other 2 locations.
>
> also, try to log in debug mode, may be that will give more insights.
>
> br,
> Aziz.
>
>
>
>
>
> > On 13 Nov 2017, at 21:47, Jean-Paul Hemelaar 
> wrote:
> >
> > Hi,
> >
> > I have updated the config to use 'map' instead of the if-statements.
> That's indeed a better way.
> > The problem however remains:
> >
> > - Naxsi mainrules are in the http-block
> > - Config similar to:
> >
> > map $geoip_country_code $ruleSetCC {
> > default "strict";
> > CC1 "relaxed";
> > CC2 "relaxed";
> > }
> >
> > location /strict/ {
> >include /usr/local/nginx/naxsi.rules.strict;
> >
> >proxy_pass  http://app-server/;
> > }
> >
> > location /relaxed/ {
> >include /usr/local/nginx/naxsi.rules.relaxed;
> >
> >proxy_pass  http://app-server/;
> > }
> >
> > location / {
> >include /usr/local/nginx/naxsi.rules.default;
> >
> >set $ruleSet $ruleSetCC;
> >rewrite ^(.*)$ /$ruleSet$1 last;
> > }
> >
> >
> > It's always using naxsi.rules.default. If this line is removed it's not
> using any rules (pass-all).
> >
> > Thanks so far!
> >
> > JP
> >
> >
> >
> >
> >
> > On Mon, Nov 13, 2017 at 2:14 PM, Aziz Rozyev  wrote:
> > At first glance config looks correct, so probably it’s something with
> naxi rulesets.
> > Btw, why don’t you use maps?
> >
> > map $geoip_coutnry_code $strictness {
> >   default “strict";
> >   CC_1“not-so-strict";
> >   CC_2“not-so-strict";
> >   # .. more country codes;
> > }
> >
> > # strict and not-so-strict locations
> >
> > map $strictness $path {
> >"strict” "/strict/";
> >"not-so-strict”  "/not-so-strict/“;
> > }
> >
> > location / {
> >return 302 $path;
> ># ..
> > }
> >
> >
> > br,
> > Aziz.
> >
> >
> >
> >
> >
> > > On 12 Nov 2017, at 14:03, Jean-Paul Hemelaar 
> wrote:
> > >
> > > T THIS WORKS:
> > >  # include /usr/local/n
> >
> > ___
> > nginx mailing list
> > nginx@nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
> >
> > ___
> > nginx mailing list
> > nginx@nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

No live upstreams

2018-05-10 Thread Jean-Paul Hemelaar
Hi!

I'm using Nginx as a proxy to Apache.

I noticed some messages in my error.log that I cannot explain:
27463#0: *125209 no live upstreams while connecting to upstream, client:
x.x.x.x, server: www.xxx.com, request: "GET /xxx/ HTTP/1.1", upstream: "
http://backend/xxx/";, host: "www.xxx.com"

The errors appear after Apache returned some 502-errors; however in the
configuration I have set the following:

upstream backend {
server  10.0.0.2:8080 max_fails=3 fail_timeout=10;
server  127.0.0.1:8000 backup;
keepalive   6;
}

server {
location / {
 proxy_pass  http://backend;
 proxy_next_upstream error timeout invalid_header;

 etc.
}

I expected that, if Apache returns a few 502's:
- Nginx will not try to proceed to the next upstream as proxy_next_upstream
doesn't mention the http_502 but just forward the 502 to the client
- if the upstream is marked as failed (what I didn't expect to happen) the
server will try the backup server instead

What can be happening:
- If the primary server sends a 502 it tries the backup that will send a
502 as well. Because the max_fails is not defined it will be marked as
failed after the first failure.

Not sure if the above assumption is true. If it is, why are they marked as
failed even when the http_502 is not mentioned?

Thanks!

JP
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Why does nginx work at the server IP address only with default root location?

2013-04-21 Thread Paul N. Pace
I have set up a server on Rackspace using Ubuntu 12.04 and the nginx stable PPA.

Using the default root location of /usr/share/nginx/html the
index.html file is displayed when I call the public IP address of the
server.

If I change the root location to my own /var/www/example.com/public
the index.html file is not displayed.

Output of ll on /var/www/example.com/public:

drwxrwsr-x 2 www-data www-data 4096 Apr 21 04:13 ./
drwxrwsr-x 7 www-data www-data 4096 Apr 21 03:55 ../
-rw-rw-r-- 1 www-data www-data  624 Apr 21 04:17 index.html

This is the only change I make and I get the failure, but I don't
expect it. What am I doing wrong?

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Why does nginx work at the server IP address only with default root location?

2013-04-21 Thread Paul N. Pace
Steve, you are a Linux genius, and I am but a humble plebe, forever in
your debt.

On Sun, Apr 21, 2013 at 1:09 PM, Steve Holdoway  wrote:
> At a guess, /var or /var/www isn't readable by www-data
>
> Steve
>
> On 22/04/2013, at 7:50 AM, "Paul N. Pace"  wrote:
>
>> I have set up a server on Rackspace using Ubuntu 12.04 and the nginx stable 
>> PPA.
>>
>> Using the default root location of /usr/share/nginx/html the
>> index.html file is displayed when I call the public IP address of the
>> server.
>>
>> If I change the root location to my own /var/www/example.com/public
>> the index.html file is not displayed.
>>
>> Output of ll on /var/www/example.com/public:
>>
>> drwxrwsr-x 2 www-data www-data 4096 Apr 21 04:13 ./
>> drwxrwsr-x 7 www-data www-data 4096 Apr 21 03:55 ../
>> -rw-rw-r-- 1 www-data www-data  624 Apr 21 04:17 index.html
>>
>> This is the only change I make and I get the failure, but I don't
>> expect it. What am I doing wrong?
>>
>> ___
>> nginx mailing list
>> nginx@nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Online Jobs and Entertainment

2013-04-29 Thread Paul N. Pace
Speaking of which, what do you guys use for a spam filter? I've been thinking 
about setting up mailman. I'm surprised at how little spam I've seen here given 
how popular nginx is. (I realize this gem came from a forum post).
--Original Message--
From: Rickey
Sender: nginx-boun...@nginx.org
To: nginx@nginx.org
ReplyTo: nginx@nginx.org
Subject: Online Jobs and Entertainment
Sent: Apr 29, 2013 9:44 AM

http://kholyar.blogspot.com/

Open It And Enjoy :)

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,238720,238720#msg-238720

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Nginx 1.4 problem

2013-05-01 Thread Paul N. Pace
On Wed, May 1, 2013 at 2:07 PM, Maxim Dounin  wrote:
> Hello!
>
> On Wed, May 01, 2013 at 11:17:10AM -0700, Alder Network wrote:
>
>> Just for clarity, I want to be listening on both IPv4 and IPv6 on the same
>> port.
>
> You have to write
>
> listen 80;
> listen [::]:80;
>
> to listen on both IPv4 and IPv6.

Doesn't that require ipv6only=on?

 listen 80;
 listen [::]:80 ipv6only=on;


>
>>
>>
>> On Wed, May 1, 2013 at 11:00 AM, Alder Network wrote:
>>
>> > netstat -pln shows the server is waiting on that port.
>> >
>> > Yes, I have been using in server section
>> > listen [::]:80;
>> > What is supposed to be for IPV4 now?
>> >
>> > I'll go over the changelist later, Thanks,
>> >
>> > - Alder
>> >
>> >
>> > On Wed, May 1, 2013 at 10:40 AM, Maxim Dounin  wrote:
>> >
>> >> Hello!
>> >>
>> >> On Wed, May 01, 2013 at 10:13:34AM -0700, Alder Network wrote:
>> >>
>> >> >   Tried to upgrade to just-released Nginx1.4. TCP 3-way hand-shake
>> >> > aborted by server's ACK+RST packet, but netstat shows server
>> >> > is listening on that port. Any config has been changed since Nginx 1.2
>> >> > to 1.4 in this regard?
>> >>
>> >> There are lots of changes in 1.4.0 compared to 1.2.x, see
>> >> http://nginx.org/en/CHANGES-1.4.
>> >>
>> >> In this particular case I would recommend checking if nginx is
>> >> listening on the port, the address, and the protocol in question.
>> >> Note that since 1.3.4 ipv6only listen option is on by default, and
>> >> if you have
>> >>
>> >> listen [::]:80;
>> >>
>> >> in your config, it no longer implies IPv4 addresses regardless of
>> >> your OS settings.
>> >>
>> >> --
>> >> Maxim Dounin
>> >> http://nginx.org/en/donation.html
>> >>
>> >> ___
>> >> nginx mailing list
>> >> nginx@nginx.org
>> >> http://mailman.nginx.org/mailman/listinfo/nginx
>> >>
>> >
>> >
>
>> ___
>> nginx mailing list
>> nginx@nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
>
> --
> Maxim Dounin
> http://nginx.org/en/donation.html
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


[no subject]

2013-05-02 Thread Paul N. Pace
I am trying my first install of Mailman. I have a working
Postfix/Dovecot/MySQL mail server running Ubuntu 12.04 and nginx
stable 1.2.7.

I am following the ngnix Mailman wiki article

http://wiki.nginx.org/Mailman

and the Ubuntu Official Documentation for setting up Mailman.

https://help.ubuntu.com/12.04/serverguide/mailman.html

I had to install thttpd from the deb file because starting with 12.04
it is not available in the repositories.

Other than that, I tried to follow both guides to the letter. When I
go to http://lists.example.com I get redirected to
http://lists.example.com/mailman/listinfo (on Chrome and FF, but not
IE) and I get a 400 Bad Request Request Header Or Cookie Too Large.

Any ideas on where to start looking?

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: your mail

2013-05-02 Thread Paul N. Pace
On Thu, May 2, 2013 at 1:21 PM, Francis Daly  wrote:
> On Thu, May 02, 2013 at 01:05:29PM -0700, Paul N. Pace wrote:
>
> Hi there,
>
>> Other than that, I tried to follow both guides to the letter. When I
>> go to http://lists.example.com I get redirected to
>> http://lists.example.com/mailman/listinfo (on Chrome and FF, but not
>> IE) and I get a 400 Bad Request Request Header Or Cookie Too Large.
>
> Different redirection per client is unexpected. I'm guessing that the
> browser cache wasn't cleared? It's frequently simplest to test using
> "curl" to see exactly what response is sent.

I did try clearing cache and cookies as well as opening the site on a
device that had never opened it (my BlackBerry) and received the same
error.

Curl just states "moved permanently" as per the changes put in the
sites-available file (see below).

>
>> Any ideas on where to start looking?
>
> Your nginx.conf almost certainly does a "proxy_pass" to the web server
> that actually runs mailman.
>
> I suggest you confirm that mailman is installed and working correctly
> on that web server -- if it isn't, nginx won't help.

How to do this other than viewing the mailman page?

> If the 400 error comes from nginx, there should be something in the logs
> to indicate the nature of the problem.

Strangely, the logs do not state any errors. This is the server block
I added to sites-available file (mostly) as per the nginx wiki. Was I
supposed to add this to the nginx.conf file?

server {
listen [::]:80;
server_name lists.example.com;
root /usr/lib;
access_log /var/www/example.com/logs/access.log;
error_log /var/www/example.com/logs/error.log;

location = / {
rewrite ^ /mailman/listinfo permanent;
}

location / {
rewrite ^ /mailman$uri?$args;
}

location = /mailman/ {
rewrite ^ /mailman/listinfo permanent;
}

location /mailman/ {
include proxy_params;
proxy_pass http://127.0.0.1/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}

location /cgi-bin {
rewrite ^/cgi-bin(.*)$ $1 permanent;
}

location /images/mailman {
alias /var/lib/mailman/icons;
}

location /pipermail {
alias /var/lib/mailman/archives/public;
autoindex on;
}
}


>
> f
> --
> Francis Dalyfran...@daoine.org
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: [1.4.1] Finding docroot directory?

2013-05-13 Thread Paul N. Pace
This Ars Technica article is where I learned how to use nginx. Lee
Hutchinson does a good job explaining all that.

http://arstechnica.com/gadgets/2012/11/how-to-set-up-a-safe-and-secure-web-server/

On Mon, May 13, 2013 at 4:02 AM, Shohreh  wrote:
> I found a work-around: Reading "/var/log/nginx/error.log" includes a warning
> that Nginx can't find "/usr/share/nginx/html/favicon.ico", so for some
> reason, Nginx uses /usr/share/nginx/html/ instead of /var/www.
>
> I'm confused about the multiple configuration files used by Nginx:
>
> /etc/nginx.conf
> /etc/conf.d/
> /etc/sites-available/
> /etc/sites-enabled/
>
> Why are there more than one?
>
> Thank you.
>
> Posted at Nginx Forum: 
> http://forum.nginx.org/read.php?2,239114,239115#msg-239115
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Hosting multiple domains

2013-06-15 Thread Paul N. Pace
I have a server that I set up to run several domains from and it has
worked great and without issue for about 6 months.

I have another server that I had set up and was only running one
domain from it and I just added a second domain. For some reason, this
second server does not want to serve two domains, and I can find no
substantial differences in the configuration files (nginx.conf and
sites-available files).

On both servers I put a symlink in the sites-enabled folder to the
corresponding sites-available file.

On the second, problematic server, when creating a symlink to the
second site and restarting nginx, testing the second domain only
brings up the first domain. Rebooting the server disables both domains
and the server appears unresponsive, except that I can SSH into it.
Then removing the symlink to the second domain and restarting nginx
returns the server to serving the one domain as it has been doing.

The first server is running nginx 1.5.0  and the second server is
running nginx 1.4.1.

What should I be looking at to resolve this issue?

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Hosting multiple domains

2013-06-17 Thread Paul N. Pace
Thank you Steve for nginx -t, and Sajan was correct, I had a syntax
error in a server block.

However, while I was troubleshooting I noticed my log files getting
rather huge. I keep the access_log and error_log files in the
directories for each site.

How can I keep these log files to a reasonable size without losing the
data? (I use Piwik to analyze access logs, so I don't want to lose any
data).

Thanks!

On Sat, Jun 15, 2013 at 8:04 PM, Steve Holdoway  wrote:
> Hello!
>
> On Sat, 2013-06-15 at 19:39 -0700, Paul N. Pace wrote:
>> I have a server that I set up to run several domains from and it has
>> worked great and without issue for about 6 months.
>>
>> I have another server that I had set up and was only running one
>> domain from it and I just added a second domain. For some reason, this
>> second server does not want to serve two domains, and I can find no
>> substantial differences in the configuration files (nginx.conf and
>> sites-available files).
>>
>> On both servers I put a symlink in the sites-enabled folder to the
>> corresponding sites-available file.
>>
>> On the second, problematic server, when creating a symlink to the
>> second site and restarting nginx, testing the second domain only
>> brings up the first domain. Rebooting the server disables both domains
>> and the server appears unresponsive, except that I can SSH into it.
>> Then removing the symlink to the second domain and restarting nginx
>> returns the server to serving the one domain as it has been doing.
>>
>> The first server is running nginx 1.5.0  and the second server is
>> running nginx 1.4.1.
>>
>> What should I be looking at to resolve this issue?
>>
> Without seeing the config files/error logs, it's difficult to find the
> problem. However, I can confirm that both name and IP address based
> hosting works perfectly.
>
> nginx -t
>
> may well help identify incorrect config files.
>
> Note there is some precedence in the listen 80 / listen ip:80 statements
> which might be causing the problem.
>
> Steve
>
> --
> Steve Holdoway BSc(Hons) MNZCS
> http://www.greengecko.co.nz
> Linkedin: http://www.linkedin.com/in/steveholdoway
> Skype: sholdowa
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Update nginx with Ubuntu PPA

2013-07-22 Thread Paul N. Pace
On Mon, Jul 22, 2013 at 10:13 AM, howard chen  wrote:
> I am upgrading nginx to latest 1.4.1 using PPA. repository.
>
> 1. After install, do I need to restart it manually, or it is restarted
> automatically?
> 2. Is reload enough for the nginx upgrade? Or do I need to restart or
> stop/start?

If you are using the apt-get upgrade or aptitude upgrade commands, the
service will be restarted for you.

You may want to run sudo nginx -t to check for errors.

>
> Thanks.
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Update nginx with Ubuntu PPA

2013-07-25 Thread Paul N. Pace
On Thu, Jul 25, 2013 at 10:29 AM, Valentin V. Bartenev  wrote:
> On Tuesday 23 July 2013 12:24:38 JackB wrote:
>> openletter Wrote:
>> ---
>>
>> > If you are using the apt-get upgrade or aptitude upgrade commands, the
>> > service will be restarted for you.
>>
>> This might be a little off topic, but how can one upgrade nginx on ubuntu
>> with the official ppa via apt without having a restart of nginx but an
>> upgrade instead?  (/etc/init.d/nginx upgrade)
>>
>
> Please note, there is no "official ppa".
>
> Official nginx repositories for Ubuntu (and other Linux ditros) are here:
> http://nginx.org/en/linux_packages.html
>
>   wbr, Valentin V. Bartenev


Yes, there is no official PPA.

The PPA I and many others use is unofficial, but seems to be well
maintained (thanks for that, whoever you are).

Someone wanting to use the same unofficial repository may execute the following:

add-apt-repository ppa:nginx/stable
apt-get update
apt-get install nginx

If you want to use the devel version, then replace ppa:nginx/stable
with ppa:nginx/development

If you don't have add-apt-repository, then apt-get install
python-software-properties.

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: How to turn off gzip compression for SSL traffic

2013-08-18 Thread Paul N. Pace
Igor said:
>You have to split the dual mode server section into two server server sections 
>and set "gzip off"
>SSL-enabled on. There is no way to disable gzip in dual mode server section, 
>but if you really
>worry about security in general the server sections should be different.

Adie said:
>This is why Igor recommends you to split the server config for SSL and 
>non-SSL, and put 'gzip
>on' only at the non-SSL one.

So I can be clear, I have 'gzip_vary on' in my http block and in
subsequent HTTPS blocks (I separate HTTP from HTTPS) I have
'gzip_vary' off. Am I doing it right?

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: How to turn off gzip compression for SSL traffic

2013-08-18 Thread Paul N. Pace
On Sun, Aug 18, 2013 at 12:31 PM, Paul N. Pace  wrote:
> Igor said:
>>You have to split the dual mode server section into two server server 
>>sections and set "gzip off"
>>SSL-enabled on. There is no way to disable gzip in dual mode server section, 
>>but if you really
>>worry about security in general the server sections should be different.
>
> Adie said:
>>This is why Igor recommends you to split the server config for SSL and 
>>non-SSL, and put 'gzip
>>on' only at the non-SSL one.
>
> So I can be clear, I have 'gzip_vary on' in my http block and in
> subsequent HTTPS blocks (I separate HTTP from HTTPS) I have
> 'gzip_vary' off. Am I doing it right?

'gzip_vary' was supposed to be 'gzip'

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Piwik conf file

2013-08-19 Thread Paul N. Pace
I have recently discovered this wonderful includes directive and I am
using it to clean up my server blocks. Doing this has forced me to
evaluate some of my configurations.

I am trying to set up a conf file for Piwik installations and I'm
hoping a second set of of eyes can help:

location /piwik/ {

 location /js/ {
 allow all;
 }

 location ~ /js/.*\.php$ {
 include /etc/nginx/global-configs/php.conf;
 }

location ~ /piwik.php$ {
 include /etc/nginx/global-configs/php.conf;
 }

return 301 https://server_name$request_uri?;
}

Piwik seems trickier than other applications because certain
components must be available through HTTP sessions or else browsers
give scary warnings or don't load the tracking code, but I want to
force the Piwik dashboard to open in HTTPS.

Any comments appreciated.

Thanks!


Paul

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Piwik conf file

2013-08-20 Thread Paul N. Pace
Thank you for your responses!

On Tue, Aug 20, 2013 at 2:44 PM, Francis Daly  wrote:
> On Mon, Aug 19, 2013 at 02:53:36PM -0700, Paul N. Pace wrote:
>
> Hi there,
>
>> I am trying to set up a conf file for Piwik installations and I'm
>> hoping a second set of of eyes can help:
>
> In nginx one request is handled in one location. The rules for selecting
> the location are at http://nginx.org/r/location
>
> Given that information, the following output...
>
>> location /piwik/ {
>>
>>  location /js/ {
>>  allow all;
>>  }
>>
>>  location ~ /js/.*\.php$ {
>>  include /etc/nginx/global-configs/php.conf;
>>  }
>>
>> location ~ /piwik.php$ {
>>  include /etc/nginx/global-configs/php.conf;
>>  }
>>
>> return 301 https://server_name$request_uri?;
>> }
>
> $ sbin/nginx -t
> nginx: [emerg] location "/js/" is outside location "/piwik/" in 
> /usr/local/nginx/conf/nginx.conf:14
> nginx: configuration file /usr/local/nginx/conf/nginx.conf test failed
>
> should not be a surprise.

Yes, I fixed that by changing to /piwik/js/ - is this the right way to
enter it? Here is what the file would read now:

location /piwik/ {

location /piwik/js/ {
allow all;
}

location ~ /piwik/js/.*\.php$ {
include /etc/nginx/global-configs/php.conf;
}

location ~ /piwik/piwik.php$ {
include /etc/nginx/global-configs/php.conf;
}

return 301 https://www.unpm.org$request_uri?;
}

>
> Can you list some of the requests that you want to have handled, and
> how you want them to be handled? That might help someone who knows nginx
> but not piwik to understand what the intention is.
>
> Doing a web search for "site:nginx.org piwik" does seem to point at a
> config file, which seems very different from yours.

Yes, to be honest, that config is beyond my current understanding of
nginx. I reviewed the GitHub entry on the configuration, and it
included instructions to "Move the old /etc/nginx directory to
/etc/nginx.old" which seems a bit extreme to me and more work to
reconfigure for the other settings on my server, not to mention that
their /etc/nginx.conf file, among others, hasn't been updated in 2
years.

I have the Mastering Nginx book, but I still struggle to decode many
example configurations. I especially struggle with regular
expressions.

> Searching for "nginx" on the piwik.org web site also refers to an
> install document.

The nginx FAQ points to the above GitHub page.

>> Piwik seems trickier than other applications because certain
>> components must be available through HTTP sessions or else browsers
>> give scary warnings or don't load the tracking code, but I want to
>> force the Piwik dashboard to open in HTTPS.
>
> These words don't obviously directly translate to your config file
> snippet above. What request is the Piwik dashboard? What request is
> certain components?

The Piwik dashboard is located in /piwiki/index.php, and that is what
always needs to be served securely.

The tracking code for Piwik is loaded with either /piwik/js/index.php,
/piwik/piwik.php, or the /piwik/js/ directory, depending on various
client or server configurations.

Thank you for you help!

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: [DOC] Guide to Nginx + SSL + SPDY

2013-09-09 Thread Paul N. Pace
We had a discussion on this list recently about using gzip in the SSL block.

On Aug 17 Igor Sysoev wrote:
>You have to split the dual mode server section into two server server sections 
>and set "gzip off"
>SSL-enabled on. There is no way to disable gzip in dual mode server section, 
>but if you really
>worry about security in general the server sections should be different.

On Sun, Sep 8, 2013 at 10:50 AM, mex  wrote:
> hi list,
>
> i recently had to dig deeper into nginx + ssl-setup and came up with a
> short documentation on how to setup and run nginx as SSL-Gateway/Offload,
> including SPDY. beside basic configuration this guide covers HSTS-Headers,
> Perfect Forward Secrecy(PFS) and the latest and greatest ssl-based attacks
> like
> CRIME, BEAST, and Lucky Thirteen.
>
> Link:  http://www.mare-system.de/blog/page/1378546400/
>
> the reason for this 321th guide to nginx+ssl: i did not found any valid
> source that covers all aspects, including spdy and hsts, so i made this
> collection and will keep it updated.
>
> comments and critics appreciated
>
>
>
> regards,
>
>
> mex
>
> Posted at Nginx Forum: 
> http://forum.nginx.org/read.php?2,242672,242672#msg-242672
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: [DOC] Guide to Nginx + SSL + SPDY

2013-09-14 Thread Paul N. Pace
Dear Mr. or Ms. mex,

Could you please contact me paulnp...@gmail.com regarding this very
useful guide you have created? I have some specific questions and I
would also like to help out, if I can.

Thanks!


Paul

On Thu, Sep 12, 2013 at 11:36 AM, mex  wrote:
> Hi Valentin,
>
>>
>> In your section about BREACH requirements:
>>
>
> correct(ed)
>
>
> thanx
>
> mex
>
> Posted at Nginx Forum: 
> http://forum.nginx.org/read.php?2,242672,242818#msg-242818
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Passing / denying PHP requests

2013-10-23 Thread Paul N. Pace
Hello-

I am trying to allow only the PHP files required for a given PHP
package to function correctly, then deny access to all other PHP files
to prevent people from snooping on the site's configuration. I have
created the location block, but I'm not so good with regular
expressions and the block is assembled mostly through copy & paste.

location /installdirectory/ {
# from nginx pitfalls page
location ~*
(installdirectory/file_a|installdirectory/file_b|installdirectory/file_c)\.php$
{
include global-configs/php.conf;
}
location ~* installdirectory/.*\.php$ {
deny all;
}
}

If someone can let me know if I am at least on the right track, I
would appreciate it.

Thanks!

Paul

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Passing / denying PHP requests

2013-10-23 Thread Paul N. Pace
Thank you, Francis.

On Wed, Oct 23, 2013 at 9:49 AM, Francis Daly  wrote:
> If you don't like regex, don't use regex.
>
> You probably want another location{} to "deny", and that might be
> "location ~ php$ {}", or it might be that nested inside
>
>   location ^~ /installdirectory/ {}
>
> depending on what else you want in the server config.

"location ~ php$ { deny all; }" does not deny access to any php files,
even when nested in "location ^~ /installdirectory/ {}". The previous
configuration "location ~* installdirectory/.*\.php$ { deny all; }"
did block access to all php files. The ".*\." - is that why one works
and the other doesn't?

> http://nginx.org/r/location for how the one location{} is chosen to
> handle a request.

I read through the nginx.org explanation of the location directive,
but it isn't helping me with understanding how to build the deny
statement.

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Passing / denying PHP requests

2013-10-25 Thread Paul N. Pace
Hi Francis, and again thanks for your help in this matter. I would
have responded sooner but the day I was planning to resolve this issue
I had an unseasonably long power outage.

On Wed, Oct 23, 2013 at 11:41 AM, Francis Daly  wrote:
> On Wed, Oct 23, 2013 at 11:32:33AM -0700, Paul N. Pace wrote:
>> On Wed, Oct 23, 2013 at 9:49 AM, Francis Daly  wrote:
>
> Hi there,
>
>> "location ~ php$ { deny all; }" does not deny access to any php files,
>> even when nested in "location ^~ /installdirectory/ {}". The previous
>> configuration "location ~* installdirectory/.*\.php$ { deny all; }"
>> did block access to all php files. The ".*\." - is that why one works
>> and the other doesn't?
>
> I suspect not.
>
> What "location" lines do you have in the appropriate server{} block in
> your config file?

hese are the location directives that would apply to the /forums/
directory, the /installdirectory/ of the server block that I'm
currently working on. This is an installation of Vanilla, but I'm
trying to come up with a general template that I can apply to other
packages (not a template as in one single file, but a way to apply
directives to each package I use):

server {

location = /forums/index.php {
include global-configs/php.conf;
fastcgi_split_path_info ^(.+\.php)(.*)$;
}

 location ^~ forums/ {
location ~ php$ { deny all;}
}

#location ~* forums/.*\.php$ {
#deny all;
#}

location ~* ^/forums/uploads/.*.(html|htm|shtml|php)$ {
types { }
default_type text/plain;
}

location /forums/ {
try_files $uri $uri/ @forum;
location ~* /categories/([0-9]|[1-9][0-9]|[1-9][0-9][0-9])$ {
return 404;
}
}

location @forum {
rewrite ^/forums/(.+)$ /forums/index.php?p=$1 last;
}
}


>
> What one request do you make?
>
> From that, which one location{} block is used to handle this one request?
>
>> > http://nginx.org/r/location for how the one location{} is chosen to
>> > handle a request.
>>
>> I read through the nginx.org explanation of the location directive,
>> but it isn't helping me with understanding how to build the deny
>> statement.
>
> Do whatever it takes to have these requests handled in a known location{}
> block.
>
> Put the config you want inside that block.

Do you mean that I should single out each php file and create a
location block to deny access the file?

> If you enable the debug log, you will see lots of output, but it will tell
> you exactly which block is used, if it isn't clear from the "location"
> documentation.

I navigated to /forums/login.php. Here seems to be the pertinent part
of error.log:

2013/10/25 21:39:19 [debug] 2771#0: *1 test location: "forums/"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: "/"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: "phpmyadmin/"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: "forums"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: "/"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: "index.php"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: ~
"/categories/([0-9]|[1-9][0-9]|[1-9][0-9][0-9])$"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: ~ "/\."
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: ~ "~$"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: ~ "piwik/config/"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: ~ "piwik/core/"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: ~
"(piwik/index|piwik/piwik|piwik/js/index)\.php$"
2013/10/25 21:39:19 [debug] 2771#0: *1 test location: ~
"^/forums/uploads/.*.(html|htm|shtml|php)$"
2013/10/25 21:39:19 [debug] 2771#0: *1 using configuration "/forums/"

I'm not sure which location block is "/forums/". The login.php file is
served as a downloadable file.

Thanks!


Paul

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Maintenance mode for all but my ip

2013-12-07 Thread Paul N. Pace
Did you try putting 'allow ;' above 'return...' line in if 
block?

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Service restart testing nginx.conf

2014-02-21 Thread Paul N. Pace
It seems like way back in the olden days, when I restarted nginx ('sudo
service nginx restart'), if there was a configuration issue in nginx.conf,
I would get a warning telling me such and, IIRC, nginx would boot using the
last known valid configuration.

It doesn't seem to happen that way any more. Did I unknowingly change a
configuration setting, or was there a change to nginx?

Thanks!


Paul
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Just looking for guide to understand query strings

2014-05-29 Thread Paul N. Pace
My logs have been inundated with hits at example.com/?anything, though in
the actual logs 'anything' is a very long string of characters.

Log entry:

"GET /?anything HTTP/1.1" 200 581 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64;
Trident/7.0; rv:11.0) like Gecko"

(note there is no location for 'anything')

I didn't even know this was possible. I'm still not sure what nginx is
doing when it processes this request. If someone could help me out, even
just point me to a good explanation of what is happening, that would be
great.
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

error running shared postrotate script

2015-05-03 Thread Paul N. Pace
Ever since upgrading to 1.8.0 I get the following report from Cron:

/etc/cron.daily/logrotate:
error: error running shared postrotate script for '/var/log/nginx/*.log '
error: error running shared postrotate script for '/var/
www.example.com/logs/*.log '
run-parts: /etc/cron.daily/logrotate exited with return code 1

Contents of /etc/logrotate.d/nginx:

/var/log/nginx/*.log {
weekly
missingok
rotate 52
compress
delaycompress
notifempty
create 0640 www-data adm
sharedscripts
prerotate
if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
run-parts /etc/logrotate.d/httpd-prerotate; \
fi; \
endscript
postrotate
invoke-rc.d nginx rotate >/dev/null 2>&1
endscript
}

/var/www/example.com/logs/*.log {
   daily
   missingok
   rotate 36500
   compress
   delaycompress
   notifempty
   create 0640 www-data adm
   sharedscripts
   prerotate
   if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
   run-parts /etc/logrotate.d/httpd-prerotate; \
   fi; \
   endscript
   postrotate
invoke-rc.d nginx rotate >/dev/null 2>&1
endscript
}

Their are numerous .../example.com/... directories in the config file, but
I have had this configuration for ages, and the update to 1.8.0 did not
attempt to make any changes to this file.

There is a bug report (dated 2015-05-01) at Launchpad that appears
identical to my issue:
https://bugs.launchpad.net/nginx/+bug/1450770

Are there any workarounds or configuration changes to correct this issue?

Thanks!


Paul

System configuration:

Ubuntu 12.0.4.5 LTS (GNU/Linux 3.2.0-82-virtual x86_64)

Nginx installed from PPA
https://launchpad.net/~nginx/+archive/ubuntu/stable

# nginx -V
built with OpenSSL 1.0.1 14 Mar 2012
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fPIE -fstack-protector
--param=ssp-buffer-size=4 -Wformat -Wformat-security
-Werror=format-security -D_FORTIFY_SOURCE=2'
--with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now'
--prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf
--http-log-path=/var/log/nginx/access.log
--error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock
--pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body
--http-fastcgi-temp-path=/var/lib/nginx/fastcgi
--http-proxy-temp-path=/var/lib/nginx/proxy
--http-scgi-temp-path=/var/lib/nginx/scgi
--http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit
--with-ipv6 --with-http_ssl_module --with-http_stub_status_module
--with-http_realip_module --with-http_auth_request_module
--with-http_addition_module --with-http_dav_module --with-http_geoip_module
--with-http_gunzip_module --with-http_gzip_static_module
--with-http_image_filter_module --with-http_spdy_module
--with-http_sub_module --with-http_xslt_module --with-mail
--with-mail_ssl_module
--add-module=/build/buildd/nginx-1.8.0/debian/modules/nginx-auth-pam
--add-module=/build/buildd/nginx-1.8.0/debian/modules/nginx-dav-ext-module
--add-module=/build/buildd/nginx-1.8.0/debian/modules/nginx-echo
--add-module=/build/buildd/nginx-1.8.0/debian/modules/nginx-upstream-fair
--add-module=/build/buildd/nginx-1.8.0/debian/modules/ngx_http_substitutions_filter_module
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx