Nginx as reverse proxy - proxy_ssl_x questions

2023-11-18 Thread Mark
Hello there.

Having a proxy directive like;

location / {
proxy_pass http://10.10.10.4:4020;
...

I wonder when using proxy_pass http://... (not httpS),
are these directives effective, under the proxy_pass?

proxy_ssl_name $host;
proxy_ssl_server_name on;
proxy_ssl_session_reuse off;

Or would they work ONLY if proxy_pass is pointed to an "https://"?

Best wishes,
Regards.
Mark.
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: Nginx as reverse proxy - proxy_ssl_x questions

2023-11-19 Thread Mark
Hello Mr. Maxim, thank you very much for your reply.

Things are much clearer now, thanks!

One last question;

I have implemented nginx as a reverse proxy with TLS termination on my
FreeBSD host machine, and another nginx instance running in my jail, at
10.10.10.2.

So, the host machine does the reverse proxying and SSL.

Before I open my website to the public for production (a WordPress website),
could you please kindly have a look at my reverse proxy configuration here;

http://paste.nginx.org/b8

So that you might wish to add some suggestions, or perhaps I still have a
misconfigured/unneeded directive there?
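
(For readers for whom the paste link no longer resolves: a minimal sketch of
such a host-side TLS-terminating proxy, with the server name, certificate
paths and upstream port invented here - only the jail address 10.10.10.2 is
taken from the thread.)

server {
    listen 443 ssl;
    server_name example.com;   # placeholder

    ssl_certificate     /usr/local/etc/ssl/example.com/fullchain.pem;  # placeholder paths
    ssl_certificate_key /usr/local/etc/ssl/example.com/privkey.pem;

    location / {
        # plain-http upstream inside the jail; proxy_ssl_* directives
        # would be ignored here, since the scheme is http://
        proxy_pass http://10.10.10.2:8080;
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}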

Thanks once again,
Regards.
Mark.


Maxim Dounin wrote the following on Sun, 19 Nov 2023 at 03:05:

> Hello!
>
> On Sat, Nov 18, 2023 at 01:54:21PM +0300, Mark wrote:
>
> > Hello there.
> >
> > Having a proxy directive like;
> >
> > location / {
> > proxy_pass http://10.10.10.4:4020;
> > ...
> >
> > I wonder when using proxy_pass http://... (not httpS),
> > are these directives effective, under the proxy_pass?
> >
> > proxy_ssl_name $host;
> > proxy_ssl_server_name on;
> > proxy_ssl_session_reuse off;
> >
> > Or would they work ONLY if proxy_pass is pointed to an "https://"?
>
> The "proxy_ssl_*" directives define configuration for SSL
> proxying.  That is, corresponding values are only used when
> proxy_pass is used with the "https" scheme.
>
> --
> Maxim Dounin
> http://mdounin.ru/
> ___
> nginx mailing list
> nginx@nginx.org
> https://mailman.nginx.org/mailman/listinfo/nginx
>
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: NGINX to Join F5

2019-03-12 Thread Mark Moseley
On Mon, Mar 11, 2019 at 1:16 PM Igor Sysoev  wrote:

> Today is an important day for NGINX. We signed an agreement to join to F5.
>
> The NGINX team and I believe this is a significant milestone for our
> open source technology, community, and the company.
>
> F5 is committed to our open source mission. There will be no changes
> to the name, projects, their licenses, development team, release
> cadence, or otherwise. In fact, F5 will increase investment to
> ensure NGINX open source projects become even stronger.
>
> Our CEO, Gus Robertson, wrote a blog to explain more:
> https://www.nginx.com/blog/nginx-joins-f5/
>
>
Two of the favorite things in my toolbox, nginx and BigIP, under the same
roof :)
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: LiteSpeed 5.4 vs Nginx 1.16 benchmarks

2019-08-18 Thread Mark Mielke
Any idea how they did what? Misconfigure Nginx and use an obsolete distro
version of Nginx? 😁


On Sat., Aug. 17, 2019, 1:17 p.m. Christos Chatzaras, 
wrote:

> Today I read this post:
>
> http://www.webhostingtalk.com/showthread.php?t=1775139
>
> In their changelog (
> https://www.litespeedtech.com/products/litespeed-web-server/release-log )
> I see that they did changes related to HTTP/2.
>
> Any idea how they did it?
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

lzw compression

2019-08-23 Thread Mark Lybarger
Hi,

I have embedded clients using my REST api (HTTP POST/GET etc).  We want to
be able to compress the client data over the wire so that there are fewer
packets.  Apparently, in some markets, people still pay by the MB.  The
embedded client can only support LZW compression due to available
memory/space on the device.

I see the option to enable gzip compression, but that's not going to work
for me.  Any help or tips would be appreciated on possible solutions for
me.  I'd like to transparently decompress the traffic before it gets to my
application layer.

Thanks!
-mark-
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

rewrite ssl proxy retain query string params

2020-08-13 Thread Mark Lybarger
I'm using rewrite to change some tokens in the url path, and am using ssl
proxy to send traffic to a downstream server.

if i post to https://myhost/start/foo/213/hello, the request gets to
https://client-service-host/client/service/hello/213 using the needed
certificate. great.

my question is, how do i retain query string parameters in this example so
that if i post (or get) using query strings, they also get used?

https://myhost/start/foo/213/hello?name=world
https://myhost/start/foo/213/hello?name=world&greet=full

thanks!

location ~ /start/(.*)/(.*)/hello {
# $1 is used to pick which cert to use. removed by proxy pass.
rewrite /start/(.*)/(.*)/(.*)? /client/service/$1/$3/$2 ;
}

location /client/service/foo/ {
proxy_buffering off;
proxy_cache off;
proxy_ssl_certificate
/etc/ssl/certs/client-service-foo-cert.pem;
proxy_ssl_certificate_key /etc/ssl/certs/client-service-foo.key;
proxy_pass https://client-service-host/client/service/;
proxy_ssl_session_reuse on;
proxy_set_header X-Proxy true;
proxy_set_header Host $proxy_host;
proxy_ssl_server_name on;
proxy_set_header X-Real-IP $remote_addr;
}
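
(A note for later readers: rewrite matches against the URI only, without the
query string, and nginx appends the original arguments to the rewritten URI
automatically unless the replacement ends with a question mark - so the
configuration above should already pass ?name=world through. A sketch of both
the implicit and the explicit form, using the same pattern as above:)

location ~ /start/(.*)/(.*)/hello {
    # implicit: the original query string is appended automatically
    rewrite /start/(.*)/(.*)/(.*)? /client/service/$1/$3/$2;

    # explicit equivalent: carry the args yourself, and end with "?" so
    # nginx does not append them a second time
    # rewrite /start/(.*)/(.*)/(.*)? /client/service/$1/$3/$2$is_args$args?;
}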
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

transforming static files

2020-08-31 Thread Mark Lybarger
i have a bunch of files on a local filesystem (ok, it's NAS) that I serve
up using an nginx docker image, just pointing the doc root to the system i
want to share.

that's fine for my xml files.  the users can browse and see them on the
filesystem.

i also have some .bin files that can be converted using a custom java api.
how can i easily hook the bin files to be processed through a command on the
system?

java -jar MyTranscoder.jar myInputFile.bin
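
(nginx itself cannot run an external command per request; one common approach -
sketched here with an invented backend port - is to route the .bin requests to
a small local service that wraps the transcoder and streams back the result:)

location ~ \.bin$ {
    # hypothetical local HTTP wrapper around "java -jar MyTranscoder.jar ..."
    proxy_pass http://127.0.0.1:8081;
    proxy_buffering off;
}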
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

we need an explicit max_fails flag to allow origin error through?

2016-09-15 Thread Mark McDonnell
Hello,

We have an upstream that we know is serving a 500 error.

We've noticed that NGINX is serving up a nginx specific "502 Bad Gateway"
page instead of showing the actual Apache origin error that we'd expect to
come through.

To solve this we've added `max_fails=0` onto the upstream server (there is
only one server inside the upstream block) and now the original apache
error page comes through.
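
(A sketch of the kind of configuration being described, with names and
addresses invented here:)

upstream apache_backend {
    # single backend; adding max_fails=0 changed the observed behaviour
    server 10.0.0.10:8080 max_fails=0;
}

server {
    location / {
        proxy_pass http://apache_backend;
        # off by default: upstream error pages should be passed through as-is
        proxy_intercept_errors off;
    }
}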

I'm not sure why that is for two reasons:


   1. because max_fails should have no effect on the behaviour of something
   like proxy_intercept_errors (which is disabled/off by default, meaning any
   errors coming from an upstream should be proxied 'as is' to the client)

   2. because max_fails should (according to nginx's docs) be a no-op... "If
   there is only a single server in a group, max_fails, fail_timeout and
   slow_start parameters are ignored, and such a server will never be
   considered unavailable"

Does anyone have any further insights here?

Thanks.

M.

-- 

Mark McDonnell | BuzzFeed | Senior Software Engineer | @integralist
<https://twitter.com/integralist>
https://keybase.io/integralist 40 Argyll Street, 2nd Floor, London, W1F 7EB
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

What is "seconds with milliseconds resolution"

2016-09-21 Thread Mark McDonnell
Hello,

I'm not sure I really understand the `msec` embedded variable.

I'm getting the value back as `1474452771.178`

It's described as "seconds with milliseconds resolution", but I'm not sure
what that really means (maths isn't a strong skill for me).

How do I convert the number into seconds?
So I can get a better idea of the relation of this value.

Thanks

M.
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: What is "seconds with milliseconds resolution"

2016-09-21 Thread Mark McDonnell
Thanks Santiago, that actually makes perfect sense.

Think I just needed the words read back to me in a different order or
something lol ¯\_(ツ)_/¯

On Wed, Sep 21, 2016 at 11:39 AM, Santiago Vila  wrote:

> On Wed, Sep 21, 2016 at 11:19:43AM +0100, Mark McDonnell wrote:
>
> > I'm not sure I really understand the `msec` embedded variable.
> >
> > I'm getting the value back as `1474452771.178`
>
> That would be the number of seconds since the "epoch" (1970-01-01 00:00
> UTC),
> similar to "date +%s" but more accurate.
>
> > It's described as "seconds with milliseconds resolution", but I'm not
> sure
> > what that really means (maths isn't a strong skill for me).
>
> It just means the number is a number of seconds (i.e. the second is
> the unit of measure), and you can expect the number to have three
> decimal places after the point.
>
> > How do I convert the number into seconds?
>
> The number is already in seconds :-)
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
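
(For reference, the value is a Unix timestamp; assuming GNU date, it converts
like this:)

$ date -u -d @1474452771.178
Wed Sep 21 10:12:51 UTC 2016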



-- 

Mark McDonnell | BuzzFeed | Senior Software Engineer | @integralist
<https://twitter.com/integralist>
https://keybase.io/integralist 40 Argyll Street, 2nd Floor, London, W1F 7EB
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

nginx on Windows

2018-07-10 Thread Kevin Mark
Hello all,

I was wondering if there was any up-to-date documentation about running nginx 
on Windows in a production environment. The official documentation here 
(https://nginx.org/en/docs/windows.html) notes some pretty serious limitations 
but its last update was 18 months ago and its last major revision was in 2012. 
For instance, is the 1024 connection limit still around and would nginx on 
Windows still be characterized as beta software?

Thanks,
Kevin Mark

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: 502 bad gateway error with php5-fpm on Debian 7

2013-02-21 Thread Mark Alan
On Thu, 21 Feb 2013 12:07:41 +0100, GASPARD Kévin
 wrote:

> To be honest I don' know. When I've setup this configuration (more
> than 1 year ago I think)

It seems that you are trying to force a non Debian directory
structure into a Debian one.

Show us the result of:

nginx -V 2>&1|sed 's,--,\n--,g'

find /etc/nginx/ -name '*.conf'|xargs -r grep -v '^\s*\(#\|$\)'

find /etc/nginx/sites-*/*|xargs -r grep -v '^\s*\(#\|$\)'

M.

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: nginx + php5-fpm on Debian

2013-02-21 Thread Mark Alan
On Thu, 21 Feb 2013 14:07:45 +0100, GASPARD Kévin
 wrote:
> > nginx -V 2>&1|sed 's,--,\n--,g'
> nginx version: nginx/1.2.1

Ok, this seems pretty standard for Debian.

> > find /etc/nginx/ -name *.conf|xargs -r grep -v '^\s*\(#\|$\)'
> /etc/nginx/conf.d/koshie-island.koshie.fr.conf:server {
> /etc/nginx/conf.d/koshie-island.koshie.fr.conf:
> listen

To get out of a hole, first you must stop digging.

So, in order to regain control of your Nginx under Debian:

1. Clean /etc/nginx/conf.d/
  sudo mkdir /etc/nginx/conf.d-backup
  sudo mv /etc/nginx/conf.d/* /etc/nginx/conf.d-backup/

2. Simplify your /etc/nginx/sites-available/default
server {
  listen 80 default_server;
  server_name_in_redirect off;
  return 444;
}
server {
  listen 443 default_server ssl;
  server_name_in_redirect off;
  ssl_certificate /etc/ssl/certs/dummy-web.crt;
  ssl_certificate_key /etc/ssl/private/dummy-web.key;
  return 444;
}

3. Create simpler domain config files,
and put them inside /etc/nginx/sites-available/:

# /etc/nginx/sites-available/koshiefr # for http only
server {
  listen 80;
  server_name www.koshie.fr; # may also add IP here
  return 301 $scheme://koshie.fr$request_uri; # 301/perm 302/temp
}
server {
  listen 80;
  server_name koshie.fr;
  root /var/www/koshiefr; # avoid non-alphanumeric chars here & remove trailing /
  #client_max_body_size  8M;
  #client_body_buffer_size 256K;
  index index.php /index.php;
  location ~ \.php$ {
include fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
   }
}

# /etc/nginx/sites-available/koshiefrs # for https only
server {
 listen 443; # ssl not needed here
 server_name www.koshie.fr; # may also add IP here
 return 301 $scheme://koshie.fr$request_uri; # 301=perm, 302=temp
}
server {
  listen 443 ssl;
  server_name koshie.fr;
  root /var/www/koshiefr; # avoid non-alphanumeric chars here
  #client_max_body_size  8M;
  #client_body_buffer_size 256K;
  ssl_certificate /etc/ssl/certs/dummy-web.crt;
  ssl_certificate_key /etc/ssl/private/dummy-web.key;
  location ~ \.php$ {
include fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
   }
}

4. link files into place:

sudo ln -svf /etc/nginx/sites-available/default \
 /etc/nginx/sites-enabled/

sudo ln -svf /etc/nginx/sites-available/koshiefr \
 /etc/nginx/sites-enabled/

sudo ln -svf /etc/nginx/sites-available/koshiefrs \
 /etc/nginx/sites-enabled/

5. restart nginx:
a) again keep it simple (I don't trust Debian's nginx restart)
  sudo /etc/init.d/nginx stop
  sudo /etc/init.d/nginx start
  sudo /etc/init.d/nginx status

b) OR, if the server is 'in production', use alternative 'restart'
trying to not disturb the established connections:

  pgrep nginx && sudo kill -s USR2 $(cat /var/run/nginx.pid)
  pgrep nginx >/dev/null && sudo kill -s QUIT \
 $(cat /var/run/nginx.pid.oldbin)
  sleep .5
  pgrep nginx || sudo /etc/init.d/nginx start

# check status
  sudo /usr/sbin/nginx -t && /etc/init.d/nginx status

6. regarding PHP-FPM:
a) DO install at least:
sudo apt-get install php5-fpm php5-suhosin php-apc
and, if needed:
# sudo apt-get install php5-mysql php5-mcrypt php5-gd

A common simple PHP config could include:

grep -v '^\s*\(;\|$\)' /etc/php5/fpm/*.conf

[global]
pid = /var/run/php5-fpm.pid
error_log = /var/log/php5-fpm.log
include=/etc/php5/fpm/pool.d/*.conf

grep -v '^\s*\(;\|$\)' /etc/php5/fpm/pool.d/*.conf

[www]

user = www-data
group = www-data
listen = 127.0.0.1:9000
pm = dynamic
pm.max_children = 10
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6
pm.max_requests = 384
request_terminate_timeout = 30s
chdir = /var/www

# restart it
  pgrep php5-fpm && sudo /etc/init.d/php5-fpm restart
  sleep .5
  pgrep php5-fpm || sudo /etc/init.d/php5-fpm start

Because of the above 'chdir = /var/www' and 'group = www-data', files
inside /var/www/ (for instance, those inside /var/www/koshiefr/)
should be owned (and readable, or read/writeable) by group www-data.

REMEMBER: 
  - keep it simple,
  - do trust nginx defaults as they usually work rather well,
  - test each config file well and restart/reload its parent app (nginx
or php) before doing another config change.

And, if you can live with a lighter Nginx, you can try my own
extra-light nginx builds from: https://launchpad.net/~malan/+archive/dev
  sudo dpkg -i nginx-common*.deb
  sudo dpkg -i nginx-light*.deb

Regards,

M.

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Default error_page for multiple vhosts

2013-02-22 Thread Mark Alan
On Fri, 22 Feb 2013 09:32:33 +0100, Alexander Nestorov
 wrote:
> I'm trying to set a default error_page for my entire nginx server (as
> http {
> error_page 404 /var/www/default/404.html;
> server {
> root /var/www/mydomain.com/;
> }
> }
> Is there any other way I could achieve what I'm trying?

What about soft linking it into wherever you want it?

# in the OS
  ln -s /var/www/default/404.html /var/www/mydomain.com/

#in Nginx config
  error_page 404 /404.html;

Regards,

M.

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Problem with auth_basic + proxy_pass + transmission-daemon

2013-02-23 Thread Mark Alan
Hello list,

While using nginx 1.3.12 + transmission-daemon 2.77 + Ubuntu 12.04,

# /etc/transmission-daemon/settings.json
...
"rpc-bind-address": "127.0.0.1",
"rpc-port": 9091,
"rpc-url": "/transmission/",
...

ls -l /etc/nginx/.htpasswdtrans 
   -rw-r- 1 root www-data 64 ... /etc/nginx/.htpasswdtrans

Trying to browse to:  https://example.localdomain/transmission

WORKS IF:
  location /transmission {
proxy_pass http://127.0.0.1:9091/transmission;
  }

DOES NOT WORK IF:
  location /transmission {
auth_basic "Restricted Area";
auth_basic_user_file .htpasswdtrans;
proxy_pass http://127.0.0.1:9091/transmission;
  }

AND GIVES THESE ERRORS:
==> /var/log/nginx/access.log <==
  192.168.0.70 - - [23/Feb/2013:20:38:19 +0000] "POST /transmission/rpc
HTTP/1.1" 302 158 "https://example.localdomain/transmission/web/"
"Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:19.0) Gecko/20100101
Firefox/19.0" 192.168.0.70 - - [23/Feb/2013:20:38:19 +0000]
"{\x22method\x22:\x22session-get\x22}" 400 170 "-" "-"
  [error] 6012#0: *799 no user/password was provided for basic
authentication, client: 192.168.0.70, server: example.localdomain,
request: "GET /transmission/web/style/transmission/images/logo.png
HTTP/1.1", host: "example.localdomain", referrer:
"https://example.localdomain/transmission/web/style/transmission/common.css";

==> /var/log/nginx/error.log <==
  2013/02/23 20:38:19 [error] 6012#0: *799 no user/password was provided
for basic authentication, client: 192.168.0.70, server:
example.localdomain, request:
"GET /transmission/web/style/transmission/images/logo.png HTTP/1.1",
host: "example.localdomain", referrer:
"https://example.localdomain/transmission/web/style/transmission/common.css";

Note:
Adding the following to the 'location /transmission', or to the
parent server {} did not help:
  proxy_redirect off;
  proxy_set_header Host $http_host;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Real-IP  $remote_addr;


Any ideas on how to make 'auth_basic' work?

Thank you.

M.

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: "nginx does not suck at ssl"

2013-03-11 Thread Mark Alan
On Sat, 9 Mar 2013 21:55:13 -0800, Grant  wrote:
> After reading "nginx does not suck at ssl":
> 
> http://matt.io/entry/ur
> 
> I'm using:
> 
> ssl_ciphers
> ALL:!aNULL:!ADH:!eNULL:!MEDIUM:!LOW:!EXP:!kEDH:RC4+RSA:+HIGH;

Some of us use the following to mitigate BEAST attacks:
ssl_ciphers ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:RC4:HIGH:!aNULL:!MD5:!EDH;

r.

M.

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Need some help with rewrite rule translation

2013-06-14 Thread Mark Alan
On Fri, 14 Jun 2013 09:58:12 +0200, mailinglis...@simonhoenscheid.de
wrote:

> Both solutions look interesting, I will have a look on it.

We have been successfully using the "return no content = 204" option:

location = /favicon.ico { access_log off; log_not_found off; expires
30d; try_files /sites/$server_name/files/favicon.ico $uri =204; }

Meaning that try_files first looks in a directory where we usually keep
favicon.ico and logos (.png, .jpg, etc.), then it tries the user
provided $uri and, if it does not find any, throws out a "no content"
code (204).

There is no need to load one more module (Empty Gif Module) just to do
that.

M.

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Ignore broken SSL servers in config

2013-06-19 Thread Mark Moseley
TL;DR:
Any nginx setting to say 'if a vhost's ssl settings are broken, skip it and
don't fail to start' ?

I've certainly RTFM'd this and peered at the source, but I figured I might
as well throw it out there, in case there's some hidden setting I've missed.

I'm building a reverse proxy config for thousands of SSL virtual hosts, to
replace an apache solution.

It very often happens that someone in support will make a mistake with
regards to certs/keys. E.g. updating someone's SSL cert but actually
putting the CSR there instead.

In apache, since the config is being generated out of mod_perl, I can get
around this situation by having mod_perl do a modulus check on the cert and
key and skip the vhost if they don't match. In my case, I'd far prefer to
have a missing vhost and have the other 1000 sites working, than all down.
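
(The modulus check mentioned above is presumably something along these lines,
with file names invented here:)

# a cert and key belong together when their public-key moduli match
openssl x509 -noout -modulus -in example.com.crt | openssl md5
openssl rsa  -noout -modulus -in example.com.key | openssl md5
# if the two digests differ, skip emitting that vhost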

And, yes, I realize in default apache, it'd just fail to load. And also,
yes, I realize asking something to ignore broken configs is a bit
non-standard :)

Since I don't have mod_perl at my fingertips in nginx to perform a similar
trick, the startup will just fail.

So I was curious if there's some obscure setting to tell nginx "if a vhost
fails to loads its cert properly (or potentially any other vhost setting),
skip it and continue loading the rest"?

If such a thing did exist, I imagine that the configtest would have to turn
errors for that vhost into warnings as well.

My guess is obviously 'no', but I figured asking would only cost me the time
it takes to compose an email.
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Ignore broken SSL servers in config

2013-06-20 Thread Mark Moseley
On Thu, Jun 20, 2013 at 2:00 AM, Maxim Dounin  wrote:

> Hello!
>
> On Wed, Jun 19, 2013 at 11:06:19AM -0700, Mark Moseley wrote:
>
> > TL;DR:
> > Any nginx setting to say 'if a vhost's ssl settings are broken, skip it
> and
> > don't fail to start' ?
> >
> > I've certainly RTFM'd this and peered at the source, but I figured I
> might
> > as well throw it out there, in case there's some hidden setting I've
> missed.
> >
> > I'm building a reverse proxy config for thousands of SSL virtual hosts,
> to
> > replace an apache solution.
> >
> > It very often happens that someone in support will make a mistake with
> > regards to certs/keys. E.g. updating someone's SSL cert but actually
> > putting the CSR there instead.
> >
> > In apache, since the config is being generated out of mod_perl, I can get
> > around this situation by having mod_perl do a modulus check on the cert
> and
> > key and skip the vhost if they don't match. In my case, I'd far prefer to
> > have a missing vhost and have the other 1000 sites working, than all
> down.
> >
> > And, yes, I realize in default apache, it'd just fail to load. And also,
> > yes, I realize asking something to ignore broken configs is a bit
> > non-standard :)
> >
> > Since I don't have mod_perl at my fingertips in nginx to perform a
> similar
> > trick, the startup will just fail.
> >
> > So I was curious if there's some obscure setting to tell nginx "if a
> vhost
> > fails to loads its cert properly (or potentially any other vhost
> setting),
> > skip it and continue loading the rest"?
> >
> > If such a thing did exist, I imagine that the configtest would have to
> turn
> > errors for that vhost into warnings as well.
> >
> > My guess is obviously 'no', but I figured asking woud only cost me the
> time
> > it takes to compose an email.
>
> In nginx, there are two mechanism to deal with configuration
> errors:
>
> 1) On configuration reload nginx refuses to load a new
> configuration if there are errors (and continues to work with
> previously loaded correct configuration).
>
> 2) There is "nginx -t" to test configs.
>
> By using the two you are safe from a situation when a typo
> in configuration takes a service down (well, mostly: one always
> can take it down with a valid config).  There is no kludge to
> magically apply only parts of a configuration though.
>
>
Yup, that's what I figured. Thanks for confirming.
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Root ignored for "location = /"?

2014-02-06 Thread Mark James

Hello,

I want the index.html file in a particular directory to only be served when the 
domain's root URI is requested.

Using the config

server_name example.com;
index index.html;
location = / {
  root path/to/dir;
}

a request to example.com results in index.html in the Nginx default root 
"/html" directory being served.

The same thing happens with a trailing slash on the root, or when I substitute 
a trailing-slash alias directive.

If I use an alias directive without a trailing slash I get a 403 error

   directory index of "path/to/dir" is forbidden.

There are no problems if I instead use "location /".

Can anyone suggest a reason or a resolution?

Thanks.

Mark

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Root ignored for "location = /"?

2014-02-06 Thread Mark James

On 07/02/14 02:23, Valentin V. Bartenev wrote:

The reason is documented: http://nginx.org/r/index

"It should be noted that using an index file
  causes an internal redirect ..."


Thanks very much for this Valentin. I've been stuck on this for a while. The solution was to replace the "location = /" 
block with a "location = /index.html" block.



___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Issue from forum: SSL: error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac

2014-04-29 Thread Mark Moseley
I'm running into a lot of the same error as was reported in the forum at:
http://mailman.nginx.org/pipermail/nginx-devel/2013-October/004385.html

> SSL: error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad
record mac

I've got an nginx server doing front-end SSL, with the upstream also over
SSL and also nginx (fronting Apache). They're all running 1.5.13 (all
Precise 64-bit), so I can goof with various options like ssl_buffer_size.
These are running SSL-enabled web sites for my customers.

I'm curious if there is any workaround for this besides patching openssl,
as mentioned a couple of weeks ago in http://trac.nginx.org/nginx/ticket/215

In the wake of heartbleed, I'm not super excited about rolling my own
openssl/libssl packages (and straying from easy updates), but I also need
to put a lid on these SSL errors. I've also not tested yet to verify that
the openssl patch fixes my issue (wanted to check here first).

Like the forum notes, they seem to happen just in larger files (I've not
dug extensively, but every one that I've seen is usually at least a 500k
file). I've also noticed that if I request *just* the file, it seems to
succeed every time. It's only when it's downloading a number of other files
that it seems to occur. On a lark, I tried turning off front-end keepalives
but that didn't make any difference. I've been playing with the
ssl_buffer_size on both the frontend (which is where the errors show up)
and the upstream servers to see if there was a magic combination, but no
combo makes things happy.

Am I doomed to patch openssl?

Thanks!
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Issue from forum: SSL: error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac

2014-04-29 Thread Mark Moseley
On Tue, Apr 29, 2014 at 4:36 PM, Lukas Tribus  wrote:

> Hi Mark,
>
>
> > I'm running into a lot of the same error as was reported in the forum
> > at:
> http://mailman.nginx.org/pipermail/nginx-devel/2013-October/004385.html
> >
> >> SSL: error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or
> > bad record mac
> >
> > I've got an nginx server doing front-end SSL, with the upstream also
> > over SSL and also nginx (fronting Apache). They're all running 1.5.13
> > (all Precise 64-bit), so I can goof with various options like
> > ssl_buffer_size. These are running SSL-enabled web sites for my
> > customers.
> >
> > I'm curious if there is any workaround for this besides patching
> > openssl, as mentioned a couple of weeks ago
> > in http://trac.nginx.org/nginx/ticket/215
>
>
> A patch was committed to openssl [1] and backported to the openssl-1.0.1
> stable branch [2], meaning that the next openssl release (1.0.1h) will
> contain the fix.
>
> You can:
> - cherry-pick the fix and apply it on 1.0.1g
> - use the 1.0.1 stable git branch
> - asking your openssl package maintainer to backport the fix (its security
>   relevant, see CVE-2010-5298 [3])
>
> The fix is already in OpenBSD [4], Debian and Ubuntu will probably ship the
> patch soon, also see [5] and [6].
>
>

Oh, cool, that's good news that it's upstream then. Getting the patch to
apply is a piece of cake. I was more worried about what would happen for
the next libssl update. Hopefully Ubuntu will pick that update up. Thanks!
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Issue from forum: SSL: error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac

2014-05-07 Thread Mark Moseley
On Wed, Apr 30, 2014 at 12:55 AM, Lukas Tribus  wrote:

> Hi,
>
>
> >> The fix is already in OpenBSD [4], Debian and Ubuntu will probably ship
> the
> >> patch soon, also see [5] and [6].
> >
> > Oh, cool, that's good news that it's upstream then. Getting the patch
> > to apply is a piece of cake. I was more worried about what would happen
> > for the next libssl update. Hopefully Ubuntu will pick that update up.
> > Thanks!
>
> FYI, debian already ships this since April, 17th:
> https://lists.debian.org/debian-security-announce/2014/msg00083.html
>
> Ubuntu not yet, as it seems.
>


Looks like it's hit Ubuntu now. Since I've updated, I've not seen a single
one of these errors, which is great. I was seeing at least a handful per
hour before, so that's a pretty good sign.
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: ssl proxys https web server is very slow

2014-06-20 Thread Mark Moseley
On Fri, Jun 20, 2014 at 5:20 AM, Maxim Dounin  wrote:

> Hello!
>
> On Fri, Jun 20, 2014 at 10:51:38AM +0200, Yifeng Wang wrote:
>
> > Hi, It's my first time using NGINX to proxy other web servers. I set a
> > variable in location, this variable may be gotten in cookie or args. if
> > I use it directly like "proxy_pass https://$nodeIp2;", it will get the
> > response for a long time. but if I hardcode like "proxy_pass
> > https://147.128.22.152:8443;" it works normally. Do I need to set more
> > cofiguration parameters to solve this problem.Below is the segment of my
> > windows https configuration.
> >
> > http {
> > ...
> > server {
> >listen   443 ssl;
> >server_name  localhost;
> >
> >ssl_certificate  server.crt;
> >ssl_certificate_key  server.key;
> >
> >location /pau6000lct/ {
> > set $nodeIp 147.128.22.152:8443;
> > proxy_pass https://$nodeIp;
>
> Use of variables in the proxy_pass, in particular, implies that
> SSL sessions will not be reused (as upstream address is not known
> in advance, and there is no associated storage for an SSL
> session).  This means that each connection will have to do full
> SSL handshake, and this is likely the reason for the performance
> problems you see.
>
> Solution is to use proxy_pass without variables, or use
> preconfigured upstream{} blocks instead of ip addresses if you
> have to use variables.
>

So to prevent the heart attack I almost just had, can you confirm how I
interpret that last statement:

If you define your upstream using "upstream upstream_name etc" and then use
a variable indicating the name of the upstream in proxy_pass statement,
that will *not* cause SSL sessions to not be reused. I.e. proxy_pass with a
variable indicating upstream would not cause a performance issue.

Is that correct?
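
(For context, a minimal sketch of the preconfigured upstream{} approach Maxim
describes, reusing the address from the thread; the upstream name is invented
here:)

upstream pau_backend {
    server 147.128.22.152:8443;
}

location /pau6000lct/ {
    # the upstream is known at configuration time, so SSL sessions
    # to it can be cached and reused
    proxy_pass https://pau_backend;
}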
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Google dumps SPDY in favour of HTTP/2, any plans for nginx?

2015-03-18 Thread Mark Mielke
I think the ability to "push" content, and prioritize requests are examples
of capabilities that might require intelligence upstream, and therefore a
requirement to proxy HTTP/2 upstream. However, I expect much of this is
still theoretical at this point, and until there are actually upstream
servers that are providing effective capabilities here, HTTP/1.1 will
perform just as good as HTTP/2? I also expect that some of these benefits
could only be achieved if the upstream server knows it is talking to a
specific client, in which case it would make more sense to use an HAProxy
approach, where one client connection is mapped to one upstream
connection...


On Tue, Mar 17, 2015 at 6:37 PM, Rainer Duffner 
wrote:

>
> > Am 17.03.2015 um 23:32 schrieb Valentin V. Bartenev :
> >
> > On Tuesday 17 March 2015 09:49:04 alexandru.eftimie wrote:
> >> Will there be support for http/2 for upstream connections? I can't seem
> to
> >> find anything about this online ( either SPDY or HTTP/2 for upstream
> >> connections )
> >>
> >
> > The problems that SPDY (and HTTP/2) is trying to solve usually do not
> > exist in upstream connections, or can be solved more effectively using
> > other methods already presented in nginx (e.g. keepalive cache).
> >
> > Could you provide any real use case for HTTP/2 in this scenario?
> >
>
>
>
> My guess would be if your upstream is actually a „real“ internet-server
> (that happens to do http/2).
>
> Somebody trying to build the next „CloudFlare/Akamai/WhateverCDN“?
> ;-)
>
> Is a world possible/imaginable that only does http/2?
>
>
> Rainer
>
>
>
>
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
Mark Mielke 
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Google dumps SPDY in favour of HTTP/2, any plans for nginx?

2015-03-18 Thread Mark Mielke
Hi Valentin:

Are you talking about the same "push" as I am? HTTP/2, or at least SPDY,
had the ability to *push* content like CSS in advance of the request,
pushing content into the browsers cache *before* it needs it. I'm not
talking about long polling or other technology. I've only read about this
technology, though. I've never seen it implemented in practice. And for
prioritization, it's about choosing to send more important content before
less important content. I don't think you are correct in terms of future
potential here. But, it's very likely that you are correct in terms of
*current* potential. That is, I think this technology is too new for people
to understand it and really think about how to leverage it. It sounds like
you don't even know about it...

On Wed, Mar 18, 2015 at 10:45 AM, Valentin V. Bartenev 
wrote:

> On Wednesday 18 March 2015 04:32:55 Mark Mielke wrote:
> > I think the ability to "push" content, and prioritize requests are
> examples
> > of capabilities that might require intelligence upstream, and therefore a
> > requirement to proxy HTTP/2 upstream.
>
> "Server push" doesn't require HTTP/2 for upstream connection.
>
> Upstreams don't request content, instead they return it, so there's nothing
> to prioritize from the upstream point of view.
>
>
> > However, I expect much of this is
> > still theoretical at this point, and until there are actually upstream
> > servers that are providing effective capabilities here, HTTP/1.1 will
> > perform just as good as HTTP/2?
>
> HTTP/1.1 actually can perform better than HTTP/2.
>
> HTTP/1.1 has less overhead by default (since it doesn't introduce another
> framing layer and another flow control over TCP), and it also uses more
> connections, which means more TCP window, more socket buffers and less
> impact from packet loss.
>
> There's almost no reason for HTTP/2 to perform better unless you're doing
> many handshakes over high latency network or sending hundreds of kilobytes
> of headers.
>
>   wbr, Valentin V. Bartenev
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



-- 
Mark Mielke 
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

NGINX gateway problem

2015-03-30 Thread Mark Asysteo
Hey! i got a problem with my ruby site http://www.asysteo.pl - it was
working great, and today i made a PHP update and got this error. I double
checked all port 9000 connections but it's still active.

any smart ideas from the gurus on this forum? ;)

-- 
Posted via http://www.ruby-forum.com/.

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Free O’Reilly animal book about nginx

2015-07-31 Thread Mark Mielke
I requested it a few days ago. It was a little confusing. The link was
circular... You get a link to request a copy which gets you a link to
request a copy. But the email had a pdf attachment and that was the book if
I recall correctly... I really like nginx and the thinking of the people
behind it. Thank you!
On Jul 31, 2015 4:18 AM, "Maxim Konovalov"  wrote:

> Hi,
>
> [...]
> > Thank you, but the download doesn't start on my Firefox.
> >
> Check your mailbox instead.  You should receive a link to the preview.
>
> --
> Maxim Konovalov
> http://nginx.com
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: TCP hostname routing using SNI

2016-06-17 Thread Mark Moseley
On Fri, Jun 17, 2016 at 2:48 PM, jordan.davidson <
nginx-fo...@forum.nginx.org> wrote:

> We need TCP (not http) hostname routing for an environment we are creating
> using k8s and ingress with the nginx ingress controller. We are trying to
> figure out if there is a way to create an nginx server with a config that
> will route TCP calls to a single host:port to different host:port combos
> based on the hostname held in the SNI information.
>
>
This isn't to say that there's not a fully-formed module, but just wanted
to point out that the nginx+lua extensions allow you to get at that
information. And then you could presumably use the balancer-by-lua stuff to
route.
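
(For later readers: nginx versions released after this thread can also do this
natively, without Lua, via the stream module's ssl_preread - a rough sketch,
with hostnames and backends invented here:)

stream {
    map $ssl_preread_server_name $backend {
        app1.example.com  10.0.0.1:8443;
        app2.example.com  10.0.0.2:8443;
        default           10.0.0.3:8443;
    }

    server {
        listen 443;
        ssl_preread on;        # read the SNI name without terminating TLS
        proxy_pass $backend;
    }
}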
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

access logs to parquet

2024-01-11 Thread Mark Lybarger
hi,  i'm using nginx as a proxy to api gateway / lambda services.  each
day, i get 500mb of gzipped access logs from 6 proxy servers.  i want to
load these nginx access logs into a data lake that takes parquet format as
input.  my question is fairly general, is there something that easily
converts nginx access logs to parquet format given some conversion map?
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: nginx Digest, Vol 178, Issue 2

2025-07-07 Thread Mark Mielke
Also, words mean things. When one doesn't understand the words it is easy
to presume that it could be explained more simply, but actually it can be
explained less accurately.

If the target of the document is people with no background, it may be
important to stick to the basics and gloss over complications, like how in
primary grade school they might teach that divided by zero equals zero,
because the students are not ready for the more complex topics.

But the target of nginx docs is not people with no background. If you want
this, I'm sure there are books you can purchase which approach this at a
basics level. The reference documentation is required to be accurate.

I am sure the docs can be improved, but they might be improved by becoming
more complex and covering more things. I have a personal complaint about
nginx docs that they are ambiguous in some respects and I have to read the
same text multiple times before concluding that it doesn't really say what
will happen in the situation I am thinking about, so I will have to test it
myself, or read the code. But if they made the docs more accurate they
might be even less accessible to people without the background to
understand them.

-- 
Mark Mielke 

On Mon, Jul 7, 2025, 8:07 a.m. Tobias Damisch  wrote:

> Hi Matthew,
>
> I can still remember when I was a n00b, so I'll show some mercy here.
>
> 1.) Learn how to email. You can start by providing a meaningful and
>  descriptive subject and checking if you actually want to send an
>  email before doing so.
>
> 2.) Learn how to google. I did a quick "opensuse install nginx" search,
>  and think the first search result is quite a good guide from
>  opensuse themselves: https://en.opensuse.org/Nginx
>
> 3.) Nginx pros please correct me if I'm wrong, but I would start by
>  installing nginx from the official opensuse repos with a simple
>  "sudo zypper install nginx" - more on starting/enabling nginx is in
>  the link above.
>  If you can't manage configuring the nginx version provided by
>  opensuse, compiling it from source won't improve anything for you.
>
> 4.) Learn how to Linux. https://opensuse-guide.org is probably a good
>  starting point for you. If you have questions relating more to Linux
>  in general and opensuse in particular than to nginx, maybe ask on
>  https://forums.opensuse.org ?
>
> And now, experiment a bit, and please stop sending one email after the
> other! If noone answers on a mailinglist, maybe it's just not the right
> crowd to ask.
>
> Cheers and good luck to you,
>
>  Tobias
>
>
>
> > On Mon, Jul 7, 2025, 6:53 AM Matthew Ngaha  wrote:
> >
> > Like I said, those explanations are for people with Linux/programming
> > knowledge. I'm struggling to understand what's being said. I.e this
> > sentence is hard to comprehend:
> >
> > """Variables are e.g. useful for related repositories like packman
> > (http://ftp.gwdg.de/pub/linux/packman/suse/$releasever), which shall
> > always fit the installed distribution, even after a distribution
> > upgrade. To help performing a distribution upgrade, the value of
> > $releasever can be overwritten in zypper using the --releasever global
> > option. This way you can easily switch all repositories using
> > $releasever to the new version (provided the server layouts did not
> > change and new repos are already available)."""
> >
> > What do they mean by related repository? What's an installed
> > distribution?
> > I can make some sense of it, but a quick layman's explanation would
> > have been better than reading through webpages trying to decipher
> > technical terms. Which is the very reason why I asked here to get
> > guided assistance.
> > The 2nd link (forum) is not needed. SLES, which I didn't understand
> > was mentioned in the installation guide, which is why reading the
> > installation guide's  website just adds more confusing terms. I don't
> > need to browse the web just to understand a single term. This is time
> > consuming if there are a lot of terms I don't understand. You've
> > already provided 3 links, how many more do I need for such a simple
> > task?
> > I'm not begging for your help so don't worry about it, I'll try
> > asking AI.
> > Later.
> > ___
> >   

Re: nginx Digest, Vol 178, Issue 2

2025-07-11 Thread Mark Mielke
On Tue, Jul 8, 2025 at 1:25 AM Ian Hobson  wrote:

> I do hope they taught you that dividing by zero was impossible/ did not
> work/was not allowed.
>
> On 08/07/2025 04:46, Mark Mielke wrote:
> > like how in primary grade school they might teach that divided by zero
> > equals zero,
>

Much outside of Nginx relevant - but unfortunately, they did teach that
divided by zero was zero in grade 2, around 1985. My teacher was probably
not happy with it either, being forced to teach a curriculum, and instead
of telling me I was wrong, she invited me to go up to the board after the
lesson and explain my thoughts to the class. I think if I explained my
thoughts, it was an interesting loophole for her, where she could claim she
stayed on curriculum and could ask questions rather than "teaching".

A similar thing happened with "power of", where I was telling one of my
peers about  how it worked on the bus, and he told me I was making up fake
math, so he invited the grade 6 bus monitor to the conversation to confirm
if I was telling the truth or lying, and she told us I was lying, and there
was "no such thing". But, my grade 2 teacher also invited me to explain it
the next day.

Childhood scars... but that teacher was a favourite of mine. :-)

-- 
Mark Mielke 
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx


Re: Cross-compiling Nginx for ARM?

2013-04-17 Thread W-Mark Kubacki
2013/4/16 Shohreh:
> djczaski Wrote:
>
> Thanks for the input. By any chance, did you write a tutorial that I could
> use to try and compile it for that other ARM processor?

Here you go:
[1] http://mark.ossdl.de/en/2009/09/nginx-on-sheevaplug.html

You don't need to patch Nginx anymore and can skip step 7.

I've run a Gentoo binhost for ARM architecture, compatible to the
Sheevaplug's Kirkwood 88F6281 »Feroceon«. Some binaries might work on
Ubuntu, though I've switched to Gentoo:
[2] http://binhost.ossdl.de/ARM/armv5tel-softfloat-linux-gnueabi/ (see
www-servers there; »Packages« is a plaintext file which lists the
contents of the binhost)

More:
[3] http://mark.ossdl.de/en/2009/09/gentoo-on-the-sheevaplug.html
[4] http://mark.ossdl.de/en/2009/09/network-booting-linux-on-the-sheevaplug.html
[5] 
http://mark.ossdl.de/en/2009/09/cross-compiling-for-the-sheevaplug-kernel-distcc.html
[6] http://mark.ossdl.de/en/2009/10/sheevaplug-kernel-and-gentoo-binhost.html
Links to git.ossdl.de don't work, but you can download my modified
kernel, get its ».config« and compile your own. Most patches
(excluding the one for SATA on the SheevaPlug) have already been
integrated into Linux.

[7] http://mark.ossdl.de/en/2010/04/howto-extend-the-sheevaplug-by-esata.html

If I were you I would go for a Mikrotik Routerboard (the RB951G-2HnD
is excellent except its lack of 5GHz wifi). That are MIPS machines,
though. ;-)

-- 
Mark

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: "A" Grade SSL/TLS with Nginx and StartSSL

2013-10-20 Thread W-Mark Kubacki
2013-10-15 Piotr Sikora has cited Julien Vehent:
>
> ssl_ciphers 
> 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK';

Why did you sort the ciphers in this particular order?

If you wanted to prefer AES128 over AES256 over RC4 you could write:
# ssl_ciphers 'AES128:AES256:RC4+SHA:!aNULL:!PSK:!SRP';
See the output of:
# openssl ciphers -v 'AES128:AES256:RC4+SHA:!aNULL:!PSK'
OpenSSL will order the combinations by strength and include new modes
by default.

Why do you include the weak RC4?
  You don't use SSLv3. The subset of outdated clients not able to
use TLSv1.1 *and* AES properly is diminishing. (They would not have been
patched for more than two years and need to repeatedly
(think: millions of times) request the same binary data without Nginx
changing the response…)

Given that AES256 boils down to 2**99.5 bits attack (time/step)
complexity [1] and AES128 to 2**100 if you agree with [2] I would
suggest this:
# ssl_ciphers 'AES128:!aNULL:!PSK:!SRP'
… Include PSK and/or SRP if you need them, which almost no webserver
operator does. Optionally with !ECDH if you don't trust the origin of
the random seed values for NIST curves.

-- 
Mark
http://mark.ossdl.de/

[1] http://eprint.iacr.org/2009/317
[2] http://eprint.iacr.org/2002/044

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Dynamic ssl certificate ? (wildcard+ multiple different certs)

2014-01-09 Thread W-Mark Kubacki
Certificates are selected and presented by the server before the
client even has the chance to send any cookies, the latter
happening after the »TLS handshake«.
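
(For later readers: much newer nginx versions accept variables such as
$ssl_server_name in ssl_certificate, which allows per-SNI certificate loading
at handshake time - a rough sketch, assuming the certificates are stored under
the requested name:)

server {
    listen 443 ssl;
    server_name ~^.+\.sub\.domain\.com$;

    # evaluated during the TLS handshake, based on the client's SNI name
    ssl_certificate     /etc/nginx/certs/$ssl_server_name.crt;
    ssl_certificate_key /etc/nginx/certs/$ssl_server_name.key;
}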

2014/1/9 Larry :
> Hello,
>
> Here is my current conf
>
> server {
> listen   443;
>
> server_name ~^(.*)\.sub\.domain\.com$;
>
> ssl on;
> ssl_certificate $cookie_ident/$1.crt;
> ssl_certificate_key $cookie_ident/$1.key;
> server_tokens off;
>
> ssl_protocols TLSv1.2 TLSv1.1 TLSv1 SSLv3;
> ssl_prefer_server_ciphers on;
> ssl_session_timeout 5m;
> ssl_session_cache builtin:1000 shared:SSL:10m;
>
> ssl_ciphers
> ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-RC4-SHA:ECDH-RSA-RC4-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA:RC4-SHA;
>
>
> autoindex off;
> root /upla/http/www.domain.com;
> port_in_redirect off;
> expires 10s;
> #add_header Cache-Control "no-cache,no-store";
> #expires max;
> add_header Pragma public;
> add_header Cache-Control "public";
>
> location / {
>
> try_files $uri /$request_uri =404;
>
> }
>
> }
>
> I would like to be able to "load" the right cert according to the cookie set
> and request uri.
>
> A sort of dynamic setting.
>
> But of course, when I start nginx, it complains :
> SSL: error:02001002:system library:fopen:No such file or directory:
>
> Perfectly normal since $cookie_ident is empty and no subdomain has been
> requested.
>
> So, what is the workaround I could use to avoid creating one file per new
> (self-signed)certificate issued ?
>
> I cannot use only one certificate for all since I have to be able to revoke
> the certs with granularity.
>
>
> How should I make it work ?
>
> Thanks
>
> Posted at Nginx Forum: 
> http://forum.nginx.org/read.php?2,246178,246178#msg-246178
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Header Vary: Accept-Encoding - security risk ?

2014-05-29 Thread W-Mark Kubacki
2014-05-28 23:20 GMT+02:00 chili_confits :
> I have enabled gzip with
>   ...
>   gzip on;
>   gzip_http_version 1.0;
>   gzip_vary on;
>   ...
> to satisfy incoming HTTP 1.0 requests.
>
> In a very similar setup which got OWASP-evaluated, I read this - marked as
> a defect:
> "The web server sent a Vary header, which indicates that server-driven
> negotiation was done to determine which content should be delivered. This
> may indicate that different content is available based on the headers in the
> HTTP request."
> IMHO this is a false positive ...

Do not suppress header »Vary« or you will run into problems with
proxies, which would otherwise always serve the file gzip-ped
regardless of a requester indicating support or lack thereof.

Nginx does no content negotiation to the extent which would reveal
that »/config.inc« exists if »/config« were requested with the intent
to get »/config.css«. As you can see, even this example is
far-fetched.

-- 
Mark

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

View the client's HTTP protocol?

2017-02-10 Thread Mark McDonnell via nginx
I know the $status variable shows you the upstream/origin's HTTP protocol
(e.g. HTTP/1.1 200) but is there a way to view the protocol the client made
the request with?

For example we've seen some S3 errors returned with a 505 which suggests
the user made a request with some strange HTTP protocol, but we don't know
what it would have been.

It would be good for us to log the client's protocol so we have that
information in future.
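
(The $server_protocol variable holds exactly this - "HTTP/1.0", "HTTP/1.1",
and so on - and the default $request variable already contains the full
request line including the protocol. A sketch of a log_format carrying it
explicitly:)

log_format with_proto '$remote_addr - $remote_user [$time_local] '
                      '"$request" $status $body_bytes_sent '
                      '"$server_protocol" "$http_user_agent"';

access_log /var/log/nginx/access.log with_proto;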

Thanks.
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx