We have a shared macro/include used for Let's Encrypt verification, which proxies
requests to the `/.well-known` directory onto an upstream provider.
The macro uses an old flag/semaphore-based technique to toggle whether the route is
enabled, so we can disable it when not needed. It works great.
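A minimal sketch of that kind of flag-file toggle — the paths and upstream name are illustrative, not our actual config:

```nginx
# Hypothetical sketch: proxy ACME challenges upstream only while a flag file exists.
location /.well-known/acme-challenge/ {
    # /tmp/letsencrypt.flag is an illustrative path; staff touch it to enable
    # the route and remove it to disable, no nginx reload required.
    if (!-f /tmp/letsencrypt.flag) {
        return 404;
    }
    proxy_pass http://127.0.0.1:8080;  # illustrative upstream provider
}
```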
On Dec 9, 2016, at 7:09 PM, Robert Paprocki wrote:
> Should be fairly easy to do with any command to write data over the wire
> (nc/netcat/echo into /dev/tcp):
Thanks for all this... I now mostly understand what was going on.
The *intent* of the nginx setup was to do the following, via 3 serv
I got hit with a port scanner a few minutes ago, which triggered an edge case I
can't reproduce.
the access log looks like this:
94.102.48.193 - [09/Dec/2016:22:15:03 +][_] 500 "GET / HTTP/1.0"
10299 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)" "-"
cookies="-"
the se
On Dec 4, 2016, at 11:03 AM, Reinis Rozitis wrote:
> In case of https I don't even think it makes sense to provide any
> certificates (even self-signed).
> Without those the connection will/should be just terminated because of peer
> not providing any certificates and self-signed certs shouldn
On Nov 30, 2016, at 5:09 PM, steve wrote:
> Well, no as I've fixed this. However, if you have a probe for site x on
> https: and it doesn't exist, then the default https site for that IP address
> will be returned. Depending on configuration, it may still be attributed to
> the original search
On Nov 28, 2016, at 4:07 PM, Jeff Dyke wrote:
> And you do get a small SEO boost for being https forward.
Not necessarily -- some search engines are now doing the opposite and penalizing
non-https sites. Google announced plans to start labeling non-https sites as
"insecure" in 2017 too.
It's i
I'm not sure if this is a feature request or just an issue with our
deployment...
We host many domains, often partitioned across many configuration files
(i.e.: sites-enabled/domain1.conf,
sites-enabled/domain2.conf,
sites-enabled/domain3.conf)
An issue that has
On Nov 4, 2016, at 5:43 AM, mex wrote:
> we do a similar thing but keep a counter within nginx (lua_shared_dict FTW)
> and export this stuff via /badass - location.
>
> although its not realtime we have a delay of 5 sec which is enough for us
We have a somewhat similar setup under openresty/n
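The quoted approach might look roughly like this under OpenResty — a sketch only; the dict name, key, and Lua details are illustrative, and only the /badass location comes from the quote:

```nginx
http {
    lua_shared_dict counters 1m;

    server {
        location / {
            # log_by_lua_block runs after the response has been sent, so the
            # counting doesn't add latency to the request itself.
            log_by_lua_block {
                -- the third arg (init value) needs a reasonably recent lua-nginx-module
                ngx.shared.counters:incr("requests", 1, 0)
            }
            proxy_pass http://127.0.0.1:8080;  # illustrative backend
        }

        location /badass {
            content_by_lua_block {
                ngx.say("requests: ", ngx.shared.counters:get("requests") or 0)
            }
        }
    }
}
```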
I'm doing a quick audit on an nginx deployment and want to make sure something
is implemented correctly.
We have a handful of domains that redirect http traffic to https.
we used to do this, which is very efficient:
server {
    listen 80;
    server_name example.
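For context, a complete version of that pattern (example.com stands in for the real names) would be:

```nginx
server {
    listen 80;
    server_name example.com;
    # Blanket redirect; $request_uri preserves the path and any query string,
    # and nginx answers without touching the backend at all.
    return 301 https://$host$request_uri;
}
```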
On Sep 28, 2016, at 5:34 AM, jhernandez wrote:
> But we're not sure if 1.10.1 would support OpenSSL 1.0.2i. Has anyone tried
> this approach before ?
FYI, the OpenSSL 1.1 and 1.0.2 branches had security fixes on 9/26 to their
9/22 releases.
The current releases are:
1.0.2j
1.1.0b
On Jul 11, 2016, at 4:27 PM, Maxim Dounin wrote:
> No location information is added by nginx to error pages, and
> never was. You are probably using something with 3rd party
> patches. An obvious fix is to switch to using vanilla nginx
> instead, it can be downloaded here:
On Jul 11, 2016, a
I have some servers where I use an old method of gating a path by using a file
check.
this allows staff to turn off certain locations during migrations/updates
without having root privileges (needed to restart nginx)
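A sketch of that kind of file-check gate, with an illustrative path and backend; staff create the file to take the location offline and delete it to bring it back:

```nginx
location /app/ {
    # Hypothetical flag file; `touch` it to put this path into maintenance
    # mode without needing root or an nginx restart.
    if (-f /var/www/maintenance.flag) {
        return 503;
    }
    proxy_pass http://127.0.0.1:6543;  # illustrative backend
}
```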
An issue I noticed: this method now (perhaps always) shows the name of the
lo
On Mar 24, 2016, at 1:27 PM, Maxim Dounin wrote:
> In most cases this is more or less obvious when directives are not
> inherited, though docs can be a bit more clear on this.
What is not-obvious / confusing is that the *_pass items are not inherited...
but their associated directives from the
On Mar 23, 2016, at 2:14 PM, Francis Daly wrote:
> Any directives that inherit do not need to be repeated.
>
> If it does not work for you, that's probably due to proxy_pass not
> inheriting.
Thanks - that's it -- `proxy_pass` does not inherit, but all the
`proxy_set_header` directives in that
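To illustrate the inheritance rule under discussion: `proxy_set_header` directives declared at the `server` level are inherited by a `location` (as long as that location declares none of its own), while `proxy_pass` must always be stated per location. Addresses below are illustrative:

```nginx
server {
    listen 80;
    # Inherited by both locations below, since neither sets its own
    # proxy_set_header (setting even one in a location discards all of these).
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;

    location /foo {
        proxy_pass http://127.0.0.1:6543;  # must be repeated per location
    }

    location /bar {
        proxy_pass http://127.0.0.1:6544;
    }
}
```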
Apologies for the simple question, but I could only find the opposite situation
in the list archives, and I haven't had to reconfigure some of these routes in
years!
I have
# works
location /foo {
    proxy_pass http://127.0.0.1:6543;
}
I want to lock down
On Mar 23, 2015, at 11:15 PM, Steve Holdoway wrote:
> Well, I'm going for the multiple levels of protection approach, but am
> trying to mate that with a 'simple to maintain' methodology.
>
> So, yes I'd like to do both, but without being heavy-handed on the
> website owners.
I understand the
On Mar 24, 2015, at 3:26 PM, Francis Daly wrote:
> but the original "if ($query_string)" is probably simpler at
> that point.
thanks for the help! it's great having a second set of eyes on this!
___
nginx mailing list
nginx@nginx.org
http://mailman.
On Mar 24, 2015, at 2:10 PM, Gena Makhomed wrote:
> Probably you can do such tracking just looking at Referer request header
Long story short - we actually are doing that. This is just to get stats into
the HTTPS log analyzer, which is a different system and much easier for us to
deploy chang
I need to redirect from http to https and append a "source" attribute for
tracking (we're trying to figure out how the wrong requests are coming in).
this seems to work:
if ($query_string) {
    return 301
https://$host$request_uri&source=s
I recently encountered an issue with the 1.5.7 branch on OS X. I did not check
1.5.8.
The following configuration serves ALL css/js files with the default_type:
include /usr/local/nginx/conf/mime.types;
default_type application/octet-stream;
The following code works as intended
default_
are there any official recommendations from nginx to safeguard against the
BREACH exploit ?
http://breachattack.com/
http://arstechnica.com/security/2013/08/gone-in-30-seconds-new-attack-plucks-secrets-from-https-protected-pages/
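For context, the mitigation most often suggested (not an official nginx feature, just common advice) is to disable HTTP compression for responses that reflect user input or carry secrets — a sketch with an illustrative path and backend:

```nginx
# BREACH exploits HTTP-level compression of responses containing secrets,
# so one mitigation is to turn gzip off for dynamic, secret-bearing content
# while leaving it on for static assets.
location /account/ {
    gzip off;
    proxy_pass http://127.0.0.1:8080;  # illustrative backend
}
```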
we'd like to append an identifier of the nginx server to the query string,
something like:
return 301 https://$host$request_uri?source=server1;
the problem is that we can't figure out how to make this work correctly when
the URL already contains a query string.
Example:
return
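One common way to handle both cases (a sketch; `server1` is the identifier from above) is to branch on whether a query string is present, since `$request_uri` already includes it:

```nginx
# Query string present: $request_uri already ends in ?key=val, so append with &.
if ($query_string) {
    return 301 https://$host$request_uri&source=server1;
}
# No query string: start one with ?.
return 301 https://$host$request_uri?source=server1;
```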
On Jun 3, 2013, at 10:13 AM, Belly wrote:
>>> What is the best setting for my situation?
>>
>> I would recommend using "fastcgi_max_temp_file_size 0;" if you
>> want to disable disk buffering (see [1]), and configuring some
>> reasonable number of reasonably sized fastcgi_buffers. I would
>>
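The quoted advice would translate to something like the following — buffer sizes and the backend address are illustrative:

```nginx
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;  # illustrative backend
    include fastcgi_params;

    # Disable buffering of responses to temporary files on disk entirely...
    fastcgi_max_temp_file_size 0;
    # ...and give nginx a reasonable amount of in-memory buffering instead.
    fastcgi_buffers 16 16k;
    fastcgi_buffer_size 32k;
}
```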
Using the inet interface on supervisord was simple (proxying to port 9001).
I wanted to turn that off, avoid TCP, and just use the unix socket.
After a bit of tinkering, this seems to work:
proxy_pass http://unix:///tmp/supervisord.sock:;
is this the correct way to handle this?
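That form works; an equivalent and arguably more readable spelling uses an `upstream` block (the upstream name is illustrative):

```nginx
upstream supervisord {
    # Same unix socket, declared once and referenced by name below.
    server unix:/tmp/supervisord.sock;
}

server {
    location / {
        proxy_pass http://supervisord;
    }
}
```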
I'm surprised this hasn't come up before ( I've looked on this archive + Stack
Overflow )
There doesn't seem to be a way to catch all errors in nginx; each status code
needs to be specified explicitly.
I'm using nginx with proxy_intercept_errors, so there can be many different
codes.
I've built out all the co
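Since every status has to be listed explicitly, the enumeration ends up looking roughly like this (the error-page paths are illustrative):

```nginx
# Pass upstream error statuses to error_page handling instead of relaying
# the backend's body verbatim.
proxy_intercept_errors on;

# nginx has no wildcard here; each status code must be enumerated explicitly.
error_page 400 401 403 404 405 /4xx.html;
error_page 500 501 502 503 504 /5xx.html;
```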
> server {
>     listen 80;
>     listen IP:80;
>     server_name example.com;
>     # site A
> }
>
> server {
>     listen 80 default_server;
>     # site B
> }
>
> "listen 80" + "server_name example.com" routes all requests for example.com
> to site A.
> "listen IP:80" routes all request
forgive me if this has been asked before -- I couldn't find this exact question
in my mailing list archives back to 2007
I am trying to deal with wildcard domains in a setup.
The intended result is to do this :
Requests for example.com
Serve Site A
All IP Addr