I am running with Nginx 1.16. I have a really simple configuration for
WordPress, seen below.
I have one test case:
curl -H "Host: x.com" "http://127.0.0.1/wp-admin/";
Which succeeds - I can see in the php-fpm log that it does "GET
/wp-admin/index.php"
I have a second test case:
curl -H "Host: x.
I've been struggling all day with this, I'm missing something, hoping
someone can point out what I'm doing wrong w/ the realip module:
nginx.conf:
...
log_format xyz '$remote_addr - $remote_user [$time_iso8601] '
'"$request" $status $body_bytes_sent '
'"$h
Sounds like you've got all the common stuff well covered. I'm curious what
the flame graphs show; I'd like to implement the port reuse feature for my
employer if it works as described in the article you referenced.
On Tue, May 23, 2017 at 3:39 AM, fengx wrote:
> Hello, CJ E
How about logging? If you're using the default settings then nginx is
logging directly to disk, and disk writes will block the worker. Do you see
the same degradation with logging disabled or via syslog?
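If it helps, the switch is a one-line change per log; a sketch, assuming a
syslog daemon listening locally:

    # hand log lines to the local syslog daemon instead of
    # writing files from the worker
    access_log syslog:server=127.0.0.1,facility=local7,tag=nginx,severity=info combined;
    error_log  syslog:server=127.0.0.1,facility=local7,tag=nginx error;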
On Mon, May 22, 2017 at 10:59 PM, fengx wrote:
> There should be no blocking operation. At
I'd be interested in knowing more also - I know that the Linux 2.6 kernel
is still really popular and didn't have the SO_REUSEPORT socket option
(though it was in the include files and wouldn't cause an error if you
referenced it) - might that be what you're running into?
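For what it's worth, the feature itself is just a flag on listen (it needs
a kernel with working SO_REUSEPORT, Linux 3.9 or later):

    server {
        # one listening socket per worker; the kernel spreads accepts among them
        listen 80 reuseport;
    }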
On Wed, May 17, 2017 at 7:58
My employer uses Nginx in front of PHP-FPM to generate their web content.
They have PHP's error reporting shut off in production so when something
does go wrong in their PHP scripts they end up with a "White Screen Of
Death". From a protocol level the white screen of death is a 200 response
with no
Both NICs support a speed of 1000Mb/s; the server got around
> 600 Mb/s up and 13Mb/s down.
>
> CJ Ess Wrote:
> ---
> > Which OS? What NIC? You also have to consider the traffic source, is
> > it
> > known capable o
Which OS? What NIC? You also have to consider the traffic source - is it
known capable of saturating the NIC on your server?
On Fri, Jan 6, 2017 at 10:24 AM, MrFastDie
wrote:
> Hello,
>
> the last few days I played a little with the NGINX settings and the tcp
> stack to test the best performance
specified or scheme default)? I looked through the variable
descriptions but didn't see any that looked appropriate.
On Fri, Nov 18, 2016 at 3:15 PM, Maxim Dounin wrote:
> Hello!
>
> On Fri, Nov 18, 2016 at 02:55:13PM -0500, CJ Ess wrote:
>
> > I know it's not encouraged but I
I know it's not encouraged but I am trying to use Nginx (specifically
openresty/1.11.2.1 which is based on Nginx 1.11.2) as a forward proxy.
I did a quick setup based on all the examples I found in Google and tried
"GET http://www.google.com/"; as an example and found:
This does work:
location
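The working variant is roughly this shape - a sketch, with a placeholder
resolver address, covering plain HTTP only since stock nginx has no CONNECT
support:

    server {
        listen 8080;
        # a resolver is required because proxy_pass uses variables
        resolver 8.8.8.8;
        location / {
            # forward to whatever host the client asked for
            proxy_pass http://$http_host$request_uri;
        }
    }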
OVH and Hetzner CIDR lists from RIPE are huge because of all the tiny
subnets - however they compress down really well if you merge all the
adjacent networks; you end up with a few dozen entries each. Whatever set
of CIDRs you are putting in a set, always merge them unless you need to
know which sp
at bit rate limits
> tcp streams..
>
> just bought into nginx so looking at stream proxying through it instead
> A
>
> On 29 October 2016 at 02:48, CJ Ess wrote:
> > Cool. Probably off topic, but why rate limit FIX? My solution for heavy
> > traders was always to p
I don't think managing large lists of IPs is nginx's strength - as far as I
can tell all of its ACLs are arrays that have to be iterated through on
each request.
When I do have to manage IP lists in Nginx I try to compress the lists into
the most compact CIDR representation so there is less to se
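One option when the list gets big is the geo module, which builds a prefix
tree instead of scanning; a sketch with made-up networks showing the merge:

    geo $deny {
        default 0;
        # 10.1.0.0/24 and 10.1.1.0/24 merged into a single adjacent entry
        10.1.0.0/23 1;
    }
    server {
        listen 80;
        if ($deny) {
            return 403;
        }
    }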
> Hi
>
> yeah I have had a very quick look, just wondering if any one on the
> list had set one up.
>
> Alex
>
> On 28 October 2016 at 16:15, CJ Ess wrote:
> > Maybe this is what you want:
> > https://nginx.org/en/docs/stream/ngx_stream_proxy_mod
Maybe this is what you want:
https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html
See the parts about proxy_download_rate and proxy_upload_rate
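A sketch of how those fit together - the listen port and the FIX engine
address are placeholders:

    stream {
        server {
            listen 9898;
            # rates are bytes per second, per direction
            proxy_download_rate 100k;
            proxy_upload_rate   100k;
            proxy_pass fix-engine.internal:9899;
        }
    }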
On Thu, Oct 27, 2016 at 11:22 PM, Alex Samad wrote:
> Yep
>
> On 28 October 2016 at 11:57, CJ Ess wrote:
> > FIX as i
FIX as in the financial information exchange protocol?
On Thu, Oct 27, 2016 at 7:19 PM, Alex Samad wrote:
> Hi
>
> any one setup nginx infront of a fix engine to do rate limiting ?
>
> Alex
>
The clients will send an "Accept-Encoding" header which includes "br" as
one of the accepted types; that will trigger the module if it's configured.
It has a set of directives similar to the gzip module, so you'll need to
set those.
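Assuming the ngx_brotli module, the gzip-style directives look roughly
like:

    brotli on;
    brotli_comp_level 6;
    # compress these MIME types in addition to text/html
    brotli_types text/plain text/css application/json application/javascript;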
I think I see brotli support mostly from Chrome on Android.
On Th
You probably have some module leaking memory - send output of 'nginx -V' so
people can see what version you have and what modules are there.
On Wed, Aug 24, 2016 at 1:54 PM, Amanat wrote:
> I was using Apache for the last 3 years and never faced a single problem.
> A few days ago I thought to try Nginx
md5 shouldn't give different results regardless of implementation - my
guess is that your different platforms are using different character
encodings (iso8859 vs utf8 for instance) and that is the source of your
differences. To verify your md5 implementation there are test vectors here
https://www.
You can get the nginx source code from here: http://nginx.org/
Or here: https://github.com/nginx/nginx
On Wed, Jul 20, 2016 at 3:51 PM, Thiago Farina wrote:
>
>
> On Mon, Jun 27, 2016 at 8:37 AM, Pankaj Chaudhary
> wrote:
>
>>
>>
>> Is there such thing? As far as I know it is distributed only
Ok, that explains it then. Does the cache survive reloads? Or does it need
to requery?
On Wed, Jun 29, 2016 at 1:23 AM, Kurt Cancemi
wrote:
> Hello,
>
> Nginx uses a per worker OCSP cache.
>
> On Tuesday, June 28, 2016, CJ Ess wrote:
>
>> I think I've got ocsp
I think I've got OCSP stapling set up correctly with Nginx (1.9.0). I am
seeing valid OCSP responses however if I keep querying the same server I
also frequently see "No response". The OCSP responses are valid for seven
days. Is each worker doing its own OCSP query independently of the others?
Or is
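For context, my stapling setup is the usual set of directives (the paths
here are placeholders, not my real ones):

    ssl_stapling on;
    ssl_stapling_verify on;
    # chain used to verify the OCSP response
    ssl_trusted_certificate /etc/nginx/ssl/chain.pem;
    resolver 127.0.0.1;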
You were correct, there was a typo in my rpm spec that kept the diff from
applying but didn't kill the build. The curl request is working now! Now I
need to see if those other POST requests are working.
On Mon, Jun 27, 2016 at 8:38 PM, CJ Ess wrote:
> I'm trying to use http
5:45 PM, Valentin V. Bartenev
wrote:
> On Monday 27 June 2016 17:33:12 CJ Ess wrote:
> > I finally had a chance to test this, I applied ce94f07d5082 to the 1.9.15
> > code -- it applied cleanly and compiled cleanly. However, my test post
> > request over http2 with curl failed
pe: application/json" -d "{}" "https://test-server_name/"
And my curl is: curl 7.49.1 (x86_64-pc-linux-gnu) libcurl/7.49.1
OpenSSL/1.0.2h nghttp2/1.11.1
On Sun, Jun 26, 2016 at 8:55 AM, Valentin V. Bartenev
wrote:
> On Saturday 25 June 2016 21:00:37 CJ Ess wrote:
>
Thank you very much for the pointer to the change, I'm going to give that a
shot ASAP.
On Sun, Jun 26, 2016 at 8:55 AM, Valentin V. Bartenev
wrote:
> On Saturday 25 June 2016 21:00:37 CJ Ess wrote:
> > I could use some help with this one - I took a big leap with enabling
> >
I could use some help with this one - I took a big leap with enabling
http/2 support and I got knocked back really quick. There seems to be an
issue with POSTs and it seems to be more pronounced with iOS devices (as
much as you can trust user agents) but there were some non-iOS devices that
seemed
Check that you have both the certificate and any intermediate certificates
in your pem file - you can skip the top-most CA certificate as that is
generally included in your browser's CA store - but the intermediates are
not.
I believe Nginx wants certs ordered from bottom-most (your cert) to
top-most.
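In other words, build the pem like this (filenames are placeholders):

    # leaf certificate first, then intermediates; the root can be omitted:
    #   cat example.com.crt intermediate.crt > chained.pem
    ssl_certificate     /etc/nginx/ssl/chained.pem;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;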
I once knew a guy who convinced someone they had hacked their site by
making a DNS entry to 127.0.0.1. So when the guy tried to access the
"other" site his passwords worked, all his files were there, it was even
running the same software! He made changes on his site and they instantly
appeared on t
ctory, it
appears that any one of them could bump up the backlog, but if any two
server stanzas have options to do it then it causes an error. Maybe the
best way to do it is to have some sort of dummy entry that sets the options
- if it's always the last server stanza that sets the listen optio
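A sketch of the constraint - the socket-related listen options may appear
on only one server per address:port pair (names here are placeholders):

    server {
        # only this server sets the socket options for *:80
        listen 80 backlog=4096;
        server_name a.example.com;
    }
    server {
        # every other server just listens, with no extra options
        listen 80;
        server_name b.example.com;
    }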
Very cool! lua-resty-waf is actually at the top of my list of WAFs to try
as soon as I finish deploying openresty everywhere.
On Mon, Apr 25, 2016 at 11:09 AM, Robert Paprocki <
rpapro...@fearnothingproductions.net> wrote:
> There are also several WAFs built upon Openresty (nginx + luajit at
There is a version of modsecurity for Nginx -
https://github.com/SpiderLabs/ModSecurity - however it tends to cause
random mysterious problems including segfaults, so maybe not what you're
looking for.
There are also several WAFs built upon Openresty (nginx + luajit at
openresty.com) however I haven'
Ok, I understand what is happening now, thank you!
On Wed, Apr 20, 2016 at 11:52 AM, Maxim Dounin wrote:
> Hello!
>
> On Wed, Apr 20, 2016 at 09:24:52AM -0400, CJ Ess wrote:
>
> > I've tried putting this directive into the nginx config file in both the
>
rt (not a reload)? I would imagine the master
> process needs to flush everything out.
>
> > On Apr 20, 2016, at 06:24, CJ Ess wrote:
> >
> > I've tried putting this directive into the nginx config file in both the
> main and html sections:
> >
> > er
I've tried putting this directive into the nginx config file in both the
main and html sections:
error_log syslog:server=127.0.0.1,facility=local5 error;
The file tests fine and reloads without issue, however if I do fuser -u on
the error file (which is the same one used by syslog) I see that eve
a legit reason for doing this. Either way it's not an nginx (or
haproxy) issue.
On Fri, Apr 15, 2016 at 4:49 PM, Валентин Бартенев wrote:
> On Thursday 14 April 2016 22:45:36 CJ Ess wrote:
> > In my environment I have Nginx terminating connections, then sending them
> > to an H
n Thursday 14 April 2016 22:45:36 CJ Ess wrote:
> > In my environment I have Nginx terminating connections, then sending them
> > to an HAProxy upstream. We've noticed that whenever HAProxy emits a 403
> > error (Forbidden, in response to our ACL rules), NGINX reports a 503
>
In my environment I have Nginx terminating connections, then sending them
to an HAProxy upstream. We've noticed that whenever HAProxy emits a 403
error (Forbidden, in response to our ACL rules), NGINX reports a 503 result
(service unavailable) and I believe is logging an "upstream prematurely
closed
I was trying to think of a hack day project, and one idea was to implement
a blob server similar to Facebook's haystack. Facebook did their server
with the evhttpd library, I was thinking of making it an nginx module. In
order to make it work I'd need to have nginx send a range of bytes from a
larg
You're right, I should make a simple test case like you did in the prev
message. I'll put that together.
On Thu, Mar 31, 2016 at 4:29 PM, Francis Daly wrote:
> On Thu, Mar 31, 2016 at 01:21:02PM -0400, CJ Ess wrote:
>
> Hi there,
>
> > I would like to have an Nginx setu
I would like to have an Nginx setup where I have specific logic depending
on which interface (ip) the request arrived on.
I was able to make this work by having a server stanza for each ip on the
server, but wasn't able to do a combination of a specific ip and a wildcard
ip (as a catchall) - is the
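Roughly the combination I mean (addresses are placeholders):

    server {
        # logic specific to requests arriving on this interface
        listen 10.0.0.1:80;
    }
    server {
        # wildcard catch-all for every other address
        listen 80 default_server;
    }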
, most probably
> allocated by the master process at configuration loading time and then
> accessible/accessed by workers when needed.
>
> You will be able to make a conclusion by yourself. :o)
> ---
> *B. R.*
>
> On Sat, Mar 19, 2016 at 10:02 PM, CJ Ess wrote:
>
>> Th
The value I specify for the size of my key zone in the proxy_cache_path
statement - is that a per-worker memory allocation or a shared memory zone?
(i.e. if it's 64mb and I have 32 processors, does the zone consume 64mb of
main memory or 2gb?)
: Thu, 10 Mar 2016 05:01:30 GMT
> ETag: "d0f3-52daab51fbe80"
> Expires: Sun, 19 Mar 2017 20:42:48 GMT
> Cache-Control: max-age=31536000
> Cache-Control: public, max-age=31536000
> X-Cache-Status: HIT
> Accept-Ranges: bytes
>
>
> CJ Ess wrote:
>
> I think I've
I think I've run into the problem before - move the proxy_pass statement
from the top of the location stanza to the bottom, and I think that will
solve your issue.
On Sat, Mar 19, 2016 at 4:10 PM, shiz wrote:
> Been playing with this for 2 days.
>
> proxy_pass is working correctly but the proxy_
One other question - is the key zone shared between the worker processes?
Or will each worker allocate its own copy?
On Thu, Mar 10, 2016 at 4:18 PM, CJ Ess wrote:
> I will try that now - but shouldn't it be evicting a key if it can't fit a
> new one?
>
>
> On Thu
I will try that now - but shouldn't it be evicting a key if it can't fit a
new one?
On Thu, Mar 10, 2016 at 2:38 PM, Richard Stanway
wrote:
> At a guess I would say your key zone is full. Try increasing the size of
> it.
>
> On Thu, Mar 10, 2016 at 8:07 PM, CJ Ess wro
This is nginx/1.9.0 BTW
On Thu, Mar 10, 2016 at 2:06 PM, CJ Ess wrote:
> Same condition on two more of the servers in the same pool. Reload doesn't
> resolve the issue, but restart does. No limit being hit on disk space,
> inodes, open files, or memory.
>
>
> On Thu, M
Same condition on two more of the servers in the same pool. Reload doesn't
resolve the issue, but restart does. No limit being hit on disk space,
inodes, open files, or memory.
On Thu, Mar 10, 2016 at 12:12 PM, CJ Ess wrote:
> I have four servers in a pool running nginx with proxy_ca
I have four servers in a pool running nginx with proxy_cache. One of the
nodes started spewing "could not allocate node in cache keys zone" errors
for every request (which gave 500 status). I did a restart and it started
working again.
What conditions cause that error in general?
If it happens ag
I did some performance tests and it seemed to me as if the status stub
caused a bit of a performance hit, but nothing really concerning. However,
the status stub doesn't really give a lot of useful information IMO
because it's just supposed to be a placeholder for an nginx+ status page -
I'm going
If your backends are sensitive to keepalive traffic (mine are), then my
advice is to enable keepalives as far into your stack as you can.
i.e. I have nginx fronting haproxy and varnish; I enable keepalives to both
haproxy and varnish and have them add a "connection: close" header to their
backend re
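A sketch of the nginx side of that, assuming haproxy listens on a local
port:

    upstream haproxy_backend {
        server 127.0.0.1:8080;
        # pool of idle connections kept open, per worker
        keepalive 32;
    }
    server {
        location / {
            proxy_pass http://haproxy_backend;
            # upstream keepalive needs HTTP/1.1 and an empty Connection header
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }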
2:03 PM, CJ Ess wrote:
> Hello! I'm testing out a new configuration and there are two issues with
> the proxy caching feature I'm getting stuck on.
>
> 1) Everything is a cache miss, and I'm not sure why:
>
> My cache config (anonymized):
>
> ...
>
> location / {
> return 404;
> }
> On Feb 29, 2016 16:15, "Payam Chychi" wrote:
>
>> Look at your proxy cache path... (proxy_cache_path /var/www/test_cache)
>> Are you sure the path exists and had proper perms/ownership?
>>
>> Payam
>>
>>
error log.
On Mon, Feb 29, 2016 at 2:15 PM, Payam Chychi wrote:
> Look at your proxy cache path... (proxy_cache_path /var/www/test_cache)
> Are you sure the path exists and had proper perms/ownership?
>
> Payam
>
>
> On Feb 29, 2016, 11:03 AM -0800, CJ Ess , wrote:
Hello! I'm testing out a new configuration and there are two issues with
the proxy caching feature I'm getting stuck on.
1) Everything is a cache miss, and I'm not sure why:
My cache config (anonymized):
...
proxy_cache_path /var/www/test_cache levels=2:2 keys_zone=TEST:32m
inactive=365d max_si
Thank you! I've got it all set up now, thanks for the pointer to
$EscapeControlCharactersOnReceive
On Thu, Feb 25, 2016 at 6:25 PM, Ekaterina Kukushkina wrote:
> Hello CJ,
>
>
> > On 26 Feb 2016, at 00:50, CJ Ess wrote:
> >
> > I would really like to output my
I would really like to output my nginx access log to syslog in a tab
delimited format.
I'm using the latest nginx and rsyslogd 7.2.5
I haven't found an example of doing this - I'm wondering if/how to add tabs
to the format in the log_format directive,
and also if there is anything I need to do to
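What I have in mind is something like the sketch below - as far as I can
tell log_format doesn't expand \t escapes, so the separators in the format
string are literal tab characters:

    # the separators below are literal TABs pasted into the config
    log_format tsv '$remote_addr	$remote_user	$time_iso8601	$request	$status';
    access_log syslog:server=127.0.0.1,facility=local5,tag=nginx tsv;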
Does anyone know if the author still maintains nginx_upstream_check_module?
I see only a handful of commits in the past year and they all look like
contributed changes.
On Thu, Jan 28, 2016 at 9:28 PM, Dewangga Bachrul Alam <
dewangg...@xtremenitro.org> wrote:
> -BEGIN PGP SIGNED MESSAGE-
I think what they are asking is to support the transport layer so that they
don't have to support both protocols on whatever endpoint they are
developing.
Maybe I'm wrong and someone has grand plans about multiplexing requests to
an upstream with http/2, but I haven't seen anyone ask for that expl
Looks like Cloudflare patched SPDY support back into NGINX, and they will
release the patch to everyone next year:
https://blog.cloudflare.com/introducing-http2/#comment-2391853103
On Thu, Dec 3, 2015 at 1:14 PM, CJ Ess wrote:
> NGINX devs,
>
> I know you were very excited to re
Let me get back to you on that - we're going to send some traffic through
Cloudflare and see how the traffic breaks out given the choice of all three
protocols.
On Thu, Dec 3, 2015 at 1:29 PM, Maxim Konovalov wrote:
> Hello,
>
> On 12/3/15 9:14 PM, CJ Ess wrote:
> > NGINX
NGINX devs,
I know you were very excited to remove SPDY support from NGINX, but for the
next few years there are a lot of devices (mobile devices that can't
upgrade, end users who aren't comfortable upgrading, etc) that are not
going to have http/2 support. By removing SPDY support you've created
Good info, thank you!
On Mon, Nov 9, 2015 at 7:53 AM, Maxim Dounin wrote:
> Hello!
>
> On Sat, Nov 07, 2015 at 08:28:29PM -0500, CJ Ess wrote:
>
> > Just curious - if I am using the deferred listen option on Linux my
> > understanding is that nginx will not be woken u
He has a point - if you're using multiple CDNs you can have many dozens of
addresses for the real_ip module - it would be nice to be able to source
them from a file.
Also last I checked the real_ip module did a linear search through all the
addresses configured; it's not an issue yet but at some poin
Just curious - if I am using the deferred listen option on Linux my
understanding is that nginx will not be woken up until data arrives for the
connection. If someone is trying to DDOS me by opening as many connections
as possible (has happened before) how does that situation play out with
deferred
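For reference, the option I mean is just a flag on listen (TCP_DEFER_ACCEPT
on Linux):

    # the worker isn't woken on the bare handshake; accept fires
    # only once the client sends data
    listen 80 deferred;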
ld be active simultaneously at any point.
On Thu, Nov 5, 2015 at 8:19 AM, Maxim Dounin wrote:
> Hello!
>
> On Thu, Nov 05, 2015 at 12:55:36AM -0500, CJ Ess wrote:
>
> > So I'm looking for some advice on determining an appropriate number for
> the
> > keepaliv
So I'm looking for some advice on determining an appropriate number for the
keepalive parameter within an upstream stanza.
The server processes ~3000 requests per second, and haproxy is the single
upstream destination. Dividing the request rate by the number of
processors (workers) I'm thinkin
I was under the impression that SPDY support had been dropped from NGINX
altogether - however
http://nginx.org/en/docs/http/ngx_http_core_module.html#listen seems to
suggest it might still be possible to select it. Which is correct?
If it's not possible to select SPDY it would have been nice to hav
Hello!
I'm experimenting with fastcgi caching - I've added $upstream_cache_status
to the access log, and I can see that periodically there will be a small
cluster of EXPIRED requests for an object.
Does EXPIRED imply that the object was fetched from origin each time?
...or that the requests were q
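If those clusters turn out to be concurrent origin fetches, the directives
I'm eyeing to collapse them into one are these (a sketch, not yet tested):

    # let only one request per key refresh an expired entry...
    fastcgi_cache_lock on;
    # ...and serve the stale copy to everyone else in the meantime
    fastcgi_cache_use_stale updating;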
I have an nginx 1.9.0 deploy and I noticed a working config where the name
given to the server_name directive doesn't match the names in the Host
headers or the certificate DNs. It looks like a mistake, but it works, and
I don't know why! Is it possible that if there is only one server stanza
that
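My current guess at the mechanism, sketched below: with a single server
stanza it is automatically the default server for its listen socket, so it
answers every Host header regardless of server_name (the name here is a
placeholder):

    server {
        # the only server stanza for *:80 is the default server,
        # so it serves all requests even though the name never matches
        listen 80;
        server_name wrong-name.example.com;
    }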
Try incorporating haproxy (http://www.haproxy.org/) or Apache Traffic
Server (http://trafficserver.apache.org/) into your setup. I use NGINX to
terminate SSL/SPDY then haproxy to direct the request to the appropriate
backend server pool - Haproxy is very good at being a reverse proxy but has
no for
Everything is fastcgi, my question is how best to treat one single fastcgi
URL differently (caching it instead of forwarding every request to the
backend).
On Wed, Jun 24, 2015 at 12:37 PM, ryd994 wrote:
>
>
> On Tue, Jun 23, 2015 at 6:27 PM CJ Ess wrote:
>
>> So looks li
"if" to set a
variable which I could use to match on the URL and trigger
fastcgi_cache_bypass for everything not matching. Is "if" so toxic that I
shouldn't consider doing it this way?
On Tue, Jun 23, 2015 at 6:07 PM, Francis Daly wrote:
> On Tue, Jun 23, 2015 at 04
Hello,
I am looking for advice. I am using nginx to terminate SSL and forward the
request to php via fastcgi. Of all of requests I am forwarding to fastcgi
there is one particular URL that I want to cache, hopefully bypassing
communication with the fastcgi and php processes altogether.
- Would I
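What I'm picturing is giving that one URL its own location with a cache
attached - a sketch with placeholder names and paths:

    # in http{}: fastcgi_cache_path /var/cache/nginx/one keys_zone=ONEURL:16m;
    location = /expensive-endpoint.php {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php-fpm.sock;
        fastcgi_cache ONEURL;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 5m;
    }
    location ~ \.php$ {
        # everything else still goes straight to php-fpm
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php-fpm.sock;
    }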
In my current setup I have nginx behind a load balancing router (OSPF)
where each connection to the same address has about 16% chance of hitting
the same server as the last time.
In a setup like that, does SSL session caching make any difference? I was
thinking it through this morning and I'm bett
What is the best approach for having nginx in a web farm type setup where I
want to forward http connections to a proxy upstream if they match one of
a very long/highly dynamic list of host names? All of the host names we are
interested in will resolve to our address space, so could it be as simpl
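Something like a map keyed on $host is what I'm imagining; the long list
could live in an included file that we regenerate. A sketch with
placeholder names:

    map $host $pool {
        hostnames;
        default local_backend;
        # generated file with lines like: .customer.com forwarded_pool;
        include /etc/nginx/forward_hosts.map;
    }
    upstream local_backend  { server 127.0.0.1:8080; }
    upstream forwarded_pool { server 10.0.0.5:80; }
    server {
        listen 80;
        location / {
            # the variable resolves to one of the upstream group names above
            proxy_pass http://$pool;
        }
    }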
The only way you can stop people from mirroring your site is to pull the
plug. Anything you set up can be bypassed like a normal user would. If you
put CAPTCHAs on every page, someone motivated can get really smart people
in poor countries to type in the letters, click the blue box, complete the
pa
Behind my web server is an application that doesn't include content-length
headers because it doesn't know what it is. I'm pretty sure this is an
application issue but I promised I'd come here and ask the question - is
there a way to have nginx buffer an entire server response and insert a
content-