I would like to use the gunzip module to serve cached, gzipped
responses to clients that do not support gzip, decompressing them on
the fly. I am running an Ubuntu 14.04 server. According to this post
[1] the nginx-extras package includes support for gunzip, but when I
add the 'gunzip on;' directive to my config I get an error.
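A minimal sketch of the setup I have in mind (the upstream and cache
zone names are placeholders of mine, not from the post):

    location / {
        proxy_pass  http://backend;   # hypothetical upstream
        proxy_cache my_cache;         # hypothetical zone, defined elsewhere
                                      # with proxy_cache_path
        gunzip      on;               # decompress gzipped cached responses
                                      # for clients that do not send
                                      # "Accept-Encoding: gzip"
    }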
On Sat, Apr 5, 2014 at 3:07 PM, Maxim Dounin wrote:
>
> we need something like
>
> limit_req_zone $limit zone=one:10m rate=1r/s;
>
> where the $limit variable is empty for non-POST requests (as we
> don't want to limit them), and evaluates to $binary_remote_addr
> for POST requests.
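Putting that together, something along these lines should do it (the
map is the part Maxim is hinting at; the burst value is mine):

    # at http level:
    map $request_method $limit {
        default  "";
        POST     $binary_remote_addr;
    }

    limit_req_zone $limit zone=one:10m rate=1r/s;

    # in the location/server to be limited (requests with an empty key
    # are not limited, so only POSTs are counted):
    limit_req zone=one burst=5;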
A follow
On Thu, Apr 24, 2014 at 11:29 AM, Maxim Dounin wrote:
> I believe more or less the same question was discussed a couple of
> weeks ago:
>
> http://mailman.nginx.org/pipermail/nginx/2014-April/043034.html
Thank you! I must have missed that one.
Is there any way I can impose a rate limit on a location or back-end
by HTTP method? Specifically I would like to limit the number of POST
requests that a single client IP can perform within a given timespan.
On Fri, Apr 4, 2014 at 12:33 PM, Knut Moe wrote:
> Does anyone have updated instructions for 12.04?
sudo apt-get install nginx
After upgrading my Apache2 + nginx Ubuntu stack to Ubuntu 14.04
beta, I am getting these errors in /var/log/nginx/error.log:
2014/03/13 16:11:01 [emerg] 14625#0: bind() to 0.0.0.0:80 failed (98:
Address already in use)
2014/03/13 16:11:01 [emerg] 14625#0: bind() to [::]:80 failed (98:
Address already in use)
I am using
add_header x-responsetime $upstream_response_time;
to report response times of the back-end to the client. I was
expecting to see the back-end response time (e.g. 0.500 for half a
second); however, the header I am getting contains an epoch
timestamp, e.g.:
x-responsetime: 13920
On Mon, Feb 10, 2014 at 5:15 AM, Maxim Dounin wrote:
> it is likely the cause, as the config includes the following lines:
>
> proxy_cache_methods POST;
> proxy_cache_key "$request_method$request_uri$request_body";
>
Yikes, I was not aware that the cache key gets stored into the buffers
as well.
On Thu, Feb 6, 2014 at 4:18 AM, Maxim Dounin wrote:
>
> Response headers should fit into proxy_buffer_size, see
> http://nginx.org/r/proxy_buffer_size. If they don't, the error
> is reported.
In which the "size" refers to the number of characters that appear up
till the blank line that separates
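For reference, a sketch of how the buffers could be raised when the
upstream sends large response headers (the numbers are illustrative,
not recommendations from this thread):

    location /api/ {
        proxy_pass http://backend;        # hypothetical upstream
        proxy_buffer_size       16k;      # must hold the complete response
                                          # header block from the upstream
        proxy_buffers           8 16k;
        proxy_busy_buffers_size 32k;
    }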
After I added some CORS headers to my API, one of the users of my
nginx-based system complained about occasional errors with:
upstream sent too big header while reading response header from upstream
He also reported having worked around the issue using:
proxy_buffers 8 512k;
proxy_buffer_size 2
On Fri, Jan 24, 2014 at 11:42 PM, wishmaster wrote:
> What is your proxy_cache_methods value?
I tried both
proxy_cache_methods OPTIONS;
as well as
proxy_cache_methods GET HEAD OPTIONS;
but both gave the error.
On Sat, Jan 25, 2014 at 5:24 AM, Jonathan Kolb wrote:
> You can chain two maps to get a logical and:
Thank you, this is precisely what I needed.
> # note the lack of : after default in the maps, it's incorrect to have it
> there like your original map did
Good catch, thanks. Appreciate it.
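For completeness, roughly what the chained-map approach looks like
(the second map and the variable names are my reconstruction, not the
exact config from the thread):

    # at http level:
    map $request_method $is_get {
        default  "";
        GET      "true";
    }

    # the second map "ands" the result of the first with another condition
    map "$is_get$arg_nocache" $bypass_cache {
        default   "";
        "true1"   "1";    # GET request AND ?nocache=1
    }

    proxy_cache_methods POST;
    proxy_cache_bypass  $bypass_cache;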
On Fri, Jan 24, 2014 at 1:04 PM, B.R. wrote:
> Does the following work?
This looks like a fragile solution. You are basically simulating an
"if", but I don't think we should assume that nginx will evaluate the
maps in the order they are defined, the way "if" would.
The nginx documentation for HttpMapModule
Is it possible to cache the OPTIONS method? This page gives exactly
that example: http://www.packtpub.com/article/nginx-proxy
proxy_cache_methods OPTIONS;
However, when I try this, nginx writes in the error log:
[warn] 7243#0: invalid value "OPTIONS" in ...
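As far as I can tell, proxy_cache_methods only accepts GET, HEAD and
POST (GET and HEAD are always added implicitly), which would explain
the warning; i.e. the most permissive setting is:

    proxy_cache_methods GET HEAD POST;   # OPTIONS is not an accepted value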
I use nginx to cache both GET and POST requests. I want to use
proxy_cache_bypass to allow users to bypass the cache, but ONLY for
GET requests. POST requests should always be cached. I tried this:
map $request_method $is_get {
default: "";
GET "true";
}
proxy_cache_methods POST;
proxy_cache_b
On Mon, Nov 4, 2013 at 5:08 PM, Maxim Dounin wrote:
> The proxy_redirect directive does string replacement, not URI
> mapping. If you want it to replace "/two/" with "/one/", you can
> configure it to do so. It's just not something it does by
> default.
Exactly. I was trying to argue that it pr
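A small sketch of the string replacement Maxim describes, assuming the
upstream issues redirects starting with "/two/" (paths and upstream
name are illustrative):

    location /one/ {
        proxy_pass     http://backend/two/;   # hypothetical upstream
        # plain string replacement on the Location/Refresh headers:
        # "/two/..." coming back from the upstream becomes "/one/..."
        proxy_redirect /two/ /one/;
    }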
HTTP status codes such as 201, 301, and 302 rely on the HTTP Location
header. The current HTTP standard specifies that this URL must be
absolute. However, all popular browsers will accept a relative URL,
and relative URLs are allowed by the upcoming revision of HTTP/1.1.
See also [1].
I noticed that proxy_no_cache is usually
used together with "proxy_cache_bypass".
Do I just need to add an additional line:
proxy_cache_bypass $request_body_file;
It is not clear to me how proxy_cache_bypass is different from proxy_no_cache.
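My understanding of the difference (my summary, not from the thread):
proxy_cache_bypass decides whether a response may be *served* from the
cache, while proxy_no_cache decides whether a response is *saved* to
the cache. They are often set from the same condition, e.g.:

    # do not serve this request from the cache ...
    proxy_cache_bypass $request_body_file;
    # ... and do not store its response in the cache either
    proxy_no_cache     $request_body_file;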
On Fri, Sep 13, 2013 at 8:56 PM, Jeroen Ooms wrote:
> Is it correct that when $content_length > client_body_buffer_size,
> then $request_body == "" ?
@ Maxim Dounin
Thanks! This is very helpful. I have also set:
client_body_buffer_size 1m;
Could this setting have any side effects? I am not expecting too many
large POST requests. From what I read, client_body_buffer_size is
actually the maximum amount of memory allocated. Does this mean that
for smaller requests nginx will allocate less than the full 1m?
Is it correct that when $content_length > client_body_buffer_size,
then $request_body == "" ? If so, this would be worth documenting
under $request_body.
I am using:
proxy_cache_methods POST;
proxy_cache_key "$request_method$request_uri$request_body";
Which works for small requests, but for larger ones it does not seem to.
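To tie these last messages together, a sketch of the setup under
discussion with the behaviour as I understand it in comments (the 1m
value is from the earlier message; the rest is as posted):

    # bodies up to this size are kept in memory; larger bodies are written
    # to a temporary file, in which case $request_body is empty
    client_body_buffer_size 1m;

    proxy_cache_methods POST;
    # when $request_body is empty the key degenerates to method + URI,
    # so different large POST bodies would share a single cache entry
    proxy_cache_key "$request_method$request_uri$request_body";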