Hello,
Because uwsgi_cache_key has no default value (or rather, has the empty
string as its default value), a configuration with uwsgi_cache set but
uwsgi_cache_key not set behaves in a way that is very unlikely to be
desired: Nginx caches the first publicly cacheable response it gets from
upstream
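To illustrate, here is a minimal sketch of the kind of configuration I think is needed (paths, zone name, port and backend address are only placeholders): without the uwsgi_cache_key line, every request shares the same empty key and is served the first cached response.
http {
    uwsgi_cache_path /var/cache/nginx/uwsgi keys_zone=app_cache:10m;
    server {
        listen 8080;
        location / {
            include uwsgi_params;
            uwsgi_pass 127.0.0.1:3031;
            uwsgi_cache app_cache;
            # without this line the cache key defaults to the empty string
            uwsgi_cache_key $scheme$host$request_uri;
        }
    }
}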
Yep, works like a charm, thank you! And two consecutive ifs to strip two
cookies works as well:
set $stripped_cookie $http_cookie;
if ($http_cookie ~ "(.*)(?:^|;)\s*sessionid=[^;]+(.*)$") {
    set $stripped_cookie $1$2;
}
if ($stripped_cookie ~ "(.*)(?:^|;)\s*csrftoken=[^;]+(.*)$") {
    set $stripped_cookie $1$2;
}
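For completeness, the stripped value then replaces the original header where the blog is proxied, along these lines (the location and backend address are just placeholders):
location /blog/ {
    # send the Cookie header with sessionid and csrftoken removed
    proxy_set_header Cookie $stripped_cookie;
    proxy_pass http://192.0.2.10;
}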
Hi,
is it possible to hide one request cookie (but not all, so proxy_set_header
Cookie "" is not the way) when proxying to an upstream server?
The use case is:
* website foo.com uses a hosted service on a subdomain, e.g. blog.foo.com
hosted by Wordpress.com
* horror: MSIE will send all foo.com cookies along with requests to blog.foo.com as well
Hi,
in a single server block listening on both 80 and 443 ssl, currently in
production, I want to start redirecting all HTTP GET requests to HTTPS ...
but keep serving non-GET requests on HTTP for a little while, so as not to
bork form posts and such made by clients from pages loaded over HTTP before
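Something along these lines is what I have in mind, in that same server block (an untested sketch; since "if" cannot combine two conditions, two flags are concatenated, and the variable name is arbitrary):
set $redirect_http_get "";
if ($scheme = http) {
    set $redirect_http_get "http";
}
if ($request_method = GET) {
    set $redirect_http_get "${redirect_http_get}_get";
}
if ($redirect_http_get = "http_get") {
    return 301 https://$host$request_uri;
}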
Hi,
> Trivial workaround is to use "uwsgi_buffers 8 64k" instead.
> Or you may try the following patch:
Thank you! I tried the uwsgi_buffers workaround in production, and the patch
in my reproduction setup, and indeed both seem to fix this problem; the
request runs to completion with no memory growth
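For the archives, the change amounts to replacing the old setting in our http block with the following (the same 512k total per connection, just fewer, larger buffers):
# was: uwsgi_buffers 64 8k;  (64 x 8k = 512k)
uwsgi_buffers 8 64k;          # 8 x 64k = 512k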
Hi,
here's a minimal configuration where I can reproduce this:
error_log debug.log debug;
events {
    worker_connections 1024;
}
http {
    uwsgi_buffers 64 8k;
    upstream nginx-test.uwsgi {
        server 10.0.0.7:13003;
        least_conn;
    }
    server {
        listen 8080;
Hi,
I finally reproduced this, with debug logging enabled --- I found the
problematic request in the error log preceding the kill signal, saying it
was being buffered to a temporary file:
2014/07/21 11:39:39 [warn] 21182#0: *32332838 an upstream response is
buffered to a temporary file /var/c
> How do you track "nginx memory"?
What I was tracking was memory use per process name as reported by New Relic
nrsysmond, which I'm pretty sure is RSS from ps output, summed over all
nginx processes.
> From what you describe I suspect that disk buffering occurs (see
> http://nginx.org/r/uwsgi_ma
Hi,
Several times recently, we have seen our production nginx memory usage flare
up a hundred-fold, from its normal ~42 MB to 3-4 GB, for 20 minutes to an
hour or so, and then recover. There is no corresponding spike in the number of
connections, just in memory use, so whatever causes this, it does not seem to be a