> After 2 minutes the response 'stabilizes' with the correct size (in this example
> 1526025). The problem is also amplified because clients validate the response and
> retry progressively if corrupted.
What is the response your upstream is sending back? If the 'corrupted'
data is still a 200, then nginx will cache th
Hi,
this has been bugging me for some time now. I have nginx 1.16.0 configured as
follows as a proxy cache:
proxy_cache_path /dev/shm/nginx_cache levels=1:2
keys_zone=proxy:1024m max_size=1024m inactive=60m;
proxy_temp_path /dev/shm/nginx_proxy_tmp;
proxy_cache_use_stale updating
We are facing the same issue. The file gets deleted but nginx holds the FD, and
the disk space is never released until we restart the nginx server. Most of the
files are from the proxy_temp_path directory. This is causing the filesystem to
run out of space. We tried this with tmpfs and a normal disk.
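For what it's worth, the held-FD behavior is standard POSIX semantics rather than an nginx quirk: unlinking a file only removes its name, and the blocks are reclaimed only when the last open descriptor is closed. A minimal Python sketch of the effect (the temp file and size are arbitrary):

```python
import os, tempfile

# On POSIX, deleting a file does not free its blocks while any process
# still holds an open file descriptor -- which is why deleted nginx temp
# files can keep the filesystem full until nginx is restarted.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 4096)
os.unlink(path)                      # the name is gone...
assert not os.path.exists(path)
assert os.fstat(fd).st_size == 4096  # ...but the data still occupies space
os.close(fd)                         # only now can the kernel reclaim it
```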
ng the other portion. Expiration is
handled strictly from the 'expires' tag, which seems to be valid
according to one of those RFCs. Testing things out, and caching
expires exactly when it is supposed to! So happy I can keep the proxy
cache enabled now!
caching, consider using the "Expires" header.
>
> The 'age' header appears to be something else... What I'm talking
> about specifically is part of the 'cache-control' header...
>
> For example: "cache-control: max-age=9848, public, must-re
der appears to be something else... What I'm talking
about specifically is part of the 'cache-control' header...
For example: "cache-control: max-age=9848, public, must-revalidate"
Without max-age decrementing while in the nginx proxy cache, all
clients will receive the sa
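To make the arithmetic concrete: a cache that emits an Age header (as shared caches are expected to per RFC 7234) lets each client compute the remaining freshness itself even when max-age is stored unchanged. The header values below are hypothetical:

```python
# Hypothetical headers on a cached response:
#   cache-control: max-age=9848
#   Age: 3600            (seconds the response has sat in the cache)
max_age = 9848
age = 3600

remaining = max_age - age   # freshness left, from the client's point of view
assert remaining == 6248
# once remaining <= 0 the client must revalidate with the origin
assert remaining > 0
```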
Hello!
On Mon, Apr 06, 2020 at 10:26:04AM -0500, J.R. wrote:
> This was driving me crazy and I think I've figured out the problem.
>
> I started using the proxy cache (which is great, saves regenerating a
> lot of dynamic pages), except a bunch of my pages expire at a very
This was driving me crazy and I think I've figured out the problem.
I started using the proxy cache (which is great, saves regenerating a
lot of dynamic pages), except a bunch of my pages expire at a very
specific time, at the start of the hour, and my cache-control /
expires headers reflect
Hello!
On Thu, Oct 17, 2019 at 08:35:58AM -0400, sachin.she...@gmail.com wrote:
> Thank you, we use proxy_cache_lock as well, but in certain weird burst
> scenarios, it still ends up filling the disk.
There are two timeouts for proxy_cache_lock to tune,
proxy_cache_lock_age and proxy_cache_lock_
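For anyone following along, a minimal sketch of how those two directives sit in a config (zone name and values are illustrative only, not recommendations):

```nginx
proxy_cache_path /var/cache/nginx keys_zone=one:10m;

location / {
    proxy_pass http://backend;
    proxy_cache one;
    proxy_cache_lock on;
    # after this long a waiting request goes upstream itself,
    # and its response is not cached
    proxy_cache_lock_timeout 5s;
    # after this long the lock holder is considered stuck and
    # one more request may be passed upstream
    proxy_cache_lock_age 5s;
}
```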
Thank you, we use proxy_cache_lock as well, but in certain weird burst
scenarios, it still ends up filling the disk.
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,285896,285910#msg-285910
___
nginx mailing list
nginx@nginx.org
http://mailman
Hello!
On Wed, Oct 16, 2019 at 09:44:08AM -0400, sachin.she...@gmail.com wrote:
> Thank you Maxim, is there any way I can make the cache manager a bit more
> aggressive in prune and purge? We already leave 20% of space free on the
> disks, but the concurrent request rate for large files can be hug
Hello,
I don't think the way you currently use nginx as a cache proxy is best practice.
Serving large files and storing the whole file in the cache under a large number
of requests is like burning your disk; even if the nginx cache manager can delete
and refill the cache fast enough, it will keep writing/deleting files in
Thank you Maxim, is there any way I can make the cache manager a bit more
aggressive in prune and purge? We already leave 20% of space free on the
disks, but the concurrent request rate for large files can be huge and we
still run into this issue.
What are your thoughts about disabling buffering
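One mitigation sometimes used for large-object churn (a sketch, assuming most huge objects are fetched only once; the zone name and sizes are illustrative) is to require repeat requests before caching and to collapse concurrent misses into a single upstream fetch:

```nginx
proxy_cache_path /cache levels=1:2 keys_zone=objects:100m max_size=3500g
                 inactive=60m use_temp_path=off;

location / {
    proxy_pass http://object_storage;
    proxy_cache objects;
    # don't cache an object until it has been requested twice, so one-off
    # fetches of very large objects don't churn the disk
    proxy_cache_min_uses 2;
    # collapse concurrent misses for the same key into one upstream fetch
    proxy_cache_lock on;
}
```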
Hello!
On Wed, Oct 16, 2019 at 05:24:01AM -0400, sachin.she...@gmail.com wrote:
> We have a nginx fronting our object storage which caches large objects.
> Objects are as large as 100GB. The nginx cache max size is set to about
> 3.5TB.
>
> When there is a surge of large object requests and disk
Hi,
We have a nginx fronting our object storage which caches large objects.
Objects are as large as 100GB. The nginx cache max size is set to about
3.5TB.
When there is a surge of large object requests and the disk quickly fills up,
nginx runs into an out-of-disk-space error. I was expecting the cache m
Oops, my mistake. I was looking at the wrong logs. The upstream connect and
response times are indeed reported.
Now that I have analyzed the timing logs, I see that there is a networking
issue between the proxy and the origin.
Roger
> On Jan 23, 2019, at 10:00 AM, Maxim Dounin wrote:
>
> He
Hello!
On Wed, Jan 23, 2019 at 09:03:33AM -0800, Roger Fischer wrote:
> I am using the community version of NGINX, where these variables
> are not available (I was quite disappointed by that).
Again: these variables are available in all known variants of
nginx. In particular, the $upstream_re
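For reference, the upstream timing variables are present in the open-source build; a sketch of logging them (the format name and log path are arbitrary):

```nginx
log_format upstream_timing '$remote_addr "$request" $status '
                           'cache=$upstream_cache_status '
                           'connect=$upstream_connect_time '
                           'header=$upstream_header_time '
                           'response=$upstream_response_time';

access_log /var/log/nginx/upstream.log upstream_timing;
```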
I have noticed that the response to OPTIONS requests via a
>> reverse proxy cache are quite slow. The browser reports 600 ms
>> or more of idle time until NGINX provides the response. I am
>> using the community edition of NGINX, so I don’t have any timing
>> for the upstream r
Hello!
On Tue, Jan 22, 2019 at 04:48:23PM -0800, Roger Fischer wrote:
> I have noticed that the response to OPTIONS requests via a
> reverse proxy cache are quite slow. The browser reports 600 ms
> or more of idle time until NGINX provides the response. I am
> using the communit
Hello,
I have noticed that the response to OPTIONS requests via a reverse proxy cache
are quite slow. The browser reports 600 ms or more of idle time until NGINX
provides the response. I am using the community edition of NGINX, so I don’t
have any timing for the upstream request.
As I
Thanks for the fast reply.
>1. The static content - jpg, png, tiff, etc. It looks as though you are
serving them from your backend and caching them. Are they also being built on
demand dynamically? If not, then why cache them? Why not deploy them to
nginx and serve them directly?
There is a huge part
2. The text content - are these fragments of html that don't have names that end
in html?
Sent from my iPhone
> On Jun 22, 2018, at 3:42 AM, Szop wrote:
> Something
> Hello guys,
>
> I'm having a hard time defining a proxy cache because my landing page
> doesn't ge
Hello guys,
I'm having a hard time defining a proxy cache because my landing page
doesn't generate any HTML which can be cached. Quite complicated to explain,
let me show you some logs and curl requests:
curl:
curl -I https://info/de
HTTP/1.1 200 OK
Server: nginx
Date: Thu, 21 Jun
Sounds weird.
1. It doesn’t make sense for your cache to be on a tmpfs share. Better to use a
physical disk and allow Linux's page cache to do its job
2. How big are the files in the larger cache? Min/median/max?
Sent from my iPhone
> On Jun 20, 2018, at 7:38 AM, rihad wrote:
>
> Have you be
Have you been able to solve the issue? We're having the same problem after
upgrading 1.12.2 to 1.14
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,272519,280189#msg-280189
Hi Maxim,
Thanks for your input. I now understand that it is probably better not to
externally interfere with the contents of the cache directory assigned to
NGINX.
Regards
Rajesh
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,276611,276625#msg-276625
Hello!
On Fri, Sep 29, 2017 at 05:00:22AM -0400, rnmx18 wrote:
> I have a use-case, where NGINX (say NGINX-process-1) is set up as a reverse
> proxy, with caching enabled (say in /mnt/disk2/pubRoot, with zone name
> "cacheA"). However, I have another NGINX (say NGINX-Process-B) which also
> runs
Hi,
I have a use-case, where NGINX (say NGINX-process-1) is set up as a reverse
proxy, with caching enabled (say in /mnt/disk2/pubRoot, with zone name
"cacheA"). However, I have another NGINX (say NGINX-Process-B) which also
runs in parallel, and caches its content in (/mnt/disk2/frontstore, with
Wow, it is a great workaround. If the upstream response times are contained
within proxy_cache_lock_timeout, this should work perfectly.
Thank you for the help.
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,276344,276350#msg-276350
On Wed, Sep 13, 2017 at 07:47:25AM -0400, sivasara wrote:
> Ah.. thanks for the reply.
> 500ms seems too large. Is there any way to decrease this wait time?
Currently there's no way to change 500ms to a different value. What you can do
is reduce proxy_cache_lock_timeout (5s by default) to make t
Ah.. thanks for the reply.
500ms seems too large. Is there any way to decrease this wait time?
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,276344,276347#msg-276347
Hello,
On Wed, Sep 13, 2017 at 06:07:34AM -0400, sivasara wrote:
> Greetings everybody,
>
> I have the following config. I issue 3 simultaneous requests: 1 goes back
> to the upstream and 2 of them wait in proxy_cache_lock. After the first
> request completes, I am always seeing 500ms delay w
Greetings everybody,
I have the following config. I issue 3 simultaneous requests: 1 goes back
to the upstream and 2 of them wait in proxy_cache_lock. After the first
request completes, I am always seeing a 500ms delay with proxy_cache_locked
requests. Is this expected behavior or am I missing s
So as you guys said: it's normal nginx behavior, and the problem is how
can I monitor the response time exactly? Because when I request a static link (a
jpeg, i.e.), it takes about 3s to completely download, but request_time is still
0.000, and because it's a HIT request I don't have upstream_response
> - a cache hit means that the resource should also be in the linux
page cache - so no physical disk read needed.
That's a very wrong assumption to make, and only makes sense in very
small scale setups - and multiple terabytes of memory isn't exactly
cheap, that's why we have SSD storage to ha
This might not be a bug at all. Remember that when nginx logs request
time it's doing so with millisecond precision. This is very, very
coarse-grained when you consider what
modern hardware is capable of. The Tech Empower benchmarks show that an
(openresty) nginx on a quad-socket host can serve
Hello!
On Thu, Jun 22, 2017 at 05:53:04AM -0400, jindov wrote:
> I've configured nginx to cache static content like jpeg|png. The problem is that
> a request with MISS status shows a non-zero request_time, but
> a HIT request shows a request_time value of 0.000.
This is expected behaviour.
Hi guys,
I've configured nginx to cache static content like jpeg|png. The problem is that
a request with MISS status shows a non-zero request_time, but
a HIT request shows a request_time value of 0.000.
Is this an nginx bug, and is there any way to resolve it?
My log format
```
log_format c
Hi,
The information is not publicly available, it is protected by
authentication, we have an auth plugin which makes sure auth happens before
the request is routed to this cache.
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,273311,273363#msg-273363
Hello!
On 04/03/2017 08:21 PM, sachin.she...@gmail.com wrote:
> Hi,
>
> We are testing using nginx as a file cache in front of our app,
> but the contents of the proxy cache directory are readable to
> anybody who has access
On 03/04/2017 16:50, sachin.she...@gmail.com wrote:
Thanks Maxim for the reply. We have evaluated disk based encryption
etc, but
that does not prevent sysadmins from viewing user data which is a
problem
for us.
Do you think we could build something using lua and intercept read and
write calls
Am 2017-04-03 17:50, schrieb sachin.she...@gmail.com:
Thanks Maxim for the reply. We have evaluated disk based encryption
etc, but
that does not prevent sysadmins from viewing user data which is a
problem
for us.
Then you should put your servers someplace where you trust the
sysadmins.
Thanks Maxim for the reply. We have evaluated disk based encryption etc, but
that does not prevent sysadmins from viewing user data which is a problem
for us.
Do you think we could build something using lua and intercept read and
write calls from cache?
Hello!
On Mon, Apr 03, 2017 at 09:21:10AM -0400, sachin.she...@gmail.com wrote:
> We are testing using nginx as a file cache in front of our app, but the
> contents of the proxy cache directory are readable to anybody who has
> access to the machine. Is there a way to encrypt the fil
Am 2017-04-03 15:21, schrieb sachin.she...@gmail.com:
Hi,
We are testing using nginx as a file cache in front of our app, but
the
contents of the proxy cache directory are readable to anybody who has
access to the machine. Is there a way to encrypt the files stored in
the
proxy cache
Hi,
We are testing using nginx as a file cache in front of our app, but the
contents of the proxy cache directory are readable to anybody who has
access to the machine. Is there a way to encrypt the files stored in the
proxy cache folder so that it's not exposed to the naked eye but
Hi!
We are using Ubuntu 16.04 with nginx version 1.10.0-0ubuntu0.16.04.4.
nginx.conf:
user nginx;
worker_processes auto;
worker_rlimit_nofile 20480; # ulimit open files per worker process
events {
# Performance
worker_connections 2048; # mind the open file limits
mult
Hi,
link to the patch is not working, could you please provide a new one?
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,20,269746#msg-269746
Hi guys!
In order to make the `proxy cache` and `pseudo-streaming for mp4/flv` work
together, I did a rough hack on nginx 1.9.12 (here is my patch:
https://pastebin.mozilla.org/8869689). It works well apparently, but I
wonder: did I do it right, or was I taking a risk?
Awesome. Thank you :)
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,265943,265986#msg-265986
server and
it just works, with zero maintenance and no vhost sync'ing between Nginx &
Apache like similar plugins for cPanel.
I am defining 2 proxy cache zones/pools plus a proxy temp location in
/etc/nginx/nginx.conf like this:
proxy_cache_path /tmp/engintron_dynamic levels=1:2
keys_zone=engintron_dynamic:20m inactive=10m max_size=500m;
pro
Found it! The solution was in a mailing list item from 2012. You have to
turn proxy buffering on in order for the proxy cache to work. I'm caching
like a champ now.
There probably ought to be a warning about that someplace, or it should be
in the docs someplace.
On Mon, Feb 29, 2016 at
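For future readers, a minimal sketch of that combination (the zone name and path are hypothetical): caching only works when buffering is on, since an unbuffered response streams straight to the client and is never written to the cache.

```nginx
proxy_cache_path /var/cache/nginx keys_zone=pages:10m;

location / {
    proxy_pass http://backend;
    # proxy_cache has no effect with buffering disabled: the response
    # must be buffered before it can be written to the cache
    proxy_buffering on;
    proxy_cache pages;
    proxy_cache_valid 200 10m;
}
```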
location / {
> return 404;
> }
> On Feb 29, 2016 16:15, "Payam Chychi" wrote:
>
>> Look at your proxy cache path... (proxy_cache_path /var/www/test_cache)
>> Are you sure the path exists and had proper perms/ownership?
>>
>> Payam
>>
>>
The location = / is an exact match.
To execute a "catch all" returning a 404 you can do:
location / {
return 404;
}
On Feb 29, 2016 16:15, "Payam Chychi" wrote:
> Look at your proxy cache path... (proxy_cache_path /var/www/test_cache)
> Are you sure the path e
error log.
On Mon, Feb 29, 2016 at 2:15 PM, Payam Chychi wrote:
> Look at your proxy cache path... (proxy_cache_path /var/www/test_cache)
> Are you sure the path exists and had proper perms/ownership?
>
> Payam
>
>
> On Feb 29, 2016, 11:03 AM -0800, CJ Ess , wrote:
Look at your proxy cache path... (proxy_cache_path /var/www/test_cache) Are you
sure the path exists and has proper perms/ownership?
Payam
On Feb 29, 2016, 11:03 AM -0800, CJ Ess, wrote:
> Hello! I'm testing out a new configuration and there are two issues with the
> proxy cacheing
Hello! I'm testing out a new configuration and there are two issues with
the proxy caching feature I'm getting stuck on.
1) Everything is a cache miss, and I'm not sure why:
My cache config (anonymized):
...
proxy_cache_path /var/www/test_cache levels=2:2 keys_zone=TEST:32m
inactive=365d max_si
Hello!
On Mon, Feb 15, 2016 at 05:11:40AM -0500, vps4 wrote:
> i tried to use like this
>
> server {
>
> set $cache_time 1d;
>
> if ($request_uri = "/") {
> set $cache_time 3d;
> }
>
> if ($request_uri ~ "^/other") {
> set $cache_time 30d;
> }
>
> loca
i tried to use like this
server {
set $cache_time 1d;
if ($request_uri = "/") {
set $cache_time 3d;
}
if ($request_uri ~ "^/other") {
set $cache_time 30d;
}
location / {
try_files $uri @fetch;
}
location @fetch {
proxy_cache_val
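Note that proxy_cache_valid does not expand variables, so the $cache_time approach above cannot work as written. One workaround (a sketch; the zone name is hypothetical) is to use separate locations with fixed validity times:

```nginx
location = / {
    proxy_pass http://backend;
    proxy_cache zone;
    proxy_cache_valid 200 3d;
}

location /other {
    proxy_pass http://backend;
    proxy_cache zone;
    proxy_cache_valid 200 30d;
}

location / {
    proxy_pass http://backend;
    proxy_cache zone;
    proxy_cache_valid 200 1d;
}
```

Alternatively, the upstream can set per-URL lifetimes itself via Cache-Control or X-Accel-Expires response headers, which take precedence over proxy_cache_valid.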
Hello,
I'm using the proxy_cache module and I noticed nginx replies with the whole
response and a 200 OK status to requests such as this, for content that is
already in the cache:
User-Agent: curl/7.26.0
Accept: */*
Range:bytes=128648358-507448924
If-Range: Thu, 26 Nov 2015 13:48:46 GMT
However, If I remov
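If the goal is to serve byte ranges out of the cache for large files, the slice module (available since nginx 1.9.8) is the usual approach: the file is fetched and cached in fixed-size segments, and range responses are assembled from them. A sketch, with the slice size and zone name illustrative:

```nginx
location / {
    slice             1m;                       # fetch and cache 1 MB segments
    proxy_cache       cache_zone;
    # each slice must be cached under its own key
    proxy_cache_key   $uri$is_args$args$slice_range;
    # ask the upstream for just the current slice
    proxy_set_header  Range $slice_range;
    # the upstream answers slices with 206 Partial Content
    proxy_cache_valid 200 206 1h;
    proxy_pass        http://origin;
}
```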
Hello,
When using nginx in proxy_cache mode, and configuring the cache zone like
below, it seems that the zone sometimes exceeds the configured size by like
10%; I'm buffering new items at a high rate, and I'm wondering how this
cache zone is cleaned up ? Is there a time-based recurring size check
Hi.
I have a RoR project which was working just fine.
But I've tried to follow this site and add proxy cache to my nginx config:
http://vstark.net/2012/10/21/nginx-unicorn-performance-tweaks/ and Devise
just won't sign_in anymore. I don't get any errors, just don't sign_i
Hi!
Tried to cache the X-Accel-Redirect responses from Phusion Passenger
application server with the use of a second layer without success (followed
the hint on http://forum.nginx.org/read.php?2,241734,241948#msg-241948).
Configuration:
1) Application server (Phusion Passenger)
adds X-A
Hi,
> Hello!
>
> On Sun, Aug 10, 2014 at 05:24:04PM -0700, Robert Paprocki wrote:
>
> > Any options then to support an architecture with multiple nginx
> > nodes sharing or distributing a proxy cache between them? i.e.,
> > a HAProxy machine load balanc
hi,
thanks a lot!!!
it seems to work, we'll have to test. I'll report back
kind regards
MasterTH
Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,255357,255379#msg-255379
On Sat, Dec 06, 2014 at 06:49:44PM -0500, MasterTH wrote:
Hi there,
> What we'd like to cache is something like:
> http://api.domain.tld/calculate/%CUSTOMER_ID%/ (and everything what comes
> behind that url)
>
> And these calls we don't want to cache:
> http://api.domain.tld/calculate/?calc=23+
> I got a special proxy cache configuration to do and I really don't know how
> to solve it.
>
> The situation is the following. We use an upstream proxy to be highly
> available with our project. The project is an API which uses GET and
> POST data to calculate something.
>
Hi,
I got a special proxy cache configuration to do and I really don't know how
to solve it.
The situation is the following. We use an upstream proxy to be highly
available with our project. The project is an API which uses GET and
POST data to calculate something.
The caching is working nic
default, whereas
Linux caches a lot of file content in memory. Is there a way to tell
how much RAM Nginx uses overall for the proxy cache?
Thanks,
-Wei
Hello!
On Sun, Aug 10, 2014 at 05:24:04PM -0700, Robert Paprocki wrote:
> Any options then to support an architecture with multiple nginx
> nodes sharing or distributing a proxy cache between them? i.e.,
> a HAProxy machine load balances to several nginx nodes (for
> failover r
Robert Paprocki Wrote:
---
> like rsyncing the cache contents between nodes thus would not work);
> are there any recommendations to achieve such a solution?
I would imagine a proxy location directive and location tag;
shared memory pool1 = ngin
Any options then to support an architecture with multiple nginx nodes sharing
or distributing a proxy cache between them? i.e., a HAProxy machine load
balances to several nginx nodes (for failover reasons), and each of these nodes
handles http proxy + proxy cache for a remote origin? If nginx
Hello!
On Mon, Aug 04, 2014 at 07:42:20PM -0400, badtzhou wrote:
> I am thinking about setting up multiple nginx instances sharing a single proxy
> cache storage using NAS, NFS or some kind of distributed file system. The cache
> key will be the same for all nginx instances.
> Will this
I am thinking about setting up multiple nginx instances sharing a single proxy
cache storage using NAS, NFS or some kind of distributed file system. The cache
key will be the same for all nginx instances.
Will this theory work? What kind of problems will it cause (locking, cache
corruption or missing
Hi,
I am also trying to cache a URL that has this query string, but still no luck.
Any suggestions:
http://x.x.x.x/aaa/splashTF=100%25&US=A&AR=D&TD=%2450&BR=T&TS=12515f542140&OR=%2410%2F100MB&DN=BAQGBgEEBwQAAw%3D%3D&ET=TD&BE=1332d9a&BN=CS&AL=2000
I tried these three different proxy_cache_key settings, I am not sure
I am using part of the request_body as the Cache_key in setting up the
Proxy_cache_key and I was wondering how the actual lookup / matching of the
Cache would occur?
From the documentation, it looks like it's an MD5 hash of the cache key
that I set.
Does that mean the cache_key lookup woul
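It is indeed an MD5 hash (not encryption) of the whole cache key: the cache file name is the hex digest, and the levels= subdirectories are taken from the end of that digest. A sketch of the mapping (the key below is hypothetical, and this mirrors the documented on-disk layout rather than nginx's actual code):

```python
import hashlib

def cache_file_path(root, key, levels=(1, 2)):
    """Map a cache key to a cache file path the way proxy_cache_path
    levels=1:2 does: the file name is md5(key) in hex, and each levels
    component takes characters from the END of that hex digest."""
    name = hashlib.md5(key.encode()).hexdigest()
    parts, pos = [], len(name)
    for width in levels:
        parts.append(name[pos - width:pos])
        pos -= width
    return "/".join([root] + parts + [name])

# Hypothetical key; lookup at request time is an exact match on the md5
# of the full key, so any difference in the key yields a different file.
print(cache_file_path("/var/cache/nginx", "example.com/index.html"))
```

So the lookup is just this path computation followed by a read of that one file; there is no scan of the cache directory.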
Hi,
In my configuration I have a caching layer of Nginx and a separate proxy
layer which works as a reverse proxy to the original upstream backend.
As discussed previously we can pass cache status from caching layer to
upstream but is there anything else which I can get from cache file (when
cache is exp
Hello!
On Thu, Mar 27, 2014 at 05:08:31PM +0530, Makailol Charls wrote:
> Hi,
>
> Would it be possible to add this as new feature?
>
> Is there some other alternative ? Actually based on this header value I
> want to select named based location.
Response headers of expires cached responses are
Hi,
Would it be possible to add this as new feature?
Is there some other alternative? Actually, based on this header value I
want to select a named location.
Thanks,
Makailol
On Thu, Mar 27, 2014 at 5:02 PM, Maxim Dounin wrote:
> Hello!
>
> On Thu, Mar 27, 2014 at 03:01:22PM +0530, Maka
Hello!
On Thu, Mar 27, 2014 at 03:01:22PM +0530, Makailol Charls wrote:
> Hi Maxim,
>
> Apart from passing cache status to backend, would it be possible to send
> some other headers which are stored in cache?
>
> For example, If backed sets header "Foo : Bar" , which is stored in cache.
> Now w
Hi Maxim,
Apart from passing cache status to backend, would it be possible to send
some other headers which are stored in cache?
For example, if the backend sets header "Foo : Bar", which is stored in cache.
Now when the cache is expired, the request will be sent to the backend. At that
time can we send the val
Hello!
On Thu, Mar 20, 2014 at 09:38:40AM +0530, Makailol Charls wrote:
> Hi,
>
> Is there some way to achieve this? I want to pass requests to backend based
> on cache status condition.
This is not something easily possible, as cache status is only
known after we started processing proxy_pass
Hi,
Is there some way to achieve this? I want to pass requests to backend based
on cache status condition.
Thanks,
Makailol
On Wed, Mar 19, 2014 at 7:45 PM, Maxim Dounin wrote:
> Hello!
>
> On Wed, Mar 19, 2014 at 03:30:03PM +0530, Makailol Charls wrote:
>
> > One more thing I would like to k
Hello!
On Wed, Mar 19, 2014 at 03:30:03PM +0530, Makailol Charls wrote:
> One more thing I would like to know, would it be possible to proxy_pass
> request conditionally based on $upstream_cache_status ?
>
> For example here we set request header with proxy_set_header Cache-Status
> $upstream_ca
One more thing I would like to know, would it be possible to proxy_pass
request conditionally based on $upstream_cache_status ?
For example here we set request header with proxy_set_header Cache-Status
$upstream_cache_status; instead we could use a condition like the one below,
if ($upstream_cache_status = "
Hello!
On Wed, Mar 19, 2014 at 10:19:59AM +0530, Makailol Charls wrote:
> Thanks, it is working. I was checking the wrong variable name.
> I have noticed that when the cache is being "REVALIDATED", the backend receives
> "EXPIRED" and upstream receives "REVALIDATED". Is this correct ?
Yes, it's correct. Th
Thanks, it is working. I was checking the wrong variable name.
I have noticed that when the cache is being "REVALIDATED", the backend receives
"EXPIRED" and upstream receives "REVALIDATED". Is this correct?
On Mon, Mar 17, 2014 at 8:20 AM, Maxim Dounin wrote:
> Hello!
>
> On Sat, Mar 15, 2014 at 01:14:51
Hello!
On Sat, Mar 15, 2014 at 01:14:51PM +0530, Makailol Charls wrote:
> I have been using add_header to send cache status to downstream server or
> client like this. This is working fine.
> add_header Cache-Status $upstream_cache_status;
>
> Now I tried proxy_set_header to send header to upstr
> Hello!
>
> On Fri, Mar 14, 2014 at 07:35:52PM +0530, Makailol Charls wrote:
>
> > Hello,
> >
> > I have been using proxy cache of Nginx. It provides response header to
> > indicate cache status. Is there some way to forward the cache status (in
> > case
Hello!
On Fri, Mar 14, 2014 at 07:35:52PM +0530, Makailol Charls wrote:
> Hello,
>
> I have been using proxy cache of Nginx. It provides response header to
> indicate cache status. Is there some way to forward the cache status (in
> case of miss, expired or revalidate ) to b
Hello,
I have been using the proxy cache of Nginx. It provides a response header to
indicate cache status. Is there some way to forward the cache status (in
case of miss, expired or revalidate) to the backend upstream server?
Thanks,
Makailol
Hello!
On Sat, Feb 15, 2014 at 02:23:26AM +0400, Valentin V. Bartenev wrote:
> On Friday 14 February 2014 17:11:37 jove4015 wrote:
> [..]
> > Is there any way to set up the proxy to the upstream so that it proxies HEAD
> > requests as HEAD requests, and GET requests as normal, and still caches
>
On Friday 14 February 2014 17:11:37 jove4015 wrote:
[..]
> Is there any way to set up the proxy to the upstream so that it proxies HEAD
> requests as HEAD requests, and GET requests as normal, and still caches
> responses as well? Ideally it would cache all GET responses, check against
> the cache
I'm trying to figure out how to get Nginx to proxy cache HEAD requests as
HEAD requests and I can't find any info on google.
Basically, I have an upstream server in another datacenter, far away, that I
am proxying requests to. I'm caching those requests, so the next time the
reque
Maxim Dounin Wrote:
> The "proxy_cache_valid" directives are used if there are no
> Cache-Control/Expires to allow caching (or they are ignored with
> proxy_ignore_headers). That is, with "proxy_cache_valid" you can
> cache something which isn't normally cached, but they don't
> prevent cachi
Hello!
On Thu, Jan 16, 2014 at 02:50:41PM -0500, rge3 wrote:
> Maxim Dounin Wrote:
> ---
>
> An existing cache can be bypassed due to proxy_cache_bypass in your
> config, and a 503 response can be cached if it contains
> Cache-Control and
Maxim Dounin Wrote:
---
> An existing cache can be bypassed due to proxy_cache_bypass in your
> config, and a 503 response can be cached if it contains
> Cache-Control and/or Expires which allow caching.
Oh, I hadn't thought of that part about t
Hello!
On Thu, Jan 16, 2014 at 09:02:36AM -0500, rge3 wrote:
> Hi,
>
> I'm trying to use the proxy cache to store regular pages (200) from my web
> server so that when the web server goes into maintenance mode and starts
> returning 503 nginx can still serve the good pa
Hi,
I'm trying to use the proxy cache to store regular pages (200) from my web
server so that when the web server goes into maintenance mode and starts
returning 503 nginx can still serve the good page out of cache. It works
great for a few minutes but then at some point (5 to 10 minut