> No, I did not find a way to resolve this. I had to reduce the number of
cached files and add more devices to share the load ...
But does this split across smaller devices "solve" the problem? I mean, how
many files can you have in the cache? How many devices do you have?
Posted at Nginx Forum:
http
> Can I know what is the ingest Gbps into the SSDs when you hit the
problem?
About 500 Mbps.
> and How many cached file nodes in cache-manager? I have millions ...
Between 7 and 9 million.
Can you tell us more about your configuration (OS/nginx/cache)? And how have
you tried to solve the problem?
> The "proxy_cache_path" looks corrupted and incomplete. First of
> all, I would suggest you to make sure you are using "levels"
> parameter, see http://nginx.org/r/proxy_cache_path.
I didn't paste all of the proxy_cache_path directive. Here it is in full:
proxy_temp_path /cache/tmp;
proxy_cach
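The directive itself is cut off in the archive. For illustration only, a proxy_cache_path using the "levels" parameter recommended above typically looks like this — every value here is a placeholder, not the poster's real setting:

```nginx
# Placeholder values; the poster's actual sizes were truncated in the archive.
proxy_temp_path  /cache/tmp;
proxy_cache_path /cache/nginx
                 levels=1:2               # two-level subdirectory hashing,
                                          # keeps per-directory file counts low
                 keys_zone=disk_cache:512m
                 inactive=7d
                 max_size=500g;
```

With millions of cached nodes, "levels" matters because a flat cache directory puts every file in one directory, which many filesystems handle poorly.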
I'm having a problem with I/O performance. I'm running nginx as a caching
reverse proxy server.
When the cache size on disk exceeds max_size, the cache manager starts
working, but it causes two problems:
1) I/O %util reaches 100% and nginx starts dropping connections
2) the cache manager process doesn't unlink
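Not discussed in this thread, but nginx 1.11.5 and later added parameters on proxy_cache_path that throttle exactly this eviction burst, so the manager deletes files in small batches instead of one long unlink storm. A sketch with assumed path and sizes:

```nginx
# nginx >= 1.11.5: throttle the cache manager's eviction work.
# Path, zone name, and sizes below are assumptions for illustration.
proxy_cache_path /cache/nginx levels=1:2 keys_zone=disk_cache:512m
                 max_size=500g
                 manager_files=200         # delete at most 200 files per pass
                 manager_threshold=100ms   # cap the duration of one pass
                 manager_sleep=200ms;      # pause between passes
```

Spreading deletions out this way trades slower convergence back under max_size for much gentler I/O pressure.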
OK, maybe not the most beautiful solution, but an upstream can be used with
one server marked as down.
http {
    upstream backend-jail {
        server 0.0.0.0 down;
    }
    server {
        listen 80;
        underscores_in_headers on;
        recursive_error_pages on;
        error_page 597 = @jail;
        location / {
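The snippet above is truncated in the archive. One plausible completion of the same trick — the zone name, cache path, real upstream name, and miss page are all my assumptions, not from the original post:

```nginx
http {
    upstream backend-jail {
        server 0.0.0.0 down;       # no live peers: connecting always fails,
                                   # so only cached responses can be served
    }

    # Assumed cache definition for the sketch.
    proxy_cache_path /cache/nginx levels=1:2 keys_zone=disk_cache:512m;

    server {
        listen 80;
        underscores_in_headers on;
        recursive_error_pages on;
        error_page 597 = @jail;

        location / {
            # Client signals that the backend is known to be down.
            if ($http_check_cache) {
                return 597;        # jump to the cache-only location
            }
            proxy_cache disk_cache;
            proxy_pass http://real-backend;   # assumed upstream name
        }

        location @jail {
            proxy_cache disk_cache;
            # On a cache HIT the response is served without touching the
            # upstream; on a MISS the connect to backend-jail fails with
            # 502, which we turn into the error page.
            proxy_pass http://backend-jail;
            error_page 502 = /miss.html;      # assumed error page
        }
    }
}
```

The key property is that nginx checks the cache before connecting to the upstream, so an upstream with only a down server acts as a "serve from cache or fail" backend.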
I'm using nginx as a proxy server. There is a situation when the backend
server is down and I know about it, so I'm adding a 'Check-Cache' header to
the request. What I want is to get the file when it is in the cache, and
when it is not, just return an error page. I don't want to pass the request
to the backend.