Hello!
On Fri, Sep 22, 2017 at 03:04:54PM +0100, peter.wri...@icmcapital.co.uk wrote:
> nginx: [emerg] SSL_CTX_use_PrivateKey_file("/etc/ssl/private/access.uat.icmcapital.co.uk.key")
> failed (SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line:Expecting: ANY PRIVATE KEY error:140B00
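For reference: the "no start line ... Expecting: ANY PRIVATE KEY" part of that
error usually means the file nginx is pointed at is not a PEM-encoded private
key at all (for example a certificate, a DER-encoded key, or an empty or
truncated file). A minimal sketch of the relevant directives; the certificate
path is only a placeholder:

    server {
        listen 443 ssl;
        server_name access.uat.icmcapital.co.uk;

        # must be a PEM certificate (plus any chain); placeholder path
        ssl_certificate     /etc/ssl/certs/access.uat.icmcapital.co.uk.crt;

        # must be a PEM private key, i.e. the file has to start with a
        # "-----BEGIN ... PRIVATE KEY-----" line; anything else produces
        # the "no start line" error quoted above
        ssl_certificate_key /etc/ssl/private/access.uat.icmcapital.co.uk.key;
    }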
We currently have ~30k req/s, but our network is growing very fast, so I need
to make sure our architecture is scalable.
After some research I've decided to go with individual nginx nodes for now.
If we encounter too many requests to our upstream, I'm going to set up the
multi-layer architecture you suggested.
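For concreteness, a minimal sketch of such a second cache layer (all hostnames,
paths and sizes below are placeholders): edge nodes keep a local cache and send
misses to a mid-tier cache node, and only the mid tier talks to the origin, so
the origin only sees requests that miss on every layer:

    # edge node (one of N): local cache, misses go to the mid-tier cache
    proxy_cache_path /var/cache/nginx/edge levels=1:2 keys_zone=edge:100m
                     max_size=200g inactive=7d;
    server {
        listen 80;
        location / {
            proxy_cache       edge;
            proxy_cache_valid 200 301 302 1h;
            proxy_pass        http://midcache.internal.example;
        }
    }

    # mid-tier node: a bigger cache in front of the origin
    proxy_cache_path /var/cache/nginx/mid levels=1:2 keys_zone=mid:500m
                     max_size=2000g inactive=30d;
    server {
        listen 80;
        location / {
            proxy_cache       mid;
            proxy_cache_valid 200 301 302 1h;
            proxy_pass        http://origin.internal.example;
        }
    }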
> if one node had the storage capacity to satisfy my needs it couldn't handle
> all the requests
What volume of requests/traffic are we talking about, and what kind of
hardware do you use?
You can make nginx serve 20+ Gbit/s of traffic from a single machine if the
content is right, or 50k+ requests per second.
Sorry for the confusion.
My problem is that I need to cache items as much as possible, so even if one
node had the storage capacity to satisfy my needs, it couldn't handle all
the requests, and we can't afford multiple nginx nodes each making a request
to our main server every time an item is requested on a different node.
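One common way around that (sketched below with placeholder hostnames) is to
put a routing tier in front of the cache nodes that hashes on the request URI,
so a given item is always served by, and therefore only cached on, one node:
the cache is partitioned across the nodes instead of duplicated, and the main
server is asked for each item by at most one node.

    # routing tier: pick the cache node by URI hash
    upstream cache_nodes {
        # a given URI always maps to the same cache node; the consistent
        # (ketama) variant keeps most mappings stable when nodes are added
        hash $request_uri consistent;
        server cache1.internal.example;
        server cache2.internal.example;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://cache_nodes;
        }
    }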
> is there any way to share a cache directory between two nginx instances?
> If it can't be done what do you think is the best way to go when we need to
> scale the nginx caching storage?
One is about using the same storage for two nginx instances; the other is
about scaling the nginx cache storage.
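On the first question: as far as I know each nginx instance keeps the cache
key map and metadata in its own shared memory zone (the keys_zone= part of
proxy_cache_path) and runs its own cache manager process that assumes it owns
the directory, so two independent instances cannot safely share one cache
directory. A minimal illustration, with placeholder sizes and paths:

    # per instance: keys_zone holds the key -> cache file mapping and metadata
    # in shared memory local to this instance; the instance's cache manager
    # prunes /var/cache/nginx on its own
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=one:100m
                     max_size=500g inactive=30d;

    server {
        listen 80;
        location / {
            proxy_cache one;
            proxy_pass  http://origin.internal.example;
        }
    }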
Hello,
Since nginx stores some cache metadata in memory, is there any way to share
a cache directory between two nginx instances?
If it can't be done, what do you think is the best way to go when we need to
scale the nginx caching storage?
Thanks