I maintain an Nginx config generation plugin for a web hosting control
panel, where people routinely put a very large number of domains on a single
server, and the things I notice are:
1. Memory consumption by the worker processes goes up as the vhost count
goes up, so we may need to reduce the worker count.
2. As already men
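(A hedged aside on point 1: each worker process keeps its own copy of the
parsed configuration, so resident memory scales roughly with worker count
times vhost count. The values below are illustrative, not recommendations:

    worker_processes 2;                   # instead of "auto" on a many-core box
    server_names_hash_max_size 131072;    # room for a very large server_name table
    server_names_hash_bucket_size 128;

Fewer workers trade peak throughput for memory; the hash settings avoid the
"could not build server_names_hash" startup error you hit with many vhosts.)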
On Sun, Feb 10, 2019 at 10:21:22PM -0500, nevereturn01 wrote:
Hi there,
> Thanks for your suggestions.
> The rule seems to work.
Good to hear that it is working for you :-)
Cheers,
f
--
Francis Daly        fran...@daoine.org
+1 to the openresty suggestion
I’ve found that whenever I want to do something gnarly or perverse with nginx,
openresty helps me do it in a way that’s maintainable and with any ugliness
minimized.
It’s like nginx with super-powers!
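For example (a sketch of the sort of thing OpenResty makes possible; the
lookup logic is elided and hypothetical):

    server {
        listen 443 ssl;
        server_name _;
        # Placeholder cert; replaced per-connection below.
        ssl_certificate     /etc/nginx/fallback.crt;
        ssl_certificate_key /etc/nginx/fallback.key;

        ssl_certificate_by_lua_block {
            -- Pick the certificate for the requested SNI name at handshake
            -- time (e.g. from a shared dict or Redis) -- no reload needed.
            local ssl = require "ngx.ssl"
            local name = ssl.server_name()
            -- ... fetch the PEM for `name`, then ssl.clear_certs(),
            -- ssl.set_cert() and ssl.set_priv_key() ...
        }
    }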
Sent from my iPhone
> On Feb 11, 2019, at 1:34 PM, Robert Pap
Am 11.02.19 um 16:16 schrieb rick_pri:
> As such I wanted to put the feelers out to see if anyone else
> had tried to work with large numbers of vhosts and any issues which they may
> have come across.
Hello
we're running nginx (latest) with ~5k domains + 5k www.domain
without issues. Configur
> On my logs I can see that HIT's are very fast but STALEs take as much as MISS
> while I believe they should take as much as HITs.
>
> Is there something I can do to improve this ? Are the stale responses a true
> "stale-while-revalidate" response ?or are they waiting for the response from
> the
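(Not an answer to the poster's exact setup, but worth noting: with plain
`proxy_cache_use_stale updating`, the request that finds a stale entry still
refreshes it in the foreground; only concurrent requests get the fast stale
copy. For HIT-like STALE latency you also need background updates, available
since nginx 1.11.10:

    proxy_cache_use_stale error timeout updating;
    proxy_cache_background_update on;   # refresh stale entries in a subrequest

)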
I use HAProxy in a similar way, as stated by Rainer. Rather than having
hundreds and hundreds of config files (yes, there are other ways), I have one
config for HAProxy and two for nginx (on multiple machines defined in
HAProxy): one for my main domain that listens on a "real" server_name, and
another that listens on
`ser
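A minimal sketch of that split (the names are hypothetical, and the truncated
directive above is presumably a catch-all):

    # The "real" site
    server {
        listen 80;
        server_name example.com www.example.com;
        ...
    }

    # Everything else HAProxy forwards our way
    server {
        listen 80 default_server;
        server_name _;
        ...
    }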
You are specifying a key zone that can hold about 80 million keys,
and a three-level cache directory hierarchy. Do you really have that many
cached files?
Unless you are serving petabytes of content, I’d suggest reverting your
settings to the default values
and running some test cases to validate correct caching behavior.
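For reference, the rule of thumb in the nginx docs is that one megabyte of
keys_zone stores about 8,000 keys, which is how a zone gets to ~80 million
keys. A more typical starting point might look like (values illustrative):

    # ~8,000 keys per MB of keys_zone, so a 10g zone holds
    # roughly 8,000 * 10,240 ≈ 80 million keys.
    proxy_cache_path /mnt/cache levels=1:2 keys_zone=my-cache:100m
                     max_size=50g inactive=30d;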
You should be able to answer this by tailing the logs of your nginx and
origin server at the same time.
It would be helpful if you shared an (anonymized) section of both logs. When I
say fast or slow
I might mean something very different to what you hear.
> On 11 Feb 2019, at 10:06 AM, joao.pere
> Am 11.02.2019 um 16:16 schrieb rick_pri :
>
> However, our customers, with about 12000 domain names at present have
Let’s Encrypt rate limits will likely make these very difficult to obtain and
also to renew.
If you own the DNS, maybe using Wildcard DNS entries is more practical.
Then, HA
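(A sketch, assuming certbot and the DNS-01 challenge; the exact automation
depends on your DNS provider's plugin:

    # Wildcards require DNS-01; one cert then covers every customer
    # subdomain, sidestepping per-hostname rate limits.
    certbot certonly --manual --preferred-challenges dns \
        -d 'example.com' -d '*.example.com'

)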
FWIW, this kind of large installation is why solutions like OpenResty exist
(providing for dynamic config/cert service/hostname registration without
having to worry about the time/expense of re-parsing the Nginx config).
On Mon, Feb 11, 2019 at 7:59 AM Richard Paul
wrote:
> Hi Ben,
>
> Thanks fo
Hi Ben,
Thanks for the quick response. That's great to hear, as we'd only get to find
this out after putting rather a lot of effort into the process.
We'll be hosting these on cloud instances but since those aren't the fastest
machines around I'll take the reloading as a word of caution (we're p
Hi Richard,
we have experience with around a quarter of that many vhosts on a single
server, no issues at all.
Reloading can take up to a minute, but the hardware isn't what I would call
recent.
The only thing that you'll have to watch out for is the Let's Encrypt rate
limits:
https://letsencrypt.org/docs/rate-limits/
Our current setup is pretty simple, we have a regex capture to ensure that
the incoming request is a valid ascii domain name and we serve all our
traffic from that. Great ... for us.
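A minimal sketch of that kind of setup (the regex and paths are illustrative,
not our actual config):

    server {
        listen 80;
        # Capture any syntactically plausible ASCII hostname into $vhost
        # and serve every domain from this one block.
        server_name ~^(?<vhost>[a-z0-9][a-z0-9.-]*)$;

        root /srv/www/$vhost/public;
    }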
However, our customers, with about 12,000 domain names at present, have
started to become quite vocal about having H
Just to add more information, I also have:
proxy_cache_use_stale error timeout invalid_header updating http_500 http_502;
Hi all,
I'm trying to set up nginx with a large amount of disk to serve as a cache
server.
I have the following configuration:
proxy_cache_path /mnt/cache levels=2:2:2 keys_zone=my-cache:1m
max_size=70m inactive=30d;
proxy_temp_path /mnt/cache/tmp;
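(An aside, not from the thread: since nginx 1.7.10, use_temp_path=off writes
temp files directly into the cache directories, so the separate
proxy_temp_path is bypassed for cached responses:

    proxy_cache_path /mnt/cache levels=2:2:2 keys_zone=my-cache:1m
                     max_size=70m inactive=30d use_temp_path=off;

This mainly matters when the temp path is on a different filesystem, where
the post-download "move" into the cache becomes a full copy.)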
On my logs I can see that HIT's are ve