I side with Toke on this. Enterprise bare-metal machines often have
hundreds of gigabytes of memory and tens of CPU cores -- to make full use
of that hardware while avoiding huge heaps, you would have to fit multiple
instances on a single machine.

If this is not a common case now, it could well become one in the future,
given the way hardware evolves -- so I would rather document the factors
that call for multiple instances than discourage them.
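
As a quick sanity check on where a given -Xmx lands relative to the
compressed-oops threshold Toke mentions below, something along these lines
should work on a HotSpot JVM (just an illustrative sketch, not from Solr;
the class name is made up):

  import java.lang.management.ManagementFactory;
  import com.sun.management.HotSpotDiagnosticMXBean;

  // Prints the effective max heap and whether compressed oops are in use.
  public class HeapCheck {
      public static void main(String[] args) {
          HotSpotDiagnosticMXBean hs =
              ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
          System.out.println("Max heap (MB): "
              + Runtime.getRuntime().maxMemory() / (1024 * 1024));
          System.out.println("UseCompressedOops: "
              + hs.getVMOption("UseCompressedOops").getValue());
      }
  }

On most setups, running it with -Xmx31g and then -Xmx32g should show
UseCompressedOops flipping from true to false.
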
On 20 Feb 2016 14:55, "Toke Eskildsen" <t...@statsbiblioteket.dk> wrote:

> Shawn Heisey <apa...@elyograg.org> wrote:
> > I've updated the "Taking Solr to Production" reference guide page with
> > what I feel is an appropriate caution against running multiple instances
> > in a typical installation.  I'd actually like to use stronger language,
>
> And I would like you to use softer language.
>
> Machines get bigger all the time and, as you state yourself, GC can
> (easily) become a problem as the heap grows. Given the 32GB JVM limit for
> compressed pointers, an Xmx just below 32GB looks like a practical choice
> for a Solr installation (where possible, of course): running 2 instances
> of 31GB will provide more usable memory than a single instance of 64GB.
>
> https://blog.codecentric.de/en/2014/02/35gb-heap-less-32gb-java-jvm-memory-oddities/
>
> Caveat: I have not done any testing on this with Solr, so I do not know
> how large the effect is. Some things, such as String faceting, DocValues
> structures and some of the field caches, are array-of-atomics oriented and
> will not suffer from larger pointers. Other things, such as numerics
> faceting, large rows settings and grouping, use a lot of objects and will
> require more memory. The overhead will differ depending on usage.
>
> We tend to use separate Solr installations on the same machines. For some
> machines we do it to allow for independent upgrades (long story), for
> others because a heap of 200GB is not something we are ready to experiment
> with.
>
> - Toke Eskildsen
>
