On 12/31/2015 8:03 PM, Zheng Lin Edwin Yeo wrote:
> But the problem I'm facing now is that during optimizing, the memory usage
> of the server hit the maximum of 64GB, and I believe the optimization could
> not be completed fully as there is not enough memory, so when I check the
> index again, it says that it is not optimized. Before the optimization, the
> memory usage was less than 16GB, so the optimization actually uses up more
> than 48GB of memory.
> 
> Is it normal for an index size of 200GB to use up so much memory during
> optimization?

What *exactly* are you looking at that says Solr is using all your
memory?  You must be extremely specific when answering this question.
This will determine whether we should be looking for a bug or not.
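
If the number comes from something like top, free, or the Windows
Task Manager, remember that those tools also count memory the OS has
put to work as disk cache.  As a rough sketch (standard
java.lang.management calls, nothing Solr-specific), this is how the
JVM reports its own view of the memory the program is actually using:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    // Sketch: the JVM's own view of its memory.  This is the number
    // that matters for "Solr is using X GB"; OS-level tools also count
    // the page cache and will report much larger figures.
    public class HeapCheck {
        public static void main(String[] args) {
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
            MemoryUsage heap = mem.getHeapMemoryUsage();
            MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
            System.out.printf("heap used      : %d MB%n", heap.getUsed() / (1024 * 1024));
            System.out.printf("heap max (-Xmx): %d MB%n", heap.getMax() / (1024 * 1024));
            System.out.printf("non-heap used  : %d MB%n", nonHeap.getUsed() / (1024 * 1024));
        }
    }

You can see the same figures without writing any code, in JConsole or
on the Solr admin dashboard.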

It is completely normal for all modern operating systems to use all the
memory when the amount of data being handled is large.  Some of the
memory will be allocated to programs like Java/Solr, and the operating
system will use everything else to cache data from I/O operations on the
disk.  This is called the page cache.  For Solr to perform well, the
page cache must be large enough to effectively cache your index data.

https://en.wikipedia.org/wiki/Page_cache
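
To make the distinction concrete: on 64-bit systems Lucene normally
memory-maps the index files (MMapDirectory), and mapped data is
served from the page cache rather than the Java heap.  Here is a
minimal sketch of the same mechanism, with a made-up file path and
nothing Solr-specific about it:

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    // Sketch: map a file into memory the way Lucene's MMapDirectory
    // does.  The mapped pages live in the OS page cache and do not
    // count against -Xmx, which is why the machine can look "full"
    // while the Java heap stays small.
    public class MmapSketch {
        public static void main(String[] args) throws IOException {
            Path file = Paths.get("/path/to/some/large/file");  // hypothetical path
            try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
                long size = Math.min(ch.size(), Integer.MAX_VALUE);  // map() is limited to 2GB per buffer
                MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, size);
                // Touching the buffer pulls pages into the page cache, not the heap.
                System.out.println("first byte: " + buf.get(0));
            }
        }
    }

This is why a process monitor can show nearly all of the machine's
memory in use while the Solr heap itself never goes above -Xmx.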

In another message thread, you indicated that your max heap was set to
14GB.  Java will only ever use that much memory for the program that is
being run, plus a relatively small amount so that Java itself can
operate.  Any significantly large resident memory allocation beyond the
max heap would be an indication of a bug in Java, not a bug in Solr.

With the index size at 200GB, I would hope to have at least 128GB of
memory in the server, but I would *want* 256GB.  64GB may not be enough
for good performance.
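
For what it's worth, the arithmetic behind those numbers is just my
usual rule of thumb: the Java heap plus enough page cache to hold all
(or at least a good chunk) of the index.

    // Rough sizing sketch: a rule of thumb, not an exact requirement.
    public class SizingSketch {
        public static void main(String[] args) {
            long heapGB  = 14;    // max heap from the other thread
            long indexGB = 200;   // on-disk index size

            long idealGB    = heapGB + indexGB;      // cache the whole index: ~214, so a 256GB machine
            long workableGB = heapGB + indexGB / 2;  // cache about half:      ~114, so a 128GB machine

            System.out.println("ideal RAM (GB)   : " + idealGB);
            System.out.println("workable RAM (GB): " + workableGB);
        }
    }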

Thanks,
Shawn
