On 1/3/2016 7:05 PM, Zheng Lin Edwin Yeo wrote:
> A) Before I start the optimization, the server's memory usage
> is consistent at around 16GB, when Solr starts up and we did some searching.
> However, when I click on the optimization button, the memory usage
> increases gradually, until it reaches the maximum of 64GB which the server
> has. But this only happens to the collection with an index of 200GB, and not
> other collections which have a smaller index size (they are at most 1GB at
> the moment).

<snip>

> A) I am quite curious about this also, because in the Task Manager of the
> server, the amount of memory usage stated does not tally with the
> percentage of memory usage. When I start optimization, the memory usage
> states the JVM is only using 14GB, but the percentage of memory usage is
> almost 100%, when I have 64GB RAM. I have checked the other processes
> running in the server, and did not find any other processes that take up a
> large amount of memory, and the total amount of memory usage for the whole
> server is only around 16GB.

Toke's reply is spot on.

In your first answer above, you didn't really answer my question, which
was "What *exactly* are you looking at that says Solr is using all your
memory?"  You've said "the server's memory usage" but haven't described
how you got that number.

Here's a screenshot of "top" on one of my Solr servers, with the list
sorted by memory usage:

https://www.dropbox.com/s/i49s2uyfetwo3xq/solr-mem-prod-8g-heap.png?dl=0

This machine has 165GB (base 2 number) of index data on it, and 64GB of
memory.  Solr has been assigned an 8GB heap.  Here's more specific info
about the size of the index data:

root@idxb3:/index/solr5/data# du -hs data
165G    data
root@idxb3:/index/solr5/data# du -s data
172926520       data
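
The 165G figure is base 2 and comes straight from that output -- the
plain du number is in kilobytes, so a quick sanity check with bc gives
the same result:

echo "scale=1; 172926520 / 1024 / 1024" | bc
164.9

du -h just rounds that up to 165G.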

You can see that the VIRT memory size of the Solr process is
approximately the same as the total index size (165GB) plus the max heap
(8GB), which adds up to 173GB.  The RES memory size of the java process
is 8.3GB -- just a little bit larger than the max heap.
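
If you want to check those numbers on your own server without a
screenshot, something like this should work on most Linux systems (ps
reports VSZ and RSS in kilobytes, and I'm assuming your Solr process
shows up as "java"):

ps -C java -o vsz,rss,args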

At the OS level, my server shows 46GB used out of 64GB total ... which
probably seems excessive, until you consider the 36 million kilobytes in
the "cached" statistic.  That is the amount of memory being used for the
page cache.  If you subtract the cached memory, you can see that the
programs on this server have only allocated about 10GB of RAM in total
-- exactly what I would expect for a Linux machine dedicated to Solr
with the max heap at 8GB.
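
You can get the same breakdown from the command line with "free" -- the
exact layout depends on your procps version, but the idea is the same:

free -m

On older versions, the "-/+ buffers/cache" line shows the real
allocation; on newer versions, look at the "available" column instead.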

Although my server is showing about 18GB of memory free, I have seen
perfectly functioning servers where that number is very close to zero.
It is completely normal for the "free" memory statistic on Linux and
Windows to show a few megabytes or less, especially after you optimize
a Solr index -- an optimize reads (and writes) all of the index data,
which will fill up the page cache.
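
If you want to watch that happen, the "cached" number that top shows
comes from /proc/meminfo, so on a Linux host you can watch it climb
while the optimize runs:

watch -n 5 'grep -i ^cached /proc/meminfo'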

So, I will ask something very similar to my initial question.  Where
*exactly* are you looking to see the memory usage that you believe is a
problem?  A screenshot would be very helpful.

Here's a screenshot from my Windows client.  This machine is NOT running
Solr, but the situation with free and cached memory is similar.

https://www.dropbox.com/s/wex1gbj7e45g8ed/windows7-mem-usage.png?dl=0

I am not doing anything particularly unusual with this machine, but it
says there is *zero* free memory, out of 16GB total.  There is 9GB of
memory in the page cache, though -- memory that the OS will instantly
give up if any program requests it, which you can see because the
"available" stat is also about 9GB.  This Windows machine is doing
perfectly fine as far as memory.

Thanks,
Shawn
