On 3/18/2014 5:30 AM, Avishai Ish-Shalom wrote:
> My Solr instances are configured with a 10GB heap (Xmx), but Linux shows
> a resident size of 16-20GB. Even with thread stacks and permgen taken into
> account, I'm still far off from these numbers. Could it be that JVM I/O
> buffers take so much space? Does Lucene use JNI/JNA memory allocations?

Solr does not do anything off-heap.  There is a project called
Heliosearch underway that aims to use off-heap memory extensively with Solr.

There IS some misreporting of memory usage, though.  See the screenshot
I just captured of top output, sorted by memory usage.  The Java
process at the top of the list is Solr, running under the included Jetty:

https://www.dropbox.com/s/03a3pp510mrtixo/solr-ram-usage-wrong.png

I have a 6GB heap and 52GB of index data on this server.  This makes the
62.2GB virtual memory size completely reasonable.  The claimed resident
memory size is 20GB, though.  If you add that 20GB to the 49GB that is
allocated to the OS disk cache and the 6GB that it says is free, that's
75GB.  I've only got 64GB of RAM on the box, so something is being
reported wrong.
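
The double counting is easy to see outside of Solr.  Here's a rough
standalone sketch (plain java.nio, with a made-up file path) that maps a
large file read-only and touches its pages; top will then show roughly
the mapped size in both RES and SHR for the process, while the very same
pages are also counted as OS disk cache:

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileChannel.MapMode;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MmapResidentDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Hypothetical path -- point it at any large file (an index segment works).
        try (FileChannel ch = FileChannel.open(Paths.get("/data/solr/bigfile"),
                                               StandardOpenOption.READ)) {
            // map() is limited to 2GB per call, so cap the size for the demo.
            long size = Math.min(ch.size(), Integer.MAX_VALUE);
            MappedByteBuffer buf = ch.map(MapMode.READ_ONLY, 0, size);
            long sum = 0;
            // Touch one byte per 4KB page so the kernel faults the pages in;
            // untouched pages never count toward RES at all.
            for (int pos = 0; pos < buf.capacity(); pos += 4096) {
                sum += buf.get(pos);
            }
            System.out.println("checksum " + sum + " -- now check RES/SHR in top");
            Thread.sleep(60000);  // keep the mapping alive while you look at top
        }
    }
}

No heap memory is allocated beyond a few objects, yet RES grows by
roughly the mapped size, which is why RES + cache + free can add up to
more than the physical RAM.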

If I take my 20GB resident size and subtract the 14GB shared size, the
result is much closer to reality, and it makes the numbers fit into the
actual amount of RAM that's on the machine.  I believe the misreporting
is caused by the specific way that Java uses mmap when opening Lucene
indexes.  This comes from my recollection of a conversation I saw in
#lucene or #lucene-dev, not from my own exploration.  I believe they
said that the mmap variants which don't inflate the reported resident
size would not do what Lucene requires.
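
If you want to double-check that RES-minus-SHR estimate, the kernel
breaks the numbers down per mapping in /proc/<pid>/smaps.  A quick
sketch (plain Java, nothing Solr-specific, class name made up) that sums
the private pages, which is essentially what RES minus SHR approximates:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class PrivateRss {
    public static void main(String[] args) throws IOException {
        // Pass the Solr pid on the command line, e.g. "java PrivateRss 12345".
        // Run it as the same user that owns the process (or as root).
        String pid = args.length > 0 ? args[0] : "self";
        long privateKb = 0;
        for (String line : Files.readAllLines(
                Paths.get("/proc/" + pid + "/smaps"), StandardCharsets.UTF_8)) {
            // Every mapping lists Private_Clean and Private_Dirty in kB.
            // Their sum is the memory the process actually owns, which is
            // roughly the RES-minus-SHR number from top.
            if (line.startsWith("Private_Clean:") || line.startsWith("Private_Dirty:")) {
                String[] parts = line.trim().split("\\s+");
                privateKb += Long.parseLong(parts[1]);
            }
        }
        System.out.println("private RSS: " + (privateKb / 1024) + " MB");
    }
}

On my server that number lines up with heap plus the usual JVM overhead,
not with the 20GB that top claims.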

Thanks,
Shawn
