On 9/9/2013 10:35 AM, P Williams wrote:
Is it odd that my index is ~16GB but top shows 30GB in virtual memory?
  Would the extra be for the field and filter caches I've increased in size?

This should probably be a new thread, but it might have some applicability here, so I'm replying.

I have noticed some inconsistencies in how Linux reports memory for Solr. Here's a screenshot of top on one of my production systems, sorted by memory:

https://www.dropbox.com/s/ylxm0qlcegithzc/prod-top-sort-mem.png

The virtual memory size for the process at the top of the list (the Solr JVM) is right in line with my index size, plus a few gigabytes for the java heap. Something to note as you ponder these numbers: my java heap is only 6GB, and Java has allocated the entire 6GB. The other two java processes are homegrown Solr-related applications.
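In case anyone is wondering why VIRT tracks the index size so closely: on a 64-bit JVM, Lucene's default directory implementation (MMapDirectory) maps the index files into the process's address space, so every mapped byte counts toward virtual size even though none of it lives on the java heap. Here's a rough standalone sketch of that mechanism (my own illustration, not Lucene's actual code, and the default file path is made up), which you can run against any large file while watching top:

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;

public class MmapDemo {
    // One-gigabyte chunks; a single MappedByteBuffer is limited to 2GB, so
    // large files have to be mapped in pieces (MMapDirectory does the same).
    private static final long CHUNK = 1L << 30;

    public static void main(String[] args) throws IOException, InterruptedException {
        // Example path only -- point this at any large file, e.g. a Lucene segment file.
        Path file = Paths.get(args.length > 0 ? args[0] : "/path/to/index/segment.cfs");
        // Hold the buffers so the mappings stay referenced while we sleep.
        List<MappedByteBuffer> buffers = new ArrayList<MappedByteBuffer>();
        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
            long size = channel.size();
            // Mapping is just address-space bookkeeping: VIRT grows by 'size'
            // right away, but RES only grows as pages are actually read.
            for (long offset = 0; offset < size; offset += CHUNK) {
                long len = Math.min(CHUNK, size - offset);
                buffers.add(channel.map(FileChannel.MapMode.READ_ONLY, offset, len));
            }
            System.out.println("Mapped " + size + " bytes; check VIRT/RES in top now.");
            Thread.sleep(60000);  // keep the mappings alive while you look
        }
    }
}

VIRT goes up by the full file size the moment the mappings are created; RES only grows as pages are actually touched.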

What's odd is the resident and shared memory sizes. I have pretty much convinced myself that those numbers are misreported. If you add up the values top shows for cached and free, you get a total of 53659264 kB ... about 11GB shy of the 64GB total memory.

If the reported resident memory for the Solr java process (17GB) were accurate, then that 17GB plus the roughly 51GB shown as cached and free would exceed total physical memory by several gigabytes, and there would be swap in use, but as you can see, there is no swap in use.
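If you'd rather not trust top's summary line for that math, the same figures come straight from /proc/meminfo. Here's a quick sketch that does the addition; it assumes the usual MemTotal/MemFree/Cached lines in kB and ignores buffers and slab, so treat the output as approximate:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

public class MeminfoCheck {
    public static void main(String[] args) throws IOException {
        Map<String, Long> kb = new HashMap<String, Long>();
        // /proc/meminfo lines look like "MemFree:        1234567 kB"
        for (String line : Files.readAllLines(Paths.get("/proc/meminfo"), StandardCharsets.US_ASCII)) {
            String[] parts = line.trim().split("\\s+");
            if (parts.length >= 2) {
                kb.put(parts[0].replace(":", ""), Long.parseLong(parts[1]));
            }
        }
        long total = kb.get("MemTotal");
        long freePlusCached = kb.get("MemFree") + kb.get("Cached");
        // Whatever isn't free or cached is roughly the ceiling on how much
        // all processes combined can really have resident.
        System.out.printf("free+cached = %d kB (%.1f GB), leaving about %.1f GB for resident memory%n",
                freePlusCached, freePlusCached / 1048576.0, (total - freePlusCached) / 1048576.0);
    }
}

It should print the same free and cached totals that top shows, since both read the same kernel counters.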

Recently I overheard a conversation between Lucene committers in a Lucene IRC channel in which they seemed to be discussing this phenomenon. There is apparently some issue with certain mmap modes that results in the operating system's shared memory number going up even though no actual memory is being consumed.
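As far as I can tell, the underlying behavior is that resident pages of a memory-mapped file are really just OS page cache, but Linux still charges them to the process: they show up in both RES and SHR even though they are not private to the JVM and can be dropped by the kernel at any time. You can see the per-mapping breakdown in /proc/<pid>/smaps. Here's a rough sketch that totals the Rss lines, and separately totals them for mappings whose file name contains a given substring (the pid and filter arguments are just my own convention for this little tool; you need to be the same user as the target process, or root, to read its smaps):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class SmapsSummary {
    public static void main(String[] args) throws IOException {
        // Usage: java SmapsSummary <pid> <path-substring>
        // The default filter "/" matches every file-backed mapping.
        String pid = args.length > 0 ? args[0] : "self";
        String filter = args.length > 1 ? args[1] : "/";
        List<String> lines = Files.readAllLines(
                Paths.get("/proc/" + pid + "/smaps"), StandardCharsets.US_ASCII);
        long totalRssKb = 0;
        long matchedRssKb = 0;
        boolean inMatchedMapping = false;
        for (String line : lines) {
            if (line.matches("^[0-9a-f]+-[0-9a-f]+ .*")) {
                // Header line for a mapping: address range, permissions, offset,
                // device, inode, and (for file-backed mappings) the file path.
                inMatchedMapping = line.contains(filter);
            } else if (line.startsWith("Rss:")) {
                // Rss is reported per mapping, in kB.
                long kb = Long.parseLong(line.trim().split("\\s+")[1]);
                totalRssKb += kb;
                if (inMatchedMapping) {
                    matchedRssKb += kb;
                }
            }
        }
        System.out.printf("total Rss = %.1f GB, Rss for mappings matching '%s' = %.1f GB%n",
                totalRssKb / 1048576.0, filter, matchedRssKb / 1048576.0);
    }
}

Pointing it at the Solr pid with the index directory as the filter should show that the bulk of that 17GB "resident" figure is mapped index data sitting in the page cache, not memory the JVM has actually allocated.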

Thanks,
Shawn
