On Mon, 2016-11-14 at 16:29 -0800, Chetas Joshi wrote:
> Hi Toke, can you explain exactly what you mean by "the aggressive IO
> for the memory mapping caused the kernel to start swapping parts of
> the JVM heap to get better caching of storage data"?

I am not sure what you are asking for. I'll try adding more details:


Our machine(s) that ran into the swap problem had 256GB of physical
memory, with some 50GB+ free for caching, but handled multiple
terabytes of index. So the free memory available for memory mapping
(aka disk cache) was around 1% of the index size. With 25 active
shards on the machine, each search request resulted in a lot of IO to
map index data into physical memory.
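
To make that concrete: Lucene's MMapDirectory maps the index files
into the process address space, roughly like the sketch below. The
file path and the page-touching loop are made up for illustration;
the real access pattern is driven by the queries:

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MmapSketch {
    public static void main(String[] args) throws IOException {
        // Hypothetical segment file - stands in for a Lucene index file.
        try (FileChannel ch = FileChannel.open(
                Paths.get("/index/segment.dat"), StandardOpenOption.READ)) {
            // map() only reserves virtual address space. Physical pages
            // are faulted in by the kernel on first access, so with the
            // disk cache at ~1% of index size, most accesses turn into
            // real IO.
            int size = (int) Math.min(ch.size(), Integer.MAX_VALUE);
            MappedByteBuffer buf =
                ch.map(FileChannel.MapMode.READ_ONLY, 0, size);
            long sum = 0;
            for (int pos = 0; pos < size; pos += 4096) {
                sum += buf.get(pos); // one byte per 4KB page -> page fault
            }
            System.out.println("checksum: " + sum);
        }
    }
}

The point being that the "memory" in memory mapping is really the
kernel's page cache doing the work, which is what put it in
competition with the JVM heaps.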

The Solr JVMs on the machine did not do a lot of garbage collection,
partly because of the low query rate, partly because of some internal
hacks.

So we had a machine with very heavy memory mapping and not-too-active
JVM heaps.

The principle behind swap is to store infrequently used memory on
slower storage. My guess is that the kernel decided that freeing more
memory for mapping, by pushing relatively stale parts of the JVM
heaps to swap, would result in overall better performance.
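
Note that this trade-off is tunable: vm.swappiness (default 60 on
most Linux distributions) biases the kernel between reclaiming page
cache and swapping out anonymous (heap) pages, and the usual
mitigation is to lower it or disable swap entirely. A quick way to
inspect both the knob and the symptom, assuming a Linux box - just a
sketch, not something we ran back then:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class SwapCheck {
    public static void main(String[] args) throws IOException {
        // Higher vm.swappiness makes the kernel more willing to swap
        // anonymous pages to free room for the disk cache.
        String swappiness = Files.readAllLines(
                Paths.get("/proc/sys/vm/swappiness")).get(0).trim();
        System.out.println("vm.swappiness = " + swappiness);

        // SwapTotal/SwapFree in /proc/meminfo show how much anonymous
        // memory (JVM heap included) has already been pushed to swap.
        for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
            if (line.startsWith("Swap")) {
                System.out.println(line);
            }
        }
    }
}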

- Toke Eskildsen, State and University Library, Denmark
