Okay, I'll post some screenshots somewhere people can get to them to show what I'm seeing. Unfortunately I just deployed some unrelated changes to Solr that required restarting each node in the SolrCloud cluster, so right now the swap usage is minimal. I'll let it grow for a few days and then send some URLs to the list.
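In the meantime, to keep an eye on it as it grows, I'm just logging the swap numbers with something simple like this (a rough sketch, assuming the usual free/vmstat on these boxes):

    # log swap usage every 10 minutes while it grows
    while true; do date; free -m; sleep 600; done >> swap-growth.log

    # or watch swap-in/swap-out activity live (the si/so columns)
    vmstat 60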
BTW, we're running RHEL 5.9 (Tikanga), and uname -a reports:

    Linux da-pans-xxx 2.6.18-348.12.1.el5 #1 SMP Mon Jul 1 17:54:12 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux

Thanks!
Darrell

-----Original Message-----
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: Wednesday, March 26, 2014 8:14 PM
To: solr-user@lucene.apache.org
Subject: RE: Solr 4.3.1 memory swapping

> Thanks - we're currently running Solr inside of RHEL virtual machines
> inside of VMware. Running "numactl --hardware" inside the VM shows the
> following:
>
> available: 1 nodes (0)
> node 0 size: 16139 MB
> node 0 free: 364 MB
> node distances:
> node   0
>   0:  10
>
> So there is only one node being shown, and only one memory bank. Am I
> correct in assuming that means NUMA can't be the issue?
>
> My best guess is that what's going on relates to that big memory-mapped
> file Solr allocates. Our search index is about 60GB, much bigger than
> the 16GB of RAM the operating system has to work with. Could the
> swapping be due to the memory-mapped file in some way?

If mmap is leading to swapping, that's a serious operating system glitch. That's not supposed to happen. The NUMA idea is the only thing I know of that could cause this, assuming there isn't something else on the system using memory.

If you could run top, press Shift-M to sort by memory, and then get a screenshot, that would be good. Be sure the terminal has enough height that we can see quite a few of the top entries.

Thanks,
Shawn
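P.S. If a screenshot turns out to be awkward, a plain-text capture of the process list sorted by resident memory should show the same thing, something along these lines:

    # top memory consumers by resident set size
    ps aux --sort=-rss | head -n 20

The --sort option comes from the procps ps, so it should be available on RHEL 5; just paste the output into a reply.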