Hello,
We recently upgraded to SolrCloud 6.6. We are running Ubuntu LTS 14.x servers
(VMware on Nutanix boxes). We have 4 nodes with 32GB RAM each and a 16GB JVM
heap (12GB minimum). The heap usually sits at only 4-7GB.


We do a nightly partial-field reindex of all our docs (~200K), which usually
takes about 3 hours using 10 threads. Roughly every other week one server goes
into recovery mode during the update. The recovering server shows much higher
swap usage than the other servers in the cluster, which we think is related to
the memory-mapped (mmap) files used for the indexes. The server eventually
recovers, but it triggers annoying alerts for devops.
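For reference, this is roughly how we check per-process swap usage on a node
(the pgrep pattern for the Solr process is an assumption; adjust it to however
Solr was started on your machines):

```shell
# The kernel reports per-process swap usage as VmSwap in /proc/<pid>/status.
# "start.jar" is an assumed match pattern for the Solr JVM; substitute
# whatever matches your Solr process. Falls back to the current shell
# (/proc/self) if no match is found, so the command still runs.
pid=$(pgrep -f start.jar | head -n1)
grep VmSwap "/proc/${pid:-self}/status"
```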


I found a previous mailing-list thread from 2014 (which Shawn responded to)
describing an almost identical problem, but no remedy was suggested:
http://lucene.472066.n3.nabble.com/Solr-4-3-1-memory-swapping-td4126641.html


Questions:


Has there been any progress on this since 2014?


Is there some configuration that can mitigate it?


Or could this be a Lucene issue rather than a Solr one?
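For example, would lowering vm.swappiness help here? A sketch of what I have
in mind (the value 1 is an assumption on my part, not something we have tested):

```shell
# Show the current swappiness (Ubuntu 14.x defaults to 60, which lets the
# kernel swap out idle JVM heap pages in favour of page cache for the
# mmap'd index files).
cat /proc/sys/vm/swappiness
# To strongly discourage swapping (requires root):
#   sysctl -w vm.swappiness=1
# And to persist across reboots, add this line to /etc/sysctl.conf:
#   vm.swappiness = 1
```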


Thanks,

Bill OConnor (www.plos.org)
