Shital,

Take a look at 
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html as it's 
a pretty decent explanation of memory-mapped files. Solr 4's default 
directoryFactory (NRTCachingDirectoryFactory) does delegate to MMapDirectory on 
64-bit JVMs, but even so, my understanding is that the entire index won't be 
forcibly loaded into RAM by Solr. Mapping a file only reserves virtual address 
space; the OS's filesystem cache controls what is actually resident in RAM, and 
the eviction policy depends on the OS.
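
To make that concrete, here's a minimal sketch (not Lucene's actual code; the class and helper names are made up for illustration) showing that mapping a file via java.nio only establishes the mapping. Pages are faulted in lazily when the buffer is touched, and the OS page cache decides what stays resident:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapDemo {
    // Map the whole file read-only and read its first byte.
    // map() itself reserves virtual address space; only the page(s)
    // actually touched (here, the first one) are read from disk.
    static byte firstByteViaMmap(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            MappedByteBuffer buf =
                ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            return buf.get(0);
        }
    }

    public static void main(String[] args) throws IOException {
        // Scratch file standing in for a Lucene segment file.
        Path file = Files.createTempFile("segment", ".dat");
        byte[] data = new byte[1 << 20]; // 1 MiB
        data[0] = 42;
        Files.write(file, data);

        // The 1 MiB mapping does not force 1 MiB into RAM; the OS
        // pages data in on demand and evicts it under memory pressure.
        System.out.println(firstByteViaMmap(file)); // prints 42
        Files.delete(file);
    }
}
```

This is why a large mapped index can coexist with a large heap: resident set size is governed by the kernel's page cache, not by the mapping size.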

Thanks,
Greg

On Feb 12, 2014, at 12:57 PM, "Joshi, Shital" <shital.jo...@gs.com> wrote:

> Does Solr4 load the entire index into a memory-mapped file? What is the 
> eviction policy of this memory-mapped file, and can we control it?
> 
> _____________________________________________
> From: Joshi, Shital [Tech]
> Sent: Wednesday, February 05, 2014 12:00 PM
> To: 'solr-user@lucene.apache.org'
> Subject: Solr4 performance
> 
> 
> Hi,
> 
> We have a SolrCloud cluster (5 shards, 2 replicas) running on 10 dynamic 
> compute boxes (cloud). We're using local disk (/local/data) to store the Solr 
> index files. All hosts have 60GB of RAM, and the Solr4 JVMs run with a 30GB 
> max heap. So far we have 470 million documents. We use custom sharding, and 
> all shards have ~9-10 million documents. We have a GUI sending queries to 
> this cloud, and the GUI has a 30-second timeout.
> 
> Lately we're getting many timeouts in the GUI, and upon checking we found 
> that all of them happen on 2 hosts. The admin GUI for one of those hosts 
> shows 96% physical memory usage, but the other host looks perfectly fine. 
> The two hosts serve different shards. Would increasing the RAM on these two 
> hosts make the timeouts go away? What else can we check?
> 
> Many Thanks!
> 
> 
