Hi,
The recommendation to have enough RAM to fit your entire index in memory is a 
sort of worst-case scenario (maybe better called the best-case scenario), where 
your index is optimal and fully used all the time. The OS loads pages into 
memory as they are used (plus some read-ahead), so even if you have 40GB of 
index files on disk, files you never touch will not be loaded into memory. Why 
would some files go unused? Maybe some fields are stored but you never retrieve 
them, maybe you enabled doc values but never use them, or maybe you query only 
a subset of your documents and old documents never appear in results…
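For the record, on Linux you can see roughly how much of your memory the page cache is using, and (with a third-party tool) how much of a given file is actually resident. A minimal sketch; the index path in the last line is only an example, not your actual layout:

```shell
# "buff/cache" in free's output is mostly the OS page cache holding file data
free -h

# MemAvailable estimates how much memory could be reclaimed for new work
grep -E 'MemTotal|MemAvailable|^Cached' /proc/meminfo

# Optional: vmtouch (a third-party tool) reports per-file page-cache residency.
# The path below is a placeholder -- point it at your Solr core's index dir.
# vmtouch -v /var/solr/data/mycore/data/index
```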
The best approach is to run your Solr with some monitoring tool and see how 
much RAM is actually used on average and at peak, then provision that value 
plus some headroom. You can also put an alert on used RAM and react if/when 
your system starts requiring more. One such tool is our https://sematext.com/spm 
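As a rough, hand-rolled alternative while you set up proper monitoring, you could periodically sample the resident set size (RSS) of the Solr JVM. This is only a sketch: the `start.jar` process pattern is an assumption about how Solr was launched, so adjust it to your setup, and combine the result with the page-cache numbers from free(1) to see total memory usage:

```shell
# Print one timestamped RSS sample for the Solr JVM; run it from cron or a
# loop to build an average/max picture over a full test run.
SOLR_PID=$(pgrep -f start.jar | head -n 1)   # assumes standalone Solr start
if [ -n "$SOLR_PID" ]; then
  RSS_KB=$(ps -o rss= -p "$SOLR_PID")        # resident memory in kilobytes
  printf '%s Solr RSS: %.1f GB\n' "$(date '+%F %T')" \
    "$(echo "$RSS_KB" | awk '{print $1/1048576}')"
fi
```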

HTH,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/



> On 15 Apr 2019, at 15:25, SOLR4189 <klin892...@yandex.ru> wrote:
> 
> Hi all,
> 
> I have a collection with many shards. Each shard is on a separate Solr node
> (VM) with a 40GB index, 4 CPUs and an SSD.
> 
> When I run a performance check with 50GB RAM per node (10GB for the JVM and
> 40GB for the index) versus 25GB RAM (10GB for the JVM and 15GB for the
> index), I get the same query times (80th, 90th and 95th percentiles). I ran
> a long test - 8 hours of production queries and updates.
> 
> What does it mean? Is it not a must to have the whole index in RAM? Maybe it
> is due to the SSD? How can I check it?
> 
> Thank you.
> 
> 
> 
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
