bq: You should have memory to fit your whole database in disk cache and then
some more.

I have to disagree here, if for no other reason than that stored data,
which is irrelevant for searching, may make up virtually none or
virtually all of your on-disk space. Saying it all needs to fit in the
disk cache is too broad-brush a statement; you have to test.
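
To make "test" concrete, here's a minimal sketch (Python, assuming a
populated Lucene index directory at a hypothetical path) that totals
on-disk size by file extension, so you can see how much of the index is
stored-field data that pure searching never reads:

import os
from collections import defaultdict

# Hypothetical path; point this at one core's index directory.
INDEX_DIR = "/var/solr/data/collection1/data/index"

STORED = {".fdt", ".fdx"}          # stored fields (not read by pure search)
SEARCH = {".tim", ".tip",          # term dictionary/index
          ".doc", ".pos", ".pay",  # postings
          ".dvd", ".dvm",          # docvalues
          ".nvd", ".nvm"}          # norms

sizes = defaultdict(int)
for name in os.listdir(INDEX_DIR):
    _, ext = os.path.splitext(name)
    sizes[ext] += os.path.getsize(os.path.join(INDEX_DIR, name))

total = sum(sizes.values())
stored = sum(v for k, v in sizes.items() if k in STORED)
gb = 1024 ** 3
print("total  %6.1f GB" % (total / gb))
print("stored %6.1f GB (%.0f%%)" % (stored / gb, 100.0 * stored / total))

If stored fields dominate, most of the index never needs to be in the
disk cache for searching; if they're a sliver, sizing the cache against
the whole index is much closer to the mark.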

In this case, though, I _do_ think there's not enough memory here;
Toke's comments are spot on.
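
To put numbers on that: Toke's 50-100% rule of thumb against a 185GB
index suggests roughly 90-185GB of disk cache, against 18GB total RAM.
A quick Linux-only sketch (it reads /proc/meminfo, so it won't work
elsewhere) to see what the OS actually has in the page cache right now:

INDEX_GB = 185  # from the original post

meminfo = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, value = line.split(":", 1)
        meminfo[key] = int(value.split()[0])  # values are in kB

total_gb = meminfo["MemTotal"] / (1024.0 ** 2)
cached_gb = meminfo["Cached"] / (1024.0 ** 2)
print("RAM total:      %6.1f GB" % total_gb)
print("page cache now: %6.1f GB" % cached_gb)
print("rule of thumb:  %.0f-%.0f GB" % (0.5 * INDEX_GB, INDEX_GB))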

On Sat, Nov 29, 2014 at 2:02 AM, Toke Eskildsen <t...@statsbiblioteket.dk> 
wrote:
> Po-Yu Chuang [ratbert.chu...@gmail.com] wrote:
>> [...] Everything works fine now, but I noticed that the load
>> average of the server is high because there is constantly
>> heavy disk read access. Please point me in some direction.
>
>> RAM: 18G
>> Solr home: 185G
>> disk read access constantly 40-60M/s
>
> Solr search performance is tightly coupled to the speed of small random
> reads. There are two obvious ways of ensuring that these days:
>
> 1) Add more RAM to the server, so that the disk cache can hold a larger part 
> of the index. If you add enough RAM (depends on your index, but 50-100% of 
> the index size is a rule of thumb), you get "ideal" storage speed, by which I 
> mean that the bottleneck moves away from storage. If you are using spinning 
> drives, the 18GB of RAM is not a lot for a 185GB index.
>
> 2) Use SSDs instead of spinning drives (if you do not already do so). The 
> speed-up depends a lot on what you are doing, but it is a cheap upgrade and
> it can later be coupled with extra RAM if it is not enough in itself.
>
> The Solr Wiki has this: https://wiki.apache.org/solr/SolrPerformanceProblems
> And I have this: http://sbdevel.wordpress.com/2013/06/06/memory-is-overrated/
>
> - Toke Eskildsen
