You should have enough memory to fit your whole database in the disk cache,
and then some. I prefer to have at least twice that, to accommodate the
startup of new searchers while still serving from the "old" ones.

With less than that, performance drops a lot.

> Solr home: 185G
If that is your database size, then you need new machines....
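
For rough scale, here is a back-of-the-envelope sketch in Python using the
numbers quoted below (18 GB of RAM, 185 GB Solr home); the JVM heap size is
an assumption, substitute your real -Xmx value:

# Rough check of how much of the index the OS page cache can hold.
index_size_gb = 185   # Solr home size from the thread
total_ram_gb = 18     # physical RAM from the thread
jvm_heap_gb = 4       # assumed heap size; not stated in the thread

page_cache_gb = total_ram_gb - jvm_heap_gb
coverage = page_cache_gb / index_size_gb

print(f"RAM left for the OS page cache: ~{page_cache_gb} GB")
print(f"Fraction of the index that fits in cache: ~{coverage:.0%}")  # ~8%
print(f"RAM per the 1x-2x rule of thumb: {index_size_gb}-{2 * index_size_gb} GB")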



2014-11-29 6:59 GMT+01:00 Po-Yu Chuang <ratbert.chu...@gmail.com>:

> Hi all,
>
> I am using Solr 4.9 with Tomcat. Thanks to the suggestions from Yonik and
> Dmitry about the slow startup, everything works fine now, but I noticed
> that the load average of the server is high because there is constant
> heavy disk read access. Please point me in some direction.
>
> Some numbers about my system:
> RAM: 18G
> swap space: 2G
> number of documents: 27 million
> Solr home: 185G
> disk read access: constantly 40-60 MB/s
> document cache size: 16K entries
> document cache hit ratio: 0.65
> query cache size: 16K
> query cache hit ratio: 0.03
>
> At first, I wondered if the disk reads came from swap, so I decreased the
> swappiness from 60 to 10, but the disk reads are still there, which means
> they do not result from swapping in.
>
> Then, I tried different document cache and query cache sizes. The effect
> of changing the query cache size is not obvious: I tried 512, 16K, and 256K
> entries and the hit ratio stayed between 0.01 and 0.03.
>
> For the document cache, a larger size did improve the hit ratio (I tried
> 512, 16K, 256K, 512K, and 1024K entries and the hit ratio was between
> 0.58 and 0.87), but the disk reads are still high.
>
> Is adjusting the document cache size a reasonable direction, or should I
> just increase the physical memory? Is there a method to estimate the right
> size of the document cache (or other caches) and to estimate the amount of
> physical memory needed?
>
> Thanks,
> Po-Yu
>
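
Regarding the question above about estimating the document cache size: here
is a rough sketch, in Python, of the usual heap estimate (entries times the
average stored-document size). The 4 KB average is only a placeholder, not a
measured value from this index:

# Approximate heap cost of Solr's documentCache:
# roughly entries x average size of the stored fields per document.
def document_cache_heap_mb(entries: int, avg_stored_doc_kb: float) -> float:
    return entries * avg_stored_doc_kb / 1024.0  # KB -> MB

# Entry counts tried in the thread; avg_stored_doc_kb = 4.0 is a placeholder.
for entries in (512, 16_384, 262_144, 524_288, 1_048_576):
    print(f"{entries:>9} entries -> ~{document_cache_heap_mb(entries, 4.0):,.0f} MB heap")

Note that the document cache only helps with re-reading stored fields; it
does nothing for the index files themselves, which is why the disk reads
stay high while the index does not fit in the OS page cache.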
