Hi everyone,

I'm currently taking some performance measurements on a Solr installation and trying to figure out whether what I see mostly fits expectations.

The data is as follows:

- Solr 4.8.1
- 8 million documents
- mostly office documents with real text content, stored
- index size on disk: 90 GB
- full index memory-mapped into virtual memory
- this is on a VMware server, 4 cores, 16 GB RAM

PID PR  NI  VIRT  RES  SHR S   %CPU %MEM    TIME+  nFLT
961 20   0 93.9g  10g 6.0g S     19 64.5 718:39.81 757k
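
As a cross-check on top's nFLT column, here is a minimal sketch of how the fault rate can be watched per process on Linux (assuming Python 3 is available on the box; the Solr PID is passed on the command line). Field 12 of /proc/<pid>/stat is the cumulative major fault count:

import sys
import time

def majflt(pid):
    """Read the cumulative major fault count (field 12 of /proc/<pid>/stat)."""
    with open(f"/proc/{pid}/stat") as f:
        stat = f.read()
    # The comm field (2) may contain spaces, so split after its closing paren;
    # the remaining fields start at field 3 (state), so majflt is index 9.
    return int(stat.rsplit(")", 1)[1].split()[9])

pid = int(sys.argv[1])
prev = majflt(pid)
while True:
    time.sleep(1)
    cur = majflt(pid)
    print(f"major faults/s: {cur - prev}")
    prev = cur

Running it against PID 961 during the test prints one delta per second and should agree with the rate top reports.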

When I run a JMeter query test that sends requests as fast as possible from a few threads, replaying real-world queries of mostly one or two terms (sometimes more), throughput peaks at about 4 qps.
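
In case anyone wants to reproduce this without JMeter, below is a rough equivalent of the test loop; it assumes the stock Solr 4.x port and core name (localhost:8983, collection1), and the query list is a hypothetical stand-in for the real replay log:

import threading
import time
import urllib.parse
import urllib.request

# Stand-in values; point SOLR_URL at the installation under test and
# feed QUERIES from the actual replay log.
SOLR_URL = "http://localhost:8983/solr/collection1/select"
QUERIES = ["report", "invoice 2013", "meeting minutes"]
THREADS = 4
DURATION = 30  # seconds

count = 0
lock = threading.Lock()

def worker(deadline):
    global count
    i = 0
    while time.time() < deadline:
        params = urllib.parse.urlencode(
            {"q": QUERIES[i % len(QUERIES)], "wt": "json", "rows": 10})
        urllib.request.urlopen(SOLR_URL + "?" + params).read()
        with lock:
            count += 1
        i += 1

deadline = time.time() + DURATION
workers = [threading.Thread(target=worker, args=(deadline,)) for _ in range(THREADS)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(f"{count} requests in {DURATION}s -> {count / DURATION:.1f} qps")

The stdlib urllib keeps it dependency-free, and Python's GIL is no obstacle here since the threads spend their time blocked on network I/O, just like JMeter's.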

What I see is around 150 to 200 major page faults per second, meaning that Solr is not really happy with what happens to be in memory at any given instant. At 4 qps that works out to roughly 40-50 major faults per query; at a few milliseconds per random disk read, the fault-triggered I/O alone would account for most of each query's latency.

My hunch is that this points to a RAM footprint that is simply too small: RES minus SHR in the top output suggests a JVM heap of around 4 GB, which leaves at best 10-12 GB of page cache for a 90 GB index, so only about an eighth of the index can be resident at any time. Much more RAM would be needed to get the number of major page faults down.

Would anyone agree or disagree with this analysis? Or is someone out there saying "200 major page faults/second are normal, there must be another problem"?

Thanks,
Harald.
