I am aware of the power of the caches.
I do not want to remove the caches completely - I want them to be small,
so I can launch a stress test with a small amount of data.
( Some items may come from the cache while others need a real index
lookup - right now everything comes from the cache... )
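
For reference, this is roughly what I mean in solrconfig.xml - the cache
names below are the standard Solr ones, but the sizes are placeholder
values I picked just to keep the caches tiny:

  <query>
    <!-- keep only a handful of entries so most queries hit the index -->
    <filterCache      class="solr.FastLRUCache" size="16" initialSize="16" autowarmCount="0"/>
    <queryResultCache class="solr.LRUCache"     size="16" initialSize="16" autowarmCount="0"/>
    <documentCache    class="solr.LRUCache"     size="16" initialSize="16" autowarmCount="0"/>
  </query>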

2010/12/21 Toke Eskildsen <t...@statsbiblioteket.dk>:
> Stijn Vanhoorelbeke [stijn.vanhoorelb...@gmail.com] wrote:
>> I want to do some quick & dirty load testing - but all my results are cached.
>> I commented out all the Solr caches - but still everything is cached.
>>
>> * Could the caching come from the 'Field Collapsing Cache'?
>> -- although I don't see this element in my config file.
>> ( As the system now jumps from 1 GB to 7 GB of RAM when I do a load
>> test with lots of queries. )
>
> If you allow the JVM to use a maximum of 7 GB of heap, it is not that surprising
> that it allocates it all when you hammer the searcher. Whether the heap is used
> for caching or just filled with dead objects waiting for garbage collection is
> hard to say at this point. Try lowering the maximum heap to 1 GB and do your
> testing again.
>
> Also note that Lucene/Solr performance on conventional hard disks benefits a
> lot from disk caching: if you perform the same search more than once, the
> speed will increase significantly as the relevant parts of the index will
> (probably) be in RAM. Remember to flush your disk cache between tests.
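
For completeness, this is roughly how I plan to run the next round -
assuming a Linux test box and the standard Jetty start.jar launcher;
the heap size and commands below are just my setup:

  # cap the JVM heap at 1 GB so heap growth cannot mask cache behaviour
  java -Xms1g -Xmx1g -jar start.jar

  # between test runs (as root): flush dirty pages, then drop the OS
  # page cache (3 = pagecache + dentries + inodes)
  sync
  echo 3 > /proc/sys/vm/drop_caches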
