To add some numbers to adityab's comment.

Each entry in your filterCache will probably consume
maxDocs/8 bytes plus some overhead, or about 16G in total
with your settings. This will only grow as you fire
queries at Solr, so it's no surprise you're running out
of memory as you process queries.
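As a rough illustration of that arithmetic (both numbers below are placeholders; plug in the maxDoc from your index and the filterCache "size" from your solrconfig.xml):

```python
# Back-of-the-envelope filterCache sizing.
# Both values are hypothetical; substitute your own numbers.
max_docs = 1_000_000        # maxDoc from your index (this thread mentions ~1M)
cache_entries = 131_072     # hypothetical filterCache "size" setting

bytes_per_entry = max_docs // 8               # one bit per document
total_bytes = bytes_per_entry * cache_entries

print(bytes_per_entry)   # 125000 bytes (~122 KB) per cached fq
print(total_bytes)       # 16384000000 bytes, i.e. about 16 GB
```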

Your documentCache is probably also a problem: an 80G
index with only 1M docs implies very large documents, so
each cached document is expensive (I'm extrapolating here).

The queryResultCache is also very big, although its entries
are usually much smaller. Still, I'd set it back to the defaults.

Why did you change these from the defaults? The very
first thing I'd do is change them back.

Your autowarm counts are also a problem at 2,048.
Again, take the filterCache. It's essentially a map
where each entry's key is the fq clause and the
value is the set of documents that match the query,
often stored as a bit set (thus the maxDocs/8 above).
Whenever a new searcher is opened in your setup, the
most recent 2,048 fq clauses will be re-executed, which
should really kill your searcher open times. Try something
reasonable like 16-32.
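For reference, settings in the neighborhood of the shipped defaults look something like this in solrconfig.xml (the classes and numbers here are illustrative; check the defaults that ship with your Solr version):

```xml
<!-- Illustrative cache settings close to the stock defaults;
     tune size/autowarmCount for your own traffic. -->
<filterCache      class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="32"/>
<queryResultCache class="solr.LRUCache"     size="512" initialSize="512" autowarmCount="16"/>
<documentCache    class="solr.LRUCache"     size="512" initialSize="512" autowarmCount="0"/>
```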

These are caches that are intended to age out the oldest
entries, not hold every entry you ever send to Solr.
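A minimal sketch of that age-out behavior (plain Python to show the idea, not Solr's actual implementation):

```python
from collections import OrderedDict

# Minimal LRU cache sketch: once the cache is full, adding a
# new entry evicts the least-recently-used (oldest) one.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)     # refresh recently-used entry
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # age out the oldest entry

cache = LRUCache(capacity=2)
cache.put("fq=color:red", {1, 5, 9})
cache.put("fq=color:blue", {2, 3})
cache.put("fq=color:green", {7})      # evicts the oldest ("red") entry
print(list(cache.entries))            # ['fq=color:blue', 'fq=color:green']
```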

Best
Erick

On Wed, Jun 5, 2013 at 9:35 AM, adityab <aditya_ba...@yahoo.com> wrote:
> Did you try reducing the filter and query caches? They are fairly large
> unless you really need that much cached for your use case.
> Do you have that many distinct filter queries hitting Solr to justify the
> size you have defined for filterCache?
> Are you doing any sorting? That will chew up a lot of memory because of
> Lucene's internal field cache.
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Heap-space-problem-with-mlt-query-tp4068278p4068326.html
> Sent from the Solr - User mailing list archive at Nabble.com.
