mark.
--
View this message in context:
http://lucene.472066.n3.nabble.com/solr-lucene-4-10-out-of-memory-issues-tp4158262p4159829.html
Sent from the Solr - User mailing list archive at Nabble.com.
I checked and these 'insanity' cached keys correspond to fields we use for
both grouping and faceting. The same behavior is documented here:
https://issues.apache.org/jira/browse/SOLR-4866, although I have a single
shard for every replica, which the JIRA says is a setup that should not
generate these insanity entries.
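For anyone who wants to see the same numbers, the cache statistics (including the fieldCache insanity count reported by Lucene's sanity checker) can be pulled from the admin mbeans handler that ships with Solr 4.x. The host and core names below are illustrative assumptions, not from this thread:

```shell
# Hypothetical host/core names; adjust to your own cluster.
SOLR_HOST="localhost:8983"
CORE="collection1"
# The mbeans handler can dump cache statistics, including the
# fieldCache entry count and insanity count, as JSON.
URL="http://${SOLR_HOST}/solr/${CORE}/admin/mbeans?stats=true&cat=CACHE&wt=json"
echo "$URL"
# On a live node you would then run (not executed here):
#   curl -s "$URL" | grep -i insanity
```

The insanity entries listed there are what SOLR-4866 describes: the same field cached once for grouping and once for faceting.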
Thanks for the response. I've been working on solving some of the most
evident issues, and I also added your garbage collector parameters. First of
all, the Lucene field cache is being filled with entries that are
marked as 'insanity'. Some of these were related to a custom field that we
use for grouping and faceting.
Probably need to look at it running with a profiler to see what's up.
Here are a few additional flags that might help the GC work better for
you (which is not to say there isn't a leak somewhere):
-XX:MaxTenuringThreshold=8 -XX:CMSInitiatingOccupancyFraction=40
This should lead to a nice up-and-down pattern in heap usage.
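As a sketch, these flags could be wired into the JVM startup line like this. Only the two flags quoted above come from this thread; the collector-selection flags around them are common CMS companions and are assumptions, not a recipe from this list:

```shell
# Illustrative GC settings for a Solr 4.x JVM using ParNew/CMS.
# Only MaxTenuringThreshold and CMSInitiatingOccupancyFraction are
# from this thread; the other flags are assumed, typical CMS choices.
GC_TUNE="-XX:+UseConcMarkSweepGC \
 -XX:+UseParNewGC \
 -XX:MaxTenuringThreshold=8 \
 -XX:CMSInitiatingOccupancyFraction=40 \
 -XX:+CMSParallelRemarkEnabled"
# The variable would then be passed to the java invocation, e.g.:
#   java $GC_TUNE -jar start.jar
echo "$GC_TUNE"
```

With CMSInitiatingOccupancyFraction=40, the concurrent collector kicks in early (at 40% old-gen occupancy), which is what produces the regular sawtooth in heap graphs rather than long climbs followed by full GCs.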
hey guys,
I'm running a SolrCloud cluster consisting of five nodes. My largest index
contains 2.5 million documents and occupies about 6 gigabytes of disk
space. We recently switched to the latest Solr version (4.10) from version
4.4.1, which we ran successfully for about a year without any major issues.