Hello Solr community,
On our production SolrCloud cluster, OutOfMemoryError has been occurring on a lot of
instances. I downloaded and analyzed the heap dumps, and in multiple dumps there are
many instances of org.apache.lucene.codecs.blocktree.BlockTreeTermsReader with the
highest retained heap. Following the outgoing references of those objects,
org.apache.lucene.util.fst.FST is what occupies about 90% of the heap memory.
The numbers look like this:
Production heap: 12 GB, out of which
org.apache.lucene.codecs.blocktree.BlockTreeTermsReader total retained heap: 7-8 GB (varies from instance to instance), and
org.apache.lucene.util.fst.FST total retained heap: 6-7 GB.
Looking further, I calculated that the total retained heap for FieldReader instances
with fieldInfo.name="my_field" is around 7 GB. This is the same reader that also holds
the reference to org.apache.lucene.util.fst.FST.
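To double-check this from the index side rather than the heap dump, I put together the
small sketch below to count the indexed terms per segment for that field, since the FST
inside BlockTreeTermsReader is the terms index and its size grows with the size of the
field's terms dictionary. The index path and field name are placeholders, and I have not
run this against the production index yet.

import java.nio.file.Paths;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.Terms;
import org.apache.lucene.store.FSDirectory;

// Counts indexed terms per segment for one field. Path and field below are placeholders.
public class FieldTermStats {
  public static void main(String[] args) throws Exception {
    String indexPath = "/path/to/solr/core/data/index"; // placeholder
    String field = "my_field";
    try (FSDirectory dir = FSDirectory.open(Paths.get(indexPath));
         DirectoryReader reader = DirectoryReader.open(dir)) {
      long total = 0;
      for (LeafReaderContext leaf : reader.leaves()) {
        Terms terms = leaf.reader().terms(field);
        if (terms == null) continue;   // segment has no terms for this field
        long count = terms.size();     // may be -1 if the codec cannot report it cheaply
        System.out.println("segment " + leaf.ord + ": " + count + " terms");
        if (count > 0) total += count;
      }
      System.out.println("total terms across segments: " + total);
    }
  }
}

My expectation is that a very large term count for "my_field" compared to the other
fields would line up with the retained-heap numbers above.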
Now "my_field" is the field on which we are performing spatial searches. Is
spatial searches use FST internally and hence we are seeing lot of heap memory
used by FST.l only.
Is there any way we can optimize the spatial searches so that they take less memory?
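My rough understanding (assuming the field uses a prefix-tree based spatial type such as
location_rpt, which I still need to confirm from our schema) is that each indexed shape
is expanded into several grid-cell terms, one per tree level, so the terms dictionary
and therefore the terms-index FST grow with the precision. A small sketch of that
expansion with Lucene's RecursivePrefixTreeStrategy, using placeholder values:

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.document.Field;
import org.apache.lucene.spatial.prefix.RecursivePrefixTreeStrategy;
import org.apache.lucene.spatial.prefix.tree.GeohashPrefixTree;
import org.locationtech.spatial4j.context.SpatialContext;
import org.locationtech.spatial4j.shape.Point;

// Shows how many terms a single point is turned into at a given prefix-tree precision.
public class SpatialTermExpansion {
  public static void main(String[] args) throws Exception {
    SpatialContext ctx = SpatialContext.GEO;
    int maxLevels = 11;  // placeholder; finer precision means more levels
    GeohashPrefixTree grid = new GeohashPrefixTree(ctx, maxLevels);
    RecursivePrefixTreeStrategy strategy = new RecursivePrefixTreeStrategy(grid, "my_field");

    Point p = ctx.getShapeFactory().pointXY(-74.0060, 40.7128);  // lon, lat
    for (Field f : strategy.createIndexableFields(p)) {
      int tokens = 0;
      try (TokenStream ts = f.tokenStreamValue()) {
        ts.reset();
        while (ts.incrementToken()) {
          tokens++;
        }
        ts.end();
      }
      System.out.println("terms indexed for one point: " + tokens);
    }
  }
}

If that picture is right, every indexed shape contributes roughly maxLevels terms, which
would explain why the terms index for "my_field" dominates the heap, and reducing the
precision (fewer levels) should shrink it. I would appreciate confirmation that this is
the right mental model before I change anything.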
Can someone please give me a pointer on where I should start looking to debug this
issue?
Thanks and regards,
Sanjay Dutt