Hello,

adding my five cents here as well: it seems we experienced a similar problem, one that was supposed to be fixed (or not appear at all) on 64-bit systems. Our current solution is a custom build of Solr with DEFAULT_READ_CHUNK_SIZE set to 10 MB in the FSDirectory class. This fix was not done by me, however, and dates back to Solr 1.4.1, so I'm not sure it's still valid given the vast changes in the Lucene/Solr code and the JVM improvements since then. I'd very much like to hear suggestions from experienced users.
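For reference, one way to change the chunk size without a custom build might be a custom DirectoryFactory that calls FSDirectory.setReadChunkSize after creating the directory. This is only a sketch: it assumes a Lucene/Solr version that still exposes setReadChunkSize (it was later removed from Lucene), the exact create(...) signature varies across Solr versions, and the class name ChunkedDirectoryFactory is hypothetical.

```java
package com.example.solr; // hypothetical package/class, for illustration only

import java.io.IOException;

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.solr.core.StandardDirectoryFactory;

/**
 * Sketch of a DirectoryFactory that lowers the read chunk size.
 * Note: the create(...) signature shown here matches some Solr 4.x
 * releases; check your version's StandardDirectoryFactory javadoc.
 */
public class ChunkedDirectoryFactory extends StandardDirectoryFactory {

    private static final int CHUNK_SIZE = 10 * 1024 * 1024; // 10 MB

    @Override
    protected Directory create(String path, DirContext dirContext) throws IOException {
        Directory dir = super.create(path, dirContext);
        if (dir instanceof FSDirectory) {
            // Read index files in 10 MB chunks instead of one huge read
            // (the default chunk size on 64-bit JVMs is Integer.MAX_VALUE).
            ((FSDirectory) dir).setReadChunkSize(CHUNK_SIZE);
        }
        return dir;
    }
}
```

If something like this compiles against your Solr version, it would then be wired up in solrconfig.xml instead of the stock factory:

```xml
<directoryFactory name="DirectoryFactory"
                  class="com.example.solr.ChunkedDirectoryFactory"/>
```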

--
Warm regards,
Artem Karpenko

On 25.02.2013 14:33, zqzuk wrote:
Just to add... I noticed this line in the stack trace particularly:

*try calling FSDirectory.setReadChunkSize with a value smaller than the
current chunk size (2147483647)*

I had a look at the javadoc and solrconfig.xml, but I cannot see a way to call
this method from Solr. If that would be a possible fix, how can I do it in
Solr?

Thanks



--
View this message in context: 
http://lucene.472066.n3.nabble.com/170G-index-1-5-billion-documents-out-of-memory-on-query-tp4042696p4042705.html
Sent from the Solr - User mailing list archive at Nabble.com.