Thanks Alex!
Yes, you captured my key points.
Actually, I have to implement both requirements.
The first one works very well, for the reason you stated. Right now my
website client pages through results 20 records at a time, and it is fast.
However, my customer also wants a Servlet that downloads the whole result
set (up to 1 million records).
So for that case, I tried to have Solr pull out 10000 or 5000 records per
page (i.e., split the export into 100 or 200 queries), then print those
records straight to the client browser (a simplified sketch is below).
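
Roughly, my Servlet does something like this. This is only a simplified
sketch with SolrJ; the Solr URL, the "id" field, and the class name are
placeholders, not my real code:

    import java.io.PrintWriter;

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrDocument;

    public class ExportHelper {
        private static final int BATCH_SIZE = 10000; // or 5000

        // Stream the whole result set to the client in BATCH_SIZE pages.
        public static void export(String q, PrintWriter out) throws Exception {
            SolrServer server = new HttpSolrServer("http://localhost:8983/solr"); // placeholder URL
            long fetched = 0;
            long total = Long.MAX_VALUE;
            while (fetched < total) {
                SolrQuery query = new SolrQuery(q);
                query.setStart((int) fetched);   // deep paging via start/rows
                query.setRows(BATCH_SIZE);
                QueryResponse rsp = server.query(query);
                total = rsp.getResults().getNumFound();
                if (rsp.getResults().isEmpty()) {
                    break; // no more hits
                }
                for (SolrDocument doc : rsp.getResults()) {
                    out.println(doc.getFieldValue("id")); // print whatever fields are needed
                }
                out.flush(); // push each batch to the browser instead of buffering everything
                fetched += rsp.getResults().size();
            }
        }
    }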
I am not sure where the exception is generated. Is my client program (the
Servlet) running out of memory, or is the connection timing out for some
reason?
The exception doesn't always happen. Sometimes it works fine even when I
fetch 10000 records per query, over many queries in a row; other times it
crashes at only 5000 records per query, with no obvious reason.
Your suggestion is great! But the implementation is a little complicated
for us.
Is Lucene better than Solr for this requirement? The paging in Lucene does
not seem very intuitive to me, though.
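
From what I have read, deep paging in Lucene would go through
IndexSearcher.searchAfter, something like the following. This is just my
rough, untested understanding:

    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.ScoreDoc;
    import org.apache.lucene.search.TopDocs;

    public class DeepPager {
        // Walk the full hit list in pages of pageSize using searchAfter.
        public static void pageAll(IndexSearcher searcher, Query query, int pageSize) throws Exception {
            ScoreDoc last = null;
            while (true) {
                TopDocs page = (last == null)
                        ? searcher.search(query, pageSize)
                        : searcher.searchAfter(last, query, pageSize);
                if (page.scoreDocs.length == 0) {
                    break; // no more hits
                }
                for (ScoreDoc sd : page.scoreDocs) {
                    // process searcher.doc(sd.doc) here
                }
                last = page.scoreDocs[page.scoreDocs.length - 1];
            }
        }
    }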


