If I am not wrong, this works only with Solr 4.7.0 or later?
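
In case it helps others on the thread, here is a minimal sketch of what cursor-based paging could look like with SolrJ (assuming Solr 4.7+; the URL, core name, batch size, and the uniqueKey field "id" below are placeholders, not taken from the original setup):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.params.CursorMarkParams;

    public class CursorDump {
      public static void main(String[] args) throws Exception {
        // Placeholder URL/core; point this at your own index.
        HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");

        SolrQuery q = new SolrQuery("*:*");
        q.setRows(10000);                                   // batch size, tune as needed
        q.setSort(SolrQuery.SortClause.asc("id"));          // cursors require a sort on the uniqueKey

        String cursor = CursorMarkParams.CURSOR_MARK_START; // starts at "*"
        while (true) {
          q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursor);
          QueryResponse rsp = solr.query(q);
          // write rsp.getResults() to your output file here
          String next = rsp.getNextCursorMark();
          if (cursor.equals(next)) {
            break;                                          // cursor stopped moving: no more documents
          }
          cursor = next;
        }
        solr.shutdown();
      }
    }

The loop ends when the returned cursorMark stops changing, so there is no deep start offset to blow up the heap.
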
On Mon, Sep 28, 2015 at 12:23 PM Markus Jelsma <markus.jel...@openindex.io>
wrote:

> Hi - you need to use the CursorMark feature for larger sets:
> https://cwiki.apache.org/confluence/display/solr/Pagination+of+Results
> M.
>
>
>
> -----Original message-----
> > From:Ajinkya Kale <kaleajin...@gmail.com>
> > Sent: Monday 28th September 2015 20:46
> > To: solr-user@lucene.apache.org; java-u...@lucene.apache.org
> > Subject: Solr java.lang.OutOfMemoryError: Java heap space
> >
> > Hi,
> >
> > I am trying to retrieve all the documents from a Solr index in a batched
> > manner. I have 100M documents. I am retrieving them using the method
> > proposed here:
> > https://nowontap.wordpress.com/2014/04/04/solr-exporting-an-index-to-an-external-file/
> > I am dumping 10M-document splits into each file. I get an "OutOfMemoryError"
> > when start is at 50M, and the same error even with rows=10 for start=50M.
> > A curl request with start=0 and rows=50M in one go works fine, but things
> > go bad when start is at 50M.
> > My Solr version is 4.4.0.
> >
> > Caused by: java.lang.OutOfMemoryError: Java heap space
> >     at org.apache.lucene.search.TopDocsCollector.topDocs(TopDocsCollector.java:146)
> >     at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1502)
> >     at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1363)
> >     at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:474)
> >     at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:434)
> >     at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208)
> >     at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> >     at org.apache.solr.core.SolrCore.execute(SolrCore.java:1904)
> >
> > --aj
> >
>
