>
> I was doing it that way, but what I'm doing with the documents is some
> manipulation, putting the new objects into a different list.  Because I
> basically end up with twice the number of documents held in lists, I'm
> running out of memory.  So I figured that if I process 1000 documents at
> a time, the SolrDocumentList will at least get garbage collected.
>
You are right about all of that, but I am surprised that you would need ALL
the documents from the index for a search requirement.
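As an aside, here is a self-contained sketch of that chunking pattern. `fetchChunk` below is a hypothetical stand-in for `solrChunkServer.query(query)` with start/rows set (it is not a SolrJ API); the point is that each chunk becomes garbage once the loop moves past it, so peak memory stays at roughly one chunk plus whatever you keep:

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedFetch {
    static final int MAX_ROWS = 1000;

    // Simulated index of n documents (stand-in for the Solr index).
    static List<String> index(int n) {
        List<String> docs = new ArrayList<>();
        for (int i = 0; i < n; i++) docs.add("doc-" + i);
        return docs;
    }

    // Hypothetical stand-in for a paged Solr query:
    // returns up to `rows` documents starting at `start`.
    static List<String> fetchChunk(List<String> index, int start, int rows) {
        if (start >= index.size()) return new ArrayList<>();
        int end = Math.min(start + rows, index.size());
        return new ArrayList<>(index.subList(start, end));
    }

    // Walk the whole index MAX_ROWS at a time; `docs` goes out of scope
    // each iteration, so the previous chunk is eligible for collection.
    static int processAll(List<String> index) {
        int processed = 0;
        int start = 0;
        while (true) {
            List<String> docs = fetchChunk(index, start, MAX_ROWS);
            if (docs.isEmpty()) break;
            processed += docs.size();  // per-document manipulation goes here
            start += MAX_ROWS;
        }
        return processed;
    }
}
```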

Cheers
Avlesh

On Tue, Nov 3, 2009 at 7:13 AM, Paul Tomblin <ptomb...@xcski.com> wrote:

> On Mon, Nov 2, 2009 at 8:40 PM, Avlesh Singh <avl...@gmail.com> wrote:
> >>
> >> final static int MAX_ROWS = 100;
> >> int start = 0;
> >> query.setRows(MAX_ROWS);
> >> while (true)
> >> {
> >>   QueryResponse resp = solrChunkServer.query(query);
> >>   SolrDocumentList docs = resp.getResults();
> >>   if (docs.size() == 0)
> >>     break;
> >>   ....
> >>   start += MAX_ROWS;
> >>   query.setStart(start);
> >> }
> >>
> > Yes. It will work as you think. But are you sure that you want to do
> this?
> > How many documents do you have in the index? If the number is in an
> > acceptable range, why not simply do a query.setRows(Integer.MAX_VALUE)
> once?
>
> I was doing it that way, but what I'm doing with the documents is some
> manipulation, putting the new objects into a different list.  Because I
> basically end up with twice the number of documents held in lists, I'm
> running out of memory.  So I figured that if I process 1000 documents at
> a time, the SolrDocumentList will at least get garbage collected.
>
>
>
> --
> http://www.linkedin.com/in/paultomblin
> http://careers.stackoverflow.com/ptomblin
>
