Hi Lance,
        Thanks for the reply, but I am using Solr 1.3 in my application.
Could you send me sample code or something similar that I can check
out? I hope I am using the API in the right manner.

I am currently doing something like this

        SolrServer server = createNewSolrServer();
        while (rows.exist())
        {
            SolrQuery query = new SolrQuery();
            query.setRows(maxRows);
            query.setQuery(qString);
            // commit(waitFlush, waitSearcher), then optimize the index
            UpdateResponse upres = server.commit(true, true);
            upres = server.optimize(true, true);
        }

Am I doing something wrong here?
                
Regards
Sundar  

-----Original Message-----
From: Lance Norskog [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 07, 2008 9:01 PM
To: solr-user@lucene.apache.org
Subject: RE: Memory improvements

Solr 1.2 has a bug where, if you say "commit after N documents", it
does not commit. But it does honor the "commit after N milliseconds"
directive.

This is fixed in Solr 1.3. 
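
For reference, both directives live in the <autoCommit> block of
solrconfig.xml. A minimal sketch, with illustrative values:

    <updateHandler class="solr.DirectUpdateHandler2">
      <autoCommit>
        <maxDocs>10000</maxDocs>  <!-- "commit after N documents" (the one broken in 1.2) -->
        <maxTime>60000</maxTime>  <!-- "commit after N milliseconds" (honored in 1.2) -->
      </autoCommit>
    </updateHandler>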

-----Original Message-----
From: Sundar Sankaranarayanan
[mailto:[EMAIL PROTECTED]
Sent: Thursday, February 07, 2008 3:30 PM
To: solr-user@lucene.apache.org
Subject: Memory improvements

Hi All,
          I am running an application in which I have to index about
300,000 records from a table that has 6 columns. I am committing to
the Solr server after every 10,000 rows, and I observed that by the
end of about 150,000 rows the process eats up about 1 GB of memory;
since my server has only 1 GB, it throws an Out of Memory error.
However, if I commit after every 1,000 rows, it is able to process
about 200,000 rows before running out of memory. This is just a dev
server, and the production data will be much bigger. It would be great
if someone could suggest a way to improve this scenario.
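
The loop itself is roughly the following (a simplified sketch; the
JDBC connection, the Solr URL, and the table/field names are
placeholders for my actual code):

    import java.sql.ResultSet;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    // ...

    SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
    ResultSet rs = conn.createStatement().executeQuery(
            "SELECT id, col2, col3, col4, col5, col6 FROM mytable");
    List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>();
    int count = 0;
    while (rs.next()) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", rs.getString("id"));    // field names are placeholders
        doc.addField("col2", rs.getString("col2"));
        // ... remaining columns ...
        batch.add(doc);
        if (++count % 1000 == 0) {
            server.add(batch);   // send the batch to Solr
            server.commit();     // commit so buffered documents are flushed
            batch.clear();       // release the client-side references
        }
    }
    if (!batch.isEmpty()) {      // flush any remainder
        server.add(batch);
        server.commit();
    }
    server.optimize();           // optimize once at the end, not per batch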
 
 
Regards
Sundar Sankaranarayanan
