Are you batching the documents before sending them to the Solr server? Are
you doing a single commit only at the end? Also, since you have 32 cores, you
can try raising the number of concurrent updaters from 16 to 32.
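To illustrate the first two points, here is a minimal sketch of the batch-and-commit-once pattern. The `client` object and its `add`/`commit` methods are hypothetical stand-ins for whatever Solr client the reducers use; the batching and single-commit structure is the part that matters:

```python
from itertools import islice

def chunked(iterable, size):
    """Yield successive lists of up to `size` items from `iterable`."""
    it = iter(iterable)
    while batch := list(islice(it, size)):
        yield batch

def index_all(client, docs, batch_size=1000):
    """Send documents in batches and commit exactly once at the end.

    One round trip per batch instead of one per document, and no
    per-batch commit: each commit forces an expensive flush on the
    server, so do it once when everything is in.
    """
    for batch in chunked(docs, batch_size):
        client.add(batch)
    client.commit()
```

With 500K documents and batch_size=1000, that is 500 add requests plus one commit, instead of 500K individual adds.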



Jaeger, Jay - DOT wrote:
> 
> 500 / second would be 1,800,000 per hour (much more than 500K documents).
> 
> 1)  how big is each document?
> 2)  how big are your index files?
> 3)  as others have recently written, make sure you don't give your JRE so
> much memory that your OS is starved for memory to use for file system
> cache.
> 
> JRJ
> 
> -----Original Message-----
> From: Lord Khan Han [mailto:khanuniver...@gmail.com] 
> Sent: Monday, September 26, 2011 6:09 AM
> To: solr-user@lucene.apache.org
> Subject: SOLR Index Speed
> 
> Hi,
> 
> We have 500K web documents and are using Solr (trunk) to index them. We
> have a custom analyzer that is somewhat CPU-heavy.
> Our machine config:
> 
> 32 x cpu
> 32 gig ram
> SAS HD
> 
> We are sending documents from 16 reduce clients (from Hadoop) to the
> standalone Solr server. The problem is that we can't get faster than 500
> docs per second; the 500K documents took 7-8 hours to index :(
> 
> While indexing, the Solr server's CPU load is around 5-6 (out of a max of
> 32), i.e. about 20% of total CPU capacity. We have plenty of RAM...
> 
> I turned off autocommit and set the RAM buffer to 8198... there is no I/O wait...
> 
> How can I make it faster ?
> 
> PS: Solr streamindex is not an option because we need to submit javabin...
> 
> thanks..
> 
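Jay's point about the inconsistent numbers can be made concrete. Taking the reported wall time as ground truth (a quick back-of-the-envelope check; 7.5 h is just the midpoint of the reported 7-8 hours):

```python
docs = 500_000
claimed_rate = 500                       # docs/sec reported in the original post
assert claimed_rate * 3600 == 1_800_000  # Jay's 1.8M docs/hour figure

hours = 7.5                              # midpoint of the reported 7-8 hours
actual_rate = docs / (hours * 3600)      # ~18.5 docs/sec actually achieved
# The large gap between the claimed 500/sec and the achieved ~18.5/sec,
# together with the low CPU load reported, suggests per-request overhead
# (e.g. unbatched adds or frequent commits) rather than raw indexing
# speed is the bottleneck.
```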


--
View this message in context: 
http://lucene.472066.n3.nabble.com/SOLR-Index-Speed-tp3368945p3370765.html
Sent from the Solr - User mailing list archive at Nabble.com.
