Stats:
Default config for Solr 4.3.1 on a high-memory AWS instance, running under Jetty.
Two collections, each with fewer than 700k docs.

We seem to hit some performance lags when doing large commits.  Our front-end
service lets customers import data, which is stored in Mongo and then indexed
in Solr.  We index all of those records and do one big commit at the end
rather than committing each record along the way.
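
For context, here is roughly what our import does (a SolrJ sketch; the URL,
field names, and the loop standing in for our Mongo cursor are placeholders,
not our actual code):

import java.util.ArrayList;
import java.util.List;

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class BulkImport {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; we point this at the real collection.
        HttpSolrServer server =
            new HttpSolrServer("http://localhost:8983/solr/collection1");
        List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>();

        // Stand-in for iterating over the records pulled out of Mongo.
        for (int i = 0; i < 100000; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", Integer.toString(i));
            doc.addField("name", "record " + i);
            batch.add(doc);

            // Ship adds in chunks to keep requests small, but don't commit yet.
            if (batch.size() == 1000) {
                server.add(batch);
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            server.add(batch);
        }

        server.commit();    // the one big commit at the end of the import
        server.shutdown();
    }
}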

Would it be better to enable autoSoftCommit and effectively commit each record
as it comes in?  Or is the problem more about disk I/O?  Are there some other
"low hanging fruit" things we should consider?  The Solr dashboard shows that
there is still plenty of free memory during these imports, so it isn't
running out of memory and swapping to disk.
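
If it helps, this is the sort of autoCommit/autoSoftCommit setup I had in mind
for the <updateHandler> section of solrconfig.xml (the intervals are made-up
values, not something we currently run):

<!-- Hypothetical intervals; tune for the actual indexing load. -->
<autoCommit>
  <maxTime>60000</maxTime>            <!-- hard commit (flush to disk) at most once a minute -->
  <openSearcher>false</openSearcher>  <!-- don't open a new searcher on the hard commit -->
</autoCommit>

<autoSoftCommit>
  <maxTime>5000</maxTime>             <!-- soft commit every 5s makes new docs visible -->
</autoSoftCommit>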

Thanks!
Eric
