1) In solrconfig.xml, find ramBufferSizeMB and change it to:
1024
2) Also, try decreasing the mergeFactor to see if it gives you fewer
segments. In my experiment, it does.
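As a sketch of where those two settings go in Solr 3.x, something like the following in solrconfig.xml (the exact section name and the mergeFactor value here are illustrative, not from the thread):

```xml
<!-- solrconfig.xml (Solr 3.x) - sketch only, values are illustrative -->
<indexDefaults>
  <!-- Buffer more documents in RAM before flushing a new segment -->
  <ramBufferSizeMB>1024</ramBufferSizeMB>
  <!-- A lower mergeFactor merges more aggressively, leaving fewer
       segments on disk (the default is 10) -->
  <mergeFactor>5</mergeFactor>
</indexDefaults>
```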
On 5/23/2012 12:27 PM, Lance Norskog wrote:
If you want to suppress merging, set the 'mergeFactor' very high.
Perhaps 100. Note that Lucene opens many files (50? 100? 200?) for
each segment. You would have to set the 'ulimit' for file descriptors
to 'unlimited' or 'millions'.
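A quick way to check and raise the per-process descriptor limit from the shell that launches Solr (the target value is an assumption; size it above your expected segments times files-per-segment):

```shell
# Show the current soft limit on open file descriptors for this shell
ulimit -n

# Raise it for this session (illustrative value; going above the hard
# limit, or setting 'unlimited', may need root or /etc/security/limits.conf)
ulimit -S -n 4096 2>/dev/null || echo "hard limit too low; see limits.conf"
```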
My installation (S
size fluctuates!
Otis
--
Performance Monitoring for Solr / ElasticSearch / HBase -
http://sematext.com/spm
>
> From: Scott Preddy
>To: solr-user@lucene.apache.org
>Sent: Wednesday, May 23, 2012 2:19 PM
>Subject: configuring solr3.6 for a large int
Later, you can call optimize with a 'maxSegments' value. Optimize
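One way to issue that optimize in Solr 3.x is to POST an optimize message to the update handler; a sketch (the maxSegments value is illustrative):

```xml
<!-- POST this to your core's /update handler (e.g. with curl) to merge
     the index down to at most 10 segments; 10 is an illustrative value -->
<optimize maxSegments="10"/>
```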
I am trying to do a very large insertion (about 68 million documents) into a
Solr instance.
Our schema is pretty simple. About 40 fields using these types:
We are runnin