Hi Renee,

Here's what I'd do:
* Check how many open files your system allows per process (ulimit -n).
You likely want to increase that; 1024 seems to be a common default under
Linux, and in the past I've set it to 30k+ without issues (see the sketch
below this list).
* Look at your mergeFactor. If it's high, consider lowering it; that will
slow down indexing a bit, but fewer live segments means fewer open files.
* Consider using the compound file format (cfs), which packs each segment
into a single file, but if you do the above right, you can avoid using it.
* Consider a better Solr monitoring tool, one that tracks open file
descriptors and segment counts for you.
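
For the ulimit step, a minimal sketch (the "solr" user name and the 30000
value are assumptions -- use whatever user runs your Solr JVM and a limit
that fits your install):

  # Show the current per-process open-file limit (1024 is a common default):
  ulimit -n

  # Raise it for the current shell session; raising the hard limit
  # generally requires root:
  ulimit -n 30000

  # To make it persist across logins, add lines like these to
  # /etc/security/limits.conf:
  #   solr  soft  nofile  30000
  #   solr  hard  nofile  30000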

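As for the monitoring script in your message below: the "segments x 8"
formula is fragile, since the files-per-segment count depends on your
schema and merge settings. You could instead count what the process
actually holds open, or count the index files directly. A rough sketch
(the pgrep pattern and the /var/solr/data layout are assumptions --
adjust for your install):

  # Files currently held open by the Solr JVM (Linux, via /proc):
  SOLR_PID=$(pgrep -f start.jar)
  ls /proc/$SOLR_PID/fd | wc -l

  # Or count index files per core and warn past a threshold:
  THRESHOLD=5000
  total=0
  for idx in /var/solr/data/*/index; do
    n=$(ls "$idx" | wc -l)
    total=$((total + n))
    echo "$idx: $n files"
  done
  [ "$total" -gt "$THRESHOLD" ] && echo "WARNING: $total index files"
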
Otis
----
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/



----- Original Message ----
> From: Renee Sun <renee_...@mcafee.com>
> To: solr-user@lucene.apache.org
> Sent: Fri, April 15, 2011 3:41:28 PM
> Subject: Re: partial optimize does not reduce the segment number to
> maxNumSegments
> 
> sorry, I should have elaborated on that earlier...
> 
> in our production environment, we have multiple cores that ingest
> continuously all day long; we only optimize periodically, once a day at
> midnight.
> 
> So sometimes we see a 'too many open files' error. To prevent it from
> happening, in production we maintain a script that monitors the total
> number of segment files across all cores and sends out warnings if that
> number exceeds a threshold... it is a kind of preventive measure.
> Currently we use a linux command to count the files. We are wondering if
> we could simply use a formula to figure out this number instead; it
> would be better that way. It seems we could use the stats url to get the
> segment count and multiply it by 8 (that is the per-segment file count
> we see, given our schema).
> 
> Any better way to approach this? Thanks a lot!
> Renee
> 
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/partial-optimize-does-not-reduce-the-segment-number-to-maxNumSegments-tp2682195p2825736.html
>
> Sent from the Solr - User mailing list archive at Nabble.com.
> 
