> <ramBufferSizeMB>1024</ramBufferSizeMB>

Ok, this will lower the frequency of buffer flushes to disk (a flush happens when the buffer reaches capacity, on commit, etc.), which will improve performance. It is an internal buffer used by Lucene; it is not the total memory of Tomcat.
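The flush trigger described above can be sketched as follows. This is a simplified model, not Lucene's actual code; the buffer threshold matches the config, but the per-document sizes are made-up numbers for illustration:

```python
# Simplified model of ramBufferSizeMB: buffered documents are written to
# disk as a new segment once their in-memory size crosses the threshold,
# not on every add (and not based on total JVM/Tomcat memory).
RAM_BUFFER_MB = 1024

def index_docs(doc_sizes_mb):
    """Count how many times the buffer would flush for a stream of docs."""
    buffered, flushes = 0.0, 0
    for size in doc_sizes_mb:
        buffered += size
        if buffered >= RAM_BUFFER_MB:  # buffer full -> flush a segment
            flushes += 1
            buffered = 0.0
    return flushes

# 1,000,000 docs at a hypothetical ~0.005 MB each (~5000 MB in total)
# flush roughly once per 1024 MB of buffered data.
print(index_docs([0.005] * 1_000_000))  # -> 4
```

The point of the sketch is that flushing depends only on the size of Lucene's own document buffer, not on overall process memory.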
> <mergeFactor>100</mergeFactor>

With this setting Lucene will deal with up to 100 segments, and each segment consists of a number of files (roughly tracking the number of fields) - with 20 fields you may end up with 2,000 files. For applications like this, set ulimit to 65536: you never know how many file descriptors you will need (including Tomcat log files, class files, config files, image/css/html files, etc.). Even with only 10 Lucene segments (mergeFactor=10) of 10 files each (100 files), Lucene may need far more during a commit/optimize.

-Fuad

> -----Original Message-----
> From: Ranganathan, Sharmila [mailto:sranganat...@library.rochester.edu]
> Sent: October-23-09 1:08 PM
> To: solr-user@lucene.apache.org
> Subject: Too many open files
>
> Hi,
>
> I am getting a "too many open files" error.
>
> Usually I test on a server that has 4GB RAM with 1GB assigned to
> Tomcat (set JAVA_OPTS=-Xms256m -Xmx1024m). ulimit -n is 256 on this
> server, and solrconfig.xml has the following settings:
>
> <useCompoundFile>true</useCompoundFile>
> <ramBufferSizeMB>1024</ramBufferSizeMB>
> <mergeFactor>100</mergeFactor>
> <maxMergeDocs>2147483647</maxMergeDocs>
> <maxFieldLength>10000</maxFieldLength>
>
> In my case 200,000 documents take about 1024MB, and in this test I am
> indexing a total of a million documents. We use high settings because we
> expect to index 10+ million records in production. It works fine on
> this server.
>
> When I deploy the same Solr configuration on a server with 32GB RAM, I
> get a "too many open files" error. ulimit -n is 1024 on that server. Any
> idea? Is this because the 2nd server has 32GB RAM? Is a 1024 open-file
> limit too low? Also, I can't find any documentation for <ramBufferSizeMB>;
> I checked the 'Solr 1.4 Enterprise Search Server' book, the wiki, etc. I
> am using Solr 1.3.
>
> Is it a good idea to use ramBufferSizeMB vs. maxBufferedDocs? What does
> ramBufferSizeMB mean?
> My understanding is that documents added to the index are initially
> stored in memory, and when that reaches 1024MB (ramBufferSizeMB), the
> data is flushed to disk. Or is it when the total memory used (by Tomcat,
> etc.) reaches 1024MB that the data is flushed to disk?
>
> Thanks,
>
> Sharmila
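To see why a ulimit of 1024 can be too low with mergeFactor=100, here is a back-of-the-envelope check. The files-per-segment count and the 2x merge headroom are illustrative assumptions, not exact Lucene figures, and resource.getrlimit is POSIX-only:

```python
import resource

# Current per-process open-file limits (soft, hard).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file limit: soft={soft}, hard={hard}")

# Worst-case descriptor estimate for a non-compound index:
# up to mergeFactor segments per level, roughly 10 index files per
# segment, and about double that while a merge/optimize is in flight.
merge_factor = 100
files_per_segment = 10  # e.g. .fdt, .fdx, .frq, .prx, .tii, .tis, ...
estimate = merge_factor * files_per_segment * 2
print(f"rough worst case: {estimate} descriptors")  # -> 2000
```

A rough worst case of 2000 descriptors for the index alone already exceeds a 1024 limit before counting Tomcat's logs, JARs, and sockets, which is why raising ulimit (or keeping useCompoundFile=true and a low mergeFactor) matters.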