It's the virtual memory limit that matters; yours says unlimited below
(good!), but are you certain that's really the limit your Solr
process runs with?

On Linux, there is also a per-process map count:

    cat /proc/sys/vm/max_map_count

I think it typically defaults to 65,536 but you should check on your
env.  If a process tries to map more than this many regions, you'll
hit that exception.
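
If that limit does turn out to be the problem, it can be raised with
sysctl; the 262144 below is only an illustrative value I'm assuming for
the example, not a tuned recommendation:

```shell
# Show the current per-process map limit
cat /proc/sys/vm/max_map_count

# Raise it at runtime (requires root); 262144 is just an example value:
#   sysctl -w vm.max_map_count=262144

# To persist the change across reboots, add this line to /etc/sysctl.conf:
#   vm.max_map_count=262144
```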

I think you can:

  wc -l < /proc/<pid>/maps

to see how many maps your Solr process currently has... if that is
anywhere near the limit then it could be the cause.
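
Putting the two checks together, something like this should work (I'm
using the shell's own PID, $$, purely for illustration; substitute your
Solr process's PID):

```shell
#!/bin/sh
# Compare a process's current mmap count against the kernel's limit.
pid=$$                                   # illustration only; use Solr's PID
maps=$(wc -l < /proc/$pid/maps)          # one mapped region per line
limit=$(cat /proc/sys/vm/max_map_count)  # per-process map limit
echo "process $pid uses $maps of $limit allowed maps"
```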

Mike McCandless

http://blog.mikemccandless.com

On Sat, Mar 31, 2012 at 1:26 AM, Gopal Patwa <gopalpa...@gmail.com> wrote:
> I need help!!
>
> I am using a Solr 4.0 nightly build with NRT, and I often get this error
> during auto commit: "java.lang.OutOfMemoryError: Map failed". I have
> searched this forum, and what I found suggests it is related to OS ulimit
> settings; please see my ulimit settings below. I am not sure what ulimit
> settings I should have. We also get "java.net.SocketException: Too many
> open files" and are not sure how many open files we need to allow.
>
>
> I have 3 cores in a single shard, with index sizes: Core1 - 70GB,
> Core2 - 50GB, and Core3 - 15GB.
>
> We update the index every 5 seconds, soft commit every 1 second, and
> hard commit every 15 minutes.
>
> Environment: JBoss 4.2, JDK 1.6, CentOS, JVM heap size = 24GB
>
> ulimit:
>
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 401408
> max locked memory       (kbytes, -l) 1024
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 1024
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) 10240
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 401408
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
>
> ERROR:
>
> 2012-03-29 15:14:08,560 [] priority=ERROR app_name= thread=pool-3-thread-1 location=CommitTracker line=93 auto commit error...:java.io.IOException: Map failed
>        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
>        at org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
>        at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
>        at org.apache.lucene.codecs.lucene40.Lucene40PostingsReader.<init>(Lucene40PostingsReader.java:58)
>        at org.apache.lucene.codecs.lucene40.Lucene40PostingsFormat.fieldsProducer(Lucene40PostingsFormat.java:80)
>        at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.visitOneFormat(PerFieldPostingsFormat.java:189)
>        at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:280)
>        at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.<init>(PerFieldPostingsFormat.java:186)
>        at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:186)
>        at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:256)
>        at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:108)
>        at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:51)
>        at org.apache.lucene.index.IndexWriter$ReadersAndLiveDocs.getReader(IndexWriter.java:494)
>        at org.apache.lucene.index.BufferedDeletesStream.applyDeletes(BufferedDeletesStream.java:214)
>        at org.apache.lucene.index.IndexWriter.applyAllDeletes(IndexWriter.java:2939)
>        at org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:2930)
>        at org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:2681)
>        at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2804)
>        at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2786)
>        at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:391)
>        at org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
>        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
>        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.OutOfMemoryError: Map failed
>        at sun.nio.ch.FileChannelImpl.map0(Native Method)
>        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:745)
>        ... 28 more
>
>
> SolrConfig.xml:
>
>
>        <indexDefaults>
>                <useCompoundFile>false</useCompoundFile>
>                <mergeFactor>10</mergeFactor>
>                <maxMergeDocs>2147483647</maxMergeDocs>
>                <maxFieldLength>10000</maxFieldLength>
>                <ramBufferSizeMB>4096</ramBufferSizeMB>
>                <maxThreadStates>10</maxThreadStates>
>                <writeLockTimeout>1000</writeLockTimeout>
>                <commitLockTimeout>10000</commitLockTimeout>
>                <lockType>single</lockType>
>
>            <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
>              <double name="forceMergeDeletesPctAllowed">0.0</double>
>              <double name="reclaimDeletesWeight">10.0</double>
>            </mergePolicy>
>
>            <deletionPolicy class="solr.SolrDeletionPolicy">
>              <str name="keepOptimizedOnly">false</str>
>              <str name="maxCommitsToKeep">0</str>
>            </deletionPolicy>
>
>        </indexDefaults>
>
>
>        <updateHandler class="solr.DirectUpdateHandler2">
>            <maxPendingDeletes>1000</maxPendingDeletes>
>             <autoCommit>
>               <maxTime>900000</maxTime>
>               <openSearcher>false</openSearcher>
>             </autoCommit>
>             <autoSoftCommit>
>               <maxTime>${inventory.solr.softcommit.duration:1000}</maxTime>
>             </autoSoftCommit>
>
>        </updateHandler>
>
>
>
> Thanks
> Gopal Patwa
