My colleague and I thought the same thing: that this is an OS
configuration issue.

/proc/sys/vm/max_map_count = 65536
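
Here's the quick check we used to compare our mapping count against that
limit (a minimal sketch; the class name is mine, and the /proc paths are
Linux-specific):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class MapCountCheck {
    // Read the first line of a /proc file.
    private static String firstLine(String path) throws IOException {
        BufferedReader r = new BufferedReader(new FileReader(path));
        try {
            return r.readLine();
        } finally {
            r.close();
        }
    }

    // Count lines in a /proc file.
    private static int countLines(String path) throws IOException {
        BufferedReader r = new BufferedReader(new FileReader(path));
        try {
            int n = 0;
            while (r.readLine() != null) n++;
            return n;
        } finally {
            r.close();
        }
    }

    public static void main(String[] args) throws IOException {
        // The kernel's per-process cap on memory-mapped regions.
        System.out.println("vm.max_map_count = "
            + firstLine("/proc/sys/vm/max_map_count"));
        // One line per mapping currently held by this JVM process.
        System.out.println("mappings in use  = "
            + countLines("/proc/self/maps"));
    }
}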

I honestly don't know how many segments were in the index. Our merge factor
is 10 and there were around 4.4 million docs indexed. The OOME was raised
when the MMapDirectory was opened, so I don't think we're reopening the
reader several times. Our MMapDirectory is set to use the "unmapHack".
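
In case it's useful to anyone searching the archives, this is roughly how
that gets wired up (a sketch against the Lucene 2.9 API; the index path
and the bare-bones error handling are mine, not our actual code):

import java.io.File;
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.MMapDirectory;

public class OpenWithUnmap {
    public static void main(String[] args) throws IOException {
        MMapDirectory dir = new MMapDirectory(new File("/path/to/index"));
        if (MMapDirectory.UNMAP_SUPPORTED) {
            // The "unmapHack": forcibly unmap buffers when an input is
            // closed, instead of waiting for GC to release them.
            dir.setUseUnmap(true);
        }
        IndexReader reader = IndexReader.open(dir, true); // read-only

        // If the reader is ever refreshed, close the old one so its
        // mappings are released; otherwise they can accumulate against
        // vm.max_map_count.
        IndexReader newReader = reader.reopen();
        if (newReader != reader) {
            reader.close();
            reader = newReader;
        }
        reader.close();
    }
}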

We've since switched back to non-compound index files and are having no
trouble at all.
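
For anyone else who hits this: in Solr 1.4 that's the useCompoundFile
setting in solrconfig.xml. Against raw Lucene 2.9 the equivalent would be
roughly the following (a sketch, not our actual code; the path, analyzer,
and writer setup are assumptions):

import java.io.File;
import java.io.IOException;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.MMapDirectory;
import org.apache.lucene.util.Version;

public class NonCompoundIndex {
    public static void main(String[] args) throws IOException {
        MMapDirectory dir = new MMapDirectory(new File("/path/to/index"));
        IndexWriter writer = new IndexWriter(dir,
                new StandardAnalyzer(Version.LUCENE_29),
                IndexWriter.MaxFieldLength.UNLIMITED);
        // Write separate per-segment files instead of one large
        // .cfs compound file.
        writer.setUseCompoundFile(false);
        writer.close();
    }
}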

On Tue, Sep 20, 2011 at 3:32 PM, Michael McCandless <luc...@mikemccandless.com> wrote:

> Since you hit OOME during mmap, I think this is an OS issue not a JVM
> issue.  Ie, the JVM isn't running out of memory.
>
> How many segments were in the unoptimized index?  It's possible the OS
> rejected the mmap because of process limits.  Run "cat
> /proc/sys/vm/max_map_count" to see how many mmaps are allowed.
>
> Or: is it possible you reopened the reader several times against the
> index (ie, after committing from Solr)?  If so, I think 2.9.x never
> unmaps the mapped areas, and so this would "accumulate" against the
> system limit.
>
> > My memory of this is a little rusty but isn't mmap also limited by mem +
> > swap on the box? What does 'free -g' report?
>
> I don't think this should be the case; you are using a 64 bit OS/JVM
> so in theory (except for OS system wide / per-process limits imposed)
> you should be able to mmap up to the full 64 bit address space.
>
> Your virtual memory is unlimited (from "ulimit" output), so that's good.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> On Wed, Sep 7, 2011 at 12:25 PM, Rich Cariens <richcari...@gmail.com> wrote:
> > Ahoy ahoy!
> >
> > I've run into the dreaded OOM error with MMapDirectory on a 23G cfs compound
> > index segment file. The stack trace looks pretty much like every other trace
> > I've found when searching for OOM & "map failed"[1]. My configuration
> > follows:
> >
> > Solr 1.4.1/Lucene 2.9.3 (plus SOLR-1969 <https://issues.apache.org/jira/browse/SOLR-1969>)
> > CentOS 4.9 (Final)
> > Linux 2.6.9-100.ELsmp x86_64 yada yada yada
> > Java SE (build 1.6.0_21-b06)
> > Hotspot 64-bit Server VM (build 17.0-b16, mixed mode)
> > ulimits:
> >    core file size          (blocks, -c) 0
> >    data seg size           (kbytes, -d) unlimited
> >    file size               (blocks, -f) unlimited
> >    pending signals                 (-i) 1024
> >    max locked memory       (kbytes, -l) 32
> >    max memory size         (kbytes, -m) unlimited
> >    open files                      (-n) 256000
> >    pipe size            (512 bytes, -p) 8
> >    POSIX message queues     (bytes, -q) 819200
> >    stack size              (kbytes, -s) 10240
> >    cpu time               (seconds, -t) unlimited
> >    max user processes              (-u) 1064959
> >    virtual memory          (kbytes, -v) unlimited
> >    file locks                      (-x) unlimited
> >
> > Any suggestions?
> >
> > Thanks in advance,
> > Rich
> >
> > [1]
> > ...
> > java.io.IOException: Map failed
> >  at sun.nio.ch.FileChannelImpl.map(Unknown Source)
> >  at org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(Unknown Source)
> >  at org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(Unknown Source)
> >  at org.apache.lucene.store.MMapDirectory.openInput(Unknown Source)
> >  at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(Unknown Source)
> >  at org.apache.lucene.index.SegmentReader.get(Unknown Source)
> >  at org.apache.lucene.index.SegmentReader.get(Unknown Source)
> >  at org.apache.lucene.index.DirectoryReader.<init>(Unknown Source)
> >  at org.apache.lucene.index.ReadOnlyDirectoryReader.<init>(Unknown Source)
> >  at org.apache.lucene.index.DirectoryReader$1.doBody(Unknown Source)
> >  at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(Unknown Source)
> >  at org.apache.lucene.index.DirectoryReader.open(Unknown Source)
> >  at org.apache.lucene.index.IndexReader.open(Unknown Source)
> > ...
> > Caused by: java.lang.OutOfMemoryError: Map failed
> >  at sun.nio.ch.FileChannelImpl.map0(Native Method)
> > ...
> >
>
