Is it possible you are hitting this (just-opened) Solr issue?
https://issues.apache.org/jira/browse/SOLR-3392
Mike McCandless
http://blog.mikemccandless.com
On Fri, Apr 20, 2012 at 9:33 AM, Gopal Patwa wrote:
We cannot avoid auto soft commit, since we need the Lucene NRT feature. And I
use StreamingUpdateSolrServer for adding/updating the index.
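For reference, roughly how the feeder side looks (a minimal SolrJ sketch; the
core URL, queue size, thread count, and field names are illustrative, not the
actual production values):

    import org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class IndexFeeder {
        public static void main(String[] args) throws Exception {
            // Buffered, multi-threaded updates; note there is no explicit
            // commit() here -- visibility comes from autoSoftCommit (NRT)
            // plus the periodic hard autoCommit in solrconfig.xml.
            StreamingUpdateSolrServer server =
                new StreamingUpdateSolrServer("http://localhost:8983/solr/core0", 50, 4);
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "42");
            doc.addField("name", "example");
            server.add(doc);
            server.blockUntilFinished(); // drain the internal request queue
        }
    }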
On Thu, Apr 19, 2012 at 7:42 AM, Boon Low wrote:
Hi,
Also came across this error recently, while indexing with > 10 DIH processes in
parallel + the default index settings. The JVM grinds to a halt and throws this
error. Checking the index of a core reveals thousands of files! Tuning the
default autocommit from 15000ms to 900000ms solved the problem.
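The knob in question lives in solrconfig.xml's update handler section; a sketch
in trunk/4.x-style syntax (the maxTime values here are illustrative):

    <!-- solrconfig.xml: a longer hard-commit interval means far fewer
         short-lived segment files; soft commits keep NRT visibility cheap -->
    <updateHandler class="solr.DirectUpdateHandler2">
      <autoCommit>
        <maxTime>900000</maxTime>          <!-- hard commit every 15 min -->
        <openSearcher>false</openSearcher> <!-- flush only; don't reopen -->
      </autoCommit>
      <autoSoftCommit>
        <maxTime>1000</maxTime>            <!-- NRT-visible within ~1s -->
      </autoSoftCommit>
    </updateHandler>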
I checked, and "MMapDirectory.UNMAP_SUPPORTED=true"; below is my system
data. Is there any existing test case to reproduce this issue? I am trying to
understand how I can reproduce this issue with a unit/integration test.
I will try a recent Solr trunk build too, in case it is a bug in Solr or
Lucene.
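Failing an existing test, one crude way to provoke it by hand might be to leak
searchers against an mmapped index until the per-process map limit is hit. A
hypothetical sketch, not an existing test (the index path is illustrative):

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.store.MMapDirectory;

    public class MapExhaustion {
        public static void main(String[] args) throws Exception {
            // Point this at any existing multi-segment index.
            MMapDirectory dir = new MMapDirectory(new File("/path/to/index"));
            List<IndexReader> leaked = new ArrayList<IndexReader>();
            while (true) {
                // Each open reader mmaps every file of every segment; since we
                // never close them, the process map count grows until the
                // kernel refuses new mappings -> "OutOfMemoryError: Map failed".
                leaked.add(IndexReader.open(dir));
                System.out.println("open readers: " + leaked.size());
            }
        }
    }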
On Apr 12, 2012, at 6:07 AM, Michael McCandless wrote:
Your largest index has 66 segments (690 files) ... biggish but not
insane. With 64K maps you should be able to have ~47 searchers open
on each core.
Enabling compound file format (not the opposite!) will mean fewer maps
... ie should improve this situation.
I don't understand why Solr defaults to compound file format off.
Hi,
65K is already a very large number and should have been sufficient...
However: have you increased the merge factor? Doing so increases the
open files (maps) required.
Have you disabled compound file format? (Hmmm: I think Solr does so
by default... which is dangerous). Maybe try enabling it.
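In a 3.x-era solrconfig.xml that setting looks roughly like this (later
versions move it under <indexConfig>):

    <indexDefaults>
      <!-- true packs each segment into a single .cfs file, so each open
           searcher needs far fewer maps/file handles; Solr's stock config
           ships with this set to false -->
      <useCompoundFile>true</useCompoundFile>
    </indexDefaults>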
Michael, thanks for the response.
It was 65K, as you mentioned, the default value from "cat
/proc/sys/vm/max_map_count". How do we determine what this value should be?
Is it based on the number of documents per hard commit (in my case every 15
minutes)? Or on the number of index files, or the number of documents we have
across all cores?
It's the virtual memory limit that matters; yours says unlimited below
(good!), but, are you certain that's really the limit your Solr
process runs with?
On Linux, there is also a per-process map count:
cat /proc/sys/vm/max_map_count
I think it typically defaults to 65,536 but you should check.
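For anyone following along, checking and raising it on Linux looks like this
(the pgrep pattern is illustrative and depends on how Solr is launched):

    # current per-process limit on memory maps
    cat /proc/sys/vm/max_map_count

    # maps currently held by a running Solr JVM
    wc -l /proc/$(pgrep -f start.jar | head -1)/maps

    # raise the limit now (as root); add "vm.max_map_count = 262144"
    # to /etc/sysctl.conf to make it survive reboots
    sysctl -w vm.max_map_count=262144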