bq: I assume this would go along with also increasing autoCommit?
Not necessarily; the two have much different consequences if
openSearcher is set to false for autoCommit. Essentially all this is
doing is flushing the current segments to disk and opening new
segments; no autowarming etc. is being done.
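For reference, these settings live in the updateHandler section of solrconfig.xml. A minimal sketch (the maxTime values here are illustrative, not a recommendation) of a hard autoCommit that flushes to disk without opening a new searcher, alongside a less aggressive soft commit, might look like:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Hard commit: flushes segments to disk regularly so tlogs stay small.
       openSearcher=false means no new searcher (and no autowarming) happens. -->
  <autoCommit>
    <maxTime>60000</maxTime>             <!-- illustrative: every 60 seconds -->
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- Soft commit: controls when new documents become visible to searches;
       make this as long as your latency requirements allow. -->
  <autoSoftCommit>
    <maxTime>60000</maxTime>             <!-- illustrative value -->
  </autoSoftCommit>
</updateHandler>
```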
Thank you Shawn. Sounds like increasing the autoSoftCommit maxTime would
be a good idea. I assume this would go along with also increasing
autoCommit?
All of our collections (just 2 at the moment) have the same settings. The
data directory is in HDFS and is the same data directory for every shard.
On 2/5/2016 8:11 AM, Joseph Obernberger wrote:
> Thank you for the reply Scott - we have the commit settings as:
>
> 6
> false
>
>
> 15000
>
>
> Is that 50% disk space rule across the entire HDFS cluster or on an
> individual spindle?
That autoSoftCommit maxTime is pretty aggressive.
I'm wondering if the shutdown time is too short. When we shut down the
cluster, could it be that it doesn't have enough time to flush? It only
happens some of the time; as to which node, it seems to be random.
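If the stop script is killing Solr before it finishes flushing, one thing to check (assuming a Solr version whose bin/solr script honors it) is the SOLR_STOP_WAIT setting in solr.in.sh, which controls how long bin/solr waits for a graceful shutdown before forcibly killing the process:

```shell
# solr.in.sh -- illustrative value; recent versions default to 180 seconds
SOLR_STOP_WAIT=300   # give Solr up to 5 minutes to shut down cleanly
```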
-Joe
On Tue, Feb 2, 2016 at 12:49 PM, Erick Erickson wrote:
> Does this happen all the time or only when bringing up Solr on some of
> the nodes?
Thank you for the reply Scott - we have the commit settings as:
6
false
15000
Is that 50% disk space rule across the entire HDFS cluster or on an
individual spindle?
Thank you!
-Joe
On Tue, Feb 2, 2016 at 12:01 PM, Scott Stults <
sstu...@opensourceconnections.com> wrote:
Does this happen all the time or only when bringing up Solr on some of
the nodes?
My (naive) question is whether this message, AlreadyBeingCreatedException,
could indicate that more than one Solr is trying to access the same tlog?
Best,
Erick
On Tue, Feb 2, 2016 at 9:01 AM, Scott Stults wrote:
It seems odd that the tlog files are so large. HDFS aside, is there a
reason why you're not committing? Also, as far as disk space goes, if you
dip below 50% free you run the risk that the index segments can't be merged.
k/r,
Scott
On Fri, Jan 29, 2016 at 12:40 AM, Joseph Obernberger <
joseph.ob
Already tried, with the same result (the message changed properly).
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-HDFS-settings-tp4165873p4166089.html
Sent from the Solr - User mailing list archive at Nabble.com.
This doesn't answer your question, but unless something is changed,
you're going to want to set this to false. It causes index corruption at
the moment.
On 10/25/14 03:42, Norgorn wrote:
true
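Assuming the setting quoted above is the HDFS block-cache write flag (the tag name itself was stripped by the archive), the relevant fragment of the HdfsDirectoryFactory config in solrconfig.xml would look roughly like this, with the flag set to false as advised:

```xml
<directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
  <str name="solr.hdfs.home">hdfs://namenode:8020/solr</str>  <!-- illustrative path -->
  <!-- Write-through block cache; reported to cause index corruption
       at the time of this thread, hence the advice to disable it. -->
  <bool name="solr.hdfs.blockcache.write.enabled">false</bool>
</directoryFactory>
```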
Ok, new problem while creating a collection or shard:
Caused by: no segments* file found in
NRTCachingDirectory(HdfsDirectory@3a19dc74
lockFactory=org.apache.solr.store.hdfs.HdfsLockFactory@43507d1b;
maxCacheMB=192.0 maxMergeSizeMB=16.0): files: [HdfsDirectory@3a19dc74
lockFactory=org.apache.solr.s