We have some Solr 7.6 setups connecting to HDFS 3 clusters. So far this
has not shown any compatibility problems.
On 02.05.19 15:37, Kevin Risden wrote:
> For Apache Solr 7.x or older yes - Apache Hadoop 2.x was the dependency.
> Apache Solr 8.0+ has Hadoop 3 compatibility with SOLR-9515. I did some
> testing to make sure that Solr 8.0 worked on Hadoop 2 as well as Hadoop 3,
> but the libraries are Hadoop 3.
Thanks Kevin, and also thanks for your blog post on the HDFS topic:
> https://risdenk.github.io/2018/10/23/apache-solr-running-on-apache-hadoop-hdfs.html
On Thu, May 02, 2019 at 09:37:59AM -0400, Kevin Risden wrote:
> For Apache Solr 7.x or older yes - Apache Hadoop 2.x was the dependency.
> Apache Solr 8.0+ has Hadoop 3 compatibility with SOLR-9515.
For Apache Solr 7.x or older yes - Apache Hadoop 2.x was the dependency.
Apache Solr 8.0+ has Hadoop 3 compatibility with SOLR-9515. I did some
testing to make sure that Solr 8.0 worked on Hadoop 2 as well as Hadoop 3,
but the libraries are Hadoop 3.
The reference guide for 8.0+ hasn't been released yet.
Hi
The Solr doc [1] says it's only compatible with HDFS 2.x.
Is that true?
[1]: http://lucene.apache.org/solr/guide/7_7/running-solr-on-hdfs.html
--
nicolas
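For context on the question above: the guide in [1] boils down to starting Solr with the HDFS directory factory and lock type. A minimal sketch, with host, port, and paths as placeholders:

```shell
bin/solr start -c -Dsolr.directoryFactory=HdfsDirectoryFactory \
  -Dsolr.lock.type=hdfs \
  -Dsolr.data.dir=hdfs://host:port/path \
  -Dsolr.updatelog=hdfs://host:port/path
```

The same properties can equally be set in solrconfig.xml instead of on the command line.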
bq: I assume this would go along with also increasing autoCommit?
Not necessarily; the two have much different consequences if
openSearcher is set to false for autoCommit. Essentially all this is
doing is flushing the current segments to disk and opening new
segments; no autowarming etc. is being done.
Thank you Shawn. Sounds like increasing the autoSoftCommit maxTime would
be a good idea. I assume this would go along with also increasing
autoCommit?
All of our collections (just 2 at the moment) have the same settings. The
data directory is in HDFS and is the same data directory for every shard.
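Since the thread keeps circling hard vs. soft commits, here is the shape of the two settings in solrconfig.xml; the maxTime values below are illustrative, not the poster's actual ones:

```xml
<!-- hard commit: flushes segments to stable storage and lets old tlogs
     be truncated; with openSearcher=false it does NOT make new
     documents visible to searches -->
<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<!-- soft commit: controls when newly indexed documents become searchable -->
<autoSoftCommit>
  <maxTime>60000</maxTime>
</autoSoftCommit>
```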
On 2/5/2016 8:11 AM, Joseph Obernberger wrote:
> Thank you for the reply Scott - we have the commit settings as:
>
> 6
> false
>
> 15000
>
> Is that 50% disk space rule across the entire HDFS cluster or on an
> individual spindle?
That autoSoftCommit maxTime is pret
I'm wondering if the shutdown time is too short. When we shutdown the
cluster, could it be that it doesn't have enough time to flush? It only
happens some of the time; as to which node is seems to be random.
-Joe
On Tue, Feb 2, 2016 at 12:49 PM, Erick Erickson
wrote:
> Does this happen all the time or only when bringing up Solr on some of
> the nodes?
Thank you for the reply Scott - we have the commit settings as:
6
false
15000
Is that 50% disk space rule across the entire HDFS cluster or on an
individual spindle?
Thank you!
-Joe
On Tue, Feb 2, 2016 at 12:01 PM, Scott Stults <
sstu...@opensourceconnections.com> wrote:
Does this happen all the time or only when bringing up Solr on some of
the nodes?
My (naive) question is whether this message (AlreadyBeingCreatedException)
could indicate that more than one Solr instance is trying to access the same tlog.
Best,
Erick
On Tue, Feb 2, 2016 at 9:01 AM, Scott Stults
wrote:
It seems odd that the tlog files are so large. HDFS aside, is there a
reason why you're not committing? Also, as far as disk space goes, if you
dip below 50% free you run the risk that the index segments can't be merged.
k/r,
Scott
On Fri, Jan 29, 2016 at 12:40 AM, Joseph Obernberger <
joseph.ob
Hi All - we're using Apache Solr Cloud 5.2.1, with an HDFS system that is
86% full. Some of the datanodes in the HDFS cluster are closer to
being full than others. We're getting messages about "Error adding
log" from the index process, which I **think** is related to datanodes
being full.
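As a side note, the per-datanode fill level (as opposed to the cluster-wide 86%) can be read off the standard HDFS admin report:

```shell
# prints configured capacity, DFS used, DFS remaining, and percent used
# for the cluster as a whole and for each live datanode
hdfs dfsadmin -report
```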
Already tried that, with the same result (the error message changed accordingly).
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-HDFS-settings-tp4165873p4166089.html
Sent from the Solr - User mailing list archive at Nabble.com.
This doesn't answer your question, but unless something has changed,
you're going to want to set this to false. It causes index corruption at
the moment.
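The message doesn't name the setting, but the flag it most plausibly refers to is the write side of the HDFS block cache (an assumption on my part); it lives in the HdfsDirectoryFactory definition in solrconfig.xml:

```xml
<directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
  <str name="solr.hdfs.home">hdfs://host:port/solr</str>
  <bool name="solr.hdfs.blockcache.enabled">true</bool>
  <!-- assumed to be the setting meant above; disable the write path -->
  <bool name="solr.hdfs.blockcache.write.enabled">false</bool>
</directoryFactory>
```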
On 10/25/14 03:42, Norgorn wrote:
> true
> Have no idea what to do with that.
P.S. I've found that the problem may be caused by a different protobuf.jar.
I've changed that jar (and the hadoop-*.jar files, for compatibility) in my
Solr libs, but the problem didn't change.