Hi, we have a new setup of Solr 7.7 (standalone, no SolrCloud) in a master/slave configuration.
Periodically our core stops responding to queries and must be
restarted on the slave.
Two hosts:
is06 - Solr 7.7 master
ss06 - Solr 7.7 slave
Simple replication is set up, no SolrCloud.
On the primary, is06, we see this error:
bq. In all my solr servers I have 40% free space
Well, clearly that's not enough if you're getting this error: "No
space left on device"
Solr/Lucene needs _at least_ as much free space as the indexes occupy,
and in some circumstances it can require more. It sounds like you're
having an issue with full
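Shawn's rule of thumb above can be sketched as a quick check (a minimal sketch; the function name and the "free space >= index size" threshold are illustrative assumptions, and real merge worst cases can need more):

```python
import shutil
from pathlib import Path

def has_merge_headroom(index_dir: str) -> bool:
    """Rough check of the rule of thumb: a merge (or optimize) can
    transiently need about as much free space as the index already uses."""
    index_size = sum(
        f.stat().st_size for f in Path(index_dir).rglob("*") if f.is_file()
    )
    free = shutil.disk_usage(index_dir).free
    return free >= index_size
```

Run something like this against each core's data/index directory before large merges or optimizes.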
Hi all,
I use Solr 6.5.1. A couple of weeks ago I started using the replication feature
in cloud mode, without overriding the default behavior of ReplicationHandler.
Since deploying the replication feature to production, almost every day I hit
these errors:
SolrException: Unable to download completely. Downloa
cause it already exists in the store!
Thanks,
Kelly
-Original Message-
From: Kelly Rusk [mailto:kelly.r...@rackspace.com]
Sent: Sunday, April 22, 2018 8:51 PM
To: solr-user@lucene.apache.org; solr-user@lucene.apache.org
Subject: Re: Solr 6.6.2 Master/Slave SSL Replication Error
Makes p
On 4/22/2018 6:27 PM, Kelly Rusk wrote:
Thanks for the assistance. The Master Server has a self-signed Cert with its
machine name, and the Slave has a self-signed Cert with its machine name.
They have identical configurations, and I created a keystore per server. Should
I import the self-signe
keystore? Or are you stating
that I need to copy the keystore over to the Slave instead of having the one I
created?
Regards,
Kelly
_
From: Shawn Heisey
Sent: Sunday, April 22, 2018 7:56 PM
Subject: Re: Solr 6.6.2 Master/Slave SSL Replication Error
To:
On 4/22/2018 4:40 PM, Kelly Rusk wrote:
I already have a keystore/truststore and my settings are as follows:
set SOLR_SSL_KEY_STORE=etc/solr-ssl.keystore.jks
set SOLR_SSL_KEY_STORE_PASSWORD=secret
set SOLR_SSL_TRUST_STORE=etc/solr-ssl.keystore.jks
set SOLR_SSL_TRUST_STORE_PASSWORD=secret
REM R
recommending?
Regards,
Kelly
_
From: Chris Hostetter
Sent: Sunday, April 22, 2018 5:43 PM
Subject: Re: Solr 6.6.2 Master/Slave SSL Replication Error
To:
You need to configure Solr to use a "truststore" that contains the
certificate you want it to trust. With a
lr/guide/6_6/enabling-ssl.html
: Date: Sat, 21 Apr 2018 14:40:08 -0700 (MST)
: From: kway
: Reply-To: solr-user@lucene.apache.org
: To: solr-user@lucene.apache.org
: Subject: Re: Solr 6.6.2 Master/Slave SSL Replication Error
:
... looking at this line, I am wondering if this is an issue because I am
using a Self-Signed Certificate:
Caused by: javax.net.ssl.SSLHandshakeException:
sun.security.validator.ValidatorException: PKIX path building failed:
sun.security.provider.certpath.SunCertPathBuilderException: unable to fin
Thanks Shawn,
Here is what I get from the logs:
2018-04-20 18:03:57.805 WARN (indexFetcher-19-thread-1) [
x:XP1Prod_core_index_rebuild] o.a.s.h.IndexFetcher Master at:
https://mastercomputername:8983/solr/XP1Prod_core_index_rebuild is not
available. Index fetch failed by exception:
org.apache.
On 4/21/2018 10:24 AM, kway wrote:
However, I can't get replication to work when using SSL/HTTPS. It throws IO
Communication errors as it can’t resolve the https connection to a localhost
certificate on the Master. The error is as follows:
Master at: https://mastercomputername:8983/solr/core_ind
I need to use SSL in my Master/Slave Solr 6.6.2 environment. I had created a
localhost SSL Cert on the Master (works on the Master because it’s local),
but this won’t work for the Slave which has replication based on the IP of
the Master server. I then changed it to a self-signed cert that uses the
Hi Abdel,
Your configuration looks ok regarding the cdcr update log.
Could you tell us a bit more about your Solr installation? More
specifically, do the Solr instances, both source and target, contain
a collection that was created prior to the configuration of cdcr?
Best,
--
Renaud Delbru
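For reference, the CDCR pieces of the source-side solrconfig.xml look roughly like this (a sketch based on the Solr 6 CDCR documentation; the zkHost, collection names, and replicator tuning values are placeholders):

```xml
<!-- Source cluster solrconfig.xml (hosts/names are placeholders) -->
<requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
  <lst name="replica">
    <str name="zkHost">target-zk-host:2181</str>
    <str name="source">source_collection</str>
    <str name="target">target_collection</str>
  </lst>
  <lst name="replicator">
    <str name="threadPoolSize">2</str>
    <str name="schedule">1000</str>
    <str name="batchSize">128</str>
  </lst>
</requestHandler>

<updateHandler class="solr.DirectUpdateHandler2">
  <updateLog class="solr.CdcrUpdateLog">
    <str name="dir">${solr.ulog.dir:}</str>
  </updateLog>
</updateHandler>
```

Note the updateLog class: collections created before switching to solr.CdcrUpdateLog have transaction logs in the old format, which is one common source of errors when reloading with the new config.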
Hi there,
I am trying to configure Cross Data Center Replication using solr 6.0.
I am having issues creating collections or reloading old collections with
the new solrconfig.xml on both the target and source sides. I keep getting the
error
“org.apache.solr.common.SolrException:org.apache.solr.common
in context:
http://lucene.472066.n3.nabble.com/Solr-Replication-error-tp4252929.html
Sent from the Solr - User mailing list archive at Nabble.com.
Thanks Daniel.
So, if I understand correctly, is the below exception almost always
caused by merging segments? Though I see different file names
(e.g. download_av3.fdt in this case) in the exception messages
[explicit-fetchindex-cmd] ERROR
org.apache.solr.handler.ReplicationHand
On 1/3/2014 10:34 AM, Daniel Collins wrote:
We see this a lot as well, my understanding is that recovery asks the
leader for a list of the files that it should download, then it downloads
them. But if the leader has been merging segments whilst this is going on
(recovery is taking a reasonable period of time and you have an NRT system
where
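The race Daniel describes can be sketched in a few lines (illustrative only; a single directory and plain files stand in for the leader's Lucene segments):

```python
import os
import tempfile

# Sketch of the race: the replica snapshots the leader's file list, the
# leader then merges (deleting the old segment files), and the replica's
# download of a now-vanished file fails.
leader_dir = tempfile.mkdtemp()
old_segment = os.path.join(leader_dir, "_av3.fdt")
open(old_segment, "w").close()

file_list = os.listdir(leader_dir)   # replica: "which files should I fetch?"
os.remove(old_segment)               # leader: a merge removes old segments

failed = []
for name in file_list:               # replica downloads the stale snapshot
    try:
        with open(os.path.join(leader_dir, name), "rb") as f:
            f.read()
    except FileNotFoundError:
        failed.append(name)          # analogous to "Unable to download _av3..."

print(failed)  # ['_av3.fdt']
```

The usual outcome in Solr is that the fetch fails and is retried against a fresh file list, which is why the error is transient rather than fatal.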
Hi,
I am hitting this error on replication; can somebody please tell me
what's wrong here and what can be done to correct it:
[explicit-fetchindex-cmd] ERROR
org.apache.solr.handler.ReplicationHandler- SnapPull failed
:org.apache.solr.common.SolrException: Unable to download _av3.f
ue on the boxes
marked as leaders, the replicas have a few but nowhere near as many.
Thanks for the response.
-Original Message-
From: Andre Bois-Crettez [mailto:andre.b...@kelkoo.com]
Sent: 05 December 2012 17:57
To: solr-user@lucene.apache.org
Subject: Re: FW: Replication error and
[mailto:annette.new...@servicetick.com]
Sent: 05 December 2012 13:55
To: solr-user@lucene.apache.org
Subject: FW: Replication error and Shard Inconsistencies..
Update:
I did a full restart of the solr cloud setup, stopped all the instances,
cleared down zookeeper and started them up individually. I then removed the
index from one of
cetick.com]
Sent: 05 December 2012 09:04
To: solr-user@lucene.apache.org
Subject: RE: Replication error and Shard Inconsistencies..
Hi Mark,
Thanks so much for the reply.
We are using the release version of 4.0.
It's very strange: replication appears to be underway, but no files are being
co
Hey Annette,
Are you using Solr 4.0 final? A version of 4x or 5x?
Do you have the logs for when the replica tried to catch up to the leader?
Stopping and starting the node is actually a fine thing to do. Perhaps you can
try it again and capture the logs.
If a node is not listed as live but is
Hi,
I found out the problem by myself.
The reason was a bad deployment of Solr on Tomcat. Two instances of
Solr were instantiated instead of one. The two instances were managing
the same indexes, and therefore were trying to write at the same time.
My apologies for the noise created on the
Hi,
For months, we were using Apache Solr 3.1.0 snapshots without problems.
Recently, we upgraded our index to the Apache Solr 3.1.0 release,
and also moved to a multi-core infrastructure (4 cores per node, each
core having its own index).
We found that one of the index slaves started to show failures,
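The double-writer failure mode described in the follow-up above is what Lucene's write.lock guards against. A minimal sketch of that exclusive-lock idea (simplified; this is not Lucene's actual LockFactory code, and the function name is illustrative):

```python
import os

def acquire_write_lock(index_dir: str) -> int:
    """Create write.lock with O_EXCL so a second 'IndexWriter' on the
    same index directory fails fast instead of corrupting the index."""
    lock_path = os.path.join(index_dir, "write.lock")
    return os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
```

A second call on the same directory raises FileExistsError, which is the behavior you want when two Solr instances are accidentally pointed at the same index, as happened here.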